This article was contributed by Abigail Hunter-Syed, Partner at LDV Capital.
Despite the hype, the “creator economy” is not new. It has existed for generations, primarily dealing in physical goods (pottery, jewelry, paintings, books, photos, videos, etc.). Over the past two decades, it has become predominantly digital. The digitization of creation has sparked a massive shift in content creation, where everyone and their mother are now creating, sharing, and participating online.
The vast majority of the content created and consumed on the internet is visual. In our recent Insights report at LDV Capital, we found that by 2027 there will be at least 100 times more visual content in the world. The future creator economy will be powered by visual tech tools that automate aspects of content creation and remove the technical skill from digital creation. This article discusses the findings of that report.
We now live as much online as we do in person, and as such we are participating in and generating more content than ever before. Whether it is text, photos, videos, stories, movies, livestreams, or video games, anything viewed on our screens is visual content.
Currently, it takes time, often years of prior training, to produce a single piece of high-quality, contextually relevant visual content. Producing content at the speed and quantities required today has also typically demanded deep technical expertise. But new platforms and tools powered by visual technologies are changing the paradigm.
Computer vision will aid livestreaming
Livestreaming is video recorded and broadcast in real time over the internet. It is one of the fastest-growing segments in online video, projected to be a $150 billion industry by 2027. Over 60% of individuals aged 18 to 34 watch livestreaming content daily, making it one of the most popular forms of online content.
Gaming is the most prominent livestreaming content today but shopping, cooking, and events are growing quickly and will continue on that trajectory.
The most successful streamers today spend 50 to 60 hours a week livestreaming, and many more hours on production. Visual tech tools that leverage computer vision, sentiment analysis, overlay technology, and more will help automate livestreams. They will analyze streamers’ feeds in real time and add production elements that improve quality while cutting back the time and technical skill required of streamers today.
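As a rough illustration, the sketch below shows what automated, real-time overlay production could look like, using the open-source OpenCV library; the `estimate_excitement` function is a hypothetical stand-in for a real computer vision or sentiment-analysis model, and a webcam stands in for an actual stream feed.

```python
# A minimal sketch of real-time stream augmentation (assumes
# `pip install opencv-python`). The scoring function is a hypothetical
# placeholder for a trained vision/sentiment model.
import cv2

def estimate_excitement(frame) -> float:
    # Placeholder: a real system would run a model over the frame
    # (and over chat, audio, etc.) to score the moment.
    return 0.5

cap = cv2.VideoCapture(0)  # webcam standing in for a live stream feed
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if estimate_excitement(frame) > 0.4:
        # Automatically add a production element when the model fires.
        cv2.putText(frame, "HYPE MOMENT!", (40, 60),
                    cv2.FONT_HERSHEY_SIMPLEX, 1.5, (0, 255, 0), 3)
    cv2.imshow("augmented stream", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press 'q' to quit
        break
cap.release()
cv2.destroyAllWindows()
```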
Synthetic visual content will be ubiquitous
A lot of the visual content we view today is already computer-generated imagery (CGI), visual effects (VFX), or altered by software (e.g., Photoshop). Whether it’s the army of the dead in Game of Thrones or a resized image of Kim Kardashian in a magazine, we see content everywhere that has been digitally designed and altered by human artists. Now, computers and artificial intelligence can generate images and videos of people, things, and places that never physically existed.
By 2027, we will view more photorealistic synthetic images and videos than ones that document a real person or place. Some experts in our report even project synthetic visual content will be nearly 95% of the content we view. Synthetic media uses generative machine learning models, such as generative adversarial networks (GANs), to write text, make photos, create game scenarios, and more from simple human prompts such as “write me 100 words about a penguin on top of a volcano.” GANs are the next Photoshop.
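For intuition on how a GAN turns a prompt into pixels, here is a minimal, untrained sketch of the generator half of a conditional GAN in PyTorch; the layer sizes and the `prompt_embedding` tensor are illustrative assumptions (a real text-to-image system would learn the prompt encoder and train the generator against a discriminator).

```python
# A minimal conditional-GAN generator sketch in PyTorch (assumes
# `pip install torch`). Untrained, so its output is noise; shown only
# to illustrate the noise + condition -> image mapping.
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, noise_dim=100, cond_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            # Project the latent vector up to an 8x8 feature map ...
            nn.ConvTranspose2d(noise_dim + cond_dim, 256, 8, 1, 0), nn.ReLU(),
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.ReLU(),  # ... 16x16 ...
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(),   # ... 32x32 ...
            nn.ConvTranspose2d(64, 3, 4, 2, 1), nn.Tanh(),     # ... 64x64 RGB
        )

    def forward(self, noise, condition):
        # Concatenate random noise with the encoded prompt/condition.
        z = torch.cat([noise, condition], dim=1)[:, :, None, None]
        return self.net(z)

gen = Generator()
noise = torch.randn(1, 100)             # random latent vector
prompt_embedding = torch.randn(1, 32)   # stand-in for an encoded text prompt
image = gen(noise, prompt_embedding)    # tensor of shape (1, 3, 64, 64)
print(image.shape)
```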
In some circumstances, it will be faster, cheaper, and more inclusive to synthesize objects and people than to hire models, scout locations, and run a full photo or video shoot. Moreover, it will make video programmable – as simple to edit as a slide deck.
Synthetic media that leverages GANs can also personalize content nearly instantly, enabling any video to address the viewer by name or a video game to be written in real time as a person plays. The gaming, marketing, and advertising industries are already experimenting with the first commercial applications of GANs and synthetic media.
Artificial intelligence will deliver motion capture to the masses
Animated video requires even more expertise, time, and budget than content starring physical people. Animated video typically refers to 2D and 3D cartoons, motion graphics, CGI, and VFX. These formats will be an increasingly essential part of the content strategy for brands and businesses, deployed across image, video, and livestream channels as a mechanism for diversifying content.
The greatest hurdle to generating animated content today is the skill – and the resulting time and budget – needed to create it. A traditional animator typically creates 4 seconds of content per workday. Motion capture (MoCap) is a tool often used by professional animators in film, TV, and gaming to digitally record the pattern of an individual’s movements for the purpose of animating them – for example, recording Steph Curry’s jump shot for NBA2K.
Advances in photogrammetry, deep learning, and artificial intelligence (AI) are enabling camera-based MoCap that requires little or no special hardware such as suits and sensors. Facial motion capture has already come a long way, as evidenced by some of the incredible photo and video filters out there. As capabilities advance to full-body capture, MoCap will become easier, faster, more budget-friendly, and more widely accessible for animated visual content creation across video production, virtual character livestreaming, gaming, and more.
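As a sketch of how accessible this is becoming, the snippet below pulls full-body landmarks from an ordinary webcam using the open-source MediaPipe library; retargeting those landmarks onto an animation rig is left out, and the setup is illustrative rather than any specific product’s pipeline.

```python
# A minimal camera-based MoCap sketch (assumes
# `pip install mediapipe opencv-python`). Prints one landmark; a real
# pipeline would stream all 33 landmarks onto a character skeleton.
import cv2
import mediapipe as mp

pose = mp.solutions.pose.Pose()  # pretrained full-body landmark model
cap = cv2.VideoCapture(0)        # a plain webcam: no suits, no sensors

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # MediaPipe expects RGB; OpenCV captures BGR.
    results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.pose_landmarks:
        # Each of the 33 landmarks has normalized x, y, z coordinates
        # that an animation system could map onto a rig in real time.
        nose = results.pose_landmarks.landmark[0]
        print(f"nose at ({nose.x:.2f}, {nose.y:.2f}, {nose.z:.2f})")
    cv2.imshow("mocap preview", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press 'q' to quit
        break
cap.release()
cv2.destroyAllWindows()
```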
Nearly all content will be gamified
Gaming is a massive industry set to hit nearly $236 billion globally by 2027. It will keep growing as more and more content introduces gamification: the application of typical game-playing elements, such as point scoring, interactivity, and competition, to encourage engagement.
Games with non-gamelike objectives and more diverse storylines are enabling gaming to appeal to wider audiences. Growth in the number of players, in their diversity, and in hours spent playing online games will drive high demand for unique content.
AI and cloud infrastructure capabilities will play a major role in helping game developers build vast amounts of new content. GANs will gamify and personalize content, engaging more players and expanding interactions and community. Games as a Service (GaaS) will become a major business model for gaming, and game platforms are leading the growth of immersive online interactive spaces.
People will interact with many digital beings
We will use digital identities to produce, consume, and interact with content. In our physical lives, people have many facets to their personality and represent themselves differently in different circumstances: the boardroom versus the bar, in groups versus alone, and so on. Online, old-school AOL screen names have already evolved into profile photos, memojis, avatars, gamertags, and more. Over the next five years, the average person will have at least three digital versions of themselves, both photorealistic and fantastical, to participate online.
Digital identities (or avatars) require visual tech. Some will enable public anonymity for the individual, some will be pseudonymous, and others will be tied directly to physical identity. A growing number of them will be powered by AI.
These autonomous virtual beings will have personalities, feelings, problem-solving capabilities, and more. Some of them will be programmed to look, sound, act and move like an actual physical person. They will be our assistants, co-workers, doctors, dates and so much more.
Interacting with both people-driven avatars and autonomous virtual beings in virtual worlds and with gamified content sets the stage for the rise of the Metaverse. The Metaverse could not exist without visual tech and visual content and I will elaborate on that in a future article.
Machine learning will curate, authenticate, and moderate content
For creators to continuously produce the volumes of content necessary to compete in the digital world, a variety of tools will be developed to automate the repackaging of content: long-form into short-form, videos into blog posts and vice versa, social posts, and more. These systems will select content and format based on the performance of past publications, using automated analytics from computer vision, image recognition, sentiment analysis, and machine learning. They will also inform the next generation of content to be created.
To then filter through this massive amount of content most effectively, autonomous curation bots powered by smart algorithms will sift through it and present us with content personalized to our interests and aspirations. Eventually, we’ll see personalized synthetic video content replace text-heavy newsletters, media, and emails.
Additionally, the plethora of new content, including visual content, will require ways to authenticate it and attribute it to its creator, both for rights management and for combating deepfakes, fake news, and more. By 2027, most consumer phones will be able to authenticate content via applications.
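One plausible building block for such authentication is an ordinary digital signature. The sketch below, using the open-source `cryptography` package, shows the sign-and-verify core that provenance standards such as C2PA elaborate on; the file name and key handling here are illustrative assumptions.

```python
# A minimal content-attribution sketch (assumes `pip install cryptography`).
# Key distribution and metadata embedding are real-world problems omitted here.
from cryptography.hazmat.primitives.asymmetric import ed25519

creator_key = ed25519.Ed25519PrivateKey.generate()  # held only by the creator

content = open("photo.jpg", "rb").read()  # illustrative file: any content bytes
signature = creator_key.sign(content)     # published alongside the content

# Anyone holding the creator's public key (e.g., a phone app) can check that
# the content is unaltered and attributable to that creator.
public_key = creator_key.public_key()
public_key.verify(signature, content)     # raises InvalidSignature if tampered
print("content authenticated and attributed")
```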
Detecting disturbing and dangerous content is just as important, and it is increasingly hard to do given the vast quantities of content published. AI and computer vision algorithms are necessary to automate the detection of hate speech, graphic pornography, and violent attacks, because manual review is too slow for real time and not cost-effective. Multi-modal moderation that combines image recognition with voice, text recognition, and more will be required.
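To make the multi-modal idea concrete, here is a toy sketch of the fusion step; the per-modality scores, weights, and thresholds are hypothetical placeholders for the outputs of real trained models.

```python
# A toy multi-modal moderation fusion sketch. Each score would come from a
# separate trained model (image, text, audio); only the combination is shown.
from dataclasses import dataclass

@dataclass
class ModerationScores:
    image: float  # e.g., from an image-recognition model
    text: float   # e.g., from a hate-speech text classifier
    audio: float  # e.g., from a speech/voice model

def should_flag(s: ModerationScores, hard=0.8, soft=0.6) -> bool:
    # Flag when any single modality is highly confident, or when the
    # weighted evidence across modalities is collectively strong.
    combined = 0.5 * s.image + 0.3 * s.text + 0.2 * s.audio
    return max(s.image, s.text, s.audio) > hard or combined > soft

# Text alone is confident enough to flag this example for human review.
print(should_flag(ModerationScores(image=0.2, text=0.9, audio=0.1)))  # True
```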
Visual content tools are the greatest opportunity in the creator economy
Over the next five years, individual creators who leverage visual tech tools will rival professional production teams in the quality and quantity of the visual content they produce. The greatest business opportunities in the creator economy today are the visual tech platforms and tools that will enable those creators to focus on their content rather than on the technicalities of creation.
Abigail Hunter-Syed is a Partner at LDV Capital, investing in people building businesses powered by visual technology. She thrives on collaborating with deep technical teams that leverage computer vision, machine learning, and AI to analyze visual data. She has a track record of more than ten years leading strategy, ops, and investments in companies across four continents, and rarely says no to soft-serve ice cream.