Someday in the not-distant future, artists, designers, musicians, and authors will likely rely on artificial intelligence providers to enhance their craft. But right now, the opposite is true. Businesses building AI tools depend on a wide array of creative output to train their models. And that’s creating a host of questions about who might be willing to provide their work as training data, and how they’ll get compensated for it.
Companies like Adobe, graphic design company Canva, and generative AI startup Stability AI are coming up with incentive plans for people who contribute artwork to AI training datasets. That’s partly to get ahead of legal challenges—numerous lawsuits have been filed against companies using copyrighted materials to train their models. But long after the legal questions are settled, ongoing access to high-quality data will remain essential for developers of AI tools that generate new content, whether audio, video, or text.
Generative AI technology sounds complicated, but it mainly comes down to three things: an AI model, computing resources, and data. The salary demands of people who provide the first two ingredients “are reasonably well accommodated—like researchers or engineers,” said Jun-Yan Zhu, an assistant professor at Carnegie Mellon University’s Robotics Institute and head of the Generative Intelligence Lab.
But the tech industry is only just beginning to acknowledge the value of the data that creators provide with their content.
Adobe and Canva already pay contributors who upload content—such as photographs, video clips, vectors, and illustrations—to their stock image programs, where they earn royalties whenever their creations are used.
Adobe, which owns apps like Photoshop and Illustrator and recently launched a set of generative AI tools called Firefly, offers a separate bonus for creators whose work is specifically used to train its AI models. The goal is to ensure a steady supply of high-quality training content. “We don’t want to flood the system with just any content—it has to be good content and high-value content,” said Matthew Smith, vice president of Adobe Stock.
The AI training bonus is paid out yearly and is weighted by the number of licenses the content has generated in the last 12 months, which Smith says is a good proxy for the demand and usefulness of a given image. The bonus also factors in the total number of approved images the creator has submitted to Adobe Stock. The training bonus, which for now applies only to images, vectors, and illustrations, is ultimately at Adobe’s discretion.
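Adobe has not published its actual formula, but a payout weighted by recent licenses and catalog size might, purely hypothetically, look something like the sketch below. The weights, pool size, and proportional-share design are all invented for illustration; only the two inputs (licenses in the last 12 months, approved images) come from Adobe’s description.

```python
# Hypothetical sketch of a license-weighted AI training bonus.
# The 0.7/0.3 weights and the fixed bonus pool are invented assumptions,
# not Adobe's disclosed method.

def training_bonus(licenses_last_12mo, approved_images, pool, totals):
    """Allocate a share of a fixed bonus pool proportionally to a
    weighted score of recent licenses and approved catalog size."""
    score = 0.7 * licenses_last_12mo + 0.3 * approved_images
    total_score = 0.7 * totals["licenses"] + 0.3 * totals["images"]
    return pool * score / total_score

# A contributor with 500 recent licenses and 1,200 approved images,
# drawing from a $1M pool shared across all contributors:
payout = training_bonus(500, 1200, 1_000_000,
                        {"licenses": 2_000_000, "images": 5_000_000})
```

Weighting recent licenses more heavily than catalog size would reward content that is demonstrably in demand, which matches Smith’s framing of licenses as a proxy for usefulness.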
The system Adobe has set up appears to be working. In the last six months, the company has seen licenses and contributor payouts climb to all-time highs, according to Smith. (That includes both the stock program and contributors who remain opted in to have their work used for AI training models.) The first AI-related bonus payouts were initiated in September. Adobe declined to disclose the total amount set aside for the AI training-related payments, or an average figure for the bonuses.
Meanwhile, Canva in October established a $200 million fund for creators who contribute to the graphic design platform’s stock program and allow their content to be used for AI training. While royalties in the regular stock program take the medium (visual or audio, for instance) into account, the current AI compensation model does not. Payments are based on factors including how much total content the user has contributed to Canva, how often each piece is used, and the complexity of the illustration or template provided.
“There’s a bit of an art and science to calculating it,” said Cameron Adams, co-founder of Canva.
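Canva has not disclosed how those factors combine, but the calculation Adams describes might be sketched, entirely hypothetically, as a per-item payout scaled by usage and a complexity tier, summed across everything a creator has contributed. The base rate and complexity multipliers below are invented assumptions.

```python
# Hypothetical illustration of per-item compensation weighted by usage
# and complexity. The base rate and multipliers are invented, not
# Canva's actual model.

COMPLEXITY_MULTIPLIER = {"simple": 1.0, "moderate": 1.5, "complex": 2.0}

def item_payout(uses, complexity, base_rate=0.25):
    """Pay a base rate per use, scaled by the item's complexity tier."""
    return uses * base_rate * COMPLEXITY_MULTIPLIER[complexity]

def creator_payout(items):
    """Sum payouts across a creator's whole contributed catalog."""
    return sum(item_payout(uses, tier) for uses, tier in items)

# A creator with one widely used simple template and one complex one:
total = creator_payout([(300, "simple"), (120, "complex")])
```

Summing over the catalog captures the “total content contributed” factor, while the multiplier rewards more elaborate templates, one plausible reading of the “art and science” Adams alludes to.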
Both Adobe and Canva said their payment models could change over time.
Payment plans are also emerging for other modes, such as text-to-audio tools. Stability AI, which was sued by artists who argued their work was used without compensation, launched an AI tool in September that allows users to create audio and sound effects, including samples that users can then include in their own work. The company has partnered with Audiosparx, a stock audio company that has been around since 1996 and has relationships with musicians, to set up an opt-in revenue-sharing model.
The revenue sharing echoes what some musicians, like Grimes, have proposed: that artists license their work and split royalties 50-50 on songs featuring their voices.
“We wanted to experiment with this kind of new model,” said Ed Newton-Rex, head of Stability AI’s audio product. “You want to innovate in a bunch of different ways. And this field is obviously moving fast.”
OpenAI, the company that led the emergence of the generative AI industry, is a notable exception when it comes to AI-training compensation: it doesn’t pay creators. But the company, which owns both ChatGPT and Dall-E, said it offers an opt-out program for artists who don’t want their work used to train future generations of OpenAI’s text-to-image models, and that Dall-E will reject requests for images in the style of living artists.
As more generative AI tools get released to the public, the tech industry will likely face a growing chorus of lawsuits from artists, designers, and other creative professionals taking a strong stance against having their work used for AI-related purposes.
The reality is that billions of data points go into training AI models, so measuring how much of an original work ends up in a piece of AI-generated content is difficult, if not impossible.
And yet, “if you come out as being artist-friendly, and [say] we’re going to compensate them, that really takes [an AI tool provider] out of the line of fire for one of these class actions,” said Katie Gardner, a partner at Gunderson Dettmer, a law firm that focuses on venture-backed companies.
There’s another important reason for paying creators to upload their work onto AI platforms: It encourages more people to participate in the generative AI industry, CMU’s Zhu said.
While AI can help creators brainstorm or add different features to their work, AI tools are undoubtedly starting to automate part of the creation process for artists and designers, threatening at least a portion of their economic opportunity. “If we are undermining creative activities that these professions engage in, [and are] unable to pay enough for them to make a living, then we’re going to see less of that creative activity,” said Jeongki Kim, an assistant professor of strategic design and management at Parsons School of Design.
Compensating creators is one of the biggest issues facing the generative AI industry, but it will likely remain confined to the companies building large language models (LLMs) and big commercial product companies like Adobe or Shutterstock, Gardner said. Most generative AI companies are building applications on third-party LLMs rather than training their own models from scratch.