Stability AI, the company behind the popular text-to-image AI program Stable Diffusion, has raised new funding that values the company at about $1 billion, according to a Bloomberg report citing a “person familiar with the matter.” It’s an important validation of the company’s approach to AI development, which, unlike that of established players like OpenAI and Google, focuses on open-source models that anyone can use without oversight.
In a press statement, Stability AI said it has raised $101 million in a round led by Coatue, Lightspeed Venture Partners, and O’Shaughnessy Ventures, and that it will use the funds to “help develop open AI models for image, language, audio, video, 3D, and more, for use cases for consumers and enterprises worldwide.”
Anyone can build on Stability AI’s code, or use it without moderation
Stable Diffusion is one of the leading text-to-image AI systems, alongside models such as OpenAI’s DALL-E, Google’s Imagen, and Midjourney. Stability AI has differentiated its product, however, by making its software open source. That means anyone can build on the company’s code or even use it to power their own commercial offerings.
Stability AI offers its own commercial version of the model, called DreamStudio, and says it plans to monetize by developing the underlying infrastructure and customizing the software for enterprise customers. The company is based in London, has approximately 100 employees worldwide, and says it plans to expand to about 300 over the next year. It also makes open-source versions of other major AI models, including a text-generation system similar to OpenAI’s GPT-3.
Coatue investor Sri Viswanath, who will join Stability AI’s board as part of the deal, told Bloomberg it was this open-source approach that set Stability AI apart from its rivals. “[Stability] AI’s commitment to open source is critical – by giving the wider public the tools to create and innovate, open source will activate the momentum behind AI’s capabilities,” Viswanath said.
However, the open-source nature of Stability AI’s software also makes it easy for users to create potentially harmful images, from nonconsensual nudes to propaganda and misinformation. Other developers, such as OpenAI, have taken a much more cautious approach to this technology, integrating filters and monitoring how individuals use their products. Stability AI’s ideology is much more libertarian by comparison.
“Ultimately, it is people’s responsibility to determine whether they are ethical, moral and legal in the way they use this technology,” company founder Emad Mostaque told The Verge in September. “The bad things people make with it […] I think it will be a very, very small percentage of total usage.”
In addition to malicious applications, there are open questions about the legal issues inherent in text-to-image models. All of these systems are trained on data scraped from the web, including copyrighted content, from artists’ blogs and websites to images from stock photography sites. Some individuals whose work was used to train these systems without their permission have said they are interested in legal action or compensation. These problems are likely to become even more acute as companies like Stability AI prove they can turn others’ work into profit.