OpenAI, the San Francisco, California-based lab developing AI technologies including large language models, today announced the ability to create custom versions of GPT-3, a model that can generate human-like text and code. Developers can use fine-tuning to create GPT-3 models tailored to the specific content in their apps and services, leading to ostensibly higher-quality outputs across tasks and workloads, the company says.
“According to Gartner, 80% of technology products and services will be built by those who are not technology professionals by 2024. This trend is fueled by the accelerated AI adoption in the business community, which sometimes requires specifically tailored AI workloads,” an OpenAI spokesperson wrote in an email. “With a single line of code, customized GPT-3 [models] allow developers and business teams to run and train powerful AI models based on specific datasets, eliminating the need to create and train their own AI systems from scratch, which can be quite costly and time-intensive.”
Customized GPT-3
Built by OpenAI, GPT-3 and its fine-tuned derivatives, like Codex, can be customized to handle applications that require a deep understanding of language, from converting natural language into software code to summarizing large amounts of text and generating answers to questions. GPT-3 has been publicly available since 2020 through the OpenAI API; as of March, OpenAI said that GPT-3 was being used in more than 300 different apps by “tens of thousands” of developers and producing 4.5 billion words per day.
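For context, here is a minimal sketch of what a call to the vanilla GPT-3 API looked like with the 2021-era openai Python client; the summarization prompt and parameter values are illustrative, not taken from OpenAI's documentation.

```python
# Minimal sketch: calling base GPT-3 through the OpenAI API with the
# 2021-era openai Python client. The prompt and parameters are illustrative.
import os
import openai

openai.api_key = os.getenv("OPENAI_API_KEY")

response = openai.Completion.create(
    engine="davinci",  # a base GPT-3 model
    prompt="Summarize the following for a second-grader:\n\n<article text>\n\nSummary:",
    max_tokens=100,
    temperature=0.3,
)
print(response["choices"][0]["text"])
```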
The new GPT-3 fine-tuning capability enables customers to train GPT-3 to recognize a specific pattern for workloads like content generation, classification, and text summarization within the confines of a particular domain. For example, one customer, Keeper Tax, is using fine-tuned GPT-3 to interpret data from bank statements to help find potentially tax-deductible expenses. The company continues to fine-tune GPT-3 with new data every week based on how its product has performed in the real world, focusing on examples where the model fell below a certain performance threshold. Keeper Tax claims the fine-tuning process is yielding roughly a 1% improvement week over week, which might not sound like much but compounds over time.
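Keeper Tax hasn't published its training data, but OpenAI's fine-tuning endpoint expects JSONL files of prompt/completion pairs, so a Keeper-Tax-style dataset might look something like the hypothetical sketch below (the transactions and labels are invented for illustration).

```python
# Hypothetical training examples in the JSONL format OpenAI's fine-tuning
# endpoint expects: one {"prompt": ..., "completion": ...} object per line.
# The bank-statement lines and labels below are invented for illustration.
import json

examples = [
    {"prompt": "AMZN Mktp US*2X4GK3 $129.99 ->", "completion": " office supplies, likely deductible"},
    {"prompt": "SHELL OIL 5744 $48.20 ->", "completion": " fuel, deductible if business mileage"},
    {"prompt": "NETFLIX.COM $15.49 ->", "completion": " personal subscription, not deductible"},
]

with open("expenses_train.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```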
“[A thing that] we’ve been very mindful of and have been emphasizing during our development of this API is to make it accessible to developers who might not necessarily have a machine learning background,” OpenAI technical staff member Rachel Lim told VentureBeat in a phone interview. “How this manifests is that you can customize a GPT-3 model using one command line invocation. [W]e’re hoping that because of how accessible it is, we’re able to reach a more diverse set of users who can take their more diverse set of problems to technology.”
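For illustration, here is roughly what that one-command customization flow looked like with the 2021-era openai Python client; the file name carries over from the hypothetical dataset above, and the CLI equivalent noted in the comment is from OpenAI's documentation of the period.

```python
# Sketch of the "one command" customization flow with the 2021-era openai
# Python client. The CLI equivalent was roughly:
#   openai api fine_tunes.create -t expenses_train.jsonl -m curie
import os
import openai

openai.api_key = os.getenv("OPENAI_API_KEY")

# Upload the JSONL training file, then start a fine-tune job on a base model.
train_file = openai.File.create(file=open("expenses_train.jsonl", "rb"), purpose="fine-tune")
job = openai.FineTune.create(training_file=train_file["id"], model="curie")
print(job["id"], job["status"])
```

Once the job completes, the API returns a model identifier (of the form curie:ft-yourorg-&lt;timestamp&gt;) that can be passed to completion calls like any stock model.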
Lim asserts that the GPT-3 fine-tuning capability can also lead to cost savings, because customers can count on a higher frequency of higher-quality outputs from fine-tuned models compared with a vanilla GPT-3 model. (OpenAI charges for API access based on the number of tokens, chunks of text roughly corresponding to words, that the models process and generate.) While OpenAI levies a premium on fine-tuned models, Lim says that most fine-tuned models require shorter prompts containing fewer tokens, which can also result in savings.
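As a back-of-envelope illustration of Lim's point: a vanilla model often needs several in-context examples packed into every request, while a fine-tuned model needs only the new input. The token counts and per-token prices below are placeholders, not OpenAI's actual rate card.

```python
# Back-of-envelope comparison of per-request cost (all numbers are
# placeholders, not OpenAI's actual rate card). A vanilla model carries
# few-shot examples in every prompt; a fine-tuned model sends only the input.
FEW_SHOT_PROMPT_TOKENS = 800    # instructions + in-context examples
FINE_TUNED_PROMPT_TOKENS = 50   # just the new input
COMPLETION_TOKENS = 30
BASE_PRICE = 0.06 / 1000        # hypothetical $/token, base model
FINE_TUNED_PRICE = 0.12 / 1000  # hypothetical premium for fine-tuned model

base_cost = (FEW_SHOT_PROMPT_TOKENS + COMPLETION_TOKENS) * BASE_PRICE
ft_cost = (FINE_TUNED_PROMPT_TOKENS + COMPLETION_TOKENS) * FINE_TUNED_PRICE
print(f"per-request: base ${base_cost:.4f} vs fine-tuned ${ft_cost:.4f}")
```

Even with a doubled per-token price in this hypothetical, the fine-tuned request comes out far cheaper because the prompt shrinks by an order of magnitude.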
Fine-tuning can also be advantageous, according to Lim, in that it can enable companies to keep custom GPT-3 models “fresher.” For example, Koko, a peer-support platform that provides crowdsourced cognitive therapy, was able to fine-tune a GPT-3 model to reflect a rise in eating disorder cases during the pandemic.
In an internal experiment, OpenAI fine-tuned two sizes of GPT-3 on 8,000 examples from Grade School Math (GSM8K), a dataset the lab created of grade-school-level math word problems. OpenAI claims that the fine-tuned models more than doubled in accuracy when tested on questions from the same dataset, correctly answering questions like “Carla needs to dry-clean 80 pieces of laundry by noon. If she starts work at 8 a.m., how many pieces of laundry does she need to clean per hour?”
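A sketch of how one might spot-check a fine-tuned model on that example question follows; the model name is a placeholder for whatever a fine-tune job returns, and the expected answer is 20, since Carla has four hours between 8 a.m. and noon to clean 80 pieces.

```python
# Sketch of spot-checking a fine-tuned model on the article's example problem.
# The model name is a placeholder for the identifier a fine-tune job returns;
# the expected answer is 20 (80 pieces over the 4 hours from 8 a.m. to noon).
import os
import openai

openai.api_key = os.getenv("OPENAI_API_KEY")

question = ("Carla needs to dry-clean 80 pieces of laundry by noon. "
            "If she starts work at 8 a.m., how many pieces of laundry "
            "does she need to clean per hour?")

response = openai.Completion.create(
    model="curie:ft-yourorg-2021-12-14-00-00-00",  # placeholder fine-tuned model
    prompt=question + "\nAnswer:",
    max_tokens=16,
    temperature=0,
)
print(response["choices"][0]["text"].strip())  # ideally "20"
```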
“[W]e’re continuously looking for ways to improve the user experience to make it easier for people to get good results that are robust enough to use in production-quality applications,” Lim said. “Fine-tuning is a way of aligning models to specific data more.”
Growth in usage
The launch of GPT-3 fine-tuning comes after OpenAI removed the waitlist for the GPT-3 API. Over the past year, the company claims it has developed endpoints for “more truthful” question-answering, provided a content filter to help mitigate toxicity, and implemented models — “instruct” models — that ostensibly adhere better to human instructions.
In November, OpenAI partnered with Microsoft to launch the Azure OpenAI Service, an offering designed to give enterprises access to GPT-3 and its derivatives along with security, compliance, governance, and other business-focused features. Microsoft has a close relationship with OpenAI, having invested $1 billion in the company in 2019 and exclusively licensed GPT-3 to develop AI solutions for Azure customers.