“In order to get something done, maybe we need to think less.”
So began an innocuous post on the otherwise obscure blog Nothing But Words in July 2020 that managed to make headlines from MIT Technology Review to NBC. Why? Because while scores of humans debated its content, it turned out most of the words weren’t actually written by a human at all. They were penned by a new artificial intelligence model called GPT-3, and posted by Liam Porr, a student at UC Berkeley at the time. The way he saw it, GPT-3 was about to change the way we write, and this blog post became high-profile proof.
Many people agree — and that’s now raising some interesting business questions. When software can write almost as well as a human, how will that alter the way marketing copy is created, how brands communicate, and perhaps even how they interact with customers? Some entrepreneurs, like Dave Rogenmoser, founder of a marketing company called Jarvis, are already exploring that answer. His software, powered by GPT-3, writes just about anything for his clients — from emails to website content to full-length books. “GPT-3 gets you 80 percent there,” he says.
So what exactly is GPT-3? It’s the third iteration of an AI language model called generative pre-trained transformer (or GPT), which was created by OpenAI, an AI research lab whose founders include Elon Musk. GPT-3 was released in June 2020 after being trained on hundreds of billions of words from the internet and volumes of books.
The model is still in private beta, but over the past year, OpenAI has been carefully parceling out access to businesses and researchers so it can observe “how it’s used in the wild,” as OpenAI’s Mira Murati puts it, while building out standards and safety protocols. Among the 300 applications using GPT-3 at press time, entrepreneurs have put it to work pumping out marketing content, making sense of data to drive product development, and creating dialogue for gaming characters. And of course, some have started new businesses with it.
Typeform is one early adopter. It’s not having GPT-3 write anything, but it does use the AI in its latest product, VideoAsk, to figure out what people who interact with the tool need. VideoAsk creates customizable “human chatbots,” which have the functionality of a text-based chatbot (ask questions, get answers), but information is delivered via video snippets from a human who filmed him- or herself giving many different responses. GPT-3 is used to understand what customers say to the bot and send them to the most relevant next step. “It’s a really good match for what we do,” says founder David Okuniev.
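The mechanics of that routing can be pictured as a small lookup: once the model has labeled a visitor’s message with an intent, the bot plays the matching video snippet. Here is a minimal sketch in Python; the intent names and video IDs are made up, and the keyword matcher is only a stand-in for the actual GPT-3 call, which is not public.

```python
# Toy sketch of VideoAsk-style routing: a classifier labels the visitor's
# message with an intent, and a lookup table picks the next video step.
# All labels and video IDs here are hypothetical.

NEXT_STEP = {
    "pricing": "video_pricing_overview",
    "support": "video_talk_to_support",
    "demo": "video_book_a_demo",
}

def classify_intent(message: str) -> str:
    """Stub classifier: in production this step would be the GPT-3 call."""
    text = message.lower()
    if "price" in text or "cost" in text:
        return "pricing"
    if "broken" in text or "help" in text:
        return "support"
    return "demo"

def route(message: str) -> str:
    """Return the ID of the video snippet to play next."""
    return NEXT_STEP[classify_intent(message)]
```

The design point is that the language model only has to produce a label; everything downstream stays ordinary, deterministic software.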
GPT-3 isn’t good at everything, of course, and early users are quickly discovering its strengths and weaknesses.
Quizlet, an ed tech innovator with 250 employees and 60 million student users a month, was excited to explore GPT-3. It already uses a lot of machine learning to personalize study tools like digital flashcards, and it found that GPT-3 excelled at coming up with examples for how to use words in sentences for teaching vocabulary. “We fed it five or 10 examples, and it’s really good at learning from them to create new interesting ones,” says Ling Cheng, Quizlet’s director of data science. But, as is common with machine learning tools, it wasn’t always sensitive to Quizlet’s young audience. “We did find some of GPT-3’s sentences to be a little biased or offensive, so we used its content filter [which classifies text as safe, sensitive, or unsafe] and were surprised that it was pretty good,” says Cheng. “There’s a lot of research on reducing bias in these models, but it’s something we think about all the time.”
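The “five or 10 examples” approach Cheng describes is known as few-shot prompting: worked examples are prepended to the request so the model imitates the pattern. A sketch of how such a prompt might be assembled; the template and example sentences are illustrative, not Quizlet’s actual prompt.

```python
# Few-shot prompting sketch: prepend example (word, sentence) pairs so the
# model continues the pattern for a new vocabulary word. The examples and
# template are illustrative assumptions, not Quizlet's real prompt.

EXAMPLES = [
    ("resilient", "The resilient seedling straightened up after the storm."),
    ("meticulous", "She kept meticulous notes on every experiment."),
    ("candid", "His candid answer surprised the interviewer."),
]

def build_prompt(word: str) -> str:
    """Assemble a few-shot prompt asking for an example sentence."""
    lines = ["Write a sentence that shows the meaning of the word."]
    for w, sentence in EXAMPLES:
        lines.append(f"Word: {w}\nSentence: {sentence}")
    # The prompt ends mid-pattern so the model completes the final sentence.
    lines.append(f"Word: {word}\nSentence:")
    return "\n\n".join(lines)
```

The resulting string would be sent to the model as-is; no fine-tuning or retraining is involved, which is exactly why a handful of examples is enough.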
OpenAI is working on that, as well as on another common problem with machine learning models: Sometimes they just spit out gobbledygook. “Like, they might make up stuff,” Murati says. “You provide an input and they can generate something that’s not in touch with reality at all.”
She says OpenAI will continue selecting applicants to use GPT-3; pricing is based on usage. (OpenAI has also licensed the model to Microsoft for its products and services.) And waiting in the wings? OpenAI has a new AI model named DALL-E that turns text input into images.
Meanwhile, Jarvis’s Rogenmoser says he just has to set expectations: GPT-3’s writing isn’t perfect, but it’s close enough to be a massive time-saver. Rogenmoser launched Jarvis (originally called ConversionAI) to automatically write Facebook ads for his clients, but then he saw them use it for almost every other form of marketing writing. He advises them to take GPT-3’s copy as a starting point — “and then you’re mixing and matching what you like and piecing it together, adding your own stuff, so you still have to know what looks good. But what you don’t have to do is stare at this blank screen anymore and start from scratch.”
Many people, it seems, are happy to not have a blank screen. In Jarvis’s first six months, it attracted 20,000 clients, mostly marketers and small businesses, and raised $6 million. When competitors started springing up, Rogenmoser acquired two of them.
He is now convinced that GPT-3 will be life-changing for entrepreneurs, revolutionizing writing the way the no-code movement changed engineering. “We see a ton of small businesses that are using it to build the website they’ve never built, email customers more often, start doing social media posts,” he says. “Maybe they don’t have the money to hire a marketer, but with this, they can get pretty close. It lowers the startup costs to really go out there and compete right out of the gate. Honestly, GPT-3 is going to mean a paradigm shift.”
GPT-3 didn’t write that last line — or any of this.