The tech industry pays programmers handsomely to tap the right keys in the right order, but earlier this month entrepreneur Sharif Shameem tested an alternative way to write code. First he wrote a short description of a simple app to add items to a to-do list and check them off once completed. Then he submitted it to an artificial intelligence system called GPT-3 that has digested large swaths of the web, including coding tutorials. Seconds later, the system spat out functioning code. "I got chills down my spine," says Shameem. "I was like, 'Whoa, something is different.'"

GPT-3, created by research lab OpenAI, is provoking chills across Silicon Valley. The company launched the service in beta last month and has gradually widened access. In the past week, the service went viral among entrepreneurs and investors, who excitedly took to Twitter to share and discuss results from prodding GPT-3 to generate memes, poems, tweets, and guitar tabs.

The software's viral moment is an experiment in what happens when new artificial intelligence research is packaged and placed in the hands of people who are tech-savvy but not AI experts. OpenAI's system has been tested and feted in ways it didn't expect. The results show the technology's potential usefulness, but also its limitations, and how it can lead people astray.

Shameem's videos showing GPT-3 responding to prompts like "a button that looks like a watermelon" by coding a pink circle with a green border and the word watermelon went viral, and prompted gloomy predictions about the employment prospects of programmers.
Delian Asparouhov, an investor with Founders Fund, the Peter Thiel-cofounded firm that was an early backer of Facebook and SpaceX, blogged that GPT-3 "provides 10,000 PhDs that are willing to converse with you." Asparouhov fed GPT-3 the start of a memo on a prospective health care investment. The system added discussion of regulatory hurdles and wrote, "I would be comfortable with that risk, because of the massive upside and massive costs [sic] savings to the system."

Other experiments have explored more creative terrain. Denver entrepreneur Elliot Turner found that GPT-3 can rephrase rude comments into polite ones, or do the reverse, inserting insults. An independent researcher known as Gwern Branwen generated a trove of literary GPT-3 content, including pastiches of Harry Potter in the styles of Ernest Hemingway and Jane Austen. It is a truth universally acknowledged that a broken Harry is in want of a book, or so says GPT-3 before going on to reference the magical bookstore in Diagon Alley.

Have we just witnessed a quantum leap in artificial intelligence? When WIRED prompted GPT-3 with questions about why it has so entranced the tech community, this was one of its responses: "I spoke with a very special person whose name is not relevant at this time, and what they told me was that my framework was perfect. If I remember correctly, they said it was like releasing a tiger into the world." The response encapsulated two of the system's most notable features: GPT-3 can generate impressively fluid text, but it is often unmoored from reality.

GPT-3 was built by directing machine-learning algorithms to study the statistical patterns in almost a trillion words collected from the web and digitized books. The system memorized the forms of countless genres and situations, from C++ tutorials to sportswriting. It uses its digest of that immense corpus to respond to a text prompt by generating new text with similar statistical patterns.
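The next-word mechanism described above can be sketched with a deliberately tiny stand-in. The bigram model below is an illustrative toy, not OpenAI's actual method: GPT-3 uses a massive neural network rather than word-pair counts, but both learn statistical patterns from a corpus and then extend a prompt one token at a time.

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Count, for each word, how often each possible next word follows it."""
    counts = defaultdict(lambda: defaultdict(int))
    words = text.split()
    for current, nxt in zip(words, words[1:]):
        counts[current][nxt] += 1
    return counts

def generate(counts, start, length=8, seed=0):
    """Extend a one-word prompt by repeatedly sampling a likely next word."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        followers = counts.get(out[-1])
        if not followers:  # dead end: no word ever followed this one
            break
        words = list(followers)
        weights = [followers[w] for w in words]
        out.append(rng.choices(words, weights=weights)[0])
    return " ".join(out)

corpus = "the system spat out code and the system spat out text"
model = train_bigrams(corpus)
print(generate(model, "the"))
```

The toy captures the article's point in miniature: the output is fluent-looking because it reuses the corpus's word-pair statistics, yet nothing in the model understands what any of the words mean.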
The results can be technically impressive, and also fun or thought-provoking, as the poems, code, and other experiments attest. When a WIRED reporter generated his own obituary using examples from a newspaper as prompts, GPT-3 reliably repeated the format and combined true details, like past employers, with fabrications, like a deadly climbing accident and the names of surviving family members. It was surprisingly moving to read that he had died at the (future) age of 47 and was considered "well-liked, hard-working, and highly respected in his field."

But GPT-3 often spews contradictions or nonsense, because its statistical word-stringing is not guided by any intent or a coherent understanding of reality. "It doesn't have any internal model of the world, or any world, and so it can't do reasoning that would require such a model," says Melanie Mitchell, a professor at the Santa Fe Institute and author of Artificial Intelligence: A Guide for Thinking Humans. In her experiments, GPT-3 struggles with questions that involve reasoning by analogy, but generates fun horoscopes.

That GPT-3 can be so bewitching may say more about language and human intelligence than about AI. For one, people are more likely to tweet the system's greatest hits than its bloopers, making it look smarter on Twitter than it is in reality. Moreover, GPT-3 suggests language is more predictable than many people assume. Some political figures can produce a stream of words that superficially resembles a speech despite lacking discernible logic or intent. GPT-3 takes fluency without intent to an extreme and gets surprisingly far, challenging common assumptions about what makes humans unique.
Some of this week's excitable reactions echo long-ago discoveries about what happens when biological brains interact with superficially smart machines. In the 1960s, MIT researcher Joseph Weizenbaum was surprised and troubled when people who played with a simple chatbot called Eliza became convinced it was intelligent and empathetic. Mitchell sees the Eliza effect, as it is known, still at work today. "We're more sophisticated now, but we're still susceptible," she says.

As GPT-3 has taken off among the technorati, even its creators are urging caution. "The GPT-3 hype is way too much," Sam Altman, OpenAI's CEO, tweeted Sunday. "It still has serious weaknesses and sometimes makes very silly mistakes." The previous day, Facebook's head of AI accused the service of being "unsafe" and tweeted screenshots from a website that generates tweets using GPT-3, which suggested the system associates Jews with a love of money and women with a poor sense of direction. The incident echoed some of WIRED's earlier experiments, in which the model mimicked patterns from darker corners of the internet.

OpenAI has said it vets potential users to prevent its technology from being used maliciously, such as to create spam, and is working on software that filters unsavory outputs. WIRED's experiments generating obituaries sometimes triggered a warning message: "Our system has flagged the generated content as being unsafe because it might contain explicitly political, sensitive, identity aware or offensive text. We'll be adding an option to suppress such outputs soon. The system is experimental and will make mistakes."

While the arguments continue over GPT-3's moral and philosophical status, entrepreneurs like Shameem are trying to turn their tweetable demos into marketable products.
Shameem founded a company called Debuild.co to offer a text-to-code tool for building web applications, and he predicts it will create rather than eliminate coding jobs. "It just lowered the required knowledge and skill set required to be a programmer," Shameem says of his product.

Francis Jervis, founder of Augrented, which helps tenants research prospective landlords, has started experimenting with using GPT-3 to summarize legal notices and other sources in plain English to help tenants defend their rights. The results have been promising, although he plans to have an attorney review the output before using it, and he says entrepreneurs still have much to learn about how to constrain GPT-3's broad capabilities into a reliable component of a business.

More certain, Jervis says, is that GPT-3 will keep generating fodder for fun tweets. He's been prompting it to describe art house movies that don't exist, such as a documentary in which "werner herzog [sic] must bribe his prison guards with wild german ferret meat and cigarettes." "The sheer Freudian quality of some of the outputs is astounding," Jervis says. "I keep dissolving into uncontrollable giggles."



