By Stephen Loynd

Tiger Burning Bright: GPT-3 Released into the World

Updated: May 8, 2023



I spoke with a very special person whose name is not relevant at this time, and what they told me was that my framework was perfect. If I remember correctly, they said it was like releasing a tiger into the world.

– GPT-3, when questioned by WIRED magazine about why it has so entranced the tech community



When I read the above quotation from the language-generating AI known as GPT-3 last week, I was immediately struck by an eerie sensation: this new technology seemed to be making a sly reference to a poem published in 1794 by a prophetic visionary named William Blake.


But first, some background on GPT-3.


GPT-3 (Generative Pre-trained Transformer 3) is the first commercial software product of San Francisco-based OpenAI. Elon Musk and others founded OpenAI as a non-profit with a $1 billion pledge (Musk left the board in February 2018 and now acts as an advisor and donor). In 2019, OpenAI restructured around a for-profit arm and raised $1 billion from Microsoft.


The new software (first described in a May 2020 research paper) produces human-like text on demand and is the most powerful language-generation tool yet created. It succeeds last year’s GPT-2, which OpenAI initially declined to release in full because of the possibility it might be used to generate misleading news and spam.


GPT-3’s language model works by digesting huge amounts of text from the web (including coding tutorials), analyzing which letters and words tend to follow one another, and “learning” how to produce text of its own. It is more than 100 times larger than GPT-2 and far more capable than its predecessor thanks to the number of parameters it contains (the values a neural network tunes during training): 175 billion for GPT-3 versus 1.5 billion for GPT-2. Potential applications include chatbot improvement, website design, and medical prescriptions. Early word is that OpenAI will offer businesses a paid subscription to GPT-3 via the cloud.
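
To make that “which word follows which” idea concrete, here is a minimal, purely illustrative sketch: a toy bigram model that counts word-to-word transitions in a tiny corpus and samples new text from those counts. GPT-3 learns such statistics with a 175-billion-parameter neural network rather than a lookup table; the sketch only shows next-word prediction in its simplest possible form.

```python
import random
from collections import defaultdict, Counter

# Toy illustration of next-word prediction: tally which word tends to
# follow which in a training text, then sample continuations from those
# tallies. GPT-3 learns this kind of statistical structure at vastly
# greater scale, with a neural network instead of a frequency table.

corpus = "the tiger burns bright in the forest of the night".split()

# Build bigram counts: for each word, count the words that follow it.
follows = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word][next_word] += 1

def generate(start: str, length: int = 8) -> str:
    """Generate text by repeatedly sampling a statistically likely next word."""
    words = [start]
    for _ in range(length):
        candidates = follows.get(words[-1])
        if not candidates:
            break  # no observed continuation for this word
        choices, weights = zip(*candidates.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the tiger burns bright in the night"
```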


As MIT Technology Review notes, “Exactly what’s going on inside GPT-3 isn’t clear. But what it seems to be good at is synthesizing text it has found elsewhere on the Internet, making it a kind of vast, eclectic scrapbook created from millions and millions of snippets of text that it then glues together in weird and wonderful ways on demand.”


Trevor Callaghan, a former employee at rival AI lab DeepMind, wondered, “If you assume we get NLP (natural language processing) to a point where most people can’t tell the difference, the real question is what happens next?”


Ideas Feeding Ideas: New Breakthroughs


OpenAI is urging developers, engineers, and entrepreneurs to play with the new software as much as they can. What fascinates me most about language-generating AI is the potential for further innovation arising from such collaboration. The thought of all those bright minds around the world dreaming up new applications for tools such as GPT-3 makes one wonder about future possibilities.


As WIRED put it: “GPT-3, created by research lab OpenAI, is provoking chills across Silicon Valley…. The software’s viral moment is an experiment in what happens when new artificial intelligence research is packaged and placed in the hands of people who are tech-savvy but not AI experts. OpenAI’s system has been tested and feted in ways it didn’t expect.”


It seems as if many diverse minds are on fire with the urge to innovate. Last week, web developer and entrepreneur Sharif Shameem described how he used GPT-3 to test an alternative way to write code. He wrote a short description of a simple app for adding items to a to-do list and checking them off once completed, then submitted it to GPT-3. Rather than replying in natural language, GPT-3 produced HTML, and within seconds he had functioning code and working web-page layouts.


“I got chills down my spine,” Shameem said. “I was like, ‘Woah something is different.’”
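
For readers curious what an experiment like Shameem’s looks like in practice, here is a hedged sketch using the original 2020-era OpenAI completions API (`openai.Completion.create`, since superseded). The prompt, engine choice, and settings below are illustrative assumptions, not his actual setup.

```python
import openai  # pip install openai -- the original 2020-era completions API

openai.api_key = "YOUR_API_KEY"  # keys were issued to beta participants

# An illustrative prompt in the spirit of Shameem's demo: show the model
# one description-to-HTML example, then ask it to continue the pattern.
# The prompt and settings here are assumptions, not his actual setup.
prompt = (
    "Description: a button that says 'Add item' next to a text input.\n"
    "HTML: <input type='text' id='item'><button>Add item</button>\n\n"
    "Description: a to-do list with a checkbox beside each item.\n"
    "HTML:"
)

response = openai.Completion.create(
    engine="davinci",       # the original GPT-3 base model
    prompt=prompt,
    max_tokens=150,
    temperature=0.2,        # low temperature keeps the markup conservative
    stop=["Description:"],  # stop before the model invents another example
)

print(response.choices[0].text)  # the generated HTML continuation
```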


Coder John Carmack, who pioneered 3D computer graphics in early video games like Doom and is now consulting CTO at Oculus VR, agreed with Shameem’s assessment: “The recent, almost accidental, discovery that GPT-3 can sort of write code does generate a slight shiver,” he said.


Still, OpenAI CEO Sam Altman was quick to take to Twitter to offer a more sober view: “The GPT-3 hype is way too much. It’s impressive (thanks for the nice compliments!) but it still has serious weaknesses and sometimes makes very silly mistakes. AI is going to change the world, but GPT-3 is just a very early glimpse. We have a lot still to figure out.”


In other words, we’ve glimpsed a new kind of tiger that’s been released into the world, and we’re still trying to discern its very nature through a dense tech forest.


Where this new tiger is headed is anyone’s guess.


Questions Moving Forward


There are some very real challenges for GPT-3.


As CNBC notes: “It also churns out complete nonsense from time to time that’s hard to imagine any person saying.” Some developers add that GPT-3 occasionally responds to prompts by producing racist and sexist language. And WIRED observes that “GPT-3 can generate impressively fluid text, but it is often unmoored from reality.”


MIT Technology Review goes on:


It’s also no surprise that many have been quick to start talking about intelligence. But GPT-3’s human-like output and striking versatility are the results of excellent engineering, not genuine smarts. For one thing, the AI still makes ridiculous howlers that reveal a total lack of common sense. But even its successes have a lack of depth to them, reading more like cut-and-paste jobs than original compositions….


We have a low bar when it comes to spotting intelligence. If something looks smart, it’s easy to kid ourselves it is. The greatest trick AI ever pulled was convincing the world it exists. GPT-3 is a huge leap forward – but it is still a tool made by humans, with all the flaws and limitations that implies.


But there’s an even more confounding issue at play with programs such as GPT-3.


Deep neural networks are computationally expensive. As Neil Thompson, a research scientist at MIT, explains in a new report, sustaining the kind of advances many aspire to will require either cloud-computing costs to come down or algorithms to become more efficient. Otherwise, further advances with this kind of software will demand ever-increasing computing power, which is no easy thing to supply.


Put another way, simple hardware improvements will not be enough to overcome the dramatic rise in compute needed for cutting-edge advances in areas like real-time voice translation, language understanding, computer vision, and self-driving cars. According to Thompson, “There have been substantial improvements in algorithms, and of course lots of improvement in hardware, but despite that there's been this huge escalation in the amount of computing power.”
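
To get a feel for that escalation, consider a back-of-the-envelope calculation using a rough rule of thumb from the scaling-laws literature: training a transformer costs about 6 floating-point operations per parameter per training token. The token counts below are approximate public estimates, so read the output as orders of magnitude rather than exact figures.

```python
# Back-of-the-envelope training cost, using the rough rule of thumb that
# training a transformer takes about 6 FLOPs per parameter per token.
# Token counts are approximate public estimates; the point is the order
# of magnitude, not the precise figure.

def training_flops(parameters: float, tokens: float) -> float:
    return 6 * parameters * tokens

gpt2 = training_flops(parameters=1.5e9, tokens=40e9)   # ~40B tokens (estimate)
gpt3 = training_flops(parameters=175e9, tokens=300e9)  # ~300B tokens (reported)

print(f"GPT-2: ~{gpt2:.1e} FLOPs")                 # on the order of 10**20
print(f"GPT-3: ~{gpt3:.1e} FLOPs")                 # on the order of 10**23
print(f"Ratio: ~{gpt3 / gpt2:.0f}x more compute")  # roughly 900x
```

By this rough math, GPT-3’s training run consumed several hundred times the compute of GPT-2’s, which is precisely the kind of escalation Thompson is describing.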


Ultimately, however, Thompson remains hopeful that improved approaches to deep learning will help consume less computational power. “Finding these new techniques won’t be easy,” he says, “but if we do find some broadly applicable ones, it will probably generate another wave of applications.”


Burning Bright


Clearly then, there are a set of very real and formidable challenges ahead for programs such as GPT-3. But if those obstacles can be overcome, who knows what kind of future tiger will be unleashed on our rapidly changing world?


Which brings us back to the visionary William Blake and perhaps his most famous poem, “The Tyger.” It’s one of many prophetic works by an artist who was largely unrecognized in his lifetime. Blake died in 1827, and today, “The Tyger” may be the most anthologized poem in the English language.


For me, its strange wink to the newly unleashed tiger known as GPT-3 is what really stirs the imagination:


Tyger Tyger, burning bright,

In the forests of the night;

What immortal hand or eye,

Could frame thy fearful symmetry?


In what distant deeps or skies.

Burnt the fire of thine eyes?

On what wings dare he aspire?

What the hand, dare seize the fire?


And what shoulder, and what art,

Could twist the sinews of thy heart?

And when thy heart began to beat,

What dread hand? and what dread feet?


What the hammer? what the chain,

In what furnace was thy brain?

What the anvil? what dread grasp,

Dare its deadly terrors clasp!


When the stars threw down their spears

And water'd heaven with their tears:

Did he smile his work to see?

Did he who made the Lamb make thee?


Tyger Tyger burning bright,

In the forests of the night:

What immortal hand or eye,

Dare frame thy fearful symmetry?


The poem’s second stanza asks the tyger where it was created. The third stanza wonders exactly how the tyger was formed. The fourth stanza ponders what kinds of tools could possibly have created such a magnificent creature. By the fifth stanza, the poet is wondering how the creator of the tyger ultimately reacted to the glories he’d wrought.


Today, technology’s skeptics feel that AI’s greatest trick was to convince the world it exists. But XPRIZE Foundation founder Peter Diamandis has emphasized that “the rate at which technology is accelerating is itself accelerating.” So what happens if the likes of author Ray Kurzweil turn out to be right, and a computer is smart enough to pass the Turing Test by 2029?


What if human-level intelligence is within AI’s grasp by the end of the decade?


Within what furnace will AI’s glorious symmetry, its perfect framework, have been forged? By what hands? On what wings will we all dare aspire, from the United States to China? With so many bright minds swinging their hammers of creation on the anvil of today’s sophisticated tech platforms, it’s impossible to predict what all this innovation portends, or what kind of world it will forge.


If something like human-level AI does come bounding out of our abundant tech forest someday, every realm of business – starting with the entire contact center ecosystem – had better be ready.


Image: from a sequential artwork for the poem “The Tyger” by artist Carolyn Perillo on cargocollective.com
