As 2022 winds to a close, the wonders of Generative AI are among the biggest stories of the year. Our trek across AI's version of the uncanny valley continues apace.
As such, familiar questions continue to haunt the TrendzOwl (Hegel’s Owl of Minerva didn’t have it so easy either). To wit – how intelligent might AI become? How soon? And what will be the effects on the mechanics of customer interaction… and society at large?
The Wonder of It
At the end of August, in “Metaverse Rising,” I described the wonders of DALL-E 2, an app developed by OpenAI that turns text descriptions into hyper-realistic images. Such wonders make one wonder whether we’re prepared for potentially world-changing AI.
Well, a piece from The Information last month presses that very issue, going so far as to speculate on the end of the call center business as we know it:
Generative AI especially offers tantalizing possibilities. If this new stuff can write marketing copy, is it good enough to topple the world’s largest advertising agencies? It’s possible that image generators will soon remove the need for commercial photography. Maybe we will finally get a conversation engine that makes customer service less awful and renders the call center business obsolete. Each of these is a giant opportunity, and we’ve barely scratched the surface.
The author is not alone in pondering the potential impacts of exponential technologies on the world of customer interaction. MIT economist David Autor seems convinced that, “AI will reduce the number of person-to-person jobs in sales, food service, general customer service and tech support.”
And then, just last week, a new AI chatbot called “ChatGPT” was released for testing, made available to the general public through a free, easy-to-use web interface. With a seemingly human knack for abstract thinking, it has many suggesting that even Google may be on the verge of “total disruption”… and that countless jobs might soon be obsolete. According to Kevin Roose at The New York Times:
ChatGPT is, quite simply, the best artificial intelligence chatbot ever released to the general public…. ChatGPT feels different. Smarter. Weirder. More flexible…. It also appears to be ominously good at answering the types of open-ended analytical questions that frequently appear on school assignments. (Many educators have predicted that ChatGPT, and tools like it, will spell the end of homework and take-home exams.)
Roose seems convinced the impacts of this kind of exponential tech will be significant, even if he can’t be sure what those impacts will be:
The potential societal implications of ChatGPT are too big to fit into one column. Maybe this is, as some commenters have posited, the beginning of the end of all white-collar knowledge work, and a precursor to mass unemployment. Maybe it’s just a nifty tool that will be mostly used by students, Twitter jokesters and customer service departments until it’s usurped by something bigger and better.
Personally, I’m still trying to wrap my head around the fact that ChatGPT — a chatbot that some people think could make Google obsolete, and that is already being compared to the iPhone in terms of its potential impact on society — isn’t even OpenAI’s best AI model. That would be GPT-4, the next incarnation of the company’s large language model, which is rumored to be coming out sometime next year.
Roose ends his piece quite certain that, “We are not ready.”
On the Other Hand…
Still, skeptics remain. What of Alexa, for example? As Azeem Azhar, author of “The Exponential Age,” pointed out recently, Alexa is on track to lose $10 billion this year, and the business is likely to be gutted:
Alexa was all the rage until it wasn’t. User activation and retention didn’t take hold…. on many occasions ‘15% to 25% of new Alexa users were no longer active in their second week with the device,’ and ‘most Alexa users in many years have used voice-powered devices only to play music, or set the timer while they cook, or turn on the lights.’…. Voice is still on the wrong side of the uncanny valley but it was pushed by the large tech cos. These firms FOMO’d brands to build services for these assistants. Billions were spent and we ‘meh’d’. I’ve gotta say, the similarities with metaverse hype are inescapable.
Putting aside Azhar’s swipe at the metaverse, it’s worth asking what’s going on. According to scientist and author Gary Marcus:
Bottom line: From the outset Large Language Models like GPT-3 have been great at generating surrealist prose, and they can beat a lot of benchmarks, but they are not (and may never be) great tech for reliably inferring user intent from what users say. Turning LLMs into a product that controls your home and talks to you in a way that would be reliable enough to use at scale in millions of homes is still a long, long way away.
Decision theory and AI researcher Eliezer Yudkowsky agrees. On December 7 he tweeted, “Pouring some cold water on the latest wave of AI hype: I could be wrong, but my guess is that we do *not* get AGI just by scaling ChatGPT, and that it takes *surprisingly* long from here….”
Excess Incentives for Automation
Whether or not AI passes the Turing Test within the next decade, it seems clear it’s going to shake up the ways in which enterprises – including BPOs, of course – interact with consumers. The resulting disorientation could be unpredictable and far-reaching.
Even “augmented” rather than “artificial” intelligence is sure to disrupt the future, if in more positive directions for the job market. As Erik Brynjolfsson argues: “When AI is focused on augmenting humans rather than mimicking them, humans retain the power to insist on a share of the value created. What is more, augmentation creates new capabilities and new products and services, ultimately generating far more value than merely humanlike AI.”
And yet, as Brynjolfsson also adds: “While both types of AI can be enormously beneficial, there are currently excess incentives for automation rather than augmentation among technologists, business executives, and policymakers.”
As a result, concerns about the labor market over the next decade continue to grow. And as Thomas Edsall suggests, there is a real lack of momentum in the political community to wrestle with the possible implications of automation: “Worse yet,” he says, “the bitter divisions throughout our political system suggest that the development of this momentum will be a long time coming.”
Nobody seems to know exactly how things will go from here. Even Google CEO Sundar Pichai seems unable to discern what today’s innovations might mean by the time dusk descends on the AI question at some point in our unknowable future. As the Google chief famously said back in January, 2018, “AI is one of the most important things humanity is working on. It is more profound than… I dunno… electricity… or fire.”
See you in 2023.
Image credit: michaelkleen.com