Cutting Through the Hype Cycle: How to Ignore AI Hyperbole on Either Side and Get Straight to Productivity
In November
2022, Generative AI burst onto the scene with a debut unlike any other in tech.
It gained 100 million registered
users in less than two months, making it the fastest adoption of a consumer
product in human history. Today, ChatGPT has about 100 million weekly users.
Its rise to
widespread user adoption promised a new era of technological democratization.
Investors enthusiastically injected billions into startups, each poised to ride
the wave of innovation with different AI services, from talking to your PDF files
to automating the creation of presentations.
Generative
AI's debut on main street sent one clear message: cutting-edge technology was
no longer the sole domain of high-end research labs or multi-billion-dollar
corporations. It symbolized empowerment of the individual. The top-tier Large
Language Models (LLMs) that you and I and every individual worldwide with an
internet connection suddenly had access to were the same tools available to the CIA,
the NSA or any other high-level government agency.
People use these
tools for everything from writing entertaining poems to conducting complex comparative
analysis across large bodies of text as part of their work. And since that watershed
moment in November 2022, no week has gone by without frantic announcements of
new models, tools and capabilities that a new breed of AI influencers introduces
with a hyperbolic “this changes everything!”
Yet, as the
hype crescendoed towards the peak of the Gartner
Hype Cycle, so, predictably, did the critical voices who saw in Generative AI
little more than an elaborate scam to lure billions out of venture
capitalists on the hunt for the next big thing. Led by AI pundits like cognitive
scientist Gary Marcus, the debate on the other side of the hype spectrum
focused on the limitations of current AI: that these models are not in any way
intelligent, that hallucinations make LLMs inherently unreliable and may never
be solved, and that LLMs are therefore of little use for business applications. Critics
see themselves vindicated by a growing realization within organizations that
integrating these models into mainstream business applications and policies is
a complex challenge, more difficult than initially thought. This leads critics
like Gary Marcus to proclaim that LLMs have hit a natural ceiling and represent
a dead end in the history of AI development. Are LLMs merely fancy playthings
for hobbyists rather than tools for the enterprise?
If we take
a step back and look at the history of technology innovation, it's clear that both
reactions – the hype and the resulting anti-hype backlash – are co-dependent overreactions.
They follow the ever-so-predictable stages of the hype cycle that every
emerging technology necessarily has to go through. And because this cycle is
not actually about the technologies themselves, but about mass psychology,
technological expertise is of limited value in navigating it. No matter our
individual knowledge, experience and expertise, when the stampede begins, our
instincts kick in and we run with it. And the direction is largely decided
by the crowd we surround ourselves with.
However, if
we make the effort to step back and look at things from a bit of a distance, we
should be able to cut through the hype and anti-hype, and go right to the
plateau of productivity that these tools will eventually reach. It’s not easy,
but there are people who are able to do it.
A prime
example is Wharton professor Ethan Mollick, who started a Substack blog right around the time
of the launch of ChatGPT in which he explores from a very pragmatic perspective
what these tools can actually do, right now, and what that might tell us about
the implications for business practice. His recent New York Times best-selling
book “Co-Intelligence: Living and
Working with AI”, which summarizes these experiences, has been called by readers
“the most hands-on practical writing
about AI” which “should be required
reading for any organization that is rolling out AI tools”. By focusing on
the present capabilities of AI without getting distracted by exaggerated
failures or hypothetical successes, his pragmatic approach allows people to realistically
assess how AI can benefit each of us right now.
Applying LLMs
to tasks like developing business plans or generating creative writing is just
a preview of AI's ability to realize and augment human intent. Educators, students
and knowledge workers enlist AI as mentors, sparring partners, analysts,
and ghostwriters, pushing the boundaries of what humans can accomplish with the
aid of pattern-based computational reasoning.
When critics
reflexively push back against the hype and the unreasonable expectations of
investors, and claim that “no one asked for this” and “it’s all just hype to enrich
Silicon Valley elites”, they miss the real-life use cases people are applying
AI to in knowledge work, right now. People are using Generative AI to
- Translate legal, medical or scientific jargon into language they understand.
- Quickly get the gist of a 40-60 page document without having to read it all.
- Learn about a topic they’ve never heard of and within 30 minutes get to a point where they can hold a conversation about it, thanks to dialogue with AI being a much more effective learning vehicle than a Google search.
- Conduct research across large quantities of complex documents, extract key information and apply it to the context of a particular question or problem they face.
- Run comparative analysis across large bodies of texts, to identify inconsistencies or differences in content.
- Take any framework they’ve ever heard of (SWOT, RCA, VRIO, Six Sigma, DeBono Thinking Hats, Porter’s Five Forces, PESTLE, AIDA, etc.) and apply it to a new problem.
- Receive feedback on their writing from different perspectives, check it for cognitive or social biases, logical inconsistencies or insensitive language and receive suggestions for improvements based on their goals.
- Examine a text or situation from the perspective of any model or theory academia has ever developed and draw conclusions.
- Find the right words for messages that serve their purposes better than anything they could write themselves, such as turning a complaint into a communication with legal heft, turning angry rants into diplomatic rebuttals, or drafting any kind of bureaucratic messaging that no person takes joy in writing.
- Synthesize new materials (presentations, decision templates, briefing notes, concept notes, project proposals, etc.) based on unstructured sources (protocols, transcripts, personal notes, etc.).
- Develop structures and arcs for long-form content, from policy documents and essays to learning programmes and seminars.
- Run infinite brainstorming sessions on any topic imaginable, generating in the process new combinations of concepts and ideas no human has ever thought of before.
And these
are just some of the examples of purely text-based knowledge work. Multi-modal
AI that allows the processing and generation of images and interaction via voice lets
people to
- Have conversations with people in different languages with real-time voice translation
- Take pictures or a live video of an unknown object, building, animal, etc. and receive AI advice, research or tutoring on it
- Take pictures of any text (handwritten notes, street signs, workshop flip charts, ancient hieroglyphs) and get transcription, translation and analysis in one go
- Walk through a foreign city with AI in your ear as a tourist guide and let it lead you to and tell you everything about the sights.
- Transcribe hour-long conversations from interviews, advisory sessions, podcasts or media within minutes and have a conversation with the contents or turn them into protocols or summaries.
All these
are not hyped promises, but real-world use cases that people are experiencing
today, every day. They are the reason why “the average business spent $1.5k on AI tools in Q1 2024, an increase of
138% year over year [which is] evidence that companies using AI are seeing
clear benefits and are doubling down”. The users engaging in the above
scenarios are the ones who have cut through the hype cycle, ignored the hype
threads of sleazy influencers on one side and the angry rants of technology
sceptics on the other, and gone straight to Gartner’s Plateau of Productivity. They
explored what the tools can do for them, and now are reaping the benefits.
Of course, this
long list of practical applications misses some nuances. Hallucinations will
likely always remain a factor with LLMs and require human discernment and
judgement (similar to anything we receive from human co-workers). The risk of
misinformation or AI-generated content polluting the web is real. There's a legitimate
conversation to be had about bias and the ethical use of AI in surveillance,
and whether our reliance on AI could atrophy certain human skills, much like
how reliant we've become on GPS or digital contact books. But while we can acknowledge
(and work on mitigating) the limits and risks of AI, we shouldn't overlook the
immense benefits that real people like you and me are already reaping in
applying AI.
Given the reality
of the use cases above, it is in my view completely incoherent to maintain
a claim that Generative AI is “useless”, just a “grift” or a “scam”. And given
that with hundreds of millions of monthly AI users, the AI Incident Database (a
directory that collects adverse incidents globally that occurred through the
use of AI) lists as of May 2024 a mere 389
incidents where definite harm has occurred based on the Common Safety
Engineering and Trustworthiness Evaluation (CSETv1) standard, it is equally
incongruent to claim that, to date, Generative AI has been a “net negative for society”.
As we
navigate this technological transformation, let's focus on the practical and
the tangible. Let's utilize what's at our disposal, take those tools for what
they are (well-read, stoic, fast-working interns without any actual
understanding of the world) and test whether we can use them for the knowledge-work
tasks on our desks today. Today’s LLMs are the worst AI we
will ever work with, and getting proficient with them now will put us in the
best position for when better AI arrives tomorrow.