
Hyperbole – defined in 17th-century English manuals as a trope used when “one speaks much more than is precisely true… yea, above all belief.” Thought to derive from Greek forms combining hyper, for “beyond” or “over,” and bole, for “a throwing” or “a casting.” A ridiculously long but off-target javelin throw in the Olympics is a cited example.
When I was a young cub in the trade press, the defining characteristic of the ‘promising’ reporter was the ability to sense the presence of hyperbole, and to write a sentence.
In the spirit of Diogenes and Hemingway, reporters are still trained to treat adjectives and adverbs as “red flags of exaggeration.” Words like ‘first’ and ‘new’ and ‘solution’ are verboten.
Hyperbole was the overarching concern for me through many years in the computer trade press…
It was a fundamental element etched in stone in the tech world by Gartner Group and its yearly Hype Cycle. Gallium Arsenide, Memristors, Bubble Memory, CCDs, VHDL, FDDI, Rapid Application Development, RFID Portals, Hadoop, RPA – most of these technologies rode the steep upward slope and then slid into the trough, only to emerge later with a few new twists and a new name, back at the beginning of the cycle.
But hyperbole took on a new tenor with the Y2K bug, with Cyber Security Doomsday and then, stunningly, with ChatGPT. Unbridled, ChatGPT ran through the headlines for the last three years. But in 2025, there was a shift in the public perception of this most promising and new (and other adjectives) technology.
Generative AI – which is what one would label ChatGPT – became an instrument of state dominance. It’s been framed as some special race to an undefined outcome on which the future of nations will be decided. Thus defined, it would brook no barriers. It was manifestly destined to consume power for the sake of its thirst for computation.
The ride was quick. It’s easy to forget now, but on release ChatGPT was briefly known as the “New Bing.” Not too world-shaking, that. Before he found success at the top of Microsoft, Satya Nadella had led the company’s limp alternative to Google. With OpenAI’s ChatGPT in hand, he saw a game changer, though the narrative was soon to get more vigorous.
“This new Bing will make Google come out and dance, and I want people to know that we made them dance,” Nadella said. It was a marketing ploy.
After this intelligent hula hoop was out and about gyrating, it would provide headlines for the media for over three years. Nadella had set up a situation where search giant and sleeping AI powerhouse Google had to respond.
Hello, Pandora!
Google had never rested on its laurels, or failed to capitalize on its trove of user search experiences. It had long been on the leading edge of AI.
It created MapReduce, which set the stage for industry advances in distributed parallel processing. Google Brain forged new approaches to unsupervised learning, particularly in image recognition. Its TensorFlow libraries for neural networks, paired with its invention of the Transformer’s ‘attention’ technique, enabled neural nets to process entire sequences of data simultaneously. Meanwhile, its BERT models for language understanding disclosed much of the intent behind users’ search queries.
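The ‘attention’ trick mentioned above can be sketched in a few lines. This is a minimal, illustrative version of scaled dot-product self-attention only – real Transformers add learned projections, masking, and many attention heads – but it shows how every position in a sequence is compared against every other position in one pass, rather than token by token.

```python
# Minimal sketch of scaled dot-product self-attention (illustrative only).
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Blend value vectors V using the similarity of queries Q to keys K."""
    d_k = Q.shape[-1]
    # Every query scores against every key at once: an (n, n) matrix.
    scores = Q @ K.T / np.sqrt(d_k)
    # Softmax turns each row of scores into weights that sum to 1.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output position is a weighted mix of the whole sequence.
    return weights @ V

# Toy sequence: 3 tokens, each a 4-dimensional embedding.
x = np.random.default_rng(0).normal(size=(3, 4))
out = scaled_dot_product_attention(x, x, x)  # self-attention: Q = K = V
print(out.shape)  # (3, 4)
```

The key point for the narrative: because the whole sequence is processed as one matrix operation, training parallelizes far better than the older word-by-word recurrent approach – which is what made the massive scaling of LLMs practical.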
What Google learned about how language worked continued to amaze. But it kept its most recent work under lock and key, presumably because it would cannibalize its healthy Search business. Oh, and it would open a Pandora’s box of questions concerning the future of mankind.
It’s hard to guess whether that was foreseen in Nadella’s tack to draw the Search giant from its lair. The generative AI engines’ remarkable performance conjured visions of Artificial General Intelligence – enough to furnish plots for a gazillion sci-fi pulps! The hyperbole race was on.
Say no to ceilings?
But in the wake of Congressional hearings, White House summits, and billion-dollar alliance announcements, the AI focus is finally shifting to the fundamental tasks required to make the advanced technology work.
Generative AI requires data of massive proportions, yet automating that “data herding” is only in its earliest phases. This bottleneck makes it difficult for applications to move from prototype to operational use. In many ways, mere newness is all that distinguishes this era from the now-forgotten days of Hadoop and Big Data.
Even more telling over the last three years have been the pointed critiques of AI researcher and educator Gary Marcus. He has shared a laundry list of doubts about LLMs – as have some of the technology’s original inventors – and as time goes by, he is less and less alone in calling out generative AI’s seldom-disclosed faults.
Besides its familiar hallucinating, the LLM engine’s reasoning capabilities prove narrow. Its interactions with users prove unsafe in numerous ways. To improve, it needs more and more data. Its ability to perform is limited, but its thirst for capex is seemingly bottomless.
Marcus essentially argues that LLMs have a ceiling on their effectiveness but no floor for their costs.
Early 2025 saw a brief shift toward efficient, lightweight models like DeepSeek. However, a wave of billion-dollar energy and infrastructure deals quickly reaffirmed the industry’s commitment to the ‘bigger is better’ approach.
But Marcus’s arguments are reaching wider audiences, including “Big Short” investor Steve Eisman. Eisman’s skeptical tweets and broadcast appearances, coinciding with a big shift in Oracle’s AI debt profile, spurred a pause in the stock market’s AI optimism.
Doom for sale
It’s not easy to shift from hype that’s run-of-the-mill marketing patter to hype of global-doom scale. Or is it? These days, it just takes a few turns on the social media cranks.
On the winds of imagined urgency and inevitability, generative AI grew into a geopolitical MacGuffin – an all-purpose trigger for the plot – with billions of dollars behind it.
This is in no small part due to the daydream mutterings of OpenAI CEO Sam Altman. With ChatGPT freshly hatched, the reporters’ notebooks opened to scribble as Altman began to talk about Artificial General Intelligence (AGI).
Altman confidently moved up the due date on Ray Kurzweil’s Singularity. For a pop culture well-versed in sci-fi dystopias, this proved jarring.
Some urged caution. Even before ChatGPT, European regulators pursued such an approach. The specter of GenAI-enabled personalization and recommendation engines could threaten citizens’ privacy, after all. In 2025, that take met the crushing confidence of former tech venture capitalist and current US Vice President J.D. Vance.
In February, Vance pressed for innovation over safety, speaking at a European AI conference.
“The AI future will not be won by hand-wringing about safety,” he said, emphasizing that the US administration will move quickly in a contest perceived as a civilizational donnybrook.
The last days of 2025 saw Donald Trump’s executive order on AI policy, which gave more detail to the concern that China could win an AI race. The document suggests more and more monies will flow to the GenAI/LLM cause. Gary Marcus, unsurprisingly, had a different take.
In a Substack article – The Core Misconception That Is Driving American AI policy – Marcus wrote:
For all intents and purposes, the race is basically a tie – with both countries serving many of their own customers, using largely similar products. Coke and Pepsi are commodities; GenAI models are, too. Building our AI policy around a fantasy that we are somehow going to crush China in LLM war (or vice versa!) is misguided.
This tech trade-press vet agrees. A marketing juggernaut blithely appropriating the course of a promising tech innovation is a problem. It should not obscure a challenging road ahead – a road where effective advances are the goal. Meanwhile, the ease with which Silicon Valley inventions are hyped to global proportions bears increased monitoring.
Note: Just to be fair, there is no Part 2. I was just having fun with the headline. Happy New Year 2026, all!
