[April 17, 2023] – Had the opportunity to speak with Forrester analyst Ronan Curran recently for a VentureBeat article. Of course, the topic was ChatGPT, generative AI, and large language models.
His counsel was both optimistic and cautionary – a good summation of the bearings IT decision makers should set as they begin yet another tango with a new technology meme.
A handy summarizer-paraphraser tells me Curran told VentureBeat that it would be a mistake to underestimate the technology, even though it remains difficult to critically examine the many potential use cases for generative AI.
Yes, that applies to every technical challenge – every day. And it bears repeating as each new technology whispers or yells that the fundamental rules no longer apply – and yet they do.
Looking back on my conversation with Curran, I find insight in what some would say is obvious. The large language models are … large! And, as Curran told me, because they are large, they cost a lot to compute and train. This reminds us, as others have noted, that LLMs should be viewed like polo or horse racing – as a game for the rich.
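To make the "large means expensive" point concrete, here is a minimal back-of-envelope sketch in Python. It leans on the widely cited approximation that transformer training takes roughly 6 × parameters × tokens floating-point operations; the parameter and token counts echo published GPT-3 figures, while the utilization rate and GPU pricing are hypothetical placeholders of my own, not numbers from Curran or Forrester.

```python
# Back-of-envelope: why large language models are costly to train.
# Uses the widely cited approximation: training FLOPs ~= 6 * params * tokens.
# Utilization and pricing below are assumed placeholders for illustration.

params = 175e9            # model parameters (GPT-3 scale)
tokens = 300e9            # training tokens (GPT-3 scale)
total_flops = 6 * params * tokens          # ~3.15e23 FLOPs

a100_peak_flops = 312e12  # NVIDIA A100 dense BF16 peak, FLOPs per second
utilization = 0.4         # assumed fraction of peak actually sustained
gpu_seconds = total_flops / (a100_peak_flops * utilization)
gpu_hours = gpu_seconds / 3600             # ~700,000 GPU-hours

price_per_gpu_hour = 2.0  # assumed cloud rate, USD
print(f"~{gpu_hours:,.0f} A100-hours, roughly ${gpu_hours * price_per_gpu_hour:,.0f}")
```

Even with generous assumptions, the arithmetic lands in the millions of dollars for a single training run – before you count data pipelines, experiments that fail, and retraining.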
Why do we say a game for the rich? On one level, the LLM era stacks up as a battle of megacloud builders, albeit one with aspects of a playground grudge match. Microsoft CEO Satya Nadella, who had the thankless task of competing with Google on the search front, almost seems to chortle: “This new Bing will make Google come out and dance, and I want people to know that we made them dance.”
For the cloud giants, the business already had aspects of a war of attrition, as they staked out data center regions across the globe. The folks at Semianalysis.com have taken a hard stab at estimating a day in the life of an LLM bean counter, suggesting a “model indicating that ChatGPT costs $694,444 per day to operate in compute hardware costs.” Of course, these are back-of-the-envelope estimates – and the titans that host LLMs will look to engineer savings.
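For a sense of how such a daily figure gets assembled, here is a minimal sketch in the same spirit. The structure – users × queries × cost per query – is generic; the specific inputs below are my own assumed placeholders, not the actual parameters of the Semianalysis model.

```python
# Back-of-envelope daily inference cost, in the spirit of the Semianalysis
# estimate cited above. All inputs are assumed placeholders, not the
# actual figures behind the published $694,444/day number.

daily_users = 10e6        # daily active users (assumed)
queries_per_user = 15     # average prompts per user per day (assumed)
cost_per_query = 0.005    # USD of compute per response (assumed)

daily_queries = daily_users * queries_per_user
daily_cost = daily_queries * cost_per_query
print(f"{daily_queries:,.0f} queries/day -> roughly ${daily_cost:,.0f}/day")
```

The lever the hosts will pull is cost_per_query: batching, quantization, distillation, and smaller purpose-built models all push that number down, which is where the engineered savings would show up.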
The new LLM moment summons to mind a technology that consumed much attention not so long ago: Big Data. The magic of Hadoop had a difficult time jumping from the likes of Google, Facebook, and Netflix to the broader market. Maybe Big Data should have been named ‘Prodigious Data’ – because that would have offered fairer warning to organizations that had to gather such data, administer it, and come up with clever and profitable use cases.
“What is Big Data good for?” was a common question, even in its heyday. Eventually the answer was “machine learning.”
Much of Big Data remained in the realm of the prototype; successes and failures alike came under that banner. In the end, it was a step forward for enterprise analytics. Clearly, experimentation is where we are now with ChatGPT.
The more interesting future for more people may lie in outcomes with small language models, Forrester’s Curran told me. These will succeed or fail on a use-case-by-use-case basis.
As industry observer Benedict Evans writes in “ChatGPT and the Imagenet moment,” ChatGPT feels like a step-change forward in the evolution of machine learning. It falls somewhat short of sentience. There is potential, but there are plenty of questions to answer before its arc can be well gauged.
Read “Forrester: Question generative AI uses before experimentation” – VentureBeat Feb 24, 2023
https://venturebeat.com/ai/forrester-question-generative-ai-uses-before-experimentation/
Read “ChatGPT and the Imagenet moment” – ben-evans.com Dec 14, 2022
https://www.ben-evans.com/benedictevans/2022/12/14/ChatGPT-imagenet
Read “The Inference Cost of Search Disruption – Large Language Model Cost Analysis” – Semianalysis.com Feb 9, 2023
https://www.semianalysis.com/p/the-inference-cost-of-search-disruption