Brief exploration: Generative AI circa 2025

December 10, 2024 By Jack Vaughan

Complexities in deploying Gen AI and LLMs dim the light on some initial hype. These are the days of engineering, coordination, and integration.

The early procession of ‘2025 outlooks’ seems to start with a lot of looking back. It’s mostly about Generative AI, which has proved to be a stock market mover, a stock art maker, and a stock item in year-in-review pieces.

The song remains the same, but the tenor may change. ChatGPT is two years old, and its market-shaking meteoric rise now feels a bit less meteoric, if only because changing the world requires effort.

As always, Benedict Evans capably paints the picture. Investment has risen wildly, but it has done so before Gen AI has become a proven market, he tells us in his yearly macro presentation. AI has been the topic of his yearly omnibus for three years running. The year before that, his preoccupation, like that of most seers at the time, yours truly included, was the metaverse.

The questions he poses for the Generative AI platform center largely on practical challenges. But some of this he describes as science – the work is experimental, and the limits of LLMs are still unknown.

Says Evans: “We don’t know the answers. We’re still working out the questions.” Early questions include:

  • How far will this scale?
  • Will more data and compute lead to better and better results?
  • Will LLMs do everything?
  • How do we deploy this?

The practical challenges of deployment call for tactical successes; without them, grand strategy comes undone. Wall Street is watching for those successes now, as are Chief Information Officers.

For example, it’s easy to sing the praises of revolutionary AI multi-agents. These are encapsulated functions that perform tasks such as information retrieval and summarization, perhaps linking a corporation’s own data domain with the large model’s capabilities. Conceptually – not new. But, for the moment, the greatest thing since sliced bread.

Effective communication and coordination among agents remain a significant challenge – basically, an engineering challenge with its share of trial and error.
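
For readers who want a concrete picture of those agents and their handoffs, here is a minimal sketch in plain Python. It is not drawn from any particular agent framework; the call_llm stub, the keyword-match retriever, and the coordinate function are hypothetical stand-ins for a real model endpoint, a real retriever over a corporation’s own data, and the orchestration logic where much of the engineering (and the trial and error) actually lives.

# A minimal sketch of two cooperating "agents" handing work to one another.
# The call_llm stub and the toy document store are hypothetical stand-ins
# for a hosted large language model and a corporation's own data source.

def call_llm(prompt: str) -> str:
    """Placeholder for a call to a large language model endpoint."""
    return f"[LLM summary of: {prompt[:60]}...]"

class RetrievalAgent:
    """Finds passages in the company's own data domain."""
    def __init__(self, documents: list[str]):
        self.documents = documents

    def run(self, query: str) -> list[str]:
        # A naive keyword match stands in for vector search or a real retriever.
        return [d for d in self.documents if query.lower() in d.lower()]

class SummarizationAgent:
    """Condenses retrieved passages by delegating to the large model."""
    def run(self, passages: list[str]) -> str:
        return call_llm("Summarize the following:\n" + "\n".join(passages))

def coordinate(query: str, documents: list[str]) -> str:
    """Sequence the agents and handle the awkward cases between them."""
    retrieved = RetrievalAgent(documents).run(query)
    if not retrieved:
        # The unglamorous engineering: deciding what happens when a handoff fails.
        return "No internal documents matched; fall back to the base model or escalate."
    return SummarizationAgent().run(retrieved)

if __name__ == "__main__":
    corpus = ["Q3 revenue rose 4 percent on services.", "Q3 headcount was flat."]
    print(coordinate("Q3", corpus))

Even in a toy like this, the coordinate step is where things go wrong in practice: empty retrievals, malformed handoffs between agents, and deciding when to fall back to the base model.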

Many of the issues now being addressed have to do with applying Generative AI within existing workflows. This concern leads Evans to ask if the engineering of all this is widely doable, and if businesses’ results with LLMs will continue to improve.

As impressive as LLM demos have been, they do not make it easier to shoehorn their information processing, pattern finding, text generation and summarization capabilities into everyday workflows. This is not the first technology that has had to go through such domestication. And there are plenty of efforts to optimize LLMs for easier, wider use.

That we are still working out the questions, as Evans says, is worth restating. Even so, the issues AI and LLMs present today go far beyond software engineering, to include local and national power grid dynamics, industrial-scale cooling, AI use in healthcare insurance decisions, and geopolitical strategy.

Yet, we look forward to measuring the course with a progressive gauge. And with that – a happy, healthy, prosperous new year to our readers. You are much appreciated at the home office! – Jack

 

Macro presentations of B. Evans – his site

Evans at SuperAI – YouTube

 

 

Filed Under: AI, Computing
