Progressive Gauge

Media and Research

Jack Vaughan

The Last Word on AI This Time

June 24, 2025 By Jack Vaughan

When I first heard of Generative AI, I was skeptical, even though it was clearly a gigantic step forward for machine learning. I covered the Hadoop/Big Data era for five years. As noted before, we would ask: what do we do with Big Data? The answer, it turned out, was Machine Learning. But it was complex, hard to develop, difficult to gather data for, and ROI was complicated or ephemeral. People would bemusedly ask if it had uses east of Oakland Bay. My experience with Big Data colored my perspective on Generative AI.

Generative AI requires great mountains of data to work. Herding that data is labor intensive. As with previous machine learning technologies, getting desired results from the model is difficult. Some engineer will come up with a tool to solve the problems, find some VC money, and startups race about. Few apps get from prototype proof-of-concept to operational. Fewer still pay their own way.

Benedict Evans, Geoffrey Hinton and Gary Marcus are just some of the people who critiqued the LLM more ably than I could. But great excitement was unleashed on the global public, and there wasn’t much of an ear for their ‘wait a minute’ commentary.

But early this year, DeepSeek seemed to show that the rush to Generative AI – the rush of money for electricity and land to deploy it widely – should be more carefully considered. DeepSeek was an event that arrived at a receptive moment.

It seems in a way a textbook case of technology disruption: advocates were blind to the limits of scalability for LLMs, coming up with greater and greater kluges to keep going – think nuclear power, SMRs and the like.

Meanwhile, a crew at a resource-strapped Chinese quant-tank found an end-around. The designers focused on efficiency rather than brute force, employing methods such as reduced floating-point precision, optimized GPU instruction sets, and AI distillation.
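
To make one of those methods concrete: distillation trains a small “student” model to match the temperature-softened outputs of a large “teacher.” Here is a minimal sketch in PyTorch; the names, sizes, and loss form are illustrative assumptions, not DeepSeek’s actual recipe.

```python
# Minimal knowledge-distillation loss (illustrative; not DeepSeek's code).
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # Soften the teacher's distribution so the student also learns
    # from the relative probabilities of the "wrong" answers.
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    # KL divergence between teacher and student; the T^2 factor keeps
    # gradient magnitudes comparable across temperatures.
    return F.kl_div(student_log_probs, soft_targets,
                    reduction="batchmean") * temperature ** 2

# Toy usage: a batch of 8 positions over a 32k-token vocabulary.
teacher_logits = torch.randn(8, 32000)
student_logits = torch.randn(8, 32000, requires_grad=True)
loss = distillation_loss(student_logits, teacher_logits)
loss.backward()  # gradients now flow to the student
```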

Engineers love benchmarks – and they love to tear them down! Benchmarks are biased, yes; even a child can figure that. But when you look at DeepSeek’s recipe, it is clever engineering. None of it is new. Others have worked on TinyML for years. The type of hardware optimizations they did were bread and butter years ago. There are plenty of computer scientists who are working to get around Generative AI’s issues [scale, cost, hallucination, and use cases being the big ones]. These issues make this Silicon Valley baby a suspect redeemer. With respect, Jim Cramer sometimes oversteps his actual areas of expertise.

That DeepSeek moment – don’t let anyone tell you it is more or less than that – has just been followed by an upswing in the contrarian view on LLMs. A world that would have nothing of it a year ago is now seriously discussing “The Illusion of Thinking,” a paper by Apple researchers that questions the true reasoning capabilities of large language models.

“The Illusion of Thinking” may put a pall on Agentic AI, which has conveniently arisen as the answer to Generative AI’s first task: to finagle its way into real-world workflows. Now, as summer begins, there is more of an ear for voices that cite the challenges, obstacles, and over-sell that have marked the last 24 months or so of the AI era. That can be helpful in the big task of understanding and taming LLMs for greater usage.

Penultimately, we have to hold in mind some contrary points going forward. It is not that LLMs are not valuable, just that they have limits that hyperbole has obscured.

Inspiring to me was a recent post by network systems expert and author Bruce Davie. He reminds us that a rational middle path is often preferable to the extreme predictions of doom, bust, or boom that characterize today’s AI tempest. Humans can skew, but the mean always calls, and we may be seeing that now. [Thanks to Davie for cueing me to New Yorker writer Joshua Rothman and, in turn, F. Scott Fitzgerald, he of the adage about holding “two opposed ideas in the mind at the same time.”]

This seems like a good time to let the Substack cogitate on these great matters. While I may post yet this season, I am kicking up my heels, dreaming about slapping skeeters Up North in Wisconsin, and taking “Lise Meitner: A Life in Physics” down from the shelf.

Source Code: Bill Gates’ Harvard Days

May 29, 2025 By Jack Vaughan

With the likes of Sam Altman and Elon Musk dashing about, we crouch for shelter now in an era where well-funded high-tech bros can live a life that was once reserved only for Doctor Strange.

That tends to make Bill Gates’ “Source Code: My Beginnings” (Knopf, 2025) a much more warmfy and life-affirming book than it might otherwise have been. In this recounting of his early days and the founding of Microsoft, he paints a colorful picture of a bright and excitable boy making good. Much of Source Code is set in “the green pastures of Harvard University.”

The boy wonder-to-be was born in Seattle in 1955, when computers were room-sized and totally unlike the consumer devices which humans now ponder like prayer books as they walk city streets.

His family was comfortable and gave him a lot of room to engage a very curious imagination. His mother called it precociousness, and it’s a trait he dampened down when he could. He had a fascination with basic analytical principles, which stood him in good stead when the age of personal computers dawned. [Read more…] about Source Code: Bill Gates’ Harvard Days

March of Analogies and AI neural networks

April 21, 2025 By Jack Vaughan

[Image: Hopfield circuit]

Analogies provide us the tools to explore, discover, and understand invention, and to communicate about the invention itself. Draft horses of old still stand fast as such an analogy.

In the 18th century, James Watt estimated that a strong dray horse could lift 33,000 pounds by one foot in one minute. That provided a comparative measure for the steam engine, and it carries right through to the engines under the hoods of today’s F1 speedsters and NASCAR racers.
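
The arithmetic still holds up. A quick check of Watt’s figure – 33,000 foot-pounds per minute – against the familiar SI value:

```python
# Watt's horsepower estimate, converted to SI units.
FOOT = 0.3048            # meters per foot
POUND_FORCE = 4.4482216  # newtons per pound-force

ft_lbf_per_min = 33_000                            # Watt's estimate
watts = ft_lbf_per_min * FOOT * POUND_FORCE / 60   # joules per second
print(f"1 hp ≈ {watts:.1f} W")                     # ≈ 745.7 W
```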

While a commonly accepted measure of AI performance is still developing for ChatGPT and other Generative AI systems, there is an analogy at work, and it lies in the neural firings of the brain.

Lift the hood on Generative AI and you are looking at the neural network, which is an equivalent electrical-circuit model of the workings of the brain. It is an equation or algorithm that is rendered in software. The software runs – these days – on a GPU (graphics processing unit).

The neural net analogy gained the sanction of the academy with the award of the Nobel Prize in Physics (2024). That went to John J. Hopfield and Geoffrey Hinton “for foundational discoveries and inventions that enable machine learning with artificial neural networks.” Both men formulated neural networks in the 1980s that built on nearly a half-century of previous work.
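
For the curious, the kind of network Hopfield described fits in a few lines of code: binary neurons, Hebbian weights, and an update rule that settles into stored patterns. A minimal sketch, with toy patterns of my own devising:

```python
# A toy Hopfield network: store two patterns, then recover one
# from a corrupted cue. Patterns and sizes are illustrative.
import numpy as np

patterns = np.array([[1, -1, 1, -1, 1, -1],
                     [1, 1, 1, -1, -1, -1]])
n = patterns.shape[1]

# Hebbian rule: neurons that fire together wire together.
W = sum(np.outer(p, p) for p in patterns) / n
np.fill_diagonal(W, 0)  # no self-connections

def recall(state, sweeps=10):
    """Update neurons one at a time until the network settles."""
    s = state.copy()
    for _ in range(sweeps):
        for i in np.random.permutation(n):
            s[i] = 1 if W[i] @ s >= 0 else -1
    return s

cue = np.array([1, -1, -1, -1, 1, -1])  # first pattern, one bit flipped
print(recall(cue))  # settles back to [ 1 -1  1 -1  1 -1]
```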

The Nobelists would be the first to admit that the limits of analogies are always evident, although the rapid rise and hype of Generative AI – now called “Agentic AI” – obscures that from the general public. [Read more…] about March of Analogies and AI neural networks

Family Affair: Through the Elon Darkly

April 10, 2025 By Jack Vaughan

There’s not a lot of wailing in grief this week for Elon Musk, who has lost $100-billion-plus in stock value as Stop Wall Streeters spray-paint his showroom windows and as some investors call for his removal from Tesla’s helm. What’s a prototype man of the future to do? Who cares?

I’d venture that most Americans have seen enough of him, and even President Donald Trump seems tired of Musk’s presence. Mike Myers’ hilarious parody of this Nerd for All Seasons has knocked him from his very high horse.

But let’s flash back. It was just a few years ago that Musk was a paradigm of modern engineering, conqueror of the realms of electric cars and rocket propulsion. He was set to implant helpful silicon chips in needy craniums, and to bore a vacuum-sealed tunnel all the way from Los Angeles to Las Vegas.

[Read more…] about Family Affair: Through the Elon Darkly

Observations: Need real-time analytics? There’s a StarTree Cloud for that

March 19, 2025 By Jack Vaughan

Led by former LinkedIn and Uber hands, Mountain View, California-based StarTree looks to drive wider use of real-time analytical applications based around the Apache Pinot OLAP engine. This kind of technology has many uses in a world where great volumes of data arrive at ultrahigh velocity.

TECHNOLOGY EMPLOYED
If “OLAP” had marketing magic, that was a long time ago. OLAP was an early attempt to go beyond relational database and data warehouse limitations, but Apache Pinot is probably better described, in today’s parlance, as a column-oriented data store, and its competition can come from any of the many databases to arise in recent years. Apache Pinot is designed to handle fast ingestion of data and fast joins on users’ SQL queries. Since the StarTree focus is on cloud computing – it’s found in the three big cloud providers’ marketplaces – it can be, and has been, called a Database as a Service (DBaaS). [Read more…] about Observations: Need real-time analytics? There’s a StarTree Cloud for that
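
For a sense of what that looks like in practice, here is a sketch of a query posed to a Pinot broker’s SQL REST endpoint. The host, port, table name, and columns are illustrative assumptions, not StarTree specifics:

```python
# Query a (hypothetical) real-time table through the Pinot broker's
# SQL REST endpoint. Host, port, table, and columns are illustrative.
import requests

BROKER = "http://localhost:8099"  # default Pinot broker port

sql = """
SELECT country, COUNT(*) AS events
FROM clickstream
WHERE ts > ago('PT5M')
GROUP BY country
ORDER BY events DESC
LIMIT 10
"""

resp = requests.post(f"{BROKER}/query/sql", json={"sql": sql})
resp.raise_for_status()
for row in resp.json()["resultTable"]["rows"]:
    print(row)
```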


Copyright © 2026 · Jack Vaughan