Progressive Gauge

Media and Research


Computing

The Last Word on AI This Time

June 24, 2025 By Jack Vaughan

When I first heard of Generative AI, I was skeptical, although it was clearly a gigantic step forward for machine learning. I covered the Hadoop/Big Data era for five years. As noted before, we would ask: what do we do with Big Data? The answer, it turned out, was Machine Learning. But it was complex, hard to develop, difficult to gather data for, and the ROI was complicated or ephemeral. People would bemusedly ask if it had uses east of Oakland Bay. My experience with Big Data colored my perspective on Generative AI.

Generative AI requires great mountains of data to work. Herding that data is labor intensive. As with previous machine learning technologies, getting desired results from the model is difficult. Some engineer comes up with a tool to solve the problem, finds some VC, and startups race about. Few apps get from prototype proof-of-concept to operational. Fewer pay their own way.


Benedict Evans, Geoffrey Hinton and Gary Marcus are just some of the people who critiqued the LLM more ably than I did. But great excitement was unleashed on the global public, and there wasn’t much of an ear for their ‘wait a minute’ commentary.


But early this year, DeepSeek seemed to show that the rush to Generative AI – the rush of money for electricity and land to deploy it widely – should be more carefully considered. DeepSeek was an event that arrived at a receptive moment.


It seems in a way a textbook case of technology disruption: advocates were blind to the limits of scalability for LLMs, coming up with greater and greater kluges – think nuclear power, SMRs and the like.


Meanwhile, a crew at a resource-strapped Chinese quant-tank saw end-runs around those limits. The designers focused on efficiency rather than brute force, employing methods such as reduced floating-point precision, optimized GPU instruction sets, and AI distillation.
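
For the curious, here is a rough sketch of what two of those methods look like in practice: a generic knowledge-distillation loss and a reduced-precision cast, written in PyTorch purely as an illustration. It is not DeepSeek's code, and the tensors below stand in for hypothetical teacher and student models.

```python
# Illustration only: a generic knowledge-distillation loss plus a reduced-
# precision cast, sketched in PyTorch. Nothing here is DeepSeek's actual
# code; the tensors below stand in for hypothetical teacher/student logits.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # The student is trained to match the teacher's softened output
    # distribution (classic Hinton-style distillation).
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    # KL divergence between the two distributions, scaled by T^2
    # as in the original distillation formulation.
    return F.kl_div(student_log_probs, soft_targets,
                    reduction="batchmean") * temperature ** 2

# Reduced floating-point precision: half-precision weights and activations
# roughly halve memory traffic relative to float32.
teacher_logits = torch.randn(8, 50_000)        # pretend vocabulary logits
student_logits = torch.randn(8, 50_000)
print(distillation_loss(student_logits, teacher_logits))
print(student_logits.to(torch.float16).dtype)  # torch.float16
```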

Engineers love benchmarks – and they love to tear them down! Benchmarks are biased, yes. Even a child can figure that out. But … when you look at DeepSeek’s recipe, it is clever engineering. None of it is new. Others have worked on TinyML for years. The hardware optimizations they did were bread and butter years ago. There are plenty of computer scientists who are working to get around Generative AI’s issues [scale, cost, hallucination, and use cases being the big ones]. These issues make this Silicon Valley baby a suspect redeemer. With respect, Jim Cramer sometimes oversteps his actual areas of expertise.

That DeepSeek moment – don’t let anyone tell you it is more or less than that – has just been followed by an upswing in the contrarian view on LLMs. A world that would have nothing of it a year ago is now seriously discussing “The Illusion of Thinking,” a paper by Apple researchers that questions the true reasoning capabilities of large language models.

The paper may put a pall on Agentic AI, which has conveniently arisen as the answer to Generative AI’s first task: to finagle its way into real-world workflows. Now, as summer begins, there is more of an ear for voices that cite the challenges, obstacles, and over-sell that have marked the last 24 months or so of the AI era. That can be helpful in the big task of understanding and taming LLMs for greater usage.

Penultimately, we have to hold in mind some contrary points going forward. It is not that LLMs are not valuable, just that they have limits that hyperbole has obscured.

Inspiring to me was a recent post by network systems expert and author Bruce Davie. He reminds us that a rational middle path is often preferable to the extreme predictions of doom, bust, or boom that characterize today’s AI tempest. Humans can skew, but the mean always calls, and we may be seeing that now. [Thanks to Davie for cueing me to New Yorker writer Joshua Rothman and, in turn, F. Scott Fitzgerald, he of the adage about holding ‘two opposed ideas in the mind at the same time,’ seen above.]

This seems like a good time to let the Substack cogitate on these great matters. While I may post yet this season, I am kicking up my heels and dreaming about slapping skeeters Up North in Wisconsin. And taking “Lise Meitner: A Life in Physics” down from the shelf.

Source Code: Bill Gates’ Harvard Days

May 29, 2025 By Jack Vaughan

With the likes of Sam Altman and Elon Musk dashing about, we crouch for shelter now in an era where well-funded high-tech bros can live a life that was once reserved only for Doctor Strange.

That tends to make Bill Gates’ “Source Code: My Beginnings” (Knopf, 2025) a much more warmfy and life-affirming book than it might otherwise have been. In this recounting of his early days and the founding of Microsoft, he paints a colorful picture of a bright and excitable boy making good. Much of Source Code is set in “the green pastures of Harvard University.”

The boy wonder-to-be was born in Seattle in 1955, when computers were room-sized and totally unlike the consumer devices which humans now ponder like prayer books as they walk city streets.

His family was comfortable and gave him a lot of room to engage a very curious imagination. His mother called it precociousness, and it’s a trait he dampened down when he could. He had a fascination with basic analytical principles, which stood him in good stead when the age of personal computers dawned. [Read more…] about Source Code: Bill Gates’ Harvard Days

March of Analogies and AI neural networks

April 21, 2025 By Jack Vaughan

Hopfield circuit

Analogies provide us the tools to explore, discover, and understand invention, and to communicate about the invention itself. Draft horses of old still stand fast as such an analogy.

In the late 18th century, James Watt estimated that a strong dray horse could lift 33,000 pounds by one foot in one minute. That provided a comparative measure for the steam engine, and it carries right through to the engines under the hoods of today’s F1 speedsters and NASCAR racers.
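
As an aside, Watt’s figure converts cleanly to modern units. The quick arithmetic below is mine, not from the original post; it is just the standard foot-pound-to-watt conversion.

```python
# Back-of-the-envelope check of Watt's figure (illustrative arithmetic only):
# 33,000 foot-pounds per minute is the classic mechanical horsepower.
FT_LBF_IN_JOULES = 1.3558179      # energy of one foot-pound-force, in joules

ft_lbf_per_min = 33_000           # Watt's dray horse: 33,000 lb raised 1 ft in 1 min
ft_lbf_per_sec = ft_lbf_per_min / 60           # = 550 ft*lbf/s, the textbook definition
watts = ft_lbf_per_sec * FT_LBF_IN_JOULES      # about 745.7 W per mechanical horsepower

print(f"1 hp = {ft_lbf_per_sec:.0f} ft*lbf/s = {watts:.1f} W")
```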

While a commonly accepted measure of AI performance is still developing for ChatGPT and other Generative AI systems, there is an analogy at work, and it lies in the neural firings of the brain.

Lift the hood on Generative AI and you are looking at the neural network, which is an equivalent electrical circuit model of the workings of the brain. It is an equation or algorithm that is rendered in software. The software runs – these days – on a GPU (graphics processing unit).
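
To make that concrete, here is a toy sketch of the kind of equation being rendered: a single network layer as a weighted sum pushed through a nonlinearity. It is plain NumPy and purely illustrative, with no resemblance to any production system.

```python
# A toy rendering of "an equation in software": one layer of an artificial
# neural network is a weighted sum pushed through a nonlinearity.
# Plain NumPy, purely illustrative; nothing specific to any production LLM.
import numpy as np

rng = np.random.default_rng(0)

def layer(x, weights, bias):
    # y = f(W @ x + b), with ReLU standing in for the neuron's firing threshold
    return np.maximum(0.0, weights @ x + bias)

x = rng.normal(size=4)          # input activations
W = rng.normal(size=(3, 4))     # learned weights, the rough analog of synapses
b = rng.normal(size=3)          # bias terms
print(layer(x, W, b))           # the layer's three "neuron" outputs
```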

The neural net analogy gained the sanction of the academy with the award of the Nobel Prize in Physics (2024). That went to John J. Hopfield and Geoffrey Hinton “for foundational discoveries and inventions that enable machine learning with artificial neural networks.” Both men formulated neural networks in the 1980s that built on nearly a half-century of previous work.

The Nobelists would be the first to admit the limits of analogies are always evident, although the rapid rise and hype of Generative AI – now called “Agentic AI” – obscures that from the general public. [Read more…] about March of Analogies and AI neural networks

Brief exploration: Generative AI circa 2025

December 10, 2024 By Jack Vaughan

Complexities in deploying Gen AI and LLMs dim the light on some initial hype. These are the days of engineering, coordination, and integration.


The early procession of ‘2025 Outlooks’ seems to start with a lot of looks back. It’s mostly about Generative AI, which has proved to be a stock market mover, stock art maker, and stock item in years in review.

The song remains the same but the tenor may change. ChatGPT is two years old, and its market-shaking meteoric rise now feels a bit less meteoric, if only because changing the world requires effort.

[Read more…] about Brief exploration: Generative AI circa 2025

Get a grep

November 20, 2024 By Jack Vaughan

Details vary in different tellings, but all agree that Unix operating system co-creator Ken Thompson developed grep while at Bell Labs. His impetus came from a request by a manager for a program that could search files for patterns.


Thompson had written and had been using a program called ‘s’ (for ‘search’), which he debugged and enhanced overnight, the story goes. They nursed it and rehearsed it, and grep sprung forth: “g” stands for “global,” “re” stands for “regular expression,” and “p” stands for “print” – to get something to display on screen in those days, you used “print.” Thompson coming up with a software tool and sharing it throughout the office, and perhaps beyond – to me, that captured a moment in time.
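
For readers who never lived at a terminal prompt, the whole idea fits in a few lines. Below is a minimal Python imitation of what grep does; it illustrates the concept and has nothing to do with Thompson’s original code.

```python
# A minimal imitation of what grep does: read lines, keep those matching a
# regular expression, and print them. Purely illustrative; this has nothing
# to do with Thompson's original implementation.
import re
import sys

def grep(pattern, path):
    regex = re.compile(pattern)
    with open(path, errors="replace") as fh:
        for line in fh:
            if regex.search(line):
                print(line, end="")        # the "p": print matching lines

if __name__ == "__main__":
    grep(sys.argv[1], sys.argv[2])         # e.g. python minigrep.py error app.log
```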


I picked up on this based on an assigned mini-series for Data Center Knowledge. Also in this mini-series was a look at the roots of the kill command and the birth of SSH security [links below]. I knew bits of the early Unix history but had to dig for this one.

[Read more…] about Get a grep


Copyright © 2025 · Jack Vaughan
