Progressive Gauge

Media and Research

  • Home
  • About
  • Blog
  • Projects, Samples and Items
  • Video
  • Contact Jack

Blog

‘Co-Evolution’ and the trend of ‘Cyberselfish’

December 10, 2025 By Jack Vaughan

One finds oneself on a given day — as the semiretired do — talking about the old days. In this case, the days of Stewart Brand’s Whole Earth Catalog and — more explicitly — his follow-up Co-Evolution Quarterly.

As I rattled on with a colleague, we both, in our own ways, recalled a time when the counterculture had some sway over technology. That seemed to support the notion of a hopeful future, one that rode on Brand’s vision. [Read more…]

Information Examiner October 2025 – Connectors tackle AI with MCP

October 2, 2025 By Jack Vaughan

The rise of Agentic AI and Large Language Models (LLMs) is transforming classic data integration, with the Model Context Protocol (MCP) emerging as a key piece of the new tooling. This protocol is changing how users interact with model data, and traditional data companies now race to meet the new requirements.

ISVs and enterprises can’t move fast enough on the new AI front, and traditional business databases will often be central. BI reporting will be an early target. Software architecture leads may turn increasingly to data connectivity providers like CData Software if they are going to move fast without breaking things. [Read more…]
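To make the MCP idea concrete, here is a toy sketch of the pattern it standardizes. MCP proper is a JSON-RPC protocol whose methods include tools/list and tools/call; everything else below — the tool name, handler, and response shapes — is illustrative invention, not the real SDK or wire format.

```python
import json

# Illustrative registry: one "tool" a model could discover and invoke.
# A real handler would query a business database; this one returns fixed data.
TOOLS = {
    "query_sales": {
        "description": "Run a canned sales query against a business database.",
        "handler": lambda region: {"region": region, "total": 1200},
    },
}

def handle_request(raw: str) -> str:
    """Dispatch one JSON-RPC-style request to a registered tool."""
    req = json.loads(raw)
    if req["method"] == "tools/list":
        result = [{"name": name, "description": t["description"]}
                  for name, t in TOOLS.items()]
    elif req["method"] == "tools/call":
        tool = TOOLS[req["params"]["name"]]
        result = tool["handler"](**req["params"]["arguments"])
    else:
        return json.dumps({"id": req.get("id"), "error": "unknown method"})
    return json.dumps({"id": req.get("id"), "result": result})

# An LLM client first lists the tools, then calls one with arguments.
listing = json.loads(handle_request(
    json.dumps({"id": 1, "method": "tools/list"})))
answer = json.loads(handle_request(
    json.dumps({"id": 2, "method": "tools/call",
                "params": {"name": "query_sales",
                           "arguments": {"region": "west"}}})))
```

The appeal for data vendors is visible even in this sketch: the database side publishes a small, typed catalog of capabilities, and any model that speaks the protocol can use them without custom integration code.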

Progressive Gauge Information Examiner – Week of Sept 1 2025

September 1, 2025 By Jack Vaughan

Enterprise AI attention turns to data architecture – Getting Generative AI apps up and running is the first problem enterprise teams encounter. Then comes maintaining those systems. As Sanjay Mohan points out in a recent blog post, this makes for a moving target: the data flowing into the AI engines must be continually monitored, because constant change is inherent in live data. Read more.

Speaking of Agentic AI Data Architecture – Connecting to and building out Generative AI will mean some changes in the way data architects approach solutions. Mohan Varthakavi’s recent Diversity.net post, “Reimagining Data Architecture for Agentic AI,” addresses the data challenge from an overarching perspective, and it is an interesting take. Read more.

Data to the fore as Snowflake and Nvidia report – Development of specialized AI agents will be a key indicator for the future of generative AI, analyst Mandeep Singh tells a Bloomberg podcast audience. Surprisingly perhaps, this puts Snowflake in a more competitive position as AI rules the airwaves. Read more.

Lab note: On Reading Sime’s “Lise Meitner”

August 5, 2025 By Jack Vaughan

I have been reading Lise Meitner: A Life in Physics [U Cal Press, 1996] by Ruth Lewin Sime. This is an important work, vital to understanding the evolving experimentation in pursuit of the nature of the atom in the 20th century. But its greater importance is in the light it shines on Lise Meitner, who continues to arise from the footnotes of too many histories to take a chief post in the history of science — this in the face of a crushing force of dehumanization. [Read more…]

The Last Word on AI This Time

June 24, 2025 By Jack Vaughan

When I first heard of Generative AI, I was skeptical, although it was clearly a gigantic step forward for machine learning. I covered the Hadoop/Big Data era for five years. As noted before, we would ask: what do we do with Big Data? The answer, it turned out, was Machine Learning. But it was complex, hard to develop, difficult to gather data for, and its ROI was complicated or ephemeral. People would bemusedly ask if it had uses east of Oakland Bay. My experience with Big Data colored my perspective on Generative AI.

Generative AI requires great mountains of data to work. Herding that data is labor intensive. As with previous machine learning technologies, getting desired results from the model is difficult. Some engineer comes up with a tool to solve the problems, finds some VC backing, and startups race about. Few apps get from prototype proof-of-concept to operational. Fewer pay their own way.


Benedict Evans, Geoffrey Hinton and Gary Marcus are just some of the people who critiqued the LLM more ably than I. But great excitement had been unleashed on the global public, and there wasn’t much of an ear for their ‘wait a minute’ commentary.


But early this year, DeepSeek seemed to show that the rush to Generative AI – the rush of money for electricity and land to deploy it widely – should be more carefully considered. DeepSeek was an event that arrived at a receptive moment.


In a way it seems a textbook case of technology disruption: advocates were blind to the limits of scalability for LLMs, coming up with greater and greater kluges – think nuclear power, SMRs and the like.


Meanwhile, a crew at a resource-strapped Chinese quant shop saw an end around. The designers focused on efficiency rather than brute force, employing methods such as reduced floating-point precision, optimized GPU instruction sets, and AI distillation.
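The first of those methods can be seen in miniature with nothing but the Python standard library: round-tripping a value through IEEE 754 half precision shows the storage savings and the accuracy loss that reduced-precision training and inference trade on. The weight value here is made up for illustration.

```python
import struct

def to_fp16_and_back(x: float) -> float:
    """Round-trip a float through IEEE 754 half precision (struct format 'e')."""
    return struct.unpack("<e", struct.pack("<e", x))[0]

weight = 0.1234567            # a hypothetical model weight
low = to_fp16_and_back(weight)

# Half precision stores ~3 decimal digits in 2 bytes instead of 4 or 8,
# so the value shifts slightly but stays close to the original.
error = abs(low - weight)
```

Scaled across billions of weights, that two-byte representation halves or quarters memory and bandwidth needs, which is the kind of bread-and-butter saving the DeepSeek designers leaned on.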

Engineers love benchmarks – and they love to tear them down! Benchmarks are biased, yes; even a child can figure that. But when you look at DeepSeek’s recipe, it is clever engineering. None of it is new. Others have worked on TinyML for years. The types of hardware optimizations they did were bread and butter years ago. There are plenty of computer scientists who are working to get around Generative AI’s issues [scale, cost, hallucination, and use cases being the big ones]. These issues make this Silicon Valley baby a suspect redeemer. With respect, Jim Cramer sometimes oversteps his actual areas of expertise.

That DeepSeek moment – don’t let anyone tell you it is more or less than that – has just been followed by an upswing in the contrarian view on LLMs. A world that would have nothing of it a year ago is now seriously discussing “The Illusion of Thinking” – a paper by Apple researchers that questions the true reasoning capabilities of large language models.

This may put a pall on Agentic AI, which has conveniently arisen as the answer to Generative AI’s first task: finagling its way into real-world workflows. Now, as summer begins, there is more of an ear for voices that cite the challenges, obstacles, and oversell that have marked the last 24 months or so of the AI era. That can be helpful in the big task of understanding and taming LLMs for greater usage.

Penultimately, we have to hold in mind some contrary points going forward. It is not that LLMs are not valuable, just that they have limits that hyperbole has obscured.

Inspiring to me was a recent post by network systems expert and author Bruce Davie. He reminds us that a rational middle path is often preferable to the extreme predictions of doom, bust, or boom that characterize today’s AI tempest. Humans can skew, but the mean always calls, and we may be seeing that now. [Thanks to Davie for cueing me to New Yorker writer Joshua Rothman and, in turn, to F. Scott Fitzgerald, he of the adage of holding “two opposed ideas in the mind at the same time.”]

This seems like a good time to let the Substack cogitate on these great matters. While I may yet post this season, I am kicking up my heels and dreaming about slapping skeeters Up North in Wisconsin. And taking “Lise Meitner: A Life in Physics” down from the shelf.

    Copyright © 2025 · Jack Vaughan · Log in
