Progressive Gauge

Media and Research


computing

Information Examiner October 2025 – Connectors tackle AI with MCP

October 2, 2025 By Jack Vaughan

The rise of Agentic AI and Large Language Models (LLMs) is transforming classic data integration, with the Model Context Protocol (MCP) emerging as a key piece of the new tooling. This protocol is changing how users interact with model data, and traditional data companies now race to meet the new requirements.

ISVs and enterprises can’t move fast enough on the new AI front, and traditional business databases will often be central. BI reporting will be an early target. Software architecture leads may turn increasingly to data connectivity providers like CData Software if they are going to move fast without breaking things. [Read more…]
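
To make the idea concrete, here is a minimal sketch of what such a connector can look like, assuming the official MCP Python SDK (the mcp package); the server name, tool and SQLite file are hypothetical stand-ins for a real business database:

    # Minimal MCP server sketch: exposes one database query tool to an
    # LLM agent. Assumes the MCP Python SDK ("pip install mcp").
    import sqlite3

    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("sales-db")  # hypothetical server name

    @mcp.tool()
    def query_sales(sql: str) -> list:
        """Run a SQL query against the (hypothetical) sales database."""
        # A production connector would enforce read-only access here.
        conn = sqlite3.connect("sales.db")
        try:
            return conn.execute(sql).fetchall()
        finally:
            conn.close()

    if __name__ == "__main__":
        mcp.run()  # serves the tool over stdio by default

An MCP-aware client – a BI assistant, say – discovers the query_sales tool and calls it on the model’s behalf; the protocol, not the connector vendor, defines that handshake.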

Why tinyML?

March 26, 2021 — In about 2004 this reporter asked a top IBM software development leader what cloud computing looked like to him. “It looks like a mainframe,” he said with only half a smile. True enough, cloud is a centralized way of computing, which is beginning to raise questions.

One of which is: Will machine learning be forever in the “glass room?” That is the old-time descriptor for the home sweet home of the immortal mainframe era, where numbers got crunched and good ideas went to die.

Today, technologists are working to bring machine learning out of the closet and into the real world in the form of edge computing.

For that to happen, machine-made observations and decisions will have to succeed on individual chips in devices and on boards, far from the cloud data center, where abundant electrical power allows nearly unlimited compute.

For that to happen, machine learning at the edge, which is often more project than reality today, will have to become productized. It will have to work within much tighter constraints. That is the motivation behind tinyML, which — thank goodness — is more a way of doing things than it is a standard or product.
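
What “productized” can mean in practice is squeezing a trained model into a memory and power budget measured in kilobytes and milliwatts. As a minimal sketch, assuming TensorFlow’s TFLiteConverter, post-training int8 quantization is one common route; the saved-model path and 96x96 grayscale input below are hypothetical:

    # Post-training int8 quantization sketch (TensorFlow Lite).
    # The saved-model path and input shape are hypothetical.
    import tensorflow as tf

    def representative_data():
        # Calibration samples; real projects use actual input data.
        for _ in range(100):
            yield [tf.random.normal([1, 96, 96, 1])]

    converter = tf.lite.TFLiteConverter.from_saved_model("saved_model_dir")
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    converter.representative_dataset = representative_data
    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
    converter.inference_input_type = tf.int8
    converter.inference_output_type = tf.int8

    with open("model_int8.tflite", "wb") as f:
        f.write(converter.convert())

The resulting flat buffer can then run under an interpreter such as TensorFlow Lite for Microcontrollers, on a device with kilobytes, not gigabytes, of RAM.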

Issues facing tinyML as it struggles to leave the cocoon are worth consideration. As with client-server and other computing paradigm shifts, the outcome will depend on how teams on the cloud and on the edge deal with the details of implementation.

A panel at this week’s tinyML Summit 2021 afforded an opportunity for just such consideration. Here I share some comments and impressions from a session of expert implementers working to make it happen.

The lively panel discussion entitled “tinyML inference SW – Where do we go from here?” was moderated by Ian Bratt, Distinguished Engineer & Fellow, Arm. Joining Bratt were Chris Lattner, President, Engineering and Product, SiFive; Tianqi Chen, CTO, OctoML; Raziel Alvarez, Technical Lead for PyTorch at Facebook AI; and Pete Warden, Technical Lead, Google. (A link to the panel recording on YouTube is found at the bottom of this page.)

A familiar view emerged, one that showed the creators of the trained machine learning model handing off their work, hoping a dedicated engineer can make the code run in the end. That conjures the old saw about ‘throwing it over the wall,’ and hoping system programmers can do the finished carpentry.

The tableau suggested that the objectives of the researchers, in a sort of ivory tower of cloud machine learning, were somewhat at odds with those of the front-line inference engineers at the edge, where cycle economy is paramount and power consumption is crucial.

That echoes yet another developer saw, ‘it worked on my machine’ – one of the classic crunch-time excuses in the history of computing.

Other issues:

-It may take top gun skills to make a trained model work in practice. “Somebody has to do magic to get it into production,” said Raziel Alvarez.

-People are able to deploy these models, but the effort is considerable. The many different cogs in machine learning (for example, the link between a CPU and a GPU) have to be managed deftly. In practice that means “people have to go outside their [practice] boundaries,” said T.Q. Chen.

-Teams hope to deploy inference on a variety of hardware, but each hardware version and type requires special tuning. And low-level hardware changes can set off a cascade of changes all the way up the machine learning development stack. “As soon as you get heterogeneous hardware, models tend to break,” said Pete Warden.

Hmmm, maybe that is enough on the ‘challenges.’ Obviously, people go at this to succeed, not to dwell on obstacles. But obstacles go with the move to production for machine learning inference. As one tinyML Summit 2021 panelist said of recent history, “we have found a lot of what doesn’t work – we know what we don’t know.”

It will be interesting to see if and how machine learning technology moves to the edge from the cloud. In architecture, the devil isn’t in the details, but in building, it is. What is likely is that the leap from science project to useful product will depend on the future work of the participants at tinyML Summit 2021 and other conferences to come. – Jack Vaughan


Innovations that emerge from bubbles – A look back at Google and AWS

[May 31, 2020] – For Google and Amazon, the early days after the dot-com bubble of the early 2000s somewhat resembled the first days of manned flight.

Great minds were at work, but there was chewing gum and chicken wire too – or, in the case of Google and Amazon, Velcro, commodity disk drives and bug-swatting systems-level engineering.

The dot-com bubble is distant now, as we encounter a potentially worse financial crisis a-brewing. So a look back at these winning companies’ original technical maneuverings may be in order.

Today, peering into the future tense, you are pretty safe in proposing there will be consolidation, mergers and acquisitions ahead for heated tech fields such as AI, machine learning and IoT. But while the big get bigger, new stars emerge, as Google, Amazon and refocused editions of Apple and Microsoft proved in the wake of the financial crises of 2000 and 2008.

This reporter got a view into the early days of “fast, reliable and cheap” deployments at Usenix in Boston in June 2004. At a keynote there, Rob Pike described a Google application development mentality that led the company to take on the responsibility for developing its own fault tolerance. That report sheds light on the dawn of cloud computing, and is presented here: Velcro: Young Google’s sticky little secret. – Jack Vaughan


Quantum Supremacy: Are we not quite there yet?

[Image: “Quantum Table Cloth” – 54 qubits (53 used for computation) mark Google’s Sycamore processor.]

The atmosphere is tense, uncertain, ominous … a reminder in a way of how it was in February of 1964 when the young Cassius Clay took on the larger Sonny Liston. All eyes are on Google and IBM, as they square off in a battle for quantum dominance that, some wags say, will not soon be settled.

A slipped embargo gave indications Google would soon announce it had achieved Quantum Supremacy. This is a long-anticipated moment in some technology quarters, and Google’s competitors were on alert for something akin to a prize fight.

Quantum Supremacy is the moment at which researchers could declare quantum computers capable of solving problems that are beyond the reach of classical computing machines. It is important to AI and big data users who may be finding the limits of silicon microprocessors, now that Gordon Moore’s Law of never-ending computing improvements has tapped out.
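
The benchmark behind Google’s claim is random circuit sampling: execute a randomly generated quantum circuit and sample its output distribution, a task whose classical simulation cost grows exponentially with qubit count. As a toy illustration only (the real experiment used 53 qubits and cross-entropy benchmarking), here is the shape of such a task in Google’s Cirq library; the four qubits and ten layers are hypothetical, chosen to stay trivially simulable:

    # Toy random-circuit sampling sketch using Cirq.
    import cirq

    qubits = cirq.LineQubit.range(4)  # Sycamore used 53 qubits
    circuit = cirq.testing.random_circuit(
        qubits, n_moments=10, op_density=0.8, random_state=42)
    circuit.append(cirq.measure(*qubits, key="m"))

    # Classical simulation: the cost of this step explodes as qubits
    # are added, which is what the supremacy argument leans on.
    result = cirq.Simulator().run(circuit, repetitions=1000)
    print(result.histogram(key="m"))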

Now back to that broken embargo. Google Research’s collaborators at NASA had inadvertently posted a paper meant to be held until this week, when Nature magazine would publish the quantum supremacy findings. The miscue gave quantum competitor IBM time to prepare its defense, suggesting Google had drastically misestimated the capabilities of today’s best classical computers in its analysis, and that quantum supremacy wasn’t such a big deal anyway.

Google had gauged it would take IBM’s Summit supercomputer 10,000 years to perform a random number validation task that its 54-qubit Sycamore quantum processor could complete in 200 seconds. With enough memory, IBM countered, “we could do much better than 10,000 years.” IBM’s Summit could instead do the job in a couple of days, the company said.
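
The arithmetic shows what is at stake in the rebuttal. Taking “a couple of days” as roughly 2.5 days (my assumption), the claimed advantage shrinks from about nine orders of magnitude to about three:

    # Back-of-the-envelope comparison of the dueling estimates above.
    SECONDS_PER_YEAR = 365 * 24 * 3600

    sycamore = 200                                 # seconds, per Google
    summit_per_google = 10_000 * SECONDS_PER_YEAR  # Google's Summit estimate
    summit_per_ibm = 2.5 * 24 * 3600               # IBM's "couple of days"

    print(f"Google's framing: {summit_per_google / sycamore:,.0f}x faster")
    # -> 1,576,800,000x
    print(f"IBM's framing:    {summit_per_ibm / sycamore:,.0f}x faster")
    # -> 1,080x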

IBM has reasonable bones to pick with Google’s quantum supremacy experiment. The small tempest obscures the likelihood that quantum computing is getting closer, but is still quite far off. That has been the status all along.

Google and IBM are only two of many players in the quantum quest. But their slight spat has put them at the forefront of attention.

For Google, quantum computing appears to be an exercise in pure research, in which any usefulness in shaving time off Google searches would be a bonus.

For IBM on the other hand, quantum computing has become one of the fundamental elements in a quest – think Watson — to claim a prerogative to lead the next computing era.

Google’s announcement is late in some ways. Google has taken a particular interest in the quantum supremacy hurdle, and when I was researching the topic in 2017 – a fevered year of what now seem like chimerical quantum advances – a Google announcement had been fairly widely expected to be imminent.

That Google’s announcement has come nearly two years later with concomitant noise, flutter and thud is something that could have been anticipated. Going forward, error correction, entanglement, fault tolerance and other factors present obstacles at every step along the way.

Always stalking the effort: classical computers may still be capable of bone-jarring breakthroughs.

What will be interesting to watch is the interplay between classical computing advances and quantum computing advances.

While classical computing has been good enough to continually exceed a lot of needs, scientists doing research at the atomic and molecular level have tapped out much of classical approaches’ potential.

Putting this week’s quantum family feud aside, the work on machine learning and quantum simulation by scientists could provide the kind of inflection point that is worthy of the computing community’s expectant wait. Both IBM and Google seem to agree that advances in materials science could be a breakthrough application.

Related

Quantum computing apps creep forward – SearchEAI (2017)
Timeline of quantum – Wiki
On quantum supremacy – IBM
Google’s quantum Nature article – Nature
Quantum supremacy: The gloves are off – Scott Aaronson blog (100+ comments and counting)
Chemistry as quantum computing killer app – CEN.ACS

Progressive Gauge

Copyright © 2025 · Jack Vaughan
