Progressive Gauge


Media and Research

  • Home
  • About
  • Blog
  • Projects, Samples and Items
  • Video
  • Contact Jack

Data

Information Examiner October 2025 – Connectors tackle AI with MCP

October 2, 2025 By Jack Vaughan

The rise of Agentic AI and Large Language Models (LLMs) is transforming classic data integration, with the Model Context Protocol (MCP) emerging as a key piece of the new tooling. This protocol is changing how users interact with model data, and traditional data companies now race to meet the new requirements.

ISVs and enterprises can’t move fast enough on the new AI front, and traditional business databases will often be central. BI reporting will be an early target. Software architecture leads may turn increasingly to data connectivity providers like CData Software if they are going to move fast without breaking things. [Read more…]
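As a thumbnail of what MCP traffic looks like: the protocol rides on JSON-RPC 2.0, with a client first listing a server’s tools and then calling one. The sketch below is illustrative only – the `run_sql` tool name and its query are hypothetical, not part of any vendor’s actual connector.

```python
import json

def mcp_request(req_id, method, params=None):
    """Build a JSON-RPC 2.0 message of the kind MCP clients and servers exchange."""
    msg = {"jsonrpc": "2.0", "id": req_id, "method": method}
    if params is not None:
        msg["params"] = params
    return json.dumps(msg)

# A client discovering a server's tools...
list_tools = mcp_request(1, "tools/list")

# ...then invoking one against a traditional business database
# (tool name and query are made up for illustration).
call_tool = mcp_request(2, "tools/call", {
    "name": "run_sql",
    "arguments": {"query": "SELECT region, SUM(sales) FROM orders GROUP BY region"},
})
```

The point of the protocol is that the model-facing side stays uniform; it is the connector behind `tools/call` that hides the database-specific plumbing.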

Progressive Gauge Information Examiner – Week of Sept 1 2025

September 1, 2025 By Jack Vaughan

Enterprise AI attention turns to data architecture – Getting Generative AI apps up and running is the first problem that enterprise teams encounter. Then comes maintaining those systems. As Sanjay Mohan points out in a recent blog post, this makes for a moving target, as data flowing into the AI engines must be continually monitored. Constant change is inherent in live operational data. Read more.

Speaking of Agentic AI Data Architecture – Connecting to and building out Generative AI is going to mean some changes in the way data architects approach solutions. Mohan Varthakavi’s recent Diversity.net post, “Reimagining Data Architecture for Agentic AI,” addresses the data challenge from an overarching perspective, and it is an interesting take. Read more.

Data to the fore as Snowflake and Nvidia report – Development of specialized AI agents will be a key indicator of the future of generative AI, analyst Mandeep Singh tells a Bloomberg podcast audience. Surprisingly perhaps, this puts Snowflake in a more competitive position as AI rules the airwaves. Read more.

The Last Word on AI This Time

June 24, 2025 By Jack Vaughan

When I first heard of Generative AI, I was skeptical, although it was clearly a gigantic step forward for machine learning. I covered the Hadoop/Big Data era for five years. As noted before, we would ask: What do we do with Big Data? The answer, it turned out, was Machine Learning. But it was complex, hard to develop, difficult to gather data for, and ROI was complicated or ephemeral. People would bemusedly ask if it had uses east of Oakland Bay. My experience with Big Data colored my perspective on Generative AI.

Generative AI requires great mountains of data to work. Herding that data is labor intensive. As with previous machine learning technologies, getting desired results from the model is difficult. Some engineer comes up with a tool to solve the problems, finds some VC money, and startups race about. Few apps get from prototype proof-of-concept to operational. Fewer still pay their own way.

 

Benedict Evans, Geoffrey Hinton and Gary Marcus are just some of the people who critiqued the LLM more ably than I. But great excitement was unleashed on the global public, and there wasn’t much of an ear for their ‘wait a minute’ commentary.

 

But early this year, DeepSeek seemed to show that the rush to Generative AI – the rush of money for electricity and land to deploy widely – should be more carefully considered. DeepSeek was an event that arrived at a receptive moment.

 

It seems, in a way, a textbook case of technology disruption: advocates were blind to the limits of scalability for LLMs, coming up with greater and greater kluges – think nuclear power, SMRs and the like.

 

Meanwhile, a crew at a resource-strapped Chinese quant-tank saw an end-around. The designers focused on efficiency rather than brute force, employing methods such as reduced floating-point precision, optimized GPU instruction sets, and AI distillation.
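As a small illustration of the reduced-precision idea (a sketch, not DeepSeek’s actual code): Python’s struct module can round-trip a value through IEEE 754 half precision, showing how much resolution gets traded away for memory and bandwidth savings.

```python
import struct

def to_half_precision(x: float) -> float:
    """Round-trip a value through IEEE 754 half precision (fp16),
    simulating the reduced-precision arithmetic used to cut compute cost."""
    return struct.unpack("e", struct.pack("e", x))[0]

weight = 0.123456789
w16 = to_half_precision(weight)
# fp16 keeps roughly three decimal digits of precision, at half (or less)
# the memory per weight compared with fp32 -- accuracy the training recipe
# must be engineered to tolerate.
```

The engineering bet is that model quality degrades far less than cost does, which is exactly the trade the efficiency-first designs exploited.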

Engineers love benchmarks – and they love to tear them down! Benchmarks are biased, yes. Even a child can figure that. But when you look at DeepSeek’s recipe, it is clever engineering. None of it is new. Others have worked on TinyML for years. The type of hardware optimizations they did were bread and butter years ago. There are plenty of computer scientists who are working to get around Generative AI’s issues [scale/cost/hallucination/use cases being the big ones]. These issues make this Silicon Valley baby a suspect redeemer. With respect, Jim Cramer sometimes oversteps his actual areas of expertise.

That DeepSeek moment – don’t let anyone tell you it is more or less than that – has just been followed by an upswing in the contrarian view on LLMs. A world that would have nothing of it a year ago is now seriously discussing “The Illusion of Thinking” – a paper by Apple researchers that questions the true reasoning capabilities of large language models.

This may put a pall on Agentic AI, which has conveniently arisen as the answer to Generative AI’s first task: to finagle its way into real-world workflows. Now, as summer begins, there is more of an ear for voices that cite the challenges, obstacles, and over-sell that have marked the last 24 months or so of the AI era. That can be helpful in the big task of understanding and taming LLMs for greater usage.

Penultimately, we have to hold in mind some contrary points going forward. It is not that LLMs are not valuable, just that they have limits that hyperbole has obscured.

Inspiring to me was a recent post by network systems expert and author Bruce Davie. He reminds us that a rational middle path is often preferable to the extreme predictions of doom, bust, or boom that characterize today’s AI tempest. Humans can skew, but the mean always calls, and we may be seeing that now. [Thanks to Davie for cueing me to New Yorker writer Joshua Rothman and, in turn, F. Scott Fitzgerald, he of the adage of holding “two opposed ideas in the mind at the same time.”]

This seems like a good time to let the Substack cogitate on these great matters. While I may yet post this season, I am kicking up my heels, dreaming about slapping skeeters Up North in Wisconsin, and taking “Lise Meitner: A Life in Physics” down from the shelf.

Observations: Need real-time analytics? There’s a StarTree Cloud for that

March 19, 2025 By Jack Vaughan

Led by former LinkedIn and Uber hands, Mountain View, California-based StarTree looks to drive wider use of real-time analytical applications based around the Apache Pinot OLAP engine. This kind of technology has many uses in a world where great volumes of data arrive at ultrahigh velocity.

TECHNOLOGY EMPLOYED
If “OLAP” had marketing magic, that was a long time ago. OLAP was an early attempt to go beyond relational database and data warehouse limitations, but Apache Pinot is probably better described in today’s parlance as a column-oriented data store, and its competition can come from any of the many databases to arise in recent years. Apache Pinot is designed to handle fast ingestion of data and fast joins on users’ SQL queries. Since the StarTree focus is on cloud computing – it’s found in the three big cloud providers’ marketplaces – it can also be called a Database as a Service (DBaaS). [Read more…]
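A toy sketch of what “column-oriented” means in practice (illustrative Python, not Pinot internals): each column lives in its own contiguous array, so an aggregation reads only the columns it actually touches.

```python
# Row store vs column store: the same three orders, laid out two ways.
rows = [
    {"region": "east", "sales": 10},
    {"region": "west", "sales": 30},
    {"region": "east", "sales": 5},
]

# Column-oriented layout: one contiguous list per column.
columns = {
    "region": [r["region"] for r in rows],
    "sales":  [r["sales"] for r in rows],
}

# An analytical aggregation scans only the two columns it needs --
# the trait that makes column stores fast for SQL like
# SELECT SUM(sales) FROM orders WHERE region = 'east'.
total_east = sum(
    s for reg, s in zip(columns["region"], columns["sales"]) if reg == "east"
)
# total_east == 15
```

At ultrahigh ingest velocity the same layout pays off again, since fresh events can be appended per-column and queried almost immediately.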

New tune: Oracle and AWS sign cloud pact at Oracle Cloud World

September 10, 2024 By Jack Vaughan

Since Oracle really got serious about the cloud back in 2018, its ‘Generation 2 Cloud Platform’ has evolved in a number of ways without forestalling AWS’s ascent in the database management space.

 

So, that made Oracle Cloud World 2024 a great occasion to declare victory and shake hands with AWS, as the company had earlier done with Azure SQL maker Microsoft and Google Cloud.

 

Oracle’s reported cloud advances made it one of the brighter lights on the stock market this year, but the company still faces the challenge of boosting capex spending in order to go toe-to-toe with the big cloud players. The biggies are feverishly building out bigger cloud data centers as Generative AI workloads grow. Oracle is redefining a cloud region to include some smaller cloud setups.

 

This latest handshake includes the launch of Oracle Database@AWS, a new offering that “allows customers to access Oracle Autonomous Database on dedicated infrastructure and Oracle Exadata Database Service within AWS.” Workloads running on Oracle RAC are also covered.

 

The announcement eases migration headaches, Brian Tilzer, Chief Digital, Analytics and Technology Officer, Best Buy, said in a statement.

 

“This announcement makes it easier for us to move some of our database workloads to AWS,” concurred Joe Frazier, Head of Architecture and Platform Engineering, Fidelity Investments.

 

That means the Oracle database’s tight connection to Oracle infrastructure will be supported in all three of the big clouds. And Oracle may save some capex on its own multiyear cloud data center rollout.

 

“What if we embedded an Oracle data center right into an AWS data center?” asked Oracle Chairman and CTO Larry Ellison at the event in Las Vegas. He outlined benefits to users in terms of workload migration, system integration, low latency and simple billing.

 

That’s not the tenor of question Ellison asked at past Oracle annual conferences, where he sometimes harshly lectured on alleged shortcomings of AWS offerings. This reporter has written before that Oracle’s pride often borders on arrogance. But a 2022 confab saw some lessening of the “Born to Raise Hell” tattooed version of Larry Ellison.

 

Ellison’s manner was further subdued this year; sitting with new AWS CEO Matt Garman, he was downright cordial.

 

Ellison told Garman that one of his biggest customers, Jamie Dimon, CEO of JPMorgan Chase, asked when the Oracle database was going to run on AWS each time they met. With the AWS deal, Ellison can scratch that item off the to-do list.

 

The bottom line: Oracle’s customers want multicloud support, and Oracle had better help by making these kinds of deals. Multicloud means options, but options still seem to center on three big cloud providers. Oracle’s data prowess alone will not solve this. Will its efforts to move customers to its own cloud be due for reduced attention?

 

Now that this new rendition of Oracle cloud strategy is accomplished, maybe it is time to rename the yearly Oracle conference Oracle AI World. Unsurprisingly, AI was a very major push, both in Oracle’s quarterly report and at its showcase conference this year. – J. Vaughan PG

~~~~~~~~~~~~~~~~~~~~~

Shown above: Crowd awaiting Ellison keynote.



Copyright © 2025 · Jack Vaughan
