Progressive Gauge


Media and Research


Information Examiner October 2025 – Connectors tackle AI with MCP

October 2, 2025 By Jack Vaughan

The rise of Agentic AI and Large Language Models (LLMs) is transforming classic data integration, with the Model Context Protocol (MCP) emerging as a key piece of the new tooling. This protocol is changing how users interact with model data, and traditional data companies now race to meet the new requirements.

ISVs and enterprises can’t move fast enough on the new AI front, and traditional business databases will often be central. BI reporting will be an early target. Software architecture leads may turn increasingly to data connectivity providers like CData Software if they are going to move fast without breaking things.
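For a sense of what this new tooling looks like in practice, here is a minimal sketch of an MCP server that exposes a read-only query tool an LLM host can discover and call, written against the protocol’s Python SDK. The server name, database file, and tool are hypothetical illustrations, not any particular vendor’s implementation.

# A minimal MCP server sketch: expose a read-only SQL query tool
# that an LLM host (e.g., a desktop assistant) can discover and call.
# Server name, database file, and tool are hypothetical.
import sqlite3

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("legacy-db-connector")  # hypothetical server name

@mcp.tool()
def run_query(sql: str) -> list[tuple]:
    """Run a read-only SELECT against the demo database."""
    if not sql.lstrip().lower().startswith("select"):
        raise ValueError("Only SELECT statements are allowed")
    with sqlite3.connect("demo.db") as conn:  # hypothetical database file
        return conn.execute(sql).fetchall()

if __name__ == "__main__":
    mcp.run()  # serves over stdio by default, for an LLM host to attach

A host that speaks MCP can list this server’s tools and invoke run_query on the model’s behalf, which is the pattern the data connectivity vendors are now productizing.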

Data to the fore as Snowflake and Nvidia report

Data architects will have an increasing role in AI discussions, and may offer their usual cautionary perspective. When people ask how soon GenAI will take hold, they might keep this in mind.

So far, the major gauges for GenAI progress have been the hyperscalers’ capex accounting and Nvidia’s quarterly results. The latter came in last week, and the general picture appeared lackluster to muddy. Only Nvidia’s tremendous three-year upsurge can explain the use of the ‘lackluster’ term; its sales jumped 56% in the quarter. [See footnote to this story.]

What’s happening with the Generative AI juggernaut may hinge largely on the results of firms like Snowflake, maker of the noted cloud-based data platform.

A quick view on that came via Mandeep Singh, Global Head for Technology Research at Bloomberg. He spoke on Bloomberg Podcasts just as both Nvidia and Snowflake numbers came in. The context was today’s general search for a defined timeline for GenAI deployment and monetization.

What’s going on now, Singh says, is that companies are plowing through their own enterprise data in order to create specialized AI agents that bring the power of LLMs to desktops.

“That’s where someone like Snowflake or MongoDB benefits, because your training is getting a lot more specialized depending on the use case,” he said. “It’s not just about the compute cluster anymore, but also about the data. I think that points to why Snowflake is doing well.”

In fact, the company beat Wall Street expectations for both earnings per share (EPS) and revenue. Quarterly revenue grew 32% year over year, to $1.14 billion. The company cited adoption of AI tools as a growth driver.

The importance of cloud and on-premises databases and data warehouses in this regard was emphasized this week as OpenAI bought Statsig and its “data warehouse native” platform for, of course, something in the neighborhood of $1 billion. -JV

~~~~~~~~~~~~~~~~~~~~

Footnote – Nvidia leader Jensen Huang and his company are caught right now between China leader Xi Jinping and US leader Donald Trump, both pursuing new forms of State Capitalism or Capitalist Statism. The Era is still searching for a name.

Speaking of Agentic AI data architecture

[Photo: Harvard Stadium]

Connecting to and building out Generative AI processing in day-to-day business is not the easy task it may seem, given the colorful renderings of the future for which ChatGPT is known.

Sometimes the AI train seems stuck at the HyperScaler depot. Recent news shows that AI leader Nvidia’s tech, while firmly established in the top-rank HyperScaler houses, is still edging outward, not yet hurtling forward in the wider space.

Another view of data technology’s potential role in changing that scenario is seen in a recent Dataversity.net blog post by Mohan Varthakavi, Vice President of Software Development, AI and Edge, at Couchbase. “Reimagining Data Architecture for Agentic AI” addresses the data challenge from an overarching perspective, and it is an interesting take.

Varthakavi said traditional data architectures are insufficient for the demands of Agentic AI, the medium generally expected to bring Generative AI into wide use.

Here he refers to Generative AI’s need to work on unstructured data – that is, something outside the realm of SQL and often taking the form of human speech. Diligently prepping such data for GenAI, and finding you may not have enough to feed the AI beast, is a common problem today. And, as Agentic AI is currently envisioned, it means feeding numerous agents collaborating around a host of tasks.
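To make the prep problem concrete, here is a minimal sketch of one routine step: splitting unstructured text, such as a call transcript, into overlapping chunks sized for downstream embedding. The sizes and sample text are hypothetical illustrations.

# A minimal sketch of chunking unstructured text for GenAI prep.
# Sizes are hypothetical; real pipelines tune them per use case.
def chunk(text: str, size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into overlapping character windows."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

transcript = "Customer: my invoice looks wrong. Agent: let me check that... " * 40
pieces = chunk(transcript)
print(len(pieces), "chunks ready for embedding")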

Still, as Varthakavi writes: “Advanced unstructured data processing is quickly emerging as the defining differentiator between AI leaders and followers.” It’s not that different from the Big Data era. One of Big Data’s hallmark ‘V’ traits – Variety – still looms larger than ever.

If Agentic AI is going to work, a fundamental departure from traditional data frameworks is called for, he said. Going forward, the back-and-forth music of conversations, the probabilistic nature of machine learning, and the deep complexity of human language will call the shots, and shift the approaches of the system architects.

The shift to agentic AI … marks a migration from traditional rule-based logic toward architectures centered around language understanding. This isn’t as simple as swapping one model for another; it requires a rethinking of how systems are composed. Large language models can provide powerful general capabilities, but they are not equipped to answer every question pertaining to a company’s specific business domain. 

Also noted by Varthakavi is the growing need for architects to be aware of new types of data dependency risks. These appear when agents that use LLMs receive corrupted or insufficiently updated data. Latency pressures also confront the designers, as workloads and user-machine dialogs become more complex.

Just as important in the Brave New World are conversation storage systems that provide persistent memory, vector search engines that quickly contextualize customer queries in the data format that machine learning likes best, and dynamic memory management to support real-time context updates.
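For the curious, a minimal sketch of that vector-search idea: stored items and an incoming query are embedded as unit vectors, and relevance is a dot product. The toy hash-seeded “embedding” below stands in for a real embedding model, so the match it finds is mechanical rather than semantic; everything here is illustrative.

# A minimal vector-search sketch: rank stored items by cosine
# similarity to a query. The toy embed() stands in for a real
# embedding model and carries no semantic meaning.
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Toy deterministic embedding; a real system would call a model."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.normal(size=dim)
    return v / np.linalg.norm(v)  # unit length, so dot product = cosine

docs = ["reset my password", "update billing address", "cancel my order"]
index = np.stack([embed(d) for d in docs])  # the persistent "memory"

query = embed("how do I change where my invoice goes")
scores = index @ query  # cosine similarities
print(docs[int(np.argmax(scores))])  # best-matching stored item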

Few people arrive at work in the morning with a complete mastery of these kinds of requirements. But these are the elements implied in the futuristic advertisements for AI.

Enterprise AI attention turns to data architecture

[Image: Pascaline]

Getting Generative AI apps up and running is the first problem that enterprise teams encounter. Then comes maintaining those systems. As Sanjeev Mohan points out in a recent blog post on data quality and AI projects, this makes for a moving target, as data flowing into the AI engines must be continually monitored. Constant change is inherent in live operational data.
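A minimal sketch of what that continual monitoring can look like: check each incoming batch for staleness and missing values before it reaches the model. The field names and thresholds are hypothetical illustrations, not Mohan’s prescription.

# A minimal data-quality gate for an AI pipeline: flag stale records
# and excessive nulls in each batch. Field names and thresholds are
# hypothetical; timestamps are assumed timezone-aware.
from datetime import datetime, timedelta, timezone

MAX_AGE = timedelta(hours=24)  # hypothetical freshness threshold
MAX_NULL_RATE = 0.05           # hypothetical null-rate threshold

def check_batch(records: list[dict]) -> list[str]:
    """Return a list of data-quality violations for one batch."""
    if not records:
        return ["empty batch"]
    issues = []
    now = datetime.now(timezone.utc)
    stale = [r for r in records if now - r["updated_at"] > MAX_AGE]
    if stale:
        issues.append(f"{len(stale)} records older than {MAX_AGE}")
    null_rate = sum(r.get("text") is None for r in records) / len(records)
    if null_rate > MAX_NULL_RATE:
        issues.append(f"null rate {null_rate:.1%} exceeds {MAX_NULL_RATE:.0%}")
    return issues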

Once again, it is metadata – dull as the day is long – that is a key factor in managing data for these new AI platforms. But there is more. In this thoughtful blog, industry analyst Mohan enumerates a host of quality, reliability, access control, lineage, and observability processes that organizations must master in order to turn AI dreams into reality.

Why is good governance more important now? Mohan writes:

Because more people are accessing more data for more business use cases than ever before. Without trusted and reliable data for AI and analytics, the outcomes will be poor, time and money will be wasted, and business leadership will lose enthusiasm for and confidence in AI and analytics. 

He goes on to emphasize structured approaches to governing data for AI.

Mohan is spot on here. He and other go-to data experts have been on the case since about Day One of the great Gen AI rising. They warn that these data pipelines must be continuously adapted to deal with the volatility of data in the real world. It was a lesson revisited during the Big Data days – but as Gen AI took shape, it’s been one the P.T. Barnums of high tech have found convenient to gloss over.

It took two-plus years for such “data caution” to gain much of a hearing amid the AI swarm. Like summer football camps, data governance for AI requires a continual focus on blocking and tackling. It’s important, although it comes without the excitement of the long passes that will be remembered from the Sunday games. [Ed. Note: The Correspondent promises no more sports analogies for 12 months.]


Oracle gets the memo

October 24, 2022 By Jack Vaughan

[Photo: Ellison updates Oracle Cloud World crowd.]

THE SKEPTICAL ENQUIRER – Pride bordering on arrogance is a typical trait of successful enterprise tech companies – it’s definitely been the case with Oracle Corp. Many times it has met the challenge of a changing computing paradigm with brio, taking its lead from irascible founder Larry Ellison, but the sum and substance of its claims can be argued.

I may be a tea leaf reader here, but I see a subtle shift in Ellison’s irascibility in the minutes of last week’s Oracle Cloud World 2022 event.

Ellison’s lively quotes have long been a reporter’s friend. But it’s hard to relay his cloud computing pitches without adding footnotes. The consensus big cloud players are AWS, Google and Microsoft – while enterprise incumbents like Oracle, Teradata, and IBM are cited as tertiary at best.

Overall, Oracle’s financial growth has been modest in the last decade, while its cloud claims have been bold. It is by no means alone in the creative work it has done, on its accounting ledger, to define cloud as a promise of future growth.

But, in terms of pitching the everyday database automation advances of Oracle Autonomous Database [it’s enterprise blocking and tackling, right?] as the one true self-driving database for the cloud of the future – well, Oracle has no equal there.

Its latest tactic is to tout its acquisition of health IT giant Cerner as a great opportunity to rapidly modernize Cerner’s systems, move them to the Oracle Cloud, and count them as such on the financial report.

As I said, the company still posts gradual overall gains, so I may be talking style rather than substance when I say it missed some big new tech boats, although it revisits these product-line gaps from time to time. I’m talking about databases, development tools and persistence stratagems that all called for less spend and new thinking:


  • Oracle missed opportunities to expand its MySQL brand at a time when competitive PostgreSQL database versions were becoming go-to players in the AWS and Microsoft stables for open-source distributed SQL. In September, the company came out with a bigger/better MySQL known as the MySQL HeatWave Database Service.
  • In fact, Oracle played down, ignored or glossed over inexpensive clustered open-source databases – both SQL and NoSQL – despite having the original Berkeley DB NoSQL stalwart in its hand.
  • JSON-oriented document database development, led by MongoDB, is a particular thorn. Oracle, like others, found ways to bring JSON to its RDBMS, but Mongo is still on the upstroke. Oracle last week addressed what it called a mismatch between JSON and the Oracle SQL database in the form of “JSON Relational Duality” – a new developer view supported in Oracle Database 23c.
  • Also, Oracle was slow to support cloud-friendly S3-style object storage. Not surprising, in that the great goal is to place data into an Oracle database. But maybe it doesn’t have to be Oracle Autonomous Database. Last week, Oracle described MySQL HeatWave Lakehouse, which may be a step in a broader direction of support.

This latter area, object storage, seems to be getting a bit more of Oracle’s attention, as Snowflake Inc. rises on the back of its 3-in-1 distributed cloud data warehouse, data lake and data lakehouse. At Oracle’s yearly confab, Snowflake seemed to have garnered leader Ellison’s grudging admiration.

No small feat, that!

It’s a little hard to directly discern Ellison-speak-on-the-page. But his presentation moves salesfolks – and customers too. This recalls Curt Monash once calling him “one of the great public speakers and showmen of the era.”

What did Ellison tell the conference crowd? Well, besides a lot about how Oracle technology helped deliver Covid-19 vaccines, he spoke about the cloud market. People are moving from single-cloud to multi-cloud architectures, Ellison told the crowd at Oracle Cloud World 2022.

“The fact that this is happening is changing the behavior of technology providers…So the first thing that happened as people use multiple clouds is that service providers started deploying new services in multiple clouds, maybe most famously it’s Snowflake,” he said. Then, in a nod to the new highflier, he added, “And Oracle got the memo.”

“You know, we noticed, and we actually thought that was a good idea,” he said.

Of course, the evolution to multi-cloud is a chance for Oracle to take another at-bat in the cloud game. The bad news, as in the past, is that so much of the company’s effort goes toward moving everything into the Oracle database. That is why any shift to emphasize MySQL HeatWave would be notable.

Jack Vaughan is a writer and high-tech industry observer.

~~~~~~~~~~~~~~~~~~

My Back Pages
Data Lake Houses Join Data Lake in Big Data Analytics Race – IoT World Today – 2021
Purveyors of data lake houses and cloud data warehouses are many, and opinions on qualifications vary.

Oracle Cloud IaaS security gets superhero status from Ellison – TechTarget – 2018 [mp3 – Podcast]
It’s incremental!

Updated Oracle Gen 2 Cloud aims to challenge cloud leaders – TechTarget – 2018
The central product for Oracle is the database, and all the related tools that support the continued dependence of customers on the database.

On top of the RDB mountain – ADTmag 2001
Never lose the database sale: It’s tattooed on their foreheads.


Copyright © 2025 · Jack Vaughan
