Progressive Gauge


Media and Research


Data

Random Notes: Polar Vortex Watch – Jellification – They Call the Winter Storm “Fern”

February 27, 2026 By Jack Vaughan

Winter Storm Fern 2026 – NOAA


It’s early to tell whether the recent generation of AI advances will go beyond recommendation engines and fraud detection, the two shining stars of the big data era. The distinction is simple: if I fudge a recommender, I lose a possible sale; if I fail at fraud detection, I lose money I already have. [Read more…]

Information Examiner October 2025 – Connectors tackle AI with MCP

October 2, 2025 By Jack Vaughan

The rise of Agentic AI and Large Language Models (LLMs) is transforming classic data integration, with the Model Context Protocol (MCP) emerging as a key piece of the new tooling. This protocol is changing how users interact with model data, and traditional data companies now race to meet the new requirements.

ISVs and enterprises can’t move fast enough on the new AI front, and traditional business databases will often be central. BI reporting will be an early target. Software architecture leads may turn increasingly to data connectivity providers like CData Software if they are going to move fast without breaking things. [Read more…]

Data to the fore as Snowflake and Nvidia report

Data architects will have an increasing role in AI discussions, and may offer their usual cautionary perspective. When people ask how soon GenAI will take hold, they might keep this in mind.

So far, the major gauges for GenAI progress have been the hyperscalers’ capex accounting and Nvidia’s quarterly results. The latter came in last week, and the general picture appeared lackluster to muddy. Only Nvidia’s tremendous three-year upsurge can explain the use of the ‘lackluster’ term; its sales jumped 56% in the quarter. [See footnote to this story.]

What’s happening with the Generative AI juggernaut may hinge largely on the results of firms like Snowflake, maker of the noted cloud-based data platform.

A quick view on that came via Mandeep Singh, Global Head for Technology Research at Bloomberg. He spoke on Bloomberg Podcasts just as both Nvidia and Snowflake numbers came in. The context was today’s general search for a defined timeline for GenAI deployment and monetization.

What’s going on now, Singh says, is that companies are plowing through their own enterprise data in order to create specialized AI agents that bring the power of LLMs to desktops.

“That’s where someone like Snowflake or MongoDB benefits, because your training is getting a lot more specialized depending on the use case,” he said. “It’s not just about the compute cluster anymore, but also about the data. I think that points to why Snowflake is doing well.”

In fact, the company beat Wall Street expectations for both earnings per share (EPS) and revenue. Quarterly revenue grew 32% year over year, to $1.14 billion. The company cited adoption of AI tools as a growth driver.

The importance of cloud and on-premises databases and data warehouses in this regard was emphasized this week as OpenAI bought Statsig and its “data warehouse native” platform for, of course, something in the neighborhood of $1 billion. -JV

~~~~~~~~~~~~~~~~~~~~

Footnote – Nvidia leader Jensen Huang and his company are caught right now between China leader Xi Jinping and US leader Donald Trump, both pursuing new forms of State Capitalism or Capitalist Statism. The Era is still searching for a name.

Speaking of Agentic AI data architecture

Harvard Stadium

Connecting to and building out Generative AI processing in day-to-day business is not the easy task it may seem, given the colorful renderings of the future that ChatGPT is known for.

Sometimes the AI train seems stuck at the HyperScaler depot. Recent news shows that AI leader Nvidia’s tech, while firmly established in the top-rank HyperScaler houses, is still edging out, and not yet hurtling forward, in the wider space.

Another view of data technology’s potential role in changing that scenario is seen in a recent Diversity.net blog post by Mohan Varthakavi, Vice President of Software Development, AI and Edge, Couchbase. “Reimagining Data Architecture for Agentic AI” addresses the data challenge from an overarching perspective, and it is an interesting take.

Varthakavi said traditional data architectures are insufficient for the demands of Agentic AI, that being the medium generally expected to bring Generative AI into wide use.

Here he refers to Generative AI’s need to work on unstructured data – that is, something outside the realm of SQL and often taking the form of human speech. Diligently prepping such data for GenAI, and finding you may not have enough to feed the AI beast, is a common problem today. And, as Agentic AI is currently envisioned, it means feeding numerous agents collaborating around a host of tasks.
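Much of that diligent prep comes down to splitting raw, unstructured text into model-sized pieces before it is embedded or fed to an LLM. Here is a minimal sketch of that step; the function name, chunk size, and overlap values are illustrative assumptions, not anything drawn from the products or posts discussed here:

```python
def chunk_words(text: str, size: int = 200, overlap: int = 40) -> list[str]:
    """Split text into overlapping word-window chunks for embedding.

    Overlap keeps context that straddles a chunk boundary from being lost.
    """
    words = text.split()
    if not words:
        return []
    step = max(size - overlap, 1)  # how far the window advances each time
    chunks = []
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + size]))
        if start + size >= len(words):
            break  # the rest is already covered; avoid a redundant tail chunk
    return chunks
```

Real pipelines chunk on sentence or token boundaries rather than raw word counts, but the shape of the problem is the same: slice, overlap, embed, repeat for every agent that needs the data.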

Still, as Varthakavi writes: “Advanced unstructured data processing is quickly emerging as the defining differentiator between AI leaders and followers.” It’s not that different from how it was in the Big Data era. One of Big Data’s hallmark “V” traits – Variety – still looms larger than ever.

If Agentic AI is going to work, a fundamental departure from traditional data frameworks is called for, he said. Going forward, the back-and-forth music of conversations, the probabilistic nature of machine learning, and the deep complexity of human language will call the shots, and shift the approaches used by those who apply system architecture.

The shift to agentic AI … marks a migration from traditional rule-based logic toward architectures centered around language understanding. This isn’t as simple as swapping one model for another; it requires a rethinking of how systems are composed. Large language models can provide powerful general capabilities, but they are not equipped to answer every question pertaining to a company’s specific business domain. 

Also noted by Varthakavi is the growing need for architects to be aware of new types of data dependency risks, which arise when agents that use LLMs receive corrupted or insufficiently updated data. Latency pressures also confront the designers, as workloads and user-machine dialogs become more complex.

Just as important in the Brave New World are conversation storage systems that provide persistent memory, vector search engines that quickly contextualize customer queries in the data format that machine learning likes best, and dynamic memory management to support real-time context updates.
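The vector-search piece, at its core, is simple to sketch: store embeddings alongside text, then rank stored entries by cosine similarity to a query vector. Everything below is a toy illustration; the class name and the hand-made 3-dimensional vectors stand in for a real engine and real model embeddings:

```python
import math

class TinyVectorStore:
    """Toy in-memory vector store that ranks entries by cosine similarity."""

    def __init__(self):
        self.items = []  # list of (text, vector) pairs

    def add(self, text, vector):
        self.items.append((text, vector))

    def search(self, query_vec, k=3):
        """Return the k stored texts whose vectors best match the query."""
        def cosine(a, b):
            dot = sum(x * y for x, y in zip(a, b))
            na = math.sqrt(sum(x * x for x in a))
            nb = math.sqrt(sum(x * x for x in b))
            return dot / (na * nb) if na and nb else 0.0

        scored = [(cosine(query_vec, v), t) for t, v in self.items]
        scored.sort(reverse=True)  # highest similarity first
        return [t for _, t in scored[:k]]
```

Production systems replace the linear scan with approximate nearest-neighbor indexes, but the contract is the same: a customer query, once embedded, pulls back the most semantically similar stored context.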

Few people arrive at work in the morning with a complete mastery of these kinds of requirements. But these requirements underlie the promises made in the futuristic advertisements for AI.

Enterprise AI attention turns to data architecture

Pascaline

Getting Generative AI apps up and running is the first problem that enterprise teams encounter. Then comes maintaining those systems. As Sanjay Mohan points out in a recent blog post on data quality and AI projects, this makes for a moving target, as data flowing into the AI engines must be continually monitored. Constant changes are inherent in active computer data.

Once again, it is metadata – dull as the day is long – that is a key factor in managing data for these new AI platforms. But there is more. In this thoughtful blog post, industry analyst Mohan enumerates a host of quality, reliability, access control, lineage, and observability processes that organizations must master in order to turn AI dreams into reality.

Why is the need for good governance more important now? Mohan writes:

Because more people are accessing more data for more business use cases than ever before. Without trusted and reliable data for AI and analytics, the outcomes will be poor, time and money will be wasted, and business leadership will lose enthusiasm for and confidence in AI and analytics. 

He goes on to emphasize structured approaches to governing data for AI.
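In practice, those governance processes often start with mundane record-level checks: required fields present, values in range, data fresh enough to trust before it flows into an AI engine. A minimal sketch, with field names and a freshness window invented purely for illustration:

```python
from datetime import datetime, timedelta

REQUIRED = {"id", "amount", "updated_at"}
MAX_AGE = timedelta(days=7)  # illustrative freshness window

def check_record(record: dict, now: datetime) -> list[str]:
    """Return a list of quality problems found in one incoming record."""
    problems = []
    missing = REQUIRED - record.keys()
    if missing:
        problems.append(f"missing fields: {sorted(missing)}")
    amount = record.get("amount")
    if isinstance(amount, (int, float)) and amount < 0:
        problems.append("negative amount")
    updated = record.get("updated_at")
    if isinstance(updated, datetime) and now - updated > MAX_AGE:
        problems.append("stale record")
    return problems
```

The point of running checks like these continuously, rather than once at project kickoff, is exactly the moving-target problem Mohan describes: pipelines that passed yesterday can fail today.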

Mohan is spot on here. He and other go-to data experts have been on the case since about Day One of the great Gen AI rising. They warn that these data pipelines must be continuously adapted to deal with the volatility of data in the real world. It was a lesson revisited during the Big Data days – but as Gen AI took shape, it’s been one the P.T. Barnums of high tech have found convenient to gloss over.

It took two-plus years for such “data caution” to gain much of a hearing amid the AI swarm. Like summer football camps, data governance for AI requires a continual focus on blocking and tackling. It’s important, though it comes without the excitement of the long passes that will be remembered from the Sunday games. [Ed. Note: The Correspondent promises no more sports analogies for 12 months.]




Copyright © 2026 · Jack Vaughan
