Progressive Gauge

Media and Research


OpenAI GPT-4o Lands with Mini Thud or: Generative AI balances Hype and Reality in Chatbot Market Quest

May 19, 2024 By Jack Vaughan

It’s still too early to gauge Generative AI’s limits. That is another way of saying a circus atmosphere of hyperbole and demo theatrics is far from played out. The word “plateau” is heard today, and maybe a leveling off is only natural.

But now the uncertain space of ‘what it can’t do yet’ is mined each day. If Generative AI efforts plateau, and it merely changes the chatbot market as we know it, Generative AI will go down as a really big large-language disappointment.

This week’s OpenAI rollout of GPT-4o didn’t help. One can’t blame the OpenAI crew for trying their best to present awe-inspiring on-stage demos as they saddle onto a Danish Modern furniture set that bodes a comforting future. After all, there’s a need to show their labs’ work is world changing or — barring that — fun.

The upstart’s demo was one episode among several in the week’s AI Wars, as OpenAI’s presentation was joined by Google I/O’s product rollouts on another stage in another corner of the Web, as reported by Sean Michael Kerner and others.

For their part, the OpenAI crew walked through pet tricks, such as asking the application to translate “Why do we do linear equations?” into sparkling Italian.

Google’s show was just as breathless.

Yes, for OpenAI, the free app is a step into a new realm. (Although, as George Lawton points out, “free” is always an onion to unpeel.) And, yes, it vastly surpasses a voice-to-text demo of the 1990s. Does it move the bar much further than the Smart Speaker did in the mid-2010s? Let an army of pundits ponder this.

Our take: OpenAI’s announcement of something akin to a free-tier product was a bit short on awe. We’d second Sharon Goldman of Fortune, who marked GPT-4o as “OpenAI’s emotive on steroids voice assistant.”

Of course, more accessible and easier entry for a wider range of people is OpenAI’s ticket to broadening into consumer markets. That’s where the killer app that justifies big valuations may be. Gain the consumer, and the enterprise follows.

That’s where OpenAI will meet the public and duke it out with friends like AWS, Google and Microsoft. There’s Apple too, which is likely now prepping spirited demos that show it has heard the bubbling cries of drowning users of Siri.

The next battle will be different from what has come before for heavily financed OpenAI. This stage in the technology’s evolution brings the OpenAI boffins down from the high ground. They say “hello, whitebread-light demo patter” — just like Google, AWS or Microsoft product managers!

For a company that’s gained outsized attention through big headline deals for crucial infrastructure with the big cloud players, it’s time to move toward apps. If it is to gain ground on a big scale, it will have to reach consumers. We take that as a less-than-nuanced theme in the GPT-4o rollout.

Cousin IoT: Brave New World Update

As if by chance, we sat in on Transforma’s report on IoT markets this very same week. While ably detailing the currents and eddies of IoT in the decades to come, it seemed to convey a message relevant to Gen AI’s future course.

It’s been a long time since IoT first promised a brave new technology future — and such promises were never quite on the scale of Generative AI — but IoT has been grinding away gainfully, nevertheless.

IoT industry players have faced the same kind of existential challenge that GenAI is about to encounter. That is: The need to find a killer consumer app that it can power.

Transforma’s recent survey reports that there were 16.1 billion active IoT devices at the end of 2023. Annual device sales will grow from 4.1 billion in 2023 to 8.7 billion in 2033 (a CAGR of 8%).
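As a sanity check on Transforma's growth math (assuming the 8.7 billion sales figure refers to 2033, ten years out), the implied compound annual growth rate can be worked out directly:

```python
# Transforma forecast: annual IoT device sales of 4.1 billion in 2023,
# growing to 8.7 billion a decade later.
start, end, years = 4.1, 8.7, 10

# CAGR = (end / start) ** (1 / years) - 1
cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.1%}")  # -> 7.8%, consistent with the reported ~8%
```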

Yet, the world — even the industrial market within the bigger world — seems little changed. IoT’s top use cases, now and looking forward, fall short of the energetic dynamism represented in early visions of IoT that looked more like Star Trek or The Jetsons.

Transforma, looking toward the top three IoT use cases in 2033, cited 1) electronic labels, 2) building lights, and 3) headphones. You can come knocking because the van is not rocking, at least in terms of excitement. Still, these use cases represent real businesses.

Now, the assertion here — that Generative AI will be viewed in the future much as IoT is viewed today — is tentative. The analogy is likely inexact … but may be useful for forecasts. Finally, my purpose here is not to put down these young technologists’ efforts, but just to suggest that OpenAI and underlying Generative AI are in for a tough fight. — Jack Vaughan

Who took my soapbox? A note on media and AI

January 7, 2024 By Jack Vaughan

As 2023 came to its end, a New York Times suit affirmed a general impression that Generative AI and ChatGPT would find some friction on the way to a well-hyped, lead-pipe cinch and especially glorious future.

For those who have used this software, a recent improvement on existing machine learning interaction, it is not surprising. Microsoft’s/OpenAI’s ChatGPT and its main competitor, Google Bard, are breakthroughs. They provide a different level of access to the world’s knowledge.

Instead of pointing the searcher to brief fair-use citations of Web stories à la Google Search, ChatGPT and Bard provide somewhat thoughtful summaries of issues — ones that might serve a junior or middle-level manager quite well when it’s time for yearly performance evaluations.

The new paradigm for Web activity threatens beleaguered publishers. They are not on a roll. The Fourth Estate is now painted as an unwanted gatekeeper of opinion. Publishers that saw an advertising market pulverized by Google Search results now see an AI wunderkind about to drain publishing’s last pennies.

An anticipated slew of AI suits is now spearheaded by the Times, which filed a lawsuit against OpenAI and Microsoft for copyright infringement. Some go tsk, tsk. Wall Street oddsmakers that enjoyed an AI stock bump in 2023 were quickest to dismiss the Times’ chances versus ChatGPT. OpenAI has said it is in discussions with some publishers, and will work to achieve a beneficial arrangement.

Among the financial community, concern spreads that Generative AI’s magical abilities could be dampened. That is summed up by Danny Cevallos, MSNBC legal analyst, who worries about the impossible obligation to mechanize copyright royalties for AI citations across the globe.

The concern comes despite the multidecade success of Silicon Valley’s Altruistic Surveillance movement. It can find you wherever you are, web users know. Still, Cevallos highlights the difficulty of, for example, finding and paying a copyright owner in a log cabin somewhere in Alaska.

“That would mean the end of future AI,” he said on CNBC’s Power Lunch. “It could be argued that the Times has to lose for progress to survive.”

We can anticipate that glory-bound Generative AI will find some rocks in its pathway in 2024 — but most will be in the form of stubborn, familiar IT implementation challenges. In the meantime, people who make a living in media will have to work to promote their interests, as other commercial interests chip away under the cover of AI progress. – Jack Vaughan

— 30 —

Is AWS a diminishing AI laggard – or is it right about on time?

December 12, 2023 By Jack Vaughan

Harvard Stadium

AWS is lagging and racing to catch up in Generative AI and Large Language Models (LLMs). Or so an industry meme holds. When a smattering of new COVID isolations ends and the dust settles in the weeks after Amazon’s re:Invent 2023 conference in Las Vegas, that notion may be due for a revision.

Like all its competitors, AWS is working to put Generative AI technology in place – that means latching it on to other application offerings and adapting new tools and schemes for developers.

Among the challenges now facing teams creating Generative AI applications is the handling of vector embeddings. These processes are an important step in preparing data for consumption by the Large Language Models (LLMs) that betoken a new era of chatbots. Perhaps as importantly, vector embeddings are also useful in slightly less futuristic applications, such as search, recommendation engines and personalization engines.

When Wall Street wags ask whether AWS is a diminishing AI laggard or peaking at just the right time, they probably don’t devote too much thought to the types of vectors machine learning engines are now churning. But building such “infrastructure” is important on the path to working AI.

AWS put vector techniques front and center in AI and data announcements at re:Invent 2023. A centerpiece is Amazon Titan Multimodal Embeddings, just out. The software converts images and short text into numerical representations that generative learning models can use. These are used to unpack the semantic meanings of data, and to uncover important relations between data points.

Putting new-gen AI chatbots aside for the moment, it’s worth mentioning that recommendation and personalization tasks are likely beneficiaries of vector and AI progress. Once the province of Magnificent 7 Class vendors, these application types have become part of more and more organizations’ systems portfolios.

As you may imagine, they add considerable complexity to a developer’s typical day. Here, AWS has set a course to simplify such work for customers.

Before some words on that, a few words about these kinds of embeddings: Vector embeddings are numerical representations created by LLMs from words, phrases or blocks of text. The vectors are more useful for new styles of machine learning, which seek to find meaning in data points.
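To make that concrete, here is a minimal sketch of how such vectors get compared. The toy four-dimensional vectors below are hand-made stand-ins for real model output, which runs to hundreds or thousands of dimensions:

```python
import math

def cosine_similarity(a, b):
    """Directional closeness of two embedding vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

# Hypothetical toy embeddings; in practice an LLM produces these from text.
embeddings = {
    "dog":     [0.9, 0.1, 0.0, 0.2],
    "puppy":   [0.8, 0.2, 0.1, 0.3],
    "invoice": [0.0, 0.9, 0.8, 0.1],
}

# Semantically related words land close together in vector space.
print(cosine_similarity(embeddings["dog"], embeddings["puppy"]))    # high (~0.98)
print(cosine_similarity(embeddings["dog"], embeddings["invoice"]))  # low  (~0.10)
```

That search for "meaning in data points" is, at bottom, arithmetic over these vectors.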

This is useful, but development managers need to find skilled-up programmers and architects to make this leap forward. That is some of the feedback AWS says it’s getting from customers. Enter Swami Sivasubramanian.

Sivasubramanian is vice president of data and AI at AWS. At re:Invent he told attendees: “Our customers use vectors for GenAI applications. They told us they want to use them in their existing databases so that they can eliminate the learning curve in terms of picking up a new programming paradigm, new tools, APIs and SDKs. Importantly, when your vectors and business data are stored in the same place, your applications will run faster and there is no data sync or data movement to worry about.”

Do you want to bring in a vector database to handle this work – adding to your relational databases, document databases, graph databases, and so on? AWS, which has used re:Invent after re:Invent to spotlight just such new database offerings, is shifting here to promote “run your vectors in your existing database” rather than bringing in another newfangled database.

So, central to AWS’s take is a push to provide vector data handling within existing Amazon databases, rather than standalone vector databases, although Amazon supports third-party vector database integration as well.
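The "same place" argument can be sketched in miniature. Here a toy product table keeps a serialized embedding beside each business row, so a similarity lookup is just another scan of the same table. SQLite, three-dimensional vectors, and the schema are all stand-ins of my own; a production engine like Aurora PostgreSQL with vector support would use a proper vector index rather than brute force:

```python
import json
import math
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (id INTEGER PRIMARY KEY, name TEXT, embedding TEXT)")

# Each embedding lives beside its business record: no second store, no sync job.
rows = [
    (1, "running shoes", [0.9, 0.1, 0.0]),
    (2, "trail boots",   [0.8, 0.3, 0.1]),
    (3, "coffee maker",  [0.0, 0.2, 0.9]),
]
for pid, name, vec in rows:
    conn.execute("INSERT INTO products VALUES (?, ?, ?)", (pid, name, json.dumps(vec)))

def nearest(query_vec, k=1):
    """Brute-force cosine search over the products table."""
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))
    scored = [(cos(query_vec, json.loads(emb)), name)
              for name, emb in conn.execute("SELECT name, embedding FROM products")]
    return [name for _, name in sorted(scored, reverse=True)[:k]]

print(nearest([0.9, 0.15, 0.0]))  # -> ['running shoes']
```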

Among the many Amazon initiatives Sivasubramanian discussed at re:Invent 2023 was vector support for DocumentDB, DynamoDB, Aurora PostgreSQL, Amazon RDS for PostgreSQL, MemoryDB for Redis, Neptune Analytics, Amazon OpenSearch Serverless, and Amazon Bedrock.

The moment sets up a classic soup-to-nuts vendor vs. best-of-breed vendor paradigm. Among the best-of-breed database upstarts are Milvus, Pinecone, Zilliz and others.

Meanwhile, vector support has sounded a drumroll for database makers of every ilk of late. Here is a small sampling. In September, IBM said it planned to integrate vector database capability into watsonx.data for use in retrieval augmented generation (RAG) use cases. Also in September, Oracle disclosed plans to add semantic search capabilities using AI vectors to Oracle Database 23c. On the heels of re:Invent, NoSQL stalwart MongoDB announced GA for MongoDB Atlas Vector Search. And, prior to re:Invent, Microsoft went public with vector database add-ons for Azure container apps.
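Retrieval augmented generation, named in IBM's plans above, is the pattern behind much of this vector drumroll: fetch the stored passages closest to a question, then hand them to the model as context. A minimal sketch of the shape of it, with naive word overlap standing in for the vector similarity search and the model call itself left out; the sample passages are illustrative, not a dataset:

```python
def retrieve(question, passages, k=2):
    """Rank passages by word overlap with the question.
    A real RAG pipeline would use embedding similarity here instead."""
    q_words = set(question.lower().split())
    return sorted(passages,
                  key=lambda p: len(q_words & set(p.lower().split())),
                  reverse=True)[:k]

def build_prompt(question, passages):
    """Stuff the retrieved passages into the prompt sent to the LLM."""
    context = "\n".join(f"- {p}" for p in retrieve(question, passages))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

docs = [
    "Titan Multimodal Embeddings converts images and short text to vectors.",
    "MongoDB Atlas Vector Search reached general availability after re:Invent.",
    "Oracle plans AI vector search in Database 23c.",
]
prompt = build_prompt("When did Atlas Vector Search become generally available?", docs)
print(prompt)
```

The point of keeping vectors near the business data, per AWS's pitch, is that this retrieval step runs where the records already live.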

Is AWS a diminishing AI laggard – or is it right about on time? No surprise here. The answer is somewhere in between the two extremes, just as it is somewhere between the poles on the soup-to-nuts-to-best-of-breed continuum. It will be interesting to see how the vector database market evolves. – Jack Vaughan

Effective A Teams and the paradigm of computing

November 27, 2023 By Jack Vaughan

 

Frame from Superman: The Magnetic Telescope, Fleischer Studios, 1942

The paradigm of computing abides. There’s Input and Compute and Output and, with all certainty, Memory is crucial too. This is basic, but it has been enough to maintain attention and spur curiosity over a career. Overlaying this is the world and how this computer paradigm succeeds and/or fades in the raucous ecosystem of humankind.

This is writ a week after a heaping helping of raucous humankindness – after Generative AI had its Wild Knives-Out Weekend. This saw, to summarize, the firing and rehiring of CEO Sam Altman, also known as the Most Important Person for the Future of the World.

Too, this is writ by one who came to maturity as the powerful trains met: Better Living through Electricity encountered Do Not Bend, Fold, Spindle or Mutilate. Back in the day. The rise of automation and computerization raised concerns about dehumanization, yes. It was a concern of think tanks – as well as writers and readers, and film directors and movie audiences — in the 1950s and 1960s.

But there was tentative optimism too. One ironic twist: seers of the day worried about the future of an American Culture that would suddenly have too much leisure time. Anyone who has worked late to create a spreadsheet, toggle through the steps to reboot a printer, or fill out an online form must find some irony in that. Or anyone who has noticed the cookies that follow them around and guess at their needs as they use the WWW.

So be it with some seers.

Of course, the basic blocks of computation get programmed. One result is the neural network, which in recent years has emerged steadily “from the lab.” Schools of programming and venture capital rise up around the simple compute blocks.

Funny but the neural network – now known as AI — has spawned new takes on old schools of thought. These are helpfully layered atop the technology with some commercial intent.

And, they vie in the market of ideas today. Under the leaky umbrellas of Effective Altruism and Effective Accelerationism, an odd take on the neural net has taken hold. It’s held that the neurals will achieve general intelligence that will push machines past humans. The Altruists, with their concern that Sam Altman was moving too quickly toward this precipice, lost a round last week to the Accelerators in the Knives-Out Shoot-Out.

This in turn follows an effectively disruptive blow-out of blockchain and Web 3.0 technology at the hands of EAff and EAcc, mostly due to the missteps of Sam Bankman-Fried, formerly Most Important Person for the Future of the World.

We need a good quick read on this topic, and blogger and software engineer Molly White has published just such a piece; it’s the impetus for this brief essay. “Effective Obfuscation” is not an on-the-one-hand/on-the-other-hand type of essay, yet it is quite meritorious in my opinion. It’s a good tonic for the blues that hit you as you think of Mosaic co-inventor Marc Andreessen’s recent manifesto on Silicon Valley greatness. And a level-headed appreciation of just what happened last weekend.

Short-hand White synopsis: The “effective altruism” and “effective accelerationism” ideologies that have been cropping up in AI debates are just a thin veneer over the typical blend of Silicon Valley techno-utopianism, inflated egos, and greed. Let’s try something else, she writes.

It’s my opinion that Fear of AI is overdone today mainly in the interest of the Hype Machine. A name for this was well-conjured by Molly White as “Effective Obfuscation” [My 2002 take on Obfuscation.] Concern over The Continuing Culture of Bend and Mutilate is real and needs to be addressed. But the neural network deserves better. – J.V.

Noting another worthy assessment here: The AI Doomers have lost the Battle – Benedict Evans, FT.com

Use cases ultimately pave Generative AI’s path: Face it!

October 22, 2023 By Jack Vaughan

Andrew Ng’s online Stanford University machine learning classes serve as a gateway to understanding for many of today’s data scientists, and a discussion he led this summer at Stanford’s Graduate School of Business was extraordinary. It provides a clear view of the possible futures he offers tomorrow’s AI practitioners. They must, like most of us, wonder where all this is going.

Said Ng: “It feels like a bunch of us have been talking about AI for 15 years or something. But if you look at where the value of AI is today, a lot of it is still very concentrated in the consumer software internet. Once you get outside tech or consumer software internet, there’s some AI adoption – but it all feels very early.”

Andrew Ng stands out among the ranks of machine learning scientists, notable for research, entrepreneurship, and teaching. He helped form and lead the Google Brain Team, has helped redefine the world of machine vision, and did a stint building the neural net and machine learning efforts at Baidu.

THIS IS PART 2 OF 2. FOR PART 1, GO TO: Old Big Data Today – Or the clarion of shiny new thingness 

In 2014, he and his team at Google Brain published an influential paper on convolutional neural networks capable of supervised learning. Such supervised learning paved the way for today’s Generative AI.

“About 10, 15 years ago, my friends and I figured out a recipe for how to hire, say, 100 engineers to write one piece of software to serve more relevant ads, and apply that one piece of software to a billion users, and generate massive financial value,” he said, “But once you go outside consumer software internet, hardly anyone has 100 million or a billion users that you can … apply one piece of software to.”

A multibillion-dollar blockbuster project that a Google or an Amazon could muster and accomplish is one thing; all else is another. That was a major context in the days of Big Data (2014-2019). It bears noting: the similarity of the ‘Large’ in Large Language Model and the ‘Big’ in Big Data.

Ng’s groundbreaking Google work was followed by leading roles at AI venture funds and startups including AI Fund, Landing AI and DeepLearning.AI. The work there now entails a search for cost-effective use cases for the latest AI breakthroughs.

There are interesting projects to pursue but, he suggests, they don’t usually yield a return commensurate with the needed developer effort. From use case to use case, there’s work to do, and caveats to consider.

Ng has worked with consumer packaged food makers to better systematize cheese patterns on pizza. As big as that app may be, it is not a “recipe for hiring a hundred or dozens of engineers.” The project value may be, for example, $5 million. He cited another perhaps typical AI brainstorm: to get wheat to grow straighter. Again, the return on the investment was not so favorable.

Then there is the cautionary moment. From Ng’s point of view, the work on use cases that can benefit from the tools of generative AI will also be marked by short-term fads along the way. Such a fad was Prisma Labs’ Lensa AI photo app, which turned selfies into professional-looking digital art. That petered out like the ’50s hula hoop. You can cite more such examples. With generative AI, we’ve seen more than a bit of that already.

He does suggest the time and coding power needed to create the early-move AI apps is shrinking — that Generative AI’s potential to streamline programming is crucial there. – Jack Vaughan

Worth noting: As one of the fathers of supervised learning, Ng naturally avows that this earlier discovery still has legs – that the bounty of supervised learning is still being mined for commercial effect. That may well have been missed in Wall Street’s mark-up of AI futures, and should not be ignored as Gen AI hype begins to dampen slightly.

There’s a lot more to learn, in whatever modes we choose. I recommend Andrew Ng’s lecture on AI Opportunities. [On YouTube.]

Orthogonal Sideshow: Investor and philosopher Nassim Taleb had a recent comment of interest on the LLM prompting process and entropy. Clicker beware: You are entering the realm of X.


Copyright © 2026 · Jack Vaughan