Progressive Gauge


Media and Research



AI, Neural Researchers Gain 2024 Nobel for Physics

October 8, 2024 By Jack Vaughan

Updated – The Royal Swedish Academy of Sciences today announced that neural network science pioneers John Hopfield and Geoffrey Hinton will be awarded the Nobel Prize in Physics for 2024. The two researchers are cited for “foundational discoveries and inventions that enable machine learning with artificial neural networks.”

The Nobel award is a capstone of sorts for two premier researchers in neural networks. This computer technology has gained global attention in recent years as the underpinning engine for large-scale advances in computer vision, language processing, prediction and human-machine interaction.

John Hopfield is best known for efforts to re-create interconnect models of the human brain. At the California Institute of Technology in the 1980s, he developed foundational concepts for neural network models.

These efforts led to what became known as a Hopfield network architecture. In effect, these are simulations that work as associative memory models that can process, store and retrieve patterns based on partial and imprecise information inputs.
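As a rough illustration of that associative-memory idea, a Hopfield-style network can be sketched in a few lines of Python. The stored pattern and its size here are invented for demonstration, not drawn from Hopfield's papers.

```python
import numpy as np

def train_hopfield(patterns):
    """Build a Hopfield weight matrix via the Hebbian outer-product rule."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)  # no self-connections
    return W / patterns.shape[0]

def recall(W, state, steps=10):
    """Repeatedly update units until the network settles on a stored pattern."""
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1  # break ties toward +1
    return state

# Store one pattern of +/-1 values, then recover it from a corrupted copy.
stored = np.array([[1, -1, 1, -1, 1, -1, 1, -1]])
W = train_hopfield(stored)
noisy = stored[0].astype(float)
noisy[:2] *= -1  # flip two bits to simulate imprecise input
print(recall(W, noisy))  # settles back to the stored pattern
```

Even with a quarter of the bits corrupted, the update rule pulls the state back to the nearest stored memory, which is the associative-recall behavior described above.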

Geoffrey Hinton, now a professor of computer science at the University of Toronto, began studies in the 1970s on neural networks mimicking human brain activity. Such reinterpretations of the brain’s interconnectedness gradually found greater use in pattern recognition over many years, leading to extensive use today on small and large arrays of semiconductors.

In the 1980s, Hinton and fellow researchers focused their efforts on backpropagation algorithms for training neural networks and, later, on the so-called deep learning networks that are at the heart of today’s generative AI models. As implemented on computer chips from Nvidia and others, these models are now anticipated to drive breakthrough innovations in a wide variety of scientific and business fields. Hinton has been vocal in his concerns about the future course of AI and how it may have detrimental effects. He recently left a consulting position at Google so he could speak more freely about those concerns.
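The backpropagation idea (compute the error gradient via the chain rule, then nudge the weights downhill) can be shown for a single sigmoid neuron. The inputs, weights, target and learning rate below are arbitrary illustrative values, not anything from the original research.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.5, -0.3])   # input
w = np.array([0.1, 0.2])    # initial weights
target, lr = 1.0, 0.5       # desired output, learning rate

y = sigmoid(w @ x)                      # forward pass
grad = (y - target) * y * (1 - y) * x   # backward pass: dLoss/dw by the chain rule
w = w - lr * grad                       # gradient-descent update

# Squared error before and after the single training step
loss_before = 0.5 * (y - target) ** 2
loss_after = 0.5 * (sigmoid(w @ x) - target) ** 2
print(loss_after < loss_before)  # the update reduces the error
```

One such step per example, layered through many weights, is the training loop that backpropagation makes tractable for deep networks.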

Control at issue

What’s now known as artificial intelligence in the form of neural networks began to take shape in the 1950s. Research in both neural networks and competing expert-system AI approaches declined in the 1990s, as disappointing results coincided with the end of the Soviet Union and the Cold War, which had been a big driver of funding.

This period was known as the “AI Winter.” Hinton’s and Hopfield’s work was key in carrying the neural efforts forward until advances in language and imaging processing inspired new interest in the neural approaches.

Drawbacks still track the neural movement – most notably the frequent difficulty researchers face in understanding the work going on within the “black box” of neural networks.

Both Hinton and Hopfield addressed the issues of AI in society during Nobel Prize-related press conferences. The unknown limits of AI capabilities – particularly a possible future superintelligence – have been widely discussed, and were echoed in reporters’ questions and Nobelists’ replies.

“We have no experience of what it’s like to have things smarter than us,” Hinton told the presser assembled by the Nobel committee. “It’s going to be wonderful in many respects, in areas like healthcare. But we also have to worry about a number of possible bad consequences. Particularly the threat of these things getting out of control.”

Hopfield, by Zoom, echoed those concerns, discussing neural-based AI at an event honoring his accomplishment at Princeton.

Taking a somewhat evolutionary perspective, he said: “I worry about anything which says ‘I’m big. I’m fast. I’m faster than you are. I’m bigger than you are, and I can also outrun you. Now can you peacefully inhabit with me?’ I don’t know.”

But is it physics?


Both Hinton’s and Hopfield’s work spanned fields like biology, physics, computer science, and neuroscience. The multidisciplinary nature of much of this work required Nobel judges to clamber and jimmy and ultimately fit the neural network accomplishments into the Physics category.

“The laureates’ work has already been of the greatest benefit. In physics we use artificial neural networks in a vast range of areas, such as developing new materials with specific properties,” said Ellen Moons, chair of the Nobel Committee for Physics, in a statement. – JV

 

Image: TRW neural network-based pattern recognition, 1990s

Random Notes: Pining for Blackwell, GPT 5

September 2, 2024 By Jack Vaughan

Happy Labor Day 2024 to Workers of the World!
Nvidia hits bumps in overdrive – That Wall Street meme may be about to crest. A flaw in Nvidia’s Blackwell production plan is just that – a flaw – we are assured. From a newsletter follow-up to a Jensen Huang earnings-report interview, as described by Bloomberg’s Ed Ludlow and Ian King:

Nvidia had to make a change to the design’s lithography mask. This is the template used to burn the lined patterns that make up circuits onto the materials deposited on a disk of silicon. Those circuits are what gives the chip the ability to crunch data.

At the least it is a reminder of the elemental fact that the course of semiconductor manufacturing does not always run smooth. As David Lee reminds on Bloomberg: Hardware is hard. Elemental facts are the first casualties in bull markets and technology hype cycles.

Even if the Gods of Uncertainty are kind, the educated consumer will allow that “Blackwell will be capacity constrained,” as quite ably depicted in Beth Kindig’s recent Forbes posting.

~~~~~~~~~~~~~

GPT 5, hurry fast! – This Blackwell boding comes amid a rumored re-capitalization of OpenAI, and with it concerns about the delivery of GPT-5. Where is GPT-5? asks Platformonomics. In his Aug. 30 edition of Platformonomics TGIF, Charles Fitzgerald bullet-points reasons to doubt that GPT-5 can round the bend in time. Possible explanations include:

  • GPT-5 is just late — new scale brings new challenges to surmount

  • It took time to get that much hardware in place

  • Scaling has plateaued

  • The organizational chaos at OpenAI had consequences

  • OpenAI is doing more than just another scaling turn of the crank with GPT-5

The skeptical examiner wonders if OpenAI’s valuation won’t edge down a bit, even though it is too big to fail and headed by the smartest man in the world. At the least, again, one has to observe the water level as it declines in OpenAI’s moat.

~~~~~~~~~~~~~

And now for something completely different

Deep Sea Learning – The Chicxulub event doomed 75 percent of Earth’s species. Details of the devastation were gathered from long core tubes drilled into the seafloor by the JOIDES Resolution, a research ship now set to be retired. “It was a punch in the gut,” one scientist said.

Image: Benthic foraminifera from the deep sea off New Zealand.

In Extra Innings

Danny Jansen in Superposition – He played for both teams in the same game. In June he was at bat for the Blue Jays in Fenway when a storm stopped the game. Later, he was traded. In August the game resumed, and he was a catcher for the Red Sox. “Jays beat Red Sox 4-1, and Jansen shows up on both sides of the box score – an MLB first!”

 

 

Cadence discusses AI-driven fit-for-purpose chips

May 22, 2024 By Jack Vaughan


The era of hyperscalers designing their own fit-for-purpose chips began with a series of AI chips from Google, Microsoft and AWS. Some also cite Apple’s efforts to forge its own destiny with custom-chip designs for smartphones.

The trend has continued, but it is not clear when or if it will spread to other ranks among Information Technology vendors.

The chips specifically built to run the big players’ finely honed AI algorithms are, for now, the sweet spot for fit-for-purpose.

The surge in interest in in-house chip designs got rolling a few years ago with Google’s Tensor Processing Unit, which is specially tuned to meet Google’s AI architecture. The search giant has followed that with the Argos chip, meant to speed YouTube video transcoding, and Axion, said to drive data center energy efficiencies.

Chip design for Google and its ilk is enabled by deep pockets of money. The big players have ready mass markets that can justify big expenses in IC design staff and resources as well.

Chief among those resources is Electronic Design Automation tooling from the likes of Cadence Design Systems. This week, Anirudh Devgan, president and CEO of Cadence, discussed the trend at the J.P. Morgan 52nd Global Technology, Media and Communications Conference in Boston.

He said the key reasons companies go the self-designed route are: achieving domain-specific product differentiation, gaining control over the supply chain and production schedule, and realizing cost benefits at scale when the chip shows it will find use at sufficient volume.

Domain-specific differentiation allows companies to create a chip tailored to their unique needs, according to Devgan.

“It’s a domain specific product. It can do something a regular standard product cannot do,” he said, pointing to Tesla’s work on chips for Full Self Driving, and phone makers’ mobile computing devices that run all day on a single battery charge.

Like all companies dependent on components to power new products, the big players want to have assurance they can meet schedules, and an in-house chip design capability can help there, Devgan continued.

“You have some schedule, you want some control over that,” he told the JP Morgan conference attendees.

For the in-house design to work economically, scale of market is crucial. AI’s apparent boundless opportunity works for the hyperscalers here.

In the end, their in-house designed chip may cost less, when they cut the big chip maker’s over-size role out of the cost equation.

Where does this work? As always…”it depends.”

“It depends on each application, how much it costs, but definitely in AI there is volume, and volume is growing,” Devgan said, and he went on to cite mobile phones, laptops and autos as areas where the volume will drive the trend of custom chip creation.

Devgan declined to estimate to what extent system houses will take on the task of chip design going forward. Cadence wins in either case, by selling tools to semiconductor manufacturers, hyperscaling cloud leaders and system houses.

He said: “We will leave that for the customer and the market to decide. Our job is to support both fully, and we are glad to do that.”

The trend bears watching. Years of technology progress have been based on system houses and their customers working with standard parts. Trends like in-house chip design may have the momentum to drastically rejigger today’s IT vendor Who’s Who, which has already been thoroughly rearranged in the wake of the cloud and the web. -jv

OpenAI GPT-4o Lands with Mini Thud or: Generative AI balances Hype and Reality in Chatbot Market Quest

May 19, 2024 By Jack Vaughan

It’s still too early to gauge Generative AI’s limits. That is another way of saying a circus atmosphere of hyperbole and demo theatrics is far from played out. The word “plateau” is heard today, and maybe a leveling off is only natural.

But now the uncertain space of ‘what it can’t do yet’ is mined each day. If Generative AI efforts plateau, and it merely changes the chatbot market as we know it, Generative AI will go down as a really big large-language disappointment.

This week’s OpenAI rollout of GPT-4o didn’t help. One can’t blame the OpenAI crew for trying their best to present awe-inspiring on-stage demos as they settle onto a Danish Modern furniture set that bodes a comforting future. After all, there’s a need to show their labs’ work is world changing or — barring that — fun.

The upstart’s demo was one episode among several in the week’s AI Wars, as OpenAI’s show was joined by Google I/O’s product rollouts on another stage in another corner of the Web, reported by Sean Michael Kerner and others.

For their part, the OpenAI crew walked through pet tricks, such as asking the application to translate “Why do we do linear equations?” into sparkling Italian.

Google’s show was just as breathless.

Yes, for OpenAI, the free app is a step into a new realm. (Although, as George Lawton points out, “free” is always an onion to unpeel.) And, yes, it vastly surpasses a voice-to-text demo of the 1990s. Does it move the bar much further than the Smart Speaker did in the mid-2010s? Let an army of pundits ponder this.

Our take: OpenAI’s announcement of something akin to a free-tier product was a bit short on awe. We’d second Sharon Goldman of Fortune who marked GPT-4o as “OpenAI’s emotive on steroids voice assistant.”

Of course, more accessible and easier entry for a wider range of people is OpenAI’s ticket to broadening into consumer markets. That’s where the killer app that justifies big valuations may be. Gain the consumer, and the enterprise follows.

That’s where OpenAI will meet the public and duke it out with friends like AWS, Google and Microsoft. There’s Apple too, which is likely now prepping spirited demos that show it has heard the bubbling cries of drowning users of Siri.

The next battle will be different than what has come before for heavily financed OpenAI. This stage in the technology’s evolution brings the OpenAI boffins down from the high ground. They say “hello, white-bread-lite demo patter” — just like Google, AWS or Microsoft product managers!

For a company that’s gained outsized attention in big headline deals for crucial infrastructure for big cloud players, it’s time to move toward apps. If it is to gain ground on a big scale, it will have to reach consumers. We take that as a less than nuanced theme in the GPT-4o roll out.

Cousin IoT: Brave New World Update

As if by chance, we sat in on Transforma’s report on IoT markets this very same week. While ably detailing the currents and eddies of IoT in the decades to come, it seemed to convey a message relevant to Gen AI’s future course.

It’s been a long time since IoT first promised a brave new technology future — and such promises were never quite on the scale of Generative AI — but IoT has been grinding away gainfully, nevertheless.

IoT industry players have faced the same kind of existential challenge that GenAI is about to encounter. That is: The need to find a killer consumer app that it can power.

Transforma’s recent survey reports that there were 16.1 billion active IoT devices at the end of 2023. Annual device sales will grow from 4.1 billion in 2023 to 8.7 billion in 2033 (a CAGR of 8%).
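The cited growth rate can be sanity-checked with a line of arithmetic, assuming the 8.7 billion figure refers to the 2033 horizon the survey looks toward:

```python
# Compound annual growth rate implied by annual device sales rising
# from 4.1 billion (2023) to 8.7 billion (2033), i.e. over 10 years.
start, end, years = 4.1, 8.7, 10
cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.1%}")  # about 7.8%, consistent with the cited 8% CAGR
```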

Yet, the world — even the industrial market within the bigger world — seems little changed. IoT’s top use cases, now and looking forward, fall short in terms of the energetic dynamism represented in early visions of IoT that looked more like Star Trek or the Jetsons.

Transforma, looking toward the top three IoT use cases in 2033, cited 1) electronic labels, 2) building lights and 3) headphones. You can come knocking because the van is not rocking, at least in terms of excitement. Still, these use cases represent real businesses.

Now, the assertion here — that Generative AI will be viewed in the future much as IoT is viewed today — is tentative. The analogy is likely inexact … but may be useful for forecasts. Finally, my purpose here is not to put down these young technologists’ efforts, but just to suggest that OpenAI and underlying Generative AI are in for a tough fight. — Jack Vaughan

Who took my soapbox? A note on media and AI

January 7, 2024 By Jack Vaughan

As 2023 came to its end, a New York Times suit affirmed a general impression that Generative AI and ChatGPT would find some friction on the way to a well-hyped, lead-pipe cinch and especially glorious future.

For those who have used this software, a recent advance on existing machine-learning interaction, none of this is surprising. Microsoft’s/OpenAI’s ChatGPT and its main competitor, Google Bard, are breakthroughs. They provide a different level of access to the world’s knowledge.

Instead of pointing the searcher to brief fair-use citations of Web stories à la Google Search, ChatGPT and Bard provide somewhat thoughtful summaries of issues — ones that might serve a junior or middle-level manager quite well when it’s time for yearly performance evaluations.

The new paradigm for Web activity threatens beleaguered publishers. They are not on a roll. The Fourth Estate is now painted as an unwanted gatekeeper of opinion. Publishers that saw an advertising market pulverized by Google Search results now see an AI wunderkind about to drain publishing’s last pennies.

An anticipated slew of AI suits is now spearheaded by the Times, which filed a lawsuit against OpenAI and Microsoft for copyright infringement. Some go tsk, tsk. Wall Street oddsmakers who enjoyed an AI stock bump in 2023 were quickest to dismiss the Times’ chances versus ChatGPT. OpenAI has said it is in discussions with some publishers and will work to achieve a beneficial arrangement.

Among the financial community, concern is spreading that Generative AI’s magical abilities could be dampened. That is summed up by Danny Cevallos, MSNBC legal analyst, who worries about the impossible obligation to mechanize copyright royalties for AI citations across the globe.

The concern comes despite the multidecade success of Silicon Valley’s Altruistic Surveillance movement. It can find you wherever you are, web users know. Still, Cevallos highlights the difficulty of, for example, finding and paying a copyright owner in a log cabin somewhere in Alaska.

“That would mean the end of future AI,” he said on CNBC’s Power Lunch. “It could be argued that the Times has to lose for progress to survive.”

We can anticipate that glory-bound Generative AI will find some rocks in its pathway in 2024 — but most will be in the form of stubborn, familiar IT implementation challenges. In the meantime, people who make a living in media will have to work to promote their interests, as other commercial interests chip away under the cover of AI progress. – Jack Vaughan

— 30 —


Copyright © 2025 · Jack Vaughan