Progressive Gauge

Media and Research


AI

VP Vance runs AI scrimmage – takes on EU bureaucrats

March 5, 2025 By Jack Vaughan

JD Vance last month dressed down the Euros at the Grand Palais AI Summit. Here, an old tech hand reckons with memories of AI policy efforts. Sees surprising devolution.

By Jack Vaughan

Booted footsteps in the hall at night. Coming closer as in an old radio drama — but real. The steps still resound in corners of Europe, where some memories of oppression are hard-wired.

The bootsteps might be KGB, Gestapo or Stasi. These were secret police, compiling dossiers and worse.  The US has had its secret agencies tracking its citizens. [Read more…] about VP Vance runs AI scrimmage – takes on EU bureaucrats

Brief exploration: Generative AI circa 2025

December 10, 2024 By Jack Vaughan

Complexities in deploying Gen AI and LLMs dim the light on some initial hype. These are the days of engineering, coordination, and integration.

By Jack Vaughan

The early procession of ‘2025 Outlooks’ seems to start with a lot of looks back. It’s mostly about Generative AI, which has proved to be a stock market mover, stock art maker, and stock item in years in review.

The song remains the same but the tenor may change. ChatGPT is two years old, and its market-shaking meteoric rise now feels a bit less meteoric, if only because changing the world requires effort.

[Read more…] about Brief exploration: Generative AI circa 2025

AI, Neural Researchers Gain 2024 Nobel for Physics

October 8, 2024 By Jack Vaughan

Updated – The Royal Swedish Academy of Sciences today announced that neural network science pioneers John Hopfield and Geoffrey Hinton will be awarded the Nobel Prize in Physics for 2024. The two researchers are cited for “foundational discoveries and inventions that enable machine learning with artificial neural networks.”

The Nobel award is a capstone of sorts for two premier researchers in neural networks. This computer technology has gained global attention in recent years as the underpinning engine for large-scale advances in computer vision, language processing, prediction and human-machine interaction.

John Hopfield is best known for efforts to re-create interconnect models of the human brain. At the California Institute of Technology in the 1980s, he developed foundational concepts for neural network models.

These efforts led to what became known as a Hopfield network architecture. In effect, these are simulations that work as associative memory models that can process, store and retrieve patterns based on partial and imprecise information inputs.
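The associative-memory behavior described above can be sketched in a few lines. What follows is a minimal illustrative sketch, not the laureates' original formulation: a ±1 pattern is stored with the Hebbian outer-product rule, then recovered from a corrupted cue.

```python
# Minimal Hopfield-style associative memory (illustrative sketch only):
# store binary +/-1 patterns, then recall one from a noisy partial cue.
import numpy as np

def train(patterns):
    """Build the weight matrix from +/-1 patterns via the Hebbian rule."""
    n = patterns.shape[1]
    W = sum(np.outer(p, p) for p in patterns).astype(float)
    np.fill_diagonal(W, 0)          # no self-connections
    return W / n

def recall(W, cue, steps=10):
    """Threshold-update the states until the network settles."""
    s = cue.copy()
    for _ in range(steps):
        s = np.where(W @ s >= 0, 1, -1)
    return s

stored = np.array([[1, -1, 1, -1, 1, -1, 1, -1]])
W = train(stored)
noisy = stored[0].copy()
noisy[0] = -noisy[0]                # corrupt one bit of the cue
print(recall(W, noisy))             # settles back to the stored pattern
```

The stored pattern acts as an attractor: the update rule pulls the imprecise input toward the nearest memorized state, which is the "retrieve patterns from partial information" behavior described above.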

Geoffrey Hinton, now a professor of computer science at the University of Toronto, began studies in the 1970s on neural networks mimicking human brain activity. Such reinterpretations of the brain’s interconnectedness found gradually greater use in pattern recognition through many years, leading to extensive use today in small and large arrays of semiconductors.

In the 1980s, Hinton and fellow researchers focused efforts on backpropagation algorithms for training neural networks and, later, so-called deep learning networks, which are at the heart of today’s generative AI models. As implemented on computer chips from Nvidia and others, these models are now anticipated to drive breakthrough innovations in a wide variety of scientific and business fields. Hinton has been vocal in his concerns about the future course of AI, and how it may have detrimental effects. He recently left a consulting position at Google so he could speak more freely about his concerns.
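Backpropagation, at its core, is the chain rule applied layer by layer to get the gradient of a loss with respect to each weight. A minimal illustrative sketch on a hypothetical two-weight toy network, checked against a finite-difference estimate:

```python
# Backpropagation in miniature: hand-derive the gradient of a squared-
# error loss for a tiny two-layer net, then verify it numerically.
# (Illustrative toy example; not any production training loop.)
import numpy as np

def forward(w1, w2, x, y):
    h = np.tanh(w1 * x)             # hidden activation
    pred = w2 * h                   # output layer
    return 0.5 * (pred - y) ** 2    # squared-error loss

def backprop(w1, w2, x, y):
    h = np.tanh(w1 * x)
    pred = w2 * h
    err = pred - y                  # dL/dpred
    g2 = err * h                    # dL/dw2
    g1 = err * w2 * (1 - h**2) * x  # dL/dw1, via the chain rule
    return g1, g2

w1, w2, x, y = 0.7, -1.3, 2.0, 0.5
g1, g2 = backprop(w1, w2, x, y)
eps = 1e-6                          # finite-difference check on w1
num1 = (forward(w1 + eps, w2, x, y) - forward(w1 - eps, w2, x, y)) / (2 * eps)
print(abs(g1 - num1) < 1e-6)        # analytic gradient matches numeric
```

Training a deep network repeats exactly this bookkeeping across millions of weights, with the gradients driving small downhill updates at each step.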

Control at issue

What’s now known as artificial intelligence in the form of neural networks began to take shape in the 1950s. Research in both neural networks and competing expert-system AI approaches saw a decline in the 1990s as disappointing results accompanied the end of the Soviet Union and the Cold War, which had been a big driver of funds.

This period was known as the “AI Winter.” Hinton’s and Hopfield’s work was key in carrying the neural efforts forward until advances in language and imaging processing inspired new interest in the neural approaches.

Drawbacks still track the neural movement – most notably the frequent difficulty researchers face in understanding the work going on within the “black box” of neural networks.

Both Hinton and Hopfield addressed the issues of AI in society during Nobel Prize-related press conferences. The unknown limits of AI capabilities – particularly a possible future superintelligence – have been widely discussed, and were echoed in reporters’ questions and Nobelists’ replies.

“We have no experience of what it’s like to have things smarter than us,” Hinton told the presser assembled by the Nobel committee. “It’s going to be wonderful in many respects, in areas like healthcare. But we also have to worry about a number of possible bad consequences. Particularly the threat of these things getting out of control.”

Hopfield, by Zoom, echoed those concerns, discussing neural-based AI at an event honoring his accomplishment at Princeton.

Taking a somewhat evolutionary perspective, he said: “I worry about anything which says ‘I’m big. I’m fast. I’m faster than you are. I’m bigger than you are, and I can also outrun you. Now can you peacefully inhabit with me?’ I don’t know.”

But is it physics?

Both Hinton’s and Hopfield’s work spanned fields like biology, physics, computer science, and neuroscience. The multidisciplinary nature of much of this work required Nobel judges to clamber and jimmy and ultimately fit the neural network accomplishments into the Physics category.

“The laureates’ work has already been of the greatest benefit. In physics we use artificial neural networks in a vast range of areas, such as developing new materials with specific properties,” said Ellen Moons, chair of the Nobel Committee for Physics, in a statement. – JV

 

Picture shown above: TRW neural network-based pattern recognition, 1990s.

Random Notes: Pining for Blackwell, GPT 5

September 2, 2024 By Jack Vaughan

Happy Labor Day 2024 to Workers of the World!
Nvidia hits bumps in overdrive – That Wall Street meme may be cresting. A flaw in its Blackwell production plan is just that, we are assured. From a newsletter follow-up to a Jensen Huang earnings interview, as described by Bloomberg’s Ed Ludlow and Ian King:

Nvidia had to make a change to the design’s lithography mask. This is the template used to burn the lined patterns that make up circuits onto the materials deposited on a disk of silicon. Those circuits are what gives the chip the ability to crunch data.

At the least it is a reminder of the elemental fact that the course of semiconductor manufacturing does not always run smooth. As David Lee reminds us on Bloomberg: Hardware is hard. Elemental facts are the first casualties in bull markets and technology hype cycles.

Even if the Gods of Uncertainty are kind, the educated consumer will allow that “Blackwell will be capacity constrained,” as quite ably depicted in Beth Kindig’s recent Forbes posting.

~~~~~~~~~~~~~

GPT 5, hurry fast! – This Blackwell boding is marked by a rumored recapitalization of OpenAI, and by concerns about the delivery of GPT-5. Where is GPT-5? asks Platformonomics. In his Aug 30 edition of Platformonomics TGIF, Charles Fitzgerald bullet-points reasons to doubt that GPT-5 can round the bend in time. Possible explanations include:

  • GPT-5 is just late — new scale brings new challenges to surmount
  • It took time to get that much hardware in place
  • Scaling has plateaued
  • The organizational chaos at OpenAI had consequences
  • OpenAI is doing more than just another scaling turn of the crank with GPT-5?

The skeptical examiner wonders if OpenAI’s valuation won’t edge down a bit, even though it is too big to fail and headed by the smartest man in the world. At the least, again, one has to observe the water level as it declines in OpenAI’s moat.

~~~~~~~~~~~~~

And now for something completely different

Deep Sea Learning – The Chicxulub event doomed 75 percent of Earth’s species. Details of the devastation were gathered from long core tubes drilled into the seafloor by the JOIDES Resolution, a research ship now slated for retirement. “It was a punch in the gut,” one scientist said.

Benthic foraminifera from the deep sea off New Zealand.

In Extra Innings

Danny Jansen in Superposition – Plays for both teams in the same game. In June he was at bat for the Blue Jays in Fenway when a storm stopped the game. Later, he was traded. In August the game was resumed, and he was now a catcher for the Red Sox. “Jays beat Red Sox 4-1, and Jansen shows up on both sides of the box score – an MLB first!”


Cadence discusses AI-driven fit-for-purpose chips

May 22, 2024 By Jack Vaughan


The era of hyperscalers designing their own fit-for-purpose chips began with a series of AI chips from Google, Microsoft and AWS. Some also cite Apple’s efforts to forge its own destiny with custom-chip designs for smartphones.

The trend has continued, but it is not clear when or if it will spread to other ranks among Information Technology vendors.

The chips specifically built to run the big players’ finely honed AI algorithms are, for now, the sweet spot for fit-for-purpose.

The surge in interest in in-house chip designs got rolling a few years ago with Google’s Tensor Processing Unit, which is specially tuned to meet Google’s AI architecture. The search giant has followed that with the Argos chip, meant to speed YouTube video transcoding, and Axion, said to drive data center energy efficiencies.

Chip design for Google and its ilk is enabled by deep pockets. The big players have ready mass markets that can justify big expenses in IC design staff and resources as well.

Chief among those resources is Electronic Design Automation tooling from the likes of Cadence Design Systems. This week, Anirudh Devgan, president and CEO of Cadence, discussed the trend at the J.P. Morgan 52nd Global Technology, Media and Communications Conference in Boston.

He said the key reasons companies go the self-designed route are: achieving domain-specific product differentiation, gaining control over the supply chain and production schedule, and realizing cost benefits at scale when the chip will find use at sufficient volume.

Domain-specific differentiation allows companies to create a chip tailored to their unique needs, according to Devgan.

“It’s a domain specific product. It can do something a regular standard product cannot do,” he said, pointing to Tesla’s work on chips for Full Self Driving, and phone makers’ mobile computing devices that run all day on a single battery charge.

Like all companies dependent on components to power new products, the big players want to have assurance they can meet schedules, and an in-house chip design capability can help there, Devgan continued.

“You have some schedule, you want some control over that,” he told the JP Morgan conference attendees.

For the in-house design to work economically, scale of market is crucial. AI’s apparent boundless opportunity works for the hyperscalers here.

In the end, their in-house designed chip may cost less, when they cut the big chip maker’s over-size role out of the cost equation.

Where does this work? As always…”it depends.”

“It depends on each application, how much it costs, but definitely in AI there is volume, and volume is growing,” Devgan said, and he went on to cite mobile phones, laptops and autos as areas where the volume will drive the trend of custom chip creation.

Devgan declined to estimate how much of the chip design task system houses will take on going forward. Cadence wins in either case, by selling tools to semiconductor manufacturers, hyperscaling cloud leaders and system houses.

He said: “We will leave that for the customer and the market to decide. Our job is to support both fully, and we are glad to do that.”

The trend bears watching. Years of technology progress have been based on system houses and their customers working with standard parts. Trends like in-house chip design may have the momentum to drastically rejigger today’s IT vendor Who’s Who, which has already been thoroughly rearranged in the wake of the cloud and the web. -jv


Copyright © 2026 · Jack Vaughan