Progressive Gauge

Media and Research


March of Analogies and AI neural networks

April 21, 2025 By Jack Vaughan

[Image: Hopfield circuit]

Analogies give us the tools to explore, discover, and understand invention, and to communicate about the invention itself. The draft horses of old still stand fast as such an analogy.

In the late 18th century, James Watt estimated that a strong dray horse could lift 33,000 pounds by one foot in one minute. That provided a comparative measure for the steam engine, and it carries right through to the engines under the hoods of today's F1 speedsters and NASCAR racers.

While a commonly accepted measure of AI performance is still developing for ChatGPT and other Generative AI systems, there is an analogy at work, and it lies in the neural firings of the brain.

Lift the hood on Generative AI and you are looking at the neural network, an equivalent electrical circuit model of the workings of the brain. It is an equation, or algorithm, rendered in software. The software runs, these days, on a GPU (graphics processing unit).

The neural net analogy gained the sanction of the academy with the award of the Nobel Prize for Physics (2024). That went to John J. Hopfield and Geoffrey Hinton "for foundational discoveries and inventions that enable machine learning with artificial neural networks." Both men formulated neural networks in the 1980s, building on nearly a half-century of previous work.

The Nobelists would be the first to admit that the limits of analogies are always evident, although the rapid rise and hype of Generative AI, now rebranded as "Agentic AI," obscures that from the general public.

It's necessary to simplify, say, the processes of an animal neuron in order to make an algorithm out of it. That simplification is necessary, but its required shortcuts bear remembering when analyzing various AI futures.

That point came across strongly as I recently listened to an episode of the New Books in Science podcast with Grace Lindsay. The topic was the fertile but real gap between living, breathing biology and math models of the mind.

Lindsay heads the Lindsay Lab for machine learning research at NYU and authored “Models of the Mind” (2021), a history-packed discourse on the ascent of machine learning.

As Lindsay noted, the analogy that underlies today’s neural networks goes back at least as far as the 1780s.

“Electricity is kind of the way that neurons send signals within the cell itself. And the study of electricity and the early study of physiology and biology are very intertwined. People were discovering principles of electricity, how to control electricity, and at the same time, they were discovering principles of biology and how organisms work,” she said, marking early examples of math used to model and understand the nervous system.

She referred to Galvani’s experiments in animal electricity. Think: dead frogs on marble slabs attached to crude electricity generators.

She said: “In the 1700s, people were understanding how electricity could create movement in animals and to be able to observe electricity actually at play in the nervous system.”

The electric moment of thought is a spike of electrical energy, captured in what she called the leaky integrate-and-fire model.

This, she said, stands as the “analogy where people start to think of a neuron as equivalent to a little electrical circuit, a little Resistor-Capacitor circuit.”

That ability to make that connection is really fruitful, she continued in the podcast, “because now you have this little thing that you can describe mathematically.”

“People had developed mathematics to understand electrical circuits. You can use those same terms and the same kind of parameters to characterize what a neuron is doing.”

"If you want to model the way that an individual neuron sends its own signal, a spike, you're using equivalent circuit models at their node, where you're using the language of electrical engineering, basically, to describe the function of a neuron, how it takes in electrical inputs and produces electrical changes in its membrane as a result."
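The RC-circuit analogy Lindsay describes can be sketched in a few lines of Python. In the leaky integrate-and-fire model, the membrane voltage behaves like a capacitor charging through a resistor, and a spike is recorded whenever the voltage crosses a threshold. The parameter values below are illustrative choices, not figures from the podcast.

```python
# Leaky integrate-and-fire neuron: an RC circuit that "spikes."
# Membrane voltage V follows tau * dV/dt = -(V - V_rest) + R * I(t);
# when V crosses the threshold, we log a spike and reset V.
# All constants here are illustrative, not measured values.

def simulate_lif(current, dt=0.1, tau=10.0, r=1.0,
                 v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Integrate the LIF equation over a list of input currents.

    Returns (voltage_trace, spike_times) using simple Euler steps.
    """
    v = v_rest
    trace, spikes = [], []
    for step, i_in in enumerate(current):
        # Euler update of the leaky-integrator (RC) equation.
        v += (dt / tau) * (-(v - v_rest) + r * i_in)
        if v >= v_thresh:            # threshold crossing = a spike
            spikes.append(step * dt)
            v = v_reset              # membrane resets after firing
        trace.append(v)
    return trace, spikes

# A constant input current strong enough to drive repeated spiking.
trace, spikes = simulate_lif([1.5] * 1000)
print(f"{len(spikes)} spikes in 100 ms of simulated time")
```

With no input current, the voltage simply leaks back toward rest and the neuron stays silent, which is the "leaky" half of the name.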

This is a simplification that arose from the neuron model work of Warren McCulloch and Walter Pitts in the 1940s. McCulloch stripped down a nerve's work processes to the bare bones to isolate the spike of thought. He consciously ignored other complexities in the process to get to that model, a model so simple his young daughter could depict it in crayon to illustrate McCulloch and Pitts' "A Logical Calculus of the Ideas Immanent in Nervous Activity."
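That bare-bones model fits in a couple of lines: a McCulloch-Pitts unit fires (outputs 1) when the weighted sum of its binary inputs reaches a threshold, and stays silent otherwise. The logic-gate examples below are my own illustration of the idea, not notation from the 1943 paper.

```python
def mcculloch_pitts(inputs, weights, threshold):
    """A McCulloch-Pitts unit: fire (1) iff the weighted sum of
    binary inputs reaches the threshold, else stay silent (0)."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Logic gates fall out of the model directly:
AND = lambda a, b: mcculloch_pitts([a, b], [1, 1], threshold=2)
OR  = lambda a, b: mcculloch_pitts([a, b], [1, 1], threshold=1)

print(AND(1, 1), AND(1, 0))  # 1 0
print(OR(0, 1), OR(0, 0))    # 1 0
```

The crayon-friendly simplicity is the point: all the biochemistry of a real neuron is collapsed into a sum and a threshold.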

That work was expanded upon, leading to pivotal models: the Perceptron in the 1950s, backpropagation algorithms and convolutional networks for image processing in the 1980s and 2010s, and DeepMind's deep reinforcement learning, also in the 2010s. All of this led to last year's Nobel.
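Of those models, the Perceptron added something the McCulloch-Pitts unit lacked: a rule for learning its weights from examples. A minimal sketch of Rosenblatt's update rule follows, with an illustrative toy dataset of my own choosing.

```python
def train_perceptron(samples, labels, epochs=10, lr=0.1):
    """Rosenblatt's perceptron rule: nudge weights and bias toward
    each misclassified example. samples are feature tuples, labels 0/1."""
    n = len(samples[0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else 0
            err = y - pred                 # 0 if correct, +/-1 if wrong
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Toy, linearly separable data: learn the OR function from examples.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [0, 1, 1, 1]
w, b = train_perceptron(X, y)
preds = [1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else 0 for x in X]
print(preds)  # [0, 1, 1, 1]
```

For linearly separable data like this, the rule is guaranteed to converge; its famous failure on non-separable cases like XOR is part of why the field later needed backpropagation and multi-layer networks.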

Bold strokes of progress aside, it’s good to be mindful that the neural network is at heart a simplified model.

Yes, it has performed some astounding work in its latest incarnation.

But while analogies are valuable tools for communicating and initially understanding a technology, we must be aware of their limitations as they are stretched to ever more complex applications. We don't expect function-for-function transfer from human to machine, but the gap between the two bears minding, especially in a rapidly evolving field like AI.

Afterword: The New Books in Science podcast spreads so wide a net that, I’d say, “it’s not for everyone.” But if you can find interest in the flight paths of birds, the evolution of immune systems or the night life of glowworms, you may want to check it out. https://newbooksnetwork.com/category/science-technology/science

top o’ page: Hopfield Circuit

Filed Under: Computing

Copyright © 2025 · Jack Vaughan
