Progressive Gauge


Media and Research


Jack Vaughan

Effective A Teams and the paradigm of computing

November 27, 2023 By Jack Vaughan

 

Frame from Superman: The Magnetic Telescope, Fleischer Studios, 1942

The paradigm of computing abides. There’s Input and Compute and Output and, with all certainty, Memory is crucial too. This is basic, but it has been enough to maintain attention and spur curiosity over a career. Overlaying this is the world and how this computer paradigm succeeds and/or fades in the raucous ecosystem of humankind.

This is writ a week after a heaping helping of raucous humankindness – after Generative AI had its Wild Knives-Out Weekend. This saw, to summarize, the firing and rehiring of CEO Sam Altman, also known as the Most Important Person for the Future of the World.

Too, this is writ by one who came to maturity as two powerful trains met: Better Living through Electricity encountered Do Not Bend, Fold, Spindle or Mutilate. Back in the day. The rise of automation and computerization raised concerns about dehumanization, yes. It was a concern of think tanks – as well as writers and readers, and film directors and movie audiences – in the 1950s and 1960s.

But there was tentative optimism too. One ironic twist: seers of the day worried about the future of an American Culture that would suddenly have too much leisure time. Anyone who has worked late to create a spreadsheet, toggle through the steps to reboot a printer, or fill out an online form must find some irony in that. Or anyone who has noticed the cookies that follow them around, guessing at their needs, as they use the WWW.

So be it with some seers.

Of course, the basic blocks of computation get programmed. One result is the neural network, which in recent years has steadily emerged ‘from the lab.’ Schools of programming and venture capital rise up around the simple compute blocks.

Funny but the neural network – now known as AI — has spawned new takes on old schools of thought. These are helpfully layered atop the technology with some commercial intent.

And they vie in the market of ideas today. Under the leaky umbrellas of Effective Altruism and Effective Accelerationism, an odd take on the neural net has taken hold. It holds that the neurals will achieve a general intelligence that will push machines past humans. The Altruists, with their concern that Sam Altman was moving too quickly toward this precipice, lost a round last week to the Accelerators in the Knives-Out Shoot-Out.

This in turn follows an effectively disruptive blow-out of blockchain and Web 3.0 technology at the hands of EAlt and EAcc, mostly due to the missteps of Sam Bankman-Fried, formerly Most Important Person for the Future of the World.

We need a good quick read on this topic, and blogger and software engineer Molly White has published just such a piece; it is the impetus for this brief essay. The essay White provides in “Effective Obfuscation” is not an on-the-one-hand/on-the-other-hand affair, yet it is quite meritorious in my opinion. It’s a good tonic for the blues that hit you as you think of Mosaic co-inventor Marc Andreessen’s recent manifesto on Silicon Valley greatness, and a level-headed appreciation of just what happened last weekend.

Short-hand White synopsis: The “effective altruism” and “effective accelerationism” ideologies that have been cropping up in AI debates are just a thin veneer over the typical blend of Silicon Valley techno-utopianism, inflated egos, and greed. Let’s try something else, she writes.

It’s my opinion that Fear of AI is overdone today mainly in the interest of the Hype Machine. A name for this was well-conjured by Molly White as “Effective Obfuscation” [My 2002 take on Obfuscation.] Concern over The Continuing Culture of Bend and Mutilate is real and needs to be addressed. But the neural network deserves better. – J.V.

Noting another worthy assessment here: The AI Doomers have lost the Battle – Benedict Evans, FT.com

Use cases ultimately pave Generative AI’s path: Face it!

October 22, 2023 By Jack Vaughan

Andrew Ng’s online Stanford University machine learning classes serve as a gateway to understanding for many of today’s data scientists, and a discussion he led this summer at Stanford’s Graduate School of Business is noted here as extraordinary. It offers tomorrow’s AI practitioners a clear view of possible futures. They must, like most of us, wonder where all this is going.

Said Ng: “It feels like a bunch of us have been talking about AI for 15 years or something. But if you look at where the value of AI is today, a lot of it is still very concentrated in the consumer software internet. Once you get outside tech or consumer software internet, there’s some AI adoption – but it all feels very early.”

Andrew Ng stands out among the ranks of machine learning scientists, notable for research, entrepreneurship, and teaching. He helped form and lead the Google Brain Team, helped redefine the world of machine vision, and did a stint building the neural net and machine learning efforts at Baidu.

THIS IS PART 2 OF 2. FOR PART 1, GO TO: Old Big Data Today – Or the clarion of shiny new thingness 

In 2014, he and his team at Google Brain published an influential paper on convolutional neural networks capable of supervised learning. Such supervised learning paved the way for today’s Generative AI.

“About 10, 15 years ago, my friends and I figured out a recipe for how to hire, say, 100 engineers to write one piece of software to serve more relevant ads, and apply that one piece of software to a billion users, and generate massive financial value,” he said, “But once you go outside consumer software internet, hardly anyone has 100 million or a billion users that you can … apply one piece of software to.”

A multibillion-dollar blockbuster project that a Google or an Amazon could muster and accomplish is one thing; all else is another. That was a major context in the days of Big Data (2014-2019). It bears noting the similarity of the ‘Large’ in Large Language Model and the ‘Big’ in Big Data.

Ng’s groundbreaking Google work was followed by leading roles at AI venture funds and startups including AI Fund, Landing AI and DeepLearning.AI. The work there now entails a search for cost-effective use cases for the latest AI breakthroughs.

There are interesting projects to pursue but, he suggests, they don’t usually avail a type of return commensurate with the needed developer effort. From use case to use case, there’s work to do, and caveats to consider.

Ng has worked with consumer packaged food makers to better systematize cheese patterns on pizza. As big as that app may be, it is not a “recipe for hiring a hundred or dozens of engineers.” The project value may be, for example, $5 million. He cited another perhaps typical AI brainstorm: to get wheat to grow straighter. Again, the return on the investment was not so favorable.

Then there is the cautionary moment. From Ng’s point of view, the work on use cases that can benefit from the tools of generative AI will also be marked by short-term fads along the way. Such a fad was Prisma Labs’ Lensa AI photo app, which turned selfies into professional-looking digital art. That petered out like the ’50s hula hoop. You could cite more such examples. With generative AI, we’ve seen more than a bit of that already.

He does suggest the time and coding power needed to create the early-move AI apps is shrinking, and that Generative AI’s potential to streamline programming is crucial there. – Jack Vaughan

Worth noting: As one of the fathers of supervised learning, Ng naturally avows that this earlier discovery still has legs – that the bounty of supervised learning is still being mined for commercial effect. That may well have been missed in Wall Street’s mark-up of AI futures, and should not be ignored as Gen AI hype begins to dampen slightly.

There’s a lot more to learn, in whatever modes we choose. I recommend Andrew Ng’s lecture on AI Opportunities. [On YouTube.]

Orthogonal Sideshow: Investor and philosopher Nassim Taleb recently offered a comment of interest on the LLM prompting process and entropy. Clicker beware: You are entering the realm of X.

Old Big Data Today – Or the clarion of shiny new thingness

October 21, 2023 By Jack Vaughan

LLMs and Generative AI are the next steps forward for machine learning, the not-so-little engine that saved AI from the horror of technology irrelevance. Here, I note some similarities between today’s AI and yesterday’s Big Data – followed in a subsequent post by some observations from Andrew Ng, a machine learning pioneer, looking ahead to cost-effective use cases for the new tooling.

Similarity breeds comparison

A jaundiced view might hold that Generative AI has taken over where Big Data left off. All the rage a few years ago, Big Data fled from the scene. Is it anywhere now on a Gartner representation of the life of hype?

Big Data leaders, after they redefined recommendation engines and social media personalization, were often asked what Big Data was supposed to do next. The answer turned out to be “machine learning.” Flash forward to the present, and this has morphed into Large Language Models (LLMs) and prompt engineering.

There are plenty of differences between then and now. Let’s dwell on some similarities:

*As in the Big Data/Hadoop days of yesteryear, getting great gobs of custom data into the LLM is time consuming, labor intensive and error prone.

*The shiny new thingness may lure developers to chase the technology (which makes the resume sparkle) while short-changing the use case; that is, pursuing indefensible applications with short and less-than-stellar commercial life spans.

*And, as with Big Data – and just about every innovation that has ever come about – what works as a prototype may fail to scale in production. As well, what worked for a small army of Google sysadmins may not work for you, or prove saleable either.

*The first tooling is raw, and development can become a trudge of semi-blind trial and error.

*There is a megaton bomb of hyperbole that explodes, followed by hemming-hawing, nitpicking and numb lethargy.  See ‘Faded Love and Hadoop’.

These problems are familiar to innovators, but LLMs bring new classes of problems too. What some developers will find persistently annoying is a flakiness in interaction with the LLM: you can prompt it repeatedly with the same input and get different output. I asked Google Bard about this and the answer was: “Overall, whether or not prompt engineering is fun is up to you.”
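One source of that same-input/different-output flakiness is sampling: at decoding time, most LLM services draw each token from a probability distribution rather than always taking the single most likely one, and a nonzero “temperature” setting spreads that distribution out. A minimal sketch of the idea, with made-up logits and a toy sampler standing in for a real model:

```python
import math
import random

def sample_token(logits, temperature=1.0, rng=random):
    """Pick one token index from raw scores (logits).

    temperature <= 0 means greedy decoding: the same input always
    yields the same token. temperature > 0 samples from a softmax,
    so repeated calls with identical input can differ.
    """
    if temperature <= 0:
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    weights = [math.exp(s - m) for s in scaled]
    total = sum(weights)
    probs = [w / total for w in weights]
    return rng.choices(range(len(logits)), weights=probs, k=1)[0]

logits = [2.0, 1.5, 0.5]          # hypothetical scores for 3 tokens
print(sample_token(logits, temperature=0))   # greedy: always index 0
print({sample_token(logits, temperature=1.0) for _ in range(200)})
```

The second print typically shows more than one token index, which is the everyday “flakiness” developers see; hosted APIs generally expose a temperature (and related) knob to trade determinism against variety.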

Of course, a great effort is underway, and development teams will soon benefit from both the successes achieved and failures endured. Among the questions that should direct their efforts: Does the technology solve a widely-found problem of significant weight? In our next post, let’s find out what Andrew Ng says! – Jack Vaughan

THIS IS PART 1 OF 2. FOR PART 2, GO TO  Use cases ultimately pave Generative AI’s path: Face it!

 

Nvidia and Cloudflare Deal; NYT on Human Feedback Loop

September 29, 2023 By Jack Vaughan

Cloudflare Powers Hyper-local AI inference with Nvidia – The triumph that is Nvidia these days can’t be overstated – although the wolves on Wall St. have sometimes tried. Still, Nvidia is a hardware company. Ok, let’s say Nvidia is still arguably a hardware company. Its chops are considerable, but essentially it’s all about the GPU.

Nvidia is ready to take on the very top ranks in computing. But, to do so, it needs more feet on the street. So, it is on the trail of such, as seen in a steady stream of alliances, partners and showcase customers.

That’s a backdrop to this week’s announcement that Cloudflare Inc. will deploy Nvidia GPUs along with Nvidia Ethernet switch ASICs at the Internet’s edge. The purpose is to enable AI inferencing, which is the runtime task that follows AI model training.

“AI inference on a network is going to be the sweet spot for many businesses,” Matthew Prince, CEO and co-founder, Cloudflare, said in a company release concerning the Cloudflare/Nvidia deal. Cloudflare said NVIDIA GPUs will be available for inference tasks in over 100 cities (or hyper-localities) by the end of 2023, and “nearly everywhere Cloudflare’s network extends by the end of 2024.”

Cloudflare has found expansive use in Web application acceleration, and could help Nvidia in its efforts to capitalize on GPU technology’s use in the amber fields of next-wave generative AI applications.

With such alliances, all Nvidia has to do is keep punching out those GPUs – and development tools for model building.

***  ***  ***  ***

NYT on Human Feedback – The dirty little secret in the rise of machine learning was labeling. Labeling can be human-labor intensive, time-consuming and expensive. It harkens back to the days when ‘computer’ was a human job title, and filing index cards was a way of life.

Amazon’s Mechanical Turk – a crowdsourcing marketplace amusedly named after the 18th Century chess-playing machine “automaton” that was actually powered by a chess master hidden inside the apparatus — is still a very common way to label machine learning data.

Labeling doesn’t go away as Generative AI happens. As the world delves into what Generative AI is, it turns out that human labelers are a pretty significant part.

That was borne out by some of the research I did in the summer for “LLMs, generative AI loom large for MLOps practices” for SDxCentral.com. Sources for the story also discussed how “reinforcement learning through human feedback” was needed for the Large Language Models underpinning Generative AI.

The cost of reinforcement learning, which makes sure things are working, is more than a small part of the sticker shock C-suite execs are experiencing with Generative AI.

Like everything, improvement may come to the process. Sources suggest retrieval augmented generation (RAG) is generally less labor intensive than data labeling. RAG retrieves info from an external database and provides it to the model “as-is.”
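In outline, the RAG pattern is straightforward: fetch relevant material, then hand it to the model alongside the question. Here is a minimal sketch, with a toy keyword-overlap scorer standing in for a real vector-search retriever, made-up documents, and no actual model call:

```python
def score(query, doc):
    """Toy relevance: count query words that also appear in the document."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query, documents):
    """Return the single best-matching document for the query."""
    return max(documents, key=lambda d: score(query, d))

def build_prompt(query, documents):
    """Prepend retrieved context to the question, passed to the model as-is."""
    context = retrieve(query, documents)
    return (f"Context: {context}\n"
            f"Question: {query}\n"
            f"Answer using only the context above.")

docs = [
    "Cloudflare will deploy Nvidia GPUs for inference in over 100 cities.",
    "Amazon Mechanical Turk is a common way to label training data.",
]
print(build_prompt("Where will Nvidia GPUs run inference?", docs))
```

In production the retriever is usually embedding-based similarity search over a vector store rather than keyword overlap, but the prompt-assembly step, grounding the model in retrieved facts, has the same shape.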

RAG is meant to address one of ChatGPT’s and Generative AI’s most disturbing traits: It can make a false claim with amazingly smug confidence. Humans have to keep a check on it.

But the build-out of RAG requires some super smarts. As we have come to see, many of today’s headline AI acquisitions are as much about gaining personnel with advanced know-how as they are about gaining some software code or tool. This type of smarts comes at a high price, just as the world’s most powerful GPUs do.

This train of thought is impelled by a recent piece by Cade Metz for the New York Times. “The Secret Ingredient of ChatGPT Is Human Advice” considers reinforcement learning from human feedback, which is said to drive much of the development of artificial intelligence across the industry. “More than any other advance, it has transformed chatbots from a curiosity into mainstream technology,” Metz writes of human feedback aligned with Generative AI.

Metz’s capable piece discusses the role that expert humans are playing in making Generative AI work, with some implication that we should get non-experts involved, too. In response to the story, one Twitter wag suggested that Expert Systems are making a comeback. If so, I guess we will have to make do with more expert humans until “The Real AI” comes along! – Jack Vaughan

 

Oppenheimer at Harvard

August 28, 2023 By Jack Vaughan

Looking back today at “American Prometheus” – the book upon which this summer’s widely noted Oppenheimer film is based. I recall fashioning a mini tour/book review covering Oppenheimer’s Cambridge when I originally read Kai Bird’s and Martin J. Sherwin’s 2005 book.

Yes, it’s still summer, so, I am sharing it here! With some editing. Editing is ever with us.

Oppenheimer’s tragedy truly is an American tragedy, and it is too little known. Worth noting: The creation of the Atom bomb is the ultimate tale of science and technology for bad and good. It redefined life for the generations that followed.

At the start, in his college days, the leader of the team of scientists that created the first A-bomb was a delicate mesh of scientist and poet. In the end, he was a heart-broken figure, done in by his lethal invention, and his soft-spot for arty friends who, steeped in the ethos of their times, promoted liberal and communist causes.

Oppenheimer did not have his roots in Boston, but he did pass through here, like so many others. It was in the air in Boston/Cambridge as much as anywhere: the mechanical, fluid and electronic sounds of a military-industrial complex based on the scientific breakthroughs and technical innovations of the mid-20th Century.

The son of a wealthy West Side New York clothier, Oppenheimer refused the fellowship Harvard offered him when he entered the university in 1922. He began his Harvard days as a chemistry student.

The chemist had been the epitome of the scientist – but that was changing just as he was entering college. He was not looking for a lucrative career. Oppenheimer worried his future would be that of an industrial chemist, testing toothpastes. But physics was uncovering wonder after wonder. He read prodigiously. His tenure at Harvard preceded construction of the Mallinckrodt Lab, so his chemistry studies were likely in the basement of University Hall in Harvard Yard.

He looked to take as many advanced physics classes as he possibly could. He didn’t have the basic courses. But he read five science books a week. And he was picking physics texts unknown to the typical student. American Prometheus authors report that one physics professor, reviewing Oppy’s petition [replete with a list of texts he’d read] to take graduate classes, remarked: “Obviously, if he says he’s read these books, he’s a liar, but he should get a Ph.D. for knowing their titles.” He was brash and precocious.

The famous figures of science and math [in which Oppenheimer thought himself deficient] passed through Harvard’s gates. Oppenheimer attended lectures by Whitehead and Bohr. Still, he nurtured a love for literature. He was a great polymath. He read The Waste Land, and wrote poetry of sadness and loneliness. He edited a school literary journal known as The Gad-Fly [under the auspices of the Liberal Club at 66 Winthrop St]. After Harvard, he discovered Proust.

He kept much to himself. Had but a few friends. “His diet often consisted of little more than chocolate, beer and artichokes. Lunch was often just a ‘black and tan’ – a piece of toast slathered with peanut butter and topped with chocolate syrup.” When he lived in Cambridge, like so many other great scientific thinkers in so many places, he took to long walks. He lived for a while at 60 Mount Auburn Street.

A mentor at Harvard could well have been future Nobelist Percy Bridgman. Oppenheimer admired a strain in this physicist noted for his studies of materials at high temperatures and pressures and his openness to imaginatively approaching the philosophy of science.

“Oppy’s” outsider status at Harvard could be laid to his sensitivity, but just as significant if not more so was his Jewish heritage. He came to the school at a time when its head was considering a quota system to reduce the growing number of Jewish entrants. Surely, the straight road to Harvard success was not fully open to him, even if that is what he’d desired. He was offered a graduate teaching position but turned it down.

Oppenheimer graduated from Harvard in three years. He wrote a friend: “Even in the last stages of senile aphasia I will not say that education, in an academic sense, was only secondary when I was at college. I plough through about five or ten big scientific books a week, and pretend to research. Even if, in the end, I’ve got to satisfy myself with testing toothpaste, I don’t want to know it till it has happened.”

From Harvard he went on to study at Göttingen in Germany, Thomson’s famed Cavendish Lab, Caltech, Berkeley, and, after the war, Princeton. Surely the Jewish Ethical Culture School he attended as a lad, which had a summer school adjunct in New Mexico, and the mesas of New Mexico, where he placed the crucial workings of the Manhattan Project, were most formative.

He and his friends skipped the Harvard commencement to drink lab alcohol in a dorm room. He had one drink and retired.

Other books on this topic worth noting include “Now It Can Be Told: The Story of the Manhattan Project” by Leslie Groves and, most particularly, “The Making of the Atomic Bomb” by Richard Rhodes.


Copyright © 2026 · Jack Vaughan