Progressive Gauge

Media and Research


Development

Large models cooling

July 16, 2023 By Jack Vaughan

Molecular Sampler – The week just passed brought news of a combined MIT/IBM team suggesting a less compute-intensive route to AI-driven materials science.  The group said it used a subset of a larger data pool to predict molecular properties. The use case has gained attention in both ML and quantum computing circles – where a drive to speed material development and drug discovery could lead to cost savings, better health outcomes and yet-to-be-imagined innovations.

Like most AI advances of late, the work draws inspiration from NLP techniques. The methods used to predict molecular properties tap into “grammar rule production,” which by now has a long lineage. There are on the order of a googol – a 1 followed by 100 zeros – of ways to combine atoms, which is to say grammar rule production for materials is a big job; that style of computation is daunting and may not be immediately exploitable.

Because the grammar rule production process is too difficult even for large-scale modern computing, the research team put its efforts into preparatory paring of data, a short-cut technique that goes back to the beginning of time. Some notes from the MIT information office:

“In language theory, one generates words, sentences, or paragraphs based on a set of grammar rules. You can think of a molecular grammar the same way. It is a set of production rules that dictate how to generate molecules or polymers by combining atoms and substructures.

“The MIT team created a machine-learning system that automatically learns the “language” of molecules — what is known as a molecular grammar — using only a small, domain-specific dataset. It uses this grammar to construct viable molecules and predict their properties.”

As I read it, the MIT-IBM team has come up with a simulation sampler approach. The ‘smaller corpus’ approach is much explored these days, as implementers try to take some of the ‘Large’ out of Large Language Models. One may always wonder whether such synthesis can ultimately yield true results. I trust an army of better-qualified readers will dig into the details of the sampling technique used here over the weekend.
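To make the “molecular grammar” idea concrete, here is a minimal toy sketch in Python. The production rules below are invented purely for illustration – they are not the grammar the MIT-IBM team learned, and real systems work over SMILES strings with far richer rule sets – but they show how a handful of rules can generate candidate molecule strings.

```python
import random

# Toy "molecular grammar": production rules that expand a start symbol into a
# SMILES-like string. The rules are invented for illustration only.
RULES = {
    "MOL":  [["FRAG"], ["FRAG", "FRAG"]],
    "FRAG": [["C"], ["O"], ["N"], ["C", "RING"]],
    "RING": [["1", "C", "C", "C", "C", "1"]],
}

def generate(symbol="MOL"):
    """Recursively expand a symbol; anything without a rule is a terminal."""
    if symbol not in RULES:
        return symbol
    production = random.choice(RULES[symbol])
    return "".join(generate(s) for s in production)

random.seed(7)
print([generate() for _ in range(5)])
```

A learned grammar of this kind, fit to a small domain-specific dataset, is what the quoted MIT description is pointing at; the hard part the paper addresses is learning the rules, not applying them.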

***  ***  ***  ***

ChatGPT damper – The signs continue to point to a welcome damper on ChatGPT (AI) boosterism – now that every deadline journalist in the world has asked the bot to write up a global heatwave story, or a Met red-carpet opening story, in the style of Hemingway or Mailer or some other.

Among the signals of cooling:

*There’s investor Adam Coons. The chief portfolio manager at Winthrop Capital Management said the AI trade on Wall Street will continue for now but will then fade as a hot button.

For a stock market that has rewarded mega-cap growth stocks for their ChatGPT chops, AI has become a FOMO trade. “In the near term that trade will continue to work. There’s enough investors still willing to chase that narrative,” he told Reuters. Coons and Winthrop Capital are nonetheless cautious, arguing that the hyperbole has obscured the technology’s true potential. He said:

“We are moving away from the AI narrative. We think that there’s still too much to be shown. Particularly [with] Nvidia, we think the growth figures that are being priced into that stock just don’t make sense. And there’s just really not enough proof statements from a monetization standpoint behind what AI can really do within the tech sector.”

*There’s Pinecone COO Bob Wiederhold, speaking at VB Transform – Pinecone is at the forefront of the surging vector databases that appear to have a special place in formative LLM applications. Still, Wiederhold sees the need for a realistic approach to commercializing the phenomenon.

His comments as described by Matt Marshall on VentureBeat:

Wiederhold acknowledged that the generative AI market is going through a hype cycle and that it will soon hit a “trough of reality” as developers move on from prototyping applications that have no ability to go into production. He said this is a good thing for the industry as it will separate the real production-ready, impactful applications from the “fluff” of prototyped applications that currently make up the majority of experimentation.
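For readers new to the category Pinecone leads: a vector database stores embeddings and answers nearest-neighbor queries over them. The sketch below is not Pinecone’s API – just a few lines of NumPy showing the core idea, with random vectors standing in for the output of a real embedding model.

```python
import numpy as np

# Toy "vector index": one embedding per document (random stand-ins here).
rng = np.random.default_rng(0)
docs = ["refund policy", "baggage rules", "loyalty program"]
index = rng.normal(size=(len(docs), 8))
index /= np.linalg.norm(index, axis=1, keepdims=True)   # unit-normalize rows

def query(vec, top_k=2):
    """Return the top_k documents by cosine similarity to the query vector."""
    vec = vec / np.linalg.norm(vec)
    scores = index @ vec                                 # cosine similarities
    best = np.argsort(scores)[::-1][:top_k]
    return [(docs[i], round(float(scores[i]), 3)) for i in best]

print(query(rng.normal(size=8)))
```

In an LLM application, the retrieved documents become context fed into the model’s prompt – which is where the “production-ready versus prototype” distinction Wiederhold draws starts to bite.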

*There’s Rob Hirschfeld’s commentary “Are LLMs Leading DevOps Into a Tech Debt Trap?” on DevOps.com – Hirschfeld is concerned with the technical debt that generative AI LLMs could heap onto today’s DevOps crews, which are already awash in quickly built, inefficiently engineered Patch Hell. Code generation is often the second-cited LLM use case (after direct mail and press releases).

Figuring out an original developer’s intent has always been the cursed task of those who maintain our innovations – but LLMs have the potential to bring on a new mass of mute code fragments contrived from LLM web whacks. Things could go from worse to worser, all the rosy pictures of no-code LLM case studies notwithstanding. Hirschfeld, who is CEO at infrastructure consultancy RackN, writes:

Since they are unbounded, they will cheerfully use the knowledge to churn out terabytes of functionally correct but bespoke code…It’s easy to imagine a future where LLMs crank out DevOps scripts 10x faster. We will be supercharging our ability to produce complex, untested automation at a pace never seen before! On the surface, this seems like a huge productivity boost because we (mistakenly) see our job as focused on producing scripts instead of working systems…But we already have an overabundance of duplicated and difficult-to-support automation. This ever-expanding surface of technical debt is one of the major reasons that ITOps teams are mired in complexity and are forever underwater.

News is about sudden change. Generative AI, ChatGPT and LLMs brought that in spades. It is all a breathless rush right now, and analysis can wait. But the limelight on generative AI has dimmed slightly. That is good, because what is real will be easier to see. Importantly, reporters and others are now asking the probing follow-up questions, like: “How much, how soon?”

It’s almost enough to draw an old-time skeptical examiner into the fray. – Jack Vaughan

 

Adage

“Future users of large data banks must be protected from having to know how the data is organized in the machine….” E.F. Codd in A Relational Model of Data for Large Shared Data Banks
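A quick modern-day illustration of Codd’s point, using Python’s built-in sqlite3 module: the query names only tables and columns, and nothing in it depends on how the engine physically lays out the rows. (A small sketch of the principle, not of Codd’s original System R-era environment.)

```python
import sqlite3

# The application speaks in tables and columns; physical storage is the
# database engine's business, exactly as Codd argued.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE accounts (id INTEGER, owner TEXT, balance REAL)")
con.executemany("INSERT INTO accounts VALUES (?, ?, ?)",
                [(1, "Ada", 120.0), (2, "Grace", 75.5)])
for owner, balance in con.execute(
        "SELECT owner, balance FROM accounts WHERE balance > 100"):
    print(owner, balance)
```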

Noted Passing: Henry Petroski, technology historian who studied failures in engineering

July 4, 2023 By Jack Vaughan

Henry Petroski’s early focus was on the ideas and experience of civil engineering, but surely he became influential to all types of engineers over a long public career. He died June 14 in Durham, N.C. at 81.

As a Duke University professor, Petroski looked closely at the art and science of engineering, and I think he came up with some very meaningful conclusions. Studying the history of failures in rockets, buildings, bridges and the like was his special pursuit. His books include “To Engineer is Human,” “The Evolution of Useful Things,” and “The Pencil.”

My take-away from seeing him lecture and appear on TV, and from reading his books and Scientific American articles was this:

Styles of engineering come into use, formulated by individuals who learn first principles from (often painful) failures. Then the style becomes taken for granted. Successive generations of engineers push the boundaries of the basic style, and mistakes are made that are sometimes deadly.

Among the object lessons in engineering failure Petroski would often cite were the collapse of the elevated skywalks at the Kansas City Hyatt Regency hotel, the collapse of the Tacoma Narrows Bridge in Washington State, the collapse of the Twin Towers of the World Trade Center in New York as the result of deliberate terrorist air crashes, and the loss of two NASA space shuttles.

His writing could verge on excessively fine-grained pedantry – I couldn’t forge all the way through “The Pencil.” Still, thanks to Petroski I did learn that the pencil’s history was much about finding the right combination of graphite and clay – and I continue to study the pencils I sharpen with particular attention.

Interesting to learn from the New York Times obituary of Petroski’s childhood recollection of making towers and bridges out of pantry cans and boxes. Guess he caught the analytical bug early!

I had the opportunity to interview Petroski very briefly after he spoke to a hall of software engineers at the OOPSLA Conference in Tampa in 2001. This was just a few weeks after 9-11, when the airways had just reopened, and a pretty tense time for travel. I recall that he was open to our questions. The assembled object-oriented programming crowd was enthralled, and the questions in the scrum after his speech were questions without clear answers at that time. Engineers ask questions, and wonder, especially about catastrophic events.

I can’t find my raw notes on that long-ago interview, but I will mark here that, when the new World Trade Center was built, there was far more use of concrete, which is less vulnerable to conflagration.

I did find a write-up I did for Application Development Trends on Petroski at OOPSLA. I will include a bit of it here, along with a link. I tried to draw engineering principles from his studies that might apply to the software architecture issues of the day, though that bit now sounds weak. There were things to be afeared of in the burgeoning architecture of web services – just not the ones I imagined or predicted in 2001.

I can’t read this piece without thinking of my indebtedness to the great crew at ADT, led by the late Mike Bucken, who gave me so many opportunities to damn the torpedoes and get something interesting out there to our readers. The list of editors that would let me run the headline “There’s No Success Like Failure” is pretty short. – Jack Vaughan

 

From “There’s No Success Like Failure” on adtmag.com

If you interviewed a system designer who admitted to his or her list of failures in design, you would probably begin plotting ways to end the meeting and get to the next job candidate, wouldn’t you? You probably wouldn’t consider hiring the person.

 

An obsession with failure could be a problem, but a modicum of fear of failure—a respect for the phenomena that can undo a design—may be healthy in a designer or developer. Maybe you should hear out a job candidate who is capable of analytically discussing a failed project or two.

 

If you are wary of this advice, I don’t blame you, but you might be more inclined to follow it if you were to hear from Henry Petroski. This Duke University professor of history and civil engineering spoke at last fall’s OOPSLA Conference in Tampa, Fla. In a kick-off keynote address, Petroski discussed success and failure in design throughout history, concluding that there is a unique interrelationship between the two.

 

“All materials are flexible if slender enough,” asserted Petroski, who noted that bridge designers tend to go toward the sleek and aesthetic as they get further away in time from first principles. The Tacoma Narrows Bridge breakup of 1940 stands—well, it doesn’t exactly stand, it fell—as a testament to Petroski’s assertion.

Scholar Petroski took his OOPSLA audience back to ancient Rome to make his point. He discussed Vitruvius, the author of key architecture texts, who went to great lengths to consider failures of the stone-and-axle variations (that’s how they moved pillars) of the day. Vitruvius suggests that following a successful design to its ultimate conclusion is not the way to proceed.

The big ships of the era of European exploration, many of which failed, came in for consideration. “Ships made of wood were scaled up, every dimension doubled,” said Petroski. “At a certain size, they would break in two.”
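The arithmetic behind those breakups is the square-cube law (my gloss, not Petroski’s wording): weight grows with volume while load-bearing strength grows only with cross-section, so the stress on the timbers climbs with every doubling.

```python
# Square-cube law: scale every dimension by s and weight rises as s**3
# while load-bearing cross-section only rises as s**2.
for s in (1, 2, 4, 8):
    weight = s ** 3        # proportional to volume
    strength = s ** 2      # proportional to cross-sectional area
    print(f"scale x{s}: relative stress on the material = {weight / strength:.0f}x")
```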

Petroski noted that nature does not design this way (to blindly scale up); the leg bones of large and small animals are not exactly proportional. Cable stay bridges are now the rage in bridge design, noted Petroski. Their design is becoming increasingly ambitious, he added, and some failure may be in store.

“Failures in bridge style seem to repeat in 30-year [intervals],” said Petroski. “Engineers are ambitious. Everyone wants to build the largest bridge in the world. Cable stay bridges are exhibiting problems.”

Whenever the envelope is pushed, he indicated, “there is opportunity for phenomena to manifest that were not obvious in the small.”

Failure could be generational, said Petroski; when engineers start to work with new design paradigms they take great care. Then as things get familiar, they forget about the fundamentals and they push, sometimes beyond the real design limits.

RELATED
Petroski speaks – YouTube
Obituary – New York Times
Read the rest of “There’s No Success Like Failure” – ADTmag.com Jan 2002.

Kreps, Dorsey, riff on ChatGPT

May 29, 2023 By Jack Vaughan

Stagg Field Nuclear Pile [fragment]

[Boston — May 2023] — “It’s a law that any conversation around technology has to come back to AI within five minutes.”

Well put, Jay Kreps, co-founder and CEO of real-time streaming juggernaut Confluent. Speaking at J.P. Morgan’s Boston Tech Investor event, Kreps knew this was coming. ChatGPT rules the news these days.

Given the daily pounding of 1,000 reporters’ laptops, and given Nvidia’s vault into the highest clouds of valuation, it is no surprise that ChatGPT-style generative AI is the recurring topic. It crowds out all other discussion, just as the tech stalwarts at J.P. Morgan’s and others’ tech events expected.

It’s the 600-lb. ChatBot in the room, and it is bigger than big.

Confluent chief on Chatbot interaction

Back in the nascent days of social media, the founders of Confluent, then working at LinkedIn, created a distributed commit log that stored streams of records. They called it Kafka and grew it into a fuller data stream processing system. Its intent is to bring to the broader enterprise real-time messaging capabilities akin to those of the Flash Boys of Wall Street.
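For the unfamiliar, the core Kafka interaction is just appending records to a named topic and reading them back in order. A minimal sketch, assuming the kafka-python client and a broker running at localhost:9092:

```python
from kafka import KafkaProducer, KafkaConsumer

# Append a record to the "orders" topic (the commit-log write).
producer = KafkaProducer(bootstrap_servers="localhost:9092")
producer.send("orders", b'{"order_id": 1, "item": "widget"}')
producer.flush()

# Read the stream back, oldest record first (the commit-log read).
consumer = KafkaConsumer("orders",
                         bootstrap_servers="localhost:9092",
                         auto_offset_reset="earliest",
                         consumer_timeout_ms=5000)
for record in consumer:
    print(record.offset, record.value)
```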

The company is still in “spend a buck to make a buck” mode. For the quarter ended March 31, Confluent revenue increased 38% to $174.3M, while its net loss jumped 35% to $152.6M. Customers include Domino’s, Humana, Lowe’s, Michelin and others. In January it purchased would-be competitor Immerok, a leading contributor to the Apache Flink stream processing project.

What’s the significance of real-time streaming in “the age of AI,” Kreps is asked at the Boston event. He says:

It’s really about how a company can take something like a large language model that has a very general model of the world and combine it with information about that company, and about customers, and be able to put those things together to do something for the business.

He gives an example: a large travel company wants an interactive chatbot for customers. The bar ChatGPT must clear there to be an improvement is not so high. As Kreps said: “The chatbots were always pretty bad. It’s like interacting with like the stupidest person that you’ve ever talked to.”

Improvements needed for chatbots include a real-time view of all the information the company holds about customers and operations.

What do you need to make that work? Well, you need to have the real-time view of all the information about them, their flights, their bookings, their hotel, are they going to make their connection, etcetera. And you need a large language model which can take that information and answer arbitrary questions that the customer might ask. So the architecture for them is actually very simple. They need to put together this real time view of their customers, what’s happening, where the flights are, what’s delayed what’s going on. And then they need to be able to call out to a service for the generative AI stuff, feed it this data, feed it the questions from customers, and … integrate that into their service, which is very significant. This is a whole new way of interacting with their customers. And I think that that pattern is very generalizable.
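In code, the pattern Kreps describes reduces to something like the sketch below. The helper names (lookup_customer_state, call_llm) are hypothetical stand-ins for whatever stream-backed view and model service a team actually uses; the stubs return canned data so the flow runs end to end.

```python
def lookup_customer_state(customer_id: str) -> dict:
    # Stub: in a real system this would read a continuously updated,
    # stream-backed view of the customer's bookings, flights and delays.
    return {"flight": "BA212", "status": "delayed 40 min", "connection": "at risk"}

def call_llm(prompt: str) -> str:
    # Stub: in a real system this would call out to a hosted LLM service.
    return f"[model answer based on a prompt of {len(prompt)} characters]"

def answer_customer(customer_id: str, question: str) -> str:
    state = lookup_customer_state(customer_id)
    prompt = (
        "You are a travel assistant. Current customer data:\n"
        f"{state}\n"
        f"Customer question: {question}\n"
    )
    return call_llm(prompt)

print(answer_customer("cust-42", "Will I make my connection?"))
```

The generalizable part, as Kreps says, is the shape: a real-time view of the business feeding a general-purpose model, question by question.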

Popping the question: Dorsey

For Jack Dorsey, the question “What about ChatGPT?” is raw meat. He melded SMS and the web to create Twitter, and now, with a nod to bitcoin and blockchain, has built Block, née Square. The financial services and digital payments company posted revenue for the three months ended April 1 that increased 26% to $4.99B, while its net loss decreased a significant 92% to $16.8M. The good news was based on increased use of its Cash App product.

At the J.P. Morgan tech investor conference, Dorsey told the audience that, while hype obviously abounds, true progress rides on use cases.

There’s a ton of hype right now. And I think there’s a lot of companies being started that are going to fail because of that hype. I think the technology industry is very trendy, and very fashionable and jumps from one thing to the next, to the next, to the next. It wasn’t so long ago that we were only talking about Bored Apes and Crypto and NFTs and now we’re talking only about AI and how it’s going to kill us.

There’s always some truth in all these things. I just would caution any company that’s approaching it from a technology perspective, [to] instead use a use case perspective. What is the use case you’re trying to solve? And what technologies can you use to solve it more creatively?

THAT’S THE WAY IT IS — Clearly, panelists and podiumists are preparing to take on ChatGPT questions. At the same time, the clamor of the now will shift to prioritizing generative AI strategically within a host of technology initiatives. ChatGPT may be generalizable — but the proof will not appear overnight. The proof is in the business use case.

“Mythical Man-Month” Author Frederick Brooks, at 91

December 20, 2022 By Jack Vaughan

[Dec 20, 2022] – Noting here the passing at 91 last month of Frederick Brooks, director of some of IBM’s most important mainframe-era programming projects. He was a key figure in establishing the idea that software projects should be intelligently engineered and organized.

He helped as much as anyone to move the mysterious art of tinkering with computer code toward a profession capable of repeatable results. “The Mythical Man-Month,” his 1975 distillation of years of development management, became a common reference work in many a developer’s desk library.

La Brea Tar Pits – Huntington Library.

Working at IBM in the 1950s and 1960s, and spearheading development of the vaunted IBM System/360, Brooks gave the lie to notions that were bedrock in hardware-software projects, and came up with a few notable inventions as well.

In particular, he is credited with IBM’s decision to settle on an eight-bit byte, which allowed the systems to handle both text and numerals. Strange to think there was a time when machines were dedicated either to text handling or to numerical calculation, but it was so!
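A trivial way to see what a uniform byte buys: the same eight bits can be read as a character or as a small integer. (Modern ASCII is shown here for convenience; the System/360 itself used EBCDIC.)

```python
# One byte, two readings: text and number share the same 8-bit container.
b = bytes([0x41])
print(b.decode("ascii"))   # 'A'  -- the byte read as a character
print(b[0])                # 65   -- the same byte read as an integer
```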

He oversaw the development of systems that could be offered with an expandable range of processor and memory equipment at different price points, thus entering the development era of “platform” over “product.”

Brooks studied complex software projects and found some surprising truths – the most telling being that adding people to a late project only makes it later.
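The arithmetic behind that law, as Brooks laid it out in the book, is the growth in communication paths: n workers who must coordinate pairwise have n(n-1)/2 channels to maintain.

```python
# Brooks' intercommunication count: n workers coordinating pairwise
# must maintain n*(n-1)/2 communication paths.
for n in (3, 5, 10, 20):
    paths = n * (n - 1) // 2
    print(f"{n:2d} people -> {paths:3d} communication paths")
```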

He also saw the dangerous lure technology offers in the form of the “Silver Bullet” that promises a sudden technological or organizational breakthrough.

With these and other observations, Brooks helped build a philosophical underpinning for structured analysis, a school of thinking that held sway in software during an era of big projects marked especially by NASA’s Apollo program.

Ed Yourdon, Ivar Jacobson, Tom DeMarco and others would take Brooks’ work into the 1990s. Like him, they realized it’s not just about “the code” – that the culture of the organization can play a more dominant role.

Brooks paved the path forward with an emphasis on requirements gathering. But he foresaw the tar pit that beckoned in any search for schema perfection ahead of actually getting the project going.

He conveyed this without embracing the extreme that says “Fail Fast,” and he did it as always with a measure of humor. It’s right there in the title for one of “The Mythical Man-Month” chapters: “Plan to Throw One Away.”

Some of Brooks’ musings do echo another era. I don’t think we have man-months anymore – mythical or otherwise. Unquestionably, too, team programmers have gained much more responsibility over the years, so Brooks’ emphasis on the manager wears thin. [Heck, a whole era of development sprang from gritty JavaScript developers who found a way around obstacles their managers took for granted as their ‘lot in life.’]

But managers, at the end of the day, bear the greatest responsibility for the software project. Technology acumen is just table stakes. Their communication and organization skills must be stellar, as Brooks indicates when he writes:

The Tower of Babel was perhaps the first engineering fiasco, but it was not the last. Communication and its consequent, organization, are critical for success. The techniques of communication and organization demand from the manager much thought and as much experienced confidence as the software technology itself. – From The Mythical Man-Month.

Is software engineering really a profession? The question will continue to be asked, and Brooks’ work will likely always be part of that discussion.

Coda: My days as a software project manager were brief – about a year. [After which my colleagues welcomed me back to editorial and told me they thought I’d been crazy ever to leave.] What I learned building websites was that, no matter what you think your problem is, it is probably a project management problem. I owe that insight to Brooks. As expressed in his thoughtful and very often bemused writings, Brooks’ thinking on the topic informed my efforts, and many others’, to ‘ship the darn thing.’ – Jack Vaughan, 2022


Copyright © 2025 · Jack Vaughan