Progressive Gauge

Media and Research

Computing

PASCAL language originator Niklaus Wirth, 89

January 12, 2024 By Jack Vaughan


Niklaus Wirth, whose 1970 creation of the PASCAL programming language dramatically influenced the development of modern software engineering techniques, died January 1. He was 89.

A long-time professor of informatics at ETH Zurich, the Swiss Federal Institute of Technology, Wirth produced PASCAL primarily as a teaching tool, but its innovative language constructs set the stage for the C, C++ and Java languages that flourished in the late 20th Century and still see wide use today. Wirth won the 1984 ACM Turing Award, among other honors.

Wirth named the language after the 17th Century philosopher and mathematician Blaise Pascal, who is often credited as the inventor of the first mechanical calculator. The PASCAL language was a bridge of sorts, attempting to span the styles of business computing, then dominated by the COBOL language, and scientific computing, as exemplified by FORTRAN.

The field of software engineering was still in its early days when Wirth began his pioneering work, first as a graduate student at the University of California, Berkeley, and then as an assistant professor at Stanford University. At the time, hardware itself was still evolving from numerical calculators into general-purpose computers.

It was becoming apparent by this time that a language focused solely on numerical processing would encounter obstacles as computing evolved.

It was hoped, Wirth wrote in a paper, that “the undesirable canyon between scientific and commercial programming … could be bridged.” [This and other materials cited in this story appeared in “Recollections about the Development of PASCAL,” published as part of “History of Programming Languages,” in 1996 by Addison-Wesley.]

Wirth also contributed to the development of the ALGOL and Modula languages, often bringing special focus to the compilers that translate source code into running machine code. His book “Algorithms + Data Structures = Programs” is often cited as a keystone text for those interested in the fundamentals of software design.

Education was the primary goal of the work on PASCAL, which Wirth undertook after a disappointing experience on a squabbling committee of experts looking to standardize a successor to ALGOL 60.

In his words, Wirth decided “to pursue my original goal of designing a general-purpose language without the heavy constraints imposed by the necessity of finding a consensus among two dozen experts about each and every little detail.”

The work of Wirth and some co-conspirators built on ALGOL, but also drew implicitly on emerging thinking about structured approaches to software development, such as those outlined by E.W. Dijkstra. Certainly, the early ‘bubble gum and baling wire’ days of computer software were receding as Wirth began his work.

“Structured programming and stepwise refinement marked the beginnings of a methodology of programming, and became a cornerstone in helping program design become a subject of intellectual respectability,” according to Wirth.

The fact that PASCAL was designed as a teaching tool is important – it was constructed in a way that allowed new programmers to learn sound method, while applying their own enhancements.

“Lacking adequate tools, build your own,” he wrote.

And, while innovation was a goal, pragmatism for Wirth was also a high-order requirement.

Wirth’s work on PASCAL started about the time he moved from Stanford to ETH Zurich, with the definition of the language achieved in 1970. It was formally announced in “Communications of the ACM” in 1971. Though never the most widely used language by the numbers, it was highly influential, contributing to a movement that emphasized general-purpose use, strong typing and human-readable code.

PASCAL gained great currency in the early days of personal computers in the United States. A version that became known as Turbo Pascal was created by Anders Hejlsberg, and was licensed and sold by Borland International beginning in 1983. On the wings of Borland ads in Byte magazine, Turbo Pascal became ubiquitous among desktop programmer communities.

The work of pioneers like Wirth gains special resonance today, as a Generative AI paradigm appears poised to automate larger portions of the programming endeavor. Automated code generation is by no means completely new, and the surprises and ‘gotchas’ of the pioneers’ era will no doubt be revisited as the understanding of effective software development processes continues to evolve. Wirth’s words and work merit attention in this regard, and for a fuller understanding of software evolution generally. – J.V.

This Progressive Gauge obituary now gets some extra time, with a few minutes of additional material.

My Take: I came to cover software for embedded systems, electronic design automation, and then business applications, at a time when structured programming and defined methodologies were in full ascendance.
Methodologies narrowed over many years into a generally accepted path to software modeling. But they tended to flower wildly at first. They started with some philosophic underpinnings, but could just as easily be characterized as something that had worked for someone once, and that they thought could become a product. The downside was that each new methodology might suggest your problem was the methodology you were already using, which was not always the case. The joke of a bench developer I shared a cube with for a while was “What’s a methodology?” Answer: “A method that went to college.”

Each methodology generally carried the name of its inventor, not that of a historical figure, as with PASCAL.

The complexity of the deepest levels of the new operating system, compiler, design and language approaches was daunting for this writer.

My work in the computer trade press afforded me the opportunity to meet Unix co-creator Dennis Ritchie of Bell Labs (at the time he was rolling out the Plan 9 OS) and Java author James Gosling (then of Sun Microsystems). I met two of the three “UML Amigos”: Grady Booch, then of Rational Software, and Ivar Jacobson, head of Ivar Jacobson International. Interviewing Ivar on one occasion was particularly memorable. He asked that I interview a software avatar he had just created, an animated figure which spoke and represented his thinking on technology issues, rather than speak with him directly.

These assignments surely were ‘a tall order for a short guy!’

I offer these notes here in my role as a generalist. While computer history doesn’t repeat, it rhymes. This history is always worth a revisit, especially as clearly new paradigms fly out of coders’ 2024 work cubes. It has been interesting for me to look at the origination of PASCAL, and to learn about Niklaus Wirth, the individual who brought PASCAL into the world.

Pascaline

Links
https://cacm.acm.org/magazines/2021/3/250705-50-years-of-pascal/fulltext
https://www.standardpascaline.org/PascalP.html
https://amturing.acm.org/award_winners/wirth_1025774.cfm
http://PASCAL.hansotten.com/niklaus-wirth/recollections-about-the-development-of-PASCAL/
https://blogs.embarcadero.com/50-years-of-pascal-and-delphi-is-in-power/

Is AWS a diminishing AI laggard – or is it right about on time?

December 12, 2023 By Jack Vaughan

Harvard Stadium

AWS is lagging and racing to catch up in Generative AI and Large Language Models (LLMs). Or so an industry meme holds. When the smattering of new COVID isolations ends and the dust settles in the weeks after Amazon’s re:Invent 2023 conference in Las Vegas, that notion may be due for a revision.

Like all its competitors, AWS is working to put Generative AI technology in place – that means latching it on to other application offerings and adapting new tools and schemes for developers.

Among the challenges that now face teams creating Generative AI applications are vector embeddings. Creating these embeddings is an important step in preparing data for consumption by the LLMs that betoken a new era of chatbots. Perhaps as importantly, vector embeddings are also useful in slightly less futuristic applications, such as search, recommendation and personalization engines.

When Wall Street wags ask whether AWS is a diminishing AI laggard or peaking at just the right time, they probably don’t devote too much thought to the types of vectors machine learning engines are now churning. But building such “infrastructure” is important on the path to working AI.

AWS put vector techniques front and center in AI and data announcements at re:Invent 2023. A centerpiece is Amazon Titan Multimodal Embeddings, just out. The software converts images and short text into numerical representations that generative learning models can use, both to unpack the semantic meaning of data and to uncover important relations between data points.

Putting new-gen AI chatbots aside for the moment, it’s worth mentioning that recommendation and personalization tasks are likely beneficiaries of vector and AI progress. Once the province of Magnificent 7-class vendors, these application types have become part of more and more organizations’ systems portfolios.

As you may imagine, they add considerable complexity to a developer’s typical day. Here, AWS has set a course to simplify such work for customers.

Before some words on that, a few words about these kinds of embeddings: Vector embeddings are numerical representations created by embedding models from words, phrases or blocks of text. The vectors suit new styles of machine learning, which seek to find meaning in the relationships among data points.
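
For the curious, here is a minimal sketch of the idea in Python, using the open-source sentence-transformers library rather than any AWS service; the model name below is one common, illustrative choice, not Amazon Titan. Phrases with related meanings land near one another in vector space, and a simple cosine-similarity measure can surface that.

```python
# Minimal embedding sketch using the open-source sentence-transformers
# library; "all-MiniLM-L6-v2" is an illustrative model choice, not an
# AWS offering.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

texts = [
    "The invoice is past due.",
    "Payment for this bill is overdue.",
    "The weather in Zurich is pleasant.",
]

# encode() maps each phrase to a fixed-length numerical vector.
vectors = model.encode(texts)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Closeness of direction between two vectors, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Semantically related sentences score high; unrelated ones score low.
print(cosine_similarity(vectors[0], vectors[1]))  # high
print(cosine_similarity(vectors[0], vectors[2]))  # low
```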

This is useful, but development managers need to find skilled-up programmers and architects to make this leap forward. That is some of the feedback AWS says it’s getting from customers. Enter Swami Sivasubramanian.

Sivasubramanian is vice president of data and AI at AWS. At re:Invent he told attendees: “Our customers use vectors for GenAI applications. They told us they want to use them in their existing databases so that they can eliminate the learning curve in terms of picking up a new programing paradigm, new tools, APIs and SDKs. Importantly, when your vectors and business data are stored in the same place, your applications will run faster and there is not data sync or data movement to worry about.”

Do you want to bring in a vector database to handle this work – adding to your relational databases, document databases, graph databases, and so on? AWS, which has used re:Invent after re:Invent to spotlight such new database offerings, is shifting here to promote “run your vectors in your existing database” rather than bringing in another new-fangled database.

So, central to AWS’s take is a push to provide vector data handling within existing Amazon databases rather than standalone vector databases, although Amazon supports third-party vector database integration as well.

Among the many Amazon initiatives Sivasubramanian discussed at re:Invent 2023 was vector support for DocumentDB, DynamoDB, Aurora PostgreSQL, Amazon RDS for PostgreSQL, MemoryDB for Redis, Neptune Analytics, Amazon OpenSearch Serverless, and Amazon Bedrock.
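
What does “run your vectors in your existing database” look like in practice? Below is a minimal Python sketch against PostgreSQL with the open-source pgvector extension, the mechanism that underlies the Aurora PostgreSQL and RDS for PostgreSQL support mentioned above. The connection string, table layout and vector dimension are illustrative assumptions, not AWS specifics.

```python
# Sketch: similarity search inside an existing PostgreSQL database via
# the open-source pgvector extension. DSN, table and dimension below
# are placeholders for illustration.
import psycopg2

conn = psycopg2.connect("dbname=appdb user=app")  # placeholder DSN
cur = conn.cursor()

cur.execute("CREATE EXTENSION IF NOT EXISTS vector;")
cur.execute("""
    CREATE TABLE IF NOT EXISTS documents (
        id bigserial PRIMARY KEY,
        body text,
        embedding vector(384)  -- dimension must match the embedding model
    );
""")
conn.commit()

# Nearest-neighbor search: <-> is pgvector's L2-distance operator.
query_vec = [0.1] * 384  # in practice, the embedding of the user's query
vec_literal = "[" + ",".join(str(x) for x in query_vec) + "]"
cur.execute(
    "SELECT body FROM documents ORDER BY embedding <-> %s::vector LIMIT 5;",
    (vec_literal,),
)
for (body,) in cur.fetchall():
    print(body)
```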

The moment sets up a classic soup-to-nuts vendor vs. best-of-breed vendor paradigm. Among the best-of-breed database upstarts are Milvus, Pinecone, Zilliz and others.

Meanwhile, vector support has sounded like a drumroll for database makers of every ilk of late. Here is a small sampling. In September, IBM said it planned to integrate vector database capability into watsonx.data for use in retrieval augmented generation (RAG) use cases. Also in September, Oracle disclosed plans to add semantic search capabilities using AI vectors to Oracle Database 23c. On the heels of re:Invent, NoSQL stalwart MongoDB announced GA for MongoDB Atlas Vector Search. And, prior to re:Invent, Microsoft went public with vector database add-ons for Azure container apps.

Is AWS a diminishing AI laggard – or is it right about on time? No surprise here. The answer is somewhere in between the two extremes, just as it is somewhere between the poles on the soup-to-nuts-to-best-of-breed continuum. It will be interesting to see how the vector database market evolves. – Jack Vaughan

Effective A Teams and the paradigm of computing

November 27, 2023 By Jack Vaughan

 

Frame from “Superman: The Magnetic Telescope,” Fleischer Studios, 1942

The paradigm of computing abides. There’s Input and Compute and Output and, with all certainty, Memory is crucial too. This is basic, but it has been enough to maintain attention and spur curiosity over a career. Overlaying this is the world and how this computer paradigm succeeds and/or fades in the raucous ecosystem of humankind.

This is writ a week after a heaping helping of raucous humankindness – after Generative AI had its Wild Knives-Out Weekend. This saw, to summarize, the firing and rehiring of OpenAI CEO Sam Altman, also known as the Most Important Person for the Future of the World.

Too, this is writ by one who came to maturity as the powerful trains met: Better Living through Electricity encountered Do Not Bend, Fold, Spindle or Mutilate. Back in the day. The rise of automation and computerization raised concerns about dehumanization, yes. It was a concern of think tanks – as well as writers and readers, and film directors and movie audiences — in the 1950s and 1960s.

But there was tentative optimism too. One ironic twist: seers of the day worried about the future of an American culture that would suddenly have too much leisure time. Anyone who has worked late to create a spreadsheet, toggle through the steps to reboot a printer, or fill out an online form must find some irony in that. Or anyone who has noticed the cookies that follow them around and guess at their needs as they use the WWW.

So be it with some seers.

Of course, the basic blocks of computation get programmed. One result is the neural network, which in recent years has emerged steadily ‘from the lab.’ Schools of programming and venture capital rise up around the simple compute blocks.

Funny, but the neural network – now known as AI – has spawned new takes on old schools of thought. These are helpfully layered atop the technology with some commercial intent.

And they vie in the market of ideas today. Under the leaky umbrellas of Effective Altruism and Effective Accelerationism, an odd take on the neural net has taken hold. It holds that the neurals will achieve a general intelligence that pushes machines past humans. The Altruists, with their concern that Sam Altman was moving too quickly toward this precipice, lost a round last week to the Accelerators in the Knives-Out Shoot-Out.

This in turn follows an effectively disruptive blow-out of blockchain and Web 3.0 technology at the hands of EAff and EAcc, mostly due to the missteps of Sam Bankman-Fried, formerly the Most Important Person for the Future of the World.

We need a good quick read on this topic, and blogger and software engineer Molly White has published just such a piece; it is the impetus for this brief essay. “Effective Obfuscation” is not an on-the-one-hand/on-the-other-hand type of essay, yet it is quite meritorious in my opinion. It’s a good tonic for the blues that hit you as you think of Mosaic co-creator Marc Andreessen’s recent manifesto on Silicon Valley greatness, and a level-headed appreciation of just what happened last weekend.

Short-hand White synopsis: The “effective altruism” and “effective accelerationism” ideologies that have been cropping up in AI debates are just a thin veneer over the typical blend of Silicon Valley techno-utopianism, inflated egos, and greed. Let’s try something else, she writes.

It’s my opinion that Fear of AI is overdone today mainly in the interest of the Hype Machine. A name for this was well-conjured by Molly White as “Effective Obfuscation” [My 2002 take on Obfuscation.] Concern over The Continuing Culture of Bend and Mutilate is real and needs to be addressed. But the neural network deserves better. – J.V.

Noting another worthy assessment here: The AI Doomers have lost the Battle – Benedict Evans, FT.com

Nvidia and Cloudflare Deal; NYT on Human Feedback Loop

September 29, 2023 By Jack Vaughan

Cloudflare Powers Hyper-local AI Inference with Nvidia – The triumph that is Nvidia these days can’t be overstated – although the wolves on Wall St. have sometimes tried. Still, Nvidia is a hardware company. OK, let’s say Nvidia is still arguably a hardware company. Its chops are considerable, but essentially it’s all about the GPU.

Nvidia is ready to take on the very top ranks in computing. But, to do so, it needs more feet on the street. So, it is on the trail of such, as seen in a steady stream of alliances, partners and showcase customers.

That’s the backdrop to this week’s announcement that Cloudflare Inc. will deploy Nvidia GPUs along with Nvidia Ethernet switch ASICs at the Internet’s edge. The purpose is to enable AI inferencing, the runtime task that follows AI model training.

“AI inference on a network is going to be the sweet spot for many businesses,” Matthew Prince, CEO and co-founder, Cloudflare, said in a company release concerning the Cloudflare/Nvidia deal. Cloudflare said NVIDIA GPUs will be available for inference tasks in over 100 cities (or hyper-localities) by the end of 2023, and “nearly everywhere Cloudflare’s network extends by the end of 2024.”

Cloudflare has found expansive use in Web application acceleration, and could help Nvidia in its efforts to capitalize on GPU technology’s use in the amber fields of next-wave generative AI applications.

With such alliances, all Nvidia has to do is keep punching out those GPUs – and development tools for model building.

***  ***  ***  ***

NYT on Human Feedback – The dirty little secret in the rise of machine learning was labeling. Labeling can be labor-intensive, time-consuming and expensive. It harkens back to the days when ‘computer’ was a human job title, and filing index cards was a way of life.

Amazon’s Mechanical Turk – a crowdsourcing marketplace amusedly named after the 18th Century chess-playing machine “automaton” that was actually powered by a chess master hidden inside the apparatus — is still a very common way to label machine learning data.

Labeling doesn’t go away as Generative AI happens. As the world delves into what Generative AI is, it turns out that human labelers are a pretty significant part of it.

That was borne out by some of the research I did in the summer for “LLMs, generative AI loom large for MLOps practices” for SDxCentral.com. Sources for the story also discussed how reinforcement learning from human feedback was needed for the Large Language Models underpinning Generative AI.

The cost of reinforcement learning, which helps make sure the models are working as intended, is more than a small part of the sticker shock C-suite execs are experiencing with Generative AI.

Like everything else, the process may improve. Sources suggest retrieval augmented generation (RAG) is generally less labor-intensive than data labeling. RAG retrieves information from an external database and provides it to the model “as-is.”
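
As a rough illustration, here is the retrieve-then-generate pattern in outline, as a Python sketch. The embed, vector_store and generate pieces are hypothetical stand-ins for whatever embedding model, database and LLM a team uses; no particular vendor’s API is implied.

```python
# Sketch of the RAG pattern: retrieve relevant passages from an external
# store, then hand them to the model "as-is" as grounding context.
# embed(), vector_store and generate() are hypothetical stand-ins.
def answer_with_rag(question, vector_store, embed, generate, k=3):
    query_vec = embed(question)                         # 1. embed the question
    passages = vector_store.search(query_vec, top_k=k)  # 2. retrieve top-k passages
    prompt = (
        "Answer using only the context below. If the context does not "
        "contain the answer, say so.\n\n"
        "Context:\n" + "\n".join(passages) +
        "\n\nQuestion: " + question
    )
    return generate(prompt)                             # 3. generate a grounded answer
```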

RAG is meant to address one of ChatGPT’s and Generative AI’s most disturbing traits: It can make a false claim with amazingly smug confidence. Humans have to keep a check on it.

But the build-out of RAG requires some super smarts. As we have come to see, many of today’s headline AI acquisitions are as much about gaining personnel with advanced know-how as they are about gaining some software code or tool. This type of smarts comes at a high price, just as the world’s most powerful GPUs do.

This train of thought is impelled by a recent piece by Cade Metz for the New York Times. “The Secret Ingredient of ChatGPT Is Human Advice” considers reinforcement learning from human feedback, which is said to drive much of the development of artificial intelligence across the industry. “More than any other advance, it has transformed chatbots from a curiosity into mainstream technology,” Metz writes of human feedback aligned with Generative AI.
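
At the heart of that human-feedback loop is a reward model trained on human preference judgments. A common formulation is the pairwise, Bradley-Terry style loss sketched below in plain NumPy; this is a simplification for illustration, not any lab’s actual implementation, and the scores would in practice come from a learned reward network.

```python
# Sketch of the pairwise preference loss used to train an RLHF reward
# model: given human judgments that response A beats response B, push
# the model to score A above B. Scores here are stand-in numbers; in a
# real system they come from a learned reward network.
import numpy as np

def preference_loss(score_chosen: np.ndarray, score_rejected: np.ndarray) -> float:
    """Bradley-Terry style loss: mean of -log sigmoid(r_chosen - r_rejected)."""
    margin = score_chosen - score_rejected
    # Numerically stable: -log(sigmoid(x)) == log(1 + exp(-x))
    return float(np.mean(np.logaddexp(0.0, -margin)))

# Example: the model already ranks human-preferred answers slightly
# higher, so the loss is modest but nonzero.
chosen = np.array([1.2, 0.7, 2.0])    # scores for human-preferred responses
rejected = np.array([0.9, 0.8, 0.5])  # scores for dispreferred responses
print(preference_loss(chosen, rejected))
```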

Metz’s capable piece discusses the role that expert humans are playing in making Generative AI work, with some implication that we should get non-experts involved, too. In response to the story, one Twitter wag suggested that Expert Systems are making a comeback. If so, I guess we will have to make do with more expert humans until “The Real AI” comes along! – Jack Vaughan

 

Deconstructing and re-combining on the way to Turing: On Metcalfe

June 16, 2023 By Jack Vaughan

i. Taking a little time to review some writing from earlier this year… One piece is especially memorable for me, for it was a welcome opportunity to interview networking pioneer Bob Metcalfe as he was named recipient of the 2022 ACM A.M. Turing Award for the co-invention, standardization and commercialization of Ethernet.

I won’t reconjure the full story, but there is something I didn’t run in the final draft that I’d like to share here. It’s a bit about his youth and early interest in technology. I find these pre-histories can be very telling, or just simply interesting. In the run-up to the typical story, the origin questions aren’t a fit. But let’s go back to those thrilling days of yesteryear.

Like ham radios, model trainsets were a path to engineering in Metcalfe’s youth in the 1950s. The model railroad arose in Metcalfe’s response to my origin question, but in a roundabout, and perhaps surprising, way. A trainset was gifted and then built as a father-son bonding experience, and then was atomized – its relays, lights and toggle switches removed from their casings and repurposed into something else entirely. As Metcalfe tells it:

My father was a gyroscope technician. And we built a railroad set on a four by eight piece of plywood that we painted green, and then I took the pieces of the control system for the railroad and I built what my eighth-grade teacher called a computer. It could add any number between one, two and three to any other number between one, two and three, and show the result by turning on a light between two and six. So those were the beginnings of my computer days. I never went back to the model railroad. Eventually I started programming computers.

It would be an overstatement to suggest this was the spark that launched Metcalfe’s life-long study of communications technologies and applications. But it has a place in his memory, and it discloses one of the engineer’s methods on the road to invention – what G. Polya described as ‘decomposing and recombining’ important operations in the mind. Decomposed elements are combined in a new manner. Such a move can send a lad or lass to a science fair exhibition, and on to the institute. It’s likely essential to most problem solving.

ii. Some more. Now, in the popular mind, Ethernet has taken a decided backseat to the Internet, as I noted in the VentureBeat piece. Still, Metcalfe rightly boasted that Ethernet deserves a lot of credit for proving incredibly resilient and potent. As he has said:

Ethernet was media-independent from the beginning. In retrospect, that was an important insight because Ethernet has been on every medium since.

That’s borne out today when you look at the ChatGPT cauldron. The AI asks for GPUs, but to scale them you have to connect them. Right now, Nvidia holds the winning hand with Infiniband. But it’s not alone. An Ethernet update running under the banner RoCE (RDMA over Converged Ethernet, pronounced “Rocky”) is out there punching today, taking on some – but not all – parts of the latest data center mission: to move LLM vectors around in generative ML models.

Is it faster than Nvidia’s favored Infiniband? No. Is it a brewing de jure standard? Yes.

Worth recalling: de jure Ethernet standards, coming out of the work of Metcalfe, co-inventor David Boggs and others, helped advance networked PCs. Another force was government-influenced second-sourcing of key semiconductors. Not exactly the fashion of the moment, but things change.

Some background on me and Ethernet: When I was in grad school I did my thesis on Local Area Networks – a last-minute, arbitrary selection made at the suggestion of my wonderful classmate Richard Mack. I did the work of the technology assessment under the direction of my tech guru, the brilliant-beyond-words Prof. Kirt Olsen. Cut my teeth! Catching time over more than a year, while working a full-time night job at BU’s Mugar Library, I came up with the estimate that in five years LANs would be a $1B industry. I couldn’t believe it – but it happened. What an honor it was for me to interview Bob Metcalfe, the most pivotal and compelling individual figure in those developments! He has a great spirit still today, with enthusiasm and curiosity. Why I mention curiosity: he was a publisher and pundit in his time, too, you know – and asked me how what formerly was called the trade press was doing these days. – Jack Vaughan

Note: “How to Solve It” Cover by George Giusti/Typography by Edward Gorey – Book PDF Download

Also shown: Bob Metcalfe on a Zoom call with VentureBeat contributor Jack Vaughan.

 

 

