[September 7, 2020] – Something new happened when newly capable machine learning algorithms met the big, 21st-century data troves of Google and Facebook. Spring had sprung!
But if you look at some of the smoke signals drifting out of the Valley, you may wonder if scenes from an earlier AI Winter may be replayed.
Distinguishing cats from dogs is one thing; autonomous driving is another.
AI’s forward steps are often accompanied by equal sidesteps. But its fortune continues to rest on the fulfillment of machine learning methods based on neural nets. After all, that has been the biggest driver behind the upsurge in AI.
It has been exciting to watch the rebirth, but caution should still obtain. That’s the core of my analysis as a reporter – one who saw neural nets enter a long winter, and later emerge from obscurity.
Problems that deep learning — machine learning’s latest best hope — faces include:
*The vast amounts of computing needed to train the deep learner,
*The massive quantities of data and electricity such computing entails, and
*The algorithms’ questionable ability to adapt to unexpected data patterns, like those a Black Swan event such as the Covid-19 pandemic can serve up.
Faults like these are well known within the machine learning community, and were much discussed in the technical confabs that preceded the general cultural slowdown that the pandemic has wrought. Yes, they are working on it.
To be fair, when AI actually escapes from the lab it takes on a different persona. It becomes more businesslike and less magical. But the business argument for deep learning, and for AI itself, is a moving target, as a monograph by a16z analysts tends to indicate.
In short, the AI edge that big data and cloud players like AWS, Google, and Microsoft have exhibited does not easily transfer to startups working in the AI vineyard.
After some writings earlier this year that questioned what kind of a moat AI startups can build around their business, Martin Casado and Matt Bornstein went on to consider how different an AI business may be from a garden variety software business. Their piece is entitled The New Business of AI and How It’s Different from Traditional Software.
While the overall future of software may technically take the path of AI, they conclude, the economics of the undertaking may more resemble a consultant/services business than the SaaS software model.
That is because there is a lot of work that goes into making machine learning work in special cases. It is unfortunate but the world is a maze of special cases.
Casado and Bornstein write:
AI is showing remarkable progress on a range of difficult computer science problems, and the job of software developers – who now work with data as much as source code – is changing fundamentally in the process.
Many AI companies (and investors) are betting that this relationship will extend beyond just technology – that AI businesses will resemble traditional software companies as well. Based on our experience working with AI companies, we’re not so sure.
Compared to your regular software enterprise, AI companies have lower gross margin (due to heavy cloud infrastructure and labor costs), greater scaling challenges in solving edge cases and weaker defensive moats due to the commoditization of AI models.
Our a16z authors have probably arrived at this conclusion after having more meetings with AI dragon slayers than we here have had hot lunches this year.
That AI, deep learning, and machine learning face challenges should not be a surprise. But these techs should continue to face scrutiny, especially when it comes to some of the further reaches of use cases, as will be discussed below.
One of the most enchanting movies in recent memory is The Vast of Night. Something like a slicked-up The Blob or a science-fiction version of The Last Picture Show, the film is set on a summer night in the 1950s in New Mexico, and something akin to UFOs is on the prowl. The Vast of Night centers on a boy who is a DJ at a radio station and a girl who runs a telephone switchboard, and their encounter with something vague and eerie.
In the setup, it is clear the boy, Everett, is a technical geek and a cynic. Meanwhile the girl, Fay, is enthusiastic and wide-eyed – anxiously looking to the future. The film takes on the eternal glow of an old tube amp as Fay gushes to Everett about the technological wonders ahead – the ones she has read about in Popular Science – and he squints skeptically.
This world of the future is not far away, she assures him, and it will include portable video phones and self-driving cars. Her naivete would draw chuckles in a theatre, as the cell phone and video phone have arrived. For me, it brought a flash of recognition.
That is because in my own little time-capsule movie, an 8th grade class field trip had allowed me to take part in a video phone call to a girl at the New York World’s Fair in 1964.
Later, I told the folks at dinner back home that “TV phones are coming soon.”
The family did not buy it. The advice was: “Eat your dinner.” They were right.
It wasn’t until this year’s Pandemic-driven adventure in Zoom-A-Rama that I personally became a regular user of such video communications capabilities.
The GM exhibit at that same World’s Fair showed automated highways – a dream still deferred.
Today, the self-driving car is pursued as the litmus test for deep learning and other AI techniques. But its ascent is still in question, and ‘Where is my self-driving car?’ has joined ‘Where is my flying car?’ among popular memes.
Which leads us to a discussion of Starsky Robotics. In February, as the Coronavirus began to spread, this driverless trucking company shut down. Even though it was the first company to publicly run a 7-mile load without a driver, it failed to obtain financing for further development.
Company co-founder Stefan Seltz-Axmacher was philosophical about the failure and wrote an oft-quoted blog post on Medium.com that outlines the challenges ahead for such AI undertakings.
The biggest insight he offers there perhaps is this: “Supervised machine learning doesn’t live up to the hype.”
Early this summer as I sat in on a Robotics Business Review webinar on manual vehicle automation, it seemed a good occasion to ask analyst Rian Whitton (ABI Research) for his take on the Starsky experience.
Whitton is a deft researcher and observer on robotic automation – no rose-colored glasses, either. Whitton recommended the Medium blog entry to anyone interested in automation and AI. It raises questions about whether automated systems would finally be up to the task, and whether the business prerequisites are sufficiently aligned.
Whitton boils the question down to: “Does more data equal a better system, to the point where eventually, through superior algorithms, training and more edge cases, the self-driving car revolution would simply come about?”
He, like others, points out that it becomes increasingly expensive to obtain the data needed to actually improve the necessary algorithm or program.
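The diminishing-returns pattern behind that observation can be sketched with a toy model. The function below is purely hypothetical – it assumes, as is widely reported for deep learning systems, that error falls off roughly as a power law in dataset size; it does not model Starsky’s actual data.

```python
# Toy illustration of diminishing returns from more training data.
# Hypothetical assumption: error decays as a power law in dataset size.

def error(n_samples, a=1.0, alpha=0.35):
    """Hypothetical error rate for a model trained on n_samples examples."""
    return a * n_samples ** -alpha

for n in [1_000, 10_000, 100_000, 1_000_000]:
    gain = error(n) - error(10 * n)
    print(f"{n:>9,} samples: error={error(n):.4f}, "
          f"gain from 10x more data={gain:.4f}")
```

Under this assumption, each tenfold increase in data buys a smaller absolute error reduction, even as the cost of collecting and labeling the rarer edge cases keeps climbing – which is the economic squeeze Whitton describes.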
“In a sense,” he says, summing up Seltz-Axmacher’s thesis, “the logic — that simply through more testing and more deployments, self-driving cars become safer — doesn’t actually hold water.”
That’s a problem awaiting anyone banking on the next AI Golden Age. But, Whitton notes, something short of full autonomy may show actual cost benefits, and could spell some type of progress.
It may be the case that much of AI and machine learning in time to come will be about paring down the problem until it can be efficiently solved. – Jack Vaughan
The Vast of Night – IMDB
The End of Starsky Robotics – Medium
The Automation of Manual Vehicles: Insights, Analysis and Opportunities – Robotics Business Review [Reg req]
The New Business of AI and How It’s Different from Traditional Software – a16z.com
Machine learning tools pose educational challenges for users – TechTarget