Advances in Artificial Intelligence have always been fitful, moving forward in spasms. When these advances take the form of “AI,” they are cloaked in a mystic veil – one that lifts as they make it to production.
When AI models actually work, they become castoffs. Once-new technologies like machine vision, circuit optimization, and voice recognition shed their AI badge and sprout unannounced in fertile ground.
AI that doesn’t sprout continues its long trudge through academic research labs, questing for a holy grail that usually amounts to “predicting the future.” Then it sprouts anew. The much-discussed algorithms of Amazon, Google, Facebook and Cambridge Analytica have a pre-history and are cases in point. Their makers don’t mind if some of the mystic veil remains.
This is what it looks like today – Amazon predicts what book you’ll buy, Google predicts which ad to serve you, and Facebook predicts which meme of the moment will hold your attention.
The purpose of Cambridge Analytica’s algorithm was to predict which profiles could be pinged to move an election. To the extent it may have influenced one, it might be the most powerful algorithm of all. In any case, all these wonks’ algorithms had precursors in Collaborative Filtering and Software Agents – AI efforts of yore. Today they prowl the Web inside Recommendation and Personalization engines.
Such precursors are forgotten in the rushing currents of news. Behind the algorithms is a history populated by people with the requisite defects that humans show, which a recent book by Harvard historian Jill Lepore uncovers in engaging detail.
In “If Then: How the Simulmatics Corporation Invented the Future” [W.W. Norton, 2020], Lepore explores a 1960s corporation that sought to apply what is now called data science to politics and other domains – working its algorithms by hand at first, then on the mainframe computers of the day – to predict social outcomes.
A central figure in her narrative is Ithiel de Sola Pool, Cold War Era MIT political scientist and co-founder of the Simulmatics Corporation. Like more than a few of today’s machine learning seers, his predictions were couched in bias, hyped with unwarranted certitude, and leavened with shortcuts. Mispredictions eventually gained Simulmatics a reputation as an oversold under-deliverer.
It all makes for a good read. In Lepore’s telling, de Sola Pool flies too close to the sun and falls victim to a pseudoscience of prediction. The caution in the tale should not be lost on the masters of Silicon Valley today.
De Sola Pool’s company took its name from the combination of ‘simulation’ and ‘automatic.’ The AI lineage lies in the simulation of models, and the quest for automation.
The title “If Then” derives from Simulmatics’ focus on automated reasoning and data-driven, computer-based behavioral simulations – ‘what if’ analyses or game playing that remain central to the algorithms of today’s Big Data giants.
During World War II, de Sola Pool had worked within the Defense Department, studying propaganda and combining statistical and psychological analysis. Like others, he wondered how new computer technology, first applied to artillery firing tables and nuclear reaction modeling, could be applied to other problems.
So, Simulmatics was not alone. There was the Rand Corp., the Army Math Research Center at the University of Wisconsin, and others. Simulmatics was one of a small handful of early efforts to apply the computer to prediction in social and, eventually, military realms, but it was early to gain attention. The company’s first and most vivid appearance came in its work on behalf of the Kennedy campaign in 1960.
Kennedy’s use of computers to target voter blocs was controversial – more so, even, than Cambridge Analytica’s work was in 2016. Generally, automation was no more welcomed in the ’60s than it is now. Bad ink and some bad predictions led the smooth-talking de Sola Pool to steer Simulmatics toward the perceived greener pastures of military planning and operational efficiencies for the Defense Dept in South Vietnam – this as war raged and its political polling work dried up.
In the mid-’60s, the company’s staff would interview peasants, give them Rorschach tests, churn data simulations through computers, predict likely futures and file reports – all to little obvious immediate effect. Later, back at MIT, as anti-war sentiment swelled, de Sola Pool was the object of a mock war-crimes trial staged by protesters. By then, Simulmatics was bankrupt.
On this and other points, author Lepore is perceptive. The principals believed the mathematics of targeting messages could be derived from the same mainframe-based modeling used to target missiles, she writes, not missing opportunities to highlight the researchers’ missteps and hubris.
It is not easy to measure the life work of de Sola Pool, but this is more than a start. At one point Lepore turns to famed whistle-blower and one-time Rand data analyst Daniel Ellsberg for a verdict on de Sola Pool, and Ellsberg is not shy:
“He was a very charming guy. Still I thought of him as the most corrupt social scientist I had ever met, without question.”
Lepore’s “If Then” includes a heaping helping of humanist viewpoint. She is highly aware of the mystical aura to which some technologists aspire. It’s a short plank’s walk to hubris, she suggests, as in her writing here on the then incredibly influential Ford Foundation:
“…Ford asked social scientists to make prediction the entire object of their research, to claim knowledge – empirical, probabilistic, mathematical knowledge – of what would happen next. This however was no less mystical than the ancient art of prophecy.”
With “If Then” Lepore weaves a sometimes off-beat tale – this is the ’60s, right? – of what I would call the Roots of Recommendation. Things – relationships, particularly – come together and fall apart in fairly quick order. Some bits are like Quentin Tarantino’s “Once Upon a Time in Hollywood,” while others are more firmly attached to the tone of your more usual history of technology.
The extent of contemporary backlash against Simulmatics’ activity is among the personal discoveries Lepore recounts in a Zoom interview she did with RPI and Harvard computer science researcher Fran Berman for Harvard’s Data Science Initiative. [I concur with many of the points she makes in the discussion, especially when she suggests that, for tomorrow’s data scientists, studying computer history might be more valuable than straight-up studying “ethics.”]
I found “If Then” informative and, in light of the zest for AI so common today, especially compelling. I hope your interest is piqued sufficiently that you will click on the YouTube video below – and check out the book, too. My Recommendation Engine recommends it! – Jack Vaughan