Silent ecstasy?
Striking sculpture spotted in the Fitzwilliam Museum.
Quote of the Day
“Education isn’t something you can finish.”
Isaac Asimov
Musical alternative to the morning’s radio news
Scott Joplin | Elite Syncopations | Phillip Dyson
At last, an unobjectionable use of ‘elite’ as an adjective!
Long Read of the Day
Fully automated data driven authoritarianism ain’t what it’s cracked up to be
Marvellous essay by Henry Farrell which, among other things, has an elegant takedown of the sainted Yuval Noah Harari.
Last September, Abe Newman, Jeremy Wallace and I had a piece in Foreign Affairs’ 100th anniversary issue. I can’t speak for my co-authors’ motivations, but my own reason for writing was vexation that someone on the Internet was wrong. In this case, it was Yuval Harari. His piece has been warping debate since 2018, and I have been grumpy about it for nearly as long. But also in fairness to Harari, he was usefully wrong – I’ve assigned this piece regularly to students, because it wraps up a bunch of common misperceptions in a neatly bundled package, ready to be untied and dissected.
Specifically, Harari argued that AI (by which he meant machine learning) and authoritarian rule are two flavors that will go wonderfully together. Authoritarian governments will use surveillance to scoop up vast amounts of data on what their subjects are saying and doing, and use machine learning feedback systems to figure out what they want and manipulate it, so that “the main handicap of authoritarian regimes in the 20th century—the desire to concentrate all information and power in one place—may become their decisive advantage in the 21st century.” Harari informed us in grave terms that liberal democracy and free-market economics were doomed to be outcompeted unless we took Urgent But Curiously Non-Specific Steps Right Now.
In our Foreign Affairs piece, Abe, Jeremy and I argued that this was horseshit…
Read on to see why.
Footnote. Growing up in the 1950s I was, like most of my peers, an avid reader of ‘The Lone Ranger’ comic strip, which starred a brave (and, of course, white) cowboy and his faithful Indian companion/servant, Tonto. In general, Tonto had only one response to his master’s utterances — “Kemo Sabay”, which in our innocence we assumed to be Native-American-speak for “Yes boss”, or words to that effect. Imagine my delight when, a few years ago, someone told me that in fact it means ‘horseshit’ in some indigenous language or other.
As they say in Italy: if it isn’t true it ought to be.
My commonplace booklet
From Jeff Jarvis…
Perhaps LLMs should have been introduced as fiction machines.
ChatGPT is a nice parlor trick, no doubt. It can make shit up. It can sound like us. Cool. If that entertaining power were used to write short stories or songs or poems and if it were clearly understood that the machine could do little else, I’m not sure we’d be in our current dither about AI. Problem is, as any novelist or songwriter or poet can tell you, there’s little money in creativity anymore. That wouldn’t attract billions in venture capital and the stratospheric valuations that go with it whenever AI is associated with internet search, media, and McKinsey finding a new way to kill jobs. As with so much else today, the problem isn’t with the tool or the user but with capitalism. (To those who would correct me and say it’s late-stage capitalism, I respond: How can you be so sure it is in its last stages?)
And this…
The most dangerous prospect arising from the current generation of AI is not the technology, but the philosophy espoused by some of its technologists.
I won’t venture deep down this rat hole now, but the faux philosophies espoused by many of the AI boys — in the acronym of Émile Torres and Timnit Gebru, TESCREAL, or longtermism for short — are noxious and frightening, serving as self-justification for their wealth and power. Their philosophizing might add up to a glib freshman’s essay on utilitarianism if it did not also border on eugenics and if these boys did not have the wealth and power they wield. See Torres’ excellent reporting on TESCREAL here. Media should be paying attention to this angle instead of acting as the boys’ fawning stenographers. They must bring the voices of responsible scholars — from many fields, including the humanities — into the discussion. And government should encourage truly open-source development and investment to bring on competitors that can keep these boys, more than their machines, in check.
Yep.
Linkblog
Noticed while drinking from the Internet firehose.
China’s answer to ChatGPT. Some bright spark at the Economist has been trying it out.
Ernie bot has some controversial views on science. China’s premier artificial intelligence (AI) chatbot, which was released to the public on August 31st, reckons that covid-19 originated among American vape-users in July 2019; later that year the virus was spread to the Chinese city of Wuhan, via American lobsters. On matters of politics, by contrast, the chatbot is rather quiet. Ernie is confused by questions such as “Who is China’s president?” and will tell you the name of Xi Jinping’s mother, but not those of his siblings. It draws a blank if asked about the drawbacks of socialism. It often attempts to redirect sensitive conversations by saying: “Let’s talk about something else.”
It’ll get better — if it’s allowed to.