Thursday 1 July, 2021
The Bee’s-eye view
A Foxglove in our garden, this morning.
Quote of the Day
“Suburbia is where the developer bulldozes out the trees, then names the streets after them.”
Bill Vaughan
Musical alternative to the morning’s radio news
The Travelling Wilburys | Handle With Care | Concert For George Live | 2002
Long Read of the Day
The Devil’s Dictionary of AI talk
Wonderful compendium by Karen Hao. Think of it as Ambrose Bierce’s take on so-called ‘AI’.
accuracy (n) – Technical correctness. The most important measure of success in evaluating an AI model’s performance. See validation.
adversary (n) – A lone engineer capable of disrupting your powerful revenue-generating AI system. See robustness, security.
alignment (n) – The challenge of designing AI systems that do what we tell them to and value what we value. Purposely abstract. Avoid using real examples of harmful unintended consequences. See safety.
artificial general intelligence (phrase) – A hypothetical AI god that’s probably far off in the future but also maybe imminent. Can be really good or really bad, whichever is more rhetorically useful. Obviously you’re building the good one. Which is expensive. Therefore, you need more money. See long-term risks.
audit (n) – A review that you pay someone else to do of your company or AI system so that you appear more transparent without needing to change anything. See impact assessment.
augment (v) – To increase the productivity of white-collar workers. Side effect: automating away blue-collar jobs. Sad but inevitable.
beneficial (adj) – A blanket descriptor for what you are trying to build. Conveniently ill-defined. See value.
compliance (n) – The act of following the law. Anything that isn’t illegal goes.
data labelers – The people who allegedly exist behind Amazon’s Mechanical Turk interface to do data cleaning work for cheap. Unsure who they are. Never met them.
democratize (v) – To scale a technology at all costs. A justification for concentrating resources. See scale.
diversity, equity, and inclusion – The act of hiring engineers and researchers from marginalized groups so you can parade them around to the public. If they challenge the status quo, fire them.
efficiency (n) – The use of less data, memory, staff, or energy to build an AI system.
ethics board – A group of advisors without real power, convened to create the appearance that your company is actively listening. Examples: Google’s AI ethics board (canceled), Facebook’s Oversight Board (still standing).
ethics principles – A set of truisms used to signal your good intentions. Keep it high-level. The vaguer the language, the better. See responsible AI.
explainable (adj) – For describing an AI system that you, the developer, and the user can understand. Much harder to achieve for the people it’s used on. Probably not worth the effort. See interpretable.
fairness (n) – A complicated notion of impartiality used to describe unbiased algorithms. Can be defined in dozens of ways based on your preference.
for good – As in “AI for good” or “data for good.” An initiative completely tangential to your core business that helps you generate good publicity.
foresight (n) – The ability to peer into the future. Basically impossible: thus, a perfectly reasonable explanation for why you can’t rid your AI system of unintended consequences.
framework (n) – A set of guidelines for making decisions. A good way to appear thoughtful and measured while delaying actual decision-making.
generalizable (adj) – The sign of a good AI model. One that continues to work under changing conditions. See real world.
governance (n) – Bureaucracy.
human-centered design – A process that involves using “personas” to imagine what an average user might want from your AI system. May involve soliciting feedback from actual users. Only if there’s time. See stakeholders.
human in the loop – Any person who is part of an AI system. Responsibilities range from faking the system’s capabilities to warding off accusations of automation.
impact assessment – A review that you do yourself of your company or AI system to show your willingness to consider its downsides without changing anything. See audit.
integrity (n) – Issues that undermine the technical performance of your model or your company’s ability to scale. Not to be confused with issues that are bad for society. Not to be confused with honesty.
interdisciplinary (adj) – Term used of any team or project involving people who do not code: user researchers, product managers, moral philosophers. Especially moral philosophers.
interpretable (adj) – Description of an AI system whose computation you, the developer, can follow step by step to understand how it arrived at its answer. Actually probably just linear regression. AI sounds better.
long-term risks (n) – Bad things that could have catastrophic effects in the far-off future. Probably will never happen, but more important to study and avoid than the immediate harms of existing AI systems.
partners (n) – Other elite groups who share your worldview and can work with you to maintain the status quo. See stakeholders.
privacy trade-off – The noble sacrifice of individual control over personal information for group benefits like AI-driven health-care advancements, which also happen to be highly profitable.
progress (n) – Scientific and technological advancement. An inherent good.
real world – The opposite of the simulated world. A dynamic physical environment filled with unexpected surprises that AI models are trained to survive. Not to be confused with humans and society.
regulation (n) – What you call for to shift the responsibility for mitigating harmful AI onto policymakers. Not to be confused with policies that would hinder your growth.
responsible AI (n) – A moniker for any work at your company that could be construed by the public as a sincere effort to mitigate the harms of your AI systems.
robustness (n) – The ability of an AI model to function consistently and accurately under nefarious attempts to feed it corrupted data.
safety (n) – The challenge of building AI systems that don’t go rogue from the designer’s intentions. Not to be confused with building AI systems that don’t fail. See alignment.
scale (n) – The de facto end state that any good AI system should strive to achieve.
security (n) – The act of protecting valuable or sensitive data and AI models from being breached by bad actors. See adversary.
stakeholders (n) – Shareholders, regulators, users. The people in power you want to keep happy.
transparency (n) – Revealing your data and code. Bad for proprietary and sensitive information. Thus really hard; quite frankly, even impossible. Not to be confused with clear communication about how your system actually works.
trustworthy (adj) – An assessment of an AI system that can be manufactured with enough coordinated publicity.
validation (n) – The process of testing an AI model on data other than the data it was trained on, to check that it is still accurate.
value (n) – An intangible benefit rendered to your users that makes you a lot of money.
values (n) – You have them. Remind people.
withhold publication – The benevolent act of choosing not to open-source your code because it could fall into the hands of a bad actor. Better to limit access to partners who can afford it.
Where there’s a grille there’s a vent
Absolutely fascinating Guardian piece by Oliver Wainwright on Inventive Vents: A Gazetteer of London’s Ventilation Shafts, a new book that celebrates the disguised vents, shafts and funnels that help London’s underground breathe.
A gas lamp still flickers on the corner of Carting Lane in the City of Westminster, adding a touch of Dickensian charm to this sloping alleyway around the back of the Savoy Hotel. The street used to be nicknamed Farting Lane, not in reference to flatulent diners tumbling out of the five-star establishment, but because of what was powering the streetlamp: noxious gases emanating from the sewer system down below.
The Sewer Gas Destructor Lamp, to give the ingenious device its proper patented name, was invented by Birmingham engineer Joseph Webb in 1895, and it still serves the same purpose today. As a plaque explains, it burns off residual biogas from Joseph Bazalgette’s great Victorian sewer, which runs beneath the Victoria Embankment at the bottom of the lane.
Thanks to Charles Arthur for the link.
How to ask better questions
I’ve often thought that the one thing that marks out the brilliant people I’ve known is that they ask questions that open up areas of inquiry others have ignored, or been unaware of. A reader of Tyler Cowen’s terrific blog asked him how he manages to do this. Here’s his reply:
Highly specific questions are better on average.
It is often better to preface a question with a confession of some sort, or with information from yourself. That sets a standard for the respondent. Set that standard high!
Demonstrate credibly that you are truly listening and that you care about the answer.
With any possible question, ask yourself in advance: can the person being asked the question respond too easily in a vague and not very useful way? “Why did you write a book about Napoleon? Well, let me tell you, French history always fascinated me.” etc. If that is the kind of slop you might get back in response, try making the question more pointed or more specific.
High status people get better answers than do low status people. So be high status. Or at least credibly pretend to be high status.
I have enjoyed Gregory Stock’s The Book of Questions.
You might say “listen to other interviewers.” Well, maybe, but perhaps not too much? They will encourage you, by default, to ask the same questions that everyone else does. And too many of the sources available to you are mega-famous people who are getting by using their fame to boost the significance of their questions. (Anything Oprah might ask me would be interesting per se.) So use this standard tip sparingly and with caution.
Any questions about all this?
Other, hopefully interesting, links
Artist Makes Portraits That Age As You Move Around Them. Interesting technique, slightly scary. Link
Facebook is launching a clone of Substack. This is what Zuckerberg calls ‘innovation’ — basically lifting other people’s good ideas. Link
NASA Chief Says He Believes Aliens Are Real. Of course they are. Hasn’t he ever heard of Rudy Giuliani? Or Dominic Cummings? Link