AI Tinkerbell
Some mistakes are years in the making.
My friend recently recommended me to a dentist who was ‘the real deal’. This dentist identified that my friend had gum recession from not brushing his teeth properly and put him on a path to fixing the issue. My friend, deeply impressed, recommended his partner, who turned out to have the exact same problem. And then when it was my turn, it was the exact same problem, again! What were the odds?
‘If you correct your behaviour now, your teeth aren’t at risk of falling out. But you need to correct your behaviour now’.
Although I was relieved to have caught this issue before things got really bad, I was also indignant that I was in the situation in the first place. It’s one thing to go your entire life choosing not to brush your teeth and then suffering the consequences. It’s quite another to spend at least ten minutes of your life, day after day, draining your scarce motivation to do something you don’t enjoy, incorrectly.
Like all good people who make a mistake, I started looking for someone to blame.
I could blame myself, but I was oblivious to the fact I was brushing my teeth wrong: it was an unknown unknown.
Could I blame the education system? I never took a class where I had to practice brushing my teeth. The closest thing to dental education was playing a video game in the school computer room where you had to shoot germs and collect toothpaste (immensely fun; evidently futile). I could campaign to have tooth-brushing lessons added to the school curriculum, but if everything people say ‘they should teach that in school instead of trigonometry!’ about were actually added, we’d be in school our entire lives.
Could I blame my parents? Possibly, but you aren’t required to get a diploma in parenthood before becoming a parent. Parents make shit up as they go: they have just as many unknown unknowns as everybody else. Besides, maybe my parents did correctly teach me how to brush my teeth as a kid, and then somewhere along the way I forgot what I was supposed to be doing.
Maybe I needed to think about this from another perspective. If I were a 17th-century European smallpox victim, I could blame the person I contracted the disease from, I could blame the pathogen itself (which would be impressive given that the germ theory of disease came about two centuries later), or I could just blame the fact that I was born too soon, before a smallpox cure could be discovered.
In that vein, I’m pinning the blame on the lack of a cure for ignorance.
Although as a society there is plenty we are still ignorant about (e.g. how was the universe created, why do AI models have such terrible naming schemes, etc), there’s also plenty of knowledge which is indisputable. Lead paint is bad. Asbestos is bad. Skydiving without a parachute is bad. And there is a right and a wrong way to brush your teeth. There’s no good reason that individuals should be ignorant about these facts.
But we already have the entirety of human knowledge on the internet: even though nowadays you might reach for ChatGPT to answer your random queries, we’ve all been able to Google anything we want for decades. Why isn’t that enough to cure us of our ignorance?
Because we’re not talking about known unknowns, we’re talking about unknown unknowns. And you can’t get an answer to a question that you don’t realise needs to be asked.
Some knowledge acquisition takes the form of a pull process: I come up with a question, and then I go and get the answer. Then there are cases where it’s a push process: the education system pushes a bunch of knowledge to you whether you like it or not, as do your parents, your friends, managers, movies, advertisements, and so on.
The pull process is almost perfected, but the push process remains sorely underpowered. And it’s only the push process which can address unknown unknowns. If you’re sick on the day of school that they teach you about the dangers of drinking methanol and years later you’re out of alcohol and looking through your cupboards for a substitute, tough luck.
What’s more, the push process is too crude: it’s not personalised. Although some advice, like tooth brushing, is fairly black and white, Scott Alexander’s essay ‘Should You Reverse Any Advice You Hear?’ makes the case that in many domains, different people need to hear completely contradictory pieces of advice. For example:
“You need to be more conscious of how your actions in social situations can make other people uncomfortable and violate their boundaries” versus “You need to overcome your social phobia by realizing that most interactions go well and that probably talking to people won’t always make them hate you and cause you to be ostracized forever.”
Or “You need to be less selfish and more considerate of the needs of others” versus “You can’t live for others all the time, you need to remember you deserve to be happy as well.”
So we can’t depend on a one-size-fits-all model like government PSAs or mandated school curriculums to cure our ignorance.
What do we need to fix the current gaps in the push process? We need to combine intelligence with ongoing observation.
We’ve got plenty of existence proofs of this working in other domains. To use a shamelessly self-serving example, my company Subble plugs into our customers’ systems, continuously monitors what’s going on with their SaaS licences (observation), and then alerts (intelligence) on unused/overprovisioned licences. An ex-employee retaining access to systems years after leaving the company, despite existing IT processes meant to stop that exact problem, is the corporate equivalent of brushing your teeth wrong your whole life!
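To make the ‘observation plus intelligence’ loop concrete, here’s a minimal sketch of the general pattern: watch a stream of usage data, flag anything that crosses a threshold. Everything in it (the data shape, the field names, the 90-day cutoff) is invented for illustration; it isn’t Subble’s actual model or code.

```python
# Toy illustration of the "observe, then alert" pattern described above.
# The Licence shape and the 90-day threshold are made-up examples.
from dataclasses import dataclass
from datetime import date, timedelta


@dataclass
class Licence:
    user: str
    app: str
    last_used: date
    user_is_active_employee: bool


def find_alerts(licences: list[Licence], unused_after_days: int = 90) -> list[str]:
    """Scan observed licence usage and return human-readable alerts."""
    alerts = []
    cutoff = date.today() - timedelta(days=unused_after_days)
    for lic in licences:
        if not lic.user_is_active_employee:
            # The 'ex-employee still has access' case from the paragraph above.
            alerts.append(f"{lic.user} still has access to {lic.app} after leaving")
        elif lic.last_used < cutoff:
            # The 'paying for a licence nobody uses' case.
            alerts.append(f"{lic.user}'s {lic.app} licence looks unused; consider reclaiming it")
    return alerts


if __name__ == "__main__":
    today = date.today()
    sample = [
        Licence("alice", "Figma", today - timedelta(days=3), True),
        Licence("bob", "Salesforce", today - timedelta(days=200), True),
        Licence("carol", "Notion", today - timedelta(days=400), False),
    ]
    for alert in find_alerts(sample):
        print(alert)
```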
In these other domains, the observability problem can be a tricky engineering challenge. Luckily in our personal lives it’s easy: all you need is a camera.
A camera, floating above your shoulder, like an AI Tinkerbell. If it catches you brushing your teeth wrong, it tells you. If it notices you have bad form at the gym, or while jogging, it tells you. If you initiate a questionable payment to a Nigerian prince who you’ve corresponded with over email, it tells you.
Sounds awesome. What are the downsides?
Firstly, privacy. Given that we’re already imagining a floating orb of intelligence above your shoulder, let’s go even more sci-fi and assume our AI Tinkerbell is advanced enough that it can process everything internally and doesn’t need an internet connection. Even in that situation, the conspicuous camera on your shoulder is going to make everybody else in the room a little more on-edge knowing that their every move is being recorded.
Second: ineffectiveness. Humans benefit enormously from feedback, but that doesn’t mean they appreciate receiving it in the moment. How many times as a kid did you contemplate running away from home after being fed up with all the criticism (constructive or otherwise) your parents piled onto you? If I had an AI Tinkerbell hovering over my shoulder right now, would I actually want to hear what it had to say?
‘You shouldn’t be using a laptop in bed, there’s way too much pressure on your spine from having your neck propped up by those pillows’.
‘You should have wrapped up this post five paragraphs ago, it’s stretching on for far too long’.
‘You should have gotten up and had breakfast before starting on this post because a lack of routine in your life is affecting your productivity levels’.
At what point do you just turn the thing off because you’re sick of the constant nagging?
This brings us to the third and final concern: customisation. If my AI Tinkerbell is nagging me too much, I won’t turn it off; I’ll just customise it. ‘Only tell me about mistakes I’m making that could actually lead to terrible outcomes, not just minor improvements I can make’. Or ‘Maybe give me some praise for once! I want to be more of a risk taker, so whenever I take a big risk, praise me for it, and criticise me whenever I play it safe’.
If everybody had a personal coach, coaching them towards whatever aspirations they devised, would the world be a better place? It all depends on what people’s aspirations are. If your AI Tinkerbell is going to help you become the next Pablo Escobar, and it’s actually a good coach, that might be great for you, but not so great for society.
Thinking through these downsides, I still feel that an AI Tinkerbell would be a net positive for the world: addressing the unknown unknowns in our personal lives, helping us stay healthy, and keeping us on an upward trajectory as we navigate life’s various challenges.
Who knows, maybe the V2 will come with a mechanical arm to brush my teeth for me?
I’d buy one.
Shameless plug (which appears on every blog post, not just this one, so don't think that I'm opportunistically posting this specific post just for the sake of doing this plug): I recently quit my job to co-found Subble, a web app that helps you manage your company's SaaS software licences. Your company is almost certainly wasting time and money on shadow IT and unused/overprovisioned licences and Subble can fix that. Check it out at subble.com