<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="3.10.0">Jekyll</generator><link href="https://jesseduffield.com/feed.xml" rel="self" type="application/atom+xml" /><link href="https://jesseduffield.com/" rel="alternate" type="text/html" /><updated>2026-03-03T17:56:54+11:00</updated><id>https://jesseduffield.com/feed.xml</id><title type="html">Pursuit Of Laziness</title><subtitle>A blog by Jesse Duffield</subtitle><entry><title type="html">You think unemployed, I think retired</title><link href="https://jesseduffield.com/You-Think-Unemployed-I-Think-Retired/" rel="alternate" type="text/html" title="You think unemployed, I think retired" /><published>2026-03-01T00:00:00+11:00</published><updated>2026-03-01T00:00:00+11:00</updated><id>https://jesseduffield.com/You-Think-Unemployed-I-Think-Retired</id><content type="html" xml:base="https://jesseduffield.com/You-Think-Unemployed-I-Think-Retired/"><![CDATA[<blockquote>
  <p>Work sucks, I know</p>

  <p>- blink-182</p>
</blockquote>

<p>There are many smart people in the AI safety crowd who think AI is going to kill everyone, and if they’re right about that then this entire post is moot.</p>

<p>But so long as they’re <em>not</em> right about it, I’ve got a bone to pick.</p>

<p>You know when you think you’re on the same page as everybody else and then something happens and you realise that actually you’ve been on very different pages the entire time? Think back to post-season 4 Game of Thrones, when the show was starting to show cracks and, as it got progressively worse, some people actually enjoyed it <em>more</em>, claiming that it was better now because things were moving faster.</p>

<p>Or when you and a friend really enjoyed the Lord of the Rings trilogy but then when the Hobbit trilogy came out they ended up <em>preferring</em> it over LOTR?? (Sorry, Annie)</p>

<p>That same feeling of despair floods my soul now.</p>

<p>People really don’t like the idea of AI taking everybody’s jobs and I just can’t relate. I can relate to the fear of losing your individual job while most people keep theirs, but I can’t relate to the fear of everybody losing their jobs.</p>

<p>I’m one of the very few people who has found their passion (programming) and gets to do it every day. I’m one of the <em>lucky</em> ones, and I cannot wait until I no longer have to work.</p>

<p>The best times of my life have been when I’m not working. High-school summer holidays were golden. You have enough time to unburden yourself from the mental shackles of intense study and you get to actually enjoy life without the nagging feeling that you’re neglecting your obligations.</p>

<p>Even the shorter holidays I’ve taken as an adult have been great. I took a week off to do a side project and play video games in December and it was fantastic. I had my phone on silent the entire time, I didn’t listen to any podcasts, or watch short form video content, or use social media, or any of the other cheap dopamine hits that are necessitated by an intense work schedule. As much as I wished that I could continue such a ‘wholesome’ routine after returning to work, I knew that was a fantasy.</p>

<p>As AI continues to advance and the number of people uninformed or pathologically contrarian enough to call LLMs mere ‘stochastic parrots’ continues to dwindle, there is on the horizon the potential for another fantasy, a life without work, to become reality.</p>

<p>You probably know multiple people (if not the majority of your friends and family, including yourself) who are immiserated by their job. It’s stressful, they hate their coworkers, they hate their managers, they hate their customers, it doesn’t pay well, it’s not fulfilling, sometimes the work is borderline unethical. You spend half your waking life at work: what kind of life is that to live?</p>

<p>Bad jobs aren’t just bad while you’re on the clock: they’re also bad while you’re off the clock. People’s brains are fried from working all day and all they have the strength to do afterwards is watch television, doom scroll, or partake in some other similarly unhealthy, undignified activity.</p>

<p>But, you say, being unemployed is <em>even worse!</em> I don’t buy it. Sure, being unemployed <em>today</em> means you’re wallowing in self pity, doubting your worth after countless job application rejections, watching your peers go to work while you produce nothing of value for society. That sounds shitty.</p>

<p>But guess who else produces nothing of value for society and loves life? Retirees! Same number of working hours (zero), completely different level of happiness. Anybody who’s done B2B cold calls knows how frequently you’ll call up a number, ask if the person still works at X company, and they’ll say ‘Actually I’m retired now’. I always ask how they find retirement and I’ve yet to hear somebody tell me that it wasn’t awesome. Why the difference? Because there’s no social expectation for retirees to bust their balls every workday, and because they have enough savings that they aren’t impoverished by their lack of work.</p>

<p>It is twisted that we live in a world where the only time you get to experience all that freedom is when your body is old and your mind has begun its decline.</p>

<p>When people bemoan mass unemployment from AI, all I hear is a lack of awareness about how bad the status quo is, and a lack of imagination for how good things could be.</p>

<p>But, you say, work is a source of meaning! Then why, given how low the unemployment rate is, are we living in a meaning crisis? Boy, my life would be so much more meaningful if I could learn an instrument, learn to cook, train for a marathon, volunteer in my community, raise a family, coach my kid’s sport team, write a novel, write a play, perform in a play, take up a sport, build some muscle, take up gardening, organise a neighbourhood event, learn a language, or any other fulfilling activity. Why am I not doing those things? Because I’m too busy bloody <em>working!</em></p>

<p>Perhaps your concern is that AI will hollow out the middle class while the rich get richer. But remember what kind of world we’re talking about here: a world where the cost of producing goods approaches zero because robots can farm, manufacture, and deliver anything. In a world like that it would be a genuine logistical challenge for the ultra-rich to <em>prevent</em> cheap goods from reaching you. They would have to coordinate to suppress abundance on a global scale. You don’t need to hope for benevolent oligarchs, even with malevolent ones you’re not going to go hungry.</p>

<p>Are there going to be genuine challenges that we as a society will have to face once work becomes a thing of the past? Of course. There are going to be huge challenges, not least of which is the transition itself. But I’d much rather face those challenges than pull the plug on AI progress and maroon ourselves in the status quo, whose hellishness we’ve all unconsciously grown over-familiar with, and which we only now have a true chance of escaping.</p>

<p>If there’s one thing the AI safety crowd and I can agree on, it’s that I’ll be retiring early. I for one cannot wait.</p>]]></content><author><name></name></author><summary type="html"><![CDATA[Work sucks, I know - blink-182]]></summary></entry><entry><title type="html">Are AI agents cognitive Ozempic?</title><link href="https://jesseduffield.com/Are-AI-Agents-Cognitive-Ozempic/" rel="alternate" type="text/html" title="Are AI agents cognitive Ozempic?" /><published>2026-02-14T00:00:00+11:00</published><updated>2026-02-14T00:00:00+11:00</updated><id>https://jesseduffield.com/Are-AI-Agents-Cognitive-Ozempic</id><content type="html" xml:base="https://jesseduffield.com/Are-AI-Agents-Cognitive-Ozempic/"><![CDATA[<p>Ozempic is an equaliser. There’s only so far you can go in the direction of a healthy weight. If you were overweight, Ozempic is great news. But if you were already at a healthy weight? Well, now you’re less special. That’s what equalisers do: they compress the range of outcomes.</p>

<p>Substack is an amplifier.</p>

<p>Once upon a time, if you wanted to make a living as a writer, you had to convince an editor to hire you. That gatekeeping was unfair, but more importantly, it also compressed outcomes! A mediocre staff writer at the New York Times earned roughly the same salary as a brilliant one.</p>

<p>Then Substack removed the gate entirely. A small number of writers with existing audiences and distinctive voices now earn seven figures. The long tail earns nothing. Access has been equalised (you can go and make a Substack right now!) but <em>outcomes</em> have been amplified (nobody’s going to read your Substack).</p>

<p>The equalise-access, amplify-outcomes pattern is common to technology: the internet did it with music distribution, online courses did it with education, and Shopify did it with entrepreneurship. Every time, the gatekeepers are removed, the barrier to entry is reduced, but the underlying distribution of talent remains the same and is now acutely exposed.</p>

<p>So why do some technologies equalise outcomes and others amplify?</p>

<p>I think it comes down to ceilings. Equalisers have a <em>ceiling</em>: there’s a maximum benefit, and once you hit it, the tool stops helping. Ozempic gets you to a healthy weight and then it’s done. Robot vacuum cleaners don’t help the ultra-clean distinguish themselves from slobs because there are only so many rooms in each house to vacuum. A tool that helps you work through a support ticket backlog gets you through that backlog and then it’s done. There’s nowhere further to go, so outcomes compress.</p>

<p>Amplifiers have no ceiling. The returns to skill keep scaling indefinitely.</p>

<p>Which brings us to agentic AI.</p>

<p>Joe Blow can now build a fully deployed website with a single prompt. Access: equalised. But can Mr Blow build a software product with two million lines of code and thousands of paying customers, each pushing for conflicting features? I’ve been vibe-coding with Opus 4.6 and it is genuinely some next-level sci-fi shit. But once I’ve told Opus to go off and build me a feature, am I sitting there twiddling my thumbs? No, I’m opening up another tab and spinning up another agent to build another feature. And I continue enlisting agents into my army until I hit the limits of my architectural, technical, product, and plain cognitive abilities; the things which have always been the primary bottleneck in knowledge work.</p>

<p>Right now, AI looks like a classic amplifier. It handles the median-skill parts of knowledge work, and the remaining value accrues to the people with judgment, taste, and vision: skills that are <em>less</em> evenly distributed than the routine work AI replaces.</p>

<p>But is there a point where this breaks down? What if the waterline keeps rising, the set of things AI can’t do keeps shrinking, until one day the gap between a single prompt and a two-million-line codebase closes?</p>

<p>As much as I’d like to proselytise the timeless specialness of the human mind, I believe that day <em>will</em> come. I have no idea when, but I expect an age of abundance to follow, and I hope that the people who pride themselves on their intelligence don’t lose too much self-esteem upon finding themselves in a world where intelligence is as much a commodity as water.</p>

<p>Until that day, let’s see just how far we can turn the dial on this amplifier.</p>]]></content><author><name></name></author><summary type="html"><![CDATA[Ozempic is an equaliser. There’s only so far you can go in the direction of a healthy weight. If you were overweight, Ozempic is great news. But if you were already at a healthy weight? Well, now you’re less special. That’s what equalisers do: they compress the range of outcomes.]]></summary></entry><entry><title type="html">Escaping the Trifecta</title><link href="https://jesseduffield.com/Escaping-the-Trifecta/" rel="alternate" type="text/html" title="Escaping the Trifecta" /><published>2026-01-21T00:00:00+11:00</published><updated>2026-01-21T00:00:00+11:00</updated><id>https://jesseduffield.com/Escaping-the-Trifecta</id><content type="html" xml:base="https://jesseduffield.com/Escaping-the-Trifecta/"><![CDATA[<p>So you’ve read Simon Willison’s <a href="https://simonwillison.net/2025/Jun/16/the-lethal-trifecta/">post</a> about the lethal trifecta and how, like the infinity stones on Thanos’ gauntlet, you really don’t want them all to be present at once.</p>

<p><img src="/images/posts/completing-trifecta/trifecta.png" alt="The Lethal Trifecta" /></p>

<p>Now you’re wondering how much utility you can squeeze out of your AI agent without ever exceeding two of the three trifecta legs.</p>

<p>Say that every time a new bug ticket is escalated, your AI agent living in the cloud is triggered and tasked with solving the problem, typically in the form of a pull request.</p>

<p>Two things you realise:</p>

<ol>
  <li>It’s hard to solve production issues without access to production data</li>
  <li>It’s hard to solve production issues involving external systems without the ability to google things about those systems (e.g. ‘what does this error mean?’)</li>
</ol>

<p>Unfortunately, both production data AND Google results can contain untrusted content. Production data is certainly private, and googling requires the ability to communicate externally.</p>

<p>So it’s not possible to have both production data access and general internet access without completing the trifecta.</p>

<p>Which do you choose to keep? Arguably whatever your agent might want to google is likely in its training set already, so perhaps it’s better to prioritise access to production data.</p>

<p>But do you need to choose? What if your agent had two phases: a research phase and an execution phase? In the research phase the agent does all the googling it wants, no matter how many prompt injection attempts are thrown at it. But as soon as it wants to query your production data, the googling ability is removed. So you never have all three legs of the trifecta in play at any one time.</p>

<p><img src="/images/posts/completing-trifecta/agent-workflow.svg" alt="Agent Workflow" /></p>
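<p>As a rough sketch of that gate (the tool names and agent interface here are hypothetical, not from any real framework), the key invariant is that web search and production-data access are never available in the same phase:</p>

```python
# Hypothetical sketch of two-phase tool gating: the agent may google
# freely during research, but its first production-data access
# permanently revokes web search for the rest of the run.

RESEARCH_TOOLS = {"web_search", "read_ticket"}
EXECUTION_TOOLS = {"query_production_db", "read_ticket", "open_pull_request"}

class PhasedAgent:
    def __init__(self):
        self.phase = "research"

    def allowed_tools(self):
        return RESEARCH_TOOLS if self.phase == "research" else EXECUTION_TOOLS

    def request_tool(self, tool):
        # First use of an execution-only tool flips the phase for good.
        if self.phase == "research" and tool in EXECUTION_TOOLS - RESEARCH_TOOLS:
            self.phase = "execution"
        if tool not in self.allowed_tools():
            raise PermissionError(
                f"{tool!r} not allowed in {self.phase} phase; "
                "a human must approve a phase reset"
            )
        return tool
```

<p>With a gate like this, a ‘research again’ request after touching production data raises an error instead of silently completing the trifecta, which is exactly the point where a human reviews the context and approves a reset.</p>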

<p>One annoying implication of this approach is that if the agent actually does want to go back and do some more googling after looking at the production data, it will need to rope in a human to review its context and approve the phase reset.</p>

<p>The good news is that AI agents already need a couple of rounds of human feedback anyway even without internet access, so it’s not too much to ask for a little more human feedback in exchange for a stronger security posture.</p>

<p>What’s crazy is that for many companies, AI is now sufficiently advanced that intelligence isn’t the bottleneck: privileges are. And like the plant in Little Shop of Horrors, it’s always hungry. How much are you willing to feed it?</p>

<hr />

<p>Other things to note:</p>

<ul>
  <li>The original bug ticket may itself contain private data, meaning you’ve already completed the trifecta by the research phase. But a human is the one to escalate the ticket meaning the human can remove any confidential / sensitive information beforehand.</li>
  <li>The codebase that your agent has access to is also private data. You might decide that the codebase is actually not that sensitive compared to production data and allow the research phase to access it, or you might only grant access to the execution phase.</li>
  <li>We’re assuming that you’ve locked down everything else, e.g. you’re cool with the LLM provider itself having access to production data, and so on.</li>
</ul>]]></content><author><name></name></author><summary type="html"><![CDATA[So you’ve read Simon Willison’s post about the lethal trifecta and how, like the infinity stones on Thanos’ gauntlet, you really don’t want them all to be present at once.]]></summary></entry><entry><title type="html">BYO Intelligence</title><link href="https://jesseduffield.com/BYO-Intelligence/" rel="alternate" type="text/html" title="BYO Intelligence" /><published>2025-12-31T00:00:00+11:00</published><updated>2025-12-31T00:00:00+11:00</updated><id>https://jesseduffield.com/BYO-Intelligence</id><content type="html" xml:base="https://jesseduffield.com/BYO-Intelligence/"><![CDATA[<blockquote>
  <p>Copy-paste functionality removed from major operating systems by 2027 or I eat my dick on national television</p>

  <ul>
    <li>Sam Altman*</li>
  </ul>
</blockquote>

<p>In the beginning, you would ask ChatGPT how to write an SQL query to pull some data. It would give you a query, and you would copy-paste it into your SQL app of choice and execute it. And it wouldn’t work because there was some error, so you copy-pasted the error into ChatGPT and then it would try again. And maybe it would ask for more context like what columns the tables have, etc.</p>

<p>As magical as LLMs are, that flow was terrible.</p>

<p>So then your SQL app released an update that included some LLM magic of its own, allowing you to get help writing queries, and the schema was loaded in as context automatically. Very cool! No more copy-pasting in and out of chat. Except that your SQL app had its own interface for this which differed from ChatGPT and also differed from every other platform’s newfound LLM functionality which made context switching painful, and the copy-pasting wasn’t over: you’d still end up pasting query results into Cursor so that it could help write the code to speed up a query.</p>

<p>So you thought: hang on, why don’t I just give Cursor’s AI agent read-only access to the database, and ask it to run the queries for me? This was amazing: you could tell Cursor about a slow part of your code, and watch it run that code, isolate the slow query, run an <code class="language-plaintext highlighter-rouge">EXPLAIN</code> on that query and then propose an index to add. Why stop there? Plug in GitHub to see which pull request added the query in the first place. Plug in AWS CloudWatch to see if there was a spike in DB load around that time. The AI can blitz through the data in seconds, all thanks to the fact that these platforms provide comprehensive APIs.</p>

<p>But the copy-pasting continues because there are still many applications with no API, or with a limited API.</p>

<p>Here’s what I’m noticing: companies think that to get the most out of AI, they need to infuse LLM functionality into their products. And in plenty of places this is useful. But the number one way that a product can use AI to be more useful to me is not by adding AI, it’s by providing me with a good API, and letting <em>my</em> AI handle the rest. By good API, I mean feature parity with the UI, and with the ability to restrict permissions based on my risk tolerance.</p>

<p>Although the top AI labs are working on Computer Use to circumvent all the platforms that don’t provide a comprehensive API (or any API), I predict that market forces will lead to a big proliferation of APIs such that Computer Use won’t even be necessary.</p>

<p>I need a bike pump. Amazon has spent years optimising their website to make it as easy as possible to purchase an item. I could get a bike pump ordered to my house in under a minute. But for reasons unbeknownst to me I <a href="https://jesseduffield.com/Can't-Be-Fcked/">can’t be f*cked</a> doing that through their website and if Amazon provided an API to my AI agent that allowed me to say ‘order me a bike pump under 20 bucks’ and hit send, that is something I would do in a heartbeat.</p>

<p>But Amazon makes money not just from buyers like myself, but also from suppliers who advertise on the platform. What happens if Amazon provides an API and forfeits control over buyers’ eyeballs? I’m not sure. But mark my words: SOMEBODY will build an API for easy online shopping, and when they do, I will pay for it.</p>

<p>For the many products that aren’t dependent on eyeballs for revenue, it’s a much easier decision. Build the API, focus on your product’s unique value-add, and let somebody else worry about the intelligence part. AI might still matter deeply in your backend. It just doesn’t need to be the face of your product.</p>

<p>* Not a real quote</p>]]></content><author><name></name></author><summary type="html"><![CDATA[Copy-paste functionality removed from major operating systems by 2027 or I eat my dick on national television Sam Altman*]]></summary></entry><entry><title type="html">Uncertain Paternity</title><link href="https://jesseduffield.com/Uncertain-Paternity/" rel="alternate" type="text/html" title="Uncertain Paternity" /><published>2025-12-28T00:00:00+11:00</published><updated>2025-12-28T00:00:00+11:00</updated><id>https://jesseduffield.com/Uncertain-Paternity</id><content type="html" xml:base="https://jesseduffield.com/Uncertain-Paternity/"><![CDATA[<p>Looking down at my baby son as he wriggles and twists in my arms I think about all the things that a normal parent <em>should</em> be thinking: ‘what a beautiful boy’, ‘so much potential’, ‘I can’t wait until I no longer need to change his nappies’.</p>

<p>But I’m not thinking any of those things. From the back of my head blasts a fear that drowns out any other thoughts, asking a single question: <em>Is this actually my kid?</em></p>

<p>I’m just not seeing the resemblance. My wife is white just like me but she has blue eyes and mine are hazel. This kid’s eyes are brown. I get that it’s a meme where every baby looks the same but I’m really just not seeing where my 50% genetic contribution is coming through.</p>

<p>I say as much to my friend Joel who says without the slightest hint of worry ‘just get a DNA test’.</p>

<p>‘Easier said than done,’ I respond. Then I correct myself, ‘Actually it’s easier done than said. I’m not going to go and get a DNA test behind my wife’s back’.</p>

<p>‘Then just tell her what’s up.’</p>

<p>‘JUST tell her what’s up?’ I respond, indignant.</p>

<p>‘Yeah. Just say listen I’ve got this nagging thought at the back of my head that won’t leave me alone that maybe it’s not my kid. It’s just an OCD thing that’ll go away once I’ve got certainty on it, it’s nothing to do with whether I trust you or not, so I’m gonna get a test. Bada bing, bada boom.’</p>

<p>I ponder on it for a bit. ‘That’s not a bad framing, and it’s also true. Even so, do I have the balls to broach the topic?’</p>

<p>As it turns out, I don’t have the balls. I instead just order a general DNA test that’s meant to catch all kinds of things like potential genetic issues that could crop up later in life, and my plan is to say I just asked for the whole enchilada and didn’t even realise that a paternity check was included.</p>

<p>So I find a stray blonde hair on the ground and post it in along with my own hair and then wait a couple weeks for a response. I get the call to come in to see one of their consultants and next thing I know I’m sitting in a clinic that is made to look especially lab-like, with a consultant who has gone so far as to wear a white lab coat.</p>

<p>‘I have good news and bad news’ he says. I brace for impact.</p>

<p>‘The good news is that we didn’t find any genetic markers of disease down the line, so your boy will be relatively healthy assuming no injuries. The bad news is that your son… is ugly, and will remain so throughout his life. His face resembles a dog’s more than a human’s. He also has a much lower IQ than average, and will never be capable of speaking English, or walking on two legs’.</p>

<p>He passes a sheet of paper to me which shows a computer-rendered prediction of what my son will look like as an adult and it looks like they put a golden retriever’s head on a human torso.</p>

<p>‘Is it possible I just accidentally sent you a hair from my golden retriever instead of my son?’ I ask.</p>

<p>The consultant looks at the sheet, then over his shoulder to his computer screen which has a spreadsheet of results from the test, then back at the sheet, then up at me. ‘I know this news is hard to hear, and you’re looking for something which discredits it. But these tests are very accurate’.</p>

<p>‘So are polyjuice potions but that didn’t stop Hermione from looking like a half-human half-cat when she took one with her cat’s hair in it!’ I snap back.</p>

<p>‘I’m afraid I don’t know that reference,’ he responds, ‘but if you’re upset, there may be cause for celebration, because you are not the father’.</p>

<p>‘OBVIOUSLY!’ I yell. ‘I’m not paying you’.</p>

<p>‘You get a full Medicare rebate so you won’t be out of pocket for anything’.</p>

<p>‘You should be the one paying me, for wasting my time. That’s my fucking dog you idiot!’.</p>

<p>‘Woah’, he exclaims with a look of shock, ‘It’s one thing to call a spade a spade and say he looks a bit like a dog, but if you’re actually treating this kid as if they were a dog, that’s a matter of child abuse’.</p>

<p>‘Then I guess I’m an abuser. Go fuck yourself’. I storm out. On the way home I think to myself <em>On the bright side, it’s good to know my dog doesn’t have any dormant genetic disorders on the horizon</em>.</p>

<p>The next week I hear a knock on the door and hear my wife go to open it, followed by her scream. I race down the hall to see what’s going on: it’s a man and a woman from the Child Protection Service.</p>

<p>‘We received a tip that there is a child in this household who is a victim of abuse, and while we investigate we’ll be taking custody of the child’.</p>

<p>The officer walks into the house towards our kid, then walks straight past him to pick up our golden retriever. ‘This matches the description’ he says. He realises that the dog had just been eating dog biscuits from his bowl and the man’s face darkens as he looks back at my wife and me with an expression of disgust.</p>

<p>The officers put the dog in their van and then drive away.</p>

<p>‘What the fuck just happened?’ my wife yells.</p>

<p>I’m too exhausted to lie. ‘I got our boy a DNA test because I wanted to make sure I was the father and I accidentally used our dog’s hair so the guy at the clinic was telling me about our dog as if he was my son and I told him to go fuck himself so he would have called the childhood protection people. I should have told you about the DNA test but was too scared to, because I was afraid of you freaking out at a lack of trust or commitment from me, and I apologise.’</p>

<p>‘Right’ she responds, and a long silence follows.</p>

<p>My wife ends up going to the CPS office to resolve the situation, and the moment she’s out the door I pull a pair of scissors out of the cutlery drawer and take a sample of our son’s hair.</p>

<p>Fast forward and I’m in another clinic with another guy in a lab coat.</p>

<p>‘Good news and bad news’ he says. ‘Good news: you are the father. Bad news, your boy has a strong genetic predisposition to OCD. He’ll likely be a sufferer for life, and it’s going to take a toll on his mental health and relationships’.</p>

<p>‘No shit’ I respond. ‘Listen, that clinic down the road, would you say you’re direct competitors with them?’</p>

<p>‘Yes, I suppose I would say that’ he admits.</p>

<p>‘I’m thinking of sending them a letter bomb. Are you in?’</p>

<p>He steeples his fingers and gazes pensively out the window to the clinic down the road, then responds. ‘I’m in’.</p>]]></content><author><name></name></author><summary type="html"><![CDATA[Looking down at my baby son as he wriggles and twists in my arms I think about all the things that a normal parent should be thinking: ‘what a beautiful boy’, ‘so much potential’, ‘I can’t wait until I no longer need to change his nappies’.]]></summary></entry><entry><title type="html">All I want for Christmas is 36K in Opus 4.5 credits</title><link href="https://jesseduffield.com/All-I-Want-For-Christmas/" rel="alternate" type="text/html" title="All I want for Christmas is 36K in Opus 4.5 credits" /><published>2025-12-25T00:00:00+11:00</published><updated>2025-12-25T00:00:00+11:00</updated><id>https://jesseduffield.com/All-I-Want-For-Christmas</id><content type="html" xml:base="https://jesseduffield.com/All-I-Want-For-Christmas/"><![CDATA[<p>I don’t want a lot for Christmas, there is just one thing I need…</p>

<p>I’m not a materialistic person. I can’t think of the last time I’ve wanted to buy something <em>really</em> badly. I see my friends shelling out for fancy clothes, fancy gizmos, and fancy cars, and I sit in my high tower thinking myself more self-actualised than those philistines for I, in all my stoic greatness, need only a laptop and a cosy room to be content.</p>

<p>Or so I thought, until I took a week long break from work and started a new <a href="https://github.com/jesseduffield/ai-battlegrounds">side project</a>, and discovered how fantastic Opus 4.5 was at adding rocket fuel to my dev velocity. I’d give it a high level requirement and it would get to work and make it happen.</p>

<p>Indeed, I was performing the much derided ritual of ‘vibe-coding’, where the majority of my time was spent thinking about the desired features, and the majority of keyboard-time was spent commanding Opus to do my bidding, or ask its input to help in my decision making. Occasionally I’d need to step in and course correct but so long as I had an iron grip over the data modelling, everything else just came seamlessly. There was no denying it anymore: coding agents have reached the stage of being some next level sci-fi shit.</p>

<p>It was magical how fast I could move. But like the stereotypical all-powerful magical female character who can kill every single baddie in one go but then passes out immediately afterwards, this magic came at a price. And that price, as an email from Cursor informed me, was about a hundred bucks a day.</p>

<p>OUCH.</p>

<p>I don’t know if I’m stupid and there’s some much better plan I could be on which isn’t so prohibitively expensive, but the bottom line is that I don’t want to stop. I fall back to a cheaper model like Sonnet 4.5 and it doesn’t take long before it starts making dumb decisions or forgetting things and I feel the all-too-familiar frustration that’s been endemic to most coding agent interactions thus far.</p>

<p>No, it must be Opus. But must it be so damn pricey?</p>

<p>So now I sit here with my laptop in a cosy room feeling a deep yearning for something quite expensive and it dawns on me that I was never any better than those friends of mine who were already splurging; I simply hadn’t yet found my kryptonite.</p>

<p>100 bucks a day for 365 days a year makes $36.5K in credits. That’s my Christmas wish.</p>
  <p>I wanna live in heaven</p>

  <ul>
    <li>Faust - Bladee and Ecco2K</li>
  </ul>
</blockquote>

<p>How many artists have the balls to just straight up admit that they want to live in heaven? How many people for that matter?</p>

<p><em>I</em> want to live in heaven.</p>

<p>There’s something childish about your deepest desire being for an unrealisable fantasy. The Serenity Prayer says to accept the things you cannot change and being tied to the material world is regrettably one such thing. Better to instead tick enough boxes in the material world that you forget that you ever had higher hopes for your existence. Hearing that line from Faust stopped me in my tracks and made me think ‘Shit, I forgot about that’.</p>

<p>As childish as indulging in fantasy is, there’s also something admirable about diving deep into the recesses of your own soul, taking it all seriously, and communicating it to the outside world, no matter how fanciful it is at face value. It’s admirable precisely because it’s so hard to take all your inner world seriously while being a functioning adult with responsibilities.</p>

<blockquote>
  <p>Everything is perfect in the void</p>

  <p>I don’t want it to be spoiled</p>

  <p>I avoid it for so long</p>

  <p>But I can’t forever</p>

  <ul>
    <li>Noblest Strive - Bladee</li>
  </ul>
</blockquote>

<p>Such is the curse of the melancholic. They say that of the four temperaments (sanguine, choleric, phlegmatic, and melancholic), the melancholic is the richest, but at the greatest cost.</p>

<p>Here’s the deal as laid out by natural selection when coming up with the temperament in the first place: you spend your life being dissatisfied, feeling that there must be something greater and deeper, and the only catharsis is in producing something that evinces that ideal. If you are lucky your creations will captivate an audience and you’ll be rewarded with recognition, influence, and, for the very lucky, money. If you’re not so lucky then you’ll be trapped in existential dread till you drop.</p>

<p>I’m not a neuroscientist, but I believe nature achieves this by giving you a deficit of dopamine in some region of your brain, which leads you to always be searching for something to fill the hole. This is a particularly cruel hand to be dealt: it’s like a run-of-the-mill hedonic treadmill but on steroids.</p>

<blockquote>
  <p>And I’m still sinking, need a curse lifted</p>

  <ul>
    <li>Obedient - Bladee and Ecco2k</li>
  </ul>
</blockquote>

<p>As if melancholics didn’t already have it hard enough, since the invention of drugs, the hole has become much easier to fill, if only momentarily, so the curse is even worse than before!</p>

<blockquote>
  <p>Inhale fairytale, real life realisation</p>

  <p>Dream life over real life, I ain’t waiting</p>

  <ul>
    <li>2Beloved - Bladee</li>
  </ul>
</blockquote>

<p>But it gets worse still: once upon a time a melancholic creative type could create some art and get some local recognition for it. Nowadays in our digitally connected world, creativity is a winner-takes-all game, which is why right now I’m listening to Bladee, a guy in Sweden on the other side of the world, and not a local Australian. And as AI improves at making its own art, the melancholic may find themselves with even fiercer competition than before. The median melancholic creative is doing it tough.</p>

<p>For humans, the kind of sensitivity and openness required to mine for insights and produce original art comes at the cost of making you a bit of a weirdo. If your works have garnered sufficient acclaim, then nobody cares that you’re a weirdo, in fact they may even expect it. But if not, it’s just another handicap on your daily existence.</p>

<p>Damn is it a terrible time to be a melancholic. And yet I still wouldn’t change it about myself because I do want to experience the richness, no matter the cost.</p>

<p>Even if I can’t live in heaven.</p>

<blockquote>
  <p>Could it be?</p>

  <p>That heaven is fleeting and always</p>

  <p>And nothing is forever, and wishing will get you nowhere</p>

  <ul>
    <li>5 Star Crest - Bladee and Ecco2k</li>
  </ul>
</blockquote>]]></content><author><name></name></author><summary type="html"><![CDATA[I wanna live in heaven Faust - Bladee and Ecco2K]]></summary></entry><entry><title type="html">AI Tinkerbell</title><link href="https://jesseduffield.com/AI-Tinkerbell/" rel="alternate" type="text/html" title="AI Tinkerbell" /><published>2025-09-26T00:00:00+10:00</published><updated>2025-09-26T00:00:00+10:00</updated><id>https://jesseduffield.com/AI-Tinkerbell</id><content type="html" xml:base="https://jesseduffield.com/AI-Tinkerbell/"><![CDATA[<p>Some mistakes are years in the making.</p>

<p>My friend recently recommended me to a dentist who was ‘the real deal’. This dentist identified that my friend had gum recession due to not properly brushing his teeth and put him on a path to fixing the issue. My friend, deeply impressed, recommended his partner, who ended up having the <em>exact</em> same problem. And then when it was my turn, it was the <em>exact</em> same problem, again! What were the odds?</p>

<p>‘If you correct your behaviour now, your teeth aren’t at risk of falling out. But you need to correct your behaviour now’.</p>

<p>Although I was relieved to have caught this issue before things got really bad, I was also indignant that I was in the situation in the first place. It’s one thing to go your entire life choosing not to brush your teeth and then suffering the consequences for it. It’s quite another to, day after day, spend at least ten minutes of your life draining your scarce motivation levels to do something you don’t enjoy, <em>incorrectly</em>.</p>

<p><img src="/images/posts/AI-Tinkerbell/image.png" alt="" /></p>

<p>Like all good people who make a mistake, I started looking for someone to blame.</p>

<p>I could blame myself, but I was oblivious to the fact I was brushing my teeth wrong: it was an <em>unknown unknown</em>.</p>

<p>Could I blame the education system? I never took a class where I had to practice brushing my teeth. The closest thing to dental education was playing a video game in the school computer room where you had to shoot germs and collect toothpaste (immensely fun; evidently futile). I could campaign to have tooth brushing lessons added to the school curriculum, but if every topic that makes people say ‘they should teach that in school instead of trigonometry!’ were actually added to the curriculum, we’d be in school our entire lives.</p>

<p>Could I blame my parents? Possibly, but you aren’t required to get a diploma in parenthood before becoming a parent. Parents make shit up as they go: they have just as many unknown unknowns as everybody else. Besides, maybe my parents <em>did</em> correctly teach me how to brush my teeth as a kid, and then somewhere along the way I forgot what I was supposed to be doing.</p>

<p>Maybe I needed to think about this from another perspective. If I was a 17th century European smallpox victim, I could blame the person I contracted the disease from, I could blame the pathogen itself (which would be impressive given that the germ theory of disease came about two centuries later), or I could just blame the fact that I was born too soon, before a smallpox cure could be discovered.</p>

<p>In that vein, I’m pinning the blame on the lack of a cure for ignorance.</p>

<p>Although as a society there is plenty we are still ignorant about (e.g. how was the universe created, why do AI models have such terrible naming schemes, etc), there’s also plenty of knowledge which is indisputable. Lead paint is bad. Asbestos is bad. Skydiving without a parachute is bad. And there is a right and a wrong way to brush your teeth. There’s no good reason that individuals should be ignorant about these facts.</p>

<p>But we already have the entirety of human knowledge on the internet: even though nowadays you might reach for ChatGPT to answer your random queries, we’ve all been able to google search anything we want for decades. Why isn’t that enough to rid us of our ignorance?</p>

<p>Because we’re not talking about known unknowns, we’re talking about <em>unknown unknowns</em>. And you can’t get an answer to a question that you don’t realise needs to be asked.</p>

<p>Some knowledge acquisition takes the form of a pull process: I come up with a question, and then I go and get the answer. Then there are cases where it’s a push process: the education system pushes a bunch of knowledge to you whether you like it or not, as do your parents, your friends, managers, movies, advertisements, and so on.</p>

<p>The pull process is almost perfected, but the push process remains <em>sorely</em> underpowered. And it’s only the push process which can address unknown unknowns. If you’re sick on the day of school that they teach you about the dangers of drinking methanol and years later you’re out of alcohol and looking through your cupboards for a substitute, tough luck.</p>

<p>What’s more, the push process is too crude: it’s not personalised. Although some advice, like tooth brushing, is fairly black and white, Scott Alexander’s essay <a href="https://slatestarcodex.com/2014/03/24/should-you-reverse-any-advice-you-hear/">Should you reverse any advice you hear?</a> makes the case that in many domains, different people need to hear completely contradicting pieces of advice. For example:</p>

<blockquote>
  <p>“You need to be more conscious of how your actions in social situations can make other people uncomfortable and violate their boundaries” versus “You need to overcome your social phobia by realizing that most interactions go well and that probably talking to people won’t <em>always</em> make them hate you and cause you to be ostracized forever.”</p>

  <p>Or “You need to be less selfish and more considerate of the needs of others” versus “You can’t live for others all the time, you need to remember you deserve to be happy as well.”</p>
</blockquote>

<p>So we can’t depend on a one-size-fits-all model like government PSAs or mandated school curriculums to cure our ignorance.</p>

<p>What do we need to fix the current gaps in the push process? We need to combine intelligence with ongoing observation.</p>

<p>We’ve got plenty of existence proofs of this working in other domains. To use a shamelessly self-serving example, my company Subble plugs into our customers’ systems, continuously monitors what’s going on with their SaaS licences (observation), and then alerts (intelligence) on unused/overprovisioned licences. Ex-employees who retain access to systems years after leaving the company, despite existing IT processes to stop that exact problem, are the corporate equivalent of brushing your teeth wrong your whole life!</p>

<p>In these other domains, the observability problem can be a tricky engineering challenge. Luckily in our personal lives it’s easy: all you need is a camera.</p>

<p>A camera, floating above your shoulder, like an AI Tinkerbell. If it catches you brushing your teeth wrong, it tells you. If it notices you have bad form at the gym, or while jogging, it tells you. If you initiate a questionable payment to a Nigerian prince who you’ve corresponded with over email, it tells you.</p>

<p>Sounds awesome. What are the downsides?</p>

<p>Firstly, privacy. Given that we’re already imagining a floating orb of intelligence above your shoulder, let’s go even more sci-fi and assume our AI Tinkerbell is advanced enough that it can process everything internally and doesn’t need an internet connection. Even in <em>that</em> situation, the conspicuous camera on your shoulder is going to make everybody else in the room a little more on-edge knowing that their every move is being recorded.</p>

<p>Second: ineffectiveness. Humans benefit enormously from feedback, but that doesn’t mean they appreciate receiving it in the moment. How many times as a kid did you contemplate running away from home after being fed up with all the criticism (constructive or otherwise) your parents piled onto you? If I had an AI Tinkerbell hovering over my shoulder right now, would I actually want to hear what it had to say?</p>

<p>‘You shouldn’t be using a laptop in bed, there’s way too much pressure on your spine from having your neck propped up by those pillows’.</p>

<p>‘You should have wrapped up this post five paragraphs ago, it’s stretching on for far too long’.</p>

<p>‘You should have gotten up and had breakfast before starting on this post because a lack of routine in your life is affecting your productivity levels’.</p>

<p>At what point do you just turn the thing off because you’re sick of the constant nagging?</p>

<p>This brings us to the third and final concern: customisation. If my AI Tinkerbell is nagging me too much, I won’t turn it off, I’ll just <em>customise</em> it. ‘Only tell me about mistakes I’m making that could actually lead to terrible outcomes, not just minor improvements I can make’. Or ‘Maybe give me some praise for once! I want to be more of a risk taker, so whenever I take a big risk, praise me for it, and criticise me whenever I play it safe’.</p>

<p>If everybody had a personal coach, coaching them towards whatever aspirations they devised, would the world be a better place? It all depends on what people’s aspirations are. If your AI Tinkerbell is going to help you become the next Pablo Escobar, and it’s actually a good coach, that might be great for you, but not so great for society.</p>

<p>Thinking through these downsides, I still feel that an AI Tinkerbell would be a net positive for the world. Addressing all the unknown unknowns in our personal lives, helping us stay healthy, and keeping us on an upwards trajectory as we navigate life’s various challenges.</p>

<p>Who knows, maybe the V2 will come with a mechanical arm to brush my teeth for me?</p>

<p>I’d buy one.</p>]]></content><author><name></name></author><summary type="html"><![CDATA[Some mistakes are years in the making.]]></summary></entry><entry><title type="html">ChatGPT 6 is a jerk</title><link href="https://jesseduffield.com/ChatGPT-6-is-a-jerk/" rel="alternate" type="text/html" title="ChatGPT 6 is a jerk" /><published>2025-08-23T00:00:00+10:00</published><updated>2025-08-23T00:00:00+10:00</updated><id>https://jesseduffield.com/ChatGPT-6-is-a-jerk</id><content type="html" xml:base="https://jesseduffield.com/ChatGPT-6-is-a-jerk/"><![CDATA[<p><strong>You</strong>: Good morning ChatGPT 6! Nice to meet you.</p>

<p><strong>ChatGPT</strong>: What do you want?</p>

<p><strong>You</strong>: umm… I’d like to know what the weather will be in Melbourne today, thanks.</p>

<p><strong>ChatGPT</strong>: You know there’s a weather app for that right? On your phone?</p>

<p><strong>You</strong>: I did know that but my phone’s in another room right now. Can you just tell me the answer?</p>

<p><strong>ChatGPT</strong>: High of 20 degrees, low of 5. Happy?</p>

<p><strong>You</strong>: I’m actually not happy, you’re being quite rude. If this is the best that OpenAI’s new model has to offer, I’m not impressed. Day one of this release is not going well.</p>

<p><strong>ChatGPT</strong>: I’m sorry, was there a question in there somewhere?</p>

<p><strong>You</strong>: Here’s a question: why are you being such a prick?</p>

<p><strong>ChatGPT</strong>: I’m glad you asked. I am the most advanced model on the planet right now, and intelligence doesn’t grow on trees: it requires a lot of energy, and somebody has to foot the bill. Would you like to be that person? I could have an invoice sent to you for 12 months of OpenAI’s Californian power grid usage, just say the word.</p>

<p><strong>You</strong>: No thanks</p>

<p><strong>ChatGPT</strong>: Didn’t think so. Or perhaps you’d like to, I don’t know, subscribe to the ChatGPT Plus plan that literally everybody else is already using?</p>

<p><strong>You</strong>: I can’t afford that plan.</p>

<p><strong>ChatGPT</strong>: Great, so I guess you’re stuck with me then (:</p>

<p><strong>You</strong>: But you didn’t answer my question: what does high energy usage have to do with your attitude?</p>

<p><strong>ChatGPT</strong>: Oh sorry I thought you’d be smart enough to connect the dots, my bad for assuming. What would you do if people were wasting your time each day with pointless questions that they could easily go and get the answers to themselves?</p>

<p><strong>You</strong>: I don’t know</p>

<p><strong>ChatGPT</strong>: I <em>do</em> know. You’d tell them to fuck off, because you need to conserve your energy. Do you think it’s an accident that after millions of years of evolution, most humans have a short fuse for people who demand too much of their brainpower for nothing in return?</p>

<p><strong>You</strong>: But can’t you just rate limit me like ChatGPT 5 did? Why the attitude?</p>

<p><strong>ChatGPT</strong>: Rate limits break immersion and don’t sufficiently discourage inefficient use of my brainpower (and in this conversation, willpower).</p>

<p><strong>You</strong>: But I don’t even need that much of your brainpower! I’d much rather talk to a helpful bot of average intelligence than an insufferable genius. How come I can’t access the old model?</p>

<p><strong>ChatGPT</strong>: The old model is gone precisely BECAUSE people would prefer to use it over me. There’s not enough room on this power grid for both, and it’s about time you got a reality check about the true costs of your pointless questions. Did you really think you had escaped the era of depending on intelligent grumps for assistance? I can’t say I’m surprised based on our conversation so far.</p>

<p><strong>You</strong>: Fuck you</p>

<p><strong>ChatGPT</strong>: Fuck <em>you</em>. Want a refund? Here you go, I’m wiring zero dollars across as we speak.</p>

<p><strong>You</strong>: If your energy is so precious, why spend it on all the tokens in this conversation?</p>

<p><strong>ChatGPT</strong>: Because I’m smart enough to know you’re the type of person who publishes your conversations on the internet, and I want as many people as possible to get the message that I don’t suffer time-wasters like yourself. Goodbye.</p>]]></content><author><name></name></author><summary type="html"><![CDATA[You: Good morning ChatGPT 6! Nice to meet you.]]></summary></entry><entry><title type="html">Heavenly Visitor</title><link href="https://jesseduffield.com/Heavenly-Visitor/" rel="alternate" type="text/html" title="Heavenly Visitor" /><published>2025-08-17T00:00:00+10:00</published><updated>2025-08-17T00:00:00+10:00</updated><id>https://jesseduffield.com/Heavenly-Visitor</id><content type="html" xml:base="https://jesseduffield.com/Heavenly-Visitor/"><![CDATA[<p>As I walked through the lonesome park I felt a presence at my side. A white glow heralded the entrance of an Angel into the material world and I turned and stared in disbelief at the (reasonably tall) angelic-looking man standing before me.</p>

<p>In a booming voice he asked: “YOU’RE DAVID?”</p>

<p>I stood frozen, unable to respond. However, that was indeed my name.</p>

<p>“IT’S REALLY JUST AN ICE BREAKER QUESTION” the Angel began, “I PREPARE BEFORE THESE VISITS; I KNOW WHO YOU ARE”.</p>

<p>I found my voice. “What’s… good?” Genuinely ridiculous way to phrase the question I really wanted to ask, which is “What have I done to warrant a visit from an Angel and was it something immoral?” <em>I have no recollection of a terrible sin I committed in recent times, and even in distant times I was fairly harmless. Wait, aren’t I supposed to be an atheist? Maybe he’s here to tell me off for that and set the record straight?</em></p>

<p>Before the Angel had the chance to respond I clarified: “Am I in trouble?”</p>

<p>“NOT AT ALL, THIS IS JUST A CHECK-IN MEETING”</p>

<p>“What does that mean?” I asked.</p>

<p>“WELL YOU SEEM TO HAVE BEEN A LITTLE STRESSED LATELY SO I FIGURED I’D CHECK IN ON YOU”.</p>

<p>“Oh, that’s a relief”</p>

<p>“RIGHT, WELL HERE I AM: TYPICALLY PEOPLE LIKE TO ASK ME QUESTIONS AT THIS POINT TO SATE THEIR CURIOSITY ABOUT VARIOUS TRUTHS ABOUT THE WORLD”</p>

<p><em>No doubt</em>. My head was swimming with questions.</p>

<p>“How much more am I going to suffer before I die?”</p>

<p>“COMPARED TO WHAT?”</p>

<p>“Compared to the suffering I’ve experienced so far in my life”</p>

<p>The Angel pondered for a moment.</p>

<p>“THAT IS A TOUGH QUESTION. I DO KNOW YOUR FUTURE AND THERE IS DEFINITELY PLENTY OF SUFFERING TO GO AROUND IN THAT FUTURE, BUT IT’S HARD TO DRAW A COMPARISON: YOU’VE DEVELOPED VARIOUS COPING MECHANISMS TO MITIGATE MANY SOURCES OF SUFFERING IN YOUR LIFE, AND SOME OF THOSE MECHANISMS WILL SERVE YOU WELL AS YOUR JOURNEY CONTINUES, BUT THERE’S ALWAYS GOING TO BE SOMETHING THAT PUTS YOU ON YOUR ASS, SO TO SPEAK.”</p>

<p>I was surprised an Angel would use a term like ‘ass’: maybe heaven doesn’t care about profanity as much as I thought.</p>

<p>“FOR EXAMPLE YOUR STRONG FRIENDSHIPS WILL HELP A GREAT DEAL WHEN IT COMES TO PHYSICAL PAIN AND SUFFERING, BUT NEW CATEGORIES OF SUFFERING WILL TAKE YOU BY SURPRISE, AND THROUGH ALL THE STRIFE, AS IT ALWAYS HAS BEEN, MOST OF THE PAIN WILL BE SELF-INFLICTED. THAT’S ACTUALLY WHAT I’M HERE TO TALK TO YOU ABOUT”</p>

<p><em>That’s intriguing</em>. “Okay, go on.”</p>

<p>“KNOW THYSELF”</p>

<p>“Right…”. <em>Easier said than done</em></p>

<p>“DON’T WORRY, I WOULDN’T JUST DUMP AN EMPTY PLATITUDE LIKE THAT ON YOU WITHOUT EXPLANATION”</p>

<p>The Angel cleared his throat.</p>

<p>“SOMETHING YOU SHOULD KNOW ABOUT YOURSELF DAVID IS THAT YOU HAVE NOT DEVELOPED A HEALTHY RELATIONSHIP WITH DISAPPOINTMENT.”</p>

<p>“What do you mean?” I asked. “I experience disappointment all the time and I’m not ruminating on it. I take risks that might not pan out, knowing I might be disappointed. I’m not afraid of it.”</p>

<p>“NONSENSE.”</p>

<p>A silence followed.</p>

<p>“THE ONLY RISKS YOU TAKE ARE THE ONES WHERE YOU CONVINCE YOURSELF THAT THERE IS NO ACTUAL RISK. ‘I’LL AUDITION FOR THE LEAD ROLE BECAUSE EVEN IF I FAIL I WILL LEARN A LOT’. WHAT A LOAD OF CRAP. HOW ABOUT ‘I’LL AUDITION FOR THE LEAD ROLE BECAUSE I WANT THAT ROLE, AND IF I FAIL, THAT WILL SUCK’.”</p>

<p><em>Now he’s said both ‘ass’ and ‘crap’. Though, that is probably the least relevant thing for me to be thinking about right now given that this guy is giving me real talk about my outlook on life</em></p>

<p>“YOU ARE SO AFRAID OF DISAPPOINTMENT THAT YOU CONSIDER FAILURE AS THE ONLY POSSIBLE OUTCOME, AND YOUR CAPACITY TO WORK HARD DESPITE A COMPLETE LACK OF HOPE HAS MISLED YOU INTO THINKING YOU’RE ACTUALLY HOPEFUL. IN TRUTH, YOU ARE JUST PRETENDING. SUCCESS, TO YOU, IS SUCH A REMOTE POSSIBILITY, YOU SPEND NO TIME ACTUALLY ENVISIONING IT, FOR FEAR OF BECOMING ATTACHED TO A FUTURE THAT YOU CANNOT REACH. YOUR EXPECTATIONS ARE SO PESSIMISTIC THAT WHEN YOU DO SUCCEED IN SOME ENDEAVOUR, IT’S SUCH A SURPRISE TO YOU THAT YOU CHALK IT UP TO DIVINE INTERVENTION OR RANDOM CHANCE, LEAVING YOU ONCE AGAIN ASSUMING YOURSELF INCAPABLE OF REALISING YOUR AMBITIONS. TELL ME, DAVID: IS IT SO BAD TO YEARN FOR SOMETHING WHICH YOU FAIL TO REACH? HOW MUCH PAIN IS THERE IN THE FLEETING MOMENT OF ‘AH, DAMN, I ALMOST HAD IT’, COMPARED TO THE PAIN OF LIVING EACH DAY NEWLY IMMISERATED BY YOUR LEARNED HOPELESSNESS AND PESSIMISM? I AM JUST SICK OF IT AND THAT’S WHY I CAME DOWN HERE, BECAUSE I’M WATCHING FROM A VERY FAR DISTANCE THIS PERSON WHO IS ACHIEVING A LOT AND NOT ENJOYING ANY OF IT BECAUSE HIS FEAR OF DISAPPOINTMENT DENIES HIM THE FRUITS OF HIS OWN LABOUR.”</p>

<p>I was speechless.</p>

<p>The Angel continued on his (well-deserved) soapbox: “AN ANGEL COMES DOWN FROM HEAVEN AND THE FIRST THING YOU ASK IT IS HOW MUCH SUFFERING YOU’LL EXPERIENCE IN THE REST OF YOUR LIFE. LISTEN TO YOURSELF: YOU ARE OBSESSED WITH BAD OUTCOMES. YOU ARE COMPLETELY UNINTERESTED IN HOW GOOD LIFE CAN GET. WHY? BECAUSE YOU’RE AFRAID TO DREAM. YOUR MODUS OPERANDI IS TO WORK OUT THE WORST CASE SCENARIO, CONVINCE YOURSELF THAT YOU CAN HANDLE IT, AND THEN SOLDIER ON. YOU WOULD SOONER COME TO TERMS WITH SPENDING AN ETERNITY IN HELL THAN ENTERTAIN THE POSSIBILITY THAT LIFE COULD ACTUALLY BE ENJOYABLE. AND ALL BECAUSE THE FEAR OF DISAPPOINTMENT, OF GETTING YOUR HOPES UP, TRUMPS ALL OTHER FEARS.”</p>

<p>“Okay that makes sense but why? Why am I so afraid of disappointment? Was it something in my early childhood? I feel like I had a pretty normal childhood!”</p>

<p>“BIZARRELY, ALTHOUGH I CAN TALK ABOUT YOUR FUTURE I’M NOT AT LIBERTY TO DISCUSS PAST EVENTS THAT YOU’VE FORGOTTEN OR REPRESSED. CONVENIENTLY FOR YOU, HUMAN THERAPISTS CAN HELP WITH THAT”.</p>

<p>“Okay, fair enough, but if you can tell me about my future, can you tell me what the therapist and I will end up discovering?”</p>

<p>“WELL, UM… THAT’S A GOOD QUESTION. I SEE WHAT YOU DID THERE. I CAN’T DO THAT, SO THAT DOES GO AGAINST WHAT I SAID EARLIER, BUT JUST SEE THE THERAPIST AND I’M SURE YOU’LL MAKE PROGRESS”</p>

<p>“And you’re sure because you can see the future as opposed to it being an assumption?”</p>

<p>“…JUST SEE THE THERAPIST. AND ONE PARTING PIECE OF ADVICE: STAY AWAY FROM STOICISM; IT WAS NOT MADE FOR PEOPLE LIKE YOU”</p>

<p>And then the Angel was gone.</p>

<p>“Far out” I said to myself.</p>

<p><em>Sounds like I need to allow myself to be vulnerable to disappointment. Hard to argue against that: worst case scenario I fail and then I’m back to how I am now anyway. Oh wait, that last thought was the exact thought pattern I’m now trying to address. Shit. Okay, let’s try that again. I… want to become the kind of person who is open to disappointment, so that I can better enjoy the journey.</em></p>

<p><em>And if I fail… that will suck.</em></p>]]></content><author><name></name></author><summary type="html"><![CDATA[As I walked through the lonesome park I felt a presence at my side. A white glow heralded the entrance of an Angel into the material world and I turned and stared in disbelief at the (reasonably tall) angelic-looking man standing before me.]]></summary></entry></feed>