
The post-AGI purpose problem


Adam Jones

We're racing towards a world where artificial intelligence might match or exceed human capabilities across virtually every domain. This poses many incredibly difficult problems with very little time to solve them: preventing catastrophic misuse, aligning systems, coordinating actors, and not devolving into an AI-enabled oligarchy.

The challenge

What if we succeed? This really only gets us to the starting line: people aren’t dead and can just about feed themselves. Not exactly a glowing vision of the future. To do really well here, we need to face a deep question:

What makes life meaningful in a world where AI outperforms humans at everything?

This isn't just a thought experiment - it's a practical challenge we might face in our lifetimes.

What does a great future look like for humans? What kind of activities make life worth living when humans aren’t really needed?

You might say "whatever people want!" - but today, people often find meaning through relationships, achievement, work, and contribution to society. In this future world:

  • How do relationships evolve when AI companions can be perfect friends, partners, or mentors?
  • What achievements feel meaningful when we must consciously carve out spaces for human challenge, rather than finding them naturally in the world?
  • What gives work purpose when anything productive can be done better by AI?
  • How do you contribute to society when anything you can do, an AI could do better?

We've never faced this before. Previous technological revolutions opened new frontiers even as they closed old ones. But AI might close all frontiers - or at least all the ones humans can meaningfully participate in.

What about art / poetry / video games / sports?

This might be a small part of the solution to the purpose problem! Understanding what could give purpose to at least some of the population is a good step.

However, while you might enjoy these activities, they wouldn't provide purpose for everyone.

T-shirt with the text 'Eat, Sleep, Game, Repeat'

Despite the popularity of these kinds of shirts in the 2010s, I'm not sure everyone is bought into this plan... (Image credit: ClairWalker, CC BY-NC 4.0)

And longer term, this gets harder. If you really think about it, would you still be excited about this after 40 years?

Achievements can still be meaningful even with AI!

A child feels genuine pride in scoring a goal even though professional footballers exist. Amateur runners celebrate personal bests despite being nowhere near Olympic times. We can find meaning in personal growth regardless of absolute performance.

However, unlike exceptional humans, AI would be universally available - it's one thing to know Picasso exists, it's another for someone else to create a better painting than you with a single prompt. When excellence becomes trivially accessible, does striving still feel meaningful?

We also understand very little about what actually makes achievements feel meaningful.1 Is it personal growth? Social recognition? The ratio of effort to reward? Without understanding these mechanics better, we can't confidently say achievement would provide sustainable meaning in an AI world.

Again - this feels like it might be a part of the solution. But that doesn't mean it's solved.

Can’t we get AI to solve this problem?

AI could be really helpful here, and I think this is by far the strongest argument against working on this now. But there are two key challenges with waiting to solve this:

First, there's a question about authenticity. If AI designs perfect challenges, communities, and meaning-making activities - calibrated exactly to our psychology - would that feel meaningful? There might be a fundamental difference between discovering meaning and having it manufactured for us. ‘Meaning’ also seems difficult to specify, with wireheading/TikTok-brainrot style concerns if alignment is not fully solved.

Second, by the time we have advanced AI, we might have locked in problematic patterns. If we've restructured society around AI efficiency, automated away most human activities, and accustomed ourselves to AI-dependent living, the space for meaningful human purpose might be permanently constrained. Redesigning society after it's built might be technically possible, but seems vastly harder than planning for it from the start.

Conclusion

We know very little about what gives people purpose today. We know even less about what would give the entire population sustainable long-term meaning in a world that looks very different to today’s.

We can't assume superintelligent AI will solve this for us. By the time we have such AI, we may have already made crucial societal choices that shape or limit the space for human purpose. Understanding this challenge better - before we find ourselves facing it - seems prudent.

Appendix: Where did this article come from?

I wrote most of this as part of an upcoming article about problems with advanced AI. When editing, I realised this wasn’t crucial to that article so decided to cut it (especially because I think the argument that we can deal with this after we have advanced AI in a stable world is fairly strong).

However, I realised it would be useful to have this to send on to people occasionally. I think it is a bit weird in isolation, but it still kinda works :)

Footnotes

  1. I am not a psychology expert. However, from discussions with people who are more clued up, I understand there is debate about what reliably motivates humans in different settings, particularly over the long term.

    Additionally, frameworks that do exist don’t give obvious answers here. Self-determination theory suggests autonomy, competence and relatedness drive motivation, which is linked to feeling purposeful - but whether these would be satisfied by achievements in an AI world is pretty unclear to me:

    • Would autonomy feel genuine if humans are coddled by AI assistance?
    • Do people feel competent when AI can always do things better, and so effortlessly?
    • Does social relatedness change when interactions are mediated by AI, or are primarily with AI?

    It feels like these might be answerable questions - which makes me excited about the tractability of doing work in this space. It might be the case that we already have the answers: but they haven’t been compiled clearly with the framing of advanced AI.