Working at Anthropic FAQs

Headshot of Adam Jones

Adam Jones

People often ask me for some time to chat about working at Anthropic. Usually what they really want to know is "how do I get a job there?" or "can you refer me?". This is a brief reference for my standard answers :)

If I've linked you here directly, no offence intended - this is probably a more thoughtful answer than I'd manage in a quick reply. If you still have questions that aren't covered below, you're welcome to reach out via the email on my homepage.1

Why do I work at Anthropic?

Personally, I'm very concerned that powerful AI systems could pose serious challenges in the next couple of years: if many people control powerful AI, this invites potentially catastrophic misuse (e.g. bioterrorism), but if few people control them, we might end up in a stable totalitarian state. And that's assuming humans do manage to control them at all. Having worked in both the non-profit and government sectors, I think working at an AI company is the best place for me to make things go well - we simply have so little time, and most external governance and other solutions are just moving too slowly.

Anthropic's mission is to ensure the world safely makes the transition through transformative AI - particularly trying to avoid or mitigate large scale societal and existential risks including those I've listed above. The post Core Views on AI safety and Dario's essay The Adolescence of Technology also go into more detail as to our mission, shared beliefs and reasoning behind them.

Some companies have mission statements that sound nice, but the company doesn't really live by the mission when push comes to shove. But at Anthropic (at least at time of writing), I've seen the mission being the core consideration for important decisions, and it's clear that senior leadership care very much about this. Recent events mean the public have seen some costly decisions we've made in service of the mission, which hopefully provides some evidence for this claim.

Joe Carlsmith has also written well about why he found the case for going to Anthropic compelling, and much of it resonates with me. His piece also discusses whether working at a frontier AI company is net positive for the mission - which I think is not obvious, and depends on your beliefs about what will work.

What is the day-to-day role like?

You work with very smart people who genuinely care about the mission, on problems that are important and interesting. There is often a surprising amount of low-hanging fruit - partly because things are changing so rapidly that new surface area for easy wins keeps opening up, rather than everyone having to grind on hard problems. Because of this, nobody needs to fight over territory, which avoids a lot of the annoying political fights that plague other organisations.

Roles and important areas change quite rapidly as AI capabilities advance. Increasingly the job involves managing a bunch of Claudes and focusing on the bits Claude is bad at. Jack Clark, an Anthropic co-founder, describes this a bit in 'My agents are working. Are yours?'. There's something Jevons paradox-y about it - until they start doing everything, better AI doesn't mean less work, it means the work changes and there's more of it.

Compared to other AI companies, Anthropic is much more engineering-focused. There is both a lot of science and a lot of engineering, but researchers are expected to write2 production code. The culture is very empirical, data-driven, and collaborative, which I think relates to one of Anthropic's core values: do the simple thing that works.

That said, having so much low-hanging fruit and finite resources means there are genuinely hard prioritisation decisions about how to allocate people's time, compute, and other resources. Combined with the weight of the mission on your shoulders, everyone ends up working incredibly hard and is pretty stretched all the time - I certainly feel the (self-imposed) pressure of never being able to get to every important task. People generally work long hours, and folks outside San Francisco often shift their day a bit to have more overlap (e.g. many London-office folk who collaborate with people in SF get in and out of work quite late).

Beyond the day-to-day factors, life as an Anthropic employee might get pretty weird as AI advances. It seems fairly plausible that Anthropic becomes broadly hated over job losses in the near future. Anthropic might also be nationalised as AI gets more powerful, or might take certain actions in service of the mission - either of which could mean equity that looks valuable now ends up worthless. Finally, Anthropic is obviously at the forefront of AI adoption, so once your role becomes automatable by AI, Anthropic is one of the places likely to automate it soonest.

Am I good enough to apply to Anthropic?

If the above sounds exciting to you and there is a role that you think might match your skills, yes!

Per the law of equal and opposite advice:

  • the most diligent people are far too cautious and don't apply when they should, e.g. they think 'I'm not smart enough to get into a place like Anthropic'
  • many people who do apply are not diligent enough, e.g. 'I used to work at [org], I'm sure they'll take me. I'll put in a generic CV and only half-answer the application questions'

I don't know how much saying this will help, but like... do the opposite thing?

How do I get a job at Anthropic?

Apply via the careers page directly.

Putting in a strong application

One of the best things to ask yourself (really for any communication, but especially when applying for things) is:

If I was on the other side evaluating my application, what would help me make a well-informed decision?

This usually means thinking hard about what skills the hiring manager is looking for, and presenting your abilities in these areas in a way that the hiring manager can understand and ideally verify.

Referrals

Always apply via the careers page, as this is needed whether you have a referral or not - and it's better to get your application in on time!

I can refer you if we've worked closely together (e.g. on the same team for 1 month+ full time), or if I know you well from my personal life and can speak to your character (e.g. we've been friends for 1 year+). These are rough guidelines. If you meet this bar, please do ask me for a referral directly - I want people I know are great to join and help us!

Having overlapped at the same company or having met a few times generally isn't enough to meet this bar.

Interviews

If you advance past the initial screening, your recruiter should explain which interviews you'll have and how to prepare for them.

Upskilling

Beyond just presenting yourself well, you can also make yourself a genuinely stronger candidate! I recommend:

  • Upskilling in AI safety. I'd strongly recommend the courses at BlueDot Impact (where I used to work) - they're a great way to build technical depth on AI safety topics and connect with others working in the space. Many BlueDot graduates have gone on to work at Anthropic. You can also find a wide range of self-learning materials online, maybe starting with the linked articles above.
  • Become a solver of problems. A quality that is valuable in many roles at Anthropic is being highly agentic: being able to overcome obstacles and solve your own problems. It's a bit cringe, but this points at what I mean. One of the best ways to become more high agency is to work on your 'bias to action': pick a problem in the world, and do something about it.
  • Use AI tools. This pairs nicely with the previous point: a good way to get a feel for AI and prepare to be effective at Anthropic is to get to know AI tools. Set up Claude Code and build something cool this weekend!

Footnotes

  1. Ideally after having spent a few minutes trying to find the answer yourself!

    This should include asking Claude with web search. I receive a fair few messages that I end up just getting my Claudes to respond to - you'll get a faster and better answer by skipping the middleman.

  2. Well, get Claude to write.