Adam Jones's Blog
- What does Yann LeCun think about AGI? A summary of his talk, "Mathematical Obstacles on the Way to Human-Level AI"
- Major UK banks are training their customers to fall for scams
- Running LLMs Locally in 2025: Speed tests on M2 Pro + 16 GB RAM
- A rough plan for AI alignment assuming short timelines
- YouTube series: How to contribute to the BlueDot Impact repo (external)
- How to set up PostHog for a Bubble single-page application, with proper pageview tracking
- AI safety content you could create
- Policymakers don't have access to paywalled articles
- Alignment Is Not All You Need: Other Problems in AI Safety
- The post-AGI purpose problem
- Why product managers are uniquely suited for tech policy roles
- The beginner's guide to investing (2025 UK edition)
- Setting up OpenWrt on the DSL-AC68U for 1 Gig speeds
- Teach-swap-explain: a learning activity for course designers to create highly effective learning experiences (external)
- Why we run our AI safety courses (external)
- How Does AI Learn? A Beginner’s Guide with Examples (external)
- The standard W3C Gamepad API mapping for an Xbox controller
- What early career policymakers can learn from product managers: understanding people’s actual problems is key to effective policy
- AI Alignment June 2024 course retrospective (external)
- OpenAI’s cybersecurity is probably regulated by NIS Regulations
- Does project proposal feedback result in better final projects? (external)
- No time for user interviews? Learn how to use empathetic role-playing to make better product decisions.
- An easy win for UK AI safety: competition law safe harbour
- Modular AI Safety courses proposal (external)
- Summary of AI alignment participant user interviews (external)
- An easy win for UK AI safety: supporting whistleblowers
- What we didn’t cover in our June 2024 AI Alignment course (or, an accessible list of more niche alignment research agendas) (external)
- The AI regulator’s toolbox: A list of concrete AI governance practices
- Are cheap shaver blades any good?
- Advertising to technical people: LinkedIn, Twitter, Reddit and others compared (external)
- What advertising creatives work for technical people? (external)
- Results from testing ad adjustments (external)
- Diagnosing infectious diseases with CRISPR: SHERLOCK and DETECTR explained
- Reflections on my 7-day writing challenge
- How to avoid the 2 mistakes behind 89% of rejected AI alignment applications (external)
- What do applicants mean when they say they come from LinkedIn? (external)
- Our 2023 internal cybersecurity course (external)
- Addressing digital harms: a right of appeal is not sufficient
- AI as a corporation (or, an intro to AI safety?)
- How to fix proof of address
- Proof of address is nonsense
- Government departments should say they don't care
- Avoiding unhelpful work as a new AI governance researcher
- Asking me for help
- Preventing overreliance: The case for deliberate AI errors in human-in-the-loop systems
- Why having a human-in-the-loop doesn't solve everything
- 7 blogs in 7 days
- What we learnt from running our AI alignment course in March 2024 (external)
- What is a lead cohort? (external)
- What we changed for the June 2024 AI alignment course (external)
- A thing I'd like to exist: benchmarks for train internet
- 3 articles on AI safety we’d like to exist (external)
- Why we work in public at BlueDot Impact (external)
- Why are people building AI systems? (external)
- How to send Keycloak emails through Google Workspace's SMTP relay
- Preventing AI oligarchies: important, neglected and tractable work
- How might AI-enabled oligarchies arise?
- Follow-up: benchmarking Next.js server vs nginx at serving a static site, now on AWS
- Benchmarking the Next.js server vs nginx at serving a static site
- No, I don’t want to fill out your contact form
- AI alignment project ideas (external)
- How to avoid the 4 mistakes behind 92% of rejected AI governance applications (external)
- Do cheap GPS trackers work? A review of the GF-07, GF-09 and GF-22.
- Can we scale human feedback for complex AI tasks? An intro to scalable oversight. (external)
- ai-safety.txt: Making AI vulnerability reporting easy
- What's the best Myprotein flavour? I tried 23 of them to find out.
- OHGOOD: A coordination body for compute governance
- What is AI alignment? (external)
- What risks does AI pose? (external)
- How are AI companies doing with their voluntary commitments on vulnerability reporting?