Asking for help
You’re probably reading this because I’ve sent it to you after you’ve contacted me.
I am very happy to receive messages with clear, specific and relevant asks. Unfortunately, I didn’t think your message had such an ask.
An unclear ask:
I've been reading about recent advancements in AI and it's got me thinking. Given your background, I imagine you might have some valuable perspectives on this topic.
I'm curious about the implications of AI for our industry and society at large. There seems to be so much to consider. I've been trying to wrap my head around it all, but it's quite complex.
It’d be great to connect, and then maybe we can work on this more in future.
A good rule of thumb: a specific ask is one that’s only relevant to your situation, and wouldn’t apply equally to 1,000+ other people.
A non-specific ask:
Do you know of any good ways to get more familiar with AI? I'm not sure where to start or what exactly I'm looking for, but I feel like it's an important topic to understand better.
Any thoughts or suggestions you might have would be greatly appreciated. Thanks in advance for your help!
I don’t usually consider ‘Can we chat for 30 minutes?’ on its own to be a specific ask. I’d prefer you lay out, at least briefly, the questions you want answered or the areas you’re confused about. That gives me the context to decide whether I want to answer them, and whether to do so in writing or on a call.
Also, please don’t ‘ask to ask’:
I had some questions about AI safety. Would it be okay if I send them to you to look over?
Instead, just ask your questions up front.
A non-relevant ask:
I hope this email finds you well. I'm reaching out because I'm currently working on implementing direct preference optimization (DPO) for fine-tuning large language models, and I have a few specific technical questions I was hoping you could help clarify:
- In your experience, what's the optimal batch size when training with DPO? I'm particularly interested in how this might differ from standard supervised fine-tuning.
- How do you typically handle the reference model distribution in practice? Do you use a frozen copy of the initial model, or do you have strategies for updating it during training?
I appreciate any insights you can provide on these points. If it's easier to discuss this over a call, I'd be happy to schedule one at your convenience.
This would probably be a pretty good email to a DPO expert (provided the answers aren’t easily discoverable online, and the message gives enough context to answer them). But I am not a DPO expert.
In general, relevance is the criterion I mind least: if your ask is clear and specific but not something I can help with, I’ll just tell you so. So lean towards contacting me if you’re not sure about relevance, as long as you do have a clear and specific ask.
Not every message needs to meet all three criteria, though your message is much more likely to get a substantive response (and one that’s more useful to you) if it does.
Please don’t take this article the wrong way! I worry it might come across as though I’m unwilling to help people who are new to the field, or unsure what they’re doing. The opposite is true: I run the AI Safety Fundamentals courses because I believe it’s important to help grow the field. However, if I tried to answer every vague email I received, I’d likely help fewer people, and help them less.
You might also find these articles useful:
General advice
If you contacted me for general advice on AI, see:
- Session 1 of the AISF alignment curriculum
- The AISF resources page, particularly the section on ‘Introductions to ML engineering’
If you contacted me for general advice on technical AI safety, see:
- The AISF alignment curriculum
- The AISF resources page, particularly the section on ‘Introductions to AI safety’
If you contacted me for general advice on AI governance, see: