FAR.AI is seeking a Research Lead to develop and lead a research agenda to reduce catastrophic risks from advanced AI. You'll build and lead a team executing this agenda - setting research direction, mentoring Members of Technical Staff to scale your vision, and staying close enough to the work to write code and run experiments yourself when it matters. The aim is research that changes how AI labs and governments behave, not just research that gets published. This role is a strong fit if you want to work in an impact-driven environment with high autonomy, pursuing empirically grounded, scalable ML safety work.
About the Role
Research Leads define and own a research workstream end-to-end. Day-to-day, that means:
- Articulate a research agenda with a clear theory of change for mitigating catastrophic risks from human-level or superhuman AI systems, and/or vastly increasing the upside of such systems.
- Grow and lead a team of technical staff in pursuit of this agenda, either directly or in partnership with an engineering co-lead.
- Lead novel research projects where there may be unclear markers of progress or success.
- Share your research findings through written content (e.g. academic publications, blog posts) and presentations (e.g. ML conferences, policymaker briefings) to drive adoption and change.
- Mentor and coach junior team members in research skills and ML engineering.
- Contribute to the FAR.AI intellectual environment, for example by giving feedback on early-stage proposals.
This role would be a great fit if you:
- Want to work on the most impactful research directions, alongside mission-driven colleagues who'll push them forward with you.
- Wish to pursue empirically grounded, scalable research directions that lean, technically strong teams can drive forward.
- Value the ability to speak freely. We don't censor our researchers - we just ask that you protect confidential information and make clear when you're speaking personally or on behalf of the organization.
- Want to advise and collaborate with governments, leading AI companies, and academics. We're a small organization that punches above its weight by working closely with these partners - through red-teaming, technical standards work, and research collaborations.
This role would be a poor fit if you:
- Prefer solo IC research to leading a team toward a shared agenda. Some people can do great research that way, but in this role we're looking for someone whose research direction is strong enough that other excellent researchers want to build it with them.
- Prioritize novelty and intellectual elegance over impact. We care about both - a mathematically elegant solution to AI safety would be wonderful - but when we have to choose, we choose what makes AI safer in practice.
- Can only work with the largest compute clusters available at industry labs or need to be compensated with equity in a rapidly growing startup. We offer competitive salaries and sizable compute budgets on a cluster that we manage, but if you value these things over having a positive impact on the future, then you may be more suited to a for-profit lab.
About You
To be a strong candidate for the Research Lead role, you likely:
- Have a strong existing research track record in AI or another highly technical subject (e.g. CS, math, physics).
- Have a clear view of which safety research directions are likely to matter most over the next few years, and why.
- Have either (a) a clear research agenda you'd pursue at FAR.AI, with a theory of change explaining why it's valuable, or (b) a strong track record and a research space you'd sharpen into an agenda over your first months. We assess both paths against the same bar - depth of articulation at application is itself a signal about expected runway.
- Have led a team, mentored graduate students, or supported early-career researchers through fellowship programs. Informal leadership in flatter organizations counts - we look at substance, not titles.
- Can effectively communicate novel methods and solutions to both technical and non-technical audiences.
- Hold a PhD or have 2+ years of research experience in computer science, artificial intelligence, machine learning, or statistics.
It is preferable if you:
- Have an established publication record in AI safety.
- Are comfortable writing grant proposals and navigating collaborations with other organizations or external research groups.
If you are missing key leadership experience or are earlier in your career, we encourage you to consider the open Research Scientist pathway and invite you to contribute to one of our existing agendas.
Logistics
If based in the USA or Singapore, you will be an employee of FAR.AI (a 501(c)(3) research non-profit in the USA; a non-profit CLG in Singapore). Outside the USA and Singapore, you will be employed via an employer of record (EOR) on behalf of FAR.AI, or as a contractor.
- Location: Both remote and in-person (Berkeley, CA or Singapore) are possible. We sponsor visas for in-person employees, and can hire remotely in most countries.
- Hours: Full-time (40 hours/week).
- Compensation: $170,000-$250,000/year depending on experience and location, with the potential for additional compensation for exceptional candidates. We will also pay for work-related travel and equipment expenses. We offer catered lunch and dinner at our offices in Berkeley.
- Application materials: Expect ~1-2 hours of preparation; most materials carry forward from prior job searches. We ask for a CV, a short research direction statement (the form supports both fully formed agendas and developing ones), 2-3 selected works with a brief note on your personal contribution, and a short note on why FAR.AI is a good home for your direction. If you advance to portfolio review, we'll ask for a full research direction statement (1-2 pages, with a theory of change linking your research to real-world implementation; ~1.5-2 hours, due within about a week).
- Process: After application: a portfolio review (async), a 60-minute mutual fit call, a research deep-day (~3.5 hours live, including an open talk to FAR.AI research staff and two interview sessions), a 5-day paid work trial, structured reference calls, and a final decision panel. Typical elapsed time: 4-6 weeks. Total candidate time end-to-end is ~50 hours, with the paid work trial being the bulk. If a 5-day block isn't feasible for you, reach out - we can discuss alternatives.
If you have any questions about the role, please do get in touch at talent@far.ai.