Senior Technical Research Manager

Help shape the future of AI safety research in Cambridge

We are no longer accepting applications for this role. If you’re interested in working with us, please fill out our Expression of Interest form.

We're looking for a Senior Technical Research Manager to lead research support across our programs. This is a high-trust, technically grounded role: you'll work directly with researchers to scope projects, refine research questions, review progress, and provide mentorship.

You'll need enough technical depth to engage credibly with alignment research and enough judgment to identify promising directions.

About Meridian

Meridian is a research hub and community for AI safety researchers based in Cambridge, UK. We host visiting researchers from institutions including UK AISI, think tanks, labs, and top universities; run structured research programs for early- and mid-career researchers; and convene the local AI safety community through workshops, reading groups, and events.

Our programs, including MARS (Mentorship for Alignment Research Students) and our Visiting Researchers Programme (VRP), have supported over 300 researchers and contributed work to NeurIPS, ICML, and ICLR, as well as to the UK government's AI agenda.

What you’ll do

Research mentorship and management

  • Scope and review research projects for MARS fellows and visiting researchers

  • Provide technical feedback on proposals, drafts, and research plans

  • Run regular check-ins with researchers; identify and unblock obstacles

  • Help early-career researchers develop a taste for which questions matter and why

Community and convening

  • Run lab meetings, journal clubs, and workshops

  • Stay plugged into the broader AI safety research landscape

  • Connect researchers with relevant people, papers, and opportunities

Program development

  • Help shape how MARS and VRP evolve over time

  • Contribute to selection decisions for fellows and visitors

  • Work with leadership on research strategy

What we’re looking for

We're looking for someone with:

  • A PhD, or equivalent research experience, in AI safety, machine learning, or a relevant technical field

  • Experience mentoring or supervising researchers

  • Strong familiarity with the AI safety research landscape: current debates, agendas, who's working on what

  • The judgment to distinguish promising research directions from dead ends

Strong candidates may also have:

  • Publications in AI safety or ML (especially first-author work)

  • Experience managing research programs or collaborations across multiple institutions

  • A network in the AI safety community

Additional details

Salary: £65,000–£80,000, depending on experience

Benefits:

  • 28 days paid leave (including public holidays)

  • Pension scheme with employer matching up to 4%

  • £1,000 annual professional development fund

  • Conference travel budget

  • Tech stipend

  • Daily lunch and snacks

  • Visa sponsorship available

Logistics

  • Location: In-person in Cambridge, UK

  • Contract: Full-time (40 hours/week); initial 1-year term, renewable annually

  • Start date: Flexible, from March 2026


What a week might look like

Monday: Check-ins with the current MARS cohort — one fellow is stuck on how to frame their interpretability project; another needs help narrowing scope. Review weekly updates from visiting researchers. Skim a new paper on activation steering that's getting attention.

Tuesday: 1:1s with two VRP researchers. One is making good progress; the other is going down a rabbit hole that probably won't pan out — you help them see it before they've sunk another month. Office hours in the afternoon; a few people drop by with half-formed ideas they want to stress-test.

Wednesday: Run the weekly lab meeting — this week it's a visiting researcher from MATS presenting early results. Afterwards, a call with a mentor at Anthropic about a fellow who might be a good fit for their team.

Thursday: Deep work day. You're reviewing three research proposals for the next MARS cohort and giving detailed written feedback on a draft that's almost ready for submission. Block out time to read a paper on sleeper agents.

Friday: Journal club on a recent ICML paper. Grab coffee with a biosecurity researcher who's thinking about pivoting to AI safety and wants your read on the landscape. Wrap up with the leadership team to talk about VRP strategy.

How to apply

Submit an application through the form below. Successful applicants will be invited to interview with the team and complete a work test.

If you’re excited by this role but unsure whether you’re a perfect fit, we encourage you to apply anyway — we’d love to hear from you.

If you have additional questions about the role, email Hannes Whittingham (hannes@meridiancambridge.org).