Meridian Visiting Researcher Programme
Where safety research finds its community.
We're looking for researchers passionate about AI safety to join us in Cambridge. Whether you're a PhD student, postdoc, or independent researcher, our visiting programme gives you the space to work on your own projects while connecting with others tackling similar challenges.
What's the Visiting Researcher Programme?
The Programme brings AI safety researchers to Meridian in Cambridge for several months. We've found that good ideas happen when smart people share the same space, even if they're working on different things.
We're pretty flexible about who can apply. The main thing is that you're working on something related to making AI systems safer or more beneficial. Not sure if your work fits? We encourage you to apply anyway.
Most researchers join us at the start of terms (October, January, or April), but we can be flexible if those dates don't work for you.
What you’ll get:
A desk in our Cambridge workspace, open 24/7
Help finding somewhere to live
Travel support to get to Cambridge
Access to our research community
Regular talks, workshops and socials
A chance to participate in our Intensive Weeks (more on that below)
The details
Who should apply:
Researchers with expertise in machine learning, computer science, mathematics, or related fields
Graduates of AI safety programmes such as MATS, MARS, ARENA, ML4G, SPAR, and ERA looking to transition to full-time research
Individuals with established research portfolios interested in transitioning to AI safety
PhD candidates, recent graduates, or postdoctoral researchers exploring AI safety directions
Principal investigators (PIs) interested in incorporating AI safety into their research agendas
People based in Cambridge, or excited to work there and help strengthen Cambridge as a hub for AI safety research
Support included:
Partial or full reimbursement of travel expenses to and from Cambridge, UK (the exact amount will depend on individual circumstances and available funding)
Accommodation during the programme week
Workspace and resources
Meals during programme activities
Apply by March 23, 2025 (anywhere on Earth).
Spaces are limited to approximately 30 participants.
Programme Structure
Days 1-2: Ideation and Team Formation
Explore research directions and meet potential collaborators through introductory sessions, workshops, and our speed cofounder matching process. You'll connect with researchers whose skills complement yours and begin developing project ideas together.

Day 3: Team Consolidation
Form your research team and refine your project focus. By the end of day three, you'll have a clear concept and research direction with your newly formed team.

Days 4-5: Proposal Development
Learn effective grant-writing strategies from experts and develop your proposal with structured feedback. Sessions will cover research impact assessment and alignment with funder priorities.

Day 6: Red Teaming and Feedback
Present your proposal to peers and experts who will identify potential weaknesses. Use this critical feedback to strengthen your proposal's technical content and presentation.

Day 7: Proposal Finalisation
Polish your submission with final technical reviews. By day's end, you'll have a completed proposal ready for the April 15th deadline.
FAQs
What is the purpose of this programme?
The programme is designed to fill the gap between upskilling programmes and full-time research positions for those transitioning into the AI safety field. It provides an institutional home and a supportive environment for researchers, and helps participants develop compelling research proposals for further funding.
How is this different from typical upskilling programmes?
Unlike typical upskilling programmes, this programme specifically targets researchers who already have AI safety knowledge but need help transitioning to full-time research roles. It focuses on team formation, project development, and grant writing rather than teaching technical skills.
Why the emphasis on teams?
Collaboration is an extremely effective learning mechanism. Researchers who might otherwise work independently benefit significantly from working with others they can learn from and receive rapid feedback and coaching. Research teams also typically write much better project proposals than individuals working alone.
How are participants selected?
Selection is based on a review of CVs and prior research experience, evaluation of preliminary project proposals, and assessment of candidates' motivations for entering the AI safety field.
When will I hear back about my application?
Selection decisions are made on a rolling basis. We encourage those who will need more time to make travel arrangements to apply earlier. We aim to notify all applicants by March 24th at the latest.
What costs are covered?
The programme will reimburse some or all travel expenses, provide accommodation for the duration of the programme week, and cover meals and refreshments during programme activities.
Can I apply as an individual, or do I need a team?
You can apply either as an individual or with a pre-formed team; ideal research teams consist of 2-4 members. If you apply as an individual, you'll have opportunities to connect with other participants and form a team through our structured matching process. You are also welcome to submit your final proposal individually if you do not find a suitable team.
What happens after the in-person week?
After the in-person week, teams will submit their proposals to Open Philanthropy by the April 15th deadline. Funding decisions are expected in May-June 2025, with funded projects commencing from June 2025 onwards.
What if our proposal isn't funded?
If grants submitted to Open Philanthropy are unsuccessful, we will be happy to continue working with teams to refine their proposals and submit them to alternative funding sources.
Where will funded teams work?
Funded teams are encouraged to return to Cambridge to conduct their research at the Meridian office and CAISH facilities. We will assist with visas and relocation for the research period.
What research areas are in scope?
Open Philanthropy has identified 21 research areas of interest across five clusters: Adversarial machine learning, Exploring sophisticated misbehavior in LLMs, Model transparency, Trust from first principles, and Alternative approaches to mitigating AI risks. Their RFP provides detailed descriptions of these areas.
What support do funded teams receive?
Funded teams will have access to dedicated workspace at the Meridian office, integration with Cambridge's AI safety research community, optional research management support, and administrative assistance for grant management.
If you have further questions, please contact Gábor Fuisz at gabor@cambridgeaisafety.org.