Meridian Visiting Researchers Programme

Accelerate your AI safety and governance research in Cambridge

Applications open Spring 2025. Sign up to our mailing list to be notified when applications open.

The Meridian Visiting Researchers Programme brings talented researchers to Cambridge for 3-6 month residencies focused on AI safety and governance. We provide workspace, community, and collaborative opportunities for researchers working on technical alignment, governance, interpretability, forecasting, and other research aimed at ensuring that AI benefits humanity.

  • Accelerate your research

    Meridian provides an energetic and fast-paced environment in which to advance your research. Many visitors tell us they accomplish more here in a few months than in a year working alone.

  • Plug into Cambridge's AI safety scene

    Cambridge has become a hotspot for AI safety work in Europe. Being here will give you the opportunity to engage with a large community of researchers with a wide range of expertise.

  • Structure with flexibility

    We provide enough structure (seminars, reading groups) to create momentum, while leaving plenty of space for deep work and independent research.

  • Find collaborators

    Researchers often discover colleagues with complementary skills here, and these connections frequently develop into ongoing collaborations that continue long after their time at Meridian.

  • Research guidance

    Receive guidance and advice from experienced researchers within our community, as well as dedicated research support to ensure your projects remain on track.

What's Included

  • Dedicated workspace in our Cambridge office with 24/7 access

  • Travel support to and from Cambridge

  • Research community access with regular seminars and events

  • Networking opportunities with other AI safety researchers

  • Overnight workshops and retreats for meeting other researchers and developing your projects

  • Flexible scheduling options (3-6 month stays, and beyond)

  • Visa support to live in the UK and conduct your research here

Who should apply?

  • Researchers looking to pivot into AI safety, security, or governance

  • Existing AI safety, security, or governance researchers looking to build their network and portfolio

  • Graduates of programmes such as MATS, MARS, ARENA, ML4G, SPAR, and ERA with AI safety knowledge looking to transition to full-time research

  • PhD candidates, recent graduates, or postdoctoral researchers exploring AI safety directions

  • PIs interested in incorporating AI safety into their research agenda

  • People based in Cambridge or willing/excited to work there and contribute to strengthening Cambridge as a hub for AI safety research

Research Areas

Meridian's Visiting Researchers Programme welcomes researchers working across a broad range of AI safety, security, and governance topics. Our priority research areas include:

Technical Safety Research

  • Evaluating AI capabilities: Developing rigorous methods to assess the capabilities of advanced AI systems

  • AI interpretability and transparency: Making AI systems more understandable to humans

  • Model organisms of misaligned AI: Creating controlled examples of misalignment to study safety properties

  • Information security for safety-critical AI systems: Securing AI systems against threats and vulnerabilities

  • AI control and control evaluation: Designing and testing mechanisms for maintaining human control over AI

  • Making AI systems adversarially robust: Ensuring AI systems remain reliable under adversarial conditions

  • Scalable oversight: Developing methods to effectively supervise increasingly capable AI systems

  • Understanding cooperation between AI systems: Studying multi-agent dynamics and cooperation mechanisms

Forecasting and Modeling

  • Economic modeling of AI impact: Analyzing how AI development will affect economic systems

  • Forecasting AI capabilities and impacts: Predicting the trajectory and consequences of AI development

  • Identifying concrete paths to AI takeover: Mapping potential failure modes and their mitigations

Governance and Policy

  • AI lab governance: Developing responsible practices for AI research organizations

  • UK/US AI policy: Creating national frameworks for AI development and deployment

  • International AI governance: Building coordination mechanisms across national boundaries

  • Legal frameworks for AI: Addressing liability, regulation, and rights issues

  • Developing technology for AI governance: Building tools to support effective AI governance

Ethics and Values

  • AI welfare: Considering the moral status of artificial systems and ethical obligations toward them

  • Value alignment: Ensuring AI systems act in accordance with human values and intentions

  • Societal impacts of transformative AI: Analyzing broader implications for society and human welfare


This list is not exhaustive, and we welcome applications from researchers working on related areas not explicitly listed above.

FAQs

  • We created this programme because good research doesn't happen in isolation. By bringing together people working on different aspects of AI safety in one physical space, we've seen how chance conversations lead to better research. The programme is about creating a supportive social and research environment rather than upskilling or directing what researchers work on.

  • The Visiting Researchers Programme is specifically designed for researchers who want to continue their existing work while benefiting from our Cambridge community for 3-6 months. Unlike longer-term positions or fellowships, this programme focuses on providing space, light-touch research support, and connections rather than intensive research supervision or funding.

  • We welcome applications from researchers at all career stages working on AI safety-related topics. This includes PhD students, postdoctoral researchers, independent researchers, industry professionals, and academic faculty. Although we have no specific requirements, we generally expect researchers to have funding for their research and to be sufficiently experienced to conduct independent work with light-touch support.

  • While most of your time will be flexible, we do have some baseline expectations:

    • Active participation in the community (attending regular events, engaging with other researchers)

    • Regular updates on your research progress

    • A final deliverable at the end of your visit, typically in the form of a written research summary, blog post, paper draft, or presentation to the Meridian community

    The specific format of your final deliverable can be tailored to your research style and project, but we expect all visiting researchers to produce something tangible by the end of the programme.

  • There's no standard schedule, as researchers manage their own time. Many researchers arrive mid-morning, work on their projects, participate in scheduled events like seminars or reading groups, have informal discussions over lunch or coffee, and continue working through the afternoon.

  • Most researchers stay between 3 and 6 months. We generally prefer visits of at least 3 months to allow enough time for integration into our community and research progress. We are open to residencies longer than 6 months; please indicate your interest on the application. We understand that researchers often have other commitments and can work with you to find an arrangement that suits your schedule.

  • Absolutely! We don't require researchers to start new projects for their visit. If you already have a research project underway when you apply or join us, you're welcome to continue that work at Meridian; we're interested in supporting promising research at various stages of development.

  • We do not provide direct research funding for visiting researchers. We expect most researchers will have secured external funding from organizations like Open Philanthropy, ARIA, LTFF, or other funding bodies before joining the programme. What we do provide is:

    • Travel assistance to and from Cambridge

    • Help finding suitable accommodation

    • Workspace and research facilities

    • Community access and networking opportunities

    In exceptional cases, we may be able to help identify potential funding sources, but this is not guaranteed. If you haven’t yet secured funding, you may be interested in our Research Accelerator Programme.

  • We use a rolling admissions process, reviewing applications as they come in rather than only at fixed deadlines. We typically review applications within 2-3 weeks of submission. If we think there might be a good fit, we'll schedule a brief interview to discuss your research and answer any questions.

  • No. While we want to understand what you plan to work on, we don't expect every applicant to have a completely developed research proposal. We welcome both new project ideas and continuations of existing work. Your application should clearly articulate your research questions, methodologies, and expected outcomes, but we understand that these may evolve during your time with us.

  • Strong applications typically demonstrate:

    • Clear research direction relevant to AI safety

    • Evidence of research ability and independent thinking

    • Thoughtfulness about how your time at Meridian would be valuable

    • Potential to both contribute to and benefit from our community

  • Absolutely. We welcome international researchers and can assist with visas and travel.

    Most international Visiting Researchers will need either a Standard Visitor visa or, for longer stays, a specific research visa. The appropriate visa category depends on your nationality, the duration of your stay, and the specific nature of your research activities.

    The Standard Visitor visa typically allows stays of up to 6 months. For academic visitors on sabbatical leave conducting research, stays of up to 12 months may be permitted.
    We will assist with visa costs and processing should you be accepted into the programme.

    Important Disclaimer: While we provide supporting documentation and guidance for visa applications, all visas are subject to approval by the UK government. Meridian cannot guarantee visa approval, and researchers are ultimately responsible for ensuring they have the appropriate immigration permission to enter the country and participate in the programme.

  • We understand that researchers often have other professional commitments. As long as these commitments don't significantly impact your ability to engage with the programme, we're happy to work with you to accommodate conferences, workshops, or other short-term obligations during your stay.

  • No. The programme is designed for in-person participation. We generally expect researchers to be physically present in Cambridge for the majority of their visit.


If your question isn't answered here, please email us at info@meridiancambridge.org.