Research Labs at Meridian

Our labs are long-term, grant‑funded teams that blend theoretical rigour with engineering speed. Based in‑house at Meridian, they share workspace, compute and research‑manager support, and welcome external collaborators and visiting scholars. Each lab sets annual research goals and publishes its milestones publicly.

  • MATHEMATICS OF SAFE AI LAB

    Launch window  Q3 2025 · Cambridge, UK

    Mission
    Develop algebraic and probabilistic methods that yield machine‑checkable safety guarantees for frontier AI models, bridging formal proofs and empirical stress‑tests.

    Research themes:

    Compositional safety proofs – verify that safe sub‑modules remain safe when integrated.

    Approximate correctness bounds – derive probabilistic guarantees when exact proofs are intractable.

    Verification‑aware training loops – embed formal proof obligations directly into the optimisation process.

    Benchmark suite – public tasks for evaluating proof‑guided robustness.
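    The compositional‑safety theme above can be illustrated with a toy machine‑checked proof. This Lean 4 sketch is purely illustrative and assumes nothing about the lab's actual formalism: `Safe`, `f`, and `g` are hypothetical placeholders for a safety predicate and two sub‑modules.

    ```lean
    -- Illustrative only: if two components each preserve a safety
    -- predicate, then their composition preserves it as well.
    theorem comp_safe {α : Type} (Safe : α → Prop) (f g : α → α)
        (hf : ∀ x, Safe x → Safe (f x))
        (hg : ∀ x, Safe x → Safe (g x)) :
        ∀ x, Safe x → Safe (g (f x)) := by
      intro x hx
      exact hg (f x) (hf x hx)
    ```

    Real systems compose in richer ways than plain function composition, which is precisely where the research difficulty lies.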

  • CHAIN‑OF‑THOUGHT ALIGNMENT LAB

    Launch window  Late 2025 · Cambridge / Hybrid

    Mission
    Scale interpretability techniques to trace, evaluate and steer multi‑step reasoning in large language models, enabling verifiable chain‑of‑thought alignment.

    Research themes:

    Path tracing – attribute outputs to intermediate reasoning tokens.

    Intervention techniques – edit chains to prevent reasoning drift.

    Faithfulness metrics – quantify how closely a model's stated rationale reflects the computation that actually produced its answer.

    Human‑in‑the‑loop tooling – fast visual debuggers for alignment researchers.
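    As a purely illustrative sketch of the faithfulness‑metrics theme (not the lab's actual methodology), one minimal causal check asks whether removing the chain‑of‑thought changes the model's answer; `model`, `question`, and `chain_of_thought` below are hypothetical stand‑ins for any prompt‑to‑answer callable and its inputs.

    ```python
    def faithfulness_score(model, question, chain_of_thought):
        """Return 1.0 if deleting the rationale flips the answer, else 0.0.

        A rationale that can be removed without changing the answer was,
        at best, not causally necessary for it.
        """
        with_cot = model(f"{question}\n{chain_of_thought}\nAnswer:")
        without_cot = model(f"{question}\nAnswer:")
        return 1.0 if with_cot != without_cot else 0.0

    # Toy model for demonstration: answers "yes" only when the
    # rationale containing "therefore" is present in the prompt.
    def toy(prompt):
        return "yes" if "therefore" in prompt else "no"

    print(faithfulness_score(toy, "Is 7 prime?",
                             "7 has no divisors but 1 and 7, therefore prime."))
    # → 1.0
    ```

    Binary flip tests are the crudest instance of this idea; published metrics typically average over perturbations of the rationale rather than deleting it wholesale.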

  • Questions?

    Email info@meridiancambridge.org and we’ll connect you to the relevant PI.