About Rotalabs

An independent research initiative focused on reliable, verifiable AI systems.

Our Mission

Building trust in autonomous AI

As AI agents become autonomous — reasoning, acting, coordinating — the question shifts from "what can AI do?" to "can we trust what it does?" We publish open research and build open-source tools for AI trust, reliability, and verification. Our work stands on its own merits — reproducible methods, public code, and rigorous evaluation.

Founded in 2024 · 12 open-source packages · 3 divisions · 100% open source

What We Do

Research & Open Source

We publish on AI reliability, evaluation methods, and verification — sandbagging detection, adversarial evaluation, reasoning verification, and trust dynamics in multi-agent systems.

We build and maintain libraries for trust-aware AI systems. All packages are published on PyPI and npm under the AGPL-3.0 license.

12 packages · 9 research areas · 2 languages

Our Approach

Research Principles

Our research is guided by a few core principles that shape everything we build and publish.

Reproducibility First

Every claim is backed by code. Our open-source packages implement the methods described in our papers. If you can't reproduce it, it doesn't count.

Adversarial Thinking

We assume AI systems will be deployed in adversarial conditions — gamed benchmarks, strategic underperformance, coordinated deception. Our methods are designed to hold up under these assumptions.

Statistical Rigor

LLM evaluation is plagued by noisy benchmarks and overfit leaderboards. We develop evaluation methods grounded in statistical testing, distributional analysis, and formal verification where possible.

Open by Default

Research is published openly. Code is open-source. We believe trust research must itself be trustworthy — and that starts with transparency.

Our People

Team

Rotalabs was founded by engineers and researchers with backgrounds in large-scale distributed systems, cloud infrastructure, and AI/ML — with prior experience at Google and in production ML systems serving hundreds of millions of users.

Our team combines deep systems engineering expertise with applied AI research. We've built and operated infrastructure at hyperscale, and we bring that same rigor to the problem of AI reliability and trust.

We are currently in stealth and will share more about the team as we move into our next phase. In the meantime, our research and open-source work speak for themselves.

Structure

The Rota Ecosystem

Rota, Inc. is a Delaware C-Corporation headquartered in San Francisco. Research and products are organized across three divisions.

Our research is open. Applied work builds on published methods.

Careers

Join us

We're looking for people excited about AI reliability, evaluation, and systems engineering. If you want to work on hard problems in the open, we'd like to hear from you.

Research

AI Safety Researchers

Interpretability, adversarial ML, formal verification, multi-agent trust. Publish openly and build tools from your research.

Engineering

Systems Engineers

Build trust infrastructure at scale. Python, Rust, distributed systems, GPU kernels. Production ML experience preferred.

Open Source

Contributors

All our packages are open source. Issues, PRs, and discussions welcome across 12 repositories on GitHub.

Get in Touch

Contact

Location

2261 Market Street STE 22728
San Francisco, CA 94114