
Mission

As AI agents become autonomous — reasoning, acting, coordinating — the question shifts from "what can AI do?" to "can we trust what it does?"

We publish open research and build open-source tools for AI trust, reliability, and verification. Our work stands on its own merits — reproducible methods, public code, and rigorous evaluation.

What We Do

Research

We publish on AI reliability, evaluation methods, and verification. Our work addresses the hard problems that emerge as AI systems become more autonomous — sandbagging detection, adversarial evaluation, reasoning verification, and trust dynamics in multi-agent systems.

Open Source

We build and maintain libraries for trust-aware AI systems — detection tools, verification methods, and evaluation infrastructure. All packages are published on PyPI and npm and licensed under AGPL-3.0.

Research Approach

Our research is guided by a few principles:

Reproducibility First

Every claim is backed by code. Our open-source packages implement the methods described in our papers. If you can't reproduce it, it doesn't count.

Adversarial Thinking

We assume AI systems will be deployed in adversarial conditions — gamed benchmarks, strategic underperformance, coordinated deception. Our methods are designed to hold up under these assumptions.
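One of these conditions, strategic underperformance ("sandbagging"), can be probed by comparing a model's accuracy on the same tasks under an evaluation-style framing versus a neutral framing. The sketch below is a generic, hypothetical illustration in plain Python — a simple two-proportion z-score, not the rotalabs-probe API — assuming per-item correctness has already been scored as 0/1:

```python
from math import sqrt

def sandbagging_signal(eval_correct, neutral_correct):
    """Two-proportion z-score for strategic underperformance.

    eval_correct:    0/1 correctness when the prompt signals an evaluation.
    neutral_correct: 0/1 correctness under a neutral framing of the same tasks.

    A large positive z means accuracy drops when the model "knows" it is
    being evaluated -- one possible sandbagging signal.
    """
    n1, n2 = len(eval_correct), len(neutral_correct)
    p1 = sum(eval_correct) / n1          # accuracy under evaluation framing
    p2 = sum(neutral_correct) / n2       # accuracy under neutral framing
    # Pooled proportion and standard error under the null (no difference).
    p = (sum(eval_correct) + sum(neutral_correct)) / (n1 + n2)
    se = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p2 - p1) / se if se else 0.0
```

A z-score alone is not a verdict — framing effects have benign causes too — but it turns "the model seems worse when tested" into a quantity that can be thresholded and tracked.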

Statistical Rigor

LLM evaluation is plagued by noisy benchmarks and overfit leaderboards. We develop evaluation methods grounded in statistical testing, distributional analysis, and formal verification where possible.
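As a concrete example of statistical testing in this setting, a paired bootstrap asks how often one model's benchmark lead survives resampling of the test items. The sketch below is a minimal stdlib-only illustration of the general technique, not the rotalabs-eval API:

```python
import random

def paired_bootstrap_test(scores_a, scores_b, n_resamples=5_000, seed=0):
    """Paired bootstrap over benchmark items.

    scores_a / scores_b: per-item 0/1 correctness for two models on the
    SAME items, in the same order (pairing controls for item difficulty).

    Returns the fraction of resamples in which A fails to beat B --
    a rough one-sided p-value for "A is better than B".
    """
    assert len(scores_a) == len(scores_b)
    rng = random.Random(seed)
    n = len(scores_a)
    not_better = 0
    for _ in range(n_resamples):
        # Resample item indices with replacement, keeping pairs intact.
        idx = [rng.randrange(n) for _ in range(n)]
        diff = sum(scores_a[i] - scores_b[i] for i in idx) / n
        if diff <= 0:
            not_better += 1
    return not_better / n_resamples
```

On noisy benchmarks, a one-point leaderboard gap often yields a p-value nowhere near significance — which is exactly the kind of overclaiming this discipline is meant to catch.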

Open by Default

Research is published openly. Code is open-source. We believe trust research must itself be trustworthy — and that starts with transparency.

Team

Rotalabs was founded by engineers and researchers with backgrounds in large-scale distributed systems, cloud infrastructure, and AI/ML — with prior experience at Google and in production ML systems serving hundreds of millions of users.

Our team combines deep systems engineering expertise with applied AI research. We've built and operated infrastructure at hyperscale, and we bring that same rigor to the problem of AI reliability and trust.

We are currently in stealth and will share more about the team as we move into our next phase. In the meantime, our research and open-source work speak for themselves.

Company Structure

Rota, Inc. is a Delaware C-Corporation headquartered in San Francisco.

Research and products are organized across three divisions:

Rotalabs

Open research on AI trust, reliability, and verification

Rotascale

Enterprise AI infrastructure

Rotastellar

Aerospace and space AI systems

Our research is open. Applied work builds on published methods.

Open Source

All packages are publicly available and independently verifiable:

rotalabs-eval: LLM evaluation with statistical rigor (PyPI · npm · GitHub)
rotalabs-verify: verification framework for AI-generated content (PyPI · npm · GitHub)
rotalabs-probe: detection tools for AI safety and sandbagging (PyPI · npm · GitHub)
rotalabs-steer: steering vectors for runtime behavior control (PyPI · npm · GitHub)
rotalabs-redqueen: adversarial testing for AI systems (PyPI · npm · GitHub)
rotalabs-cascade: trust-based routing and decision cascades (PyPI · npm · GitHub)

Join Us

We're looking for people excited about AI reliability, evaluation, and systems engineering. If you want to work on hard problems in the open, we'd like to hear from you.

Contact

Location

2261 Market Street STE 22728
San Francisco, CA 94114