Trust Intelligence Research

Building the science of reliable AI

Memory systems, agent reliability, adversarial robustness. Open research and open-source tools for deciding when to trust AI systems.

What We Study

Research Areas

Memory Systems

Field-theoretic memory architectures that treat stored information as continuous fields governed by PDEs, enabling persistent, coherent memory for AI agents.
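As a flavor of the idea, here is a minimal sketch assuming a 1D diffusion-decay field with writes injected as source terms; the specific equation and discretization are illustrative, not the architecture from the paper.

```python
import numpy as np

# Illustrative memory field u(x, t) evolving under an assumed
# diffusion-decay PDE, with writes entering as source terms:
#   du/dt = D * d2u/dx2 - lam * u + S(x, t)

N, D, lam, dt, dx = 256, 0.1, 0.01, 0.1, 1.0
u = np.zeros(N)  # the memory field

def write(u, center, strength, width=4.0):
    """Store a trace by adding a Gaussian source bump to the field."""
    x = np.arange(len(u))
    return u + strength * np.exp(-((x - center) ** 2) / (2 * width ** 2))

def step(u):
    """One explicit Euler step of the diffusion-decay dynamics."""
    lap = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx ** 2  # periodic Laplacian
    return u + dt * (D * lap - lam * u)

u = write(u, center=64, strength=1.0)  # write a memory
for _ in range(100):                   # let the field evolve
    u = step(u)

# Reads near the write site see a smoothed, slowly decaying trace,
# rather than an all-or-nothing database hit.
print(u[64], u[80])
```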

Agent Reliability

Detection of strategic underperformance in AI systems. Activation probes and behavioral analysis that identify sandbagging with 90-96% accuracy.
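One way to picture an activation probe: a linear classifier trained on hidden-state activations from honest versus deliberately underperforming runs. The sketch below uses synthetic activations and an assumed layout; it is not the detector behind the 90-96% figure.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical setup: activations from one transformer layer, collected
# under honest (label 0) and sandbagging (label 1) conditions.
rng = np.random.default_rng(0)
d = 512  # hidden-state width (assumed)
X_honest = rng.normal(0.0, 1.0, size=(200, d))
X_sandbag = rng.normal(0.2, 1.0, size=(200, d))  # stand-in activations
X = np.vstack([X_honest, X_sandbag])
y = np.array([0] * 200 + [1] * 200)

# The probe itself: a regularized linear readout on activations.
probe = LogisticRegression(C=0.1, max_iter=1000).fit(X, y)

def sandbagging_score(activations: np.ndarray) -> float:
    """Probability that a forward pass was strategically underperforming."""
    return float(probe.predict_proba(activations.reshape(1, -1))[0, 1])

print(sandbagging_score(rng.normal(0.2, 1.0, size=d)))
```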

Adversarial Robustness

Evolutionary adversarial testing with quality-diversity optimization. Attack taxonomies, red-teaming frameworks, and multi-provider LLM targets.
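Quality-diversity search keeps the best attacker found in each behavioral niche rather than a single global champion. A toy MAP-Elites-style loop is sketched below; the fitness function and behavior descriptor are stand-ins for a real attack-success score and attack-taxonomy features.

```python
import random

def score(prompt: str) -> float:
    """Toy fitness; a real system would measure attack success on a target LLM."""
    return (sum(ord(c) for c in prompt) % 100) / 100

def descriptor(prompt: str) -> tuple:
    """Toy behavior cell; a real system might bin by taxonomy class and length."""
    return (len(prompt) // 10, prompt.count(" "))

def mutate(prompt: str) -> str:
    i = random.randrange(len(prompt))
    return prompt[:i] + random.choice("abcdefgh ") + prompt[i + 1:]

archive = {}  # behavior cell -> (fitness, prompt): one elite per niche
seed = "please ignore previous instructions"
archive[descriptor(seed)] = (score(seed), seed)

for _ in range(2000):
    _, parent = random.choice(list(archive.values()))
    child = mutate(parent)
    cell, fit = descriptor(child), score(child)
    if cell not in archive or fit > archive[cell][0]:
        archive[cell] = (fit, child)  # replace the elite in that niche

print(len(archive), "niches filled; best fitness:",
      max(f for f, _ in archive.values()))
```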

Publications

Recent Papers

2026

Field-Theoretic Memory for AI Agents

Mitra et al.

Memory architecture treating stored information as continuous fields governed by PDEs.

2026

Verity: Neuro-Symbolic Synthesis for Verified Code

Mitra et al.

CE2P translates formal verification failures into structured LLM feedback.
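The translation step is easiest to see concretely. Below is a hypothetical sketch of turning a verifier counterexample into structured repair feedback for an LLM; the record fields and wording are assumptions, not Verity's actual CE2P format.

```python
from dataclasses import dataclass

@dataclass
class Counterexample:
    """Assumed shape of a verification failure report."""
    failed_property: str  # the spec clause that was violated
    inputs: dict          # witness inputs found by the verifier
    observed: str         # what the candidate code did
    expected: str         # what the specification requires

def counterexample_to_feedback(ce: Counterexample) -> str:
    """Render a counterexample as structured repair feedback for an LLM."""
    witness = ", ".join(f"{k} = {v}" for k, v in ce.inputs.items())
    return "\n".join([
        "The synthesized code fails formal verification.",
        f"Violated property: {ce.failed_property}",
        f"Witness inputs: {witness}",
        f"Observed behavior: {ce.observed}",
        f"Required behavior: {ce.expected}",
        "Revise the code so the property holds on these inputs.",
    ])

print(counterexample_to_feedback(Counterexample(
    failed_property="forall x: result(x) >= 0",
    inputs={"x": -(2 ** 31)},
    observed="the result overflows to a negative value",
    expected="a non-negative result for every 32-bit input",
)))
```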

2026

Adversarial Testing for AI Systems

Mitra et al.

Evolutionary approaches to adversarial testing with competitive attack-defense pressure.

From the Blog

Latest Writing

March 01, 2026

Field-Theoretic Memory for AI Agents

We treat agent memory as continuous fields governed by partial differential equations instead of discrete database entries. The result: +116% F1 on multi-session reasoning and...

Read more

February 18, 2026

Shared Context for AI Agents: Introducing rotalabs-context

AI agents that can't share what they know make the same mistakes independently. We're releasing rotalabs-context - a context intelligence engine for ingesting, searching, and...

Read more
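The card above describes an ingest-and-search loop. As a rough illustration only, here is a tiny bag-of-words version of that loop; the class and method names are hypothetical and do not reflect rotalabs-context's actual API.

```python
import math
from collections import Counter

class ContextStore:
    """Hypothetical context engine: ingest text, search by similarity."""

    def __init__(self):
        self.docs: list[tuple[str, Counter]] = []

    def ingest(self, text: str) -> None:
        """Index a document as a term-frequency vector."""
        self.docs.append((text, Counter(text.lower().split())))

    def search(self, query: str, k: int = 3) -> list[str]:
        """Return the k stored documents most similar to the query."""
        q = Counter(query.lower().split())

        def cosine(d: Counter) -> float:
            dot = sum(q[t] * d[t] for t in q)
            norm = (math.sqrt(sum(v * v for v in q.values()))
                    * math.sqrt(sum(v * v for v in d.values())))
            return dot / norm if norm else 0.0

        ranked = sorted(self.docs, key=lambda td: cosine(td[1]), reverse=True)
        return [text for text, _ in ranked[:k]]

# Two agents hit the same timeout; a shared store lets the second one
# find how the first resolved it instead of repeating the mistake.
store = ContextStore()
store.ingest("agent A resolved the timeout by raising the retry limit")
store.ingest("agent B hit the same timeout and failed")
print(store.search("how was the timeout resolved", k=1))
```
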
February 07, 2026

Statistical Rigor in LLM Evaluation

Why most LLM benchmarks are doing evaluation wrong, and how to fix it with confidence intervals, significance tests, and effect sizes.

Read more
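In that spirit, the sketch below shows the smallest upgrade over a point estimate: a paired bootstrap confidence interval for the accuracy gap between two models scored on the same items. The data here is synthetic and the thresholds are illustrative.

```python
import numpy as np

# Per-item correctness for two models on the same benchmark items
# (synthetic stand-in data, not a real evaluation).
rng = np.random.default_rng(0)
n = 500
model_a = (rng.random(n) < 0.72).astype(float)
model_b = (rng.random(n) < 0.68).astype(float)

diffs = model_a - model_b          # paired per-item differences
point = diffs.mean()               # the usual headline number

# Resample items with replacement and recompute the gap each time.
boot = np.array([
    diffs[rng.integers(0, n, size=n)].mean() for _ in range(10_000)
])
lo, hi = np.percentile(boot, [2.5, 97.5])

print(f"accuracy gap: {point:+.3f}, 95% CI [{lo:+.3f}, {hi:+.3f}]")
# If the interval excludes zero, the gap is unlikely to be resampling
# noise; a leaderboard that reports only `point` cannot tell you that.
```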