
Research Areas

Nine focus areas spanning detection, verification, trust, and physical AI.

Agent Reliability
Status: Papers Q1 2026

Detection and verification for autonomous AI. Methods to identify when agents hallucinate, fail, or strategically underperform.

Tags: sandbagging, verification
AI Evaluation Science
Status: Papers Q1 2026

Adversarial evaluation methods that resist gaming. Benchmarks that detect hidden capabilities and strategic behavior.

Tags: benchmarks, adversarial
Memory Systems
Status: Papers Q1 2026

Novel memory architectures for long-horizon agent tasks. Beyond RAG: smooth retrieval and natural temporal dynamics.

Tags: memory, agents
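One way to read "natural temporal dynamics" is recency-weighted retrieval: a memory's semantic relevance decays smoothly with age instead of being cut off at a context window. The sketch below is purely illustrative; the `memory_score` function and its half-life default are invented for this example and are not this group's architecture.

```python
def memory_score(relevance: float, age_seconds: float, half_life: float = 3600.0) -> float:
    """Combine semantic relevance with smooth exponential recency decay.

    relevance:   similarity of the memory to the current query, in [0, 1].
    age_seconds: how long ago the memory was written.
    half_life:   age at which a memory keeps half its original relevance.
    """
    decay = 0.5 ** (age_seconds / half_life)
    return relevance * decay

# A memory exactly one half-life old keeps half its relevance: 0.8 -> 0.4.
score = memory_score(relevance=0.8, age_seconds=3600.0)
```

Ranking candidate memories by such a score lets fresh-but-mediocre matches compete with stale-but-exact ones, which is the kind of smooth trade-off a hard retrieval cutoff cannot express.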
Reasoning Verification
Status: Papers Q1 2026

Verifying AI outputs without ground truth. Methods for code, plans, and decisions from reasoning models like o3 and R1.

Tags: verification, reasoning
Interpretability
Status: Active

Practical interpretability for production. Not "understand the model" but "should I trust this output?"

Tags: activation probing, steering
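Activation probing, in its simplest form, fits a lightweight classifier on a model's hidden activations to answer a question like "should I trust this output?" without interpreting the full network. The sketch below is a toy under stated assumptions: the 2-D "activations", the least-squares probe (logistic regression is the more common choice), and the `trust_score` helper are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "activations": two clusters standing in for hidden states of
# trustworthy (label 1) and untrustworthy (label 0) outputs.
good = rng.normal(loc=+1.0, scale=0.5, size=(100, 2))
bad = rng.normal(loc=-1.0, scale=0.5, size=(100, 2))
X = np.vstack([good, bad])
y = np.array([1] * 100 + [0] * 100)

# Fit a linear probe by least squares on [activations | bias].
Xb = np.hstack([X, np.ones((X.shape[0], 1))])
w, *_ = np.linalg.lstsq(Xb, y, rcond=None)

def trust_score(activation: np.ndarray) -> float:
    """Probe output: higher means the activation looks more 'trustworthy'."""
    return float(np.append(activation, 1.0) @ w)
```

The point of the exercise is the framing: the probe never explains *why* the model produced an output, it only scores whether the internal state resembles states that previously preceded good outputs.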
Multi-Agent Trust
Status: Active

Trust dynamics when agents coordinate with agents. Propagation, verification, and failure modes in multi-agent systems.

Tags: multi-agent, trust
Adversarial Robustness for Agents
Status: Active

Attack taxonomies, detection methods, and defenses for agentic AI systems. Beyond prompt injection: tool poisoning, memory corruption, planning attacks, and coordination exploits in multi-agent systems.

Tags: adversarial, agents, security
Uncertainty Quantification
Status: Active

Calibrated confidence for AI decision support. Activation-based uncertainty estimation, propagation through reasoning chains, and calibration methods that work without ground truth.

Tags: uncertainty, calibration, decision-support
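"Calibrated confidence" is usually measured with expected calibration error (ECE): bin predictions by stated confidence and compare each bin's average confidence to its actual accuracy. Note that standard ECE does require ground-truth labels, so it is the baseline that label-free calibration methods get compared against, not an example of one. The function below is a minimal sketch, not this group's method.

```python
from collections import defaultdict

def expected_calibration_error(confidences, corrects, n_bins=10):
    """ECE: size-weighted average of |accuracy - avg confidence| per bin.

    confidences: stated confidence in [0, 1] for each prediction.
    corrects:    1 if the prediction was right, else 0.
    """
    bins = defaultdict(list)
    for conf, correct in zip(confidences, corrects):
        idx = min(int(conf * n_bins), n_bins - 1)  # equal-width bins
        bins[idx].append((conf, correct))
    n = len(confidences)
    ece = 0.0
    for items in bins.values():
        avg_conf = sum(c for c, _ in items) / len(items)
        accuracy = sum(ok for _, ok in items) / len(items)
        ece += len(items) / n * abs(accuracy - avg_conf)
    return ece

# Well-calibrated toy case: 80% stated confidence, 80% actually correct.
ece = expected_calibration_error([0.8] * 10, [1] * 8 + [0] * 2)  # ~0.0
```

A model that says "90% sure" but is right only half the time scores an ECE of 0.4 on that bin, which is exactly the gap a decision-support system needs to surface.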
World Models & Physical AI
Status: Active

Beyond language models: AI that understands and predicts the physical world. World models, embodied reasoning, simulation, and physics-aware AI.

Tags: world models, embodied AI

Publications

Papers releasing Q1-Q2 2026. Technical disclosures and preprints available now.