Verity: Neuro-Symbolic Synthesis of Verified Distributed Systems
CE2P translates formal verification failures into structured LLM feedback. The benefit correlates inversely with model capability: weaker models gain 34-39 percentage points.
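As an illustration of the idea, a verification failure can be rendered as structured, actionable text rather than a raw verifier dump. The sketch below is hypothetical: CE2P's actual fields, format, and function names are not described here, so `counterexample_to_prompt` and its inputs are assumptions for illustration only.

```python
def counterexample_to_prompt(property_name, trace, hint=None):
    """Hypothetical sketch: format a verification failure (violated
    property plus counterexample trace) as structured feedback that a
    code-generating LLM can act on. Not the actual CE2P format."""
    lines = [
        f"VERIFICATION FAILED: property '{property_name}' was violated.",
        "Counterexample trace:",
    ]
    for i, state in enumerate(trace):
        rendered = ", ".join(f"{k}={v}" for k, v in state.items())
        lines.append(f"  step {i}: {rendered}")
    if hint:
        lines.append(f"Hint: {hint}")
    lines.append("Revise the implementation so the property holds on this trace.")
    return "\n".join(lines)

# Example: a mutual-exclusion violation found by a model checker.
feedback = counterexample_to_prompt(
    "mutual_exclusion",
    [{"pc1": "crit", "pc2": "wait"}, {"pc1": "crit", "pc2": "crit"}],
    hint="two processes entered the critical section simultaneously",
)
```

The structured framing (property name, stepwise trace, optional hint, explicit repair instruction) is what makes such feedback usable by weaker models that cannot parse raw verifier output.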
Research on AI trust, verification, and agent reliability.
Continuous Dynamics for Context Preservation
Memory architecture treating stored information as continuous fields governed by partial differential equations. Semantic diffusion, thermodynamic decay, and field coupling for persistent, composable agent memory.
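A minimal numerical sketch of the two simplest mechanisms named above, diffusion and decay, on a 1-D field: du/dt = D * d²u/dx² - λu. All parameter values and the explicit-Euler discretization are illustrative assumptions, not the architecture's actual scheme.

```python
import numpy as np

def evolve_memory_field(u, dt=0.01, steps=100, D=0.5, lam=0.1):
    """Illustrative sketch: evolve a 1-D memory field under semantic
    diffusion plus exponential (thermodynamic) decay,
        du/dt = D * d2u/dx2 - lam * u,
    via explicit Euler on a periodic unit-spacing grid. Parameters are
    assumed for illustration only."""
    u = np.asarray(u, dtype=float).copy()
    for _ in range(steps):
        lap = np.roll(u, 1) + np.roll(u, -1) - 2.0 * u  # periodic Laplacian
        u += dt * (D * lap - lam * u)
    return u

# A sharp stored "memory" spreads to neighboring positions (diffusion)
# while its total intensity fades over time (decay).
field = np.zeros(64)
field[32] = 1.0
out = evolve_memory_field(field)
```

Diffusion conserves the field's total mass on a periodic grid, so the decay term alone sets the fade rate; coupling between multiple fields would add interaction terms to the right-hand side.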
Evolutionary approaches where attack and defense strategies improve through competitive pressure.
Formal framework for ROI-based decision routing in multi-tier verification systems.
Comprehensive threat model covering input, state, tool, planning, and coordination attacks. Empirical evaluation across agent architectures.
Interpretability methods to detect adversarial attacks on agentic systems. Extends metacognitive probing to security.
Uncertainty quantification that tracks actual accuracy, with activation-based estimation and propagation through multi-step reasoning chains.
Defensive publications by the founding team during prior work at Google. Published via TD Commons, CC BY 4.0.
We welcome collaborators across AI safety, interpretability, formal methods, and adversarial ML.
Get in touch: Deploy trust infrastructure in production. All packages are open source, with enterprise support via Rotascale.
View packages: All core research, benchmarks, and tools are open source. Issues, PRs, and discussions welcome.
GitHub