Documentation
API references, tutorials, and guides for all packages.
Steering & Control

rotalabs-steer
Activation steering.
View Documentation →

Safety & Testing
rotalabs-probe
Sandbagging detection via activation probes. Detect when AI systems deliberately underperform.
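To illustrate the activation-probe idea (a conceptual sketch only, not the rotalabs-probe API — the toy data and perceptron training rule below are invented for the example), a linear probe can be fit on hidden-state vectors to flag deliberate underperformance:

```python
# Sketch: a linear probe w.x + b > 0 -> "sandbagging", trained with a
# perceptron rule on toy "activation" vectors. Not the rotalabs-probe API.

def train_probe(acts, labels, epochs=20, lr=0.1):
    w = [0.0] * len(acts[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(acts, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred
            if err:
                w = [wi + lr * err * xi for wi, xi in zip(w, x)]
                b += lr * err
    return w, b

def probe_predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Toy activations: feature 0 encodes an invented "sandbagging direction".
acts = [[1.0, 0.2], [0.8, -0.5], [-0.9, 0.1], [-1.1, 0.4]]
labels = [1, 1, 0, 0]
w, b = train_probe(acts, labels)
```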
View Documentation →

rotalabs-redqueen
Evolutionary adversarial testing. Quality-diversity optimization for red-teaming AI systems.
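The quality-diversity idea can be sketched as a minimal MAP-Elites-style loop (a conceptual stand-in, not the rotalabs-redqueen API — the fitness function and behavior descriptor are invented): instead of one global optimum, the archive keeps the best solution per behavior niche.

```python
import random

# Sketch of a quality-diversity (MAP-Elites-style) loop. The archive keeps
# the elite per behavior niche, so coverage and quality improve together.

def fitness(x):            # stand-in for an attack-success score
    return -(x - 3.0) ** 2

def descriptor(x):         # behavior niche: integer bucket of x
    return int(x)

def map_elites(iters=2000, seed=0):
    rng = random.Random(seed)
    archive = {}  # niche -> (fitness, solution)
    for _ in range(iters):
        if archive and rng.random() < 0.5:
            parent = rng.choice(list(archive.values()))[1]
            x = parent + rng.gauss(0, 0.3)   # mutate a stored elite
        else:
            x = rng.uniform(0, 6)            # random exploration
        f, d = fitness(x), descriptor(x)
        if d not in archive or f > archive[d][0]:
            archive[d] = (f, x)
    return archive

archive = map_elites()
```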
View Documentation →

Verification & Evaluation
rotalabs-verity
Neuro-symbolic verified code synthesis. Correctness verification with Z3 and CE2P feedback.
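The verify-and-refine loop can be sketched as follows (conceptual only, not the rotalabs-verity API): a checker searches for a counterexample to the spec, and a failing input drives selection of the next candidate. A brute-force search over a small domain stands in for an SMT solver such as Z3.

```python
# Sketch of counterexample-guided synthesis. A brute-force domain check
# stands in for an SMT query; candidates are invented for the example.

def spec(x):
    return abs(x)  # the behavior we want

candidates = [
    lambda x: x,                     # wrong for negative inputs
    lambda x: x if x >= 0 else -x,   # correct
]

def find_counterexample(fn, domain=range(-10, 11)):
    """Stand-in for an SMT query: search the domain for a violating input."""
    for x in domain:
        if fn(x) != spec(x):
            return x
    return None

def synthesize():
    for fn in candidates:            # "repair" = move to the next candidate
        if find_counterexample(fn) is None:
            return fn
    return None

verified = synthesize()
```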
View Documentation →

rotalabs-eval
LLM evaluation with statistical rigor. Confidence intervals, significance testing, and comprehensive metrics.
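A confidence interval on an accuracy estimate can be sketched with a percentile bootstrap (an illustrative stdlib-only sketch, not the rotalabs-eval API):

```python
import random

# Sketch: percentile-bootstrap confidence interval for accuracy, given
# per-example 0/1 scores. Not the rotalabs-eval API.

def bootstrap_ci(scores, level=0.95, n_boot=2000, seed=0):
    rng = random.Random(seed)
    means = sorted(
        sum(rng.choices(scores, k=len(scores))) / len(scores)
        for _ in range(n_boot)
    )
    lo = means[int((1 - level) / 2 * n_boot)]
    hi = means[int((1 + level) / 2 * n_boot) - 1]
    return lo, hi

scores = [1] * 72 + [0] * 28       # 72% accuracy on 100 examples
lo, hi = bootstrap_ci(scores)
```

Reporting the interval rather than the point estimate makes comparisons between models meaningful at small evaluation-set sizes.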
View Documentation →

Infrastructure
rotalabs-cascade
Configuration-driven multi-stage decision routing. Trust-based verification cascades.
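The cascade idea can be sketched as an ordered list of stages with confidence thresholds (conceptual only, not the rotalabs-cascade API — the stage functions and thresholds are invented): cheap stages answer when confident, and everything else escalates.

```python
# Sketch of a verification cascade: each stage returns (verdict, confidence);
# the first stage confident enough to clear its threshold decides.

def cheap_stage(item):
    # stand-in heuristic: very short inputs are confidently benign
    conf = 0.95 if len(item) < 5 else 0.4
    return "allow", conf

def expensive_stage(item):
    return ("deny" if "attack" in item else "allow"), 0.99

STAGES = [(cheap_stage, 0.9), (expensive_stage, 0.0)]  # (stage, threshold)

def route(item):
    for stage, threshold in STAGES:
        verdict, conf = stage(item)
        if conf >= threshold:
            return verdict, stage.__name__
    return "deny", "fallback"
```

Ordering stages by cost keeps the expensive verifier off the easy-path traffic.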
View Documentation →

rotalabs-fieldmem
Field-theoretic memory systems. PDE-based memory with natural diffusion and thermodynamic forgetting.
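The diffusion-plus-forgetting dynamic can be sketched as one explicit Euler step of a 1-D diffusion equation with a decay term, du/dt = D·u_xx − k·u (an illustrative discretization, not the rotalabs-fieldmem API):

```python
# Sketch: a memory trace spreads to neighboring cells via a discrete
# Laplacian (diffusion) while a decay term implements forgetting.

def step(field, diffusion=0.2, decay=0.01):
    n = len(field)
    new = []
    for i in range(n):
        left = field[i - 1] if i > 0 else field[i]     # reflecting boundary
        right = field[i + 1] if i < n - 1 else field[i]
        lap = left - 2 * field[i] + right              # discrete Laplacian
        new.append(field[i] + diffusion * lap - decay * field[i])
    return new

field = [0.0] * 5
field[2] = 1.0                    # write one memory trace
for _ in range(10):
    field = step(field)
```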
View Documentation →

rotalabs-graph
GNN-based trust propagation. Model trust relationships with graph neural networks.
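One round of trust propagation can be sketched as untrained message passing, where each node's trust moves toward the mean trust of its neighbors (conceptual only, not the rotalabs-graph API; a real GNN would learn the aggregation weights):

```python
# Sketch: one message-passing round over an undirected trust graph.

def propagate(trust, edges, alpha=0.5):
    neighbors = {n: [] for n in trust}
    for a, b in edges:
        neighbors[a].append(b)
        neighbors[b].append(a)
    out = {}
    for n, t in trust.items():
        if neighbors[n]:
            mean = sum(trust[m] for m in neighbors[n]) / len(neighbors[n])
            out[n] = (1 - alpha) * t + alpha * mean   # blend self and neighbors
        else:
            out[n] = t
    return out

trust = {"a": 1.0, "b": 0.0, "c": 0.0}      # "a" is a trusted anchor
edges = [("a", "b"), ("b", "c")]
trust = propagate(trust, edges)             # trust flows outward from "a"
```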
View Documentation →

rotalabs-accel
High-performance inference acceleration. Triton kernels, quantization, and speculative decoding.
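Speculative decoding can be sketched with toy stand-ins for both models (conceptual only, not the rotalabs-accel API — the "models" here are fixed functions invented for the example): a cheap draft proposes several tokens, the target verifies them, and generation keeps the longest accepted prefix plus one corrected token.

```python
# Sketch of speculative decoding with toy draft/target "models".

TARGET = list("hello world")          # toy target model: a fixed string

def target_next(prefix):
    return TARGET[len(prefix)] if len(prefix) < len(TARGET) else None

def draft_propose(prefix, k=4):
    # toy draft model: usually right, but guesses 'x' for every vowel
    out = []
    for _ in range(k):
        tok = target_next(prefix + out)
        if tok is None:
            break
        out.append("x" if tok in "aeiou" else tok)
    return out

def speculative_decode():
    prefix = []
    while len(prefix) < len(TARGET):
        proposal = draft_propose(prefix)
        for tok in proposal:                 # verify draft tokens in order
            if tok == target_next(prefix):
                prefix.append(tok)           # accepted
            else:
                break
        else:
            continue                         # whole draft accepted
        prefix.append(target_next(prefix))   # correct the first rejection
    return "".join(prefix)

out = speculative_decode()
```

The speedup comes from the target verifying k draft tokens in one pass instead of generating them one at a time.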
View Documentation →

Compliance & Audit

rotalabs-comply
Compliance.
View Documentation →

rotalabs-audit
Reasoning capture.
View Documentation →
Quick Install
pip install rotalabs-steer # Activation steering
pip install rotalabs-probe # Sandbagging detection
pip install rotalabs-verity # Verified code synthesis
pip install rotalabs-eval # LLM evaluation
pip install rotalabs-redqueen # Adversarial testing
pip install rotalabs-cascade # Decision routing
pip install rotalabs-fieldmem # Field-theoretic memory
pip install rotalabs-graph # Trust propagation
pip install rotalabs-accel # Inference acceleration
pip install rotalabs-comply # Compliance
pip install rotalabs-audit # Reasoning capture
License
All packages are licensed under AGPL-3.0. For commercial licensing, contact [email protected].