AI & Non-Determinism
Handling models, prompt versions, and separating logic from AI outputs.
Our platform relies heavily on probabilistic models and generative workflows to process complex data. We structure our architecture to embrace the probabilistic nature of AI while keeping the underlying platform stable and predictable.
Hard Boundaries for AI
We explicitly separate AI orchestration from deterministic domain logic.
- We utilize Domain Ports to interpret AI output.
- AI models generate actions or suggestions; pure domain logic validates the output structure and enforces all business invariants before any change is committed to our databases. This boundary lets ML engineers experiment with models rapidly without jeopardizing core system integrity.
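A minimal sketch of this boundary, assuming a hypothetical refund-suggestion workflow: the names `RefundSuggestion` and `validate_suggestion` are illustrative, not our actual API. The point is that the validation step is pure, deterministic, and runs before any write.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class RefundSuggestion:
    """Structured action proposed by a model (hypothetical example)."""
    order_id: str
    amount_cents: int


class InvariantViolation(ValueError):
    """Raised when model output would violate a business invariant."""


def validate_suggestion(
    suggestion: RefundSuggestion, order_total_cents: int
) -> RefundSuggestion:
    """Pure domain logic: enforce invariants before anything is persisted."""
    if suggestion.amount_cents <= 0:
        raise InvariantViolation("refund must be positive")
    if suggestion.amount_cents > order_total_cents:
        raise InvariantViolation("refund cannot exceed the order total")
    return suggestion


# The AI adapter produces a suggestion; only validated suggestions reach storage.
suggestion = RefundSuggestion(order_id="ord-123", amount_cents=500)
validated = validate_suggestion(suggestion, order_total_cents=2000)
```

Because the validator knows nothing about which model (or prompt version) produced the suggestion, ML engineers can swap models freely while the invariants stay fixed.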
Versioning Prompts & Pipelines
We treat prompts precisely as we treat application code.
- We manage all prompts and configuration versions within strictly typed adapters or dedicated configuration repositories.
- Updating an underlying model or rewriting a prompt triggers our standard CI pipelines and peer review workflows, providing rigorous scrutiny over our AI outputs.
Evals over Unit Tests
We leverage rigorous Evaluation Pipelines (Evals) to measure the quality of our non-deterministic outputs.
- Because generative outputs cannot be asserted in a simple boolean unit test, we run offline eval batches that score outputs against golden datasets using deterministic quality indices.
- We integrate concise, high-impact subsets of these evals into our automated pipeline to ensure we confidently ship AI optimizations without introducing output regressions.
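A toy sketch of such an eval batch, under stated assumptions: `GoldenCase`, the normalized exact-match index, and the 0.9 gate threshold are all illustrative choices, and `stub_model` stands in for a real model call.

```python
from dataclasses import dataclass
from typing import Callable


@dataclass(frozen=True)
class GoldenCase:
    """One labeled example from a golden dataset."""
    input_text: str
    expected: str


def run_eval(model: Callable[[str], str], golden: list[GoldenCase]) -> float:
    """Deterministic quality index: fraction of normalized exact matches."""
    hits = sum(
        1
        for case in golden
        if model(case.input_text).strip().lower() == case.expected.strip().lower()
    )
    return hits / len(golden)


# Stub standing in for the real (non-deterministic) model call (assumption).
def stub_model(text: str) -> str:
    return "positive" if "great" in text else "negative"


golden = [
    GoldenCase("great product, works well", "positive"),
    GoldenCase("broke on day one", "negative"),
]

score = run_eval(stub_model, golden)
# In CI, a small high-impact subset runs as a gate: fail the build on regression.
assert score >= 0.9, "eval gate: quality regressed below threshold"
```

The metric itself is deterministic even though the model is not, which is what makes the CI gate trustworthy: a score drop always reflects a real change in outputs, not flakiness in the test.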