Our use of AI is intentionally limited, transparent, and designed to support — not replace — human expertise and clinical judgement.
These are not aspirations. They are operational commitments reflected in our internal AI governance policy and enforced across our engineering practices.
No patient data in AI systems. Ever. Under any circumstances.
No training on your data. Information you provide to Synapto is never used to train or fine-tune any AI model.
Synthetic data only. AI-assisted development workflows operate strictly on programmatically generated test data — not de-identified records, not anonymised PHI.
The clinical pipeline is AI-free. FHIR transformation, LOINC mapping, and ICD-10 encoding run on deterministic, version-controlled code — no inference layer.
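To make the last two commitments concrete, here is a minimal sketch of what "synthetic test data" and "deterministic mapping" mean in practice. Everything in it is illustrative: the table entries, field names, and function names are assumptions for this example, not Synapto's actual pipeline or schema. The two LOINC codes shown are real published codes used purely as sample values.

```python
import random

# Illustrative sketch only. Table contents, field names, and function
# names are assumptions for this example, not Synapto's actual pipeline.

# A deterministic code-mapping layer is a version-controlled lookup
# table, not a model: every change is a reviewed commit.
LOCAL_TO_LOINC = {
    "HB": "718-7",    # Haemoglobin [Mass/volume] in Blood
    "GLU": "2345-7",  # Glucose [Mass/volume] in Serum or Plasma
}
MAPPING_TABLE_VERSION = "2024.1"  # changed only via code review


def map_to_loinc(local_code: str) -> str:
    """Deterministic lookup: no inference, no fallback guessing."""
    if local_code not in LOCAL_TO_LOINC:
        # Unknown codes fail loudly and are routed to a human reviewer,
        # rather than being probabilistically encoded.
        raise ValueError(
            f"Unmapped code {local_code!r} (table v{MAPPING_TABLE_VERSION})"
        )
    return LOCAL_TO_LOINC[local_code]


def synthetic_result(seed: int) -> dict:
    """A programmatically generated test record.

    Every field comes from the seeded generator: nothing is derived
    from real, de-identified, or anonymised records, and the same
    seed always reproduces the same record, so fixtures are stable.
    """
    rng = random.Random(seed)
    return {
        "local_code": rng.choice(sorted(LOCAL_TO_LOINC)),
        "value": round(rng.uniform(3.0, 18.0), 1),
    }


# Development and testing run entirely on synthetic inputs like this.
record = synthetic_result(42)
record["loinc"] = map_to_loinc(record["local_code"])
```

The design point is that failure is explicit: an unmapped code raises an error for human review instead of being guessed, and test data is reproducible from a seed rather than sourced from any record that ever touched a patient.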
We are explicit about the boundary between AI-assisted internal work and AI-free clinical infrastructure.
Our role is to build reliable, compliant infrastructure. The bridge between your clinical systems and national health mandates must be deterministic, auditable, and human-attested — not probabilistic. That principle shapes every decision we make about AI.
We're happy to discuss our AI governance in detail — in procurement conversations, security reviews, or compliance assessments. Our internal AI Use Policy (POL-AI-001) is available on request.