How it works
Evidor sits between your firm and the AI providers. Every prompt is classified, redacted, routed, and audited.
This page walks through what happens to a single prompt from the moment your team writes it to the moment the answer lands back in your environment. You can read it as a CEO and forward it to your IT and compliance leads without changing the document.
What this means in plain English: your team can use AI tools, and you can show a regulator, an auditor, or a client exactly what those tools saw and did.
Step 01 — Classify
How do you know what is in the prompt before it leaves your firm?
Whether it arrives through a chat window, a custom app, or an embedded workflow, Evidor reads the prompt first. It tags the prompt by sensitivity: general, client-confidential, regulated personal data, competitively sensitive. Those tags decide what happens next.
The sensitivity tag drives the redaction, the routing, and the audit detail. Your firm sets the tags and the policy that flows from them. Evidor defaults are designed for legal, healthcare, financial, and regulated industries; nothing is hardcoded.
- Classification accuracy: Number pending CTO confirmation
- Cross-checked against an independent multi-model evaluation.
Under the hood: a custom-trained transformer classifier. Sensitivity tags are configurable per firm.
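The step above can be sketched in a few lines. This is an illustrative stand-in, not Evidor's classifier: the tag names match the ones described here, but the keyword heuristic and the policy table are hypothetical placeholders for a trained model and a per-firm configuration.

```python
# Illustrative only: Evidor uses a trained transformer classifier and
# firm-configured tags; this keyword heuristic just shows the data flow.
SENSITIVITY_POLICY = {
    "general":             {"redact": False, "route": "fast-model"},
    "client-confidential": {"redact": True,  "route": "contracted-provider"},
    "regulated-personal":  {"redact": True,  "route": "private-network-model"},
}

def classify(prompt: str) -> str:
    """Stand-in for the classifier: tags the prompt by sensitivity."""
    lowered = prompt.lower()
    if "patient" in lowered or "ssn" in lowered:
        return "regulated-personal"
    if "client" in lowered or "matter" in lowered:
        return "client-confidential"
    return "general"

tag = classify("Summarise the filings for client Acme, matter 2024-17.")
policy = SENSITIVITY_POLICY[tag]   # the tag drives redaction and routing
```

The point of the sketch is the shape of the decision: one tag, assigned before the prompt leaves the firm, drives everything downstream.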
Step 02 — Redact
How do you stop sensitive material reaching the model in the first place?
Before the prompt leaves your environment, Evidor finds the sensitive material (names, identifiers, matter numbers, account numbers, anything your policy flags) and swaps it out for structured placeholders. The external model only ever sees the swapped version.
When the model answers, Evidor puts the original material back inside your environment before the answer reaches your team. The external model never holds the real data.
Detection is high confidence on structured entities. On free-form text where sensitive content is not clearly shaped as a name or number, residual risk exists and we describe it openly. We do not claim zero leakage.
Under the hood: industry-standard detection extended with sector-specific patterns. The key that reverses the redaction stays in your environment, encrypted with the same standard your bank uses.
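A minimal sketch of the swap-and-restore round trip described above. The regex patterns and placeholder format are hypothetical; Evidor's actual detection is broader and the reversal map is encrypted, but the flow is the same: the model sees placeholders, the originals never leave your environment.

```python
import re

def redact(prompt: str):
    """Swap structured identifiers for placeholders; return the redacted
    text plus the map needed to reverse it. Patterns are illustrative."""
    patterns = {
        "ACCOUNT": r"\b\d{8,12}\b",       # account-number-shaped digits
        "MATTER":  r"\bM-\d{4}-\d{2}\b",  # hypothetical matter-number format
    }
    mapping, counter = {}, 0
    for label, pattern in patterns.items():
        for match in re.findall(pattern, prompt):
            counter += 1
            placeholder = f"[{label}_{counter}]"
            mapping[placeholder] = match
            prompt = prompt.replace(match, placeholder)
    return prompt, mapping

def restore(answer: str, mapping: dict) -> str:
    """Reinsert the originals inside your environment, after the model replies."""
    for placeholder, original in mapping.items():
        answer = answer.replace(placeholder, original)
    return answer

redacted, mapping = redact("Charge matter M-2024-17 to account 123456789.")
# the external model sees only placeholders; the mapping never leaves your side
```

Note where the mapping lives: it is created and consumed inside your environment, which is why the external model never holds the real data.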
Step 03 — Route
Why does the model the prompt reaches depend on the prompt, not on who is logged in?
The sensitivity tag from Step 01 decides which model receives the prompt. A general question may go to a fast cheap model. A client-confidential question may be locked to a specific provider with a contractual data-handling addendum. Regulated personal data may be routed only to a model running inside your private network. Your firm writes the policy; Evidor enforces it on every request.
This is the design Evidor is patenting. The patent describes how the routing decision is made when a prompt sits between two sensitivity zones, and how the system handles the cases that are not obvious.
Under the hood: provider-agnostic. Today we route to OpenAI, Anthropic, AWS Bedrock, OpenRouter, and any model addressable through a standard proxy. Adding a provider is a configuration change.
US provisional patent 63/885,250.
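The routing rule above reduces to a small policy table. Provider and model names here are placeholders, not Evidor's configuration; the one behaviour worth copying is the last line: an unrecognised tag fails closed rather than falling through to a default model.

```python
# Illustrative routing table: the sensitivity tag from Step 01 picks the
# destination. Provider/model names are placeholders, not Evidor config.
ROUTES = {
    "general":             {"provider": "openrouter",  "model": "fast-cheap"},
    "client-confidential": {"provider": "contracted",  "model": "dpa-locked"},
    "regulated-personal":  {"provider": "self-hosted", "model": "private-vpc"},
}

def route(tag: str) -> dict:
    """Enforce the firm's policy on every request; unknown tags fail closed."""
    try:
        return ROUTES[tag]
    except KeyError:
        raise ValueError(f"No route for sensitivity tag {tag!r}; refusing to send")
```

Failing closed matters because the hard cases are exactly the prompts that sit between two sensitivity zones.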
Step 04 — Audit
When a regulator asks what your AI did with their data, how long does it take you to answer?
Every event is recorded in a tamper-evident log: the prompt, the sensitivity tag, what was redacted, which model received it, what came back, and when. Each log entry is cryptographically linked to the one before it, so a quiet edit to any past event would visibly break the chain from that point onward.
When a regulator, client, or auditor asks "what did your AI do with this data on this date", your team exports an Evidence Pack: a structured bundle of the relevant events, the original prompts, the redaction map, and the proof that the chain is intact. Minutes to produce, not days.
The chain detects modification. It does not prevent destruction. If someone with the right access deletes the whole log, the absence is itself an audit signal, and the copy that streams to your security team's log system survives. We call this tamper-evident, not tamper-proof.
Under the hood: every event streams in real time to your existing security log system in the standard formats your team already ingests.
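The hash chain described above can be demonstrated in a few lines. This is a simplified sketch (field names and the SHA-256 construction are illustrative, not Evidor's log format), but it shows exactly why a quiet edit to any past event visibly breaks the chain from that point onward.

```python
import hashlib
import json

def append_event(chain: list, event: dict) -> dict:
    """Link each entry to the hash of the previous one. Field names
    are illustrative, not Evidor's actual log schema."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps(event, sort_keys=True)   # deterministic serialisation
    entry = {
        "event": event,
        "prev": prev_hash,
        "hash": hashlib.sha256((prev_hash + body).encode()).hexdigest(),
    }
    chain.append(entry)
    return entry

def verify(chain: list) -> bool:
    """Recompute every link; an edit anywhere makes this return False."""
    prev = "0" * 64
    for entry in chain:
        body = json.dumps(entry["event"], sort_keys=True)
        expected = hashlib.sha256((prev + body).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True

chain = []
append_event(chain, {"prompt": "[REDACTED]", "tag": "client-confidential"})
append_event(chain, {"model": "provider-x", "status": "answered"})
assert verify(chain)                    # chain intact
chain[0]["event"]["tag"] = "general"    # quiet edit to a past event
assert not verify(chain)                # chain visibly breaks
```

As the limits section says, this detects modification but does not prevent deletion; that is what the real-time stream to your security log system is for.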
How we differ
Evidor is not Vanta. It is not LangChain. It is not Cloudflare AI Gateway.
- vs Vanta / Drata: Vanta and Drata are org-level compliance posture frameworks. Evidor is the runtime evidence trail. Vanta tells your auditor your company is SOC 2-ready; Evidor proves it for every individual AI request.
- vs Microsoft Purview: Purview governs data at the document and email layer and is designed for the Microsoft 365 stack. Evidor is provider-agnostic and sits at the AI request hop; it works with OpenAI, Anthropic, Bedrock, and any model behind LiteLLM, regardless of where your data lives.
- vs LiteLLM: LiteLLM is an open-source proxy for connecting to multiple AI providers. It is fast and well-loved, but it ships zero security or compliance posture. Evidor sits on top of LiteLLM (we use it as our routing substrate) and adds the things a regulated firm actually needs: classification, redaction, tamper-evident audit, and evidence-pack export.
- vs LangChain: LangChain is an SDK for building agents; it lives in the application code. Evidor is the infrastructure those agents call through. You keep LangChain and add Evidor as the gateway your LangChain agents talk to.
- vs Cloudflare AI Gateway: Cloudflare AI Gateway is a hosted CDN-level proxy with caching and observability. It is cloud-only and vendor-controlled, and your data flows through Cloudflare's infrastructure. Evidor is self-hostable in your own VPC, so the data never leaves your security boundary, and it ships a tamper-evident hash-chained audit primitive that Cloudflare AI Gateway does not.
Honest about what we do not yet do
These are the limits. Read them before you ask us anything we cannot back.
- Production-grade scale today. M1 is a single-host prototype. We do not quote throughput numbers or concurrent-user ceilings.
- Battle-tested multi-tenancy. Logical isolation only at M1. Cross-tenant penetration testing is on the M2 roadmap.
- Zero data leakage. Presidio has known blind spots; we describe redaction as high-confidence with documented residual risk.
- Real-time behavioural anomaly detection. Phase 2 work, not in M1. M1 is data-leakage prevention.
- Production key management. AES-256-GCM is verified in source, but persistent KMS-backed key rotation is M2 hardening work.
- Compliance certifications. SOC 2 Type II and ISO 42001 are on the readiness path, not issued.
Saying these out loud is part of how we earn trust. The next thing we ship moves these limits, in measurable steps your team can verify.
Next
If you want to dig further, two paths.
Apply to the Discovery Partner programme
A four-month structured exploration with our team. Three slots, regulated firms only.
Read the programme details
Book a call with the founder
30 minutes with Anson Zeall. No deck unless you ask. If we are not the right fit, we will say so.
Open scheduler