Transparency Report · 7 min read · April 8, 2026
EU Servers ≠ GDPR Compliance.
The Reviewer Was Right.
A public reviewer called us out this week. The critique was sharp and mostly correct. Here is the whole thing, verbatim, and what we changed in the product within 48 hours.
The critique
“EU servers ≠ GDPR compliance. If your documents are still sent to US providers like OpenAI or Anthropic, that's a third-country transfer with real privacy implications. What happens if the AI gets it wrong? Can you appeal? Will a human expert actually review your case? These are basic questions before uploading sensitive data. Long story short: Terms of Use misses some points.”
What the reviewer got right
Hosting primary storage in Amsterdam is table stakes — it is not a GDPR compliance posture by itself. Under GDPR Articles 44–49, every time a document payload leaves the EU to reach a US-resident AI provider, that is a third-country transfer. It needs a lawful transfer mechanism (Standard Contractual Clauses, and where applicable the EU–U.S. Data Privacy Framework), a documented Transfer Impact Assessment, and it needs to be disclosed honestly — not buried.
And the Article 22 question is the one that matters most for a document verification product: if an automated decision produces legal or similarly significant effects — a job offer rescinded, an admissions application rejected, a visa interview derailed — the data subject has a right to obtain meaningful information about the logic, express their point of view, and contest the decision with a human in the loop. That is not optional, and “email us if there's a problem” is not a serious answer.
What we changed this week
Four concrete, shippable things — all live as of today. Nothing in this section is aspirational.
1. A visible “Request human review” button on every verdict
Every result page now carries a one-line GDPR Article 22 disclosure and a Request human review button. One click opens a pre-targeted dispute form against that specific verification. Before today, the dispute right existed in the Privacy Policy but was effectively invisible. Now it is where you can actually see it — at the moment of the decision, not buried in a PDF.
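The mechanics behind that button are simple, and a rough sketch makes the "pre-targeted" part concrete. This is an illustrative Python sketch, not our actual API: the `DisputeRequest` fields, the `open_dispute` helper, and the ID format are all hypothetical.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class DisputeRequest:
    """A dispute pre-targeted at one specific verification verdict."""
    verification_id: str   # the verdict being contested, taken from the result page
    reason: str            # free-text grounds from the data subject
    requested_by: str      # contact for the person filing the dispute
    submitted_at: str      # ISO 8601 timestamp, UTC

def open_dispute(verification_id: str, reason: str, requested_by: str) -> dict:
    """Build the payload the 'Request human review' button submits.

    Because the verification_id comes from the result page itself, the
    human reviewer sees exactly which verdict is being contested, with
    no free-form lookup step in between.
    """
    req = DisputeRequest(
        verification_id=verification_id,
        reason=reason,
        requested_by=requested_by,
        submitted_at=datetime.now(timezone.utc).isoformat(),
    )
    return asdict(req)
```

The point of the sketch: the dispute is bound to a verdict at creation time, which is what makes the Article 22 right usable at the moment of the decision rather than in a generic support inbox.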
2. A real reviewer queue with a 30-day SLA
Disputes now land in an operator dashboard with age badges (pending disputes turn red after seven days, well inside the 30-day legal ceiling), a resolve modal with three outcomes — upheld (the AI was wrong and the verdict is corrected), rejected (the AI stands with documented reasoning), or withdrawn — and signed reviewer notes for audit. Upheld disputes feed our calibration overrides and genuinely change how the next verification runs. The appeal is not theatre.
3. Honest third-country transfer disclosure in Terms of Use
Our Terms of Use v1.3 now names our AI sub-processors in the data protection section — currently including Anthropic and OpenAI — and states plainly that document content is transmitted to them for inference, that these are third-country (United States) sub-processors, that they are covered by Standard Contractual Clauses and where applicable the EU–U.S. Data Privacy Framework, and that we run them under zero-retention API configurations. The full register lives on our Trust & Sub-Processors page, and additions are announced there at least 30 days before they go live.
4. An EU-resident inference roadmap we will actually be held to
The honest answer on data residency is: for standard tenants today, inference calls leave the EU under SCCs. For the next release, we are routing EU-tagged traffic through AWS Bedrock (eu-central-1) for Anthropic and Azure OpenAI EU regions for GPT. That eliminates the third-country transfer for customers who select the EU tier — not by marketing copy, but by the region string on the actual API call. The tradeoffs are real: slightly higher latency, a lag on the newest model versions, and more infrastructure complexity. We think those tradeoffs are worth it for regulated buyers.
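"The region string on the actual API call" can be sketched as a routing function. This is a hypothetical illustration of the design, not our deployed router: the endpoint strings, the `"eu"`/`"standard"` tier labels, and the specific Azure region are illustrative (the only region the post commits to is Bedrock eu-central-1 for Anthropic).

```python
def inference_endpoint(tenant_tier: str, model_family: str) -> str:
    """Pick the inference endpoint by tenant data-residency tier.

    EU-tier traffic is pinned to EU-resident regions, so no document
    payload crosses a third-country boundary. Standard-tier traffic
    uses the providers' default US endpoints under SCCs, as disclosed.
    """
    if tenant_tier == "eu":
        return {
            "anthropic": "bedrock:eu-central-1",     # AWS Bedrock, Frankfurt
            "openai": "azure-openai:eu-region",      # an Azure OpenAI EU region (placeholder)
        }[model_family]
    # Standard tier: third-country transfer, disclosed and covered by SCCs
    return {
        "anthropic": "api.anthropic.com",
        "openai": "api.openai.com",
    }[model_family]
```

The design choice worth noting: residency is enforced at the routing layer, per request, rather than promised at the contract layer. A tenant tagged `"eu"` cannot accidentally hit a US endpoint, because no code path exists that would construct one.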
Why we are writing this publicly
Because the alternative — quietly ignoring a public critique, patching the weakest sentence of the Terms of Use, and hoping no one noticed — is exactly the behaviour that makes people distrust SaaS privacy promises in the first place. The reviewer is the user. The critique is free product feedback. The right response is to agree, ship, and show the receipts.
If you are considering Turing Verify for a workflow where verdicts will affect people's jobs, admissions, visas, or licenses, these four changes are for you. Try the product on the free tier. File a dispute on any verdict you disagree with. Watch it land in a queue that a human actually reads. Then decide whether you trust us with the next document.
Published April 8, 2026 by the Turing Verify team. Questions: [email protected].