Are Your Cloud Files Being Used to Train AI?
When you upload a file to a cloud service, you expect it to be stored securely and accessible only to the people you authorise. But in 2026, a far more uncomfortable question arises: are your files being used to train artificial intelligence models without your informed consent? For most consumer cloud services, the answer is that nothing technically prevents it, and the terms of service often explicitly allow it.
The WeTransfer controversy: a rude awakening
In 2025, a major scandal erupted around WeTransfer. Attentive users discovered a clause in the terms of service authorising the platform to use uploaded files for training artificial intelligence models. The clause, written in deliberately vague legal language, granted WeTransfer a broad licence over content passing through its servers.
The backlash was immediate and fierce. Creative professionals, lawyers, and privacy experts denounced the silent appropriation of millions of users' work. Photographers, designers, and architects who used WeTransfer daily to send their creations realised that those very creations could potentially feed competing AI systems.
Facing the outcry, WeTransfer retracted the controversial clause and issued a statement asserting that user files had never been used for this purpose. But the damage was done: trust was shaken, and a precedent had been set. If WeTransfer had dared to include such a clause, what about other services?
The problem is not limited to WeTransfer. Most consumer cloud services reserve broad rights over your data in their terms of service. The question is not whether they are using your files today, but whether anything prevents them from doing so tomorrow.
What the terms of service actually say
A careful examination of the terms of service from major cloud providers reveals clauses that should alarm any user concerned about data confidentiality:
- Google: Google's terms grant the company a licence to "use, host, store, reproduce, modify, create derivative works, communicate, publish" your content. Google states these rights are used to improve its services, which potentially includes training AI models.
- Microsoft: Microsoft 365's terms include similar clauses. Microsoft has faced criticism over the integration of Copilot, its AI assistant, which can access documents stored in OneDrive and SharePoint to generate its responses.
- Dropbox: In 2023, Dropbox faced a similar controversy when clauses relating to AI usage were spotted in its updated terms. The company clarified its position, but the content licence clauses remain broad.
The common thread across all these services is that your files reach the provider's servers in readable form: even where data is encrypted at rest, the provider holds the encryption keys. Technically, nothing prevents it from accessing the content, analysing it, or using it for any purpose, including AI model training.
Opt-out clauses: an illusory safeguard
Some services offer opt-out mechanisms allowing users to refuse having their data used for AI training. But these mechanisms suffer from several fundamental problems.
First, they are typically buried deep within privacy settings, accessible only after several layers of navigation. The vast majority of users are unaware they exist. Second, opt-out is rarely retroactive: data already collected and used before the option is activated is not removed from training datasets. Finally, there is no technical guarantee that the opt-out is actually enforced. You must trust the provider, with no means of independent verification.
The adoption of AI tools integrated into cloud storage services is expected to double by the end of 2026. This growing integration makes the boundary between "service improvement" and "model training" increasingly blurred, and opt-out clauses increasingly inadequate.
The economics of free services
It is essential to understand the economic logic at play. Free cloud services are not charities. Their business model relies on monetising user data, whether through targeted advertising, improvement of paid products, or increasingly, AI model training.
Your files hold considerable value in this context. Professional documents, artistic creations, financial reports, medical records: each of these content types represents high-quality training data for specialised AI models. The temptation to exploit this data goldmine is immense, and the current legal and technical safeguards are insufficient.
If a service is free, you are not the customer: you are the product. In 2026, this maxim extends to your files. If they are stored in clear text by a cloud provider, they are a potential asset for AI training.
Zero-knowledge architecture: making misuse physically impossible
Faced with these concerns, contractual promises and opt-out clauses are no longer enough. The only truly reliable guarantee is a technical one: zero-knowledge architecture.
In a zero-knowledge service, your files are end-to-end encrypted before they leave your device. The provider does not possess the decryption keys. It stores blocks of encrypted data that it is technically incapable of reading, analysing, or using for any purpose whatsoever.
Training an AI model requires access to file contents in clear text. If the provider cannot decrypt your files, it physically cannot use them to train an AI. This is not a matter of privacy policy or goodwill: it is a cryptographic impossibility.
This is the fundamental difference between a legal safeguard ("we promise not to use your files") and a technical safeguard ("we cannot use your files, even if we wanted to"). Only the latter offers a genuine guarantee.
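The difference between the two safeguards can be sketched in a few lines. This is a minimal illustration of the zero-knowledge principle, not ZeroTrustTransfer's actual implementation: it uses the AES-GCM primitive from the Python `cryptography` package, and the server is simulated by a plain dictionary. The point is that the server-side store only ever holds ciphertext, so no clause in any terms of service could turn it into AI training data.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Simulated server-side storage: it only ever sees opaque ciphertext blobs.
server_store: dict[str, bytes] = {}

def client_upload(file_id: str, plaintext: bytes) -> bytes:
    """Encrypt on the client, upload the ciphertext, return the key.

    The key never leaves the client; the server stores an unreadable blob.
    """
    key = AESGCM.generate_key(bit_length=256)   # 256-bit AES-GCM key
    nonce = os.urandom(12)                      # 96-bit nonce, standard for GCM
    ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)
    server_store[file_id] = nonce + ciphertext  # provider holds ciphertext only
    return key

def client_download(file_id: str, key: bytes) -> bytes:
    """Fetch the blob and decrypt locally with the client-held key."""
    blob = server_store[file_id]
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ciphertext, None)

document = b"confidential design draft"
key = client_upload("file-001", document)

# The provider cannot recover the plaintext from what it stores:
assert document not in server_store["file-001"]
# The recipient, who holds the key, can:
assert client_download("file-001", key) == document
```

Without `key`, the stored blob is indistinguishable from random noise; with it, decryption is a purely local operation.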
ZeroTrustTransfer: your files remain yours
ZeroTrustTransfer is built on a comprehensive zero-knowledge architecture. Every file is encrypted using AES-256-GCM directly in your browser. Encryption keys are never transmitted to our servers. Kioroeya, the company behind ZeroTrustTransfer, has no access to the contents of your files: we cannot read them, analyse them, or use them to train any model.
This is not a contractual promise that could be quietly modified in a terms-of-service update. It is a verifiable architectural reality: our servers store only encrypted data, and the keys exist solely in the sharing link you provide to your recipient.
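One common way for a zero-knowledge service to keep keys "solely in the sharing link" is to place the key in the URL fragment, the part after `#`, which browsers never transmit to the server (per RFC 3986, fragments are processed client-side only). The link format below is a hypothetical illustration, not ZeroTrustTransfer's actual URL scheme; it uses only the Python standard library.

```python
import base64
import secrets
from urllib.parse import urlsplit

# Hypothetical share link: file id in the path, key in the '#fragment'.
key = secrets.token_bytes(32)  # 256-bit key, generated client-side
key_b64 = base64.urlsafe_b64encode(key).rstrip(b"=").decode()
link = f"https://example.com/d/file-001#{key_b64}"

parts = urlsplit(link)

# What the browser actually sends to the server: path and query only.
request_target = parts.path + (("?" + parts.query) if parts.query else "")
assert key_b64 not in request_target  # the key never reaches the server

# What the recipient's client extracts locally to decrypt the file:
fragment = parts.fragment
fragment_key = base64.urlsafe_b64decode(fragment + "=" * (-len(fragment) % 4))
assert fragment_key == key
```

Anyone who intercepts the server-side request sees only the file identifier; the key travels exclusively inside the link the sender hands to the recipient.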
At a time when the line between cloud services and AI training data pipelines grows ever thinner, choosing a zero-knowledge service is no longer a niche concern for privacy enthusiasts. It is the only way to guarantee that your files remain yours. Discover ZeroTrustTransfer and take complete control of your data.