Before You Even Type…
Using AI tools like Microsoft Copilot, ChatGPT, or similar platforms might feel harmless. But here’s the reality: when you use them, you’re not just getting help; you’re handing over your data to a third party. That data can be retained, analysed, and even used to train future AI models.
Worst-case scenario?
Client data (project details, credentials, breach narratives) could be absorbed by the model and resurface in someone else’s session. All it takes is the right prompt from the wrong person.
If you wouldn’t post it on a public forum, don’t paste it into AI tools.
1. You paste a client’s incident report for “editing help”
It includes their name, a ransomware timeline, screenshots of mail rules, and internal IPs.
Now it may sit in a third-party AI platform’s training data. You’ve breached the NDA, your data handling policy, and possibly the contract terms. If the client finds out, expect consequences.
2. You drop a database schema into an AI chatbot
The schema includes client names, admin tables, and MFA status.
This isn’t debugging — it’s an unintentional data export.
You’ve handed critical infrastructure details to an external platform governed by foreign laws.
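The fix in both scenarios above is the same: redact before you paste, and have a human verify the redaction. As a minimal illustration, here is a Python sketch of that idea. The patterns and the client name AcmeCorp are hypothetical placeholders, not a complete redaction solution.

```python
import re

# Illustrative patterns only. A real redaction pass needs review by someone
# who knows what counts as sensitive in the specific document.
REDACTIONS = [
    (re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"), "<IP_REDACTED>"),      # IPv4 addresses
    (re.compile(r"\b[\w.+-]+@[\w.-]+\.\w+\b"), "<EMAIL_REDACTED>"),     # email addresses
    (re.compile(r"\bAcmeCorp\b", re.IGNORECASE), "<CLIENT_REDACTED>"),  # hypothetical client name
]

def redact(text: str) -> str:
    """Replace known-sensitive patterns with neutral placeholders."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

if __name__ == "__main__":
    sample = "admin_users table at 10.20.30.40, owner ops@acmecorp.com, client: AcmeCorp"
    print(redact(sample))
    # -> admin_users table at <IP_REDACTED>, owner <EMAIL_REDACTED>, client: <CLIENT_REDACTED>
```

Even with a helper like this, the default should stay “don’t paste”: automated redaction misses context-specific details that a model could still correlate back to the client.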
3. You deploy AI-generated code directly to production
Looks clean, runs fine… until it deletes public DNS records.
AI tools don’t test, warn, or validate output.
They will confidently hand you a flawed script that wipes a live environment.
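If generated scripts are used at all, they need the testing and validation step the tool skipped. One pattern worth considering: have the script report its planned changes first, then gate execution on a human-approved change list. Below is a rough Python sketch of that gate; the file name approved_changes.txt and the one-record-per-line format are assumptions for illustration.

```python
import sys

def load_approved(path: str) -> set[str]:
    """Read the human-approved change list (assumed format: one record per line)."""
    with open(path) as f:
        return {line.strip() for line in f if line.strip()}

def gate(planned_deletions: list[str], approved: set[str]) -> None:
    """Stop before anything runs if the script plans a change nobody signed off on."""
    unapproved = [record for record in planned_deletions if record not in approved]
    if unapproved:
        sys.exit(f"BLOCKED: deletions not on the approved change list: {unapproved}")
    print("All planned changes were approved; safe to apply.")

if __name__ == "__main__":
    # Hypothetical: DNS records the generated script says it will delete, taken from a dry run.
    planned = ["old-test.example.com A", "www.example.com A"]
    gate(planned, load_approved("approved_changes.txt"))
```

The point isn’t this exact gate; it’s that nothing an AI tool generates should touch production without a dry run and a human sign-off.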
Safe examples:
Drafting a generic USB security policy
Summarising redacted documentation
Explaining technical concepts in plain English
Formatting basic scripts — with no client context included
Not safe:
Diagnosing client systems
Troubleshooting production configurations
Generating reports using actual client names
Asking for recommendations based on sensitive documents
What’s at stake:
Loss of client trust and contracts
Possible regulatory fines
Mandatory public disclosures
Internal investigations and disciplinary action
This isn’t theoretical. We’re already seeing these scenarios play out across the industry.
AI tools can be powerful, but they’re also risk multipliers if misused.
Think of them like a smart intern: great at making suggestions, but they don’t understand context, responsibility, or consequences. You wouldn’t give an intern your master password or ask them to write a client’s breach report unsupervised.
Treat AI tools the same way.
If you wouldn’t post it on LinkedIn or say it in a client meeting, it doesn’t belong in an AI tool.