📧 Are You Accidentally Leaking Data with AI? Here's How to Protect It.

Hi there,

Ed here.

Just because regulatory bodies haven’t yet given explicit instructions on how sensitive customer and company data can be used in AI tools doesn’t mean you can “wild west” it and hope for the best.

Companies have been painfully slow to adopt standards and practices for AI use. That’s just the nature of most companies with new technology.

That’s a recipe for disaster when mixed with innovative business owners. 🧑‍🍳

As business owners, we have access to a broader set of data than most other people in our companies since we are involved in almost every aspect of our businesses.

I think we need a stark reminder that as we increasingly leverage AI tools in our business endeavors, ensuring the privacy and security of sensitive data is more critical than ever. 📊🛡️

The Risks of AI and Data Privacy

AI does fantastic things with raw data and reference materials.

However, without proper precautions, AI tools could potentially expose or misuse this proprietary company data or personally identifiable information (PII).

Beyond the ethical implications, data breaches can lead to legal consequences like GDPR violations and erode public trust.

As the Principals of our businesses, we must protect our customers' private information and our companies' confidential data while responsibly leveraging AI's benefits.

Safeguarding Customer Information

Customer data like full names, contact details, financial information, and purchase histories must be kept private. Before feeding any customer data to AI models, take steps to remove or conceal sensitive information:

  • Replace real names with pseudonyms

  • Remove emails, phone numbers, and other contact info

  • Only provide the minimum customer data required

  • Include proprietary company data only when absolutely necessary
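The steps above can be sketched in a few lines of code. This is a minimal illustration, not a complete PII detector: the pseudonym map and regex patterns are assumptions for the example, and real-world scrubbing should use a dedicated PII-detection tool.

```python
import re

# Hypothetical pseudonym map -- in practice, build this from your customer records.
PSEUDONYMS = {"Jane Smith": "Customer A", "Acme Corp": "Company X"}

# Simplified patterns for emails and US-style phone numbers (illustrative only).
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\b(?:\+?1[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b")

def scrub(text: str) -> str:
    """Replace known names with pseudonyms, then mask contact details."""
    for real, fake in PSEUDONYMS.items():
        text = text.replace(real, fake)
    text = EMAIL_RE.sub("[EMAIL]", text)
    text = PHONE_RE.sub("[PHONE]", text)
    return text

print(scrub("Jane Smith (jane@acme.com, 555-123-4567) renewed her plan."))
# Customer A ([EMAIL], [PHONE]) renewed her plan.
```

Run text through a scrubber like this before it ever leaves your systems, so the AI tool only sees the minimum it needs.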

Additionally, be transparent about your AI usage by updating privacy policies and obtaining proper permissions through explicit consent from customers.

Protecting Proprietary Company Data

Confidential business information, such as financial reports, product roadmaps, market strategies, and source code, also requires careful handling.

I approach company data the same way I approach customer data: I clean the data before I feed proprietary information into a Large Language Model (LLM).

As a best practice, clearly label and isolate confidential data from public training data sources. This means removing or changing specific names, places, and other information that could inadvertently leak into the model’s training data (more on that in a minute).
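One simple way to enforce that labeling is a classification gate in front of any outbound AI request. The labels and document structure below are assumptions for the sake of a sketch; adapt them to whatever data-classification scheme your company already uses.

```python
from enum import Enum

class Classification(Enum):
    PUBLIC = "public"
    INTERNAL = "internal"
    CONFIDENTIAL = "confidential"

def safe_to_share(doc: dict) -> bool:
    """Gate rule (example policy): only PUBLIC documents may be sent to an external AI tool."""
    return Classification(doc["classification"]) is Classification.PUBLIC

# Hypothetical document inventory with classification labels already applied.
docs = [
    {"name": "press-release.md", "classification": "public"},
    {"name": "q3-financials.xlsx", "classification": "confidential"},
]

shareable = [d["name"] for d in docs if safe_to_share(d)]
print(shareable)
# ['press-release.md']
```

The point isn't the code itself; it's that "is this safe to share?" becomes an explicit, auditable check instead of a judgment call made in the moment.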

Be aware of data privacy laws like GDPR and CCPA that restrict the processing of personal data, as well as ethical AI principles around consent, transparency, fairness, and accountability. To stay on the right side of both:

  • Collaborate early and often with legal, security, and ethics experts.

  • Conduct risk assessments and data protection impact analyses for AI projects.

  • Have processes to address AI bias, explain decisions, and allow for human overrides when needed.

Proactively Secure the Data You Share with OpenAI (ChatGPT)!

As I mentioned earlier, you should try to ensure that the data you enter into an AI tool isn’t included in its future training data (this is a grey area; Google “AI training data lawsuits” if you need help falling asleep tonight).

Ethics and law aside, you should at least do your best to protect your data.

As an example, here’s how to request that the information you exchange with ChatGPT isn’t included in OpenAI’s training set:

OpenAI provides a privacy request portal where you can view past requests and submit new ones to access or delete your data:

Use this link to submit a data privacy request: [Submit a Data Exclusion Request to OpenAI]

From there, you can explicitly request that your conversations be excluded from OpenAI’s training data.

They certainly don’t make it easy!

But that tells me there’s value for OpenAI in having access to our conversations. So take active steps to take back control of your data!

Take Action!

I encourage you to review your current data handling practices for potential risks from AI adoption.

Work closely with your IT, security, and legal teams to properly classify data and implement proper technical, operational, and governance safeguards.

Be proactive about building secure and responsible AI practices from the start.

I look forward to your experiences balancing innovation with data privacy and protection.

Let me know if you have any other questions!

OK. That’s all. Talk to you next week.

-Ed

Ed Krystosik - Co-founder of RAC Projects AI,
an AI Automation Agency for Business Growth

P.S. Whenever you’re ready, here’s how I can help grow your business:

  • If you’re reading this online, join 1,861+ business owners receiving regular emails helping them build, automate, and grow their business with AI.

  • If you’re struggling to grow your business, create content, or generate and manage leads with AI, click HERE to learn more about RAC Projects AI, our full-service AI Business Process Automation agency.

Hit reply and let me know what you found most helpful this week—I’d love to hear from you!

Keep building.