📧 AI Data Security & Privacy | 05.02.24

Hi there,

Ed here.

Just because your organization hasn’t given explicit instructions on how sensitive customer and company data can be used in AI tools doesn’t mean you can “wild west” it and hope for the best.

That kind of behavior is best left for stakeholder communication and Gantt charts! (I’m joking, of course.)

Companies have been painfully slow to adopt standards and practices for AI use. That’s just the nature of most companies with new technology. 📠

That’s a recipe for disaster when mixed with innovative project managers. 🧑‍🍳

As project managers, we have access to a broader set of data than most other people in our companies, since we’re involved in almost every aspect of the business.

Consider this a stark reminder: as we increasingly leverage AI tools in our projects, ensuring the privacy and security of sensitive data is more critical than ever. 📊🛡️

The Risks of AI and Data Privacy

AI does fantastic things with raw data and reference materials.

However, without proper precautions, AI tools could potentially expose or misuse this proprietary company data or personally identifiable information (PII).

Beyond the ethical implications, data breaches can lead to legal consequences like GDPR violations and erode stakeholder trust.

As project managers, we must protect our customers' private information and our companies' confidential data while leveraging AI's benefits responsibly.

Safeguarding Customer Information

Customer data like full names, contact details, financial information, and purchase histories must be kept private. Before feeding any customer data to AI models, take steps to remove or conceal sensitive details (I’ve sketched out what this can look like in code after the list):

  • Replace real names with pseudonyms

  • Remove emails, phone numbers, and other contact info

  • Only provide the minimum customer data required

  • Include proprietary company data only when absolutely necessary
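
Here’s a minimal Python sketch of what that cleaning step can look like. The regex patterns, the Jane Doe record, and the Customer-001 pseudonym are all made-up examples (real PII detection takes more than two regexes!), but the principle holds: mask or drop sensitive details before anything leaves your machine.

```python
import re

# Hypothetical patterns -- real PII detection needs far more than two regexes.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\b(?:\+?1[-. ]?)?\(?\d{3}\)?[-. ]?\d{3}[-. ]?\d{4}\b")

def scrub_notes(text: str, names: dict) -> str:
    """Mask PII in free text before it goes anywhere near an AI tool."""
    text = EMAIL.sub("[EMAIL]", text)           # remove emails
    text = PHONE.sub("[PHONE]", text)           # remove phone numbers
    for real_name, pseudonym in names.items():  # replace real names with pseudonyms
        text = text.replace(real_name, pseudonym)
    return text

notes = "Call Jane Doe (jane.doe@example.com, 555-867-5309) about the renewal."
print(scrub_notes(notes, {"Jane Doe": "Customer-001"}))
# -> Call Customer-001 ([EMAIL], [PHONE]) about the renewal.
```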

Additionally, be transparent about your AI usage by updating privacy policies and obtaining proper permissions through explicit consent from customers.

Protecting Proprietary Company Data

Confidential business information, such as financial reports, product roadmaps, market strategies, and source code, also requires careful handling.

I approach company data the same way I approach customer data: I clean the data before I feed proprietary information into a large language model (LLM).

As a best practice, clearly label and isolate confidential data from public training data sources. This means removing or changing specific names, places, and other information that could inadvertently leak into the model’s training data (more on that in a minute).
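
To show what I mean, here’s a minimal sketch of that habit for company data. The project names and the ALIASES table are hypothetical; build your own from whatever internal terms (codenames, clients, systems) must never leave the building. Mask before you prompt, unmask the reply locally.

```python
# A minimal sketch of the "clean before you paste" habit for company data.
# The ALIASES table below is hypothetical -- fill it with your own
# confidential terms.

ALIASES = {
    "Project Falcon": "Project A",
    "Acme Corp": "Client X",
    "Q3 pricing overhaul": "Initiative 1",
}

def mask(text: str) -> str:
    """Swap confidential terms for neutral placeholders before prompting an LLM."""
    for real, placeholder in ALIASES.items():
        text = text.replace(real, placeholder)
    return text

def unmask(text: str) -> str:
    """Restore the real terms in the model's response -- locally, on your machine."""
    for real, placeholder in ALIASES.items():
        text = text.replace(placeholder, real)
    return text

prompt = mask("Draft a status update on Project Falcon for Acme Corp.")
print(prompt)  # -> Draft a status update on Project A for Client X.
# ...send `prompt` to the LLM, then run unmask() on the reply.
```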

Be aware of data privacy laws like GDPR and CCPA that restrict the processing of personal data, as well as ethical AI principles around consent, transparency, fairness, and accountability.

Collaborate early and often with your company’s legal, security, and ethics experts.

Conduct risk assessments and data protection impact analyses for AI projects.

Have processes to address AI bias, explain decisions, and allow for human overrides when needed.

Proactively Secure the Data You Share with OpenAI!

As I mentioned earlier, you should try to ensure that the data you enter into an AI tool isn’t included in its future training data (this is a grey area; Google “AI training data lawsuits” if you need help falling asleep tonight).

Ethics and law aside, you should at least do your best to protect your data.

As an example, here’s how to request that the information you exchange with ChatGPT isn’t included in OpenAI’s training set.

OpenAI provides a privacy request portal where you can view past requests and submit new ones to access or delete your data.

From there, you can explicitly submit a request to exclude your conversations from OpenAI’s training data.

They certainly don’t make it easy!

But that tells me there’s real value for OpenAI in having access to our conversations. So take active steps to reclaim control of your data!

Take Action!

I encourage you to review your current data handling practices for potential risks from AI adoption.

Work closely with your IT, security, and legal teams to classify data correctly and implement proper technical, operational, and governance safeguards.

Be proactive about building secure and responsible AI practices from the start.

I look forward to hearing about your experiences balancing innovation with data privacy and protection.

Let me know if you have any other questions!

P.S. Don't miss my upcoming LinkedIn Live Office Hours on Wednesday, May 8th. I'll be talking more about responsible AI implementation and taking your questions. Keep an eye out for the link in your inbox!


OK. That’s all. Talk to you next week.

-Ed

If you found this newsletter valuable, please consider sharing it with a friend or colleague.

Did someone forward this email to you? 👉 Subscribe Here.

AI Disclosures: The content of this email was written mostly by me, Ed, a human. This email contains some AI-generated content to clarify the more technical details.