
Transform into a data security culture

Risk of unintentional human data leaks to genAI platforms?
by Stephan Steiner

Last week sparked a compelling dialogue within my teams focused on AI innovation and on privacy and security compliance. As we forge ahead with new AI applications, we are confronted with the pressing issue of employees inadvertently leaking sensitive company data to public AI tools, and with the imperative to mitigate this risk. While this concern isn't novel, it demands renewed attention: resolving it requires a holistic, strategic approach that transcends purely technological solutions.

Damaging impacts:
The World Economic Forum warns of a potential surge in third-party data breaches in 2024, following a staggering 72% increase in breaches in 2023 compared to the preceding year. Projections indicate that forthcoming breaches will predominantly target major tech firms housing extensive customer data, including sensitive information. While these attacks may not target AI infrastructure directly, they exploit vulnerabilities in network APIs and other entry points. Moreover, the proliferation of cloud applications holding sensitive data, integrated with multiple third-party vendors, compounds the number of potential breach points.

While the repercussions of data breaches vary, IBM's 2023 Cost of a Data Breach Report puts the average cost at USD 4.45 million, a 2.3% increase over 2022. This financial toll does not include the damage to a company's brand reputation and long-term shareholder value, or the loss of customer trust.

Can we resist the forbidden apple?
The advent of generative AI behemoths like OpenAI's ChatGPT, Google's Gemini, Microsoft's Copilot, and Grok, coupled with widespread enthusiasm and curiosity about generative AI, has prompted a surge in personal account registrations as people experience these services for themselves. After all, curiosity about this novel technology is only human. Notably, OpenAI's ChatGPT boasts approximately 100 million weekly users and saw 1.63 billion visits in February 2024, with only a minuscule fraction opting for the paid ChatGPT Plus subscription (fewer than 1 million paid subscribers as of this writing). However, as generative AI tools permeate mainstream usage, individuals are increasingly, and often inadvertently, blurring the lines between personal and professional content, potentially exposing sensitive company data during their exploration. For instance, employees may use these tools to rewrite sections of sensitive documents, supplying customer information, sales or pricing data, competitive analyses, product details, or snippets of code, all while unwittingly feeding proprietary data into public AI models.

While many companies boast robust data protection mechanisms against external threats, the human factor remains a key vulnerability, as documented in Verizon's 2022 Data Breach Investigations Report: ransomware breaches surged by 13% in 2021, and a staggering 82% of breaches involved human error or malfeasance. Yet the inadvertent exposure of sensitive data to public generative AI providers remains a largely unreported threat vector. How long will it take for unsavory characters to uncover information about your company on public AI platforms, information they should have no access to? And in the coming years, I am sure we will see the emergence of bad actors trying to find and sell confidential company information surfaced through public genAI platforms.

Mitigation through culture transformation:
To address these challenges, companies must revisit their posture towards data security. Most have a solid approach to complying with regulatory requirements under regulations like GDPR, HIPAA, CCPA, and LGPD. And as part of their security management protocols, employees are asked to complete training or sit through information sessions on a regular basis. But how many companies weave security into their culture? Most of these sessions are seen as a "burden" or a "must do", akin to filing annual tax returns, rather than something actively lived day to day.

In today's rapidly evolving digital landscape, the emergence of personal genAI accounts presents a critical consideration that reaches far beyond the confines of traditional IT or cybersecurity departments. Below are some recommendations to get started:

  • Engage the C-suite, as the implications extend to every corner of the organization; do not simply delegate this to "a team".
  • Establish clear ownership and involve experts to craft a robust strategy. Ensure there is a methodology to review and adapt the policy as technology changes.
  • Delineate which services or accounts fall within acceptable boundaries, drawing clear lines between personal and corporate usage.
  • Establish a team or service to monitor (and report on) what sensitive company data might already be "in the open", and define enforcement and mitigation mechanisms; a minimal sketch of such a check follows this list.
  • Implement the policies with top-down support to pave the way and provide sufficient resources for change management teams.
  • Transform into a “data security culture” mindset and organization.
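To make the monitoring and enforcement idea concrete, here is a minimal sketch of an outbound prompt check a security team might run before text is sent to a genAI service. The pattern list, the in-house customer ID format, and the approved endpoint are all hypothetical placeholders; a real deployment would draw them from your own data classification policy and your list of sanctioned tools.

```python
import re

# Hypothetical patterns a security team might flag before a prompt leaves the
# corporate network; a real list would come from your data classification policy.
SENSITIVE_PATTERNS = {
    "payment card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "confidentiality marker": re.compile(r"\b(CONFIDENTIAL|INTERNAL ONLY)\b", re.IGNORECASE),
    "customer ID": re.compile(r"\bCUST-\d{6}\b"),  # illustrative in-house ID format
}

# Hypothetical allow-list of genAI endpoints approved for corporate use.
APPROVED_ENDPOINTS = {"genai.internal.example.com"}


def review_prompt(prompt: str, destination_host: str) -> list[str]:
    """Return a list of findings; an empty list means the prompt may proceed."""
    findings = []
    if destination_host not in APPROVED_ENDPOINTS:
        findings.append(f"destination '{destination_host}' is not an approved genAI service")
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(prompt):
            findings.append(f"possible {label} detected in prompt")
    return findings


if __name__ == "__main__":
    sample = "Please rewrite this pricing sheet for customer CUST-123456 (INTERNAL ONLY)."
    for finding in review_prompt(sample, "chat.example-public-ai.com"):
        print("Blocked:", finding)
```

A check like this catches only the obvious cases; paraphrased or summarized confidential content will slip straight through, which is exactly why the cultural measures above matter more than any single filter.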

In the end, the transformation to close the “human security gap” and become a “security by design culture” will take time – but given that your company’s sensitive data can be stored and used by public genAI platforms for eternity, it will be well worth the effort.

Where are you on this journey? Please comment or drop me a line.

#StephanSteiner #AIdatasecurity #AItraining