AI data leaks soar as employees bypass company rules

As generative AI continues to infiltrate the modern workplace, its use is triggering alarm bells around data security and compliance. A recent TELUS Digital Experience survey conducted in January 2025 revealed that 57% of employees at large US enterprises admitted to entering confidential company data into tools like ChatGPT, Google Gemini, and Microsoft Copilot.

Perhaps more concerning is the prevalence of personal account usage: 68% of respondents accessed GenAI assistants through non-corporate platforms, effectively sidestepping IT and security protocols. This phenomenon, dubbed ‘shadow AI’, is widening the cracks in already fragile data governance frameworks.

Two out of three office workers are using AI tools without formal company approval, and the absence of strong regulatory oversight has created a wild west of AI experimentation – one where sensitive data is casually handed over with little thought to the consequences.

What's at stake: sensitive data in the firing line

Harmonic Security’s research found that 8.5% of prompts entered into generative AI tools contain sensitive data. Within this, customer data makes up the lion’s share at 46%, followed by employee information at 27%, and legal or financial records at 15%.

Why the lapse in judgement? A mix of productivity pressure, vague policies, and poor risk awareness. Many employees are unaware that their prompts could be stored and used to train AI systems, creating a risk of data leaks. Without proper training or oversight, these digital shortcuts can quickly become organisational headaches.

Speed versus safety: the productivity trade-off

The appeal of AI lies in its ability to take the heavy lifting out of daily tasks – drafting documents, summarising reports, streamlining workflows. But using personal accounts comes at a price. Employees are lulled into a false sense of privacy, wrongly assuming that their data is deleted or walled off from prying eyes. With inconsistent enforcement and minimal accountability, policy violations often go unchecked.

Security policies are more suggestion than rule

Despite many firms drawing a line in the sand against feeding sensitive data into AI, employees continue to cross it. TELUS’s survey found:

  • 31% entered personal details like names, addresses, emails, and phone numbers
  • 29% shared information about unreleased products and internal projects
  • 21% disclosed customer records including order histories and communications
  • 11% entered financial data such as revenue, margins, and forecasts

Yet only 29% of respondents said their organisations had clear AI guidelines, and just 24% had undergone mandatory AI training. Alarmingly, 50% didn’t know whether they were compliant, and 42% reported no consequences for flouting rules. It’s a case of all policy and no policing.

AI use continues to climb

Security concerns aside, AI tools are becoming workplace staples. Among surveyed employees:

  • 60% said AI helped them complete tasks faster
  • 57% reported greater efficiency
  • 49% noticed better performance overall

Overall, 84% were keen to keep using AI at work.

Supporters praise GenAI for sparking creative ideas and automating repetitive tasks. But security leaders remain cautious, pointing to serious risks around data sovereignty, intellectual property theft, and compliance breaches.

5 data types commonly shared with AI

Understanding the specific data types shared with AI tools reveals critical vulnerabilities:

1) Customer data

Employees often use AI to help process claims, summarise customer communications, and manage feedback. But these shortcuts could land firms in hot water with privacy watchdogs, especially under laws like the GDPR.

2) Employee records

HR teams risk exposing performance reviews, payroll, and personal identifiers – sensitive data that is highly regulated and potentially damaging if leaked.

3) Financial and legal data

Using AI for editing contracts or translating documents might save time but could accidentally expose earnings reports, merger details, or regulatory filings.

4) Security credentials

A small but troubling number of employees are entering passwords, encryption keys, and network access credentials, essentially handing hackers the keys to the kingdom.

5) Proprietary code

Developers tapping AI for debugging or code generation could unintentionally share trade secrets, leaving the door open for intellectual property theft or competitive sabotage.

Big firms draw a line

Companies like Apple, Amazon, and Deloitte are taking a firm stance, restricting or banning tools such as ChatGPT internally.

This cautious approach reflects growing recognition that when it comes to data, the cost of carelessness can be sky-high.

Six ways to stay ahead of AI risk

Smart organisations are moving fast to plug the leaks. Key strategies include:

1) Establishing clear AI usage policies: Spell out what is and isn’t acceptable when using AI tools in the workplace.

2) Mandatory AI security training: Equip employees with the knowledge to use AI responsibly and avoid risky behaviours.

3) Restricting sensitive data inputs: Create hard rules against entering customer, employee, and financial data into public AI models, ideally backed by automated checks rather than policy documents alone (a simple sketch follows this list).

4) Avoiding legal and compliance use cases: Keep AI out of sensitive regulatory workflows and contract analysis.

5) Prohibiting the sharing of code or credentials: Protect business-critical assets by banning uploads of passwords, encryption keys, and proprietary code.

6) Updating AI governance regularly: AI evolves quickly, so policy updates need to keep pace with emerging threats and best practices.
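
For points 3 and 5, one practical enforcement layer is an automated check that screens prompts before they ever reach a public model. The Python sketch below is a minimal, hypothetical illustration only – the pattern list and the check_prompt helper are assumptions for this article, not any vendor's API – and a production deployment would typically rely on a dedicated DLP or AI-gateway product with far broader detection.

```python
import re

# Hypothetical pre-prompt filter: block prompts that appear to contain
# sensitive data or credentials before they are sent to a public GenAI API.
# Illustrative only; real DLP tooling covers far more patterns and context.
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone number": re.compile(r"\+?\d[\d\s().-]{8,}\d"),
    "AWS access key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "card-like number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def check_prompt(prompt: str) -> tuple[bool, list[str]]:
    """Return (allowed, findings): findings lists the pattern names that matched."""
    findings = [name for name, pattern in SENSITIVE_PATTERNS.items()
                if pattern.search(prompt)]
    return (not findings, findings)

if __name__ == "__main__":
    prompt = "Summarise this complaint from jane.doe@example.com about order 4521"
    allowed, findings = check_prompt(prompt)
    if allowed:
        print("Prompt forwarded to the approved AI endpoint.")
    else:
        print("Blocked: prompt appears to contain " + ", ".join(findings))
```

A gateway of this kind cannot catch everything (free-text customer details, for instance, need more sophisticated classification), but it turns "don't paste credentials into chatbots" from a policy statement into a check that fires every time.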

The line between innovation and insecurity is thin. Organisations must act swiftly to strike a balance between harnessing AI’s potential and protecting their most valuable asset: trust.
