
Press Release -- July 9th, 2024
Source: next-dlp

Caught in the Shadows: Security Pros Admit to Using Unauthorized SaaS and AI (Despite the Risk)

Research Reveals One In Ten Security Professionals Admit To Having Suffered A Data Breach As A Result Of Shadow SaaS, But Still Regularly Use Unauthorized Tools

Next DLP (“Next”), a leader in insider risk and data protection, today revealed that nearly three-quarters (73%) of security professionals admit to having used, in the past year, SaaS applications that were not provided by their company’s IT team. This is despite the fact that they are acutely aware of the risks, with respondents naming data loss (65%), lack of visibility and control (62%), and data breaches (52%) as the top risks of using unauthorized tools. Adding to this, one in ten admitted they were certain their organization had suffered a data breach or data loss as a result.

A survey of more than 250 global security professionals, conducted at RSA Conference 2024 and Infosecurity Europe 2024, also revealed that despite having a laissez-faire attitude towards Shadow SaaS, security professionals have taken a more cautious approach to GenAI usage. Half of the respondents highlighted that AI use had been restricted to certain job functions and roles in their organization, while 16% had banned the technology completely. Adding to this, 46% of organizations have implemented tools and policies to control employees’ use of GenAI.

“Security professionals are clearly concerned about the security implications of GenAI and are taking a cautious approach,” explained Next DLP’s Chief Security Officer (CSO), Chris Denbigh-White. “However, the data protection risks associated with unsanctioned technology are not new. Awareness alone is insufficient without the necessary processes and tools. Organizations need full visibility into the tools employees use and how they use them. Only by understanding data usage can they implement effective policies and educate employees on the associated risks.”

The research also provided a snapshot of how security professionals view their organization’s training and overall understanding of the risks of Shadow SaaS:

  • 40% of security professionals do not think employees properly understand the data security risks associated with Shadow SaaS and AI.
  • Yet, they are doing little to combat this risk. Only 37% of security professionals had developed clear policies and consequences for using these tools, and even fewer (28%) were promoting approved alternatives to combat usage.
  • Only half had received guidance and updated policies on Shadow SaaS and AI in the past six months, while one in five admitted to never having received any.
  • Additionally, nearly one-fifth of security professionals were unaware of whether their company had updated policies or provided training on these risks, indicating a need for further awareness and education.

“Clearly, there is a disparity between employee confidence in using these unauthorized tools and the organization’s ability to defend against the risks,” added Denbigh-White. “Security teams should evaluate the extent of Shadow SaaS and AI usage, identify frequently used tools, and provide approved alternatives. This will limit potential risks and ensure confidence is deserved, not misplaced.”

For further insights into the survey results, please see the full results report linked here. Or, for more information about Shadow SaaS and AI, and the possible defenses, visit the Next DLP website.


The survey of more than 250 global security professionals was conducted at RSA Conference 2024 and Infosecurity Europe 2024. Each respondent was asked the same ten questions about Shadow SaaS and Shadow AI usage within their organization, the associated security risks, and the policies and security tools their company has in place.

About Next DLP:

