Popular Doesn’t Mean Secure - The 2025 State of Shadow AI Report Findings

In 2025, generative AI is being used everywhere within companies, often without proper approval. This has created a trend called shadow AI. People in different departments rely on AI for their work: marketing teams link AI to customer databases, engineers use it to fix code, and HR might upload sensitive employee data to insecure AI platforms.
This creates gaps in security that traditional tools can’t handle.
Ignoring shadow AI isn’t just a theoretical risk; it’s costly. Companies with high levels of shadow AI have faced data breaches that cost an average of $670,000. Recent research shows the urgency of the problem: 71% of office workers admit they use AI tools without approval from their IT departments, and almost 20% of businesses have already experienced data breaches or leaks because of unauthorized AI use.
You Can’t Secure What You Can’t See
The principle behind managing AI risks today is simple: you can’t protect what you can’t see. Shadow AI refers to AI tools used without approval or oversight, which means security teams often can’t see or control them. Not all unapproved AI tools are automatically unsafe; some might even look secure on the surface. What makes them risky is the lack of visibility and control: if you don’t know a tool is being used, you can’t check its security, manage how sensitive data is handled, or respond to any problems it might cause.
This leads to a growing number of unknown risks in your system.
An employee can start using a new AI app in just a few minutes, but it might take the security team months to notice and review it (if they ever do). By the time they check it, the tool could already be a key part of daily work. That’s why the first and most important step in managing hidden AI risks is visibility. Companies should consider using dynamic SaaS security platforms to find these unnoticed AI apps. These platforms can track network activity, SaaS logs, browser extensions, OAuth authorizations, and more.
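As a simplified illustration of what that discovery looks like at its most basic, here is a hedged Python sketch that matches web-proxy log entries against a small seed list of AI-tool domains. The CSV format, the column names (`user`, `domain`), and the domain list are all assumptions for illustration; a dedicated platform correlates far more signals, continuously.

```python
import csv
from collections import defaultdict

# Hypothetical seed list of AI-tool domains. A real discovery platform
# maintains a much larger, continuously updated catalog.
AI_DOMAINS = {
    "chat.openai.com": "ChatGPT",
    "api.openai.com": "OpenAI API",
    "www.perplexity.ai": "Perplexity AI",
    "otter.ai": "Otter.ai",
}

def discover_shadow_ai(log_path):
    """Map each detected AI tool to the set of users reaching it.

    Expects a CSV export of web-proxy logs with 'user' and 'domain'
    columns (the export format and column names are assumptions).
    """
    tools_to_users = defaultdict(set)
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            tool = AI_DOMAINS.get(row["domain"].strip().lower())
            if tool:
                tools_to_users[tool].add(row["user"])
    return tools_to_users

if __name__ == "__main__":
    for tool, users in discover_shadow_ai("proxy_logs.csv").items():
        print(f"{tool}: {len(users)} unsanctioned users")
```

Even this crude matching surfaces who is using what; the hard part, which dedicated platforms solve, is keeping the catalog current and layering in signals like OAuth grants and browser extensions.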
In this article, we’re going to dive into some of the most compelling findings from our 2025 State of Shadow AI Report, which looks at these issues in depth.
Popular ≠ Secure: The AI App Misconception
A dangerous misconception that has taken hold in many organizations is that widely used AI applications must be safe or enterprise-ready by default. The 2025 State of Shadow AI Report reveals the opposite. The ten most prevalent shadow AI apps in use were found to have alarmingly poor security. Three of the worst offenders, Jivrus, Happytalk, and Stability AI, received failing security grades, lacking fundamental protections like encryption, multi-factor authentication (MFA), and audit logging.
In other words, employees have been entrusting company data to popular AI tools with gaping security holes. And it’s not just those three: an additional seven AI applications were rated high risk for weak encryption, no data retention policies, poor access controls, and other issues – making a total of ten widely used AI tools that are actively endangering corporate data. This disconnect between popularity and security is a serious blind spot.

CreativeX and Otter.ai, two AI apps that each gained thousands of enterprise users, have security scores so low they should be disqualified from any enterprise environment. Yet employees flocked to them for their rich features and convenience; users tend to choose AI tools based on capabilities and buzz, not security vetting. The result is a popularity trap: a tool’s widespread usage gives false confidence about its safety, while in reality that very popularity turns it into an enterprise-wide vulnerability.

To counter this, security leaders should guide their organizations toward secure-by-design AI options before the risky ones take root. Popularity does not equal protection. One practical step is publishing a pre-approved list of vetted AI tools and use cases, steering well-meaning employees toward safer alternatives. Most staff just want to get their job done with the best tools available; by proactively offering trusted, vetted AI solutions for common tasks, companies can prevent employees from unwittingly adopting the latest trendy app that secretly has poor security.
All Eggs in One Basket - Overdependence on OpenAI
Another insight from the report is the outsized dependence on a single AI vendor across many enterprises. OpenAI’s services account for 53% of all shadow AI usage in the studied organizations, processing data from over 10,000 enterprise users – more usage than the next nine AI platforms combined. In effect, roughly half of all AI-driven risk in these companies flows through a single platform.
This creates a classic single point of failure.
Any security incident, data leak, API compromise, unexpected policy change, or even extended outage at OpenAI could simultaneously disrupt or compromise half of the organization’s AI workflows. Relying so heavily on one vendor also diverts attention from the shadow within the shadow (the long tail of smaller AI tools quietly spreading under the radar).

While everyone’s focused on ChatGPT, other platforms like Perplexity AI, Synthesia, Valence, and Blip are expanding their footprint in the shadows, often ingesting sensitive data without IT’s awareness. Security teams can’t afford to fixate on one popular service and ignore the rest.
To mitigate this risk, organizations should implement OpenAI-specific protection mechanisms and monitoring, given that platform’s dominant role. Enforce data classification and handling rules for any interaction with OpenAI, define approved use cases, provide training on safe use, and consider enterprise licensing options that give more visibility and control. At the same time, broaden your discovery efforts to encompass the full spectrum of AI tools in use, not just ChatGPT. The goal is to avoid having all your eggs in one basket.
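As one concrete example of such a handling rule, the sketch below redacts common sensitive patterns from a prompt before it leaves the network. The patterns and the enforcement point (for instance, an egress gateway inspecting traffic to api.openai.com) are illustrative assumptions, not a complete DLP policy.

```python
import re

# Illustrative patterns only; real classification rules would come from
# your organization's DLP policy, not a handful of regexes.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "us-ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api-key": re.compile(r"\bsk-[A-Za-z0-9]{20,}\b"),  # assumed key format
}

def screen_prompt(prompt):
    """Redact sensitive matches before a prompt is sent to an AI service."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED:{label}]", prompt)
    return prompt

print(screen_prompt("Summarize the complaint from jane.doe@example.com"))
# Output: Summarize the complaint from [REDACTED:email]
```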
When Shadow AI Takes Root - Long Term Entrenchment
It’s a mistake to think of shadow AI as a short-term, transient experiment that employees toy with and then abandon. The reality is much more troubling: once unsanctioned AI tools prove useful, employees tend to keep using them, often for the long haul.
Our report found that many shadow AI apps are far from fleeting novelties. In fact, two particular tools (CreativeX and System.com) had median usage durations of about 403 and 401 days respectively – well over a year of continuous use without formal approval or oversight. In practice, after even 100+ days of continuous use, an AI tool is no longer a trial; it’s embedded in core business processes and daily workflows. At that point, trying to rip it out isn’t just an IT task; it’s a potential business disruption.

Imagine telling a team to suddenly stop using a tool that has become fundamental to their productivity - you’re likely going to meet serious resistance. The longer a shadow AI tool lurks in use, the harder it becomes to eliminate, and the more security debt piles up with each passing day.
Organizations should identify any unsanctioned AI tools that show extended use (e.g. more than 60 or 90 days) and decide how to handle them. If a tool has proven its value to users, consider officially approving it with proper security controls and monitoring in place. If it’s too risky, migrate those users to a safer, sanctioned alternative before the tool becomes irreplaceable. The longer you wait, the more business critical (and harder to remove) these shadow tools become.
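A minimal sketch of that triage, assuming your discovery tooling can export first-seen and last-seen dates per unsanctioned tool (the records and the 90-day threshold below are made up for illustration):

```python
from datetime import date

# Hypothetical export from a discovery platform: tool -> (first seen, last seen).
usage = {
    "CreativeX": (date(2024, 3, 1), date(2025, 4, 8)),
    "System.com": (date(2024, 3, 5), date(2025, 4, 10)),
    "NewNotetaker": (date(2025, 3, 20), date(2025, 4, 10)),
}

THRESHOLD_DAYS = 90  # policy choice; anywhere in the 60-90 day range is defensible

for tool, (first_seen, last_seen) in sorted(usage.items()):
    tenure = (last_seen - first_seen).days
    if tenure > THRESHOLD_DAYS:
        print(f"{tool}: {tenure} days in use -> entrenched: sanction or migrate")
    else:
        print(f"{tool}: {tenure} days in use -> early: redirect users now")
```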
Small Organizations, Outsized Vulnerabilities
A final insight from the 2025 report is a paradox: smaller organizations face disproportionately large shadow AI risks. One might assume that tech giants and large enterprises (with tens of thousands of employees) have the biggest shadow AI problems, and in absolute terms they do, but the highest concentration of shadow AI is actually in small and mid-sized businesses.
Companies with just 11–50 employees showed the densest usage, averaging 269 unsanctioned AI tools per 1,000 employees (roughly 27% of employees actively using them). More than one in four employees at a small firm might be using some AI app without IT’s knowledge. Even mid-sized organizations (500–1,000 employees) had about 200 shadow AI tools per 1,000 users in the study – an enormous per capita exposure that isn’t far behind the smallest companies.

Why are smaller firms so exposed?
Generally speaking, smaller companies have minimal (or zero) dedicated security staff and lack formal tooling or policies to rein in unauthorized IT. In effect, everything is permitted by default because there’s no one watching. This creates a perfect storm of high AI adoption and low security oversight. When over a quarter of your workforce is using unapproved AI tools, but you have no shadow IT discovery in place and no AI usage policies, every one of those employees becomes a potential entry point for cybercriminals.
For resource-constrained organizations, the guidance is to focus limited security efforts where they matter most. Rather than trying to boil the ocean, identify the highest-risk AI activities or tools and lock those down first. This might mean restricting use of known dangerous apps and whitelisting only a handful of trusted AI services for employees to use. Smaller teams might even take a default-deny stance on AI SaaS access, allowing only pre-vetted tools until they have the capability to monitor a broader range of applications.
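For teams taking that stance, the decision logic itself is simple enough to sketch. The domains below are placeholders; in practice the rule would live in the web proxy, secure web gateway, or DNS filter rather than in application code.

```python
# Default-deny for AI SaaS: traffic to a known AI domain is blocked
# unless that domain has been explicitly vetted and approved.
KNOWN_AI_DOMAINS = {"api.openai.com", "otter.ai", "vetted-ai.example"}
APPROVED_AI_DOMAINS = {"vetted-ai.example"}  # hypothetical vetted vendor

def allow_request(domain):
    """Return True if a request passes the AI-SaaS access policy."""
    if domain not in KNOWN_AI_DOMAINS:
        return True  # not a known AI service; other controls apply
    return domain in APPROVED_AI_DOMAINS

print(allow_request("api.openai.com"))     # False -> blocked until vetted
print(allow_request("vetted-ai.example"))  # True  -> explicitly approved
```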
Take Your Next Steps With Reco
The rise of shadow AI has fundamentally changed how businesses think about risk.
Unauthorized AI use is widespread, it isn’t going away, and it’s creating new attack surfaces that can’t be ignored. The risk is both wide and deep: insecure apps hiding in your environment, popular AI services that become single points of failure, stealthy long-term deployments, and outsized exposure for small businesses. None of this should cause paralysis or unfounded fear; it should prompt action.
Security leaders need to make shadow AI a top priority for their organizations and bring it out of the shadows. If you found these insights useful, read our full 2025 State of Shadow AI Report for more in-depth analysis and recommendations. Don’t put off taking action: request a demo to see how our platform can help you find hidden AI tools, assess how risky they are, and take back control of shadow AI in your environment.

Nir Barak
ABOUT THE AUTHOR
Nir Barak is the Principal Data Engineer & Architect at Reco. He has deep expertise in implementing scalable systems that handle billions of events a day.