'Shadow AI' Continues to Lurk in Healthcare Settings

Unauthorized AI tool use persists across medical workplaces as data privacy and patient safety risks mount

As tech companies race to make AI tools as standard-issue as stethoscopes, the technology has already penetrated virtually every corner of the healthcare industry.

But a significant share of that usage remains in the shadows — ungoverned by institutional oversight and rife with security and patient safety risks, experts warn. The diagnosis from across the industry is consistent: the adoption of AI tools is happening faster than healthcare organizations can write policies.


What Is Shadow AI?

Shadow AI refers to the use of artificial intelligence tools — particularly large language models (LLMs) — outside of official institutional oversight and approval processes. In healthcare settings, this typically manifests as clinical staff using consumer AI chatbots to draft SOAP notes or clinical summaries, deploying AI code assistants without oversight, or uploading confidential patient data to public generative AI platforms like ChatGPT, Claude, or Gemini.

The phenomenon is an evolution of the longer-standing "shadow IT" problem: personal cloud storage, unauthorized messaging apps, or unvetted project management platforms that fall outside an organization's approved ecosystem but still handle sensitive data or communications. Shadow AI, however, introduces a deeper layer of risk.

Several converging factors have made 2026 the defining year for shadow AI in healthcare. Advanced AI capabilities are now accessible to anyone with a web browser or smartphone. Physicians spend one to two hours on EHR documentation for every hour of direct patient care — and facing burnout and administrative overload, clinicians are seeking any solution that promises relief, even unauthorized ones. While major health systems are deploying enterprise ambient AI scribes, many smaller practices, rural clinics, and individual providers lack access to compliant alternatives, creating a "have and have-not" divide that drives shadow usage.

Nearly One in Five Healthcare Workers Admit to Unauthorized AI Use

The scale of the problem has been thrown into sharp relief by new data. Wolters Kluwer Health commissioned a December 2025 survey of over 500 healthcare workers — half administrators and half providers — and found that 17% admitted to using unauthorized AI tools in the workplace. More than 40% of medical workers and administrators said they were aware of colleagues using shadow AI products.

Among those who confirmed using unapproved AI tools, 45% cited faster workflows as their primary reason, while 24% said it was because the unauthorized tools offered better functionality than currently approved options.

Alex Tyrrell, SVP and CTO of Wolters Kluwer's health division, told Healthcare Brew that healthcare workers aren't necessarily breaking the rules intentionally. They may simply not have a clear idea of which tools are allowed, or of how tech companies use the data entered into AI systems for training.

"As these tools become more ubiquitous, as we become familiar with them and use them in our daily lives, there's the potential to kind of blur the line when you're in a workplace setting, particularly in a regulated environment," Tyrrell said.

Making matters more complicated, virtually every workplace platform has now introduced a wave of new AI features that may not even be clearly labeled as such. "Suddenly there's a new button, and that new button may be AI-driven, and it may not have gone through the same vetting process," Tyrrell warned. "That's almost another new avenue or new vector for shadow AI."

A Double Threat: Data Breaches and Patient Safety

The risks of shadow AI in healthcare fall into two distinct but overlapping categories: data privacy violations and patient safety.

On the data side, the numbers are already alarming. A 2025 IBM report found that the average cost of a data breach in the healthcare industry exceeded $7.4 million, and that 97% of organizations with AI-related security incidents lacked proper AI access controls. Across all sectors, 20% of surveyed organizations suffered a breach tied to shadow AI, seven percentage points higher than the share tied to sanctioned AI. Organizations with high levels of shadow AI reported breach costs that were $200,000 higher on average, and shadow AI displaced the security skills shortage as one of the top three costliest breach factors.

The broader cybersecurity picture is equally troubling. The healthcare sector experienced twice as many breaches in 2025 as it did in 2024, driven by ransomware attacks and third-party risk, with many intrusions now threatening operations more than data privacy. Shadow AI is adding yet another attack surface to an already embattled industry.

Andy Fanning, co-founder and CEO of healthcare company Optura, was blunt about the privacy stakes: "If you upload 100 claim files into ChatGPT base — just your normal ChatGPT — they're training on that data," he told Healthcare Brew.

On the patient safety front, when large language models hallucinate, they can produce incorrect but highly convincing information that finds its way into patient records, coding, or treatment decisions.

About a quarter of providers and administrators ranked patient safety as their top concern surrounding AI in healthcare. Among administrators at larger health systems with over 25,000 employees, 57% ranked data breaches as a top-two concern, nearly double the overall average of 30%.

"Shadow AI may be the biggest data exfiltration risk we've ever faced because it doesn't look like an attack — it looks like productivity. When your organization's data enters an external AI platform, it's no longer under your control. Shadow AI doesn't just leak data; it donates it to someone else's model. Once uploaded, it cannot be retrieved or deleted."

The Root Cause: Unmet Need in a Resource-Constrained Industry

Fanning argued that shadow AI is fundamentally a symptom of institutional failure, not individual recklessness. "Shadow technology at its core is an unmet need," he said. "The cause really is that we've been limited on technology budgets for years. There's a lot of technical debt underneath. It's pretty complicated to implement new things. They're really just trying to keep the lights on in most of these organizations."

This framing is borne out by the survey data. Workers aren't seeking to undermine their employers — they're seeking relief from crushing administrative burdens in an industry that has chronically underinvested in technology infrastructure.

2026: The Year of Governance

Industry experts have declared 2026 "the year of governance." Health system C-suites are playing catch-up to clinicians who have rapidly adopted generative AI applications, and are now being forced to rethink AI governance models and implement more formalized organization-wide frameworks to ensure responsible use — including proper training and appropriate guardrails to maintain compliance.

Jessica Lamb, a partner at McKinsey focused on healthcare, told Healthcare Brew that as sanctioned adoption has increased, shadow AI has become less of a problem, with many organizations now having clear policies in place. "At this point, most organizations have some sort of enterprise large language model that they are comfortable with, and have been a bit more clear about the guardrails of what you can and can't do," she said.

The most forward-thinking organizations are beginning to explore "AI safe zones" — controlled environments where providers and administrative staff can safely experiment with approved AI tools and datasets — as a proactive strategy to contain shadow usage ahead of emerging state-level regulations.

Experts broadly recommend that rather than blocking AI outright, organizations should establish visibility frameworks that identify when and where employees are using AI tools, detect large or unusual data uploads, and educate staff on safe prompting techniques that minimize exposure. Executives, they say, "must treat AI governance as a core business initiative."
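
What a minimal visibility check might look like in practice: the sketch below scans web-proxy logs for traffic to consumer AI endpoints and flags unusually large uploads. The log format, the domain list, and the 5 MB threshold are assumptions for illustration, not a vetted data-loss-prevention policy.

```python
"""Minimal sketch of a shadow-AI visibility check over web-proxy logs.

Assumptions (not from the original article): logs are CSV with columns
timestamp,user,host,bytes_sent; the domain list and the 5 MB threshold
are illustrative placeholders, not a vetted DLP policy.
"""
import csv
from collections import defaultdict

# Consumer generative-AI endpoints to watch (illustrative, not exhaustive).
AI_DOMAINS = {"chatgpt.com", "chat.openai.com", "claude.ai", "gemini.google.com"}
LARGE_UPLOAD_BYTES = 5 * 1024 * 1024  # flag uploads over ~5 MB

def scan_proxy_log(path: str):
    """Return per-user counts of AI requests plus any large-upload events."""
    ai_requests = defaultdict(int)
    large_uploads = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            host = row["host"].lower()
            if any(host == d or host.endswith("." + d) for d in AI_DOMAINS):
                ai_requests[row["user"]] += 1
                if int(row["bytes_sent"]) >= LARGE_UPLOAD_BYTES:
                    large_uploads.append(row)
    return ai_requests, large_uploads

if __name__ == "__main__":
    counts, uploads = scan_proxy_log("proxy_log.csv")
    for user, n in sorted(counts.items(), key=lambda kv: -kv[1]):
        print(f"{user}: {n} requests to consumer AI endpoints")
    for event in uploads:
        print("Large upload flagged:", event)
```

The point of a check like this is visibility rather than blocking: it surfaces who is reaching consumer AI tools and when, so governance teams can follow up with training or approved alternatives instead of discovering the usage after a breach.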

Tyrrell recommends that IT teams perform regular audits of browser extensions, integrations, and applications, especially anything capable of processing data where protected health information (PHI) may be exposed. Employees should also understand that IT approves software for specific use cases, not as a blanket permission.
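
As a concrete illustration, the hedged sketch below walks one Chrome profile's extensions folder and checks what it finds against an allowlist. The path, the APPROVED_EXTENSIONS allowlist, and the set of "risky" permissions are hypothetical placeholders, not tooling the article or Tyrrell prescribes.

```python
"""Minimal sketch of a browser-extension audit for one Chrome profile.

Assumptions: the extensions path below matches the OS and profile in use,
and APPROVED_EXTENSIONS is a hypothetical allowlist maintained by IT.
Manifest "name" fields that are __MSG_ placeholders are reported as-is.
"""
import json
from pathlib import Path

# Illustrative path for a Linux Chrome profile; adjust per OS and profile.
EXTENSIONS_DIR = Path.home() / ".config/google-chrome/Default/Extensions"
APPROVED_EXTENSIONS = {"Company SSO Helper", "Approved PDF Viewer"}  # hypothetical
RISKY_PERMISSIONS = {"clipboardRead", "tabs", "webRequest", "<all_urls>"}

def audit_extensions(ext_dir: Path):
    """Yield (name, version, risky permissions, approved?) per installed extension."""
    for manifest_path in ext_dir.glob("*/*/manifest.json"):
        data = json.loads(manifest_path.read_text(encoding="utf-8"))
        name = data.get("name", "unknown")
        perms = {p for p in data.get("permissions", []) if isinstance(p, str)}
        perms |= {p for p in data.get("host_permissions", []) if isinstance(p, str)}
        yield (
            name,
            data.get("version", "?"),
            sorted(perms & RISKY_PERMISSIONS),
            name in APPROVED_EXTENSIONS,
        )

if __name__ == "__main__":
    for name, version, risky, approved in audit_extensions(EXTENSIONS_DIR):
        status = "approved" if approved else "NOT on allowlist"
        print(f"{name} {version}: {status}; risky permissions: {risky or 'none'}")
```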

As more tech companies — including Anthropic and OpenAI — roll out tools tailored specifically to healthcare providers and patients, Tyrrell cautioned that even well-intentioned new entrants have the potential to introduce "dramatic confusion in the landscape."

"There are just so many things to keep track of," he said. "It's not just, 'We got a license to an approved tool; it went through sourcing and procurement, it seems safe.' You also have to think about: What is the end use? What is the role? Who is the person that's going to be using this tool?"

※ This article is based on Patrick Kulp's original report "'Shadow AI' continues to lurk in healthcare settings," published by Healthcare Brew on February 19, 2026, supplemented with additional research from Healthcare Dive, TechTarget, Wolters Kluwer Health, Fortified Health Security, IBM, and Security Magazine. The original article is available at healthcarebrew.com.