(27 Jan 2026) A new survey from Wolters Kluwer Health has found that unauthorized AI tools and applications, referred to as shadow AI, are in use across hospitals and health systems in the United States, including for direct patient care.
The survey indicates that this practice may put patient safety, data privacy, and regulatory compliance at risk. According to the findings, 40% of respondents reported encountering an unauthorized AI tool within their organization, and nearly 20% reported having used one themselves.
Survey responses show that doctors and administrators turn to AI tools to speed up their work and improve workflow efficiency, and that when approved alternatives are not available, they reach for unapproved ones. The survey frames shadow AI as a governance issue rather than a purely technical concern and notes potential implications for patient safety. The findings point to gaps in AI policies, the need for clearer compliance guidelines, and the importance of limiting clinical use to validated, secure, enterprise-ready AI tools.
Key healthcare shadow AI survey findings:
- Shadow AI is widespread in health systems: 40% of healthcare professionals reported encountering unauthorized AI tools in their workplace, and nearly 20% reported using them. Half of respondents cited faster workflows as a primary reason for use, while providers ranked curiosity and experimentation slightly higher than better functionality. One in 10 respondents reported using an unauthorized AI tool for direct patient care.
- Gaps in policy development and awareness: Administrators were three times more likely than providers to be involved in healthcare AI policy development (30% vs. 9%), suggesting that policy ownership is concentrated in hospital administrative roles. Awareness also differed: 29% of providers reported awareness of main policies, compared with 17% of administrators.
- Widespread AI use and expectations of impact: More than half of healthcare professionals reported frequently using or relying on AI tools in their work. Nearly 90% agreed or strongly agreed that AI will significantly improve healthcare within the next five years. Data analysis was the most common use case for both providers (60%) and administrators (78%).
- Patient safety as a leading concern: Both providers (25%) and administrators (26%) ranked patient safety as their top AI-related concern. Administrators ranked privacy and data breaches second, while providers ranked inaccurate outputs second.
- Concerns about health data security: 23% of healthcare professionals reported concerns about privacy and security risks associated with AI in healthcare, including data breaches and unauthorized access, and pointed to the need for protective measures.
The survey was conducted online by CITE Research, Inc. on behalf of Wolters Kluwer Health in December 2025 among 518 healthcare professionals, evenly split between providers and administrators.