Deep trust issues remain with AI, even as it becomes more widespread in clinical settings



Health system decision-makers see artificial intelligence as a new fact of life, even if many are skeptical and even fearful of what it could mean for safe care delivery, a new survey from Intel shows.

Intel and Convergys Analytics asked 200 healthcare leaders in the U.S. for their thoughts on AI as it continues to evolve, reshape processes and transform decision-making in hospitals, from predicting readmissions to flagging potential safety issues.

Most think machine learning has big potential to drive quality improvements and save money through increased efficiency, but many also harbor worries that it could also pose risks to patient safety.


Perhaps surprisingly, the survey shows that health systems are using AI in clinical settings far more than for operational or financial applications: 77 percent of respondents, compared with 41 percent and 26 percent, respectively.

And most think its benefits are clear: 91 percent say AI-enabled predictive analytics will enable more timely interventions, and 88 percent say AI will improve care.

But more than 33 percent of respondents are concerned about patients' perception of AI, and nearly as many say clinicians share that skepticism; respondents also cite the chance of serious medical errors as the technology's biggest risk.

It's clear, however, that AI is here to stay: more than half of respondents expect widespread adoption within the next five years.

So the Intel report suggests health systems do a better job of demystifying the technology: focusing first on specific use cases that don't impact patient safety, to make clinicians more comfortable with it, and offering regulatory feedback to policymakers.

Twitter: @MikeMiliardHITN
Email the writer: mike.miliard@himssmedia.com