Why establishing AI safeguards to reduce risks in the prior authorization process is important

As a family physician, I have navigated the burdensome process associated with prior authorization. Prior authorization denials, which often stem from varying health plan policy requirements and submission errors, contribute to care delays. As a result, many stakeholders seek advanced automation technologies to address the shortcomings in a system plagued by outdated and overly complex prior authorization processes.

However, leaning too heavily on automation raises concerns when it pertains to reviewing, approving, or denying prior authorization requests. Every patient is unique, and we learn during our extensive medical training how to adjust treatments to account for these nuances. Computer software cannot fully understand the intricacies of medicine, and it is imperative that physician oversight is included when utilizing automation.

Responsible automation and mitigating bias in AI algorithms

Many industry professionals are rapidly adopting artificial intelligence (AI) solutions and machine learning to expedite the delivery of high-quality care and optimize administrative workflows. While these cutting-edge tools hold immense potential to enhance organizational efficiency, streamline operations, and revolutionize patient outcomes, it is vital to ensure that AI-driven technology is not only accurate but adequately governed.

As physicians and health plans increasingly integrate AI into their operations, a new concern emerges regarding potential overreliance on AI for crucial decision-making, particularly in optimizing legacy prior authorization procedures.

To manage this risk, AI must operate under clinical oversight, though physicians need not duplicate all of the AI's work. Health plans must not rely solely on AI; physicians should verify decisions for accuracy, particularly denials. This combined approach preserves both efficiency and decision accuracy.

A national endeavor

Responding to an increase in improper prior authorization denials, the federal government put forth a series of proposals aimed at compelling health plans in programs such as Medicaid, Medicare Advantage, and the Affordable Care Act marketplaces to accelerate their prior authorization (PA) decisions and furnish more comprehensive explanations for any rejections. Set to take effect in 2026, the proposals would require these programs to address standard PA requests within seven days, a reduction from the existing 14-day period, and to answer urgent requests within 72 hours.

The need for such changes has prompted many national agencies to take action, especially regarding AI’s role in PA approvals and denials. The American Medical Association (AMA) adopted a new policy emphasizing the need for increased AI accountability and underscored the necessity of more rigorous evaluations by medical professionals and clinical experts. Without such considerations and the frameworks to ensure the responsible and effective integration of healthcare automation technology, patients could suffer adverse consequences.

AI’s primary function should focus on streamlining processes to expedite positive health outcomes and guide physicians toward optimal treatment options, irrespective of the prior authorization decision.

The core principles of responsible AI

The effectiveness of AI depends on the quality of input data, necessitating the recognition of its inherent biases and limitations to ensure responsible usage. Addressing these concerns and following four key considerations enables AI to help physicians deliver high-quality, value-based care and improve patient outcomes.

  1. Transparency: AI-driven decisions must be grounded in clinical data, and transparent practices are vital in minimizing the risk that AI models will recommend inappropriate denials.
  2. Privacy & Security: Safeguarding sensitive patient information necessitates clinical oversight. AI models for PA requests should exclude patient identifiers, relying solely on critical treatment data such as type, date of care, and diagnosis.
  3. Accountability: Developing responsible AI involves a strong partnership between clinical experts and software engineers to guarantee that AI model creation, assessment, and refinement are guided by specialized knowledge in the field.
  4. Inclusiveness & Equity: Patient care is influenced by social determinants, which underscores the importance of ensuring that at-risk patients impacted by such factors are not subject to automatic denial. Aligning AI models with specific health plan policies maintains consistent standards, prevents erroneous care denials, and upholds equity and expert judgment across patient populations.
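The privacy principle above (item 2) can be illustrated with a minimal, hypothetical sketch: an allow-list filter that passes only critical treatment data to an AI model and drops patient identifiers. All field names here are assumptions chosen for illustration, not part of any real PA system or API.

```python
# Illustrative sketch only: filtering a prior authorization request down to
# non-identifying treatment data before it reaches an AI model. Field names
# are hypothetical.

# Fields the model is allowed to see (treatment data only).
ALLOWED_FIELDS = {"service_type", "date_of_care", "diagnosis_code"}

def deidentify_pa_request(request: dict) -> dict:
    """Return a copy of the request containing only allow-listed treatment data."""
    return {k: v for k, v in request.items() if k in ALLOWED_FIELDS}

request = {
    "patient_name": "Jane Doe",           # identifier: excluded
    "member_id": "A123456",               # identifier: excluded
    "service_type": "MRI, lumbar spine",  # treatment data: kept
    "date_of_care": "2023-09-01",         # treatment data: kept
    "diagnosis_code": "M54.5",            # treatment data: kept
}

model_input = deidentify_pa_request(request)
print(model_input)
```

An allow-list (keep only named fields) is generally safer than a deny-list here: any new identifier added to the request later is excluded by default rather than leaked by default.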

A healthier future

The urgency of establishing and embracing ethical and responsible AI in health care is becoming increasingly evident. Its potential extends beyond diagnosis and treatment, promising to vastly improve patient experiences and health outcomes while upholding patient privacy and data security. By championing responsible AI alongside advanced clinical innovation and oversight, the industry is charting a course toward a more patient-centric, precise, and compassionate health care system.

Published On: August 28, 2023 | Categories: AI/ML, National Media, News
About the Author: Mary Krebs, M.D., FAAFP

Dr. Krebs serves as the Medical Director of Primary Care at Cohere Health. She earned her medical degree from the Ohio State University College of Medicine in Columbus and completed a family medicine residency at Miami Valley Hospital in Dayton, Ohio. She also teaches residents and medical students at a family medicine residency program. Previously, Dr. Krebs was in solo practice at a rural federally qualified health center and co-ran Family Practice Associates, an independent rural practice.