Global AI and Trust in Healthcare Research Report 2022: Validation of Models, Explainability and Transparency, & Data Ethics and AI


Dublin, Nov. 23, 2022 (GLOBE NEWSWIRE) -- The "2022 AI and Trust in Healthcare Report" has been added to ResearchAndMarkets.com's offering.

The"AI and Trust in Healthcare"report examines the growing role of AI in healthcare and the underlying factors that can both harm and help build trust by end users of products and services that utilize artificial intelligence.

The report also proposes an intra-industry consortium to address some of the critical areas that are central to patient safety and building an ecosystem of validated, transparent and health equity-oriented models with the potential for beneficial social impact.


Trust is becoming a form of social capital in the healthcare AI arena, and earning it demands a comprehensive approach. A number of flawed algorithms have entered the market and have been found to contain bias and to lack reproducibility or transparency. This damages trust; more must be done to foster safe, validated algorithms that can improve outcomes and health equity and reduce clinical workloads.

Tools and processes have been developed to address bias, and these need to be supported by building diverse data science teams. A number of technological tools and checklists have been developed to address racial and gender bias in algorithms; these can be adapted to healthcare and built upon.
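
As an illustration of what such a check looks like in practice, the minimal Python sketch below (not code from the report) computes a model's selection rate per demographic group and the ratio between them, a common fairness screen known as disparate impact; the column names, data and the 0.8 "four-fifths rule" threshold are illustrative assumptions.

    # Minimal disparate-impact check of the kind bias-audit toolkits run.
    # All data, column names and the 0.8 threshold here are hypothetical.
    import pandas as pd

    def disparate_impact(df: pd.DataFrame, group_col: str, pred_col: str) -> float:
        """Ratio of the lowest group selection rate to the highest."""
        rates = df.groupby(group_col)[pred_col].mean()
        return rates.min() / rates.max()

    # Hypothetical model outputs: 1 = flagged for a care-management program
    preds = pd.DataFrame({
        "group":   ["A", "A", "A", "B", "B", "B", "B"],
        "flagged": [1, 1, 0, 1, 0, 0, 0],
    })

    ratio = disparate_impact(preds, "group", "flagged")
    print(f"Disparate impact ratio: {ratio:.2f}")  # ~0.38 for this toy data
    if ratio < 0.8:  # the common "four-fifths" rule of thumb
        print("Warning: selection rates differ substantially across groups")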

More cooperation across the industry is needed to create processes for Good Algorithmic Practices across use cases and the lifecycle of algorithms. The FDA has fallen behind and does not address the entire spectrum of algorithms. Industry consortia are urgently needed to act as "Consumer Reports" on algorithms and create certification processes across the various stages of the lifecycle.

Over the past several years, AI has become one of the most discussed technologies in society. With the potential to determine who gets what form of medical care and when, the stakes are high if AI algorithms are not deployed with care. We have already seen many algorithms containing racial and gender bias enter the market, and many clinical decision support tools in use today still rest on problematic science.

A review of clinical algorithms currently in use across multiple specialties found a substantial number of cases where race correction was used inappropriately. Earlier this year we discussed additional cases in our podcast episode with Dr. Tania Martin-Mercado, who highlighted the estimated glomerular filtration rate (eGFR) equation used in kidney disease, whose race correction results in African-Americans waiting longer for kidney transplants.
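
To make the stakes concrete, the sketch below (not from the report) applies the 2009 CKD-EPI creatinine equation, which multiplied estimated GFR by 1.159 for Black patients until the race term was retired in 2021; the patient values and the 20 mL/min/1.73 m² referral threshold shown are illustrative assumptions.

    # Worked example of the race coefficient in the 2009 CKD-EPI equation.
    # A 15.9% upward adjustment can keep a patient's eGFR above the commonly
    # used transplant-referral threshold of 20 mL/min/1.73 m^2.
    # Patient values below are hypothetical.

    def ckd_epi_2009(scr_mg_dl: float, age: int, female: bool, black: bool) -> float:
        """2009 CKD-EPI eGFR in mL/min/1.73 m^2 (race term retired in 2021)."""
        kappa = 0.7 if female else 0.9
        alpha = -0.329 if female else -0.411
        egfr = (141
                * min(scr_mg_dl / kappa, 1.0) ** alpha
                * max(scr_mg_dl / kappa, 1.0) ** -1.209
                * 0.993 ** age)
        if female:
            egfr *= 1.018
        if black:
            egfr *= 1.159
        return egfr

    # Same hypothetical patient, scored with and without the race coefficient
    print(f"{ckd_epi_2009(3.5, 50, False, False):.1f}")  # ~19.2: below 20, referred
    print(f"{ckd_epi_2009(3.5, 50, False, True):.1f}")   # ~22.3: above 20, waits longer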

During the first year of the COVID-19 pandemic, hundreds of algorithms were developed to aid in diagnosis through analysis of X-rays and CT scans. One study showed that none of these algorithms were reproducible. The reproducibility crisis in AI in medicine has the potential to undermine trust in AI products among both providers and patients. Princeton University researchers recently held a workshop and released a white paper on the extent of this problem in machine learning, including many examples in medicine.

The "AI and Trust in Healthcare" report also provides an overview of some of the challenges in building AI models for healthcare and medicine, the tools and processes that can be used to address problems such as bias and drift and the steps companies can take to build trust by following both good data science and intentional efforts to build diverse teams capable of addressing the multiple axes of bias.

Finally, solving these problems requires more than the attention of individual companies. The FDA and the broader regulatory environment have fallen behind in addressing the challenges of a rapidly growing, high-stakes technology. The author proposes consortia around the various use cases for AI that would provide a more transparent and scientifically rigorous approach to certifying algorithms once they have been assessed for validation, data governance, bias, explainability and impact on health equity.

In addition to the consortia for AI in healthcare, the analyst examines a recent proposal that calls for using liability insurance in healthcare AI to drive adoption of the highest-quality algorithms. The certification process the consortia would develop could work in tandem with the insurance industry: vetted algorithms would receive lower premiums for completing certification.

Readers of our report will learn about state-of-the-art processes for bias and risk mitigation that draw upon work developed within government and think-tank programs focused on bias and AI. We link these processes to emerging data science work on the complexity of digital health data. This will be of use to both data scientists and executives interested in developing innovative machine learning tools with a reduced risk of doing harm.

Key Topics Covered:

Executive Summary

Introduction

  • Trust: A Social Currency

Chapter 1: Why Trust Matters Now and Ethical Guidelines

Chapter 2: Validation of Models

  • Does the model perform at a scale beyond the original training set?

  • Dimensionality in digital health data

  • Bias mitigation

  • Key Takeaways on Validation and Bias

Chapter 3: Explainability and Transparency

  • Emerging Critiques of Explainable AI

  • Rebuttal to the XAI Critique: ClosedLoop.AI Case Study

  • Key Takeaways on Explainability

Chapter 4: Data Ethics and AI

  • Conclusion: Health Equity and the Value of Third-Party Standards

  • Organizations for Building an Innovative, Trustworthy Ecosystem

  • Trust as an Intangible Asset: Building an Innovative Ecosystem

  • AI Liability Insurance and Consortia

  • Recommendations

About the author

Appendix: Case Studies for Building Responsible AI

  • Case Study 1: Design and Evaluation of AI - Including the Users in Project Design

  • Sepsis Predictive Model Development at UC Health

  • Case Study 2: RPA and Data Integrity

Companies Mentioned

  • ClosedLoop.AI

  • UC Health

For more information about this report visit https://www.researchandmarkets.com/r/cs7ihx

CONTACT: ResearchAndMarkets.com
Laura Wood, Senior Press Manager
press@researchandmarkets.com
For E.S.T. Office Hours Call 1-917-300-0470
For U.S./CAN Toll Free Call 1-800-526-8630
For GMT Office Hours Call +353-1-416-8900