Recommendations include regulatory frameworks for those designing AI and for human interpretability.
Leading U.K. medical experts grappled with how to successfully apply AI in the health care sector, with various voices raising concerns over bias and data privacy.
At a Westminster Forum policy conference, speakers agreed that AI should be used as a tool to complement clinicians but warned that outstanding issues and oversight gaps need to be addressed.
Johan Ordish, head of software and AI at the Medicines and Healthcare products Regulatory Agency (MHRA), said that digital health technologies, including AI, are “not the answer but part of the answer.”
They might be part of an answer, but they also bring new challenges, he said, adding that the sector needs to address the problems AI presents while also seizing the opportunities it provides.
Ordish stressed the need to recognize data problems as safety problems, and called for clarification on how broadly medical device requirements apply to software and AI.
He also said that evidence standards for AI and human interpretability should be addressed in any potential deployment.
“There is an opportunity to get things right – AI is necessary to address key challenges our health system is facing but MHRA can’t do this alone,” he said, adding that getting AI right requires collaboration with academics and stakeholders.
Those challenges include a “considerable” backlog: according to U.K. Government figures, 300,000 people have been waiting longer than a year for care, up from just 1,600 before the pandemic. Moreover, the number of people waiting for elective care in England now stands at six million, up from 4.4 million pre-pandemic.
Lack of regulatory clarity and patient education
AI deployments in the U.K. medical sector remain sparse, with Professor Adnan Tufail, a consultant ophthalmic surgeon at Moorfields Eye Hospital, suggesting that its use in the National Health Service (NHS) has yet to be fully realized.
One use case discussed during the event was deploying AI scanning systems to detect various cancers during screening.
Hologic General Manager Tim Simpson said that AI has improved breast cancer detection and increased capacity in cervical cancer screening via digital cytology.
“With COVID backlogs, we need to consider using all the tools that are available,” Simpson said, adding that deployments need to be used alongside clinicians in a “side-by-side approach.”
He added that patients also need to accept the use of new technologies, making it necessary to educate the public that such deployments can speed up diagnoses.
But Simpson also warned of a lack of regulatory clarity around deployments: “This is a barrier that needs addressing; we still need further consensus and need that relatively quickly.”
Data practices and the U.K. as an AI ‘proving ground’
To create a responsible AI innovation ecosystem in health care, data deficits and algorithmic bias were among the challenges that speakers said needed addressing.
“We need to think about data carefully when we think about the role that responsible and trustworthy AI might play in clinical medicine,” said Dr. David Leslie, ethics theme lead at The Alan Turing Institute. “We can’t simply trust that data quality will arise from clinical practice.”
Leslie warned that deficient methodological interoperability leads to uneven data quality across clinical environments, labs and health systems.
He also suggested that poor or variable sensor quality could generate measurement inaccuracies.
Leslie was one of several speakers who described the U.K. as a “proving ground” for AI deployments in health care.
He said the U.K. has “proven itself to be a pacesetter in terms of setting the right tone.”
Going forward, he suggested a “bottom-up approach” to setting best practices and regulatory frameworks.
Bias frameworks
Biases stemming from AI were another major talking point. Mavis Machirori, a senior researcher at the Ada Lovelace Institute, suggested requiring AI system designers to commit to a code of conduct if they want to deploy in the health care space, noting that health care professionals make similar commitments in order to practice.
Bristows partner Alex Denoon said such a plan was “beyond incomparable.”
He argued that those who design and write the code for AI would need an entire regulatory framework to govern their work.
Machirori responded by saying diversity was essential. “[I]f this is not changed in the practice of health care, then AI is just making things worse,” she said, adding, “saving costs doesn’t always translate into saving patients.”
Dr. Nicola Byrne, the U.K.’s national data guardian for health and social care, went on to recommend integrating humanity into AI health care to try to alleviate some of these issues.
COVID impact: Pandemic ‘muted the dinosaurs’
Dr. Richard Roope, a primary care adviser to the charity Cancer Research U.K. and clinical adviser to the Royal College of General Practitioners, said that GPs were not afraid of AI.
“GPs don’t feel threatened by AI, it’s there to assist us. If the gain is sufficient, we’re willing to put up with the pain,” he said.
Roope went on to say that the pandemic “muted the dinosaurs,” or tech-resistant clinicians. “The will to change happened at pace,” he said, adding that changes that would have taken years instead took weeks.
As for reluctance to use AI in health care, Ankit Modi from Qure.ai said that professionals will “always perform better” when working with AI than either AI or humans would on their own.
“AI in health care is already here and deployed,” he concluded.
Written by Ben Wodecki and republished with permission from AI Business.