In 2020, lung cancer, the foremost cause of global cancer-related deaths, took 1.7 million lives, surpassing fatalities from the next three deadliest cancers combined. Early detection significantly improves the five-year survival rate, from about 10 per cent when the disease is found at an advanced stage to around 70 per cent when it is caught early. However, lung cancer is often stigmatised as self-inflicted, hindering open discussion. The rising incidence of lung cancer in non-smokers underscores the need for improved screening for all. This is where promising new AI tools such as MIT’s Sybil come in, aiming to detect lung cancer in its early stages, when it is more treatable.
Low-dose computed tomography (LDCT) scans are the primary method for lung cancer screening. Sybil takes this process beyond traditional screening, predicting a patient’s risk of developing lung cancer within six years without the need for radiologist assistance. This advancement in AI technology holds promise for proactive and personalised healthcare in lung cancer prevention and treatment.
Sybil is just one of many examples of healthcare AI demonstrating success in applications ranging from diagnostics to personalised medicine. AI is also extending its impact beyond diagnosis. Penn Medicine’s AI chatbot, Penny, assists cancer patients by providing guidance and support through text messages, and has been associated with a 70 per cent medication adherence rate.
Despite these advancements, concerns loom over the biases and pitfalls in AI-driven cancer detection. For instance, studies reveal that AI systems designed for skin cancer diagnosis exhibit significant racial bias, being less accurate for individuals with darker skin tones. The regulatory frameworks currently in place struggle to keep pace with the dynamic nature of AI, prompting the need for a reimagined rule book. What are the risks that call for immediate regulation?
Health equity and AI
AI’s ability to analyse vast amounts of data, coupled with machine learning, positions it as a catalyst for emerging fields like personalised medicine. “AI, particularly chatbots, can increase access, improve patient experiences, and even save time and money,” says Jessica Roberts, the Director of the Health Law and Policy Institute at the University of Houston Law Center. However, she poses a crucial question: How can we ensure equitable distribution of these benefits without exacerbating existing healthcare disparities?
Inequity in healthcare access and outcomes predates AI. Despite AI’s potential for positive change, there is a simultaneous risk of unintentionally perpetuating existing inequalities. Roberts stresses the necessity of human supervision to prevent harmful outcomes, such as those arising from biased training data. “The accuracy of AI-generated assessments is paramount in medical contexts,” notes Roberts.
As society increasingly relies on AI for medical assessments, she emphasises the need to ensure that outputs are accurate and free from biases. Roberts succinctly captures a fundamental concept: “It’s the idea of ‘garbage in, garbage out’ — poor quality inputs generate poor quality outputs,” highlighting the potential reinforcement of disparities in healthcare.
Can regulation address AI bias?
To maintain ethical standards, AI should, at most, be integrated as decision support for human experts. Roberts raises questions about whether existing anti-discrimination laws adequately address unintended biases in AI. She advocates for laws that explicitly cover both intentional and unintentional discrimination by AI in healthcare. The way generative AI systems are trained could also create risks around privacy, trust, safety, interpretability, bias, misuse, and over-reliance. Among the challenges of generative AI are that these systems could change the way virtual assistants interact with patients, and that there is no umbrella legislative framework governing the space.
A potential way to address these issues is “Big Data Affirmative Action”, says Roberts. This involves using a second corrective algorithm to address discriminatory outcomes in an initial algorithm. For example, it could detect disparities in how physicians diagnose cardiovascular disease in women. Once discrimination rates are identified, the corrective algorithm can be used to rectify the inaccuracies in the initial algorithm’s results.
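To make the idea concrete, the sketch below is a hypothetical, simplified illustration of such a two-stage approach, not Roberts’ specific proposal: a first model’s risk scores are audited for a disparity in missed diagnoses between two groups, and a corrective second pass then sets per-group decision thresholds to bring each group’s miss rate down to a common target. All data, group labels and numbers are simulated purely for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated outputs of a hypothetical initial diagnostic model:
# a risk score per patient, the true disease status, and a group flag
# (e.g. 0 = men, 1 = women). The -0.2 term builds in a bias against group 1.
n = 10_000
group = rng.integers(0, 2, size=n)
has_disease = rng.random(n) < 0.15
scores = np.clip(0.45 * has_disease + rng.normal(0.35, 0.15, n) - 0.2 * group, 0, 1)

def false_negative_rate(scores, labels, threshold):
    """Share of true cases the model misses at a given decision threshold."""
    missed = (scores < threshold) & labels
    return missed.sum() / labels.sum()

# Stage 1 (audit): measure the disparity produced by a single shared threshold.
shared_threshold = 0.5
for g in (0, 1):
    m = group == g
    fnr = false_negative_rate(scores[m], has_disease[m], shared_threshold)
    print(f"group {g}: miss rate at shared threshold = {fnr:.1%}")

# Stage 2 (correct): instead of retraining the first model, choose a per-group
# threshold that brings each group's miss rate down to a common target.
target_fnr = 0.05
for g in (0, 1):
    m = group == g
    candidates = np.linspace(0.0, 1.0, 201)
    acceptable = [t for t in candidates
                  if false_negative_rate(scores[m], has_disease[m], t) <= target_fnr]
    corrected = max(acceptable)  # highest threshold still meeting the target
    print(f"group {g}: corrected threshold = {corrected:.2f}")
```

In practice, any such corrective layer would need clinical and legal validation; the point here is only that disparities can be measured and adjusted for after the fact, without discarding the original model.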
“Addressing bias in AI requires a comprehensive approach. As the healthcare industry embraces technological advancements, we must ensure that AI benefits all without perpetuating disparities,” Roberts adds.
With some hospitals contemplating the adoption of generative AI systems and others already immersed in their development, the need for robust guidelines is more pronounced than ever, says Dr. Thurayya Arayssi, Professor of Clinical Medicine and Vice Dean for Academic and Curricular Affairs at Weill Cornell Medicine-Qatar (WCM-Q).
“Even though generative AI will provide support to the healthcare sector, the applications pose legal and ethical challenges. For instance, when ChatGPT generates texts and information, who owns that information? Or, if a clinician relies on information from generative AI and something goes wrong, who will be held responsible?”
Regulatory challenges: a global perspective
Healthcare AI grapples with outdated regulatory models. Current frameworks clash with AI’s adaptable nature, and as regulators race to catch up, the challenge lies in creating a regulatory environment that fosters innovation without compromising patient safety. To prevent the technology from harming the public, the world must agree on a regulatory baseline, on where legislation is possible, and on how it can be implemented. However, regulating AI in healthcare faces global challenges. The European Union recently passed the Artificial Intelligence (AI) Act, which categorises AI systems by risk and focuses on strengthening rules around data quality, transparency, human oversight and accountability. It also aims to address ethical questions in sectors ranging from healthcare and education to finance and energy. In the US, the FDA introduced an action plan that integrates AI and machine learning-based software into existing medical device frameworks. More recently, the Biden administration issued an executive order encouraging AI developers to disclose their safety test results and other key information to the federal government.
The Dubai Health Authority’s proactive regulatory and ethical requirements for AI solutions in healthcare aim to enhance collaboration between government health agencies, the private sector, and the scientific community. The AI policy was formulated in line with best clinical practices and emerging research, says Dr. Mahira Abdel Rahman, Information and Smart Health Policy Officer at the Dubai Health Authority. The policy mandates that all AI solutions for healthcare comply with the international and federal information laws, regulations and guidelines of the UAE and Dubai, particularly regarding human values, patient privacy, people’s rights and professional ethics, in both the short and long term. It also emphasises that AI healthcare solutions must be safe, secure, and subject to supervision and monitoring by professional users, to ensure the technology empowers the health sector and patients. “Through this policy, we seek to benefit from the capabilities of artificial intelligence to ensure smart management, work with high efficiency and enhance productivity in the health field,” she adds.
As the regulatory landscape evolves, legal teams and industry players must navigate complexities. Pioneering regulation introduces short-term compliance burdens but can offer clarity, reduce litigation risks, and instil confidence in the technology. Regulating AI in healthcare presents a complex challenge that demands a delicate balance between encouraging innovation and ensuring patient safety.