
Using AI to address oral healthcare inequities requires wide-ranging collaboration

Improvements to data sources and regulation are needed to promote equitable use of artificial intelligence in oral healthcare. (Image: PeopleImages.com - Yuri A/Shutterstock)

Fri. 9 August 2024


NASHVILLE, Tenn., US: Oral disease disproportionately affects low-income and minority populations. The use of artificial intelligence (AI) is increasing in oral healthcare and shows great promise in revolutionising dentistry. However, little research has examined how it might be applied to address inequities in oral healthcare and research. A new review article by US researchers explores the inequities and biases in oral healthcare and highlights the potential of AI in addressing these challenges, specifically by augmenting oral health practitioners’ capabilities to achieve personalised care that is unbiased and transparent.

Inequities in oral healthcare arise from a complex interplay of factors, including affordability, accessibility, availability and discrimination. Additionally, oral health services are often treated as separate from general healthcare and, in many countries, relegated to the private sector, further limiting accessibility and affordability.

AI presents significant potential to address inequities in oral healthcare, but must be applied responsibly, and this hinges on anticipating and managing risks, according to the researchers. The Artificial Intelligence Risk Management Framework developed by the US National Institute of Standards and Technology outlines principles for developing transparent and trustworthy AI systems, including mitigating harmful biases that can exacerbate inequities if not addressed during training of AI models. Another such framework is the National Academy of Medicine’s Artificial Intelligence Code of Conduct, which further supports the ethical use of AI in healthcare to ensure that AI technologies are applied safely and fairly to improve patient outcomes.

As AI models become more complex, their black-box nature makes it difficult to understand how they arrive at predictions, and such understanding is crucial for trust in healthcare applications. To address this, the researchers recommended incorporating explainable AI (XAI), which seeks to make these models more transparent by providing explanations of why and how decisions are made. XAI is key in helping users to understand both the impact the model’s creators intended and the potential for bias within the model. An example is a class activation map, which highlights the regions of an image most significant to the predictions made by a type of deep learning algorithm called a convolutional neural network. By incorporating explainability into AI, oral healthcare can move towards more personalised, equitable care, reinforcing trust among providers and patients.
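For readers curious how such a map is produced in practice, the sketch below shows one common variant (Grad-CAM) implemented with PyTorch and torchvision. It is not code from the study: the pre-trained network and the random input tensor are stand-ins for a model trained on dental imagery and a preprocessed radiograph.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# A standard pre-trained CNN; in an oral-health setting this would be a model
# trained on dental imagery (assumed here, not provided by the study).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

activations, gradients = {}, {}

def save_activation(module, inputs, output):
    activations["feat"] = output.detach()

def save_gradient(module, grad_input, grad_output):
    gradients["feat"] = grad_output[0].detach()

# Hook the last convolutional block to capture its feature maps and their gradients.
model.layer4.register_forward_hook(save_activation)
model.layer4.register_full_backward_hook(save_gradient)

image = torch.randn(1, 3, 224, 224)  # placeholder for a preprocessed radiograph
scores = model(image)
class_idx = scores.argmax(dim=1).item()
scores[0, class_idx].backward()      # gradient of the predicted class score

# Weight each feature map by its average gradient, combine, and normalise to [0, 1].
weights = gradients["feat"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((weights * activations["feat"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)

print(cam.shape)  # (1, 1, 224, 224): a heat map of the regions driving the prediction
```

The resulting heat map can be overlaid on the original image so that a clinician can see which regions most influenced the model’s prediction, which is the kind of transparency the researchers describe.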

Recent government initiatives, such as the US Artificial Intelligence Safety Institute Consortium and the National Institutes of Health’s Bridge to Artificial Intelligence programme, have been launched to support responsible AI use by promoting ethically sourced and accessible data set creation. However, a major challenge to responsible AI use in oral healthcare is the lack of universally adopted regulatory and ethical frameworks. The responsible implementation of AI requires collaborative efforts across societal, infrastructural and regulatory domains. By integrating AI responsibly, stakeholders can break down barriers and promote comprehensive patient care globally, potentially reducing health disparities and improving equity in oral healthcare.

The study considered bias in its full scope, referring to any systematic preference, prejudice or unfair treatment based on race, ethnicity, socio-economic status, sex and sexual orientation, among others. Bias could include data that does not fairly represent or serve various ethnicities, language backgrounds, socio-economic situations or other demographics that are already underserved. The researchers pointed out that the majority of applications in oral healthcare that utilise AI rely on data from merely seven countries, exacerbating the potential for bias. The AI Risk Management Framework extends consideration of bias to systemic, computational, statistical and human cognitive biases.

Addressing oral healthcare inequities requires more than making dental care available to underserved communities. It involves addressing the utilisation of, provision of and access to services, where access encompasses financial affordability, physical accessibility and acceptability.

The review, titled “Responsible artificial intelligence for addressing equity in oral healthcare”, was published online on 18 July 2024 in Frontiers in Oral Health.
