Artificial intelligence has the potential to transform medicine.
It can enable health care professionals to analyze health data quickly and accurately, leading to better identification, treatment, and prevention of a wide range of health issues.
Interventions that combine artificial intelligence with virtual health care, such as telemedicine and digital health, have played a significant role in the response to Covid-19.
The United States Veterans Health Administration, for example, is developing an AI tool to predict Covid-19 outcomes such as length of hospitalization and death, using machine learning to enhance the analysis of chest X-rays for signs of pneumonia and to identify patients likely to develop Covid-19 complications.
The use of AI in health care raises ethical questions that must be addressed to avoid harming patients, creating liability for health care providers, and undermining public confidence in these technologies.
Although algorithmic bias is not unique to predictive artificial intelligence, AI tools can amplify those biases and compound existing inequalities in health care.
Most patients are unaware of the extent to which AI-based health care tools are capable of mining and drawing conclusions from health and non-health data, including sources that patients believe to be confidential, such as data from their electronic health records, genomic data, and chemical and environmental exposure information.
The Health Insurance Portability and Accountability Act, which requires patient consent before certain medical information is disclosed, does not apply to commercial entities that are not health care providers or insurers.
The Americans with Disabilities Act does not prohibit discrimination based on future medical problems, and no law prohibits decision making based on non-genetic predictive data, such as decisions made using predictive analytics and AI.
Given the increasing adoption of AI technologies by health care systems, data governance structures need to evolve to ensure that ethical principles are applied to all clinical, information technology, education, and research efforts.
A data management framework based on the following 10 steps can help health care systems adopt artificial intelligence applications in ways that reduce ethical risks to patients, providers, and payers.
AI developers should exercise sound judgment and remain responsible for the full life cycle of AI algorithms and systems, and for the health care outcomes derived from them, through rigorous testing and calibration, empathy for patients, and a deep understanding of the implications of the recommendations these algorithms generate.
Health care systems should operationalize AI strategy through a Digital Ethics Steering Committee consisting of the Chief Data Officer, Chief Privacy Officer, Chief Information Officer, Chief Risk Officer, and Chief Ethics Officer.
As AI applications in health care evolve, it is important to create a communication strategy that ensures patients understand the key benefits and risks of AI in health care, and that their health care providers can explain them clearly and coherently.
As AI transforms health care, these 10 steps can help health care systems develop a governance framework capable of carrying out enterprise-wide AI initiatives in a manner that reduces ethical risks to patients, enhances public confidence, affirms health equity and inclusiveness, improves patient experiences, drives digital health initiatives, and strengthens the reliability of AI technologies.
They can provide a solid foundation for any health care organization that uses artificial intelligence.
Satish Gattadahalli is Grant Thornton Public Sector's director for digital health and health informatics.
Read the original article "Health care needs Artificial Intelligence Governance - STAT" at https://www.statnews.com/2020/11/03/artificial-intelligence-health-care-ten-steps-to-ethics-based-governance/