Policy & Advocacy: Oppose Medicine by Algorithm!
Wednesday, March 25, 2020
Late last year, the FDA released a draft guidance document that will loosen the reins on some types of medical software. This development is part of a larger trend in which artificial intelligence (AI) and other technologies take on a more central role in the doctor/patient relationship. It is "evidence-based" medicine, as determined by government regulators.
AI supported by machine learning algorithms is already being integrated into the practice of oncology. The most common application of this technology is the recognition of potentially cancerous lesions in radiology images. But we are rapidly moving beyond this to a point where AI is used to make clinical decisions. IBM has developed Watson for Oncology, a program that uses patient data and national treatment guidelines to guide cancer management. Google and other tech giants are greedy for our health data so they can develop more of these tools.
Technology should absolutely be harnessed to improve medicine and clinical outcomes, but AI cannot replace the doctor/patient relationship. There are obvious ethical questions. First is the lack of transparency. The algorithms, particularly the "deep learning" algorithms currently being used to analyze medical images, are nearly impossible to interpret or explain. As patients, we have a right to know why and how a decision about our health is made; when that decision is made by an algorithm, we are deprived of that right. Further, when a mistake is made, who is accountable: the algorithm, or the doctor? How do we hold an algorithm accountable?
Learn more about this issue and write to the Food and Drug Administration, telling them to ensure that the practice of medicine does not involve using AI and machine learning algorithms to make clinical decisions.