Can Artificial Intelligence Help Make Moral Medical Decisions?
Before COVID-19 forced doctors to decide who gets the ventilator, Professors Walter Sinnott-Armstrong, Jana Schaich Borg, and Vincent Conitzer were asking a related question: Who gets the kidney?
Since 2015, Sinnott-Armstrong and Schaich Borg, who co-direct the Kenan Institute for Ethics’ MADLAB, have worked with Conitzer, postdoc Lok Chan, and students and other colleagues at Duke and the University of Maryland to investigate moral attitudes surrounding kidney transplants, where supply rarely meets demand. Currently, decisions about who receives a kidney are based on medical compatibility, age, health, organ quality, and time on the waiting list. However, the research team found that the public generally thinks other factors should also be considered, such as the number of dependents a patient has and whether unhealthy behaviors caused the kidney disease.
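To make concrete how several such factors might be combined into a single allocation decision, here is a minimal sketch in Python. The factor names, the weights, and the weighted-sum rule are all illustrative assumptions; actual transplant allocation policy is far more complex and is not described in this article.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    compatibility: float   # 0..1, medical compatibility with the donor kidney
    age_score: float       # 0..1, higher for candidates expected to benefit longer
    health_score: float    # 0..1, overall health
    years_waiting: float   # time on the waiting list

# Hypothetical weights for illustration only; real allocation rules
# are not a simple weighted sum.
WEIGHTS = {
    "compatibility": 0.4,
    "age_score": 0.2,
    "health_score": 0.2,
    "years_waiting": 0.2,
}

def priority(c: Candidate, max_wait: float = 10.0) -> float:
    """Combine normalized factors into a single priority score."""
    wait_norm = min(c.years_waiting / max_wait, 1.0)
    return (WEIGHTS["compatibility"] * c.compatibility
            + WEIGHTS["age_score"] * c.age_score
            + WEIGHTS["health_score"] * c.health_score
            + WEIGHTS["years_waiting"] * wait_norm)

candidates = [
    Candidate("A", 0.9, 0.5, 0.8, 2.0),
    Candidate("B", 0.7, 0.8, 0.6, 6.0),
]
print(max(candidates, key=priority).name)  # candidate with the highest score
```

Debates like the one the team studies amount to asking which factors belong in such a rule at all, and how heavily each should weigh.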
Should public beliefs influence how kidneys are allocated? The research team argues that these opinions should matter, both because medical experts can learn from the public and because public hospitals are paid for with public funds. The general public therefore has a stake, and should have a say, in these decisions.
As the pandemic made clear, surgeons often must make decisions quickly, and those decisions can be affected by ignorance, emotion, and bias. If a donor is killed in an auto accident, for example, the surgeon might need to decide who should receive the transplant without much time to review all cases thoroughly. At times like these, a clinician’s moral judgment might not represent the values of the hospital or the public.
Sinnott-Armstrong and colleagues believe that computer technology in the form of Moral Artificial Intelligence (AI) could lead to fairer decision-making, reducing bias and the racial and economic injustice it produces in decisions about scarce medical resources. They recently received funding from Duke University for a collaboratory to expand their work in Moral AI with colleagues at Duke and Duke Kunshan University by researching moral judgments in these sorts of medical situations. The collaboratory aims to develop Moral AI that could help doctors and hospital personnel make better judgments about who receives scarce resources.
While acknowledging that AI can be used for good or for harm, the team believes the technology can be harnessed to reduce the kinds of bias that keep human decisions from aligning with fundamental social values. Surveying moral attitudes is the first step toward programming AI that reflects those values.
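The article does not detail the team’s modeling methods, but one standard way to turn pairwise survey choices into a value model is a Bradley-Terry-style logistic model fit to feature differences. The sketch below is an illustration under assumptions: the features, the toy data, and the use of scikit-learn are hypothetical, not a description of the team’s actual pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each survey item: a respondent sees two hypothetical patients, described
# by features, and chooses who should receive the kidney. Features here are
# illustrative: [compatibility, number of dependents, disease caused by behavior]
patient_a = np.array([[0.9, 0, 1],
                      [0.6, 2, 0],
                      [0.8, 1, 1],
                      [0.5, 3, 0]], dtype=float)
patient_b = np.array([[0.7, 2, 0],
                      [0.9, 0, 1],
                      [0.6, 0, 0],
                      [0.8, 0, 1]], dtype=float)
chose_a = np.array([0, 1, 1, 1])  # 1 if the respondent picked patient A

# Bradley-Terry-style model: P(choose A) = sigmoid(w . (x_A - x_B)).
# Fitting logistic regression on feature differences recovers weights w
# summarizing how strongly each factor drives respondents' judgments.
X = patient_a - patient_b
model = LogisticRegression(fit_intercept=False).fit(X, chose_a)
print(dict(zip(["compatibility", "dependents", "behavior_caused"],
               model.coef_[0].round(2))))
```

The learned weights give a compact, inspectable summary of the surveyed moral attitudes, which is what would let an AI system reflect those values rather than a single clinician’s intuitions.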
Sinnott-Armstrong’s team quickly realized that their studies also applied to the fair distribution of scarce ventilators and vaccines in hospitals and communities overwhelmed by COVID-19. They received grants from Oxford University and the World Health Organization and collaborated with researchers at Oxford to investigate these questions in the midst of the pandemic. After completing and analyzing this research on ventilators and vaccines, they plan to assess how well their methods work across medical contexts.
Additionally, the team plans to use the collaboratory to pursue new cross-cultural studies beyond the US and the UK. Through a partnership with colleagues Daniel Lim and Daniel Weissglass at Duke Kunshan, they will be able to survey moral attitudes in China, and they are also pursuing partnerships in Europe and South America. They will first model individual preferences and moral judgments within diverse cultures, and then work toward understanding how divergent views within a society should be aggregated to inform social policies and regulations.
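As a toy illustration of that aggregation step, the sketch below applies one classic social-choice rule, the Borda count, to individual rankings. The rule, the candidate names, and the rankings are assumptions chosen for illustration; the article does not say which aggregation methods the team will study.

```python
from collections import defaultdict

# Suppose each respondent's learned model ranks the same set of candidates.
# One simple aggregation rule (an assumption here, not the team's published
# method) is the Borda count: candidates earn points by rank position.
rankings = [
    ["A", "B", "C"],   # respondent 1's ranking, best first
    ["B", "A", "C"],   # respondent 2
    ["A", "C", "B"],   # respondent 3
]

def borda(rankings: list[list[str]]) -> dict[str, int]:
    """Aggregate individual rankings into group scores via Borda count."""
    scores: dict[str, int] = defaultdict(int)
    n = len(rankings[0])
    for ranking in rankings:
        for position, candidate in enumerate(ranking):
            scores[candidate] += n - 1 - position  # top rank earns n-1 points
    return dict(scores)

print(sorted(borda(rankings).items(), key=lambda kv: -kv[1]))
# [('A', 5), ('B', 3), ('C', 1)] -> A wins under this aggregation rule
```

Different aggregation rules can crown different winners from the same individual views, which is precisely why the team treats the choice of aggregation method as a research question in its own right.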
Although the team believes AI has the potential to produce decisions that are fairer and more in line with public values, they recognize that public resistance is a barrier to adopting such a tool. Additional research addresses this issue by investigating which kinds of presentations increase social acceptance of, and enthusiasm for, the tool.
By collaborating across disciplines such as philosophy and computer science, this research team hopes to decrease injustice and better represent the moral values of society in emerging technologies, while helping their colleagues in medicine produce more ethical outcomes.