As the use of artificial intelligence (AI) in healthcare applications grows, healthcare providers are looking for ways to improve the patient experience with their machine doctors.
Researchers at Penn State and the University of California, Santa Barbara (UCSB) found that people may be less likely to take health advice from an AI doctor when the chatbot knows their name and medical history. On the other hand, patients want to be on a first-name basis with their human doctors.
When the AI doctor used a patient's first name and referred to their medical history in the conversation, study participants were more likely to view the AI health chatbot as intrusive and less likely to heed its medical advice, the researchers found. By contrast, participants expected human doctors to differentiate them from other patients, and were less likely to comply when a human doctor could not recall their information.
The findings offer further evidence that machines walk a fine line when serving as physicians, said S. Shyam Sundar, James P. Jimirro Professor of Media Effects in the Donald P. Bellisario College of Communications and co-director of the Media Effects Research Laboratory at Penn State.
“Machines don’t have the capacity to feel and experience, so when they ask patients how they are feeling, it’s really just data to them,” said Sundar, who is also an affiliate of the Penn State Institute for Computational and Data Sciences (ICDS). “That may be one reason people in the past have resisted medical AI.”
Machines have advantages as medical providers, said Joseph B. Walther, distinguished professor of communication and the Mark and Susan Bertelsen Presidential Chair in Technology and Society at UCSB. He said that like a family doctor who has treated a patient for a long time, computer systems could – hypothetically – know a patient’s complete medical history. In comparison, seeing a new doctor or specialist who only knows about your latest lab tests might be a more common experience, said Walther, who is also director of the Center for Information Technology and Society at UCSB.
“It struck us with the question, ‘Who really knows us best: a machine that can store all of this information, or a human who has never met us before or hasn’t developed a relationship with us? And what do we value in a relationship with a medical expert?’” Walther said. “So this research asks: who knows us best, and who do we like most?”
The team designed five chatbots for the two-phase study, recruiting a total of 295 participants for the first phase, of which 223 returned for the second phase. In the first part of the study, participants were randomly assigned to interact with a human doctor, an AI doctor, or an AI-assisted doctor.
In the second phase of the study, participants were assigned to interact with the same doctor again. When the doctor opened the conversation in this phase, it either identified the participant by first name and recalled information from the previous interaction, or asked again how the patient preferred to be addressed and repeated questions about their medical history.
In both phases, the chatbots were programmed to ask eight questions about COVID-19 symptoms and behaviors, then offer a diagnosis and recommendations, said Jin Chen, a doctoral student in mass communications at Penn State and first author of the paper.
“We chose to focus on COVID-19 because it was a major health issue during the study period,” said Jin Chen.
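The two-phase protocol described above can be sketched in a few lines of code. This is an illustrative reconstruction, not the researchers' actual implementation: the condition names, greeting text, and question handling are assumptions drawn only from the description in this article.

```python
import random

# Illustrative sketch of the study's chatbot flow (names and wording are
# assumptions, not the researchers' actual script).
CONDITIONS = ["human doctor", "AI doctor", "AI-assisted doctor"]
NUM_QUESTIONS = 8  # per the article: eight COVID-19 symptom/behavior questions


def assign_condition(rng=random):
    """Phase 1: randomly assign a participant to one of three doctor types."""
    return rng.choice(CONDITIONS)


def greet(participant_name, personalized):
    """Phase 2 manipulation: recall the participant's name, or ask again."""
    if personalized:
        return f"Hello {participant_name}, good to see you again."
    return "Hello, how would you like to be addressed?"


def run_interview(answers):
    """Collect answers to the eight symptom/behavior questions."""
    if len(answers) != NUM_QUESTIONS:
        raise ValueError(f"expected {NUM_QUESTIONS} answers")
    return {f"q{i + 1}": a for i, a in enumerate(answers)}
```

The personalization manipulation in phase two is the single boolean flag in `greet`; everything else stays constant across conditions, which is what lets the study attribute differences in compliance to that flag.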
Accepting AI doctors
As medical providers seek cost-effective ways to provide better care, AI medical services may offer an alternative. However, AI physicians must provide care and guidance that patients are willing to accept, according to Cheng Chen, a PhD student in mass communication at Penn State.
“One of the reasons we conducted this study is that we read in the literature a lot of accounts of people’s reluctance to accept AI as a doctor,” Chen said. “They just don’t feel comfortable with the technology, and they don’t feel the AI recognizes their uniqueness as a patient. So we thought that, because machines can retain so much information about a person, they could provide individualization and solve this uniqueness problem.”
The results suggest that this strategy may backfire. “When an AI system recognizes a person’s uniqueness, it comes across as intrusive, echoing broader concerns about AI in society,” Sundar said.
In a puzzling finding, about 78% of participants in the experimental condition featuring a human doctor believed they were interacting with an AI doctor. Sundar said a working explanation is that people may have become accustomed to online health platforms during the pandemic and have come to expect richer interactions.
In the future, the researchers expect further investigation into the roles that authenticity and a machine's ability to engage in back-and-forth conversation can play in developing better relationships with patients.
The researchers presented their findings today at the ACM CHI 2021 virtual Conference on Human Factors in Computing Systems, the largest international conference for human-computer interaction research.