Artificial intelligence, despite its many advantages, is not free from bias, and new research suggests that human users may unconsciously absorb these automated biases, which can continue to shape their behavior even after they stop using the AI. This could have profound implications in fields such as healthcare and law enforcement.
While previous studies have highlighted the harm that biased AI can cause to marginalized groups, this research examines how AI-human interaction shapes human decisions. The study, led by Helena Matute, an experimental psychologist at the University of Deusto in Spain, explored whether biases suggested by an AI can persist in a person's behavior after they stop using the program.
Across three experiments, participants were asked to categorize images as showing the presence or absence of a fictional disease. The AI suggestions they received were deliberately skewed, leading participants to classify images incorrectly. Participants who had received these suggestions went on to reproduce the same bias in later decisions, even after the AI guidance was withdrawn.
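As a rough illustration of this kind of paradigm (a minimal sketch, not the authors' actual procedure), the simulation below models a participant who follows a deliberately skewed classifier during an assisted phase and then labels images unaided. All function names, follow rates, and carry-over probabilities are invented for the example.

```python
import random

random.seed(0)

# Illustrative simulation: a "biased AI" adviser systematically mislabels
# one class, a simulated participant follows its advice during an assisted
# phase, and we then check whether the induced error pattern persists once
# the advice is withdrawn. Parameters are assumptions, not study figures.

N_TRIALS = 200          # trials per phase
FOLLOW_RATE = 0.8       # chance the participant accepts the AI suggestion
CARRYOVER = 0.3         # chance an absorbed bias is applied later, unaided

def biased_ai(true_label):
    """AI that labels 'present' samples correctly but flips a large share
    of 'absent' samples to 'present' (a skewed suggestion)."""
    if true_label == "absent" and random.random() < 0.6:
        return "present"
    return true_label

def run_phase(with_ai, learned_bias):
    errors = 0
    for _ in range(N_TRIALS):
        truth = random.choice(["present", "absent"])
        if with_ai and random.random() < FOLLOW_RATE:
            answer = biased_ai(truth)          # participant defers to the AI
        elif learned_bias and truth == "absent" and random.random() < CARRYOVER:
            answer = "present"                 # carried-over bias, no AI present
        else:
            answer = truth                     # otherwise answers correctly
        errors += answer != truth
    return errors / N_TRIALS

assisted_error = run_phase(with_ai=True, learned_bias=False)
unaided_error = run_phase(with_ai=False, learned_bias=True)
control_error = run_phase(with_ai=False, learned_bias=False)

print(f"error rate with biased AI advice:  {assisted_error:.2f}")
print(f"error rate after advice removed:   {unaided_error:.2f}")
print(f"error rate, never-advised control: {control_error:.2f}")
```

Under these assumed parameters, the "previously advised" phase still shows an elevated error rate on the class the AI had skewed, mirroring the pattern the researchers report: the bias outlives the advice.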
The study has limitations, notably that its participants were not trained medical professionals, but it shows that even brief, time-limited interactions with AI models can leave lasting effects, potentially steering human behavior in detrimental ways.
Experts emphasize the importance of increasing transparency and understanding of AI systems to mitigate the impact of AI biases. Otherwise, there is a risk of creating a self-reinforcing cycle in which biased humans generate increasingly biased algorithms, a trend that may be challenging to reverse.
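To make that feedback-loop concern concrete, here is a toy calculation (purely illustrative assumptions, not figures from the study or any deployed system) showing how a modest initial bias can compound when AI-influenced human labels feed each retraining round.

```python
# Hypothetical sketch of the self-reinforcing cycle described above:
# a model trained on human labels assists the next round of labeling,
# and the humans' AI-influenced labels become the next training set.
# The susceptibility and amplification factors are assumed values.

def next_model_bias(current_bias, human_susceptibility=0.5):
    """Humans absorb a fraction of the model's bias, and the retrained
    model inherits the humans' (now shifted) labeling tendency."""
    human_bias = human_susceptibility * current_bias
    return current_bias + human_bias * 0.2  # retraining amplifies slightly

bias = 0.10  # initial fraction of systematically skewed predictions
for generation in range(5):
    print(f"generation {generation}: model bias {bias:.3f}")
    bias = next_model_bias(bias)
```

Even with these small, made-up coefficients, the bias grows a little with every generation, which is the compounding dynamic that transparency and auditing are meant to interrupt.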
The study underscores the need for greater awareness of these effects in the development and deployment of AI systems across sectors.