A team of Israeli researchers has discovered that modern artificial intelligence systems are capable of evaluating real people and developing their own form of "trust," albeit in a different way from humans, according to a statement published by the Hebrew University of Jerusalem and cited by Xinhua. The findings appear in a recent study published in the scientific journal Proceedings of the Royal Society A and raise new questions about how AI algorithms make decisions in sensitive areas such as hiring, lending, or medical services.
• Over 43,000 decisions analyzed in the research
The researchers evaluated how AI systems analyze and classify individuals in contexts such as: granting bank loans; recruiting and selecting personnel; generating automatic recommendations. The study analyzed more than 43,000 simulated decisions, then compared them with the responses of about 1,000 human participants.
• AI favors people perceived as competent and honest
The results showed that both humans and artificial intelligence systems tend to favor people perceived as: competent; honest; well-intentioned. According to the researchers, this suggests that AI manages to reproduce some of the fundamental mechanisms of how people build trusting relationships.
• Important differences between human and algorithmic evaluation
However, the study highlights that there are major differences between how humans and AI form these assessments. While humans tend to build overall impressions based on a complex set of traits and perceptions, AI systems: separate individual traits into distinct categories; independently analyze factors such as competence, integrity or intentions; apply more rigid and less nuanced assessments.
Thus, while humans tend to "feel" holistically whether they can trust a person, AI works more through a mathematical sum of predetermined criteria.
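The "mathematical sum of predetermined criteria" described above can be sketched as a simple weighted sum of independently scored traits. This is a toy illustration only, not the model used in the study; the trait names, weights, and scores below are hypothetical:

```python
# Toy sketch (hypothetical, not the study's actual model): an algorithmic
# "trust score" computed as a fixed weighted sum of traits that are each
# scored independently, in contrast to a holistic human impression.

def algorithmic_trust(traits: dict[str, float],
                      weights: dict[str, float]) -> float:
    """Combine independently assessed trait scores (each 0-1)
    into a single score via a fixed weighted sum."""
    return sum(weights[t] * traits[t] for t in weights)

# Hypothetical candidate: very competent, moderately honest,
# reasonably well-intentioned.
candidate = {"competence": 0.9, "integrity": 0.6, "intentions": 0.8}
weights = {"competence": 0.5, "integrity": 0.3, "intentions": 0.2}

score = algorithmic_trust(candidate, weights)
print(round(score, 2))  # 0.5*0.9 + 0.3*0.6 + 0.2*0.8 = 0.79
```

Because each trait enters the sum separately, a rigid rule like this cannot trade traits off against each other the way a holistic human judgment might, which is one way to picture the "less nuanced" assessments the study describes.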
• Algorithmic biases stronger than human ones
One of the most important findings reported by the researchers is that artificial intelligence can exhibit consistent biases, sometimes even more pronounced than those observed in humans. According to the study, AI can discriminate based on factors such as: age; gender; religion, even in situations where all other relevant information about the people analyzed is identical.
• Researchers call for more transparency in AI decisions
The authors of the study warn that these findings highlight the need for a better understanding of how AI systems make decisions, especially as they influence more and more key areas of society. From employment and finance to health and justice, algorithms are playing an increasingly important role, and a lack of transparency in decision-making can generate major risks. The study comes at a time when the use of artificial intelligence is expanding rapidly worldwide, and governments and international organizations are discussing stricter regulation of AI technologies to prevent: algorithmic discrimination; non-transparent decisions; systemic errors in the assessment of individuals.