Faces created by AI are deemed more trustworthy than real ones
Artificial intelligence (AI) can now create convincing images of people, thanks to sites such as This Person Does Not Exist, as well as non-existent animals and even rooms for rent.
The similarity between simulated images and reality is now so close that it has been highlighted by research published in Proceedings of the National Academy of Sciences of the United States of America (PNAS), which compared real faces with those created by an AI algorithm known as a generative adversarial network (GAN).
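A GAN pits two models against each other: a generator that produces candidate samples and a discriminator that tries to tell them apart from real data, with training gradually pushing the generator's output toward the real distribution. Purely as an illustration of that idea (not the face-synthesis model used in the study), here is a minimal one-dimensional GAN in plain NumPy that learns to mimic a Gaussian; all parameters and the toy data are invented for this sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Discriminator D(x) = sigmoid(w*x + b); generator G(z) = a*z + c
w, b = 0.1, 0.0
a, c = 1.0, 0.0
lr, batch, steps = 0.02, 64, 5000
real_mu, real_sigma = 4.0, 1.25     # toy "real data" distribution

for _ in range(steps):
    # --- discriminator step: push D(real) toward 1 and D(fake) toward 0 ---
    x_real = rng.normal(real_mu, real_sigma, batch)
    z = rng.normal(0.0, 1.0, batch)
    x_fake = a * z + c
    d_real = sigmoid(w * x_real + b)
    d_fake = sigmoid(w * x_fake + b)
    # gradients of the binary cross-entropy loss for a logistic discriminator
    grad_w = np.mean(-(1 - d_real) * x_real + d_fake * x_fake)
    grad_b = np.mean(-(1 - d_real) + d_fake)
    w -= lr * grad_w
    b -= lr * grad_b

    # --- generator step (non-saturating loss): push D(fake) toward 1 ---
    z = rng.normal(0.0, 1.0, batch)
    x_fake = a * z + c
    d_fake = sigmoid(w * x_fake + b)
    dloss_dx = -(1 - d_fake) * w    # gradient of -log D(G(z)) w.r.t. G(z)
    a -= lr * np.mean(dloss_dx * z)
    c -= lr * np.mean(dloss_dx)

samples = a * rng.normal(0.0, 1.0, 10000) + c
# the generator's output mean should drift toward the real mean of 4
print(round(float(samples.mean()), 2))
```

Real face-generating GANs work on the same adversarial principle, only with deep convolutional networks and millions of photographs instead of a two-parameter generator and a toy Gaussian.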
Not only does the research conclude that the two are difficult to distinguish, but it also shows that fictional faces inspire more trust than real ones. Although the difference was slight, at just 7.7%, the faces rated as least trustworthy were real, while three of the four rated as most trustworthy were fictional. Sophie Nightingale, professor of psychology at Lancaster University and co-author of the study, said the result was not what the team expected: “We were quite surprised,” she says.
The research was based on three experiments. In the first, 315 participants asked to distinguish between real and synthesized faces achieved an accuracy rate of 48.2%. In the second, 219 different participants were given tips for telling synthetic faces from real ones, along with feedback on their guesses. Their accuracy was only slightly higher, at 59%. In both cases, the faces that proved hardest to classify were those of white people. Nightingale attributes this discrepancy to the algorithms being trained more heavily on this type of face, but expects that to change over time.
In the third experiment, the researchers went a step further, assessing the level of trust the faces generated and testing whether synthetic faces triggered the same levels of trust as real ones. To do this, 223 participants rated the faces from 1 (very untrustworthy) to 7 (very trustworthy). Real faces received an average score of 4.48, against 4.82 for synthetic faces. Although the difference is only 7.7%, the authors stress that it is “significant”. Of the four faces rated most trustworthy, three were synthetic, while the four that seemed least trustworthy to participants were all real.
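The percentage gap follows directly from the two mean scores reported above (computed from the rounded means it comes out at roughly 7.6%; the article's 7.7% presumably reflects the unrounded values):

```python
# Mean trust ratings reported in the article, on a 1-7 scale
real_mean = 4.48
synthetic_mean = 4.82

# Relative gap between the two means, as a percentage of the real-face score
gap_pct = (synthetic_mean - real_mean) / real_mean * 100
print(round(gap_pct, 1))  # → 7.6
```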
All the faces analyzed came from a sample of 800 images, half real and half fake; the synthetic half was created with GANs. The sample contained equal numbers of men and women across four racial groups: African American, Caucasian, East Asian and South Asian. The researchers matched each real face with a synthetic face of similar age, gender, race and general appearance. Each participant analyzed 128 faces.
José Miguel Fernández Dols, professor of social psychology at the Autonomous University of Madrid whose research focuses on facial expression, points out that not all the faces have the same expression or posture, and that this can “affect judgments”. The study takes the importance of facial expression into account, acknowledging that a smiling face is more likely to be deemed trustworthy. However, 65.5% of the real faces and 58.8% of the synthetic faces were smiling, so facial expression alone “cannot explain why synthetic faces are considered more trustworthy”.
Fernández Dols also considers posture critical in three of the images rated least trustworthy: pushing the upper part of the face forward relative to the mouth while protecting the neck is a posture that often precedes aggression. “Synthetic faces are becoming more realistic and can easily generate more trust by playing with several factors: how typical the face looks, its features and its posture,” he explains.
The consequences of synthetic content
In addition to synthetic faces, Nightingale predicts that other types of artificially created content, such as video and audio, are “on the way” to becoming indistinguishable from real content. “It is the democratization of access to this powerful technology that poses the greatest threat,” she says. “We also encourage reconsideration of the often laissez-faire approach to the public and unlimited release of code for anyone to incorporate into any application.”
To prevent the proliferation of non-consensual intimate images, fraud and disinformation campaigns, the researchers propose guidelines for the creation and distribution of computer-generated images that would protect the public from “deep fakes”.
But Sergio Escalera, a professor at the University of Barcelona and a member of the Computer Vision Center, points to a positive aspect of AI-generated faces: “It’s interesting to see how faces can be generated to convey friendly emotion,” he said, suggesting this could be incorporated into virtual assistants, or used when a specific expression, such as calm, needs to be conveyed to people suffering from mental illness.
According to Escalera, from an ethical point of view it is important to publicize the potential of AI and, above all, “to be very aware of the possible risks its transfer to society may entail”. He also points out that current legislation is “a bit behind” technological progress and that “a lot remains to be done”.