Gender Bias in Futuristic Technology – AI in Pop Culture
Growing socialization with female-voiced AI virtual assistants is rapidly reducing the image of women to that of “the gendered woman who responds to demand”.
Gender bias around the world is no longer news. It is woven into the social fabric through stereotypes and long-respected norms. Such systems create spaces of division, both social and digital. While these divides are clearly visible from a socio-political perspective, they remain latent in emerging futuristic technologies, primarily machine learning built on big data. With a huge proportion of the population using online digital services and networks today, we create several gigabytes of data every day. This data, in turn, trains AI algorithms to become smarter, with improved accuracy and quality of results.
Every day, people around the world rely more on algorithms for decision-making: mortgage approvals, insurance risk, job screening, appraisals, setting bail amounts, sentencing recommendations, predictive policing, and much more. Algorithms now mediate the majority of everyday experiences. It is estimated that AI will contribute approximately $15.7 trillion to the global economy by 2030, of which roughly $6.6 trillion will come from increased productivity and the remaining $9.1 trillion from consumption-side effects.
We have gradually moved from assisted or augmented intelligence, which helped humans in their tasks, to more automated and autonomous systems that eliminate the need for human supervision altogether. As exciting as this possibility may seem, our apprehensions must remain intact, because this technology knows nothing about the socio-economic inequalities of our world. And as it begins to shape the lives of individuals, it becomes an ethical necessity to examine what is responsible and who is responsible. Historical theories of gender-technology relations hold that the two are co-produced, something that will be discussed later in this article.
Ethical AI insists on weighing an individual’s importance and rights against the broader utility of any digital product. Individual definitions of fairness largely govern the ethics and governance of the field — or rather, the definitions held by its creators and powerful gatekeepers. Transparency and accountability therefore become essential to its functioning. With growing confidence in and preference for futuristic technologies, these values need to be more rigorously embedded and regulated through audit frameworks. The decision-making process should always be clear and undisguised, comprehensible to all.
AI applications and gender bias
A nuanced sociological exploration shows how AI applications spread gender bias among their users. It is certainly no coincidence that AI assistants have nearly always defaulted to female voices: take Microsoft’s Cortana, Apple’s Siri, Google Assistant, and now Amazon’s Alexa. Late 20th-century technology researchers attributed this growing stereotype to studies (later refuted) claiming that female voices were more intelligible because of their higher pitch. This eventually led to the creation of an entire female-dominated industry of telemarketers and telephone operators.
Now, this trend is substantiated by studies indicating that audiences respond better to a woman’s voice, describing it as “friendly and pleasant” and as corresponding better to “the image of a devoted assistant”. Similarly, in 2016, when Google tried to launch its new assistant with both male and female voices, it could not, because no tools existed for training male voices. The explanation was that all precursor speech-synthesis systems had been trained only on female voices and therefore performed better with them.
Subject matter expert Professor Safiya Noble has repeatedly noted that increasing socialization with virtual assistants is rapidly reducing the image of women to that of “a gendered female who responds to the request”. Moreover, when we ask the symbolically female assistant encoded in the system to perform limited functions — booking plane tickets, setting reminders, creating monthly calendars, fetching weather reports, and so on — she learns these as her primary tasks, and her widespread approval grows only for such missions. Rather telling is that one of the fastest supercomputers in the world, IBM Watson, is used to make complex medical decisions and play trivia games, and speaks in the voice of the acclaimed male voice artist Jeff Woodman.
Turning to media portrayals of these stereotypes: JARVIS, Tony Stark’s popular AI assistant in Marvel’s Avengers and Iron Man films (2008–2013), appears as a companion rather than an automated servant. JARVIS helps him save the world, and actor Paul Bettany voices him. On the other hand, Samantha, the AI assistant voiced by Scarlett Johansson in the film Her (2013), talks about relationships and plans dates for the protagonist played by Joaquin Phoenix. A classic stereotyping of the provider/protector role versus the carer role!
Following the same heteronormative script, the humanoid female robot Sophia declared a desire to have a baby and a family barely a month after her introduction to the real world. She was even granted more freedom and rights as a Saudi citizen than a human woman. It is strange that a female bot can have more independence than the women and foreign workers of a country, and yet be conditioned to conform so quickly to generations-old notions of a heterosexual family unit.
Both of these biases stem mainly from the same cause: the gender bias of the human mind, historically reproduced, recorded, and embedded in us. And while this article broadly examines two ways this bias is perpetuated, there are several facets to the question. More importantly, these two representations of bias revolve in a vicious circle where each produces the other. Real-world bias seeps into data bias (evident through algorithmic bias), which is then put into action through critical decisions in business, medicine, law and order, and beyond, thereby reinforcing the very biases it began with. Thus, the fact that 92.9 percent of secretaries and administrative assistants in the United States were women in 2020, with 83.8 percent of them white, is hardly a surprise.
A study conducted by PwC in 2018 on the perception of AI and its tools revealed that nearly 61 percent of its respondents — employees working in metropolitan cities in India — viewed these digital assistants favourably, noting that they helped with “event reminders” and “calendar management”. Interestingly, 74 percent of respondents wanted their digital assistants to be “friendly”.
Alan Winfield, a robot ethicist, says that designing a robot with a gender is a deceptive act, pointing out that machines cannot themselves belong to a particular gender. So when they are designed to shyly deflect or submit to harassment, they set up real women for further objectification. While capable of shifting overworn expectations of gender groups, technologies even today seem to treat women and bots alike as “near-human beings with no minds of their own”.
Whether through voice assistants, chatbots, or virtual agents, by gendering an inanimate product, technology as a discipline recycles expected social behaviours behind a thoroughly modernist face of neutrality. A sobering UNESCO and EQUALS report brought to light a pathetic array of voice assistant responses to verbal sexual harassment. The chart below shows that the feminized bots even thank users for sexually inappropriate comments, trivializing the catcalling and verbal abuse women face daily. When these female VAs are presented to consumers as subordinate objects at their disposal, technologists encourage society at large to perceive women as “objects” (see chart below). This, in turn, perpetuates the trivialized presence of women in agency and decision-making.
Chart: Voice Assistant (VA) Responses to Verbal Sexual Harassment
At the very foundation of AI systems are algorithms, written and trained primarily by English-speaking, white, privileged men. While biases of all kinds are nearly invisible in modern technological infrastructure, women occupy an ambiguous position: not entirely a minority group, yet not a privileged one either. This makes gender bias one of the most significant globally recognized threats.
This comment originally appeared in Feminism in India.