13 April 2021
In 2018, Brazilian bank Bradesco introduced a virtual assistant named BIA — short for Bradesco Inteligência Artificial — who presented as female and helped customers with their financial queries. It wasn't long before BIA was being verbally assaulted and harassed, subjected to the same kind of abuse real women deal with every day. In 2020 alone, BIA received around 95,000 messages that could be categorized as sexual harassment. Not only did men feel entitled to swear at BIA, they used humiliating language and even threats of rape.
Like most other chatbots, voice assistants and virtual agents, BIA was programmed to respond to abuse in a subservient or passive way, attempting to be polite and helpful even when faced with vile speech: "I'm sorry, I don't understand," "I'll try to do better," "I don't know how to respond to that." Apple's Siri reacted with "I'd blush if I could," a phrase that became the title of a 2019 UNESCO study which concluded that "consumer technologies generated by male-dominated teams and companies often reflect troubling gender biases."
Guided in part by that UNESCO paper and by the Hey Update My Voice campaign, Bradesco decided to change BIA's attitude and let her talk back. Now, BIA no longer attempts to remain friendly at all costs. When customers insult or attempt to demean her, she responds with a firm "Don't talk to me like that," demands respect, and will even cite articles of criminal law.
No, virtual agents aren't real people and don't have feelings. But disrespect towards them reflects the stereotypes and gender violence faced by real women. If brands unwittingly create spaces in which verbal abuse and sexual harassment go uncontested, that behavior comes to be seen as tolerable, even acceptable. If you haven't already done so, it's time to check your bot's scripts.
Related: Bogotá hotline aims to dial down violence by getting men to open up.