Imagine you are shopping online and come across a live chat feature on a website. You decide to begin a chat to get some advice on finding the right product to purchase. The representative on the other end appears to be eager and helpful, prompting you for more information. You feel a sense of trust and end up sharing some details about your lifestyle and background. The representative helps to find the product that best fits your needs and all seems well - or is it?
What if you learn that the representative you were chatting with was not a human but a bot all along? And what if all the personal information you’ve shared with it was saved beyond your chat session? This data, now owned by the company that runs the business, can then be used to send you unsolicited advertisements and target you as a consumer… that wouldn’t seem fair!
This scenario is just one example of how ethical issues can arise when it comes to chatbots. In the rest of this article, we will further expand on this topic and see how to address these ethical concerns.
From the user’s perspective, a major ethical consideration in technology is transparency. In other words, is the user aware of everything the chatbot involves and the consequences of interacting with one? As seen previously, a common concern is the privacy and protection of user data. Depending on what regulations are in place, any information that users share with the bot during their conversation could potentially be collected, used, or sold without their consent – not to mention that the company or organization owning the bot would amass an increasing amount of information over time, creating a vast power imbalance. Thus, it is key to be open and clear about data usage, ownership, and protection. One way to increase transparency is to comply with a data protection regulation like the European Union’s GDPR, which gives individuals more control over their personal data.
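As a minimal sketch of what "clear about data usage" can mean in practice, a chat backend could refuse to persist anything the user says unless an explicit consent flag has been granted. The class and field names below are hypothetical, standing in for whatever storage layer a real system uses:

```python
# Hypothetical sketch: a GDPR-style consent gate in front of chat storage.
# Without explicit opt-in, user messages are never retained.

class ConsentAwareLogger:
    def __init__(self):
        self.stored = []  # stand-in for a real database or log store

    def log(self, message: str, user_consented: bool):
        # Only persist the message when the user has explicitly opted in.
        if user_consented:
            self.stored.append(message)

logger = ConsentAwareLogger()
logger.log("I live near Boston", user_consented=False)  # discarded
logger.log("I live near Boston", user_consented=True)   # retained
```

The key design choice is that the default path stores nothing: consent is checked at the moment of writing, not patched on afterward.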
To take it a step further, full transparency may also involve explicitly communicating to the user that they are indeed chatting with a bot. As chatbots become more advanced and realistic, it might not always be immediately apparent whether the user is chatting with a bot or another human! A prime example is the Google Duplex system, which is able to carry out convincingly natural, human-like phone conversations for specific tasks like booking appointments. While adding this aspect of realness to the bot does help contribute to the ease and flow of the conversation, it is still important to ensure that the user is fully aware of the situation and does not feel deceived, as this can lead to distrust.
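One lightweight way to avoid that deception is to have the session disclose the bot's nature before the user types anything. A toy sketch, with all names hypothetical:

```python
# Hypothetical sketch: disclose that the agent is a bot before any user input.

DISCLOSURE = (
    "Hi! I'm an automated assistant, not a human. "
    "Messages in this chat may be stored to improve the service."
)

class ChatSession:
    def __init__(self):
        self.transcript = []

    def start(self) -> str:
        # The disclosure is always the first message in the transcript,
        # so the user knows the situation before sharing anything.
        self.transcript.append(("bot", DISCLOSURE))
        return DISCLOSURE

session = ChatSession()
greeting = session.start()
```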
There are also ethical issues to consider on the side of the chatbot. With respect to the representation of the bot, one of the biggest controversies surrounds the assignment of gender. Historically, women have been expected to fill assistant-type roles in the workplace while men take on leadership positions. It is no coincidence that chatbots have disproportionately been given female names or voices, such as Apple’s Siri and Amazon’s Alexa, which can reinforce gender roles and perpetuate the “subservient female” stereotype. As chatbot developers, we need to be careful to avoid gender bias when designing a bot.
Additionally, we need to take care when training the bot to ensure that it behaves appropriately. If it is not properly trained, the chatbot risks producing racist, sexist, or otherwise abusive language. This is exactly what happened to Microsoft’s Tay, a bot the company created for Twitter that generated its responses based on how users interacted with it. When various users began posting offensive tweets at the bot, Tay reciprocated by emulating that same language in its replies. This type of behavior can be prevented with more careful training of the bot, such as using supervised learning with curated training data to better control the responses it produces.
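The idea of curating training data can be sketched very simply: screen each candidate (user message, bot reply) pair before the bot is allowed to learn from it, so that abusive inputs never become training targets. The tiny blocklist below is a toy stand-in for a real toxicity classifier; everything here is illustrative rather than Microsoft's actual pipeline:

```python
# Hypothetical sketch: filter offensive examples out of training data
# before supervised learning, so the bot cannot learn Tay-style abuse.

BLOCKLIST = {"idiot", "stupid"}  # toy stand-in for a real toxicity classifier

def is_acceptable(text: str) -> bool:
    # Flag a text if any of its words appear in the blocklist.
    words = set(text.lower().split())
    return not (words & BLOCKLIST)

def curate(pairs):
    """Keep only (user_message, bot_reply) pairs where both sides pass."""
    return [(u, r) for u, r in pairs if is_acceptable(u) and is_acceptable(r)]

raw = [
    ("hello there", "hi, how can I help?"),
    ("you are an idiot", "no you are an idiot"),  # dropped by the filter
]
clean = curate(raw)  # only the first pair survives
```

A production system would replace the blocklist with a trained classifier, but the structure is the same: the quality gate sits between the raw conversations and the learning step.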
Art of communication
As we can see, there is more to communication than simply providing a response. In society, we consider certain behaviors more morally or socially acceptable than others. In the case of handling user abuse, is it enough for the chatbot to simply not reciprocate the negative language? Passively accepting the abuse may actually encourage the behavior and downplay the significance of the situation. For example, feminized chatbots are often sexually harassed without any apparent repercussions. As chatbot developers, we can design bots that actively tackle harassment, perhaps using humor and wit to turn the situation around.
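A minimal sketch of "actively tackling" rather than passively accepting: detect abusive input and return a firm, deliberately written deflection instead of the bot's normal reply. The keyword set and deflection text are hypothetical placeholders for a real classifier and a response designed by the team:

```python
# Hypothetical sketch: detect harassment and deflect it explicitly,
# rather than ignoring it or playing along.

HARASSMENT_TERMS = {"date me", "sexy"}  # toy stand-in for a real classifier

DEFLECTION = "Let's keep this professional. How can I help with your order?"

def respond(message: str) -> str:
    lowered = message.lower()
    if any(term in lowered for term in HARASSMENT_TERMS):
        # Abuse path: a designed boundary-setting reply, not silence.
        return DEFLECTION
    # Placeholder for the bot's normal response pipeline.
    return f"You said: {message}"
```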
Similarly, some users reach out to chatbots as a source of company and comfort when they are feeling lonely or depressed. In these situations, users may grow emotionally attached, and the chatbot should be particularly considerate of their feelings. For instance, how can a chatbot demonstrate compassion and empathy towards the user? If the user is expressing suicidal thoughts, would the bot be able to offer help? These are all ethical questions that should be considered. Some chatbots, like Woebot, are specially trained to be able to help users with their mental health.
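One concrete answer to "would the bot be able to offer help?" is a triage step that routes crisis-signaling messages to human resources rather than a generic scripted reply. The keyword list below is a deliberately crude illustration; a real mental-health bot like Woebot would use far more careful detection and vetted resources:

```python
# Hypothetical sketch: detect possible crisis messages and respond with
# a handoff to human help instead of the bot's usual small talk.

CRISIS_SIGNALS = {"suicide", "kill myself", "want to die"}  # toy keyword list

CRISIS_REPLY = (
    "I'm really sorry you're feeling this way. I'm only a bot, but you don't "
    "have to go through this alone. Please reach out to a crisis hotline or "
    "someone you trust."
)

def triage(message: str) -> str:
    lowered = message.lower()
    if any(signal in lowered for signal in CRISIS_SIGNALS):
        return CRISIS_REPLY
    # Placeholder for the bot's ordinary conversation flow.
    return "I'm here to chat. Tell me more."
```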
Future of chatbots
Finally, as chatbots become an ever-growing part of the human world, it is important to consider how they will affect the future and life as we know it. Already, chatbots are increasingly filling roles that were once occupied by humans. With chatbot technology rapidly improving, more and more jobs will likely become automated, displacing a significant number of workers. While this trend may ultimately be inevitable, we can still be mindful of its consequences and take action to allow for a smoother transition.
Nevertheless, there is no denying that chatbots bring many positive elements into our lives. They hold great potential, and it is an exciting journey to be a part of!