Queering the smart wife could mean, in its simplest form, affording digital assistants different personalities that more accurately represent the many versions of femininity that exist around the world, as opposed to the pleasing, subservient personality that many companies have chosen to adopt.

Q is a fair example of what queering these devices could look like, Strengers adds, “but that can’t be the only solution.” Another option could be bringing in masculinity in different ways. One example might be Pepper, a humanoid robot developed by Softbank Robotics that is often ascribed he/him pronouns and is able to recognize faces and basic human emotions. Or Jibo, another robot, introduced back in 2017, that also used masculine pronouns and was marketed as a social robot for the home, though it has since been given a second life as a device focused on health care and education. Given the “gentle and effeminate” masculinity performed by Pepper and Jibo—for instance, the former responds to questions in a polite manner and frequently offers flirtatious looks, and the latter often swiveled whimsically and approached users with an endearing demeanor—Strengers and Kennedy see them as steps in the right direction.

Queering digital assistants could also result in creating bot personalities to replace humanized notions of technology. When Eno, the Capital One banking robot launched in 2019, is asked about its gender, it will playfully reply: “I’m binary. I don’t mean I’m both, I mean I’m actually just ones and zeroes. Think of me as a bot.”

Similarly, Kai, an online banking chatbot developed by Kasisto—a company that builds AI software for banks—abandons human characteristics altogether. Jacqueline Feldman, the Massachusetts-based writer and UX designer who created Kai, explained that the bot “was designed to be genderless”—not by assuming a nonbinary identity, as Q does, but by assuming a robot-specific identity and using “it” pronouns. “From my perspective as a designer, a bot could be beautifully designed and charming in new ways that are specific to the bot, without it pretending to be human,” she says.

When asked if it was a real person, Kai would say, “A bot is a bot is a bot. Next question, please,” clearly signaling to users that it was neither human nor pretending to be one. And if asked about gender, it would answer, “As a bot, I’m not a human. But I learn. That’s machine learning.”

A bot identity doesn’t mean Kai takes abuse. A few years ago, Feldman also talked about deliberately designing Kai with an ability to deflect and shut down harassment. For example, if a user repeatedly harassed the bot, Kai would respond with something like “I’m envisioning white sand and a hammock, please try me later!” “I really did my best to give the bot some dignity,” Feldman told the Australian Broadcasting Corporation in 2017.

Still, Feldman believes there’s an ethical imperative for bots to self-identify as bots. “There’s a lack of transparency when companies that design [bots] make it easy for the person interacting with the bot to forget that it’s a bot,” she says, and gendering bots or giving them a human voice makes that transparency much harder to maintain. Since many consumer experiences with chatbots are frustrating, and so many people would rather speak to a person, Feldman thinks affording bots human qualities could be a case of “over-designing.”
