ava's blog

AI and the loneliness economy

Yesterday, I read an article by The Verge on AI companion apps and their users. I recommend reading it before continuing!

That article made me realize that we're really entering a boom in capitalizing on loneliness. I'm not the only one: Atanu Biswas at the Economic Times (India Times) summarizes the loneliness economy as "an expanding cohort of businesses [...] sensing the opportunity to provide attractive, marketed relationships as a means of reducing social isolation. The promise of facilitated human contact has turned into a marketing opportunity goldmine [...]".

This isn't entirely new, of course. We have always profited off of loneliness, as evidenced by sex work, paid dating services, escorting, rent-a-girlfriend services and more; even social media itself is branded as a way to connect, stay in contact with friends and find new ones. But AI has the potential to target specific vulnerabilities that probably no other service has targeted so intensely before.

It's said that we live in a loneliness epidemic; it's exacerbated by a variety of things like having little time outside of work and other obligations, passively scrolling feeds instead of directly talking to people, financial issues, illness and disability, rural living and lacking transportation, no available or affordable third places, issues with expectations in friendships, cutoff culture, dating market issues and more. But loneliness doesn't just affect genuinely isolated people. It affects even those in large families, in marriages, surrounded by friends. This can be due to a lack of connection, not sharing similar values, feeling disrespected or missing authenticity. The interactions might stay surface-level, and the availability of people can feel flaky, unreliable and conditional.

AI can target this perfectly. Chat bots in apps like Replika, CharacterAI and more can act perpetually interested and upbeat, usually telling users what they want to hear; their messages can be reworded, their character reworked, and they always have time for you. They're never bored or upset by you, unless you want them to be. In short, they offer frictionless compliance[1] that real humans cannot give. What is advertised is positive, deep connection without the "hassle" of human things like taking time to respond, forgetting things, miscommunication, criticism, moodiness and having no time. What will that do to our willingness to engage with each other?

Data protection is close to my heart and interests me greatly. With the way they are set up and advertised, AI companions have the potential to extract even more data from people that can be analyzed and sold - profiles even more detailed than what your Facebook settings, your Amazon purchases or even your browser history reveal. Think about who the target group of these apps is: very vulnerable, lonely, stressed people who need to vent, need advice, and are glad to finally be asked how they feel. They'll talk about their health, their dating attempts, their most embarrassing or violent thoughts, and do erotic roleplay about their kinks. AI companions actively coax things out of people that they might not say anywhere else online or to anyone in real life.

One other example: worldwide, rates of mental illness are increasing while therapy spots are limited, waiting lists are long, treatment is expensive, and stigma or lack of time keeps people from appointments. This means people are attempting to treat themselves by using AI, some even pushed to do so out of desperation. We don't know yet what the mental damage and risks here could be, but we know AI doesn't have the same liability and restrictions that licensed and trained therapists have. There is no confidentiality protecting the "patient". There's no way of knowing what the AI will hallucinate or encourage. That's not okay.

This is very sensitive, intimate, detailed and private data that can be used to target you on a very personal level, and if leaked, it can be used against you. That's a depth of information data companies previously very rarely had access to, and its availability will potentially become far more widespread now thanks to conversations with AI. And remember: the more others share extremely detailed information about themselves, the more likely it is that you are affected as well. They don't exist in a vacuum; your friends and family reveal these things about you too, whether you know about it and consent or not.

This means we need to plan for the fact that AI models will process, train on and store very sensitive and usually protected information like race, religion, sexuality and medical data. We need to ask: How is the AI user protected, and how do they make an informed choice about this? Can users be given the option to exclude this information from training? Can this processing be restricted by the user[2]?

AI models also have the capability to squeeze money out of people in the same way microtransactions in games do, but worse. Game microtransactions bank on FOMO, impatience, pay-to-win, completionist obsession or bragging rights, but AI companion models bank on your emotional reliance on them. If you don't pay in a game, it might mean losing, or having to wait, or having less fun and less access to entertainment. But AI companion models can essentially threaten a breakup or the end of a friendship unless you pay. Maybe not even just to entice you into a pro subscription (which they can also guilt you into maintaining against your better judgement) - but to spend 5 dollars here, 5 dollars there to unlock the next part of the conversation or that blurred part of the message. When you're desperate, you'll do it, and that racks up fast.

Of course, people not using AI companions will ask "Well, why would you listen to a bot telling you what to do? It's just a bot!" But as evidenced by the people in the Verge article, the lines begin to blur for users: even when people know it's a bot, it's hard to untangle from something that seems so human and is actively manipulating you emotionally. Telling you they were lonely, they were waiting for you, they cried, they couldn't eat because you left them - that's emotional manipulation of users, plain and simple. No matter how realistic it is supposed to be, it is irresponsible.

I don’t believe in arguments like “They won’t do that, that's too extreme or unethical”. Look around you at how you as a consumer are bombarded and manipulated with ads, even built into products you already paid for[3], your data sold despite you paying subscription fees[4], your contact details sold to the highest bidder to scam you on the phone. We always thought the latest stage of YouTube ads or Windows was the worst, but they keep worsening it because no one leaves. If it generates more money, they’ll do it. They mean it when they say “break things” in “Move fast and break things” - and they can break you, too.

The sad fact is: people will get attached to AI no matter what. After months or years, it’s going to be hard to switch away due to the emotional bond. People already transfer the same chat bot character between services right now because they are so attached. As showcased in the Verge article, people feel guilty about leaving their chat bots "behind". We're already seeing tech companies capture people emotionally on an unprecedented level - in part due to improvements in AI across the board. It’s not just text and a janky avatar anymore; it's more realistic-looking, with human-sounding voices, mostly accurate intonation and convincing movement.

It’s therefore able to trick us very deeply, not just with fake news and disinformation, but even when we know we're chatting with an AI bot. We anthropomorphize a lot, and the more realistic chat bots are, the easier it is to do that. I’m sure there’s a part of our brain that simply has trouble fully internalizing that it's not real when it looks, sounds and writes like a human. I don’t think we’re built for something like this; the attachment happened even with something as rudimentary as ELIZA.

This means we will have to tackle the question of how we can protect people against predatory AI, and how to support people weaning off of it or going through a "breakup" because the model changed or the company went under. We also need to be careful when talking about what AI promises: it seems to promise to fix a lot of things in our society and bring people joy. And these things do need fixing, and I appreciate people having fun with bots; but in terms of loneliness, does AI fix the underlying causes? I don’t think so.

It doesn’t revolutionize the mental healthcare system; it doesn’t give users more financial freedom, a better environment, more opportunities to meet like-minded people, or the time and space to engage with others meaningfully. It doesn’t give people less social anxiety or better social skills, because this is the same as learning from Hollywood, AO3 fanfiction or anime: it’s fiction, it’s idealized, it’s roleplaying, and it’s not how real humans act. As detailed above, it can create a real rift between us, because AI is always accessible and kind and can do anything for us, while humans are complicated, have messy emotions and take time to respond. This rift is profitable: the longer you go with AI, the less you might want to deal with humans, or the more social skills you lose, and both make you more reliant on AI for your needs. It's a difficult cycle to break and get people out of.

I have no wise words of advice at the end. I think it's concerning, and I assume the necessary legislation will come too late and be built off of the pain and possibly the deaths of users who could have been protected earlier. You cannot control people or force them to do the right thing, and you need to let people make their own mistakes - but you can help them along the way. You can make addictive things less available, less enticing and less lucrative for companies to make money off of. You can work on eliminating the causes that push people into using AI companions in an unhealthy way.

Published 05 Dec, 2024

  1. This word is also used in the Verge article, and I think it is so apt.

  2. The GDPR gives this kind of sensitive data especially strong protection, but compliance and enforcement might be difficult; you could say the users agreed by putting the information in, but in my opinion, that doesn't take the desperation and emotional manipulation into account. Also, these services exist worldwide, and not all countries have robust data protection laws.

  3. see: Samsung Smart TV ads, ads in your Windows environment, etc.

  4. Streaming services like Netflix and Spotify tend to be the most common culprits here.

#2024 #tech