on the erosion of meaning & data as water
For almost a decade now, we’ve been ramping up ordering from AliExpress, Temu, Shein and various dropshippers. Customers who order there are fully aware the item might never arrive, arrive broken, break soon, or that the product image was a lie and they’ll get a messed-up-looking knockoff instead. It’s normalized to order with this risk in mind and to be pleasantly surprised when you receive something good (or something at all). I think there was a time this wouldn’t have flown, but now it’s the standard.
This acceptance of “good enough” - I see it elsewhere, too. When I look at what the big social media platforms have to offer, what I currently see is a flood of intentional low quality that tends to drown out the gems.
As Dan Sinker has already written, we are in the Who Cares Era. Media created online is "content"1, where quality is increasingly irrelevant, which makes craftsmanship irrelevant as well. The question is no longer whether you can do it well; the question is how much and how fast you can do it.
Where previously, titles and thumbnails seemed to be understood as a little science by the people invested in making them, they are now whatever, as the ways we consume content shift:
- The video will likely not be sought out and picked by a human judging the thumbnail, but suggested, so it is created to appease an opaque algorithm, not human eyes.
- It is made for a feed that people flick through, most likely remaining on a video for a couple of seconds, so the content reflects that.
- The most-watched content is made with the awareness that everything is now a trend for a maximum of a week, likely less.
- And as even Netflix has noted and is adjusting its service and shows accordingly, video is now often background noise as people scroll or do something else on the side.
For these viewing habits, it doesn’t matter that the bad AI art in the thumbnail gives the human seven fingers or morphs their hand into someone’s shoulder, or that the subtitles the creator included in the video are full of typos and wrong acronyms. It doesn’t matter that something is badly researched or mispronounced - it’s for the moment, and it matters only that it is released in time.
To save time and be the fastest, it’s okay to drop quality; content is created with impending irrelevance in mind, knowing the trend or topic will die later that week and that no one will watch it anymore in a few months anyway. The same goes for posts on a feed, as it only ever shows the most recent stuff and almost no one will go back to read your entire backlog of posts (most apps make that hard enough anyway). The algorithm will rarely, if ever, resurface years-old posts. All in all, on the big sites, things are created to be forgotten.
All of that is an erosion of meaning in what we put out there. Intent and craftsmanship get lost as attention to detail is forgone to prioritize frequency of output, cutting corners to be the first to post a take, do something more outrageous, or cover a situation.
I wonder if it’s a self-feeding cycle: consuming bad content the creator didn’t bother to do well is dissatisfying and not worth paying much attention to, and creators in turn realize viewers don’t care that deeply and that they can get away with cutting details they feel no one pays attention to. They are especially demotivated now that AI content farms rake in the money while they themselves cannot keep up with that posting schedule through traditional means.
In my opinion, the flood of low-effort, half-broken and fragile products made to break, the growing experience that nothing works anymore, as well as the commercial slop online I detailed above, have paved some of the way toward being fine with AI that only works correctly some of the time. I obviously cannot prove it, but I wonder whether we would have been so accepting of LLMs’ hallucinations and their repeated failure at what they’re advertised for just 15 years back (had that been possible then). I feel like the expectations for products, and our relationship to them, were slightly different.
In general, many are unwilling to judge the tech sector (not just social media or AI, but also phone providers, entire ecosystems and hosting companies) like any other essential service provider, such as utilities, despite granting it a role in our lives as important as electricity, water or the internet. You might laugh at that comparison, but many people’s private lives and work now depend on an internet connection and computers, and that logically includes 2FA apps, digital wallets, online banking, email, cloud drives, messengers and calendars. Their entire life is on and organized around ecosystems from companies like Meta, Apple or Google, and getting kicked out hits them insanely hard, with a lot less recourse than you’d get against other providers of essential services. Their automated system banned you? Good luck; you will never even talk to a human in customer service. Your data portability is also gone, along with the rest of your account.
There is a level of reliability, uptime, professionalism and recourse you expect from services this important, which tech companies have largely managed to avoid, and this avoidance is only emboldened by the pervasive “good enough” culture and by outsourcing support to chatbots and other automated tools.
It hasn’t clicked for people yet that at this point, data and its protection have become as important a good as water.
I don’t mean this like we need data like water to survive; that would be silly. I mean: Just as we need regulation against poisoning water to protect us and our environment, we need protections so that data isn’t collected and compiled against our will and used to discriminate against us. Just as we need reliable water access and behavior from water providers, so water isn’t randomly shut off based on a database error, we need to make sure that our access to our data isn’t cut off with no recourse or appeal. Not having these protections has different but nonetheless devastating outcomes.
But still, we read news article after news article about another irresponsible and harmful output by an LLM like Grok, about harmful and wrong AI-generated books misleading people, about AI facial recognition, data selling, data breaches and DMA and GDPR noncompliance, and there is no understanding that this is akin to electricity and water providers severely messing up and, where applicable, violating laws that exist for good reason. We have gotten too comfortable with having no say in how our data is used and how tech services are provided to us, when we should actually be demanding more accountability and consumer protection from tech providers.
quick intermission: why data is important
Millions of texts must already have been written countering “I have nothing to hide” and similar rhetoric about why you shouldn’t care what these companies do with your data, but I’ll briefly write about it nonetheless: not every breach of law is unethical, and not every law is just. If you believe you have nothing to hide because you are not a criminal, you operate under the assumption that things will stay as they are and that they’re never going to criminalize something you do or are, when that is not guaranteed.
When you think of having nothing to hide, you think of not having murdered someone. But when I think about protecting your privacy, I think about the things you share with your tech that are still legal but could be criminalized in the future, as they were in the past. Where you think you’ve never shoplifted so you should be fine, I think about your family member, friend or coworker who had to get lifesaving medication through illegal means because it was unavailable or unaffordable. I think about your past, then-legal abortion that could come out and ruin your current career or access to goods and services2. I think about the online sex work many engage in that can be grounds for terminating important payment options.
It’s never been easier to find incriminating things about you that weren’t incriminating when you did them. It’s also never been easier to get enough information about you to adjust pricing and wages against your interests, or be enough to close your Google account and leave you stranded. Just food for thought. The logical conclusion to these problems is more data protection and more control over your access to essential online services.
I know water, electricity and other basic goods are not as safe and reliable in many parts of the world, but it is precisely the privileged countries pioneering AI that are used to not having to wrestle with their providers every other week, and that would not let it slide if the internet they pay for didn’t work as expected or could be randomly taken away from them. We have the privilege of some legal protections and often the ability to switch to a competitor when it comes to utilities, but tech is so concentrated among a very few companies that this is not feasible. That makes the need for accountability and competition even more urgent.
They want us to adopt and incorporate AI into most of our lives and our work, but realistically, where is the utility-grade reliability to match? If some workers really are replaced with AI or are expected to offload most work and knowledge to it (and are hired with this in mind), what happens when it’s down, has become unusable, the subscription has run out, or there’s an automatic ban or hold? How about Replit’s AI agent ignoring a code freeze and deleting an entire production database? This unreliability is excused right now, while we’d be pretty pissed if the same thing happened with any other basic good, and we are rightfully pissed that this is the state of search engines or public transport in many areas, so why give this a pass?
Published 20 Jul, 2025
My feelings on that word here, but in this case, it is apt.↩
Usually, law is applied with the principle that you cannot be found guilty of something that was not illegal when you did it, but I wouldn’t count on that either. If they wanna get rid of undesirable people, they’ll find ways to do so; “it wasn’t illegal when I did it” will not save you from an authoritarian regime, and tech’s biggest investors want one.↩