The Radicalization Machine

01.04.2026

Anyone who watches videos on YouTube knows the effect: you start out interested in vegetarian recipes, a few clicks later you are shown vegan content, then fruitarianism, and eventually you end up at the sunlight diet. As soon as the algorithm thinks it has identified a preference, it caters to it and reinforces whatever works. Topics become increasingly extreme, polarized, and emotionally charged, all with the goal of holding users’ attention for as long as possible.
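To make the mechanism tangible, here is a deliberately simplified toy model in Python. It is not the code of any real platform; the catalog, the intensity scores, and the escalation step are all invented for illustration. The only thing it demonstrates is the feedback loop: recommend what was engaged with last, nudged slightly toward the more intense end of the spectrum.

```python
# Toy model of an engagement-maximizing recommender (illustration only,
# not any platform's actual code). Each item has a topic and an
# "intensity" score between 0.0 (mainstream) and 1.0 (fringe).
CATALOG = [
    {"title": f"video_{i}", "topic": "nutrition", "intensity": i / 10}
    for i in range(11)
]

def recommend(last_engaged_intensity: float) -> dict:
    """Return the item closest to what worked last time, nudged upward."""
    target = min(1.0, last_engaged_intensity + 0.1)  # reinforce and escalate
    return min(CATALOG, key=lambda item: abs(item["intensity"] - target))

# Simulate a user who simply watches whatever is recommended next.
engaged = 0.0  # starts with harmless mainstream content
for step in range(8):
    item = recommend(engaged)
    engaged = item["intensity"]  # "what works" is fed back into the loop
    print(step, item["title"], round(engaged, 1))
```

After a handful of iterations the simulated feed has drifted from intensity 0.0 to 0.8 without the user ever asking for more extreme content; the drift is a property of the objective, not of the user’s intent.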

Since the spectacular jury verdict in the U.S. against Meta and YouTube, many have been asking what auto-play, endless scrolling, algorithmic amplification, and thematic radicalization will do to us in the long run. The jury in Los Angeles found that the design of the two platforms had negligently harmed the mental health of a young plaintiff. If this view prevails in further cases, another question follows: who will set the rules of digital spaces in the future?

The Co-Evolution of Technology & Law

Let’s take a look at recent German history: the media regulatory framework in this country, with the Interstate Broadcasting Treaty at its core, did not come about by chance. It is a direct response to the totalitarian control of public discourse under National Socialism, built on clear rules, supervisory bodies, and a social mandate. Its successor, the Interstate Media Treaty, has been in force since 2020 and now also covers so-called telemedia, i.e., websites, streaming services, and platforms. It regulates pluralism of opinion, non-discriminatory accessibility, protection of minors, and transparency.

As good and well-designed as this system is, it is important to recognize that legal frameworks such as the Interstate Media Treaty apply only in Germany, while platforms like YouTube and TikTok operate globally. YouTube, for example, is based in California; TikTok has its European headquarters in Ireland. Questions of jurisdiction, data protection, and age verification are therefore complex and are answered differently across legal systems and political perspectives.

Digital spaces are by no means “lawless”; rather, the problem lies in the lack of international coordination and enforcement of common rules for global platforms. If you’re looking for a well-researched, up-to-date discussion on the co-evolution of technology and jurisprudence, this article by Martina Eckardt comes highly recommended. It examines both structural challenges and the evolution of regulation in the context of digital platforms.

From Niche Discourse to Mainstream Issue

Amid all these gray areas, one thing becomes clear: today’s most powerful digital platforms largely operate beyond the effective reach of established laws and international agreements. Rules apply here only to a limited extent, if at all. This becomes particularly evident in the use of algorithms - the technological tool at the heart of the recent ruling against Meta and YouTube. Anyone who has ever been trapped in the rabbit hole of social media knows the power of these invisible shackles: you are driven endlessly through feeds and recommendations until your sense of time dissolves and your attention span crumbles.

A consensus is now emerging that this level of influence and dependence is no longer acceptable. German news outlets like ZDF are dedicating special programs to the topic - such as “How Social Media Makes You Addicted - and How to Fight Back” - a clear sign that the issue has entered the mainstream. But with legal enforcement lacking and no unified political response in sight, the question arises: what can be done at all?

Who Benefits from Our Dependence?

First and foremost, the platforms themselves. Their algorithms maximize engagement, that is, the measurable time spent on the platform. Every additional minute boosts advertising revenue and data collection. The biggest driver, however, is the advertising and marketing industry, for which every interaction is pure profit. It uses the data to influence us in a targeted way: it shapes what content we see, what products we buy, and what political opinions we form. Data brokers like Acxiom, Experian, and Equifax collect and link vast amounts of consumer data, often far beyond what users consciously disclose. This data is then aggregated by specialized entities like Oracle Data Cloud and prepared for resale.

On top of that come the platforms and advertising networks themselves: through systems like Google Ads or Meta Platforms’ advertising infrastructure, this data is used to predict behavior accurately and influence it in a targeted manner. Finally, global advertising conglomerates such as WPP, Publicis Groupe, and Omnicom Group strategically leverage these data streams and translate them into campaigns.

This escalation reveals the extreme logic of today’s capitalism: While Karl Marx analyzed the value of human labor, he could hardly have imagined that our attention and time themselves would one day become commodities. We voluntarily give away our attention and pay with our time. The marketing industry rakes in revenue from our clicking behavior and, on top of that, accumulates our collective intelligence in the form of data. What a lucrative deal!

Right-Wing Radicalization

The dynamics I describe in this article are not merely problematic from an economic or health perspective. Algorithms designed to maximize clicks and likes drive us into echo chambers and exacerbate the growing polarization of society. Content that generates anger, fear, or outrage is algorithmically prioritized because it increases engagement, and so we become targets of radicalized political messages. Radicalization and manipulation become part of the business model, while the platforms profit from our division. In these spaces, right-wing populist narratives, misogyny, sexualized violence, and hate speech against migrants thrive, and every outcry, every call to hate generates engagement and thus money. Our attention is not only monetized but simultaneously exploited to shift the political spectrum to the right.

Back to the Future

Twenty years ago, in the early days of YouTube and Facebook, these platforms were technological and social infrastructures centered on serendipity, creativity, and the community experience. While algorithmic curation existed from the start, it is only in recent years that algorithms have become the dominant guiding principle. With the optimization of dwell time and engagement, platforms have fundamentally changed: from social networks to highly controlled attention-grabbing machines.

I also remember very well how the short-form messaging service Twitter, now X, became my digital home starting around 2008, and how I learned practically everything I know about social discourse and exchange there. Until Elon Musk came along, Twitter was a place of learning and exchange for many: discussions arose organically, trends were set solely by the community, and you could still watch ideas and arguments gain traction. It is precisely to this moment, to this form of social learning, that I would like to return. Back to the future. Into a space that once again allows for community, creativity, and genuine debate.

What Should Be Done?

A bold claim: we should immediately stop viewing corporate social media as a space for discourse, because genuine discussion is structurally impossible there. The logic of these platforms is built not for exchange but for extraction. That does not mean, however, that communication on the internet is doomed to fail. It can work if spaces are deliberately designed for it: when clear rules apply, enforced for example through a code of conduct or moderation, and when platforms are built to promote constructive exchange.

The most effective measure lies in all of our hands: withdrawing our attention. Less time on social media, less interaction, less fuel for the algorithms. Only when the constant stream of clicks and likes dries up will the systems lose their foundation.

At the same time, alternatives have long existed that demonstrate how digital communication can function differently. Moderated forums built on software like Discourse, or federated networks like Mastodon, rely on clear rules, moderation, and structured discussions in which arguments matter, not escalation. Decentralized systems like Matrix escape the logic of centrally controlled algorithms and allow communication to be organized chronologically or thematically.

Collaborative tools such as Etherpad, HackMD, and Nextcloud Talk also create spaces for collective thinking and writing. In addition, newsletters, blogs, and podcasts offer formats that emphasize depth and reflection. Finally, deliberative platforms like LiquidFeedback and Decidim demonstrate that structured digital participation is possible when moderation and clear procedures work together.
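To illustrate how different the underlying logic of networks like Mastodon is, here is a minimal Python sketch that reads the public timeline of a Mastodon instance via its documented REST API. The instance name is only an example, and many instances serve this endpoint without authentication; the point is that the result is plain reverse-chronological order, with no engagement ranking in between.

```python
import requests

# Fetch the public timeline of a Mastodon instance (example instance only).
# The endpoint returns posts in reverse-chronological order; there is no
# engagement-based ranking involved.
INSTANCE = "https://mastodon.social"

resp = requests.get(
    f"{INSTANCE}/api/v1/timelines/public",
    params={"limit": 5, "local": "true"},
    timeout=10,
)
resp.raise_for_status()

for status in resp.json():
    # Each status carries its timestamp, author, and a link to the original post.
    print(status["created_at"], status["account"]["acct"], status["url"])
```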

But who actually uses these platforms?

Around the world, more and more people are looking for digital spaces where they can retain control. Platforms like Mastodon and Nextcloud are seeing growing user numbers, especially in the wake of crises involving major services, such as Elon Musk’s takeover of Twitter in 2022, when hundreds of thousands flocked to Mastodon. These communities are still small compared to YouTube, TikTok, or X, but they clearly demonstrate that alternatives exist. By becoming active here, we deprive the big platforms of their most valuable resource - our attention.

Ways Out of the Radicalization Machine

Algorithms designed to maximize engagement have turned once-open social spaces into attention traps. While we are distracted, platforms, the advertising industry, and data brokers profit; our attention becomes a currency. Yet many do not break free from this cycle, because they are addicted to the very content that harms them. A jury in the U.S. has just ruled that this is no longer tolerable and ordered Meta and YouTube to pay millions of dollars in damages.

What can we learn from this? The dynamics of radicalization are not inevitable. Those who understand the mechanisms can act consciously: withdraw attention, reduce engagement, and turn to platforms and formats that promote dialogue, creativity, and trust. A growing number of moderated forums, decentralized networks, collaborative tools, and thoughtful media formats demonstrate that constructive exchange is possible in digital spaces.

The radicalization machine isn’t invincible. It exists because we give it our attention. If we consciously choose where and how we communicate, we can disrupt it and reclaim digital spaces. A return to the future is possible if we regain control over our attention. For some of us, myself included, this will feel like going through withdrawal, which is why I recommend starting with small steps. Perhaps by creating an account on Mastodon and checking whether people you know from other networks are already active there. Or by no longer consuming news exclusively via YouTube and instead using a news aggregator like EUCube, which searches for content matching your thematic preferences through the open interfaces of all kinds of news portals and curates it clearly: without ads, without pressure to engage.
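The same principle can be rebuilt by hand with very little code. The sketch below is not EUCube’s implementation; it simply uses the open RSS/Atom interfaces that most news portals publish, with placeholder feed URLs and keywords standing in for your own sources and topics, and the third-party feedparser library for parsing.

```python
import time

import feedparser  # third-party library: pip install feedparser

# Placeholder feed URLs and topics: replace them with your own sources.
FEEDS = [
    "https://example.org/politics/rss",
    "https://example.org/technology/atom.xml",
]
TOPICS = {"regulation", "platform", "privacy"}

def matches(entry) -> bool:
    """Keep an entry if one of the chosen topics appears in its title or summary."""
    text = (entry.get("title", "") + " " + entry.get("summary", "")).lower()
    return any(topic in text for topic in TOPICS)

items = []
for url in FEEDS:
    feed = feedparser.parse(url)
    items.extend(entry for entry in feed.entries if matches(entry))

# Sort newest first: a purely chronological, ad-free reading list,
# with no engagement ranking deciding what you get to see.
items.sort(
    key=lambda e: time.mktime(e.published_parsed) if e.get("published_parsed") else 0.0,
    reverse=True,
)
for entry in items:
    print(entry.get("published", "no date"), "-", entry.get("title", ""), "-", entry.get("link", ""))
```

You decide the sources and the keywords; the script has no notion of engagement, so nothing in it can escalate.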

What do you gain from this? Autonomy over your own topics and opinions. Personal safety, because you don’t expose yourself to extremist discourse. Increased focus, because you decide for yourself when enough is enough.

Perhaps the greatest freedom in the digital age lies not in being able to see everything—but in consciously choosing what we no longer want to see.