The stellar rise of large social media networks has opened pathways for unethical, even criminal uses, such as defamation, hate speech, incitement to violence or electoral interference. Anyone is potentially at risk: children and youth, minorities, entire societies and perhaps… democracy itself?
Are our laws sufficient to protect users and democracy? Or does the regulation of popular networks endanger the right to freedom of expression? Science.lu from the Luxembourg National Research Fund (FNR) interviewed Prof. Mark Cole, an expert in European media law at the University of Luxembourg.
Calls for stronger regulation of social media are growing louder. What is the legal situation in Europe: how far does freedom of expression go – what can one say, and what not?
Mark Cole: It’s not an easy line to walk. First of all, freedom of expression is a fundamental right. It allows individuals to express their opinions freely in any medium. But freedom of expression also has a collective dimension. The European Union and its member states also see it as a contribution to democracy. After all, freedom of expression and opinion-forming are prerequisites for democratic elections.
Many believe that any restriction on what one is allowed to say endangers the very idea of freedom of expression. However, this right is not absolute. The state can certainly introduce rules and prevent the dissemination of certain content. According to EU and national laws, content that is clearly illegal is not covered by freedom of expression. This includes material depicting the sexual abuse of children, as well as the promotion of terrorist acts.
‟ Above all, we must not fall for the narrative that regulating digital services hinders innovation.”
Are insults and lies, such as those we see on social media even from prominent politicians, protected by freedom of expression because they are not illegal?
Mark Cole: Fundamental rights such as freedom of expression are always weighed against other rights and interests. When a critical expression of opinion becomes a punishable offence depends on the context. In Thai law, for example, even the slightest criticism of the king is severely punished. The target of the insult also plays a role. A politician has to endure significantly more criticism than a private individual. Legally, it becomes difficult when entire groups are insulted because then the question of victim status arises.
The expression of ‘false facts’ – that is, non-facts presented as being correct – is not protected as such by the right to freedom of expression. It is not automatically illegal, but it is potentially harmful. Whether lies are punishable also depends on the respective legal culture: have the facts been deliberately twisted, or is the false claim part of a person’s distorted world view? In Germany, for example, it is illegal to deny the Holocaust. Relativising the severity of the Holocaust is a terrible political conviction, but it may fall under freedom of expression. If charges are filed, the courts have to decide.
‟ Social platforms do not see themselves as media. Originating in the US, the prevailing attitude is that they only provide the technical platform for conveying opinions, just as telephone companies provide the lines for calls.”
But prosecution would take years, and in the meantime the insults and lies continue to spread. To what extent can the platforms be held liable at all?
Mark Cole: The question is indeed how legislators can regulate online platforms. Traditional media such as print media, radio or television monitor the truthfulness of messages before publishing them. But the social platforms do not see themselves as media. Originating in the US, the prevailing attitude is that they only provide the technical platform for conveying opinions, just as telephone companies provide the lines for calls. That is why they are subject to a so-called liability privilege, according to which the platforms are generally not responsible for content.
Illegal, punishable content is an exception. The operators must remove it from the network as soon as they become aware of it – the so-called ‘notice and take down’ principle. So there certainly is moderation of content. Facebook has trained its moderators for years with the help of a manual full of examples of what they have to delete. But in the European Union, it is believed that the large online platforms, with their billions of users, bear a great responsibility because of their enormous reach and must do more to combat hatred and violence than merely react to illegal content. That is why there has already been a legislative response.
The Digital Services Act has been fully applicable in Europe since 2024. This regulation governs digital services. What exactly does it allow and what does it prohibit?
Mark Cole: The liability privilege still applies. What is new, however, is that online platforms must now meet a duty of compliance. The European Union’s Digital Services Act, or DSA for short, requires online platforms to disclose to users, before they open an account, how the platform handles user content, what criteria it uses to screen content, what it deletes or blocks, and according to which logic internal processes such as the recommendation algorithm push certain content and advertising into a user’s personal feed. This is because researchers and regulators are only now beginning to understand how the platforms work and what role algorithms play – and they should be given even better insights.
The platforms are therefore obliged to be more transparent. Very large platforms such as Facebook and search engines such as Google must also regularly submit transparency reports. There are public transparency databases where you can follow – in general terms and in real time – how much content is currently being moderated. In addition, there is increased protection for children, e.g. a ban on targeted profile-based advertising for minors.
‟ I fear that it is only a matter of months before calls for ‘total freedom’ of social media also grow louder in Europe.”
Sanctions have also been introduced. The EU or national authorities can impose fines of up to six per cent of a platform’s annual revenue. It is debatable whether this will achieve much. But it does mean that there is much more intensive supervision in Europe than before – while the new US government is currently dismantling regulation.
Freedom of expression fundamentalists in the US consider European regulation to be downright crazy. Against this backdrop, the European Union is taking a remarkable step with its new digital law.
Is the young digital regulation already having an effect? Are there any prominent complaints?
Mark Cole: The EU has been very quick to initiate proceedings, for example at the end of 2023 against ‘X’, whose activities the European Commission considers to be potentially incompatible with European law. The proceedings are still ongoing, but I think the Commission will do everything it can to bring them to a conclusion as quickly as possible in order to set an example given the current political situation in the US, which is moving towards total deregulation. Proceedings are also underway against the Chinese platform TikTok, not least because the DSA subjects platforms that specifically target children and young people to rules on the protection of minors.
In addition, TikTok took a globally launched version of its platform, TikTok Lite Rewards, off the market in Europe because it could not have been offered in compliance with the EU Digital Services Act. This version would have further increased the ‘scroll-on effect’ and thus the addictive potential for users. So the new EU regulation is already having an effect.
The US Meta corporation, with its platforms Facebook and Instagram, made a U-turn in its stance on responsibility in the wake of the recent US elections. Does that make it more difficult to enforce the law?
Mark Cole: Politically, the situation has indeed become more difficult with Trump’s election. ‘X’ had already dismantled content moderation and controls before that, following its takeover by Elon Musk. Studies show that disinformation and hate speech on X have increased since the takeover. Facebook, on the other hand, had for years been committed to a code of conduct for the protection of users and employed many external fact-checkers, but surprisingly changed its policy after the elections.
Personally, I fear that it is only a matter of months before calls for deregulation and ‘total freedom’ for social media also grow louder in Europe, albeit not as vehemently as in the US. But despite these trends, our legal framework will have a measurable effect in the medium term.
In the US, Facebook wants to change its policy against disinformation, no longer conduct external fact-checking and rely only on community notes – i.e. warnings from users. Would that also be permissible in Europe, and is it enough to filter out disinformation?
Mark Cole: I don’t find Meta’s argument that internal controls or community notes can replace external fact-checkers convincing – especially not when it comes to disinformation, i.e. the deliberate spreading of falsehoods with the intention of deceiving and undermining a society. Examples of this are the Russian government’s election manipulation campaigns.
The new EU Digital Services Act requires the major online platforms to assess and mitigate risks and to take appropriate action against them. However, the law does not prescribe what these measures should be. Europe can, however, refer to the ‘Code of Practice on Disinformation’, a voluntary declaration that over 40 companies have signed – including Meta with its platforms Facebook and Instagram. Elon Musk’s ‘X’ has withdrawn its signature. Among other things, the code states that fact-checkers are indispensable in the fight against disinformation. From 1 July this year, the code will become a code of conduct under the DSA, a kind of industry standard that law enforcers in Europe can refer to.
All platforms must also have easily accessible complaint mechanisms so that users can report questionable content with a single click. In the US, however, such ‘flagging’ of content under the Community Notes procedure only has consequences if several people with different political views agree that a particular piece of content is problematic. This is already causing headaches for lawyers.
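To illustrate the ‘diverse agreement’ logic Cole describes, here is a minimal sketch in Python – a deliberately simplified, hypothetical model, not X’s actual Community Notes algorithm (which relies on a more elaborate statistical approach): a note on flagged content only becomes visible if raters from at least two different viewpoint clusters independently find it helpful.

```python
from collections import defaultdict

def note_is_shown(ratings, min_per_cluster=5, approval_threshold=0.7):
    """Simplified 'diverse agreement' rule: a note is only shown if raters
    from at least two viewpoint clusters independently find it helpful."""
    votes = defaultdict(list)
    for cluster, helpful in ratings:
        votes[cluster].append(helpful)
    # Require enough ratings AND a high approval rate in every cluster.
    enough_agreement = all(
        len(v) >= min_per_cluster and sum(v) / len(v) >= approval_threshold
        for v in votes.values()
    )
    # Agreement must span at least two different viewpoints.
    return enough_agreement and len(votes) >= 2

# A note endorsed by only one side of the spectrum stays hidden:
print(note_is_shown([("left", True)] * 10))                         # False
# Cross-partisan agreement makes it visible:
print(note_is_shown([("left", True)] * 6 + [("right", True)] * 6))  # True
```

The sketch makes the legal tension visible: content that many users from one camp report as problematic can remain entirely without consequences as long as the other camp does not concur.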
Who is responsible for enforcing European law on social media platforms?
Mark Cole: For the very large platforms that reach people across Europe, the European Commission. But there are countless smaller social media platforms in the member states, and for these, the member states must appoint authorities to enforce the law. Many states are lagging behind. In Luxembourg, the Competition Authority will be responsible, but the necessary law has not yet been formally adopted. So the structures for enforcing the new EU law do not yet exist everywhere.
Do US companies like X or Meta have to comply with EU law at all?
Mark Cole: Yes, if they offer their services to people living in the EU. In any case, companies like Meta have a corporate presence in Europe. If they still do not comply with EU law, then in theory the market could be closed to a US company entirely. Australia, for example, is attempting something along these lines: the government there has banned platforms from offering their services to young people under the age of 16.
‟ Perhaps the time has come to regulate the use of social media in the same way that legislators protect young people from alcohol or drugs.”
Would the Australian solution also be enforceable in Europe?
Mark Cole: It is quite conceivable that Europe will move in this direction. Personally, I don’t have any social media accounts for data protection reasons, but I keep myself informed and have been researching the regulation of social networks for over ten years. During this time, public opinion on the use of digital services has undergone a complete turnaround. Initially, many feared that our students were coming into contact with digital services too late and that Europe would be left behind in the global technology race. Now, the main concern is the damage caused by the overuse of mobile devices, the internet and social media.
Perhaps the time has come to regulate the use of social media in the same way that legislators protect young people from alcohol, violent content or drugs through youth protection laws. I don’t mean absolute bans, but regulated access to protect children and young people. Enforcing this legally would not be easy. On the other hand, technical solutions are currently being developed to make age checks more effective when young people open a social media account, for example with the help of facial recognition.
What else helps, apart from bans and regulation: education, media education?
Mark Cole: There are no easy solutions.
The most important counterweight to unmonitored content in social media is the traditional media with their editorial responsibility. Europe has a very diverse and strong landscape of newspapers, TV and radio stations, including public ones. This media diversity must be protected. Luxembourg is already doing this through its state press subsidy, which is designed to maintain media diversity.
Secondly, it takes a great deal of knowledge to recognise problematic content. This applies to all users, young and old, but especially to young people. They no longer use traditional media to check whether a news item is true. Media education is therefore essential, not least to understand how a recommendation algorithm works. For example, if a user watches the interview between Elon Musk and the leader of the AfD, Alice Weidel, on X, the algorithm considers that user to be an AfD supporter and then regularly feeds them AfD videos in their personal feed.
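To make this mechanism concrete, here is a minimal sketch in Python of such an interest-based feedback loop – an illustrative toy model with invented names, not the actual algorithm of X or any other platform: every view strengthens the inferred interest profile, and the profile in turn biases what gets recommended next.

```python
from collections import Counter
import random

def update_profile(profile, watched_topic):
    """Every view nudges the inferred interest profile towards that topic."""
    profile[watched_topic] += 1

def recommend(profile, catalogue, k=3):
    """Sample the next videos, biased towards the topics watched most often."""
    weights = [1 + profile[video["topic"]] for video in catalogue]
    return random.choices(catalogue, weights=weights, k=k)

# Invented toy catalogue; 'party_x' stands in for any political label.
catalogue = [
    {"title": "Party X interview",   "topic": "party_x"},
    {"title": "Cooking show",        "topic": "food"},
    {"title": "Football highlights", "topic": "sport"},
]

profile = Counter()
update_profile(profile, "party_x")  # the user watches ONE political interview

# From now on, political content is systematically more likely to be shown,
# and every further view reinforces the profile - a feedback loop.
print([video["title"] for video in recommend(profile, catalogue)])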
Furthermore, everyone can set an example. The Grand Ducal Court has announced that it is leaving the ‘X’ platform; the University of Luxembourg already closed its account in 2023. More and more institutions no longer want to operate in the X environment. This will not impress Elon Musk, but it is a good statement. Of course, it is difficult for a politician to campaign without such a communication channel – but forgoing it shows the values one stands for.
Can social pressure persuade large platforms to regulate themselves more strongly out of concern for their image and advertising revenue?
Mark Cole: Our market of around 450 million people in the European Union is economically too important for the platforms to be able to ignore its laws and rules of the game. That may not apply to X, since Elon Musk is more concerned with political influence than with economic gain.
But platforms like Facebook have an economic interest in stable societies: they want to continue to generate high revenues. I therefore hope that economic pressure and a mixture of regulation and self-regulation will be enough to ensure that Facebook & Co. keep external fact-checkers on social media in Europe and respect the limits set for them.
Above all, we must not fall for the narrative that regulating digital services hinders innovation. Many critics do not want to acknowledge the extent of the dangers of social media, and regulation is inconvenient for them. But the dismantling of rules must not become the new common currency in Europe.
Author: Britta Schlüter
Editing: Michèle Weber, Jean-Paul Bertemes (FNR)
This article was first published by the FNR on www.science.lu.