Dr. Elif Biber is a legal scholar in European Public Law and Digitalisation at the University of Luxembourg, working at the intersection of law and emerging technologies. Her forthcoming book, “A Rights-Based Inter-Legal Approach to Artificial Intelligence” (Hart Publishing, Oxford), is one of the first single-authored books worldwide on AI and fundamental rights. It seeks to answer the question: How can legal systems meaningfully respond to complex, rapidly evolving phenomena like AI?
She argues that while artificial intelligence is often presented as something entirely new, at its core it raises enduring questions about power, responsibility, and the role of law in society. Today, however, these questions arise in a context where data and information have become strategic resources, central to what is often described as digital or information capitalism. In this environment, information is no longer a by-product of technology: it is a key driver of economic productivity, social organisation, and shifting power relations.
Three questions to understand what’s at stake
In this interview, we explore Dr. Biber’s research into how the complex layers of legal, corporate, and regulatory systems interact with the questions raised by the growing use of artificial intelligence, and why ordinary citizens should be concerned.
AI systems are not merely technical tools; they increasingly function as socio-technical systems, actively mediating and reshaping institutions, decision-making processes, and power structures. Their applications now extend across a wide range of domains, including social services, public administration, and essential human services, where they influence outcomes that directly affect individuals and communities.
In this sense, AI regulation is neither abstract nor purely technical; it is deeply embedded in everyday life. The rules governing AI shape how decisions are made, how resources are distributed, and how individuals experience fairness, accountability, and rights in practice. Increasingly, decisions that affect individuals are mediated by AI systems. These include whether you are approved for a loan, how your personal data is collected and used, what content you see online, and how public services interact with you.
From an inter-legal perspective, these decisions are governed by a complex mix of legal and non-legal rules: national laws, European frameworks, international standards, and corporate policies. If the underlying rationales of these different layers are not properly understood or given due consideration, individuals may face bias and discrimination, lack of transparency, weak accountability, and limited avenues for redress.
| Important to know: AI is not merely a supportive technology: it can play a primary decision-making role, influencing outcomes that were once determined by human judgement. |
The theory of inter-legality starts from a simple but powerful observation: law today does not operate as an isolated and self-contained system. Instead, it functions as a complex web of overlapping legal orders: national laws, EU law, international human rights frameworks, and even private rules such as corporate policies or technical standards. In simple terms, instead of asking “which law applies?”, inter-legality asks: “how do different legal systems interact, influence each other, and sometimes conflict in a particular situation?”
For example, consider an AI system used across borders. Its operation may be shaped simultaneously by EU data protection rules, national legislation, international human rights obligations, and internal company policies or ethical guidelines. No single framework fully governs the system; rather, the outcome emerges from their interaction. This approach is particularly useful for AI because AI systems inherently cross borders and regulatory domains – from privacy and intellectual property to human rights and product liability. Moreover, in today’s complex legal environment, a single issue may be governed simultaneously by state law, transnational regulation, and general legal principles, reflecting a shift away from the dominance of purely national legal systems.
| In short: Inter-legality helps us make sense of complexity by treating law as an interconnected structure rather than a hierarchy. Inter-legality provides the conceptual tools needed to understand and regulate AI in a world where legal authority is increasingly plural, layered, and interdependent. |
Balancing innovation and protection is often framed as a trade-off, but from an inter-legal perspective it is better understood as a question of coordination rather than choice. The aim goes beyond choosing between freedom and regulation; it is to ensure that multiple layers of rules function together, so that no normative actor is reduced to a tool of domination.
In practice, this means allowing flexible spaces for innovation while maintaining non-negotiable safeguards, including fundamental rights, data protection, and accountability.
For example, an AI system used in public administration such as one assisting with welfare allocation, tax assessments, or migration decisions may operate within efficiency-driven frameworks, but it must still comply with strict requirements related to fairness, transparency, due process, and the protection of fundamental rights.
European courts and institutions have increasingly recognised that new technologies can have “serious” and “extensive” impacts on fundamental rights, particularly the right to respect for private life. Cases involving surveillance technologies, facial recognition, and data processing demonstrate that existing legal frameworks are often insufficient or outdated, requiring both judicial interpretation and legislative adaptation.
At the same time, the digital environment has transformed how power operates: public authority is increasingly exercised through hybrid arrangements involving private actors, such as technology companies.
| The case for an inter-legal approach to AI regulation: An inter-legal approach allows us to see regulation as a dynamic interaction between public law, private governance, and technical standards. This makes it possible to adapt to fast-moving technologies without creating harmful regulatory gaps or imposing rigid constraints that could stifle innovation. |
An opportunity to shape society for the better
By bringing different legal layers into dialogue, a rights-based inter-legal approach helps prevent the concentration and abuse of power while ensuring that public purposes are pursued in a balanced and accountable manner. Rather than treating public goals as fixed or purely technical, this approach subjects them to continuous, qualified legal scrutiny, aligning innovation with fundamental rights and democratic values.
When legal layers are properly examined and given due voice – supported by rights-sensitive judicial interpretation – AI can enhance efficiency and innovation while still protecting fundamental rights, dignity, and equality. Ultimately, the way AI is regulated will shape not only technological development, but also the kind of society we live in: how power is distributed, how rights are protected, and how trust in institutions is maintained.
Book: A Rights-Based Inter-Legal Approach to Artificial Intelligence
In her forthcoming book, Dr. Biber explores not only the challenge of balancing innovation with fundamental rights, but also the real-world consequences of AI regulation for individuals, democratic institutions, and the rule of law.
She engages with recent case law from European courts on advanced technologies, alongside real-world use cases, and critically analyses the emerging framework of the EU Artificial Intelligence Act. At the theoretical level, she draws on the theory of inter-legality – articulated by Jan Klabbers and Gianluigi Palombella – to examine how different layers of law interact, overlap, and at times come into tension when regulating AI. Rather than treating law as a single, unified system, she approaches it as a dynamic and interwoven legal landscape, where national, European, international, and private norms jointly shape how AI is governed in practice.