Prof. Jun Pang contributes to new ACM Europe policy brief.
Artificial intelligence is moving faster than many expected, and the latest developments are no longer just about smarter chatbots or better prediction tools. A new kind of system is emerging, often called Agentic AI, that doesn’t simply wait for instructions: it can decide what to do next, set its own goals, and act on them. The shift might sound subtle, but it has far-reaching consequences.
Recognising this turning point, Prof. Jun Pang from the University of Luxembourg has contributed to a new policy brief from the ACM Europe Technology Policy Committee, examining how Agentic AI fits (or rather, doesn’t quite fit) within the current EU regulatory landscape.
Agentic AI: ChatGPT but stronger?
People often ask whether Agentic AI is just “ChatGPT but stronger.” Not exactly. While tools like ChatGPT respond to whatever we type, Agentic AI systems might decide on their own that a task should be completed, or that a certain approach is more efficient, and then go ahead and execute it. As Prof. Pang puts it, “ChatGPT is something you steer. Agentic AI behaves more like a co-worker who takes initiative, sometimes in ways you didn’t explicitly request.”
This idea of initiative is the core difference; it’s not about intelligence levels but about the ability to act independently.
Why does this matter for regulation?
European regulation, including the much-discussed EU AI Act, was built around the assumption that AI behaves like other software products: predictable, bounded and under human command. Agentic AI breaks that frame. When a system can pursue goals autonomously, the old vocabulary of “risk categories” and “intended use” becomes harder to pin down.
Prof. Pang notes that current rules are fundamentally static. They assume that once a system is approved, its behaviour can be understood and managed. But an autonomous system might evolve, interact with other agents, or behave in ways that weren’t anticipated during certification. In other words, the challenges don’t end once the system is deployed; they start there.
The policy brief points out that this gap is significant enough that Europe will likely need new forms of ongoing supervision rather than one-off assessments.
How could this affect day-to-day life?
For most people, Agentic AI will probably show up first as convenience: services that handle routine tasks before you ask, or digital tools that coordinate small parts of your day without being prompted. Imagine an assistant that not only reminds you to renew a document but actually starts the process on its own.
But more autonomy also means new kinds of mistakes or, in some cases, new openings for misuse. A system making its own choices could unintentionally spread misinformation, act on outdated objectives, or interfere with human workflows in ways that create real disruption. These practical challenges will require careful management.
Agentic AI as a personalised tutor
Education may be one of the first areas where Agentic AI becomes visible. A system that adapts lessons to each student could transform learning, benefiting students who struggle in traditional classrooms. It wouldn’t just answer questions, but also plan the next steps in the learning journey.
Prof. Pang also points out that researchers are already seeing early signs of AI agents proposing hypotheses or running small experimental steps. These aren’t replacements for scientists, of course, but they can act as useful collaborators. The idea of an “AI scientist” still feels experimental, but it’s no longer theoretical.
“Agentic AI could act as a personalised tutor, adapting lessons to each student’s needs.”
— Prof. Jun Pang, Professor in the Department of Computer Science
What happens now?
The ACM Europe Technology Policy Committee frames the work as a starting point, a prompt for deeper debate. Agentic AI is evolving fast, and regulation usually moves at the pace of institutions, not algorithms. Keeping the two in dialogue will be an ongoing project.
Prof. Pang joined the ACM Europe policy group because he believes technical expertise must sit at the same table as policymakers. “Technology policy shouldn’t be driven only by markets,” he notes. “Academics help keep long-term societal impact in view.”
His involvement reflects a broader point: without researchers helping to interpret what these systems can actually do, regulation risks being either too timid or too rigid.