
AI Insights: Importance of AI Literacy

  • Competence Centre
    18 November 2024
  • Category
    Education
  • Topic
    Computer Science & ICT

It is next to impossible nowadays not to come across Artificial Intelligence (AI) in our daily lives. To help demystify AI and show what it actually is, and what can and cannot be done with it, we talked to Sana Nouzri, Postdoctoral Researcher at the University of Luxembourg.

Can you share a bit about your career path and what motivated you to specialize in artificial intelligence, particularly at the intersection of AI and education?
My journey into artificial intelligence (AI) has been both intentional and fueled by a deep, longstanding interest in the field. During my first role at Cadi Ayyad University as an assistant professor, I focused on teaching and specializing in software engineering and multi-agent systems, which gave me a strong technical foundation. However, AI had always been a passion of mine, and I was determined to incorporate it into my work. I dedicated time to self-study and completed an MIT certificate to dive deeper into this fascinating world of AI. When I joined the AI Robolab at the University of Luxembourg as a postdoctoral researcher, I had the chance to work directly at the intersection of AI and art. This experience was transformative; supervising data science students and collaborating with artists from various disciplines allowed us to push creative boundaries, culminating in an exhibition of AI-enhanced artwork. That adventure showed me the collaborative power of AI and solidified my dream of enhancing education through this technology.
My work extends beyond research papers and academic settings; I'm committed to sharing AI concepts with others, whether through courses, workshops, webinars, or community initiatives. It's inspiring to see how AI can empower people to think differently and approach problems from fresh perspectives. This motivation keeps me constantly learning and innovating, because I believe we're only at the beginning of understanding AI's potential in education and creative fields. Today, I'm deeply invested in bringing AI into education. I love developing prototypes with students, where they not only learn but contribute to creating new, exciting possibilities in the field. My journey in AI continues to be shaped by these experiences, and I'm constantly inspired to explore what's next, especially in how we can reimagine learning through AI.

As a researcher at the AI RoboLab, you've been involved in workshops that teach AI literacy to students. In your view, what role does AI literacy play in today's and future school curricula?
AI literacy is essential in both today’s and future school curricula for several reasons. Firstly, it provides a foundation for future careers. As industries increasingly adopt AI technologies, students need to be equipped with the knowledge to understand and contribute to fields like data science, robotics, and AI development. Including AI literacy in school programs ensures that students are prepared for these future opportunities.
Secondly, AI literacy fosters critical thinking, especially regarding ethical use. When students learn how AI systems function, they also learn about the ethical challenges these technologies present—privacy concerns, bias in algorithms, and the societal impacts of AI decision-making. By embedding these discussions into education, we ensure that students grow up aware of the responsibilities that come with AI.
Thirdly, it demystifies AI. Many students interact with AI in their daily lives—through platforms like social media or virtual assistants—but they often don’t understand the processes behind them. By introducing AI concepts, such as neural networks or natural language processing, in workshops like Smart Photo Booth and CHATWISE, we make AI more accessible and understandable. This also helps promote inclusivity, especially for students who may not traditionally see themselves in tech careers.
Fourthly and finally, practical skills are a major benefit. AI literacy goes beyond theory. In workshops and activities, students get hands-on experience with coding and machine learning. This doesn't just teach them about AI; it develops problem-solving and creativity, skills that are valuable in almost any field.
In summary, AI literacy is about preparing students for the future, both in their careers and as participants in a world increasingly shaped by AI. My work at the AI Robolab, through outreach like the CHATWISE and Smart Photo Booth projects, is a perfect example of how this literacy can be effectively integrated into education. It’s not just about understanding AI, but using it thoughtfully and responsibly.

One of the key aspects of your work is ensuring the ethical use of AI in education. What do you see as the most pressing ethical concerns surrounding AI in the classroom, and how can educators and institutions mitigate these risks?
The ethical use of AI in education raises several significant concerns that must be addressed to ensure its responsible integration into the classroom. One of the main issues is the risk of misinformation and hallucination: Large Language Models (LLMs) are trained on vast datasets, and they can sometimes generate content that is inaccurate, misleading, or even entirely fabricated. This is particularly problematic in educational settings, where students might use AI tools to complete assignments without realizing that the information provided could be incorrect. In CHATWISE, we address this drawback by raising awareness among students about these risks, emphasizing the importance of critically evaluating AI outputs and verifying the information through reliable sources. Educators can mitigate these risks by integrating lessons on how to fact-check AI-generated content and teaching students to approach AI as a helpful tool that still requires human oversight.
Another concern is the potential over-reliance on AI. During the CHATWISE workshops, we emphasize the importance of using AI responsibly in education. While AI tools like ChatGPT can offer substantial support in generating ideas and solving problems, they also pose the risk of students becoming overly dependent on them. This can result in students neglecting essential skills like critical thinking, problem-solving, and creativity. We make it clear to students that AI should be used as a complement to their learning, not a substitute. By teaching students how to integrate AI tools into their study habits without relying on them exclusively, we help them retain ownership of their learning process. Educators play a vital role here by framing AI as an assistive tool while encouraging students to engage deeply with their academic work. The key takeaway for students was mastering prompt engineering: learning how to interact with AI responsibly while maintaining control over their own learning process.
We also can't overlook the importance of data privacy, especially when students interact with AI tools like ChatGPT. Many AI models require access to personal data or input information to function, and students may unintentionally share sensitive information when using these tools for their studies. In CHATWISE, we highlight the importance of understanding how AI systems handle data and make students aware of the potential risks associated with sharing personal information. Institutions and educators can mitigate these risks by educating students on safe usage practices, such as avoiding the input of personal or sensitive data into AI platforms. Additionally, institutions should implement privacy policies that protect student data and ensure that AI tools used in educational settings comply with strict data protection regulations, such as the GDPR. This will help safeguard students' privacy while enabling them to benefit from AI-enhanced learning experiences.
Lastly, it's not enough to educate students alone; it has become a necessity to educate their teachers first. Teachers are in a pivotal position to guide students and help them navigate the complexities of AI. In the CHATWISE project, we recognized that teachers need to be equipped with the knowledge and tools to understand both the potential and the risks of AI in the classroom. Once teachers are educated on how AI works, its limitations, and best practices, they can play a key role in helping students mitigate the risks associated with AI use, such as misinformation or over-reliance. Teachers can act as a first line of defense, ensuring that AI is used responsibly in academic environments and reinforcing critical thinking skills in their students. This approach ensures that students are supported not only by AI literacy programs but also by informed educators who can integrate AI tools in a balanced and ethical manner.
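
As a concrete illustration of the safe-usage practices mentioned above, here is a minimal Python sketch of a pre-submission check that warns students before personal data is sent to an AI platform. The patterns and names are illustrative assumptions for this article, not part of the CHATWISE materials.

    import re

    # Illustrative patterns only; a real classroom tool would need a broader list
    # and should be reviewed against the institution's data protection policy.
    PERSONAL_DATA_PATTERNS = {
        "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
        "phone number": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    }

    def check_prompt(prompt: str) -> list[str]:
        """Return warnings for any personal data detected in the prompt text."""
        warnings = []
        for label, pattern in PERSONAL_DATA_PATTERNS.items():
            if pattern.search(prompt):
                warnings.append(f"Possible {label} found; consider removing it before sending.")
        return warnings

    # Example: a student about to paste a draft prompt into a chatbot.
    draft = "My email is jane.doe@example.com, can you summarise my essay?"
    for warning in check_prompt(draft):
        print(warning)

Such a check cannot replace the awareness-raising described above, but it shows how that advice can be turned into a simple, teachable safeguard.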

You’ve emphasized the importance of prompt engineering in teaching AI tools like ChatGPT. Could you explain why this skill is so crucial and how individuals can improve their ability to get the most out of AI systems?
Prompt engineering is absolutely crucial when working with AI tools like ChatGPT because it directly affects the quality and relevance of the AI's responses. In our CHATWISE project, one of the main goals was to teach students that AI systems like ChatGPT don't inherently 'know' anything; they respond based on how you guide them. The way you frame your prompts—whether you're asking a question or requesting information—determines whether the AI gives you something useful, accurate, or even creative. This skill is particularly important because LLMs like ChatGPT can sometimes produce misleading or incomplete answers if the prompt isn't clear or specific enough.
In our workshops, we worked with students to help them create their own AI assistants, and the key lesson they took away was how to structure their prompts to get the most relevant and accurate outputs. It's about learning how to think critically about the questions you ask and understanding how AI systems interpret language. Prompt engineering is also one of the key ways to mitigate the risks associated with LLMs, such as misinformation or hallucination. By carefully crafting prompts, users can guide the AI towards more reliable and contextually accurate answers, reducing the likelihood of receiving false or misleading information. This skill empowers users to get the most out of AI systems while minimizing the risks.
To improve at prompt engineering, individuals should practice refining their questions to be as specific and detailed as possible. It's also helpful to experiment with different types of prompts to see how the AI responds, and to always cross-check AI-generated information with reliable sources to ensure accuracy. In CHATWISE, we stressed the importance of iteration: reworking prompts to guide the AI more effectively, which not only improves the interaction with the system but also helps develop better critical thinking skills. It's a learning process that becomes easier with time and practice.
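
To make the idea of iterative prompt refinement concrete, here is a minimal Python sketch. It assumes access to the OpenAI Python SDK and a chat endpoint; the model name and the two example prompts are illustrative and are not taken from the CHATWISE workshop materials.

    from openai import OpenAI

    client = OpenAI()  # expects an API key in the OPENAI_API_KEY environment variable

    def ask(prompt: str) -> str:
        """Send a single-turn prompt to the model and return its reply."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative choice; any available chat model works
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content

    # A vague prompt: the model has to guess the scope, audience and format.
    vague = "Tell me about neural networks."

    # A refined prompt: explicit audience, structure and a request to flag
    # uncertainty, reflecting the kind of iteration described above.
    refined = (
        "Explain what a neural network is to a high-school student in three short "
        "paragraphs: (1) the basic intuition, (2) how training works at a high level, "
        "(3) one everyday application. If you are unsure about any fact, say so "
        "explicitly instead of guessing."
    )

    print(ask(vague))
    print(ask(refined))

Comparing the two outputs, and then cross-checking any factual claims against reliable sources, mirrors the refine-and-verify loop described above.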

You have a unique interest in the intersection of AI and art. How do you think AI is changing the creative industries, and will the future of these industries rely on limited or rather on generalized AI?
AI is already significantly transforming the creative industries, and I believe its impact will only grow. In projects like the Smart Photo Booth, we showcased how AI can blend technology with art, allowing users to manipulate their own portraits using AI-driven style transfer techniques. This is a clear example of how AI is making artistic tools more accessible, enabling people—whether they're artists or not—to engage with creativity in new ways. It opens the door for experimentation, where AI can serve as a co-creator, generating styles or ideas that might not have been conceived traditionally.
AI is also changing how we think about creativity itself. It can analyze vast datasets of artwork, learning patterns and styles, and then create something novel. This doesn't replace the artist, but it offers them a powerful tool to expand their vision. In the Smart Photo Booth, for instance, AI doesn't just copy famous art styles but allows users to engage with art history interactively, making artistic expression more personalized and innovative.
As for the future, I think we'll see a combination of both limited and generalized AI in the creative industries. For certain tasks, such as specific style transfers or focused applications of machine learning, specialized or more limited AI will continue to be essential. However, generalized AI models, which are capable of understanding broader creative contexts, could play an even bigger role. These systems might one day assist not just in producing art but in areas like creative direction, music composition, and even generating full multimedia experiences. But creativity, by its nature, is very human, and AI will likely remain a tool rather than a replacement for that human touch. AI can enhance creative industries by offering new methods and perspectives, but I believe it will remain a partnership between AI and human creativity, rather than AI overtaking it entirely.

Gender imbalance in AI and tech is a well-known issue. What initiatives do you believe are most effective in encouraging more gender diversity in AI research and development, and what steps can institutions take?
Gender imbalance in AI and tech is indeed a significant issue, and addressing it requires both targeted initiatives and a broader cultural shift. From my experience with projects like CHATWISE and the Smart Photo Booth, one of the most effective ways to encourage more gender diversity in AI is by creating accessible and inclusive educational opportunities. In both projects, we specifically designed workshops and activities that appeal to diverse audiences, ensuring that girls and women feel equally represented and encouraged to participate.
In Smart Photo Booth, for example, we focused on blending AI with art, a field that naturally attracts a broader range of participants, including women who may not typically see themselves in tech fields. By showcasing how AI can intersect with creative industries, we're able to engage more students, especially girls, in a subject that might otherwise seem inaccessible or purely technical. Similarly, in CHATWISE, we aimed to demystify AI and make it approachable for high school students of all genders, highlighting how AI can be used in a variety of fields, not just in traditional tech roles.
Institutions can take several key steps to promote gender diversity in AI research and development. Firstly, they can offer mentorship programs specifically aimed at young women, connecting them with female role models in the field. This helps break down the barriers and stereotypes that often deter women from pursuing careers in AI. Secondly, institutions should actively promote and fund scholarships, grants, and internship opportunities for women in AI and tech, ensuring that financial and professional support is available to help them advance. Finally, institutions need to foster an inclusive environment within their research labs and classrooms. This means not only recruiting more women but also ensuring that their contributions are valued and their voices heard. At the AI RoboLab, for instance, we make sure that our team is diverse and that the projects we work on, like CHATWISE, reflect a commitment to inclusivity. It's about creating a culture where women feel they belong and can thrive in AI research and development.

As someone who is deeply involved in educating the next generation about AI, what advice would you give to young professionals or students in the fields of computer science, engineering, or manufacturing?
My advice to young professionals or students in computer science, engineering, or manufacturing is to embrace both the technical and ethical sides of AI and technology. The world of AI is evolving rapidly, and it's not just about mastering the technical skills, though that is crucial. You need a strong foundation in programming, machine learning, and data science, but what will set you apart is your ability to think critically about the societal and ethical implications of the technologies you're developing.
Stay curious and never stop learning. Technology moves fast, and so should you. Engage with projects that challenge you to think creatively—whether it's building something new or finding innovative ways to solve problems. In our projects like CHATWISE and Smart Photo Booth, students were encouraged to experiment and apply AI in new, creative domains like art and education. These interdisciplinary experiences are valuable because they show how versatile AI can be, and that flexibility will serve you well in your career.
I also encourage you to seek out mentorship and collaboration. No one succeeds alone, and in fields like AI and tech, collaboration is key. Surround yourself with peers and mentors who challenge you and help you grow. Find communities, both in-person and online, where you can share knowledge, exchange ideas, and stay updated on the latest trends.
Finally, never underestimate the importance of ethics in your work. AI has incredible potential to transform industries, but it also raises significant ethical questions—about privacy, bias, and fairness. Being aware of these issues and making responsible decisions will be vital for the future of the field. I often remind my students that their work doesn't exist in isolation; it affects real people. Be mindful of the impact you're having and strive to build technology that benefits everyone.

There is growing interest among the wider public about AI’s role in daily life. What misconceptions about AI do you frequently encounter, and how can public understanding of AI be improved?
One of the most common misconceptions I encounter is the belief that AI is either going to solve all of our problems or, conversely, that it poses an imminent threat to humanity. Both views tend to oversimplify the reality of AI. In reality, AI is a tool—an incredibly powerful one—but its impact depends entirely on how we use it. In projects like CHATWISE and Smart Photo Booth, we've focused on demystifying AI for the public by showing that it's not magic, but a technology rooted in data and algorithms. People need to understand that AI is not sentient; it doesn't 'think' or 'understand' in the way humans do, and it relies heavily on the data it is trained on, which can sometimes lead to biases or errors.
Another misconception is that AI will replace humans in all jobs. While AI is transforming industries, it's important to emphasize that AI will augment human capabilities rather than replace them entirely. For instance, in Smart Photo Booth, we showcased how AI could be used creatively, where human input and creativity are still vital to the process. AI can assist and enhance, but human judgment, empathy, and creativity remain irreplaceable.
Improving public understanding of AI begins with education and transparency. In our CHATWISE project, we aimed to provide clear, accessible information about AI, breaking down complex topics like natural language processing and machine learning into relatable examples. We encourage critical thinking—teaching people how AI works, where it can be useful, and where its limitations lie. Workshops, outreach events, and interactive experiences like those we developed are key in making AI more approachable. To truly improve public understanding, we need to continue these efforts and ensure that conversations around AI are grounded in reality, highlighting both its benefits and potential risks. By educating people early on and making AI more transparent, we can help the public engage with AI responsibly and avoid the extremes of fear or over-enthusiasm.