Many of us have invited Artificial Intelligence (AI) into our lives without knowing (or caring) how it works: Siri™, Alexa™, DeepL™, and Google Translate™ are just a few examples. Similarly, prosecutors and courts have brought sophisticated AI into the investigation, prosecution, and judging of criminal behaviour while barely knowing how it works. In doing so, they have sailed into uncharted waters. To date, little attention has been paid to whether the current rules that courts and prosecutors must abide by are enough to protect human rights standards (e.g., the rights to privacy, data protection, and effective judicial protection) and fundamental criminal procedure principles (e.g., judicial independence and the presumption of innocence) when AI, rather than a human, finds or produces the evidence relied on. That must change: we must know whether courts and prosecutors honour and respect those hard-won standards and principles. Our project is designed to find this out. Gathering highly qualified experts in criminal law, IT law, and data protection law, as well as in AI and machine learning, cybernetics, and ethics, and drawing on conversations with representatives of the private sector developing AI, this cutting-edge, three-year project will examine critical questions about the use of AI in criminal proceedings. We will compare its use in France, Germany, Israel, Luxembourg, the Netherlands, the United Kingdom, and the United States. Where we find that the existing rules cannot protect existing standards and principles, we will propose innovative legal rules and principles for national and supranational bodies, ensuring that judicial authorities reap the benefits of AI while still respecting human rights and fundamental principles.