Artificial intelligence is changing the world, but who will protect our rights when algorithms make the decisions? This was one of the topics discussed at the summer school in Braga.

In July, I attended a summer school in Portugal, where we were welcomed in Braga by students from the University of Minho. Throughout the week, we had 20 hours of academic sessions covering autonomous vehicles, data protection, AI patents, and human rights violations by artificial intelligence (AI).

Participants had the opportunity to understand not only the technical aspects of how algorithmic systems work, but above all the legal and ethical challenges that AI brings. The discussions opened up key topics such as privacy protection, algorithmic discrimination, transparency of decision-making processes, and liability for damage caused by autonomous systems.

The summer school thus combined theoretical knowledge in the fields of law, philosophy, and IT with practical issues of legal regulation. At a time when artificial intelligence is becoming a common part of public administration, healthcare, and justice, education in this area is more important than ever.

AI and human rights are now in a close and increasingly tense relationship. On the one hand, AI has the potential to promote and protect fundamental rights in ways that were unimaginable a decade ago. For example, AI-powered translation tools can break down language barriers and open up access to education for people in remote areas. Similarly, advanced diagnostic algorithms can help doctors identify illnesses earlier and more accurately, thereby improving the right to health and even saving lives. In this sense, AI is not just a technological innovation but also a tool that can expand equality and opportunity.

On the other hand, the risks connected to AI are just as significant. Automated decision-making systems are increasingly used in hiring, policing, and access to financial services, and when these systems are trained on biased data, the results can reinforce existing inequalities and discrimination.

Victims of such outcomes often find it difficult to seek justice, because the decision-making process is hidden behind complex algorithms, raising questions about accountability and transparency. This makes the problem of “black box” algorithms one of the core human rights issues of our time.

Another area of concern is privacy and data protection. AI systems depend on vast amounts of personal data, and without strict safeguards, this information can be misused for surveillance, profiling, or manipulation. This directly threatens not only the right to privacy but also related rights such as freedom of thought, freedom of expression, and even the right to participate in democratic life. The use of AI by states in security and policing contexts has already sparked debates about the balance between safety and civil liberties.

At the heart of the current legal debate are therefore fundamental questions: How can we design laws that ensure algorithmic transparency? Who should bear responsibility for discriminatory or harmful AI outputs — the developer, the company that deploys the system, or the state that allows its use? And perhaps most urgently, how can we adapt existing human rights frameworks to a digital world where decisions about people’s lives are increasingly made not by humans, but by machines? 

These challenges show that the regulation of AI is not just a technical or economic issue, but above all a question of protecting the values on which democratic societies are built.

Many academics from the University of Minho have repeatedly emphasized the pitfalls and potential dangers associated with the careless handling of personal data. They highlight that in today’s world, sensitive information has become one of the most valuable assets — often more valuable than money itself. Because of this, both artificial intelligence systems and human actors may seek to exploit it in ways that the general public does not fully realize.

One lecturer gave a striking example involving iris scanning technologies. Increasingly, people — especially younger generations — are turning to iris scanning programs as a quick way to earn extra money or access digital services. The process appears harmless: a simple scan, a short transaction, and an instant reward. However, what remains unclear is how this biometric data is stored, who controls it, and what it could be used for in the future.

The danger lies in the fact that unlike a password, biometric data cannot be changed. Once our iris patterns are compromised, they are compromised forever. This creates opportunities for surveillance, identity theft, or even commercial misuse, where companies may profit from selling or analyzing such data without the individual’s knowledge or consent.

Furthermore, the lecturer stressed that the lack of transparency and accountability surrounding these technologies is alarming. Many of the companies offering incentives for iris scans are startups with limited oversight, often operating across borders, making regulation and enforcement even more difficult. This legal vacuum creates conditions where exploitation becomes not only possible but likely.

Ultimately, the discussion served as a warning: while technological innovation can bring opportunities, it also raises serious questions about trust, control, and rights. If individuals continue to hand over their most intimate identifiers without understanding the consequences, society may face challenges that go far beyond privacy — extending to issues of autonomy, dignity, and security in the digital age.

Beyond the academic program, we naturally explored the beauty of Portugal, visiting Braga, Guimarães, and other cities. Above all, we got to know students from other law schools, which was just as enriching.

Photos

[1] Participants at dinner, 18 July 2025. Photo: Gabriela Tomečková