Course
Course full name
Responsibility and AI
Course ID number
2526-2-F551MI044
Learning objectives
The course aims to provide students with an in-depth understanding of the legal challenges related to the use of artificial intelligence (AI), focusing on responsibility, liability, and data protection within the EU legal framework. Students will explore the main regulatory initiatives, including the AI Act, the revised Product Liability Directive, and the proposal for an AI Liability Directive. The course examines how responsibility and liability are allocated among AI developers, providers, and users, with particular attention to non-contractual liability and risk distribution. A specific focus will be placed on data protection and privacy issues arising from the use of AI systems, especially regarding GDPR compliance, profiling, and automated decision-making. Students will also become familiar with legal tools for managing risks linked to AI applications, and develop the ability to assess and resolve complex legal questions in this evolving field.
Contents
The course explores the regulation of artificial intelligence (AI) within the EU legal framework, with a specific focus on data protection and privacy. It examines the main legislative instruments relevant to AI, including the GDPR, the AI Act, and the proposed AI Liability Directive, as well as the revision of the Product Liability Directive. Particular attention will be given to the legal issues related to automated decision-making, profiling, and the allocation of responsibility among AI developers, providers, and users. The course will also address risk management mechanisms applicable to AI systems, especially in data-intensive contexts. The aim is to equip students with the legal tools to understand and address the emerging risks and responsibilities linked to the use of AI in compliance with EU data protection law.
Detailed program
The course will cover the following topics, which will be discussed interactively with students during the lectures:
• Introduction to the European legal framework governing artificial intelligence and responsibility
• Risk assessment and legal standards in the Council of Europe’s Framework Convention on Artificial Intelligence
• Fundamental rights and market freedoms in the EU Treaties and the Charter of Fundamental Rights of the EU
• EU secondary legislation on artificial intelligence and legal responsibility
• Risk classification and compliance mechanisms in the Artificial Intelligence Act
• Duties and liabilities of AI operators under the AIA and other relevant legal instruments
• Cybersecurity requirements and their intersection with AI systems
• Applications of AI in data governance and automated decision-making
• Artificial intelligence and data protection: GDPR compliance, profiling, and privacy risks
• Legal challenges of AI-driven data processing and automated individual decisions
• Accountability, sustainability, and long-term regulatory strategies for AI
Prerequisites
There are no prerequisites, although prior experience in legal studies or cognate subjects can be useful.
Teaching methods
The course will be delivered mainly through seminar-style lectures.
Assessment methods
Textbooks and Reading Materials
Lecture presentations, together with a list of compulsory and supplementary readings, will be published by the Professor on the e-learning platform before each lecture.
Sustainable Development Goals
DATA PROTECTION, AI
Key information
Field of research
IUS/14
ECTS
6
Term
Second semester
Activity type
Compulsory elective
Course Length (Hours)
24
Degree Course Type
2-year Master Degree
Language
English