Ethics in AI Design
About this course
AI systems have great potential to improve society across a wide range of applications. The challenge is to do so responsibly. AI systems can lead to discrimination, loss of human control and a lack of explainability, to name a few of the ethical dilemmas they may present. Because of the great impact that AI and machine learning have (e.g. ChatGPT by OpenAI, or the use of ML for medical diagnoses), we need to ensure that we design and use them in a way that meets ethical standards.
Doing this requires a proactive attitude towards ethics, for which we use the Delft Design for Values methodology. This methodology identifies ethical values and offers tools to translate them into concrete design requirements, which can then be tested. For this course, the focus is on aligning programming and design decisions with ethical values.
This course is for professionals developing AI systems and for managers overseeing AI development. The Design for Values methodology offers guidance on how to tackle the wide range of ethical challenges in the design process of AI systems. You will learn about bias, transparency, control, accountability, trust, and more, with a focus on the connection between the technological tools available and the ethical values at stake. Most of all, you will practice how to make AI ethics actionable and applicable for a wide range of systems and use cases.
To learn how to put ethics into practice, we will use a running example from AI in healthcare. You will be challenged to think about how best to design an AI system in this context while taking important ethical values into account. We will also work with other use cases from various sectors, such as government or industry, to see how ethical values and their consequences change from situation to situation.
This course has been designed by TU Delft’s experts on Digital Ethics, who hold a world-leading position in the operationalization of ethics in digital technology. They have played a central role in setting the EU directives on ethics, as well as the WHO Guidelines on AI Ethics in healthcare, and will now help you to put ethics into practice.
At a glance
- Institution: DelftX
- Subject: Philosophy & Ethics
- Level: Intermediate
- Prerequisites: Basic knowledge of the technical development of AI systems is ideal, but not strictly required
- Language: English
- Video Transcript: English
What you'll learn
After this course you’ll be able to:
- Identify and explain possible ethical issues in AI design and development
- Analyze what ethical issues could arise in AI applications
- Determine steps to take for more responsible use of AI applications
- Apply the steps involved in responsible AI design
Syllabus
Week 1
Introduction to the ethical challenges of AI. A first overview of how ethics interacts with the design and use of AI systems.
Overview of how we will tackle these challenges during the course: introduction of the Design for Values methodology and its application to AI design.
We look into identifying the values of different stakeholders and into methods to operationalize these, applied to our central case study in the healthcare domain.
Week 2
Trustworthiness of AI systems, accuracy and explainability.
When is an AI system trustworthy, and how does this interact with requirements for accuracy and explainability? Should systems always be explainable? What do we focus on with respect to accuracy and reliability/robustness of systems?
In addition, we briefly look into what is technically available. How explainable are these systems? What tools are available to improve the explainability of AI systems, and when do we use them?
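To give a flavour of the kind of tool touched on here, below is a minimal sketch of one model-agnostic explainability technique, permutation feature importance, using scikit-learn. The dataset, model and library choice are assumptions made for illustration; the course does not prescribe specific tools.

```python
# Minimal sketch of one explainability tool: permutation feature importance.
# Illustrative only -- dataset, model and library choice are assumptions,
# not tools prescribed by the course.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in test accuracy:
# a large drop means the model relies heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, mean in sorted(zip(X.columns, result.importances_mean),
                         key=lambda t: -t[1])[:5]:
    print(f"{name}: {mean:.3f}")
```

A technique like this helps answer "why did the model decide this?" for stakeholders, which is one ingredient of the broader explainability requirement discussed in this week.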
Week 3
Bias in data and algorithmic fairness.
We will discuss both philosophical conceptions of fairness and bias and their connection to concrete metrics and tools that can be used to monitor and correct for biases.
We investigate which (statistical) biases are problematic and what appropriate steps can be taken to tackle them.
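As a taste of the concrete metrics this week connects to, here is a minimal sketch of one common group-fairness measure, the demographic parity difference, computed with plain NumPy. The data is invented for illustration, and the course does not endorse this as the right metric for every case.

```python
# Minimal sketch of a group-fairness metric: demographic parity difference,
# i.e. the gap in positive-decision rates between two groups.
# The arrays below are invented for illustration only.
import numpy as np

y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 0, 1])   # model decisions
group  = np.array(["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"])

rate_a = y_pred[group == "a"].mean()   # positive rate for group a (0.6)
rate_b = y_pred[group == "b"].mean()   # positive rate for group b (0.4)

# 0.0 means both groups receive positive decisions at the same rate;
# larger absolute values signal a potential disparate impact.
print(f"demographic parity difference: {rate_a - rate_b:+.2f}")
```

Which metric is appropriate, and what gap counts as problematic, is exactly the kind of value judgement the week pairs with the mathematics.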
Week 4
Accountability and human oversight.
Who is responsible when mistakes are made with AI systems? What organisational/socio-technical design is needed to ensure responsible use of AI?
Human oversight is discussed and the notion of meaningful human control is introduced. We then look at how oversight can be implemented in different ways, both technically (logs, audits, etc.) and organisationally (the role we give to an AI system).
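As one small illustration of the technical side of oversight, below is a sketch of how an AI-assisted decision could be written to an audit log so a human reviewer can later trace it. The record schema and field names are made up for this example, not a standard the course mandates.

```python
# Minimal sketch of an audit log for AI-assisted decisions, so each decision
# can later be traced and reviewed. The record schema (fields, names) is a
# made-up example, not a prescribed standard.
import json
import logging
from datetime import datetime, timezone

audit_log = logging.getLogger("ai_audit")
audit_log.setLevel(logging.INFO)
audit_log.addHandler(logging.FileHandler("decisions.log"))

def log_decision(model_version: str, case_id: str,
                 prediction: str, confidence: float,
                 human_override: bool) -> None:
    """Append one structured, timestamped decision record to the audit log."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "case_id": case_id,
        "prediction": prediction,
        "confidence": confidence,
        "human_override": human_override,
    }
    audit_log.info(json.dumps(record))

# Example: a diagnosis suggestion that a clinician chose to override.
log_decision("v1.3.0", "patient-0042", "follow-up scan", 0.71, human_override=True)
```

Logging alone does not create accountability, but it is one technical precondition for the audits and organisational oversight roles discussed in this week.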
Week 5
Value conflicts: what to do when different ethical values are difficult to realise at the same time?
Final assignment: for a new case, conduct the translation of ethical values into design requirements yourself.
Learner testimonials
"I have had a particular interest in the development of AI in the public sector. The focus on values related to the design of AI is a very important matter which should be a fundamental basis for everyone who is directly or indirectly involved in the management of projects for the public sector." - Adrien Disteldorff, Project Manager from Luxembourg
"I was looking for more knowledge on how to ask the right questions, while preparing for submission to regulatory authorities. This course offered good, out of the box examples." - Piet de Jong, Risk Analysis Manager from Italy
|  | Certificate | Free |
| --- | --- | --- |
| Price | $149 USD | – |
| Access to course materials | Unlimited | Limited (expires on Dec 20) |
| World-class institutions and universities | ✓ | ✓ |
| edX support | ✓ | ✓ |
| Shareable certificate upon completion | ✓ | – |
| Graded assignments and exams | ✓ | – |