About
Summary
Algorithms, and the data they process, play an increasingly important role in decisions with significant consequences for human welfare. While algorithmic decision-making processes have the potential to lead to fairer and more objective decisions, emerging research suggests that they can also lead to unequal and unfair treatment and outcomes for certain groups or individuals.
This Summer School aims to engage multi-disciplinary teams of researchers and practitioners in exploring the social and technical dimensions of bias, discrimination and fairness in machine learning and algorithm design. The course focuses specifically (although not exclusively) on gender-, race- and socioeconomic-based bias, and on data-driven predictive models that inform decisions.
The summer school will be filmed and released as a MOOC in late 2019.
A basic understanding of machine learning is strongly recommended.
What you will learn
- Understanding bias and discrimination
- Exploring the harms from bias in machine learning (discriminatory effects of algorithmic decision-making)
- Identifying the sources of bias and discrimination in machine learning
- Mitigating bias in machine learning (strategies for addressing bias)
- Recommendations to guide the ethical development and evaluation of algorithms
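As a concrete taste of the fairness topics listed above, here is a minimal sketch (not taken from the course materials; all names and data are illustrative assumptions) of one common fairness notion, demographic parity, which compares positive-prediction rates across groups:

```python
def demographic_parity_difference(predictions, groups):
    """Absolute difference in positive-prediction rates between two groups.

    predictions: list of 0/1 model outputs
    groups: list of group labels (e.g. "A" / "B"), aligned with predictions
    """
    rates = {}
    for g in set(groups):
        # Positive rate among members of group g
        members = [p for p, m in zip(predictions, groups) if m == g]
        rates[g] = sum(members) / len(members)
    a, b = sorted(rates)
    return abs(rates[a] - rates[b])

# Toy data: group "A" receives a positive decision 75% of the time,
# group "B" only 25% of the time.
preds  = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(preds, groups))  # 0.5
```

A difference of 0 would mean both groups receive positive decisions at the same rate; the course covers this and other, often mutually incompatible, fairness definitions.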
Registration
Registration will open on March 11th at 12h00.
- Government and industry employees: $1500
- Non-profit organization (NPO): $400
- Student: $200
Organizing Committee
- Jihane Lamouri, Diversity coordinator, IVADO
- Golnoosh Farnadi, Postdoctoral researcher, IVADO
- Martin Gibert, Ethics researcher, IVADO
- Brian Moore, Training coordinator, IVADO
Contact and Practical Information
For any inquiries, please contact us at formations@ivado.ca.
Schedule: IVADO-Mila Summer School on Bias & Discrimination in AI
Program
Day 1
Moderator: Ravy Por
- 9:00 Joelle Pineau: Opening remarks
- 9:15 Rachel Thomas: Keynote
- 10:30 Break
- 11:00 Tania Saba: Understanding bias & discrimination
- 11:40 Deborah Raji: The tech diversity problem
- 12:20 Lunch
- 13:30 Pedro Saleiro: Bias and Fairness in AI for Public Policy
- 14:45 Break
- 15:15 Pedro Saleiro: The Aequitas Toolkit: Case Studies & Tutorial
- 16:30 End of the day
Day 2
Moderator: Bibiana Pulido
- 9:15 Emre Kiciman: Where does bias come from?
- 10:30 Break
- 11:00 Golnoosh Farnadi: Fairness definitions
- 11:40 François Laviolette: Bias in machine learning
- 12:20 Lunch
- 13:30 Petra Molnar: Automated decision system technologies & human rights
- 14:10 Noël Corriveau: Canada’s Algorithmic Impact Assessment Framework
- 14:45 Break
- 15:15 Noël Corriveau: Workshop
- 16:45
- 17:00 Cocktail
Day 3
Moderator: Golnoosh Farnadi
- 9:15
- 10:30 Break
- 11:00 Emre Kiciman & Margaret Mitchell: Fairness-aware ML: practical challenges
- 12:20 Lunch
- 13:30 Behrouz Babaki: Learning subject to fairness constraints
- 14:10 Will Hamilton: Fairness constraints for graph embeddings
- 14:45 Break
- 15:15 Yasmeen Hitti & Andrea Jang: Tackling gender bias in text
- 16:00 Audrey Durand: AI for Health: Bias Alert
- 16:30 End of the day
Day 4
Moderator: Jannick Bouthillette
- 9:15 Nicolas Vermeys: The legal perspective
- 10:30 Break
- 11:40 Nathalie de Marcellis-Warin: Montreal Declaration for responsible AI
- 12:20 Lunch
- 13:30 Cynthia Savard Saucier: The impact of bad design and how to fix it
- 14:10 RC Woodmass: Building inclusive teams
- 14:45 Break
- 15:20 Fernando Diaz & Luke Stark: Recommendations for the future
- 16:30 End of the day