BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//IVADO - ECPv6.5.0//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:IVADO
X-ORIGINAL-URL:https://vieux.ivado.ca/en/
X-WR-CALDESC:Events for IVADO
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:America/New_York
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20190310T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20191103T020000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20190603T080000
DTEND;TZID=America/New_York:20190606T170000
DTSTAMP:20260413T052254Z
CREATED:20190311T161537Z
LAST-MODIFIED:20200127T162140Z
UID:11539-1559548800-1559840400@vieux.ivado.ca
SUMMARY:IVADO-Mila Summer School on Bias in AI
DESCRIPTION:Algorithms\, and the data they process\, play an increasingly important role in decisions with significant consequences for human welfare. While algorithmic decision-making processes have the potential to lead to fairer and more objective decisions\, emerging research suggests that they can also lead to unequal and unfair treatment and outcomes for certain groups or individuals. \nThis Summer School brings together multi-disciplinary teams of researchers and practitioners to explore the social and technical dimensions of bias\, discrimination and fairness in machine learning and algorithm design. The course focuses specifically (although not exclusively) on gender-\, race- and socioeconomic-based bias in data-driven predictive models that inform decisions. \nA basic understanding of machine learning is strongly recommended. \nObjectives\n\nUnderstanding bias and discrimination\nExploring the harms from bias in machine learning (discriminatory effects of algorithmic decision-making)\nIdentifying the sources of bias and discrimination in machine learning\nMitigating bias in machine learning (strategies for addressing bias)\nRecommendations to guide the ethical development and evaluation of algorithms\n\nConfirmed Speakers\n\nBehrouz Babaki\, IVADO Post-Doc at Polytechnique Montreal\nFernando Diaz\, Principal Researcher and lead of the Montreal FATE Research Group at Microsoft\nMarc-Antoine Dilhac\, Professor of Philosophy at the Université de Montréal\, Chair Holder – Canada Research Chair in Public Ethics and Political Theory\, Leader of the Montreal Declaration Responsible AI Development Committee\nLuc Faucher\, Professor of Philosophy at the Université du Québec à Montréal\nGolnoosh Farnadi\, IVADO Post-Doc at Polytechnique Montreal\nMoritz Hardt\, Assistant Professor\, Department of Electrical Engineering and Computer Sciences at the University of California\, Berkeley\nEmre Kiciman\, Principal Researcher at Microsoft Research AI\nMargaret 
Mitchell\, Senior Research Scientist\, Google Research and Machine Intelligence\nPetra Molnar\, Lawyer and Researcher\, International Human Rights Program\, University of Toronto Faculty of Law\nDeborah Raji\, Engineering Science Student at the University of Toronto\nTania Saba\, Professor of Industrial Relations at Université de Montréal\, Chair Holder – BMO Diversity and Governance\, Université de Montréal\nPedro Saleiro\, Post-Doc\, Aequitas\, Center for Data Science and Public Policy\, University of Chicago\nCynthia Savard Saucier\, Director of UX at Shopify and author of Tragic Design (O’Reilly)\nNicolas Vermeys\, Professor of Law at the Université de Montréal and Assistant Director of the Cyberjustice Laboratory\nRC Woodmass\, Founder of Queerit and Product Designer at Crescendo\nResearch Interns at Mila and co-founders of Biasly AI: Andrea Eunbee Jang\, Yasmeen Hitti and Carolyne Pelletier\n\nPreliminary Program\nDay 1: Bias and inclusion in AI\n\nOpening remarks\nKeynote and Introduction\nUnderstanding bias and discrimination\nAI and social justice\nThe tech diversity problem\nIntroduction to Machine Learning and automated decision-making (optional)\n\nDay 2: Fairness in pre-processing\n\nIntroduction\nBlock 1: Fairness in pre-processing (fairness in data)\n\nTechnical session\nCase studies\nHands-on\n\nDay 3: Fairness in processing and post-processing\n\nBlock 2: Fairness in processing/Fairness in design\n\nTechnical session\nCase studies\n\nBlock 3: Fairness in post-processing/Fairness in decision\n\nTechnical session\nCase studies\n\nDay 4: Governance\, recommendations and future directions\n\nCommunity-driven initiatives:\n\nMontreal Declaration for a Responsible Development of Artificial Intelligence\nOther initiatives (TBD)\n\nBias in the private sector: Challenges and recommendations\nBias in the public sector: Challenges and recommendations\nClosing panel discussion\n\nRegistration\nRegistration will open on March 11th at 12h00.
\n\nGovernment and industry employees: $1500\nNon-profit organization (NPO): $400\nStudent: $200\n\nIVADO also offers fee waivers to eligible students\, which cover the registration fee for the Summer School. To be eligible\, please fill out the following form\, available in English or French\, before March 29\, 2019. \nOrganizing Committee\n\nJihane Lamouri\, Diversity coordinator\, IVADO\nGolnoosh Farnadi\, Postdoc researcher\, IVADO\nMartin Gibert\, Ethics researcher\, IVADO\nBrian Moore\, Training coordinator\, IVADO\n\nContact\nFor any inquiries\, please contact us at formations@ivado.ca.
URL:https://vieux.ivado.ca/en/events/ivado-mila-summer-school-on-bias-in-ai/
ATTACH;FMTTYPE=image/png:https://vieux.ivado.ca/wp-content/uploads/2019/03/BiasSummerSchoolEventbrite.png
END:VEVENT
END:VCALENDAR