In this workshop, you'll discover how critical AI incidents take shape from the inside. You'll learn how large groups of people (including you) are outliers in some datasets and can face negative consequences because of algorithmic bias. The workshop teaches practical methods for measuring fairness (a small illustrative sketch follows the takeaways below), and you'll receive a structured project-scoping framework you can apply when building future ML/AI systems.
Takeaways:
- Understand how fairness-related harms happen in AI
- Identify outliers and detect whether they represent marginalized people
- Discover how hypothesis testing can inform government policy
- Learn how to leverage social and technical talent for more responsible AI systems
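To give a flavor of what "measuring fairness" can look like in practice, here is a minimal sketch of one widely used metric, the demographic parity difference. The metric choice, data, and function name are illustrative assumptions, not material from the workshop itself:

```python
# Illustrative sketch only: demographic parity difference compares the
# rate of positive predictions across two demographic groups. The data
# below is a hypothetical placeholder.
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates between two groups."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rate_a = y_pred[group == 0].mean()  # positive rate for group 0
    rate_b = y_pred[group == 1].mean()  # positive rate for group 1
    return abs(rate_a - rate_b)

# Example: a model's yes/no predictions for eight people in two groups.
y_pred = [1, 0, 1, 1, 0, 0, 0, 1]
group  = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_difference(y_pred, group))  # 0.5
```

A gap near zero suggests the model grants positive outcomes at similar rates across groups; a large gap, as in this toy example, is one signal of the kind of algorithmic bias the workshop examines.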
FWD50 Extras is a year-round series of events exclusively for annual conference ticket-holders.