Beyond the buzz around the growing use and maturation of applied machine learning, SXSW also explored the darker side of the algorithm: can an algorithm discriminate? Can algorithms promote stereotypes?
Can an algorithm be racist?
Fusion Editor-in-Chief Alexis Madrigal and former Googlers Jacky Alcine and Sorelle Friedler explored this very idea in their talk "Biased Algorithms and the Future of Prejudice," examining how social biases can be taught to machine learning algorithms, whether intentionally or not. Examples ranged from the accidental (and frankly appalling) misclassification of photos of Black people as gorillas, to automated job recommendation engines that unintentionally show higher-paying jobs to men more often than to women. These social biases are typically caused not by direct human intervention but by underlying biases in the data used to train the algorithms.
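To make that last point concrete, here is a minimal sketch, an illustrative example of my own rather than anything shown in the talk, with made-up feature names and numbers. It trains a scikit-learn classifier on synthetic "hiring" data in which historical outcomes were skewed against one group; even with the protected attribute excluded from the features, a correlated proxy lets the model learn the same skew.

```python
# Illustrative only: how a model can absorb bias purely from historical
# data, with no explicit human intervention. All names and numbers are
# invented for the example.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical hiring data: "group" is a protected attribute (0 or 1),
# "skill" is a genuinely job-relevant feature.
group = rng.integers(0, 2, n)
skill = rng.normal(0, 1, n)

# Historical outcomes were biased: group 1 was hired less often
# even at the same skill level.
hired = (skill + 0.8 * (group == 0) + rng.normal(0, 1, n)) > 0.5

# "zip_code" correlates with group, so it acts as a proxy even though
# the protected attribute itself is excluded from the features.
zip_code = group + rng.normal(0, 0.3, n)

X = np.column_stack([skill, zip_code])  # note: no explicit "group" column
model = LogisticRegression().fit(X, hired)

# The trained model still treats the two groups differently, because the
# bias lives in the labels and leaks in through the proxy feature.
for g in (0, 1):
    rate = model.predict(X[group == g]).mean()
    print(f"predicted hiring rate for group {g}: {rate:.2f}")
```

In a run like this, the predicted hiring rates for the two groups diverge even though the model never sees the group label directly; the bias lives in the historical labels and leaks back in through the proxy feature.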
What does it mean for your business?
After SXSW ended, Microsoft delivered a high-profile cautionary tale in this space:
How Microsoft’s Tay Became a Genocidal, Foul-Mouthed, Sex-Crazed Nazi in One Day.
While Tay is one of the freshest examples of a discriminatory algorithm in the news, it certainly won't be the last. As brands and companies begin to inject algorithmic design into their software, apps, and marketing campaigns, they must take great care to minimize unintended consequences, particularly where ethical and social boundaries are concerned.
During model training, data scientists often treat accuracy as the key measure of success. The key take-away from this talk was that while an algorithm may be extremely accurate, it will never be perfect. When predictions are wrong (and, perhaps more importantly, when they are right but discriminatory nonetheless), the social and ethical implications need to be considered and accounted for. Algorithms cannot make these ethical distinctions beyond their data on their own; that is the job of the human behind the machine.
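One practical response is to audit a model's behavior group by group instead of trusting a single aggregate score. The sketch below shows what such a check might look like; the function, data, and group labels are invented for the example, not a standard from the talk. It reports accuracy and positive-prediction rates per group so that a disparity stays visible even when overall accuracy looks strong.

```python
# A minimal sketch (illustrative assumptions only) of checking a model
# beyond overall accuracy: disaggregate performance by group and compare
# positive-prediction rates, rather than trusting one aggregate number.
import numpy as np

def audit_by_group(y_true, y_pred, group):
    """Report overall accuracy, then accuracy and positive rate per group."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    print(f"overall accuracy: {(y_true == y_pred).mean():.2f}")
    for g in np.unique(group):
        mask = group == g
        acc = (y_true[mask] == y_pred[mask]).mean()
        pos_rate = y_pred[mask].mean()
        print(f"group {g}: accuracy={acc:.2f}, positive rate={pos_rate:.2f}")

# Hypothetical predictions from a loan-approval model: accurate overall,
# yet approving group 1 far less often than group 0.
y_true = [1, 1, 0, 0, 1, 0, 1, 0]
y_pred = [1, 1, 0, 0, 0, 0, 1, 0]
group  = [0, 0, 0, 0, 1, 1, 1, 1]
audit_by_group(y_true, y_pred, group)
```

A check like this doesn't resolve the ethical questions, but it puts the relevant numbers in front of the human behind the machine instead of burying them in a single accuracy figure.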
This is especially true in healthcare, where machine learning algorithms could be used to inform life-and-death decisions, such as predicting 30-day hospital readmission and, potentially one day, automating diagnosis. For now, the problem remains open, and the ultimate solution isn't yet clear.