Weapons of Math Destruction – Predictions in reactive systems

I recently read the book “Weapons of Math Destruction” (2016) by Cathy O’Neil [1, 2], which describes, using examples from the USA, how the mass application of predictive algorithms transforms society, and the risks this involves. For example, crime-prediction algorithms may use ethnicity (e.g. black vs. white) as a predictor, leading to more police patrols in areas with a high density of black residents, which results in more arrests of black people and thus acts as a self-fulfilling prophecy. Using ethnicity as a predictor ignores its confounding with poverty, drug addiction etc., and may thereby implement racial discrimination on a large scale, while at the same time these algorithms are not made explicit, cannot be criticized, and are not accessible to the victims of their predictions. Another example is the transformation of the university system in the USA since the start of the university ranking in “US News”, which led to a tremendous increase in study fees with unclear improvements in the “sold” academic degrees.

More generally, I see a risk in trying to optimize a system using a single outcome. While this may work in simple systems, in complex systems, where multiple outcomes are important and can be weighted and combined in arbitrary ways, the resulting predictions and recommendations may be very hard to justify and are often not sufficiently validated. In particular, reactive systems may adjust to the predictions of the algorithm and thereby to its flawed model, e.g., in the example above, giving more and more weight to ethnicity over time.
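The feedback loop described above can be illustrated with a toy simulation (all numbers, district names, and allocation rules are my own illustrative assumptions, not taken from the book): two districts have identical true crime rates, but the one with a slightly larger arrest history is predicted "riskier" and receives a disproportionate share of patrols, so its arrest count grows faster and the initial imbalance is amplified.

```python
# Toy simulation of a predictive-policing feedback loop (illustrative only).
# Both districts have the SAME true crime rate; district A merely starts
# with a slightly higher arrest history. Each year the district predicted
# riskier (more past arrests) gets 70 of 100 patrols, and observed arrests
# scale with patrol presence -- so the prediction reinforces itself.

TRUE_CRIME_RATE = 0.1            # identical in both districts (assumption)
arrests = {"A": 11.0, "B": 10.0}  # small initial imbalance (assumption)

for year in range(20):
    riskier = max(arrests, key=arrests.get)   # district predicted riskiest
    for d in arrests:
        patrols = 70 if d == riskier else 30  # biased patrol allocation
        # arrests observed are proportional to patrols, NOT to any real
        # difference in crime -- the rate is the same everywhere
        arrests[d] += patrols * TRUE_CRIME_RATE

share_A = arrests["A"] / (arrests["A"] + arrests["B"])
print(f"Share of arrests attributed to district A: {share_A:.2f}")  # → 0.68
```

A near-52/48 split in the arrest history thus drifts toward roughly 70/30, purely because the algorithm's own predictions steer where the data is collected.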

I think that complex systems may be improved more effectively by considering them as “emergent systems”, whose complex behaviour may result from simple behaviour rules for single agents. For example, crime rates may be decreased either by complex crime-prediction systems and mass arrests, or by promoting solidarity in poor areas (see also Jonas Eliasson, “How to solve traffic jams” [3], where traffic is improved by simple rules, not complex models).

The described problems apply to all areas in which predictive algorithms can be applied (e.g. social, legal, biomedical) and should be discussed intensively to protect society from the potential harms.

[1] https://weaponsofmathdestructionbook.com
[2] https://www.ted.com/talks/cathy_o_neil_the_era_of_blind_faith_in_big_data_must_end
[3] https://www.ted.com/talks/jonas_eliasson_how_to_solve_traffic_jams/
