AI is all the rage, and it has been for a few years. Some folks are raging advocates for AI and claim it will fix everything. Others rage against it and claim it will ruin everything. The truth probably lies somewhere between the two extremes, but Hilke Schellmann’s book The Algorithm: How AI Decides Who Gets Hired, Monitored, Promoted, and Fired and Why We Need to Fight Back Now is likely to move you toward the rage-against-AI extreme.
While generative AI dominates most of our attention, being the most recent advance and the most visible one (at least to educators, the group I spend the most time with), predictive AI has a longer and, if you find Schellmann’s case convincing, which I do, more insidious history.
Both generative AI and predictive AI leverage copious amounts of data and the patterns they find in them, either to create media that we find meaningful (in the case of generative AI) or to draw conclusions about what might happen in the future (in the case of predictive AI). The most familiar example, and one thoroughly treated in the book, is the resume-screening algorithm. Our job applications are reviewed by an AI that predicts whether we will succeed in the role.
The reasoning is that the algorithms find patterns in the resumes of people who have succeeded in the role and mark resumes with similar patterns as belonging to the best candidates. While the reasoning is sound, the practice is not. Of the many things that can go wrong, gender bias is perhaps the most obvious. If you train a predictive algorithm on the resumes of a workforce dominated by men, then resumes containing male-coded language will be preferred. “Baseball,” for example, will likely be scored higher by the algorithm than “softball.”
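To make the mechanism concrete, here is a minimal, hypothetical sketch. It is my own toy example, not one drawn from the book or from any real vendor’s system: it trains a simple scikit-learn text classifier on invented resumes where past hires happen to mention baseball, and then shows that the model learns to reward that word even though it says nothing about job skill.

```python
# Toy illustration of how resume-screening bias can arise.
# Hypothetical data and model; not any vendor's actual system.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Invented training data: resumes of past hires (label 1) vs. non-hires (label 0).
# The only systematic difference is a gender-coded hobby word.
resumes = [
    "captain of the baseball team, led sales projects",      # hire
    "baseball club president, managed budgets",              # hire
    "varsity baseball player, strong analytics background",  # hire
    "captain of the softball team, led sales projects",      # non-hire
    "softball club president, managed budgets",              # non-hire
    "varsity softball player, strong analytics background",  # non-hire
]
hired = [1, 1, 1, 0, 0, 0]

# Turn the resumes into word counts and fit a simple classifier.
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# Inspect the learned weights: "baseball" gets a positive weight and
# "softball" a negative one, even though neither predicts job performance.
weights = dict(zip(vectorizer.get_feature_names_out(), model.coef_[0]))
print("baseball:", round(weights["baseball"], 3))
print("softball:", round(weights["softball"], 3))
```

The point of the sketch is that nothing in the pipeline is explicitly sexist; the bias rides in on whatever proxies for gender happen to correlate with past hiring decisions.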
Schellmann also describes other kinds of AI-analyzed employment screening tests. As you might expect, there are problems with those as well. Many of the tests appear only loosely connected to the skills the role actually requires, but we cannot be sure, because the algorithms are proprietary and therefore not open to evaluation by independent researchers.
This book is “must reading” for increasing your awareness of the hidden, unevaluated, and expensive algorithms used to make important decisions about each of us. Schellmann does suggest a non-profit organization charged with evaluating these algorithms. Of course, businesses that make huge sums selling ineffective algorithms will push back hard against such efforts.
As I finished this book, homeopathy immediately came to mind. Like homeopathic remedies, these algorithms do not seem to be effective for their intended purpose, they can cause harm while users believe they are receiving a benefit, and their claims are not open to independent evaluation.
The message for leaders seems clear: no matter the field in which your organization operates, avoid these tools. The message for individuals seems clear as well: be aware that these tools are affecting your life, be skeptical of the claims made by their advocates, and sabotage any attempts to implement them at work.