AI, Bias, Riot Police and Recruitment

I recently read another article posing the question of whether AI being used in recruiting software is introducing or removing bias. It’s a question I’ve seen written about, almost relentlessly, for the last few years, and I’m going to be bold enough to go out on a limb and say that I actually know the answer. I’ve got this one, folks: it’s doing both at the same time. We present it as an either/or argument, but there is no real argument. It’s doing both at the same time.

We like to think that ‘a thing does a thing’ – that there is healthy food and bad food, and that our world can broadly be reduced, with clarity, to whether something is moving us towards or away from our goals. The world simply refuses to engage with us on those terms. In the article, the author looks at several well-known providers of recruitment solutions and highlights how incorrect or uncomfortable decisions can be reached. The author is absolutely correct that this is the case. However, the overall impact on fairness could be hugely different from the impact on individuals.

Do police in riot gear make you feel safe? There’s always a danger of crime, so riot police should feel like a good thing. They are there to protect the public like normal police, and they are even better equipped and trained to deal with trouble. But my guess is that if you turned the corner and saw 500 riot police, the net positive impact of the above might be lost on you as an individual. You might feel uneasy. Similarly, if you are misidentified as a criminal by those riot police, then I’m guessing the argument that the area is net safer probably isn’t a clincher for you. I’m guessing you are significantly more negatively impacted than before.

We know we have bias in recruitment when it is carried out only by humans. We know that a range of biases go into recruitment decisions. We are immensely flawed and biased software. We get this stuff wrong. Therefore we need to accept the reality of the current solution – and it seems smarter to attempt to use software to do this than to correct for a combination of evolutionary and societal flaws every time we make a decision. It would, in fact, probably be the height of arrogance to believe that we could do so.

Software can probably make things better (overall), but the problem is that we are attracted to and sold ‘solutions’. And nobody likes a solution that doesn’t actually solve the problem. In this case the solution makes things a bit better overall – and possibly much better over time – but still has the same kind of flaws in it as when you started. And that is hugely problematic, as Earl Wiener identified with a series of ‘laws’ addressing the problem of automation in aviation.

I’ll pick out some and then leave you to go back to vendor selection.

17. Every device creates its own opportunity for human error.

18. Exotic devices create exotic problems.

19. Digital devices tune out small errors while creating opportunities for large errors.

20. Complacency? Don’t worry about it.

22. There is no simple solution out there waiting to be discovered, so don’t waste your time searching for it.

23. Invention is the mother of necessity.

25. Some problems have no solution. If you encounter one of these, you can always convene a committee to revise some checklist.

28. Any pilot who can be replaced by a computer should be.

29. Whenever you solve a problem you usually create one. You can only hope that the one you created is less critical than the one you eliminated.