Removing the Bias from Security Screenings

It’s important that we, as an industry and as a society, keep working toward removing bias in how we treat each other. A quick glance at human history teaches us that this is a long and often painful road, but progress is being made. In our industry of physical security, we have a unique opportunity in how we approach this journey.

When it comes to security screening, bias can be a bidirectional phenomenon. When someone is asked to step aside for screening, it’s not unusual for that person to believe some sort of bias is at play. Likewise, the guards conducting the screening are often in an uncomfortable position: they believe they have a legitimate reason for the additional check, but the person being screened may not agree.

We’ve seen it at the entrance to an entertainment venue – men being pulled out of line before they enter a club to be searched with a handheld metal detector, or a female guard being called over to pat down a woman. These situations are uncomfortable for everyone involved: for the patrons, because of the physical invasiveness of manual screenings, and for the guards, because of the accusations of bias they could face when singling someone out for a pat-down.

The reality is that everyone carries some kind of implicit bias, and it can ultimately factor into decision-making. In our industry of weapons screening, that bias shows up when security personnel take action based on someone’s gender, appearance or behavior.

But that’s an issue technology can help solve. Modern weapons detection technology layered with artificial intelligence (AI) can tell security personnel that a secondary, manual screening is needed based on whether a patron is carrying a gun or other weapon. This way, security guards only pat down patrons because the technology is telling them, “that person has a gun,” which dramatically reduces the need for secondary screenings and, perhaps more importantly, removes guards from the “judgment” process.

How It Works

Implementing technology like this takes the burden off the security guards because they are no longer searching people based on broad criteria (“you have metal somewhere on your body”). In that kind of ill-defined situation, the extent of the search can give rise to perceptions of bias, whether real or imagined by the person being screened. With modern weapons detection technology, guards search people based on specific information identified by the scanners and AI algorithms (“you have a gun in your right jacket pocket”).
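
To make that contrast concrete, here is a minimal sketch, in plain Python, of the kind of decision logic described above. The Detection structure, its field names and the 0.9 confidence threshold are all assumptions for illustration – no specific vendor’s system is implied. The point is that the guard’s instruction is derived only from what the detector reports (threat type, location, confidence) and never from anything about the person.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """One hypothetical finding from the scanner + AI pipeline."""
    threat_type: str   # e.g. "handgun"
    location: str      # e.g. "right jacket pocket"
    confidence: float  # model confidence between 0.0 and 1.0

# Assumed operating point; a real deployment would tune this per venue.
ALERT_THRESHOLD = 0.9

def screening_instruction(detections: list[Detection]) -> str:
    """Return the only instruction a guard receives: a specific, machine-generated
    finding or a wave-through. Demographic attributes are never an input."""
    credible = [d for d in detections if d.confidence >= ALERT_THRESHOLD]
    if not credible:
        return "No threat detected; no secondary screening."
    top = max(credible, key=lambda d: d.confidence)
    return f"Secondary screening: possible {top.threat_type} in {top.location}."

# The guard is told exactly what was found and where, and nothing else.
print(screening_instruction([Detection("handgun", "right jacket pocket", 0.97)]))
# No credible detection means no pat-down at all.
print(screening_instruction([]))
```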

Widening the lens a bit, this type of approach could also spare the venue bad publicity or legal liability, since guards are acting on specific information detected by an objective machine. And all the machine cares about is whether you have a gun – it doesn’t care how you’re dressed, the color of your skin or even how you wear your hair. Guards are simply acting on what the technology reports, and nothing else.

Doing AI Right

It’s important to note that deploying AI can come with its own set of problems if the training datasets have implicit bias baked into them. Developers must ensure they’re not accidentally teaching the AI to be biased – since AI only learns what it is taught by humans. Having a diverse team of developers helps, as does having processes in place to verify that training datasets do not contain bias; one such check is sketched below.
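
As one example of such a process, the sketch below shows a simple label-balance audit, written in plain Python with invented field names and toy records. The idea is to measure how often each appearance-related group in the training data carries a positive weapon label, so an obvious skew can be caught and corrected before the model ever learns it.

```python
from collections import Counter

# Toy training records: each pairs a threat label with appearance-related metadata.
# The "clothing_style" field and these values are purely illustrative.
training_records = [
    {"clothing_style": "formal",   "has_weapon": True},
    {"clothing_style": "formal",   "has_weapon": False},
    {"clothing_style": "casual",   "has_weapon": True},
    {"clothing_style": "casual",   "has_weapon": False},
    {"clothing_style": "athletic", "has_weapon": False},
]

def weapon_rate_by_group(records, group_key):
    """Share of weapon-positive examples per group, so reviewers can spot a skew
    (e.g. one clothing style dominating the positives) before training."""
    totals, positives = Counter(), Counter()
    for record in records:
        group = record[group_key]
        totals[group] += 1
        positives[group] += int(record["has_weapon"])
    return {group: positives[group] / totals[group] for group in totals}

# A large gap between groups here flags the dataset for rebalancing or relabeling.
print(weapon_rate_by_group(training_records, "clothing_style"))
```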

But done right, pairing AI with human judgment can not only improve the screening process but also help eliminate inequality and bias in threat detection. The kind of future where AI-based weapons detection treats all people the same is one we should all strive for.

As we celebrate and support diversity and inclusion around the world, let’s also look to the benefits technology can provide in removing bias and exclusion.
