Trump's Toolbox | Future Attribute Screening Technology
15 Nov 2016
Image by Michael Vadon
Since 9/11, the United States’ surveillance apparatus has drastically expanded across both the Bush and Obama administrations. Today, US authorities have access to a vast array of surveillance tools and an alphabet soup of intelligence operations. Indeed, the toolbox available to the president is absolutely staggering. While we often acquiesce to this power (after all, we’ve never done anything wrong, so why would the government use those tools against us? We’re innocent!), the election of Donald Trump is a wake-up call that has changed the game entirely. A man who burst into underage girls’ dressing rooms, who threatened to lock up his political opponent, and who said that he would bring back torture now has access to the entirety of the US intelligence apparatus.
As a result of this incomprehensible madness, I have decided to begin a series I’m calling Trump’s Toolbox, the aim of which is to unpack and explore the various surveillance tools that the Trump administration has access to. Each post (at a frequency yet to be determined) will address a surveillance apparatus available to Donald Trump and/or his administration. These won’t be new programs that are just opening up to the world, but rather a look back at things that some of us already know but many of us have either forgotten or never heard of. In any case, I hope to reframe these in light of a madman at the helm. Today, we’re looking at FAST.
Future Attribute Screening Technology
Is your body temperature too high because you have a fever? Perhaps you’re nervous about flying, so your heart rate is above normal and your face twitches a bit. These are some of the problems that the Department of Homeland Security’s (DHS) Future Attribute Screening Technology (FAST) will have to come to terms with.
FAST is a DHS project that aims to analyze the body for indicators of malintent, which is defined as “the mental state of an individual intending to cause harm to [American] citizens or infrastructure”. This technology is supposedly “neutral” of “gender, culture and age”, and “uses non-contact sensors to remotely analyze physiological and behavioral cues including eye movement, body movements and other factors that an individual typically does not consciously control.” Those ‘other factors’ include, as I alluded to in my introduction, body temperature and heart rate, all of which are recorded wirelessly.
Creeped out yet? You should be.
The DHS itself boasts 70-74 percent accuracy in identifying malintent, a number that sounds acceptable at first, until you realize how many millions of individuals pass through airports every day, which would yield innumerable false positives. Yet, imagine for a second that it had a 99% accuracy rate. Alexander Furnas at the Atlantic describes why even this would be an abysmal margin of success:
Predictive software of this kind is undermined by a simple statistical problem known as the false-positive paradox. Any system designed to spot terrorists before they commit an act of terrorism is, necessarily, looking for a needle in a haystack. As the adage would suggest, it turns out that this is an incredibly difficult thing to do. Here is why: let’s assume for a moment that 1 in 1,000,000 people is a terrorist about to commit a crime. Terrorists are actually probably much, much more rare, or we would have a whole lot more acts of terrorism, given the daily throughput of the global transportation system. Now let’s imagine the FAST algorithm correctly classifies 99.99 percent of observations – an incredibly high rate of accuracy for any big data-based predictive model. Even with this unbelievable level of accuracy, the system would still falsely accuse 99 people of being terrorists for every one terrorist it finds. Given that none of these people would have actually committed a terrorist act yet, distinguishing the innocent false positives from the guilty might be a non-trivial and invasive task.
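The arithmetic in that passage is easy to verify yourself. Here is a minimal sketch (the function name and the assumption that accuracy is the same for both classes are mine, not Furnas’s) that plugs in his numbers: a base rate of 1 in 1,000,000 and a screener that is 99.99 percent accurate:

```python
def screening_outcomes(population, base_rate, accuracy):
    """Expected flags from a binary screener that is `accuracy`
    correct on both classes (terrorists and innocents alike)."""
    terrorists = population * base_rate
    innocents = population - terrorists
    true_positives = terrorists * accuracy        # actual terrorists flagged
    false_positives = innocents * (1 - accuracy)  # innocents wrongly flagged
    return true_positives, false_positives

# Furnas's numbers: 1-in-a-million base rate, 99.99% accuracy
tp, fp = screening_outcomes(1_000_000, 1 / 1_000_000, 0.9999)
print(f"terrorists caught per million screened: {tp:.4f}")
print(f"innocents flagged per million screened: {fp:.1f}")
print(f"innocents flagged per terrorist caught: {fp / tp:.0f}")
```

Running this gives roughly 100 innocents flagged for every terrorist caught, in line with the quote’s figure of 99 to 1. The lesson generalizes: when the condition you are screening for is vanishingly rare, even a near-perfect test produces flags that are overwhelmingly false.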
Given these numbers, what would be the realistic outcome of such a technology if the Trump administration were to deploy it in airports, for example? Likely it would involve vast numbers of people being ‘scientifically’ flagged by FAST as warranting further interrogation. However, as soon as an individual is flagged, all the same biases that plague the current system would be reintroduced, as certain individuals face more scrutiny than others from security agents. Any protest in this situation would likely be countered with “our neutral and scientific system flagged you as potentially harbouring malintent”. Indeed, it would be a veneer of science and objectivity that would (attempt to) mitigate any allegations of racial profiling or discrimination.
Nevertheless, FAST is, unsurprisingly, being sold as a convenience for travelers, as “FAST is intentionally designed to minimize the impact on subjects being screened by reducing or eliminating the inconvenience of current security practices (e.g., shoe removal, pat-downs, random checks).” The system is also supposedly private: “FAST does not connect physiological data to an individual, nor does it permanently store collected data once the analysis is complete.”
The best-case scenario for this technology is likely that it gets introduced to airports and spreads no further. On the other hand, if the public decides it doesn’t mind the tech, and in fact enjoys the ‘[minimized impact] on subjects’, then there’s no reason it can’t spread to any number of public events, whether it’s the Super Bowl or a political rally. If that happens, you can’t just abandon your phone, close your Twitter account, or encrypt your emails. Short of locking yourself at home, there’s no opting out.
Let me know what you think I should explore next in the comments below!