AI-Powered Border Control: The Hidden Cost of Predictive Travel Surveillance

When Dutch citizen and human rights advocate Frank van der Linde passed through immigration at Amsterdam's Schiphol airport in March 2020, he had no idea his routine questioning was part of a covert surveillance operation. The immigration officer had been tipped off about his arrival through a system that collects detailed personal data about travelers, a practice that is becoming increasingly common worldwide.

This data, known as Passenger Name Records (PNR), creates an extensive digital trail including everything from phone numbers and credit card details to travel patterns and baggage information. While seemingly innocuous, this information is now being used by technology companies to develop algorithmic systems that could determine who gets to cross international borders.
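To make the scope of a PNR concrete, here is a minimal sketch of what one record might contain, based only on the field types the article names (phone numbers, payment details, travel patterns, baggage information). The field names and values are hypothetical; real PNR schemas vary by airline and reservation system.

```python
# Illustrative only: a hypothetical PNR record. Field names and values
# are assumptions, not any airline's actual schema.
pnr_record = {
    "passenger_name": "DOE/JANE",
    "phone": "+00 0 000 0000",                      # contact details
    "payment": "CARD ****0000",                     # payment reference
    "itinerary": ["AMS-JFK 2020-03-01",
                  "JFK-AMS 2020-03-08"],            # travel pattern
    "baggage": {"checked_bags": 1, "weight_kg": 18} # baggage information
}

def field_count(record):
    """Count top-level data points held about a single traveler."""
    return len(record)
```

Even this toy record carries five distinct categories of personal data, which is what makes errors in such records consequential when they feed downstream analysis.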

"What do companies do with the data?" asks van der Linde, who discovered numerous errors in his own travel records. "If commercial companies help to analyze data that's incorrect, you could draw all kinds of conclusions."

At least four European companies (Idemia, SITA, Travizory, and WCC) are at the forefront of developing AI-powered travel surveillance systems. These systems promise to streamline border crossings for some while subjecting others to additional scrutiny based on automated risk assessments.

"Everybody should be able to go out of his own country and into any country and come back without having to queue in line and being able to use only his face," explains Renaud Irminger, co-CEO of Travizory. However, their system flags certain travelers for further evaluation based on factors like unusual travel patterns or matching certain behavioral profiles.
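The flagging logic described above can be pictured with a simple rule-based sketch. To be clear, this is a hypothetical illustration, not Travizory's or any vendor's actual system; the thresholds and the idea of "unusual travel patterns" as repeated one-way trips are assumptions made here for clarity.

```python
# Hypothetical sketch of rule-based traveler flagging. The rules and
# thresholds are illustrative assumptions, not a real vendor's logic.
def flag_for_review(itinerary, watch_profiles, traveler_profile):
    """Return True if a traveler would get extra scrutiny (illustrative).

    itinerary        -- list of dicts, one per trip leg
    watch_profiles   -- list of behavioral-profile dicts to match against
    traveler_profile -- dict of attributes derived from the traveler's data
    """
    # Assumption: several one-way legs count as an "unusual travel pattern".
    one_way_legs = sum(1 for leg in itinerary if leg.get("one_way"))
    if one_way_legs >= 3:
        return True
    # Assumption: profile matching is a simple attribute-subset check.
    return any(profile.items() <= traveler_profile.items()
               for profile in watch_profiles)
```

Even in this transparent toy version, a traveler flagged by a profile match has no obvious way to learn which attribute triggered the flag; in the machine-learned systems the article describes, that opacity is far deeper.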

The algorithms powering these systems remain opaque. "They're kind of black boxes," admits Morten Jorgensen, Travizory's chief data scientist. "They will tell you that this person is potentially risky...but how it makes this decision is kind of a mystery."

This lack of transparency raises serious concerns among human rights advocates. "We have no idea if these systems are accurate, the extent of the data they're collecting, or the human harm," says Anna Bacciarelli of Human Rights Watch. None of the companies provide public information about redress for passengers unfairly targeted.

The implications extend beyond counter-terrorism. These systems are increasingly being integrated with immigration enforcement and could potentially prevent asylum seekers from boarding flights. Despite privacy protections in the EU, companies can still sell their software to countries with fewer regulations.

As surveillance expands to other forms of transport like trains and buses, travelers face growing algorithmic scrutiny of their movements. For people like van der Linde, who spent years uncovering the extent of his surveillance, the real-world impact can be profound: "I don't know if I'm still actively surveilled. I'm not surprised by anything anymore."

The race to implement predictive travel surveillance continues to accelerate, driven by profits and promises of enhanced security. But as these black box systems proliferate globally, fundamental questions about accuracy, accountability and human rights remain unanswered.