We wrote to the Home Office as part of our campaign ‘STOP SPYING ON ASYLUM SEEKERS’, opposing the draconian surveillance of asylum seekers taking place through the Aspen Card.
We asked detailed questions about how data collected from Aspen Card usage is used to monitor asylum seekers, and how the Home Office are alerted to any ‘breach of conditions’ of the card.
In their reply, the Home Office told us that:
“The Home Office can be alerted to a breach of conditions by several internal and external teams who analyse data or who are in contact directly with asylum supported population.”
“Aspen usage is checked by applying a series of statistical filters to management information provided by the contractor, and interpreting the results, therefore involves manual intervention. Following a referral that a breach may have occurred, several actions are taken to establish if a breach has occurred, one of which will be analysis of the Aspen card data of the individual concerned.”
Noting the collection of location data, the Freedom of Information Act response stated that regular card transactions recorded outside the authorised area of residence are treated as ‘unusual Aspen card activity’.
What this new information highlights is the increasing use of automation in decision-making, despite the Home Office’s reference to manual intervention.
The way automation is being built into different stages of the decision-making process in the provision of social services is highly problematic.
We’ve observed tactics such as automated digital identity verification as well as automated eligibility assessments and so-called ‘fraud’ detection mechanisms.
Automating these processes while failing to build sufficient safeguards, including human intervention and review, has led to discrimination and unjust sanctions against people who are eligible for support. There are documented examples of automation disproportionately affecting groups with protected characteristics, and of a lack of transparency about what data is being used and how it informs life-changing decisions.
So ultimately, the Home Office have replied to our questions without really addressing our fundamental point – that asylum seekers are subject to automated decisions based on potentially spurious data.
The Home Office seems to think it is legitimate and proportionate to cut off someone’s basic subsistence simply because their systems indicated that someone regularly uses their Aspen Card away from where they live.
While the Home Office refers to ‘manual intervention’, we are still far from clear whether a human ultimately makes the decision to cut off the subsistence of another human – or whether the algorithms are simply left to do the dirty work.