UK government is now using AI to make life-changing decisions for its citizens

Daniel Sims

A hot potato: Concerns about AI often revolve around issues such as misinformation or the potential for the technology to elude human control. However, an arguably more real concern today is how governments could employ AI and their institutional understanding (or lack thereof) of its flaws. For instance, the UK government seems to have embraced the technology at a pace that might be considered hasty and potentially unsafe.

The Guardian reports that multiple UK government institutions have started utilizing AI in ways that could significantly affect the daily lives of ordinary people. The technology now plays a role in various procedures, ranging from arrests and marriage licenses to benefit payments.

The use of facial recognition systems by the police has been contentious since before AI became a widely discussed trend. Critics have long warned of its potential inaccuracy, especially when analyzing subjects with darker skin tones. Such inaccuracies have even led to wrongful detentions in the past. Despite being aware of these shortcomings, the London Metropolitan Police continues to employ facial recognition, making modifications that arguably impair the technology.

The National Physical Laboratory stated that the system typically maintains a low error rate under default settings. However, when the Metropolitan Police lowers its sensitivity threshold – possibly in an effort to identify suspects faster – the system produces more false positives. Consequently, its accuracy for Black people diminishes, becoming five times worse than its accuracy for White individuals.
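The tradeoff the NPL describes can be sketched in the abstract: a match system compares a score against a threshold, and lowering that threshold catches more suspects but also flags more innocent people. The numbers and score distributions below are invented for illustration and have nothing to do with the Met's actual system.

```python
# Illustrative sketch only (not the Met's real system): how lowering a
# face-match threshold increases false positives.
import random

random.seed(0)

# Simulated similarity scores: genuine matches cluster high,
# non-matches (innocent passers-by) cluster lower.
impostor_scores = [random.gauss(0.55, 0.10) for _ in range(10_000)]

def false_positive_rate(scores, threshold):
    """Fraction of non-matching faces wrongly flagged at this threshold."""
    return sum(s >= threshold for s in scores) / len(scores)

strict = false_positive_rate(impostor_scores, 0.80)  # conservative default
loose = false_positive_rate(impostor_scores, 0.65)   # lowered threshold

# The looser setting flags many more innocent people.
print(f"strict: {strict:.3f}, loose: {loose:.3f}")
```

The point of the sketch is that the error rate is not a fixed property of the technology; it is a dial the operator sets, which is why "low error rate under default settings" says little about how the system behaves in the field.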

Furthermore, AI-based tools employed by the government to approve benefits and marriage licenses have shown a tendency to discriminate against applicants from certain countries. A member of parliament highlighted numerous instances in recent years where benefits were inexplicably suspended, putting individuals on the brink of eviction and extreme poverty. The suspected underlying issue is a system used by the Department for Work and Pensions (DWP) for detecting benefits fraud, which partially relies on AI.

Even without substantial evidence pointing to fraud, this tool disproportionately flags Bulgarian nationals. The DWP insists the system doesn't consider nationality. Yet, they admit to not fully grasping the AI's inner workings, possess limited ability to inspect it for bias, and refrain from disclosing their findings, fearing that bad actors could game the system.

Similarly, the Home Office faces challenges with an AI-driven tool designed to identify sham marriages. While this system streamlines the approval process for marriage licenses, internal evaluations discovered a significant number of false positives, particularly concerning applicants from Greece, Albania, Bulgaria, and Romania.

There may be other oversights in the government's deployment of AI, but without transparent data from the relevant departments, it's hard to pinpoint them.

Misunderstandings regarding the limits of AI have caused serious incidents within other government and legal institutions. Earlier this year, a US lawyer tried to use ChatGPT to cite cases for a federal court filing, only to find that the chatbot had fabricated all of them. Such cases increasingly prove that the genuine risk of AI might stem less from the technology itself and more from human misuse.


 
This further highlights two things.

1. The UK has serious bureaucratic issues. ANY problem with benefits or government programs requires mountains of paperwork, back and forth, and arguments with councils and courts, often with no resolution. It's a nightmare. And since the UK averages about as wealthy as Missouri, everyone needs every pound just to scrape by in their bizarre economic situation. Throwing AI into this mix is bound to cause massive issues; the only thing that saves us from bureaucracy is its inefficiency, and that flies out the window with AI.

2. Thank god I'm an American with the 5th Amendment. Such use of AI to identify criminals has already been made illegal in several US cities and has by and large been seen as far too risky to deploy en masse. Of course, the UK police are too busy arresting people for mean tweets to handle things like knife attacks, so who knows how much damage this will actually do.
 
If AI is to be given so much power, a direct chain of liability needs to be established so that when someone is harmed by AI, the "owners" can be held responsible. Those laws should also set minimum damages so those harmed can be "made whole" again, and rules/methods should be established to determine when the AI must be abandoned.
 
My take from this is that if I'm going to the UK, I shouldn't do the following:

1 - don't get a marriage license
2 - don't be black
 
I didn't know Bulgarians were black.
Point 2 I made was about this part of the story; you must have missed it:

The use of facial recognition systems by the police has been contentious even before AI became a widely-discussed trend. Critics have long warned its potential inaccuracy, especially when analyzing subjects with darker skin tones. Such inaccuracies have even led to wrongful detentions in the past. Despite being aware of these shortcomings, the London Metropolitan Police continue to employ facial recognition, making modifications that arguably impair the technology.

The National Physical Laboratory stated that the system typically maintains a low error rate under default settings. However, if the Metropolitan Police reduces its sensitivity – possibly in an effort to identify suspects faster – it results in more false positives. Consequently, the system's accuracy for Black people diminishes, becoming five times less precise compared to its accuracy for White individuals.
 
It's an if-then script; it is not self-aware and therefore not remotely intelligent, let alone capable of becoming so.
 