A hot potato: Concerns about AI often center on misinformation or the technology slipping beyond human control. An arguably more immediate worry, though, is how governments are already using AI, and how well (or poorly) they understand its flaws. The UK government, for one, appears to have embraced the technology at a pace that looks hasty and potentially unsafe.

The Guardian reports that multiple UK government institutions have started using AI in ways that could significantly affect the daily lives of ordinary people. The technology now plays a role in procedures ranging from arrests and marriage licensing to benefit payments.

The police's use of facial recognition systems was contentious even before AI became a widely discussed trend. Critics have long warned of its potential for inaccuracy, especially when analyzing subjects with darker skin tones, and such inaccuracies have led to wrongful detentions in the past. Despite being aware of these shortcomings, the London Metropolitan Police continues to employ facial recognition, making adjustments that arguably degrade the technology's accuracy.

The National Physical Laboratory found that the system maintains a low error rate under its default settings. However, when the Metropolitan Police lowers the match threshold – possibly in an effort to identify more suspects – false positives increase, and they do so unevenly: at those settings, the system misidentifies Black people at five times the rate it misidentifies White people.
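The trade-off is easy to see in a toy model. The sketch below is purely illustrative – the score distributions are invented for demonstration and are not drawn from the NPL study or the Met's actual system – but it shows the general mechanism: lowering a face-match threshold catches more genuine matches while inflating the false-positive rate much faster.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical similarity scores: higher means "more likely the same person".
# Genuine pairs (same person) tend to score high; impostor pairs score low,
# but the two distributions overlap, which is where errors come from.
genuine = rng.normal(0.80, 0.10, 10_000)    # same-person comparisons
impostor = rng.normal(0.50, 0.10, 100_000)  # different-person comparisons

def rates(threshold: float) -> tuple[float, float]:
    """True-positive and false-positive rates at a given match threshold."""
    tpr = float(np.mean(genuine >= threshold))
    fpr = float(np.mean(impostor >= threshold))
    return tpr, fpr

# Lowering the threshold raises the hit rate, but the false-positive
# rate climbs much faster -- the trade-off described above.
for t in (0.75, 0.70, 0.65, 0.60):
    tpr, fpr = rates(t)
    print(f"threshold={t:.2f}  hit rate={tpr:.1%}  false positives={fpr:.2%}")
```

If the impostor score distribution differs across demographic groups, as audits of real systems have found, the same threshold cut produces different false-positive rates for different groups, which is how a single tuning decision can translate into disparate outcomes.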

Furthermore, AI-based tools the government uses to approve benefits and marriage licenses have shown a tendency to discriminate against applicants from certain countries. A member of parliament highlighted numerous instances in recent years where benefits were inexplicably suspended, pushing individuals to the brink of eviction and extreme poverty. The suspected culprit is a system the Department for Work and Pensions (DWP) uses to detect benefits fraud, which relies in part on AI.

Even without substantial evidence of fraud, the tool disproportionately flags Bulgarian nationals. The DWP insists the system doesn't consider nationality. Yet officials admit they don't fully understand the AI's inner workings, have only a limited ability to inspect it for bias, and won't disclose their findings for fear that bad actors could game the system.

Similarly, the Home Office faces trouble with an AI-driven tool designed to identify sham marriages. While the system streamlines the approval process for marriage licenses, internal evaluations found a significant number of false positives, disproportionately affecting applicants from Greece, Albania, Bulgaria, and Romania.

There may be other oversights in the government's deployment of AI, but without transparent data from the relevant departments, it's hard to pinpoint them.

Misunderstandings about the limits of AI have caused serious incidents in other government and legal institutions as well. Earlier this year, a US lawyer used ChatGPT to cite cases in a federal court filing, only to discover that the chatbot had fabricated all of them. Such cases increasingly suggest that the genuine risk of AI stems less from the technology itself than from human misuse.