A major focus of Computer Weekly’s technology and ethics coverage in 2022 was working conditions throughout the tech sector, from forced labour and slavery in technology supply chains, to UK Amazon workers staging spontaneous “wildcat” strikes in response to derisory pay rises and warehouse conditions.
Other stories in this vein included coverage of accusations that “soft union-busting” tactics were used by app-based food delivery firm Deliveroo to scupper its workers’ grassroots organising efforts, and the ongoing court case against five major tech firms for their alleged role in the maiming and deaths of people extracting raw materials in the Democratic Republic of Congo.
Artificial intelligence (AI) also featured heavily in Computer Weekly’s technology and ethics coverage in 2022, with stories published on the tech sector’s lacklustre commitment to “ethical” AI, as well as on the pitfalls and challenges of auditing AI-powered algorithms.
Police technology was another major focus of 2022, as policing bodies continued to push ahead with new tech deployments such as live facial recognition (LFR) despite serious concerns about the technology’s effectiveness and proportionality.
Other stories focused on how technology is developed and deployed, and the underlying power dynamics at play. In January, for example, Computer Weekly spoke with Forensic Architecture about how it uses tech to challenge official state and corporate narratives around human rights abuses.
Computer Weekly also covered how technology is being deployed by the UK’s border control ecosystem to deter and punish migrants crossing the English Channel.
In January 2022, Computer Weekly spoke with Forensic Architecture (FA), an international, inter-disciplinary research agency that uses a range of digital technologies to investigate human rights violations committed by states and corporations around the world.
The conversation focused on how FA uses various digital technologies – including open source intelligence, 3D modelling, photogrammetry, virtual reality (VR), data mining, audio analysis and more – to investigate cases such as the pushback of migrants over the Greek border, and the killing of Mark Duggan by London police.
The story details how FA approaches using technology to analyse and challenge official accounts of state and corporate abuse. Its work has been used by a range of lawyers and human rights-focused non-governmental organisations (NGOs), and has been presented before United Nations panels.
In March, the UK government came under fire for spending tens of millions of pounds on border surveillance technologies to deter migrants from crossing the English Channel, rather than using those resources to provide safe passage.
To monitor the English Channel – a stretch of water only 21 miles wide at its narrowest point – the UK government uses a variety of advanced surveillance technologies, including unmanned drones and AI-powered satellites.
Lawyers, human rights groups and migrant support organisations, however, argued that while these technologies do have the capacity to protect people’s lives if used differently, they are currently deployed with the clear intention of deterring migrants from crossing – or helping to punish those who do.
Since the UK Supreme Court ruled in February 2021 that Uber drivers should be classified as workers rather than self-employed, working conditions at gig economy firms have come under increased scrutiny.
In that time, many gig economy firms have been working to recalibrate their relationship with workers by looking to sign deals with larger unions.
However, in May 2022, app-based food delivery firm Deliveroo was accused of “soft union busting” after signing a deal with GMB, which smaller unions have condemned as a “hollow and cynical PR move” designed to scupper the grassroots, self-organising efforts of the company’s workers.
The Independent Workers’ Union of Great Britain (IWGB) said the “announcement is nothing more than a hollow and cynical PR move” aimed at putting the minds of investors and customers at ease, rather than delivering meaningful change for workers.
The union also said the agreement’s recognition of Deliveroo riders as “self-employed” rather than “workers” further undermines their organising efforts, as this employment status means they are not legally entitled to sick pay, holiday pay or the minimum wage.
In November 2021, a US district court judge dismissed a legal case against five major US technology companies accused by the families of dead or maimed child cobalt miners of knowingly benefiting from human rights abuses in the Democratic Republic of Congo (DRC), which Computer Weekly first reported on in late 2019.
In July 2022, however, presiding judge Carl J Nichols was found to hold significant stocks and shares in four of the five firms, prompting a motion to vacate his decision. The US Court of Appeals for the District of Columbia Circuit denied the motion, ruling: “Neither Judge Nichols’s purchases of bonds issued by several appellees, nor his purchases of mutual funds or exchange-traded funds, resulted in violations” of Section 455, the section of the relevant US code that deals with the disqualification of judges.
Responding to the decision, Terrence Collingsworth – the lawyer representing Congolese families – said: “We are very disappointed in the Court of Appeals’ decision that Judge Nichols’s ownership of and major new investments in these companies (except Dell) was not a conflict of interest sufficient to vacate his decision dismissing the case.”
Despite the decision, the Court of Appeals is still separately considering Nichols’s ruling “on the merits” of the case, an appeal the victims’ families had planned to bring ever since the dismissal, meaning the initial dismissal could still be overturned.
In July 2022, Computer Weekly spoke with human rights researchers and digital supply chain management firms about how forced labour can be identified in technology supply chains, and the limits of the tech sector’s current approach.
They said that the identification of forced labour and slavery is no longer a technology problem for the IT sector, with a lack of government enforcement and corporate inaction being the major barriers to effective change.
Leo Bonanni, co-founder and CEO of supply chain transparency firm Sourcemap, said: “It’s been proven time and time again that even industries that have raw materials coming from some of the most remote parts of the planet can have real-time traceability on their goods from end to end – it’s just a matter of more widespread adoption and, I’m not going to lie, there does need to be a change in culture at a lot of companies.”
At the time of writing in August 2022, the Metropolitan Police Service (MPS) had deployed live facial recognition (LFR) technology six times during the year, marking its first deployments since February 2020, when use of the technology was paused as a result of the pandemic.
While the biometric information of 144,366 people was scanned over the course of these deployments, just eight people were arrested, for offences including possession of Class A drugs with intent to supply, assaulting an emergency worker, failures to appear in court, and an unspecified traffic offence.
A wave of unofficial wildcat strikes swept across Amazon’s UK warehouses in August, with hundreds of workers across the country independently staging walkouts, sit-ins and work slowdowns in protest of derisory pay rises from the e-commerce giant.
The actions began on 3 August 2022 at Amazon’s LCY2 warehouse in Essex, where 700 logistics workers spontaneously walked out after receiving a 35p pay rise offer; staff went on to stage wildcat strikes at no fewer than 10 Amazon facilities over similar offers from the company.
Speaking with Computer Weekly in October, ethical AI experts said the massive expansion of AI ethics has not necessarily led to better outcomes, or even a reduction in the technology’s potential to cause harm – despite a deluge of ethical AI principles, guidelines, frameworks and declarations being published by private organisations and government agencies around the world since 2018.
The emerging consensus from researchers, academics and practitioners is that, overall, such frameworks and principles have failed to fully account for the harms created by AI, because they have fundamentally misunderstood the social character of the technology, and how it both affects, and is affected by, wider political and economic currents.
During the inaugural International Algorithmic Auditing Conference, hosted in Barcelona on 8 November by algorithmic auditing firm Eticas, experts said organisations must conduct end-to-end audits that consider both the social and technical aspects of AI to fully understand the impacts of any given system.
They added, however, that a lack of understanding around how to conduct holistic audits and the limitations of the process is holding back progress.
The MPS removed more than 1,100 people from its controversial gangs violence matrix in November, and committed to a “complete redesign” of the system in the face of imminent legal action from human rights campaign group Liberty, which was brought on behalf of musician Awate Suleiman and community interest group Unjust UK in February.
In the week the case was due to appear in court, the MPS instead agreed to carry out a complete overhaul of the database and removed 65% of all those listed on the matrix.
Liberty has long argued that the operation of the matrix infringes data protection legislation and breaches the right to private and family life, as the MPS’s broad sharing of personal data with a variety of third parties puts those affected at significantly greater risk of over-policing, school exclusion, eviction and, in some cases, deportation or the denial of welfare benefits.