The role of private digital infrastructure providers in shaping the exercise of civil liberties in the digital sphere, and the role the law plays in facilitating this power, have been the subject of debate in recent years. Relatively less attention has been paid to the impact these ‘new governors’ have on the delivery of public services. As the State becomes increasingly dependent on privately provided AI systems, there is a real risk that public values (such as participation, transparency, and accountability) will be weakened. Historically, procurement rules have been used to ensure that public-private partnerships align with public objectives and values. Many lawyers, myself included, therefore surmise that when the State buys AI systems to assist with the delivery of public services, public procurement law will act as a constraint on the power that such arrangements grant to private operators.
In Responsibly Buying Artificial Intelligence: a ‘Regulatory Hallucination,’ Albert Sanchez-Graells clinically dispels such misplaced faith in procurement law, labelling it a ‘regulatory hallucination.’ Like AI hallucinations, this type of regulatory hallucination is ostensibly plausible but ultimately incorrect, and it carries immediate, tangible consequences (such as the mass harm that resulted from the Australian government’s wrongful demands, based on the Robodebt system, that welfare recipients repay benefits). While Sanchez-Graells’ primary analytical focus is the UK, where under the National AI Strategy public buyers are expected to ‘confidently and responsibly procure AI technologies for the benefit of citizens’, the logic of the argument applies equally to other jurisdictions.
The main thrust of the article, and of the book that expands on its claims, is that the public buyer is badly placed to act as a public sector digital gatekeeper and self-regulator. According to the UK Government’s AI policy, AI should conform to high-level principles including fairness, accountability, contestability, and safety (amongst others). Responsible AI procurement therefore requires public buyers of AI to translate these substantive requirements into tractable contractual terms, an exercise Sanchez-Graells terms AI ‘regulation by contract’. While the UK has had some positive experience of using procurement to achieve societal goals (such as environmental protection), this has not been an unmitigated success. More importantly, Sanchez-Graells identifies two assumptions underpinning the presumed effectiveness of regulation-by-contract that simply do not hold in the digital context.
The first assumption is that AI regulation-by-contract can act as a two-sided gatekeeper, disciplining the behaviour of both the tech provider and the public sector user of AI (for instance, a Welfare Department). However, as Sanchez-Graells illustrates, agency theory suggests the opposite: the procurement arm of government acts as the agent of public sector AI users such as Welfare Departments, not as a constraint on them. A role reversal in which the public buyer (the procurement arm) must act as gatekeeper of the public user (the Department) rather than as its agent creates internal governance challenges that procurement law is not equipped to resolve. If, for instance, the procurement arm is institutionally embedded within the organisation that will use the AI, it is unrealistic to expect the principal-agent relationship to be reversed to enable oversight. Furthermore, the ‘decentred interactions’ between the public sector AI user (the Department) and the tech provider may allow them to jointly shape the effective deployment of AI systems in ways that escape the influence of procurement law. This may be due to timing (procurement law primarily bites before contracts enter into force) and to the limited tools available to public procurers (their technical expertise, for example).
The second assumption that the article challenges is that, where there is AI regulation-by-contract, the public sector acts as the rule-maker and the tech provider as the rule-taker. Sanchez-Graells emphasises how, in the absence of detailed public guidance on how to implement AI principles, the public buyer is funnelled towards private standards. Dependence on such private standards to give substance to fundamental rights has been criticised in the context of the EU AI Act. The risk is one of regulatory tunnelling, whereby decision-making power is displaced from the public buyer to the tech provider. The tech provider has the capacity to translate the contract’s requirements into technical and organisational measures based on industry standards (where they exist) or its own preferences (where they do not). Sanchez-Graells also points to the risk of industry shaping standards for commercial gain where regulatory goals are difficult to define or incommensurable.
Following this bleak assessment of the potential for procurement to shape the public use of AI systems, Sanchez-Graells makes the case for institutional reform and the creation of an independent regulator for public sector AI use. This regulator would prevent the public sector from deploying technological solutions that breach fundamental rights and digital regulation principles, and would also be tasked with avoiding regulatory capture and commercial determination. To achieve these aims, it would require both independence and digital capability. The new regulator would also set mandatory requirements for public sector digitalisation through standards certification and deployment authorisation.
Irrespective of the political feasibility of this institutional reform, through the preceding analysis Sanchez-Graells leaves the reader in no doubt that lawyers concerned with public values and private power should be attentive to procurement law. Procurement law may well be the next legal framework to legitimise the expansion and entrenchment of private power in the digital environment, albeit this time at the direct expense of public power.