UN WG on business and human rights' report on AI procurement — key findings and recommendations — How to Crack a Nut
Last week, the UN working group on business and human rights formally presented its thematic report on the procurement and deployment of artificial intelligence systems by States and businesses (A/HRC/59/53, 14 May 2025 — note there is also an executive summary infographic).
The report focuses on actions to be taken to facilitate alignment of AI procurement and deployment with the UN's Guiding Principles on Business and Human Rights, and addresses organisations procuring rather than developing AI. The report approaches procurement in broad terms by encompassing both public and private procurement, and by taking into account the position and responsibilities of States, businesses and stakeholders. The report contains a series of findings and recommendations.
Findings on the regulatory landscape
One of the report's key findings is that 'States are increasingly moving from voluntary guidelines to binding legislation on AI and human rights, such as through the European Union AI Act and Council of Europe AI Convention. However, there are significant gaps in terms of rights-respecting procurement and deployment of AI systems, including a lack of a human rights-based approach, no consensus on key definitions, insufficient integration of the perspective of the Global South, the availability of broad exceptions and limited involvement of civil society. Further, enforcement gaps and loopholes are weakening human rights protections in existing legislation on AI and human rights.' This deserves a closer look.
The report highlights that 'Globally, there are over 1,000 AI-related standards and over 50 AI governance initiatives based on ethics, accountability or safety principles'. Although unsurprising, I find this interesting, as it speaks to the fragmentation and duplication of regulatory efforts that create a complex landscape. Given the repeated recognition that AI challenges transcend borders and the calls for international collaboration (eg here and here), there is clearly a gap still to be addressed.
In that regard, the report stresses that 'The lack of consensus on key concepts such as "AI" and "ethics" is leading to inconsistencies in the regulation of AI systems and is especially problematic given the transnational nature of AI', and highlights UNESCO's Recommendation on the Ethics of Artificial Intelligence as the kind of document that could be used as a blueprint to promote policy coherence across jurisdictions.
Although the report identifies a recent shift from voluntary guidelines to legally binding rules for AI systems, such as the EU AI Act or the Council of Europe Framework Convention on AI, it also highlights that 'there is still uncertainty regarding how to address certain loopholes in the EU AI Act' and that the Framework Convention raises similar challenges in relation to the many exemptions it contains, and the way it gives signatory States discretion to set its scope of application. Although the report does not take an explicit position on this, I think it only takes a small step to conclude that legislative action needs to be much more decisive if the challenge of upholding human rights and fundamental values in AI deployment is to be met.
Another key finding of the report is that 'States are largely procuring and deploying AI systems without adequate safeguards, such as conducting human rights impact assessments as part of human rights due diligence (HRDD), leading to human rights impacts across the public sector, including in relation to healthcare, social security, financial services, taxation, and others.' This results from the limited emerging approaches to AI procurement.
Indeed, focusing on the regulation of AI public procurement, the report highlights a series of approaches to creating legally binding general requirements for AI procurement and deployment, such as in Korea, Chile, California, Lithuania or Rwanda, as well as efforts in other jurisdictions to address specific aspects of AI deployment. However, the report also stresses that these regimes tend to have exemptions in relation to the most controversial and potentially harmful areas of AI deployment (such as defence and intelligence), and that the practical implementation of those regimes still hinges on the limited development of commonly understood standards and guardrails and, crucially, on public sector digital skills.
On the latter, the report plainly states that 'Currently, there is an imbalance in knowledge and expertise between States and the private sector around what AI is, how it works and what outcomes it produces. There is also little space and time for procurers to engage critically with the claims made by AI vendors or suppliers, including as they relate to potential and actual human rights impacts.' Again, this is unsurprising, but this renewed call for investment in capacity-building should make it abundantly clear that with insufficient state capacity there can be no effective regulation of AI procurement or deployment within the public sector (because, ultimately, as we have recently argued, procurement is the infrastructure on which this regulatory approach rests).
The report then covers in detail business responsibility in relation to AI procurement and deployment, and addresses issues that are relevant even in contexts of light-touch self-regulation, such as due diligence, contextual impact assessments, or stakeholder involvement. Similarly, the report finds that 'Businesses are largely procuring and deploying AI systems without conducting HRDD, risking adverse human rights impacts such as biased decision making, exploitative worker surveillance, or manipulation of consumer behaviour.'
The final part of the report covers access to remedies and, in another of its key findings, stresses that 'Courts are increasingly recognizing the human rights-related concerns of AI procurement and deployment, highlighting the urgent need for transparency and public disclosure for public and private sector procurement and deployment of AI systems, and the fact that existing remedy mechanisms lack resources and enforcement power, leaving communities without effective recourse for AI-related human rights abuses. Stronger legal frameworks, public reporting obligations, and independent oversight bodies are needed to ensure transparency, accountability and redress.'
The report thus makes the primary point that much increased transparency on AI deployment is required, so that existing remedies can be used effectively by those affected and concerned. It also highlights how existing remedies may be insufficient and, in particular, new 'mechanisms will also need to be set up, creating integrated approaches that recognize the intersectional nature of AI-related harms and their disproportionate impact on at-risk groups. Effective redress for AI-related harms requires both strong institutional frameworks and deep understanding of how technology intersects with existing patterns of human rights violations and abuses, both of which are currently missing' (this largely chimes with my view that we need a dedicated authority to oversee public sector AI use, and that preventative approaches must be explored given the risks of mass harms arising from AI deployment).
Recommendations
In order to address the unsatisfactory situation documented in the report, the working group formulates a long list of recommendations to States, businesses and other actors. In the executive summary, the following are highlighted as key recommendations to States.
- Establish robust legal, regulatory and policy frameworks on AI: Develop and enforce AI regulations following a human rights-based approach that are aligned with international human rights law, ensuring transparency and accountability in AI procurement and deployment and legal certainty for all.
- Mandate HRDD: Require public disclosure, HRDD, and safeguards for AI systems procured and deployed by private and public sector actors, including AI systems used in high-risk sectors like law enforcement, migration management, and social protection.
- Prohibit Harmful AI Systems: Ban AI technologies incompatible with human rights, like mass surveillance, remote real-time facial recognition, social scoring and predictive policing.
- Ensure Access to Remedy: Strengthen judicial and non-judicial mechanisms to address AI-related human rights abuses, shifting the burden of proof to businesses and authorities, and ensuring adequate resources.
- Promote AI Governance Collaboration: Build global cooperation to establish common AI standards, fostering interoperability and ensuring the representation of Global South perspectives.
However, it is worth mentioning other recommendations included in the much longer list in the report, as some of them are directly relevant to the specific task of AI procurement. In that regard, the report also recommends that, with regard to AI procurement and deployment, States:
- Provide specific guidance to public sector procurement actors on a human rights-based approach to the procurement of AI systems, including specific limitations, guidance and safeguards for AI systems procured and deployed in high-risk sectors and areas such as justice, law enforcement, migration, border control, social protection and financial services, and in conflict-affected areas;
- Provide capacity-building for all stakeholders to understand the technical and human rights dimensions of AI, and ensure accessible, explainable and understandable information about the procurement and deployment of AI systems, including by mandating public registration of AI systems deployed by both public and private entities;
- Ensure independent oversight of AI systems and require the provision of clear documentation on AI system capabilities, limitations and data provenance;
- Promote meaningful stakeholder consultation and participation in decision-making processes around AI procurement and deployment.
These recommendations will resonate with the main requirements (in principle) applicable under eg the EU AI Act, or with proposals for best practice in AI procurement.
Final comment
The report helpfully highlights the current state of affairs in the regulation of AI procurement and deployment within the public and private sectors. The issues it raises are well known, and many of them involve complex governance challenges, including the need for levels of public investment commensurate with the socio-technical challenges presented by the digitalisation of the public sector and of key private market services.
The report also highlights that, in the absence of adequate regulatory interventions, States (and businesses) are building a significant stack of AI deployments that are simply not assured for relevant risks and, consequently, are creating an installed base of potentially problematic AI embeddings across the public sector and business. If anything, I think this should be a call for a renewed emphasis on slowing down AI adoption to allow for the development of the necessary governance instruments.
