AI Governance & Regulation
Friday, September 20
 

4:00pm EDT

Rethinking AI Governance: The Political Economy of the Digital Ecosystem
Friday September 20, 2024 4:00pm - 4:31pm EDT
Link to paper

Abstract:
The boom in AI governance initiatives rests on deeply flawed understandings of digital technology and its underlying political economy. This paper rejects prevailing conceptualizations of the AI governance problem and goes so far as to reject the label “AI” as a meaningful and useful name for the object of governance. What we now call “AI” is really a globally integrated digital ecosystem composed of computing devices, digital networks, digitized data, and software programs. The paper’s theme is that what we now call “artificial intelligence” is not a new technology that creates its own distinctive governance problems, but an outgrowth of computing and of the ecosystem of technical standards, data, devices, and networks that has grown up around it. From a public policy standpoint, “AI” is an unscientific, over-simplified label for evolving applications of computing. The applications we call AI are so numerous, so diverse, and so indistinguishable from computing as to render the concept of “AI governance” meaningless.

The claim that AI doesn’t exist may seem tendentious and exaggerated, but it has the virtue of clearing the decks for a more accurate understanding of the governance implications of the digital transformation. Once we stop obsessing about “AI” and focus attention on the broader digital ecosystem, the governance problems we face are clarified. “Governing” the production and use of intelligent applications requires systemic awareness of nearly all manifestations of computing. In other words, what most people mean by “AI governance” presumes comprehensive data governance, controls on the production and distribution of semiconductors and other devices, effective Internet governance, regulation of cloud providers/platforms, and regulation of the production and distribution of software and software architectures. Further, the policy and governance problems allegedly caused by “AI” predate LLMs and chatbots and have cropped up repeatedly during the longer-term history of computing and the Internet. “AI governance” is just digital governance.

Shifting our focus to the digital ecosystem also facilitates a more realistic assessment of the necessity and proportionality of regulatory interventions. It enhances awareness of the economic and social costs of ecosystem-wide restrictions, particularly regarding freedom of expression, open competition in ICT products and services, and the ability to explore and innovate new applications of computing. Further, when it is clear that the object of governance is the entire digital ecosystem and not some new, isolated thing called “AI,” we are in a much better position to assess what measures would be effective and how much governance is feasible in a world where heterogeneous technologies and distributed decision making are rampant, states compete for power, and no single state has supreme authority over the entire ecosystem.

The paper proceeds along the following lines. Part 1 provides a basic definition and description of the digital ecosystem and its components and explains why that conceptualization works better than various alternatives. Part 2 traces the scientific origins of the digital ecosystem and shows that cybernetic control and automation via artificial intelligence or machine learning were known to be latent in computing technology from the 1940s. Part 3 tracks the evolution of intelligent applications to show empirically how “AI” progress was tied to progressive improvement in the capabilities of all four components of the digital ecosystem, and that every one of the problems attributed to “AI” arose during the evolution of the Internet and other forms of computing. Hence, no clear line can be drawn between the governance of AI applications and the governance of the broader digital ecosystem. Part 4 evaluates some of the current proposals to “govern AI,” demonstrating that they generally attempt to have the tail of AI applications wag the dog of the entire digital political economy, often resulting in ideas that either lack feasibility or entail extraordinary centralizations of power that could backfire on their proponents.
Discussant

Chris Marsden

Monash University
Chris Marsden @prof_marsden is Professor of Artificial Intelligence, Technology and the Law, Director of the Digital Law Group at Monash, and Associate Director for Global Governance of the Data Futures Institute. He was Co-Director of the Warwick-Monash Alliance 'Brussels Eff…
Authors

Milton Mueller

Professor, Georgia Institute of Technology, Internet Governance Project
Milton Mueller is the O.G. of I.G. He directs the Internet Governance Project, a center for research and engagement on global Internet governance. Mueller's books Will the Internet Fragment? (Polity, 2017), Networks and States: The global politics of Internet governance (MIT Press…
Room Y402 WCL, 4300 Nebraska Ave, Washington, DC

4:33pm EDT

AI governance: Compromising democracy or democratising AI?
Friday September 20, 2024 4:33pm - 5:03pm EDT
Link to paper

Abstract:
The increasing integration of artificial intelligence (AI) into society raises critical questions about its impact on democracy. The development of AI governance frameworks presents a crucial opportunity to strengthen democracy, particularly through the lens of participatory and deliberative theories. From this viewpoint, this article explores the extent to which emerging AI governance frameworks uphold participatory and deliberative democracy. Through a comparative analysis of proposed or existing legislation in the European Union (EU), Brazil, and Canada, this study investigates the specific tools and mechanisms each framework uses to involve citizens in AI governance. The analysis reveals that while all three jurisdictions emphasise ethical governance and assessment of AI, along with regulatory dialogue and multi-stakeholder collaboration, this does not effectively extend to the creation of robust and specific mechanisms to facilitate citizens’ participation and deliberation. The Brazilian framework exhibits a stronger commitment to participatory and deliberative democracy, incorporating a wider range of individual rights and direct avenues for citizen input and engagement, e.g. mandatory public consultation in advance of algorithmic impact assessments. Conversely, the Canadian and EU approaches largely rely on existing institutions and processes, overlooking unique challenges, such as knowledge barriers, economic and social injustices, and expert rule, that cannot be fully resolved by the toolset of ethics governance or through alternative venues such as citizens’ assemblies. Overall, the study concludes by advocating the integration of novel mechanisms that can facilitate citizen participation and deliberation within AI governance frameworks, including specifically designed and institutionalised deliberative venues.
Authors

Mehmet Unver

University of Hertfordshire
Discussants

Chris Marsden

Monash University
Room Y402 WCL, 4300 Nebraska Ave, Washington, DC

5:05pm EDT

Aligned with the Blueprint for an AI Bill of Rights? An AI Transparency Evaluation of Company Privacy Notices and Explanations
Friday September 20, 2024 5:05pm - 5:35pm EDT
Link to paper

Abstract:
In its Blueprint for an AI Bill of Rights, the White House lists “notice and explanation” as one of five principles fundamental to protecting the American public as artificial intelligence (AI) is deployed. The Blueprint states “[y]ou should know that an automated system is being used and understand how and why it contributes to outcomes that impact you.” In its description of the notice/explanation principle, the White House emphasizes the importance of plain-language explanations about AI use. Furthermore, a company should describe how it plans to use AI systems and how the systems work, and explain any risks to consumers. While AI transparency is associated with the possibility of auditable and accountable algorithmic systems, more research is needed to develop best practices. This project assesses the extent to which notices from 40 companies align with the White House’s call for AI transparency.

This study adapts a data privacy transparency assessment model to assess the AI transparency of 40 companies: 10 social media companies, 10 e-commerce companies, 10 brick-and-mortar companies, and 10 banks. AI transparency was assessed via qualitative content analysis of AI transparency materials (via privacy policies) from company websites. Building on previous studies addressing data privacy transparency, the assessment involved assigning full, half, or zero stars on ten AI transparency criteria, including: whether transparency materials are accessible via company websites and presented in plain language (assessed by Flesch-Kincaid grade reading level analysis); whether references to applicable laws/regulations are provided; whether information about how AI systems work, and the connections between AI systems and company decision-making, is explained; whether the risks of AI use are explained; whether companies disclose details about data retention policies and data storage/processing; and whether company AI transparency materials are posted elsewhere online.
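For illustration, the arithmetic behind these star scores can be sketched in a few lines of Python. This is a minimal sketch under stated assumptions: the criterion names, the unnamed tenth criterion, and the sample company ratings below are hypothetical stand-ins, not the authors' instrument or data.

from statistics import mean

# Ten transparency criteria, each rated 0.0, 0.5, or 1.0 stars (max 10 total).
# Names are paraphrased from the abstract; the tenth is an assumed placeholder.
CRITERIA = [
    "accessible_via_website",    # materials reachable from the company website
    "plain_language",            # e.g. checked via Flesch-Kincaid grade level
    "laws_referenced",           # applicable laws/regulations cited
    "how_ai_works",              # how AI systems work is explained
    "ai_linked_to_decisions",    # link between AI and company decisions
    "risks_explained",           # risks of AI use explained
    "data_retention",            # data retention policy disclosed
    "data_storage_processing",   # storage/processing details disclosed
    "posted_elsewhere",          # AI materials posted elsewhere online
    "notice_of_automation",      # assumed tenth criterion (not named above)
]

def total_stars(ratings):
    """Sum the 0/0.5/1-star ratings across the ten criteria."""
    return sum(ratings[c] for c in CRITERIA)

# Hypothetical ratings for two companies, showing how a sector average arises.
companies = {
    "ExampleSocial": dict.fromkeys(CRITERIA, 0.5),  # 5.0/10 stars
    "ExampleBank": {**dict.fromkeys(CRITERIA, 0.0), "laws_referenced": 1.0},
}
sector_average = mean(total_stars(r) for r in companies.values())
print(f"Sector average: {sector_average:.2f}/10 stars")  # prints 3.00/10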

Findings suggest companies provide privacy-related components of AI transparency but have yet to disclose details about the use and implications of algorithmic, automated systems. The average score across the 40 companies is 2.95/10 stars; social media companies averaged the highest (3.35/10), ahead of the banks (2.95/10) and of the e-commerce and brick-and-mortar companies (2.75/10 each). YouTube/Google had the highest score in the sample with 4.5/10 stars, while Alibaba and Disney+ had the fewest with 1.5/10. Every company sampled provided access to privacy materials via its homepage, and all provided information about applicable laws/regulations. Most companies provided details about data storage/processing, and about half described data retention policies. Few provided details about how AI systems work, how AI systems link to company practices, or the risks of AI use. Most companies provided some form of information about AI or machine learning somewhere other than the privacy policy. To ensure the auditability and accountability of AI systems, companies are encouraged to improve upon these transparency efforts by better aligning with the calls for AI transparency in the White House Blueprint for an AI Bill of Rights. Accessible, plain-language notices are recommended, as is the inclusion of information about how AI systems work at each company and the implications and risks of automated decision-making that may result from digital service use.
Authors

Jonathan Obar

Assistant Professor, York University

Motunrayo Akinyemi

York University
Discussants

Chris Marsden

Monash University
Room Y402 WCL, 4300 Nebraska Ave, Washington, DC
 