On September 14, 2023, the panel "Unveiling Trade Secrets: Exploring the Implications of Trade Agreements for AI Regulation in the Global South", organized by Data Privacy Brasil and REBRIP, took place during the 2023 WTO Public Forum, whose overall goal was to examine how trade can contribute to a greener and more sustainable future. Considering both the potentials and risks of Artificial Intelligence, as well as the prevalence of broader debates on AI governance and regulation, this panel's goal was to discuss the concrete implications of particular trade clauses on the ability of governments to regulate and require meaningful algorithmic accountability.

While this debate has been gaining traction in Europe and the US, which are parties to agreements that include such clauses, its potential impacts on Global South countries that are in the process of developing their own domestic legal frameworks to regulate AI and algorithmic systems are still unclear. The panel brought together Mariana Rielli, Director of Data Privacy Brasil; Sofía Scasserra, Director of the Observatorio de Impactos Sociales de IA, Transnational Institute; and Deborah James, Director of International Programs, Center for Economic and Policy Research. It was moderated by Melanie Foley, Global Trade Watch Deputy Director. This blog compiles the notes from the panel, altered only for consistency and style, so that the main takeaways of this discussion can be shared widely.

Melanie Foley explained that Public Citizen is a U.S. consumer advocacy organization and a member of the Digital Trade Alliance, a network of more than a dozen organizations working for digital rights in the trade space. She opened by asking attendees in the packed room to raise their hands if they had ever used ChatGPT or a similar AI program. Nearly all did, so they are likely all aware, she explained, that algorithms don't just decide the next song you'll hear on a streaming service. They're already a huge part of modern society and will only continue expanding into more and more facets of our lives. They're increasingly making decisions about access to jobs and housing, health care, prison sentencing, educational opportunities, insurance rates and lending, deployment of police resources, and much more.

However, software can be programmed to serve illicit purposes or can have unintended consequences. We are only beginning to understand the various harms that can occur due to the use of AI tools — which may range from replicating and exacerbating discrimination or biases to expropriating and misusing consumer data. It’s reasonable that governments may want to review these groundbreaking tools before they are released to an unsuspecting public. Despite this, certain provisions that are being proposed in some trade pacts, and that have already been agreed to in others, conflict with governments’ attempts to access and verify the source code used in various products to prevent racial and gender discrimination or other harms.

She then introduced Deborah James and asked her to give some specifics on the risky trade rules we’re here to discuss, and to explain where the tech industry has been most successful thus far in advancing its agenda.

Deborah James laid out a comprehensive overview of this particular debate.

AI, as James pointed out, relies on data, and Big Tech companies engage in extensive data collection, through surveillance, by purchasing data, or by acquiring other companies. An algorithm, in turn, tells the computer what to look for in the data and what decisions to make based on what it finds. Those algorithms are based on the underlying source code.

The significance of regulating AI becomes abundantly clear in light of its potential to influence democracy, human rights, racial justice, labor rights, etc. Presently, many AI-related decisions remain hidden within a “Black Box,” granting tech companies the power to evade accountability for their algorithmic choices. This opacity raises concerns about the erosion of fundamental principles of democratic societies.

Furthermore, the tech sector stands as one of the most lucrative yet least regulated industries in history. Big Tech companies have consistently opposed regulation, both in the US and elsewhere, often objecting that it would stifle innovation or harm consumers themselves. James drew attention to the fact that this strategy also applies to the intersection of AI and international trade agreements, highlighting that Big Tech has actively lobbied trade ministries to incorporate policies that protect their profit models. They call for the inclusion in trade agreements of provisions that would bar governments from requiring the disclosure of source code and data. Big Tech seeks to use "trade" agreements because they are binding and because they sideline the democratic debates occurring worldwide about the role of these corporations in society.

Of particular concern is the attempt to enshrine secrecy regarding source code and data in these trade agreements, effectively curtailing governments' ability to require meaningful algorithmic transparency and accountability. James further emphasized that these proposals are crafted to serve the interests of Big Tech corporations, in both developed and developing countries. They aim to convince the Global South that these rules are beneficial and aimed at development, despite their clear alignment with corporate interests that have no mandate or expertise in development.

In response to the argument that the exceptions proposed in these agreements would be enough, she highlighted that the history of trade tribunals indicates a propensity to prioritize commercial interests over the public interest and human rights, casting doubt on the efficacy of relying solely on such exceptions. The reasons she gave are threefold: first, these exceptions are aimed at regulators and ex-post judicial enforcement; second, exceptions for many other social ills that are often a result of algorithmic bias, such as false information, emotional manipulation, and others raised by consumer advocacy organizations, do not appear in the text; lastly, the exceptions contemplate, however insufficiently, only known risks of AI systems.

As new risks and harms become known, it will be even more important for governments to maintain the power to regulate such algorithms to ensure that human and fundamental rights are upheld and that harms to society are reduced.

Such transparency and regulatory oversight would ultimately empower trade unions, civil society, and technical experts to scrutinize and assess algorithmic decisions, particularly in cases involving labor disputes and potential violations of human rights.

In summary, Deborah James stressed the pressing need for transparency and comprehensive regulation of AI, cautioning against trade agreements that prioritize the interests of Big Tech at the expense of accountability and democratic values.

In Latin America, there’s a noticeable shift in perspective towards the significance of regulating artificial intelligence (AI) and data flows, particularly considering the importance of ensuring diversity and representation in the data that fuels those systems. 

Sofía Scasserra provided a brief overview of several movements in the region and how those relate to trade. She pointed out the growing realization that, while fostering technological advancements is essential, it’s equally crucial to protect regional interests, which includes retaining control over data generated within the respective regions. 

At the same time, Scasserra pointed out that the region itself is very heterogeneous, including when it comes to the maturity of these debates and of actual enforceable regulation of data flows and digital rights, which is currently in place in some countries, like Brazil and Argentina, but not in others, like Paraguay and Bolivia. Still, concerns over not just regulating AI, but "getting it right", so that it serves the interests of the region and its people, are already prominent.

So, a one-size-fits-all approach won't suffice, as there's a pressing need for comprehensive policies that take into account the unique challenges and priorities of each region, which in turn requires time. These policies should extend beyond privacy concerns to encompass broader issues like environmental impact, labor rights, and collective rights, connections that are already explicit across Latin America due to its history and sociolegal contexts.

As an example, Scasserra mentioned how algorithms that dictated food delivery workers' schedules required them to be constantly moving around cities in Colombia in order to be assigned gigs. That led to a number of unintended consequences, such as environmental damage and increased dangers and hazards for the workers themselves, who managed to effect change regarding these practices through organized protest. This, according to the expert, is one example of how the state might have proactively sought details about the algorithm for prior evaluation, rather than reacting after significant harm had already occurred.

Then, Scasserra turned to the interplay between all these movements in Latin America and the (digital) trade agenda in the region, an area that is also highly heterogeneous. For instance, the Trans-Pacific Partnership (TPP) features an assertive digital chapter that restricts access to domestically produced data and forbids requests for accessing source code. 

Conversely, the Mercosur countries lack Free Trade Agreements (FTAs) with comprehensive digital trade provisions like those seen in the Pacific region, but the possibility of signing them seems to be gaining traction. If such FTAs are signed, she argued, regulating AI to protect Latin America's diverse societies will pose considerable difficulties.

Scasserra concluded by reminding the panel, once again, that Latin America has significant potential for crafting solutions to regional challenges, including regarding AI, but that in order for that to be realized and not constrained by opaque trade clauses, there needs to be policy space for regulation, as well as time to develop these regulations and foster local technological advancements that facilitate the region’s goals.

Mariana Rielli started by introducing Data Privacy Brasil, which, along with numerous other organizations, has been diligently working to bridge the gap between individuals concerned about digital rights and privacy and traditional trade-focused organizations. Brazil established a robust civil rights framework for the Internet in 2014 and a data protection law in 2018, both of which were crafted with input from a diverse range of stakeholders. Within Brazil, there exists a vibrant digital rights community that is actively addressing the unequal impacts of datafication, both domestically and across regions.

However, there is a notable disconnect between these important democratic debates on digital rights and the (digital) trade agenda, which often operates opaquely and is challenging for Brazilian CSOs to access or comprehend. This dissonance raises concerns about the coherence of Brazil's own regulatory landscape, current and future.

Brazil is currently in the process of regulating AI systems, a journey that has spanned the last two years. A committee of experts, supported by multi-stakeholder groups and extensive public engagement, has drafted a comprehensive framework for AI regulation. This legislation is not merely a copy-paste of European models but is firmly rooted in Brazil’s constitutional principles, especially data protection as a fundamental right, and adopts a rights-based approach with elements of risk to modulate the intensity of certain obligations and governance measures. 

As such, it provides for a number of algorithmic accountability measures, both in terms of ex-post enforcement and ex-ante requirements focused on active transparency, information, impact assessments, etc. 

With that context in mind, one significant concern lies in the provisions within trade agreements that restrict the disclosure of source code, including both algorithms and APIs. These restrictions can directly impact the ability to enforce algorithmic accountability measures outlined in AI regulation such as the one on the table in Brazil. While transparency and accountability do not always necessitate direct access to source code, the current breadth of trade provisions might even extend to the interface level.

While Brazil is not currently party to mega-regional trade agreements with source code protection provisions, there is growing pressure to join. Brazil has shifted its stance from a historically defensive position to align more closely with the digital trade agenda initiated by the United States, and it remains to be seen how that position will play out in the future, as such a stance could be considered incompatible with its bold steps toward regulating AI with multi-layered provisions. If Brazil were to sign one of these trade agreements, it could potentially hinder the full implementation of its ambitious AI regulations, creating a significant challenge in balancing digital rights and international trade interests.

Discussion 

Two questions from the floor led to the final discussion of the panel. The first was about the differences between a rights-based and a risk-based approach to AI regulation. Both James and Rielli explored the question, highlighting that a focus on rights provides a baseline of protection and shifts the onus onto AI system providers to provide information to users and to be generally accountable beforehand, as opposed to relying solely on ex-post enforcement. The second question was about the implications of those discussions, in general, for second-generation rights (social, economic, and cultural).

In response, Scasserra reflected on how regulation is always "late" and, for AI, the implications are still largely unknown, so these processes must pay attention not only to existing and potential impacts on individual rights, such as privacy, but also to collective rights. Rielli concurred, and added that, despite common narratives, the impacts of AI are not distant; rather, they are already very concrete, and many of them are connected to second-generation rights, like housing, health, and education. James wrapped up the panel by returning to her original point about the importance of the underlying data and who owns it, also highlighting the centrality of digital infrastructure and industrialization to countering the many asymmetries discussed throughout the panel.
