Consistent with the agenda developed under Brazil’s presidency of BRICS, including the BRICS AI Declaration, Lula’s remarks centered on the ethical and political dimensions of AI systems. He emphasized the risks of violence, exclusion, and structural inequality that may accompany the large-scale deployment of AI across society. His speech underscored the need for rights-based regulation and clearer obligations for companies developing systems that pose significant risks to fundamental rights. In Lula’s view, building a fairer digital economy requires robust global governance mechanisms and the strengthening of multilateral institutions, especially within the United Nations system. In his words, “when a few control algorithms and digital infrastructure, it is not innovation but domination.” For the Brazilian President, the global governance of artificial intelligence therefore assumes a strategic role.

This emphasis on social justice and global governance stands in contrast to the signals coming from Modi during the same period. In the Summit’s early days, the most visible developments were a series of announcements by major technology companies—including Google, Meta, Anthropic, OpenAI, and Microsoft—highlighting new investments in talent development and workforce training in the Global South, expansion of submarine cable infrastructure and data centers, applications tailored to small and medium-sized enterprises, and large-scale worker upskilling programs.

Big Tech companies were not the only players at the Summit. The conglomerate Reliance Industries and its telecom arm Jio announced that they will invest $109.8 billion over the next seven years to build artificial intelligence and data infrastructure, and the Adani Group said it would invest $100 billion in renewable energy-powered AI data centers by 2035.

For Modi, the AI Summit appears to function as a platform for a new generation of AI-oriented industrial policies. These policies diverge from traditional development models based on national development banks, state-owned enterprises, and selective public investment in strategic sectors. Instead, they emphasize incentives for large knowledge-economy firms, ambitious workforce training targets, and the positioning of Big Tech companies as “essential infrastructure” from which new Indian services and business models can emerge.

Besides this industrial policy approach, the Indian government also emphasized the need for a transparent approach to AI safety, in which safety rules are visible and verifiable, ensuring accountability and ethical business practices. Modi called this a “glass door” instead of a “black box.” For the Indian government, AI training must respect data sovereignty and be based on a trusted global data framework. In his remarks, Modi invoked the principle of “garbage in, garbage out,” stressing that if data is not secure, balanced, and reliable, the output cannot be trustworthy.

At first glance, these approaches may appear contradictory or even mutually exclusive. Yet framing them as opposing paths risks oversimplification. Innovation in AI depends on a preexisting legal and institutional framework, and there is no evidence that protecting fundamental rights inherently stifles innovation or paralyzes productive sectors. On the contrary, predictable legal standards can provide the stability necessary for sustainable technological development.

Parallel events organized by civil society groups during the Summit sought to reinforce this point. These organizations argue that countries in the Global South can advance governance frameworks that protect: (i) the rights of AI workers, from those who build, clean, and label datasets to highly trained machine learning engineers; (ii) the right to personal data protection; (iii) emerging claims around informational integrity; and (iv) community rights to sustainable living conditions, particularly in light of the water and energy demands associated with AI infrastructure.

As India prepares to carry forward the BRICS AI Declaration in 2025 and develop a new work plan for 2026 during its presidency, important questions remain. To date, the Indian government’s engagement has largely centered on major technology firms, with comparatively limited dialogue with civil society. From the perspective of Global South advocacy groups, three principal concerns stand out: (i) the narrow framing of “AI harms” as accidental or technical risks rather than structural or systemic issues; (ii) the use of slogans such as “AI for All” and “inclusive AI” without sufficient transparency regarding the scale of corporate incentives and the distributive effects of data center expansion; and (iii) the gradual sidelining of rights-based discourse in favor of a technocratic focus on AI Safety Institutes and technical standard-setting.

From a legal governance perspective, the Summit is expected to yield only non-binding declarations. Its primary impact therefore appears to lie less in formal regulatory outcomes and more in the economic weight of the actors involved, as well as in the political agreements surrounding infrastructure expansion and talent development. Whether the stated ambition of building a “fairer AI” will translate into effective global governance mechanisms remains an open question. As Brazil has argued, a plausible path forward would involve stronger coordination of existing institutions within the United Nations system.
