The AI Harm Library is a public repository that compiles, systematizes, and makes accessible concrete evidence of harms caused by artificial intelligence systems.

The initiative was developed to provide technical and political input to public hearings and legislative discussions on Bill 2338/2023, currently under debate in the Brazilian Chamber of Deputies.

Objectives

  • To systematize evidence of harmful uses of AI affecting social, economic, environmental, and political domains
  • To provide an accessible and visually engaging repository for civil society and, in particular, for federal legislators
  • To highlight how documented harms relate to regulatory categories already established or under debate in Brazilian legislation, such as transparency, fundamental rights, algorithmic auditing, liability, and copyright

Methodology

The AI Harm Library compiles concrete cases of negative impacts caused by AI systems, based exclusively on publicly available sources. Harm is defined as a documented, verifiable adverse effect affecting areas such as fundamental rights, labor, the environment, democracy, public safety, children and adolescents, or copyright.

For analytical purposes, cases were grouped into four main typologies:

  • Democratic Harms
  • Psychological and Social Harms
  • Socio-environmental and Economic Harms
  • Harms to Fundamental Rights

The platform presents the collected cases organized into harm categories, with descriptions, contexts, and links to the original sources.
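The organization described above — cases grouped into one of four typologies, each with a description, a context, and links to original sources — can be sketched as a simple data model. This is an illustrative sketch only; the field and class names below are assumptions, not the library's actual schema.

```python
from dataclasses import dataclass, field
from enum import Enum

# The four typologies listed in the Methodology section.
class HarmTypology(Enum):
    DEMOCRATIC = "Democratic Harms"
    PSYCHOLOGICAL_SOCIAL = "Psychological and Social Harms"
    SOCIOENVIRONMENTAL_ECONOMIC = "Socio-environmental and Economic Harms"
    FUNDAMENTAL_RIGHTS = "Harms to Fundamental Rights"

# Hypothetical record layout for a documented case; field names are
# illustrative and not taken from the library itself.
@dataclass
class HarmCase:
    title: str
    typology: HarmTypology
    description: str
    context: str
    sources: list[str] = field(default_factory=list)  # links to public sources

# Example entry (illustrative only).
case = HarmCase(
    title="Automated disinformation campaign",
    typology=HarmTypology.DEMOCRATIC,
    description="Documented use of AI to mass-produce false content.",
    context="Electoral period",
    sources=["https://example.org/report"],
)
```

A structure like this makes the library's core requirement explicit: every case must carry at least one link to a publicly available source, in line with the methodology's evidence standard.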

The negative impacts of AI — such as discriminatory algorithms, privacy violations, automated disinformation, and the widening of inequalities — are no longer mere hypotheses, but documented effects on the daily lives of citizens and institutions. The library makes these cases accessible to civil society, researchers, decision-makers, and legislators, strengthening a more informed, fair, and inclusive regulatory debate.

See also