At the end of February we had the opportunity to attend RightsCon 2025 in Taipei, Taiwan, a conference that brought together activists, researchers, and policymakers to discuss the intersection of human rights and technology. In the following, we want to share four highlights and observations from our time at RightsCon:

1. The Public AI Panel: A debate on alternatives

One of the highlights of our participation was hosting the session on “Public AI as an Infrastructure – democratizing the AI Tech Stack”, facilitated by Felix. The panelists Isabel Hou (Secretary General at Taiwan AI Academy Foundation), Antonio Zappulla (Chief Executive Officer at the Thomson Reuters Foundation), Alek Tarkowski (Director of Strategy at Open Future Foundation), and Teresa explored the core dependencies within the AI stack, including data, compute, and models. Currently, only a handful of companies such as Meta, OpenAI, or Google control state-of-the-art AI models and the foundational infrastructure that underlies them. This concentration of power risks turning AI into a tool of exclusion that amplifies inequalities. Public AI, in contrast, presents an alternative: a vision for AI models and infrastructures that are accessible, transparent, and accountable to the public.

Data presents a paradox: public training datasets are often used in highly nonpublic ways, reinforcing power asymmetries between major tech companies and civil society. Addressing this issue is crucial for the development of a truly public AI infrastructure. A critical intervention discussed by Teresa was therefore the need for responsible public data commons: datasets that are curated, high-quality, diverse, and representative of different global realities.

Felix Sieker, Antonio Zappulla, Isabel Hou, Alek Tarkowski, Teresa Staiger (left to right)

Beyond data, computational resources are another major bottleneck, as Alek highlighted. The dominance of a few tech companies in AI infrastructure raises concerns about technological sovereignty. Isabel also provided valuable insights into Taiwan’s proactive approach to AI workforce training, a crucial prerequisite for building public AI solutions. Antonio discussed how his foundation is addressing the knowledge gap in newsrooms, ensuring that journalists are equipped to critically engage with AI technologies and their implications and to counteract the dominant corporate narratives shaping AI policy and perception.

2. The material and environmental cost of AI

While AI is often perceived as immaterial, one session in particular, facilitated by Jasmin Walda (Heinrich Böll Stiftung), highlighted its tangible ecological and social costs. The resource consumption of AI extends beyond computational power to include raw material extraction, water and energy use, and e-waste disposal. Indigenous populations and people in the Global South are particularly affected, as they often live in regions rich in the raw materials needed for technology production while having fewer opportunities to exert political influence.

The idea of “digital product passports” was discussed by Diego Marin (European Environmental Bureau) as a possible way to create more transparency about the environmental footprint and resource consumption of AI models.

3. Collective action and labor organizing

Another recurring point was the power of collective action: whether through tech worker unions, whistleblowers, Indigenous movements, or investigative journalism exposing concentrated power structures. Timnit Gebru, AI researcher and co-founder of the Distributed AI Research Institute (DAIR), underscored the role of labor movements in pushing for change within major tech firms. Employees organizing within big tech companies have already influenced ethical AI policies, and their potential impact cannot be overstated.

Transparency remains a prerequisite for informed, critical engagement with AI, both in policy and within the companies that develop these systems. Policy interventions are emerging as a promising tool here: one example that stood out was California’s Silence No More Act, which protects workers who speak out about discrimination and harassment even if they have signed a nondisclosure agreement, a common practice in the tech industry. This example illustrates that regulatory pressure can serve as a counterbalance to unchecked corporate power.

4. Spotlight on activism and funding crisis

A key takeaway from the conference was the strong focus on activism. RightsCon 2025 successfully provided a platform for voices that are often sidelined in mainstream AI discussions. The representation of experts from the Global Majority was a welcome shift, offering perspectives that are too often ignored in Euro-American-centric AI debates.

However, the overall outlook was sobering. Discussions highlighted ongoing human rights violations related to AI technologies, from the surveillance of Uyghur populations (highlighted by Haiyuer Kuerban) to the deployment of facial recognition systems in the West Bank and Facebook’s role in the genocides in Tigray and Myanmar (highlighted by Htaike Htaike Aung). Additionally, the increasing influence of right-wing tech governance, mainly in the US, painted a concerning picture for the whole field.

Another alarming insight from RightsCon 2025 was the sector’s massive funding crisis. The funding gap amounts to one billion US dollars, roughly one-third of the sector’s total funding. Many organizations dedicated to the common good, particularly those working at the intersection of technology and human rights, are facing existential challenges as a result. This crisis highlights the need for sustainable funding models that not only support public-interest AI initiatives but also strengthen the civil society organizations at the forefront of the fight for digital rights.

Outlook

Our participation in RightsCon 2025, the impressive work of many activists and civil society organizations, the incredible variety of sessions, and, not least, the conversations we were able to have – all of this has further reinforced our belief that we need digital public infrastructures and AI systems that are transparent, accountable, and committed to the common good, as a counterbalance to the growing power of large tech companies.

In our work at reframe[Tech], we will continue to explore these issues: our upcoming studies on the Public AI Stack and on more responsible foundation models will dive deeper into these questions and provide concrete recommendations for policymakers, researchers, and civil society actors.

If you’re interested in staying updated, feel free to contact Felix (felix.sieker@bertelsmann-stiftung.de) or Teresa (Teresa.staiger@bertelsmann-stiftung.de).


This text is licensed under a Creative Commons Attribution 4.0 International License.