
AI 2030

Our generation’s platform for global AI action by 2030.

Our world is changing fast and fundamentally. What will it look like in one year? Five years? Ten? The answer will depend greatly on the trajectory of artificial intelligence.
Recent breakthroughs are transforming social, political, and economic realities, but they have also brought unanticipated risks. Today, AI is amplifying bias, distorting elections, and disrupting jobs. As AI grows more capable, these harms will only proliferate. Our shared future is more precarious now than ever before. It is young people who have the most to gain — and the most to lose.
Encode Justice, with one thousand members all under the age of 25, represents millions of young people across the globe. We are supported by many more allies over the age of 25. Together, we believe AI innovation has enormous potential to advance human prosperity — but as of now, our world is on the wrong path. Today’s leaders must ensure that our generation inherits a livable future by immediately establishing guardrails for AI that protect all of our lives, rights, and livelihoods.


The coming reckoning on AI is a moral one, and it will shape human history. To bend the arc towards truth, safety, and justice by 2030, we call on world leaders to act now to:

1. BUILD TRUST AND HUMAN CONNECTION.
2. PROTECT OUR FUNDAMENTAL RIGHTS AND FREEDOMS.
3. SECURE OUR ECONOMIC FUTURE.
4. BAN FULLY AUTONOMOUS WEAPONS OF DESTRUCTION.
5. COOPERATE FOR A SAFER TODAY AND TOMORROW.
 


1. Build trust and human connection.

We write this message in 2024, the biggest election year in history. The governments of over four billion people — half of humanity — are up for election, and citizens are preparing to march to the polls under a fog of disinformation. Two days before Slovakia’s most recent parliamentary election, a fake audio recording purporting to expose a major party leader’s plan to buy votes spread like wildfire on social media. The recording could not be debunked in time, and that party lost. AI can now act as a propaganda machine, with outputs so readily generated, disturbingly convincing, and highly personalized that their circulation is impossible to tame through existing means. 2024 is only the canary in the coal mine: if AI continues to hijack reality, our generation’s trust in information and institutions could soon collapse.
It’s not just politics. All across the Internet, AI-generated non-consensual pornography has exploded. How can we feel safe at a time when strangers can produce and distribute synthetic pornographic images that defile our personhood?
Eventually, we may even witness the total degradation of the collective bonds that sustain societies. Amidst an epidemic of loneliness, young people are turning to chatbots in lieu of friends, family, and mental health professionals — whether they’re looking for advice in a crisis, seeking companionship, or just passing time. We want to live in a world where people enjoy meaningful relationships with other people — not one where that role is passed onto machines.
Without intervention, the lines between real and artificial, between human- and machine-generated, and between truth and deception will completely blur — and it is our generation that will suffer the most.

< Our Calls to Action >

  • GOVERNMENTS SHOULD  mandate clear and continuous disclosures of AI-generated political advertising. 
  • COMPANIES SHOULD identify and limit the spread of deepfake content — especially content that is non-consensually pornographic, libelous, or that represents itself as real — and be held responsible if preventive measures are easily bypassed. 
  • GOVERNMENTS MUST protect free speech and opportunities for anonymity. We reject the notion that countering AI-generated disinformation necessarily injures individual liberties; regulations should be narrowly scoped to avoid infringing on protected expression. Because online anonymity is a core right, the burden should be on companies to label the synthetic content their products generate — not on people to verify or establish their human identity.
  • COMPANIES SHOULD mark AI outputs with a well-established warning symbol or label and explicitly disclose the model of origin if content has been created, altered, or manipulated in a significant way.
  • COMPANIES SHOULD ensure that AI systems present clear and continuous indicators that users are interacting with a machine, not a human. Anthropomorphic AI is uniquely capable of exploiting our attention, trust, and emotions. 
  • COMPANIES SHOULD allow users to opt out of being subjected to an embedded AI system, like a content targeting algorithm or chatbot. We should retain choice in our interactions with AI, including the choice not to interact at all.
  • COMPANIES SHOULD offer users agency and ownership over their personal data, whether this data collection is intended for future model training or other monetization efforts like ad targeting. 
  • GOVERNMENTS SHOULD fund public education programs that train people to make thoughtful choices about their interactions with AI. These AI literacy programs should emphasize strategies to verify authenticity in the era of AI-enabled voice cloning and deepfakes, as well as prepare people to critically assess online content.
  • PUBLIC AND PRIVATE ACTORS SHOULD make a large-scale investment in research and technical solutions, built from the ground up, to defend trust and truth. We are approaching a future where humanity’s shared sense of reality is irreparably warped. Today’s technology is still insufficient as an antidote; we need to modernize our arsenal of truth-seeking.
 


2. Protect our fundamental rights and freedoms.

AI is guiding high-stakes decisions in hiring, education, criminal justice, and more. But these often-opaque systems have further entrenched discrimination. In the United Kingdom, students flooded the streets after an exam score prediction algorithm used for university admissions during the COVID-19 pandemic was found to penalize students from schools in high-poverty communities. In the United States, Black communities have seen a string of wrongful arrests driven by faulty facial recognition technology, which disproportionately fails on darker-skinned faces.
But even if AI systems were technically unbiased, the structural context of deployment could still produce harmful outcomes. For instance, law enforcement agencies often use AI in ways that mirror existing patterns of overpolicing. In the hands of authoritarian regimes, AI can be used to surveil populations and chill free speech. Similar technologies are already entering school campuses. It is not enough for AI to be fair — we must reimagine how, where, and on whom we choose to use it.
Our generation is coming of age in a world where seemingly neutral AI is supercharging surveillance and discrimination, with little accountability for its failures. Whether we’re standing trial, seeking healthcare, or applying for a job, this will reshape how we experience both public and private life.

< Our Calls to Action >

  • GOVERNMENTS SHOULD mandate publicly available impact assessments for rights-impacting AI systems. These assessments should accompany independent audits that evaluate both development and deployment, and should measure fairness in both treatment and outcomes across sensitive characteristics like race, gender, and socioeconomic status. Impact assessment frameworks should encourage developers to seek out less discriminatory alternatives, including non-AI alternatives. 
  • PUBLIC AND PRIVATE ACTORS SHOULD actively monitor the real-world impact of AI systems once deployed and intervene when evidence of harm emerges by modifying or suspending the tool in question.
  • GOVERNMENTS MUST empower us to seek meaningful redress when such a system is shown to have violated our rights, whether through action in a court of law or a complaint to a public agency.
  • GOVERNMENTS SHOULD fund technical research on bias detection and mitigation. AI should be equitable by design; a reactive approach is not enough. Public research grants can incentivize partnerships between academia, industry, and government that advance equitable AI.
 


3. Secure our economic future.

The future of work for our generation sits under a cloud of uncertainty. In the long run, AI may create jobs and help renew our sense of purpose — or it may send us hurtling towards a post-work reality for which societies are entirely unprepared. This is a treacherous gamble. As AI begins to match and even eclipse human performance across a range of tasks, young people look ahead with anxiety. How will we continue generating economic value, and what will become of our livelihoods? A student today cannot be sure that their dream job will still exist when they enter the workforce. Even if fears of the worst turn out to be overblown, we are undeniably on the brink of far-reaching economic upheaval.
In the near future, we will see inequality rise as AI produces winner-takes-all economic dynamics and further concentrates market power. A landmark Massachusetts Institute of Technology study found that since 1980, automation has been the primary driver of income inequality in the United States, accounting for more than 50% of the increase in the wage gap between more- and less-educated workers. We can expect labor markets in the Global South to be hardest hit. In every country, low-wage workers may be the most gravely affected, though white-collar workers, too, are at serious risk of displacement.
AI advancement could send shock waves that obstruct pathways out of poverty in the Global South, disrupt economic systems, and disempower human workers globally. If we instead aspire to a world in which AI drives economic gains for all and unlocks time for the activities we find most meaningful — while uplifting, rather than superseding, humans — leaders must intervene.

< Our Calls to Action >

  • GOVERNMENTS SHOULD redirect R&D investment to AI applications whose fundamental purpose is to enhance, not supplant, human capabilities and potential over time. These include AI-powered tutoring systems and AI-powered solutions in precision medicine designed to meet our needs on a deeply individual level, serving to improve human creativity, cognition, dynamism, and health at large. We must focus less on boundlessly maximizing AI capabilities — with human disempowerment as a dangerous potential byproduct — and more on using AI to raise the ceiling of our own capabilities.
  • GOVERNMENTS SHOULD form a global retraining fund. This global fund should funnel investment to upskilling and talent development programs, with a particular focus on the Global South, addressing the more immediate distributional effects of automation and boosting less-educated workers.
  • GOVERNMENTS SHOULD be willing to consider bold, innovative policy ideas if we arrive at economic conditions that necessitate a more dramatic response. Possible solutions could include establishing a universal basic income or introducing cooperative ownership models. Reskilling is an important first step, but it is no silver bullet: AI advancement is likely to trigger a paradigm shift in the nature of work. Let us urgently begin the project of ensuring a positive, purposeful economic future for all.
 


4. Ban fully autonomous weapons of destruction.

AI may make future wars exceptionally lethal. This is no longer science fiction. We have already seen the use of semi-autonomous attack drones in conflict zones, including in the Russo-Ukrainian War. Drones are not the end game. Autonomous weapon systems are being primed for use at every stage of the kill chain — not just the final judgment call — creating a dangerous cascading effect on military decision-making. While these systems purportedly enhance efficiency and reduce casualties, they raise profound ethical concerns.
We cannot and must not recklessly move forward with the deployment of offensive autonomous weapon systems. When AI-enabled weaponry operates without human oversight, assigning responsibility in the event of a misfire becomes nearly impossible. These systems undermine the laws of war, nullifying the established principles of human dignity and proportionality in combat. What’s more, autonomous weapon systems can behave unpredictably, malfunction, or be hacked or misused, potentially resulting in unintended civilian casualties or indiscriminate attacks on non-military targets. At worst, AI could power weapons of mass destruction, inflicting large-scale casualties.
The specter of autonomous killers looms large. Our inaction promises a future where the instruments of war are unaccountable and potentially uncontrollable.

< Our Calls to Action >

  • GOVERNMENTS SHOULD ratify an international treaty that prohibits the creation, manufacturing, and deployment of fully autonomous, offensive weapon systems that lack meaningful human control. This can be modeled after previous international agreements like the Convention on Certain Conventional Weapons and the Biological and Toxin Weapons Convention, both negotiated within the last century.
  • PUBLIC AND PRIVATE ACTORS SHOULD invest in AI applications that encourage peacekeeping and conflict resolution. By harnessing AI’s potential to promote international stability, we can preserve human dignity and avert a new era of automated warfare.
 


5. Cooperate for a safer today and tomorrow.

As AI becomes easier to access and more powerful, with fewer mechanisms for human control and interpretability, the potential for both intentional and unintentional misuse will multiply. New models might aid non-state actors in creating biological weapons. Human-like conversational AI might help conduct large-scale persuasion campaigns and meddle in elections. Rogue systems might be used to attack critical infrastructure and sabotage key networks.
There are some disturbing warning signs. Early reports indicate that AI intended for drug discovery can be repurposed to suggest tens of thousands of lethal chemical warfare agents, and that GPT-4 can be jailbroken to generate bomb-making instructions. Frighteningly, additional threats we cannot see coming may still be on the horizon. To be sure, many of AI’s harms — including those we have described earlier, from disinformation to algorithmic bias to labor displacement — are already with us, and they are well-documented. We see a moral imperative to confront existing risks and future-proof for oncoming ones.
Moreover, the energy-intensive nature of training and operating large-scale AI models threatens to jeopardize our progress on climate goals. Global AI governance must address AI’s growing carbon footprint and mitigate other environmental harms.
One thing is clear: our generation’s collective safety now hangs in the balance. But no one actor can prepare or protect us. AI development and its risks are increasingly borderless, and so too is the responsibility of AI oversight. The UNESCO Recommendation on AI Ethics, a promising first step, was endorsed by almost every country on the planet; the Bletchley Declaration, another powerful springboard for governance, resulted from unprecedented dialogue between the world’s major AI players. In the face of this challenge of global proportions, we must stand united. As with climate change and nuclear nonproliferation, we need international coordination to govern AI.

< Our Calls to Action >

  • COLLECTIVELY, GOVERNMENTS SHOULD establish a global authority to minimize the dangers of AI, particularly foundation models. This authority should set central safety standards, limit the proliferation of the most dangerous capabilities, and monitor the global movement of large-scale computing resources and hardware. It should maintain enforcement mechanisms with multilateral buy-in that ensure each actor abides by the rules. Given that AI is a moving target, the body should be agile and adaptive. It should seek to reduce acute AI risks and protect individual privacy and information security. And critically, to ensure that it does not unduly stifle innovation or market competition, the body should discourage excessive regulatory burdens on low-risk applications.
  • COLLECTIVELY, GOVERNMENTS SHOULD create a global institute for AI safety that employs the world’s top scientific talent — similar to CERN, the European Organization for Nuclear Research — to make models more equitable, controllable, and understandable, with an eye toward promoting beneficial outcomes from AI. We need international collaboration to drive scientific progress on model fairness, alignment, and interpretability. This should be a global research priority.
  • COLLECTIVELY, GOVERNMENTS SHOULD ensure that the benefits of AI are more evenly distributed and lessen global environmental harms. They should invest in AI applications that aim to uplift emerging economies through innovations in healthcare, agriculture, education, and more.
  • INDIVIDUALLY, GOVERNMENTS SHOULD create domestic rules of the road aimed at preventing hazardous outcomes: rules that track the training of foundation models, require evaluations for the most advanced systems, and create liability for developer negligence. This applies especially to the governments of the United States, China, India, the European Union, and the United Kingdom. It will take time to build intergovernmental consensus; national regulators should start moving now. Where possible, nations should apply and adapt existing antitrust, consumer protection, intellectual property, and non-discrimination laws.
 

We are guided by a deep optimism that a safe, equitable AI future for all is possible. But for us to get there, AI needs to be governed responsibly. This could be the most important moral challenge of our time. The clock is ticking, and our generation is ready for action — we have the greatest stake in what happens next.

Now, let our voices ring clear:

We call on world leaders to stand and strive with us for the future we all deserve.

< With Support From >

Our Signatories

Today’s Leaders

Mary Robinson

First woman President of Ireland, former United Nations High Commissioner for Human Rights

Yoshua Bengio

ACM Turing Award Winner, Professor at Université de Montréal, Founder and Scientific Director of Mila – Quebec AI Institute

Joseph Gordon-Levitt

Actor

Margaret Mitchell

Chief AI Ethics Scientist at Hugging Face

Audrey Tang

Taiwan’s first Digital Minister

Daron Acemoglu

Professor of Economics, MIT

Daniel Kokotajlo

Former Employee, OpenAI

Yi Zeng

Professor, Chinese Academy of Sciences; Founding Director, Center for Long-term AI

Stuart Russell

Distinguished Professor of Computer Science, UC Berkeley

Maya Wiley

President and CEO, The Leadership Conference on Civil and Human Rights

Tomorrow’s Leaders

Sneha Revanur, 19

AI 2030 Author, Encode Justice

Sunny Gandhi, 22

AI 2030 Author, Encode Justice

Luke Drago, 22

AI 2030 Author, Encode Justice

Adam Billen, 22

AI 2030 Author, Encode Justice

Lysander Mawby, 22

AI 2030 Author, Encode Justice

Jules Terpak

Journalist

AI Safety Initiative @ UC Chile

Student organization

Will Loosley

Student

Saharsha Navani

Generative AI Researcher @ Google DeepMind & DTC Core Member @ United Nations IGF

Nicolas Gertler

Chair of AI Issue Advisory Council and AI & Education Advisor, Encode Justice


Public Support

Momin Butt

Data Analyst / Booking.com

Russell Altenburg

Senior Program Officer, Education / Leon Levine Foundation

Sheelah McCaughan

Hannah Billen

Elementary School Teacher

Ines Godinho

Director Learning Strategy, Innovation and Partnerships

John Dziak

Senior research associate / University of Illinois Chicago

Rose Genele

Chief Executive Officer / The Opening Door

Noemi Rodriguez

Founder & Owner / Lo Logramos Consulting

Gary Bunofsky

Founder / Loopdash

Laura Dumin

Professor of English and Tech Writing / University of Central Oklahoma

< Support the Movement >

The clock is ticking. Join the fight for a better today and tomorrow.

We believe that AI can be a force for good in our world. Join our growing list of signatories to show your support for our platform and amplify our call to world leaders.