
Grants

$1,000 - $15,000 for original reporting on artificial intelligence and its impacts.


About

As artificial intelligence grows more advanced, the technology and the people building it grow increasingly consequential. We believe journalism plays a crucial role in helping the public understand AI — and in holding companies and policymakers to account.

 

Tarbell offers grants of $1,000 - $15,000 to support original reporting published in established outlets, whether from freelancers or staff. We primarily focus on written journalism, but we also fund journalism in other formats. This round of grants will close May 31st. 

 

We’re seeking to fund forward-looking stories, examining how today’s technical advancements and policy decisions lay the groundwork for how artificial intelligence will shape our future. As AI systems become increasingly capable, we may only be beginning to see their influence on society. In-depth reporting can improve what we see next.

Pitch us on critical AI beats

 

Investigations into frontier AI companies

National and international AI policymaking

Integration of AI in governments and militaries

AI capabilities, safeguards, and evaluations

Future of work and society in the age of advanced AI

Investigations into frontier AI companies

Companies at the forefront of AI development—such as OpenAI, Anthropic, Google DeepMind, xAI, Meta, and DeepSeek—are racing to build ever more capable systems behind closed doors. With growing influence but little regulatory oversight, there’s an urgent need for accountability journalism. We're seeking journalism that pierces the corporate veil to examine how these influential companies actually operate, who shapes their decisions, and what safeguards and ethical codes exist—or don't.

Pieces we admire on this topic can be found here, here, and here.

Possible story directions:

→ Track compliance gaps between companies' safety commitments and their actual development and deployment practices.

→ Investigate potential misconduct by AI companies or their executives that contradicts public messaging or commitments.

→ Investigate how companies respond to documented instances of harm from their systems, from deepfakes to algorithmic bias.

→ Document whistleblower experiences and the systems (or lack thereof) in place to protect internal critics.

→ Uncover internal perspectives on AI progress, promises, and perils from employees who build these systems daily.

→ Examine how frontier labs are using AI internally to automate parts of their own research process.

→ Investigate the decision-making process for when companies will delay development and deployment because of dangerous capabilities, such as cyber and biorisks.

National and international AI policymaking

AI is being governed through a complex web of emerging domestic policies, international agreements, and strategic competition. As countries race to balance innovation with control, questions about who shapes AI's future—and in what ways—remain largely hidden from public view. We're seeking journalism that reveals the inner workings of AI policymaking across borders, examining how domestic lobbying, global forums, supply chain dynamics, and back-channel diplomacy determine the rules for increasingly powerful technologies.

Pieces we admire on this topic can be found here, here, and here.

Possible story directions:

→ Document the revolving door between government agencies and AI companies, examining how connections between industry and government officials shape policy decisions in the US, UK, Europe, and China.

→ Track AI lobbying expenditures and strategies at national, state, and local levels, and signs of their influence.

→ Analyze proposed AI regulations and responses to these proposals, such as US state-level regulation, US congressional AI action, and the EU AI Act’s Code of Practice.

→ Investigate the dynamics of international AI forums and agreements, examining which nations and organizations are steering global governance initiatives like the upcoming India AI summit, whose interests are empowered, and what tangible outcomes are being realized.

→ Report on early signs of the Trump administration’s upcoming AI policies and which coalitions are vying for influence.

→ Monitor Chinese academic and industry debates around AI ethics and governance, highlighting the range of perspectives within China's AI community and how they change over time.

→ Investigate resource gaps at US, UK, European, and Chinese government agencies tasked with AI oversight, in terms of technical expertise, funding, and enforcement mechanisms (e.g., enforcing chip export controls).

Integration of AI in governments and militaries

As AI systems move from research labs to real-world applications, governments and military organizations worldwide are adopting these technologies for everything from administrative efficiency to battlefield advantage. We're seeking journalism that investigates this rapidly evolving landscape, examining the entanglements between public institutions, defense strategies, and AI companies.

Pieces we admire on this topic can be found here, here, and here.

Possible story directions:

→ Document the AI capabilities being actively developed or acquired by military and intelligence agencies around the world, as well as their corresponding safeguards and testing protocols.

→ Follow the funding streams from defense departments to AI companies, revealing which technologies are being prioritized for which purposes.

→ Investigate the ethical frameworks (or lack thereof) governing military AI applications and how they're being implemented in practice.

→ Track the arms race dynamics emerging between nations as they compete for AI advantages in defense and intelligence.

→ Investigate whether national security interests are driving new forms of collaboration between governments and AI companies.

→ Track the evolution of government AI usage from basic automation to more complex applications, exploring how agencies' strategies and policies are adapting to increasingly capable systems and who this affects.

AI capabilities, safeguards, and evaluations

The technical capabilities of AI systems are evolving in ways that sometimes surprise even their creators. We're looking for journalism that documents these emerging capabilities, interrogates the methods used to test and evaluate increasingly powerful models, and examines efforts to make these systems safer and more secure.

Pieces we admire on this topic can be found here, here, and here.

Possible story directions:

→ Survey public perceptions of AI capabilities against technical reality, highlighting discrepancies that may influence policy decisions and adoption patterns.

→ Profile the ecosystem of AI evaluation organizations, tracking how third-party assessment techniques are evolving to keep pace with increasingly complex systems.

→ Showcase demonstrations of novel AI abilities that signal important shifts in capability frontiers (e.g., ‘sandbagging’; deceptive chains-of-thought) and examine their implications.

→ Contrast benchmark performances with real-world deployments, investigating cases where impressive test scores may fail to translate into practical usefulness (or vice versa).

→ Unpack the uneven capabilities of AI systems across tasks that humans find easy and hard, and the implications this has for the timelines of when tasks may be automated.

Future of work and society in the age of advanced AI

The potential of advanced AI to transform society remains uncertain, but it is also consequential and malleable. We're looking for forward-looking, high-quality feature pieces that identify what today's AI developments foretell about future possibilities. From information ecosystems to democratic processes, from climate impact to human connection, we want reporting that takes seriously the possibility of significant disruption while critically examining the barriers to and likelihood of such changes.

Pieces we admire on this topic can be found here, here, and here.

Possible story directions:

→ Compare competing economic forecasts of AI's impact, ranging from record-breaking growth predictions to modest outcomes due to implementation barriers.

→ Project the climate impact of AI infrastructure expansion from massive data center buildouts like OpenAI's Stargate.

→ Explore how AI may transform information ecosystems, examining what today’s trends suggest about how our relationship with facts, expertise, and media consumption might evolve with more powerful systems.

→ Chart the future of industries such as law, medicine, and creative fields as AI begins to automate or accelerate cognitive tasks—or not.


How to apply

Please apply via our application form, which asks for basic information about you and your story. To be considered for this round of grants, apply by the end of the day on May 31st, 2025.

We welcome submissions from all experienced journalists and media creators: both staff writers/editors/producers and freelancers are welcome to apply for grants. A background in reporting on AI and/or technology is desirable, but not mandatory.

We encourage applicants to apply with a letter showing interest from an editor or relevant platform director (see template). If you can’t secure such a letter before your application, we’ll ask you to provide one before we distribute any funding. Your publication or distribution platform of choice must be able to accept grant funding for stories.

We aim to evaluate all applications within six weeks of receipt at the latest. If your story is time-sensitive, you can ask us to expedite the evaluation process. All shortlisted applications will be reviewed by at least two members of our judging panel.

If you have any questions about the application process, please contact michel@tarbellfellowship.org.

This round of Tarbell Grants is generously supported by our donors, with the exception of Open Philanthropy, which does not fund our grants. Our donors have no involvement in the grant selection process. If you're interested in supporting the next round, you can donate here or get in touch.

What we look for

Experienced journalists and media creators. We expect most successful applicants to have a strong background in journalism, ideally in AI or technology reporting. Journalists with an investigative background are particularly encouraged to apply. We accept applications from both freelance and staff reporters, as well as from creators of journalistic podcasts, video content, newsletters, and other media formats.

Neglected stories. We are keen to fund stories and investigations that will not otherwise be published.

Reach. We hope grantees' stories will be read, heard, or viewed widely, and by people with decision-making power. We encourage applicants to apply with a letter showing interest from an established publication or distribution platform.

Impact. We care about stories that make a difference. We hope the stories we fund will lead to meaningful change in the world, whether that be raising awareness of an underdiscussed harm, or catalyzing policy change.

Please do not rule yourself out because of these criteria, though. If you have a strong story idea and believe you have what it takes to write it, please pitch us.



FAQ

Do you consider formats other than written articles?

While our primary focus is on written journalism at established publications, we do consider exceptional proposals in other formats like podcasts, newsletters, and short documentaries. Content in such formats should still be "in the spirit of journalism": it should adhere to journalistic standards like truth-seeking, independence, and fairness. We also need to see how such content will reach a large or important audience.

What can I use the money for?

Your grant money can be used for any costs incurred in producing the story, including the costs of your time and labor. This is true for both freelancers and staff reporters. You can also use the money for other reporting expenses, such as travel, API usage, or purchasing data sources.

Do you expect publications to contribute to reporting costs?

No. We are happy to fully fund stories from both freelancers and staff reporters. We are, however, more likely to fund stories where publications do contribute.

 

Can teams apply?

Yes, teams of journalists are very welcome to apply. Please submit a single application, and in the “about you” section provide information for each team member. Please decide in advance how your team will split the grant money and let us know in the budget section of the application form.

 

Can I apply for a series of stories?

Yes — we welcome applications for a reporting series, rather than just a one-off piece.

 

Can I submit multiple applications?

To help us manage the volume of applications, we ask that you only submit one application at a time. If we reject your application, feel free to apply again with a different idea.

 

Do I need to have a publication venue finalized?

We encourage applicants to apply with a letter showing interest from the editor of their publication-of-choice, but this is not strictly required. The ideal letter is something like “if Tarbell funds this piece, we will publish it” (see template). 

If we’re excited about your piece but you don’t have an outlet secured, we’ll likely tell you that we’ll fund the story if you can find an outlet, allowing you to pitch your piece with our commitment.

 

How do we need to credit you?

We ask that all published stories include the following line: “This story was supported by a grant from the Tarbell Center for AI Journalism.”

 

Is editorial independence guaranteed?

Yes. You are free to publish whatever you (and your editors) like. We will not edit your project.

Will you help with my reporting?

If you would like help, we’re very happy to offer guidance and support. We can discuss the story with you, suggest people for you to talk to, and make introductions if needed. But this is entirely up to you: we’re equally happy to take a completely hands-off approach.

Do I need to keep you updated on the progress of my story?

Not proactively, but if you are working on a longer duration story we may check in to see how things are going, and we expect you to give us an update if we do. We also ask all grantees to tell us when a story is published.
