Israel Mason-Williams: My Experience as a Talos Fellow

19th December 2025 | News, Student News


Earlier this year, STAI CDT-aligned PhD student Israel Mason-Williams was selected for a Talos Fellowship as part of the 2025 Spring cohort. The Talos Fellowship is a programme run by the Talos Network aimed at launching and accelerating European policy careers focused on artificial intelligence. We asked Israel to share his experience of the fellowship in the blog post below.

In January, I was selected from thousands of applicants as one of 20 Talos Fellows. For the following three months, alongside my PhD, I joined the Spring cohort to learn about the emerging AI Governance landscape shaped by generative AI and the EU AI Act.

Each week, the curriculum involved reading, discussion, and debate with peers on topics such as compute governance, global regulation, AI geopolitics, and economic integration. In addition to these discussions, we heard from experts such as Max von Thun from the Open Markets Institute and Lennart Heim from RAND.

A key benefit of the Talos Fellowship was access to a diverse group focused on shaping AI policy and governance. Through the placement programme, many fellows stay active in the AI governance field. The community created through the programme enables learning from multiple perspectives and immediate discussion of timely AI news, such as DeepSeek-R1’s release, which sparked debate on global tech competition, financing models for AI, and the pace of progress.

The fellowship goes beyond explaining the AI policy landscape by actively encouraging you to find your own role in AI governance. As part of the programme, you are supported to attend a policy summit in Brussels, the epicentre of EU policy, which puts you on a fast track to impact. This experience offered deep insight into the European policy-making machinery. During this week, we heard from individuals from organisations such as the Future of Life Institute, the Centre for Democracy, the EU AI Office and DG Connect. Alongside this, we took part in hands-on workshops on the EU policy cycle, pitching new policy, and strategic communication. After all of this thinking, discussion, and collaboration, we rounded the week off with some downtime at PLUX, the EU policy watering hole in the famous Place du Luxembourg.

Israel during his Talos Fellowship

After the fellowship, you join a strong alumni group eager to push the frontier of AI regulation. I am now involved in organising events for the Talos community in London to improve the interconnectedness of those who care about AI policy and safety. Given my research background, the fellowship inspired me to produce a research paper on what I saw as the biggest bottleneck to effective AI policy: the lack of strong AI research standards. To this end, I submitted a paper titled ‘Reproducibility: The New Frontier in AI Governance’ to the Workshop on Technical AI Governance (TAIG) at the International Conference on Machine Learning (ICML 2025). The paper has since been presented at ICML in the TAIG workshop and, earlier this year, at the Open Science Conference (OSC) in Hamburg as part of its special focus on the intersection of open science and AI.

The paper explores how research directly shapes economic and social policy, taking the reproducibility crises in economics, psychology, and cancer biology as core examples of how irreproducible research can erode the impact of policy, incur opportunity costs and, in the most extreme cases, contribute to loss of life. In the paper, we take a positive view of how AI research, which is used to justify policy decisions for AI regulation, can learn from other scientific domains to curb the unintended consequences of poor research practice. We identify three reproducibility protocols that can improve the signal-to-noise ratio of AI research: pre-registration, statistical leverage, and negative result reporting.

Pre-registration is a tool researchers use to ensure that experiments are transparent and well designed before experimentation begins. It allows methodological issues to be highlighted prior to submission, reducing postdiction and bias (such as p-hacking) in research. Statistical leverage calls for AI researchers to use appropriate statistical significance tests and sample sizes, so that reported progress reflects genuine effects rather than random variation, giving a more reliable understanding of how specific factors affect neural networks. Finally, negative result reporting aims to change publication incentives so that they no longer favour research presenting only positive results, but value negative results just as much. Such acknowledgement gives a better understanding of the limits of researchers’ predictive ability, reduces wasted resources by signalling what does and doesn’t work and, hopefully, helps uncover why. While none of these reproducibility protocols is a silver bullet, collectively they represent strong candidates for strengthening scientific practice in AI and for improving the information environment so that policymakers can make informed decisions.
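To make the statistical leverage point concrete, here is a minimal Python sketch (not taken from the paper; the accuracy figures are invented) of how a claimed improvement can be checked against seed-to-seed noise rather than reported from a single run:

```python
# Illustrative sketch of "statistical leverage": compare two model variants
# across several random seeds and test whether the observed difference is
# distinguishable from seed noise. The numbers below are hypothetical.
import numpy as np
from scipy import stats

# Accuracy from repeated runs with different random seeds (one value per seed).
baseline_acc = np.array([0.812, 0.805, 0.819, 0.808, 0.815])
proposed_acc = np.array([0.823, 0.817, 0.829, 0.814, 0.826])

# Welch's t-test: does not assume equal variances across the two groups.
t_stat, p_value = stats.ttest_ind(proposed_acc, baseline_acc, equal_var=False)

# Effect size (Cohen's d) gives a scale-free sense of how large the gap is.
pooled_std = np.sqrt((baseline_acc.var(ddof=1) + proposed_acc.var(ddof=1)) / 2)
cohens_d = (proposed_acc.mean() - baseline_acc.mean()) / pooled_std

print(f"mean gain: {proposed_acc.mean() - baseline_acc.mean():.3f}")
print(f"t = {t_stat:.2f}, p = {p_value:.3f}, Cohen's d = {cohens_d:.2f}")
# A single-seed comparison would report only one number per variant and could
# easily attribute importance to what is, in fact, random variation.
```

The design choice here is simply to run each variant several times and report a significance test and an effect size alongside the headline number, which is the kind of practice the statistical leverage protocol encourages.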

I am grateful to have joined the Talos Fellowship and its vibrant AI policy community. If you’d like to get involved, watch for the next application cycle on the Talos website!