Aidan is researching technical AI alignment, with a particular emphasis on ML interpretability and robustly safe paths to AGI. They are currently pursuing a degree in Mathematics at the University of Bristol, and have collaborated with researchers from Apollo Research and EleutherAI. A recent research project of theirs was cited by the Anthropic interpretability team.
Contact: aidanprattewart (at) gmail.com
Lucy started coding at age 7 and became a senior developer at a tech startup at age 18. She has since switched to AI safety research and has collaborated with researchers from FHI, CHAI, and FAR AI. She has also completed ARENA and AISC. She's currently a PhD student at the University of Bristol researching AI interpretability.
Contact: lucyfarnik (at) gmail.com
Gaurav is diving into the world of AI regulation while pursuing a Law degree at the University of Bristol. Before this, they interned at the Centre for Effective Altruism. Right now, they're working on a final-year project exploring how laws and computing power can better govern AI. Gaurav is also keenly interested in AI policy, both in China and globally.
Contact: rz21873 (at) bristol.ac.uk