Algon33 Blog
Algon
London, UK | algon33.ar@gmail.com
algon-33.github.io | lesswrong.com/users/algon | @Algon_33
PROFILE
Generalist project lead with a physics research background. I write well, synthesise complex technical material into accessible forms, manage small teams across diverse skill sets, and learn new domains fast. Two years leading an AI safety research communication organisation, producing several hundred reference articles, AI safety distillations, and research tools used across the field.
EXPERIENCE
Project Lead — AISafety.info (Stampy) Jul 2024 – Present
AISafety.info is a non-profit research communication project (founded by Rob Miles) producing living documents on the AI safety landscape, including technical research agendas, policy, and alignment proposals.
– Led a team of up to 10 contributors — writers, editors, programmers, designers, and researchers — with varying full-time, part-time, and volunteer involvement.
– Maintained and directed several hundred articles communicating AI safety concepts to a broad audience. Articles have been referenced by journalists, used in interview preparation by researchers, and serve as introductions to the field for new entrants to AI safety.
– Produced distillations of technical AI safety research, making frontier alignment work accessible to wider audiences.
– Oversaw development of an AI safety Q&A chatbot, well-regarded by researchers in the field. Managed improvements to the RAG architecture and prompt engineering.
– Directed improvements to the Alignment Research Database, a comprehensive collection of AI safety literature cited in published research papers. The team built an MCP server enabling programmatic access for chatbots and research tools.
– Redesigned introductory article series; managed video content pipeline; handled analytics, infrastructure, and stakeholder communication.
AI Evaluation Specialist (Contractor) — Equistamp Jan 2024 – Jul 2024
– Designed and augmented AI capability evaluations for frontier model assessment.
– Sourced and validated high-quality evaluation benchmarks across diverse capability domains.
Freelance Research & Writing — Various Clients 2021 – 2024
– Atlas Fellowship: Authored 30-page research report on ambitious methods in education, analysing approaches that substantially outperform standard practice along axes of generality, effectiveness, cost, and scalability.
– AI Strategic Reasoning Assessment: Produced technical report on methods for detecting strategic reasoning capabilities in near-term AI systems, commissioned by an AI safety research organisation.
– Metaculus Automation: Built Python tool automating prediction submission for a top-ranked Metaculus superforecaster by extracting and replicating community probability distributions.
– Provided research advisory and strategic support to independent professionals on technical projects.
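The Metaculus automation above can be sketched in miniature. Everything below is an illustrative assumption rather than the actual tool or the real Metaculus API: the clipping bounds, the payload shape, and the function names are invented for the example.

```python
# Hypothetical sketch of mirroring a community forecast on a binary
# question: fetch the community probability, clip it into the range
# the platform accepts, and package it as a submission payload.
# Bounds and payload keys are assumptions, not the real Metaculus API.

def clip_probability(p, floor=0.01, ceiling=0.99):
    """Keep a probability inside an assumed accepted range."""
    return min(max(p, floor), ceiling)

def build_submission(question_id, community_prob):
    """Turn a question's community probability into a submission payload."""
    return {
        "question": question_id,
        "prediction": clip_probability(community_prob),
    }

if __name__ == "__main__":
    # A near-certain community forecast gets clipped before resubmission.
    print(build_submission(12345, 0.997))
```

In the real tool, the community probability would come from an API request and the payload would be POSTed back; the sketch only shows the replicate-and-clip step.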
EDUCATION
MPhys Physics with Theoretical Physics — First Class Honours — University of Manchester 2015 – 2019
– Thesis: demonstrated that environmental noise can produce a constant-factor speedup in basic quantum computation operations (polaron transform applied to a Landau-Zener transition).
SELECTED OUTPUTS
– AISafety.info article library — Several hundred articles covering the AI safety research landscape, from introductory overviews to technical distillations. aisafety.info
– Alignment Research Database & MCP Server — Comprehensive database of AI safety literature, cited in published research. MCP server enables programmatic access for chatbots and research tools.
– AI Safety Q&A Chatbot — RAG-based chatbot drawing on the full article library and database to answer questions about AI alignment research.
– Blog & writing — algon-33.github.io
KEY COMPETENCIES
Project & team management • Technical writing & distillation • AI safety research landscape • Research communication • Stakeholder management • Grant writing • Python • Data analysis • Physics & mathematical modelling
About
This is the blog of a temporarily embarrassed Jupiter brain, otherwise known as Algon33.
My best blog post to date is "Do One New Thing A Day To Solve Your Problems".
All Posts (47)
- Discovery Fiction: The Max Flow Algorithm (2026-03-05)
- People Can't Read, And What's More, They Don't Want To (2026-03-03)
- Representation Theorems in Alignment (2026-03-02)
- Everyone Can Be High Status In Utopia (2025-12-01)
- Choose Your Failure Modes (2025-11-30)
- Opus 4.5 is funny (2025-11-28)
- You can just do things: 5 frames (2025-11-23)
- "Self-esteem" is distortionary (2025-11-23)
- Natural Emergent Misalignment from Reward Hacking (2025-11-21)
- PSA: For Chronic Infections, Check Teeth (2025-11-20)
- How critical is ASML to GPU progress? (2025-11-19)
- No One Reads the Original Work (2025-11-18)
- Why So Much Moloch? (2025-11-16)
- Your Clone Wants to Kill You Because You Assumed Too Much (2025-11-15)
- List of great filk songs (2025-11-15)
- A bad review != a bad book (2025-11-12)
- A pencil is not a pencil is not a pencil (2025-11-10)
- Liouville's Theorem and the Second Law (2025-11-09)
- Continuous takeoff is a bad name (2025-11-06)
- Forgive Savants Their Midwittery (2025-11-03)
- Supervillain Monologues Are Unrealistic (2025-10-31)
- Genius is Not About Genius (2025-10-30)
- Centralization begets stagnation (2025-10-30)
- All the labs AI safety plans: 2025 edition (2025-10-28)
- Credit goes to the presenter, not the inventor (2025-10-26)
- Remembrancy (2025-10-25)
- The Doomers Were Right (2025-10-22)
- Libraries need more books (2025-10-18)
- Meditation is dangerous (2025-10-17)
- Book Review: To Explain the World (2025-10-16)
- Some astral energy extraction methods (2025-10-15)
- Open Global Investment: Comparisons and Criticisms (2025-10-15)
- What is Lesswrong good for? (2025-10-13)
- Predictability is Underrated (2025-10-13)
- Don't Mock Yourself (2025-10-12)
- I wasn't confused by Thermodynamics (2025-10-11)
- What does it feel like to understand? (2025-10-10)
- Stars are a rounding error (2025-10-09)
- What shapes does reasoning take but circular? (2025-10-08)
- Notes on the need to lose (2025-10-06)
- Chaos Alone is No Bar to Superintelligence (2025-10-06)
- Maybe social media algorithms don't suck (2025-10-05)
- What I've Learnt About How to Sleep (2025-10-04)
- Do One New Thing A Day To Solve Your Problems (2025-10-03)
- In which the author is struck by an electric couplet (2025-10-02)
- Why's equality in logic less flexible than in category theory? (2025-10-01)
- Toggle Hero Worship (2025-09-10)