This is the website of Lewis Hemens - previously co-founder and CTO of Dataform (YC W18), Senior Staff SWE at Google, maintainer of asciiflow, spending 2025 working on ARC-AGI, and hoping we can avoid the heat death of the universe.
A review of the top ARC-AGI 2024 submissions and related papers, a summary of the key approaches, and ideas for ARC-AGI-2 and ARC 2025.
A plan for 2025: research in AGI, working on ARC, predictions, a roadmap, and personal goals.
A check-in on some very much work-in-progress efforts to prepare ARC problems and feed them into an RL fine-tuning loop.
A quick write-up on tagging and describing the ARC training dataset tasks, merging it with evaluation data for some LLMs, doing some analysis on it, and putting it all on a site so you can explore it.
Computational irreducibility says something fundamental about computation, and I believe it suggests that learning programs is a necessary part of building general intelligence. I explore these concepts, how they relate to cellular automata, ARC, and O1.
A high-level overview of biological neurons and some of their dynamics, how they differ from the artificial neurons used in deep learning, and what we might be able to learn from them.
Framing machine learning as function approximation and gradient descent as a guided search process, exploring the limits of gradient-based learning, the situations where it fails, and what learning in a gradient-free regime might look like.
The start of a long-term plan to contribute to the development of AGI: a first pass at key definitions and a high-level review of a number of research and problem spaces that I think are important.
An exploration into building a GPU tensor library for machine learning in Java, with typed shapes to catch shape errors at compile time, leveraging ArrayFire and the new Java 21 Foreign Memory Access API.
Reviewing a different take on reasoning and System 1/2 thinking from the book The Enigma of Reason: how it might impact AI research, and its relation to inference-time search and attempts to give LLMs System 2 thinking.