Currently an engineering manager at Google, previously co-founder and CTO of Dataform, and maintainer of asciiflow. You can read a bit more about my career and interests here. From 2025 onwards I plan to contribute towards AGI in some shape or form. Here are some things I've written about.
A high level overview of biological neurons and some of their dynamics, how they differ from artificial neurons in deep learning, and what we might be able to learn from them.
Framing machine learning as function approximation and gradient descent as a guided search process, exploring the limits of gradient-based learning, the situations in which it fails, and what learning in a gradient-free regime might look like.
The start of a long term plan to contribute to the development of AGI, a first pass on key definitions for me and a high-level review of a number of research and problem spaces that I think are important.
An exploration into building a GPU tensor library for machine learning in Java, with typed shapes to catch shape errors at compile time, leveraging ArrayFire and the new Java Foreign Function & Memory API previewed in Java 21.
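The typed-shapes idea can be sketched with plain Java generics: dimension labels become phantom type parameters, so a matrix multiply with mismatched inner dimensions simply fails to compile. This is a minimal illustration of the general technique, not the actual library's API; the `Batch` and `Features` labels and the `Matrix` class are assumptions invented for the example.

```java
// Hypothetical sketch of shape-typed tensors. Dimension labels are empty
// marker interfaces; they exist only at compile time.
interface Batch { }
interface Features { }

// A matrix typed by its row dimension R and column dimension C.
final class Matrix<R, C> {
    final int rows, cols;
    final float[] data;

    Matrix(int rows, int cols) {
        this.rows = rows;
        this.cols = cols;
        this.data = new float[rows * cols];
    }

    // Only compiles when the inner dimensions agree:
    // Matrix<R, C> x Matrix<C, K> -> Matrix<R, K>.
    <K> Matrix<R, K> matmul(Matrix<C, K> other) {
        Matrix<R, K> out = new Matrix<>(this.rows, other.cols);
        for (int i = 0; i < rows; i++) {
            for (int k = 0; k < other.cols; k++) {
                float acc = 0f;
                for (int j = 0; j < cols; j++) {
                    acc += data[i * cols + j] * other.data[j * other.cols + k];
                }
                out.data[i * out.cols + k] = acc;
            }
        }
        return out;
    }
}
```

With this, `Matrix<Batch, Features>.matmul(Matrix<Features, Batch>)` type-checks and yields a `Matrix<Batch, Batch>`, while calling `matmul` on two `Matrix<Batch, Features>` values is rejected by the compiler, which is the class of bug the post is about catching before runtime.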
Reviewing a different take on reasoning and system 1/2 thinking from the book The Enigma of Reason, how that might impact AI research, and its relation to inference-time search and attempts to give LLMs system 2 thinking.