Software engineer and engineering manager, previously co-founder and CTO of Dataform, currently at Google. Planning to contribute towards AGI in some shape or form from 2025 onwards. Currently learning by writing about AI and neuroscience.
A quick write-up on tagging and describing the ARC training dataset tasks, merging them with evaluation data from some LLMs, running some analysis, and putting it all on a site so you can explore it.
Computational irreducibility says something fundamental about computation, and I believe it suggests that learning programs is a necessary part of building general intelligence. I explore these concepts and how they relate to cellular automata, ARC, and o1.
A high-level overview of biological neurons and some of their dynamics, how they differ from the artificial neurons used in deep learning, and what we might be able to learn from them.
Framing machine learning as function approximation and gradient descent as a guided search process, exploring the limits of gradient-based learning, situations where it fails, and what learning in a gradient-free regime might look like.
The start of a long-term plan to contribute to the development of AGI: a first pass at key definitions, and a high-level review of a number of research and problem spaces that I think are important.
An exploration into building a GPU tensor library for machine learning in Java, with typed shapes to catch shape errors at compile time, leveraging ArrayFire and the new Java 21 Foreign Function & Memory API.
Reviewing a different take on reasoning and System 1/2 thinking from the book The Enigma of Reason, how that might impact AI research, and its relation to inference-time search and attempts to give LLMs System 2 thinking.