Exploration & Emergence?
Exploration & Emergence is a whimsical name for my website, and this blog post explores the philosophy behind the name.
Philosophy
The name comes from two related ideas that have been going around my head:
The path of joyful learning
The first is that it sums up how I love to learn:
- Exploration exposes us to new ideas and framings which capture our attention
- We reflect on them, playing around with them in our mind and seeing how they mesh with existing ideas in our tree of knowledge
- Exciting new thoughts, reflections and insight emerge from this process
I invite you to join in on my explorations and reflections. If you have something interesting to contribute, feel free to drop me an email at exploration-and-emergence [at] david-edey.com.
The path to general intelligence
I’ve been thinking a lot about learning, knowledge sharing and collaboration lately. Having a young daughter is magical, and she is constantly picking up new words, abilities and ways of interacting with the world. On the tech side, I’ve been picking up Machine Learning, collaborating with LLMs and contemplating AGI.
These have all got me thinking about how we learn, how we are useful, and how this translates to intelligence.
On the one hand, my 16-month-old daughter is certainly less useful in a workplace than an LLM. An LLM has an immense breadth and depth of knowledge and capacity to do useful work. But there are some things my daughter can do much better. She has persistent, growing knowledge, and she excels at learning by exploring her environment, copying behaviours and internalising expectations from the world around her. She then plays to consolidate and build new ideas. She is able to adapt, extrapolate and surprise.
I believe a system’s intellect reflects its structure. Phrased another way, the structure of how a system is built to learn guides and shapes its emergent intellectual properties and capabilities.
In humans, genetics guide the development of a brain with areas specialized for different tasks (e.g. vision, calculation, feelings, planning or memory management), and network these areas together in a way that enables learning with strong inductive biases which aid our survival. And it’s not just the brain - nerves and hormones also contribute to effective learning by creating signals that help us anticipate, learn from and interact with the world.
But what structures are necessary for artificial general intelligence? Whilst I believe LLMs could simulate an AGI (through the use of external scratch-pads for memory, and sub-agents and prompts for tasks), I don’t think their structure is efficient at realizing a real-time, human-like intelligence. Where can we look instead? AI experts such as Richard Sutton, talking about the OAK architecture, present quite a simple picture: one that is beyond the primary capabilities of LLMs, but one that looks achievable in the next few years.
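To make that parenthetical a little more concrete, here is a minimal sketch of what such a simulation might look like: an LLM loop with an external scratch-pad for persistent memory and prompt-driven sub-agents for sub-tasks. Everything in it is an assumption for illustration only - call_llm, the scratch-pad file and the planning prompts are hypothetical stand-ins, not a real system or API.

```python
# Illustrative sketch of an "LLM + external scratch-pad + sub-agents" loop.
# `call_llm` is a hypothetical stand-in for any chat model API.

import json
from pathlib import Path

SCRATCH_PAD = Path("scratch_pad.json")  # persistent memory between runs


def call_llm(prompt: str) -> str:
    """Hypothetical LLM call; swap in a real model or API of your choice."""
    raise NotImplementedError


def load_memory() -> list[str]:
    return json.loads(SCRATCH_PAD.read_text()) if SCRATCH_PAD.exists() else []


def save_memory(notes: list[str]) -> None:
    SCRATCH_PAD.write_text(json.dumps(notes, indent=2))


def run_sub_agent(task: str, memory: list[str]) -> str:
    """Each sub-agent is just another prompt, specialised to one sub-task."""
    context = "\n".join(memory[-10:])  # only the most recent notes
    return call_llm(f"Relevant notes:\n{context}\n\nSub-task:\n{task}")


def agent(goal: str) -> str:
    memory = load_memory()
    # Planner prompt: ask the LLM to break the goal into sub-tasks.
    plan = call_llm(f"Break this goal into short sub-tasks, one per line:\n{goal}")
    for task in filter(None, (line.strip() for line in plan.splitlines())):
        result = run_sub_agent(task, memory)
        memory.append(f"{task} -> {result}")  # write results to the scratch-pad
    save_memory(memory)
    # Final prompt: synthesise an answer from the accumulated notes.
    return call_llm(f"Goal: {goal}\nNotes:\n" + "\n".join(memory))
```

Even in this toy loop, the memory, the planner and the sub-agents are bolted on from the outside rather than emerging from how the system learns - which is roughly why I doubt this structure is an efficient route to real-time, human-like intelligence.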
The next few years will be interesting, that’s for sure.