Must Read
Despite all this, in many ways we still feel like we're a long way from a truly intelligent machine, and AI training remains expensive and slow compared to what we need.
There are several new directions AI research is moving in, and the one I find most interesting is the return to neuroscience.
The AI community has debated for decades about how much we need to understand the human brain in order to create an artificial one. Some people believe that creating a chip with elements that act like neurons and connecting them together will get us to true artificial intelligence.
Others believe that intelligence is an algorithm in the brain that could run on many different computational substrates, and we just need to figure out what it is. There is also a counterpoint that says we don't really need to understand the brain at all. After all, we don't fly like birds - we invented airplanes. Intelligence can be achieved in many different ways.
I bring this up because of a recent article I read about what AI can learn from children.
Here is the relevant and interesting part:
Dr. Lorijn Zaadnoordijk and Professor Rhodri Cusack of the Trinity College Institute of Neuroscience, together with Dr. Tarek R. Besold of TU Eindhoven in the Netherlands, argue in their paper "Lessons from Infant Learning for Unsupervised Machine Learning" that better ways to learn from unstructured data are needed. For the first time, they make concrete suggestions about which specific insights from infant learning can be fruitfully applied to machine learning, and how exactly to apply them.
They say machines will need built-in preferences to shape their learning from the start.
They will need to learn from richer datasets that capture how the world looks, sounds, smells, tastes and feels. And like infants, they will need to have a developmental trajectory where experiences and networks change as they "mature."
The idea of instilling some basic preferences in machines before their learning begins is an interesting one, and one at the center of a long debate in cognitive linguistics about how much of human language is based on innate structures.
I don't have a strong opinion on whether this will work, but I think we'll need many different approaches to get to AGI. No one knows where the final breakthrough will come from. If you're working on ideas like this, I'd love to hear from you.
Thank you for reading.