I've been very interested in AI for quite some time, and when my general education requirements forced me to stop taking CS classes, I figured a course covering the components of the nervous system, how the brain works, and how we learn was the closest a natural science requirement could ever get to computer science. Never did I think, though, that this course would change the way I look at my passions, my job, and even people as a whole.
What I found most shocking was how my appreciation for the natural world was amplified and focused when viewed through the lens of technology and algorithmic thinking. Too often I had shied away from biology because of my perception of its inexplicable chaos. I preferred math, statistics, econ, and computer science, where order rules and one can reason through an unpredictable universe with the tools of logical constructs. But when human thinking is broken down to its very core components, down to the sodium and potassium imbalances that open channels within cells, that cause a voltage differential to travel along a cell, that cause neurons to fire, that cause patterns of neurons to fire along the axons in your spinal cord, that cause your muscle fibers to contract, there was structure. There the world made sense.
Your brain is a neural net
This has to be the most superficial and obvious connection. I mean, the whole idea of neural networks in machine learning, from name to implementation, is stolen from the human brain. However, what was interesting here was that not only did I gain a greater understanding of the ML concept by studying the brain, but I also gained an incredible appreciation for human learning by viewing it through an ML lens.
You learn from training data
From the time you were a child, your brain has been running non-stop training runs for everything. Learning to walk was a series of trials, the parameters being the electrical signals to your leg muscles and the objective function being whether you got where your child-self wanted to go or whether you fell and experienced pain. After weeks and months of training data, you learned the rough parameters for how to move your legs to walk. It sounds simple, but it's incredible how many things you can begin to explain when you take the analogy further. For example, let's extend the model by splitting the parameters into newly learned and previously existing sets. See how many cognitive biases you can explain by tuning the weights of these two sets. In fact, nearly all of them can be explained by the combination of this and another key takeaway: emotions rule.
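The walking analogy above can be sketched as a toy training loop. Everything here is hypothetical and simplified: the "motor signals" are just three numbers, and the objective is a made-up stand-in for "did you get where you wanted without falling."

```python
import random

# A toy sketch of the "learning to walk" loop. The parameters stand in
# for motor signals; the objective is an invented score that rewards
# being close to some ideal gait (arbitrarily set to 0.5 per signal).
def objective(signals):
    ideal = 0.5
    return -sum((s - ideal) ** 2 for s in signals)

def learn_to_walk(steps=2000, noise=0.05, seed=0):
    rng = random.Random(seed)
    signals = [rng.random() for _ in range(3)]  # initial flailing
    best = objective(signals)
    for _ in range(steps):
        # One trial: jitter the motor signals, keep the change only if
        # it scores better (more progress, less pain).
        trial = [s + rng.gauss(0, noise) for s in signals]
        score = objective(trial)
        if score > best:
            signals, best = trial, score
    return signals, best

signals, best = learn_to_walk()
print(signals, best)  # signals drift toward 0.5, best toward 0
```

This is just random hill climbing, not how motor cortex actually works, but it captures the loop the paragraph describes: trial, feedback, keep what worked.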
Emotions are your objective function
Every machine learning algorithm needs an objective function: something to maximize or minimize, a goal to optimize. For humans, it's our emotions. We might think we do things for rational, objective reasons, but even then, at the end of the day, it's either because experience has told us this path will lead to an emotionally satisfying outcome or simply because the process of rationalization is itself emotionally satisfying. When you couple this with all of the beliefs that bear emotional significance, you can begin to explain fascinating human behaviors and nearly every cognitive bias on that list.
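To make the point concrete, here's a hypothetical sketch: the same trial-and-error learner, handed two different "emotional" objectives, settles into two very different behaviors. The objective functions and their names are invented purely for illustration.

```python
import random

def hill_climb(objective, steps=1500, seed=0):
    """Generic trial-and-error learner: keep whatever feels better."""
    rng = random.Random(seed)
    x = rng.uniform(-1, 1)
    for _ in range(steps):
        trial = x + rng.gauss(0, 0.1)
        if objective(trial) > objective(x):
            x = trial
    return x

# Two made-up "emotional" objectives for the same learner.
def comfort(x):
    return -(x - 0.0) ** 2   # most satisfying at x = 0 (stay safe)

def thrill(x):
    return -(x - 0.9) ** 2   # most satisfying at x = 0.9 (seek excitement)

print(hill_climb(comfort))  # converges toward 0
print(hill_climb(thrill))   # converges toward 0.9
```

Same learner, same learning rule; only the objective changed, and so did the learned behavior. That's the claim in miniature: swap the emotions and you swap what the optimization produces.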
Your nervous system is a frickin' Fry's
These analogies don't stop with the more obvious "neurons":"neural network" variety. Lessons from the human body have been stolen all throughout technology, and an understanding of these technologies has immensely deepened my appreciation for their organic counterparts. I highly encourage anyone dabbling in AI, or just technological innovation at large, to deepen theirs as well.
- Your olfactory bulb is a hashmap
- Your retina is a CMOS sensor
- Your ciliary muscle is an autofocus motor
- Your epidermis is a capacitive film
- Your visual cortex does chroma subsampling and JPEG decompression
- Optical illusions are moiré and compression artifacts
Tech is in your brain, my friends.