Since I realized my formal computer science education is practically over, I decided to learn a little something extra this summer. After surveying the many options on Coursera, I settled on Machine Learning with Professor Andrew Ng. And I can’t wait.
I’ve always been fascinated by artificial intelligence, computer vision, natural language processing, and other machine learning applications. How could a programmer possibly codify all of the possible paths these problems could take? Is it even possible for a machine to "learn"? Well, hopefully some insight into these questions will come over the next couple of weeks.
After completing this machine learning course from Stanford on Coursera, I'm even more deeply fascinated and inspired by artificial intelligence and its implications for the future of software engineering, but I also feel somewhat cheated. From my limited viewpoint thus far, I can oversimplify the current state of machine learning as human intelligence with computer automation, which really isn't fundamentally different from other types of software systems. (That is what information systems are all about anyway.) Let me elaborate.
The majority of the course covered various types of regression and classification problems. These I had seen before from my studies in statistics, and the only interesting programmatic aspect was the massive amount of data that can be obtained and classified. Here the machine learning portion is really just a pseudonym for 'can acquire lots of information' and doesn't constitute what I previously considered real learning. I claim this type of artificial intelligence really isn't artificial intelligence at all: the onus is entirely on the engineer to do all of the thinking ahead of time, and the algorithm itself simply solves an optimization problem. People in tech these days frequently throw around terms like 'big data' and 'machine learning' to obfuscate the real value of their platform more often than to justify it, and I can see why. I used to place enormous weight on the term machine learning, and it conjured awestruck visions of magical possibilities, but the shroud of mystery has been lifted, and I'm left with cheap (yet still very clever and powerful!) tricks.
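To make that "simply solves an optimization problem" claim concrete, here's a minimal sketch of what fitting a regression model actually boils down to: gradient descent on squared error. The function name and the toy data are my own inventions for illustration, not material from the course.

```python
# Toy linear regression: "learning" y = w*x + b is just iterative
# minimization of mean squared error. The engineer chose the model,
# the loss, and the learning rate; the machine only turns the crank.

def fit_line(xs, ys, lr=0.01, steps=5000):
    """Fit y = w*x + b by batch gradient descent on mean squared error."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        # Gradients of (1/n) * sum((w*x + b - y)^2) w.r.t. w and b.
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Data generated from y = 2x + 1; descent recovers roughly w=2, b=1.
xs = [0, 1, 2, 3, 4]
ys = [1, 3, 5, 7, 9]
w, b = fit_line(xs, ys)
```

Every "intelligent" decision here happened before the loop ever ran, which is exactly the point.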
Later in the course, I became amazed at the power and implications of what I was seeing. Professor Ng introduced us to neural network learning, a class of machine learning methods frequently referred to as 'deep learning'. In short, this type of machine learning attempts to mimic the way the human brain learns, and for a machine learning newbie, the results you could get were mind-blowing. It also had a startling implication for me: once processing power becomes fast enough to fully mimic the human brain, why would I even need to program it? What would my job be as a software engineer, if I can just dump all of the data in the world into this thinking machine, and it makes all the necessary connections, algorithmic decisions, and logical realizations on its own?
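For a taste of why those lectures felt different, here's a toy network in the spirit of the course's backpropagation material: one hidden layer of sigmoid units trained on XOR, a function no single linear model can represent. The architecture, initialization, and hyperparameters are my own choices for illustration, not the course's assignments.

```python
import math
import random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Network: 2 inputs -> 4 hidden sigmoid units -> 1 sigmoid output.
HIDDEN = 4
w_h = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(HIDDEN)]
b_h = [random.uniform(-1, 1) for _ in range(HIDDEN)]
w_o = [random.uniform(-1, 1) for _ in range(HIDDEN)]
b_o = random.uniform(-1, 1)

# XOR: the classic example of a nonlinear decision boundary.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

def forward(x):
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + b) for w, b in zip(w_h, b_h)]
    o = sigmoid(sum(wo * hi for wo, hi in zip(w_o, h)) + b_o)
    return h, o

def mse():
    return sum((forward(x)[1] - y) ** 2 for x, y in data) / len(data)

loss_before = mse()
lr = 1.0
for _ in range(10000):
    for x, y in data:
        h, o = forward(x)
        # Backpropagation: chain rule through the sigmoid activations.
        d_o = (o - y) * o * (1 - o)
        for j in range(HIDDEN):
            d_h = d_o * w_o[j] * h[j] * (1 - h[j])  # uses pre-update w_o[j]
            w_o[j] -= lr * d_o * h[j]
            w_h[j][0] -= lr * d_h * x[0]
            w_h[j][1] -= lr * d_h * x[1]
            b_h[j] -= lr * d_h
        b_o -= lr * d_o
loss_after = mse()
```

Nobody hand-coded an XOR rule here; the hidden units discover their own intermediate features during training, which is the part that felt qualitatively different from the regression exercises.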
It seemed that either the methods were so simplistic that all the intelligence came from the programmer, or the programmer simply created a brain that did the rest. In my mind that was a lose-lose for machine learning. Skynet certainly was frightening to think about, but until that day comes when Artificial Intelligence simply becomes Human Intelligence, I suppose I'll just keep trying to improve my own.