So, recently I got seduced again by all the flashy, hot stuff that’s going around in Artificial Intelligence (henceforth referred to as AI). There was this day when it gave me chills down my spine while having no effect on the people around me, and I knew then that I had feelings for AI. So, as gradient descent does, I stepped towards it. I took courses, I helped out an assistant professor with research related to Support Vector Machines, I tried to read many AI papers and even did a 6-month-long thesis on using the MaxEnt approach in Inductive Logic Programming. Everything went fine.
But fine ain’t ever good. And fine is never really great. I wanted oomph! I wanted sweat. I wanted electricity. I wanted amazement. I wanted something phenomenal to happen between me and AI. But for that to happen, I would have to give it time, respect, effort and a lot of other stuff that automatically falls under passion.
So, this is the day. Day 1. The pilot episode. The first step in a journey into the AI world. Every day, I will walk a few steps in this world, even if it’s the busiest day and I have to read stuff while I’m shitting (which, btw, isn’t a bad idea!).
I have recently sorted my life out a little for this. I started listening to AI podcasts, wiped out all reddit subscriptions that were not related to the field, started following topics on Medium, and always have some pinned tabs in Chrome for whenever I get time to study. As a result, I’ve been learning things every day. And as a person who doesn’t have a long short-term memory, I felt it would be good to record my beautiful journey in a journal. This blog is basically that.
And more. It could be a wonderful place for some of you to learn and stay up to date with AI.
But that being said, I’m not gonna spend hours editing my posts or preparing them specifically for an audience. They’ll all be readable enough to get my point across, I guarantee. However, I’m not here to give lectures. I’m here to share whatever I see in this AI world. So, while I will enjoy your feedback and consider your valuable suggestions, not every opinion, like or dislike will lead to a change in the way I study or write. Remember that a painter doesn’t think of their audience before creating paintings.
Alright, enough with this one-time introduction.
Deep Neural Networks
I have some surface knowledge of neural networks. Recently, on the suggestion of a data scientist I know, I started following http://neuralnetworksanddeeplearning.com to dive into deep neural networks. Deep neural networks are the superstars right now – smashing every field under AI and solving all the difficult problems in focus, most notably Natural Language Processing (“Okay, Google”) and Computer Vision (“Tag Billy Joe in this photo”).
Chapter one explains very nicely how a real-world decision-making task can be seen as an optimisation problem with multiple binary constraints (this way or that way). From there it talks about the simplest models – perceptrons – covering their advantages and disadvantages. The disadvantages lead to something that sounds complicated but isn’t – the sigmoid neuron. This is basically a perceptron on drugs, whose inputs and output aren’t restricted to 0 or 1 anymore and can take any value in between. He then explains how any logical function can be represented by networks of these neurons, which means they can be applied very widely. This does make you think that anything could be built using neural networks. However, logic gates can do the same, and they don’t seem to produce self-driving cars. That is where the “machine learning” part comes in. He introduces an algorithm called gradient descent, which is then used to arrive at the optimal solution to the optimisation problem.
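Since the two pieces of chapter one (sigmoid neurons plus gradient descent) fit together so nicely, here’s a minimal Python sketch of both at once. The AND-gate task, learning rate and epoch count are my own toy choices for illustration, not from the book:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def neuron(w, b, x):
    # Like a perceptron, but the output is a smooth value in (0, 1).
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

# Teach the neuron to behave like an AND gate.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b, lr = [0.0, 0.0], 0.0, 5.0

for _ in range(5000):
    for x, y in data:
        out = neuron(w, b, x)
        # Gradient of the squared error (out - y)^2, using
        # sigmoid'(z) = out * (1 - out).
        delta = (out - y) * out * (1 - out)
        w = [wi - lr * delta * xi for wi, xi in zip(w, x)]
        b -= lr * delta

for x, y in data:
    print(x, round(neuron(w, b, x)))  # rounds to 0, 0, 0, 1
```

The “learning” is nothing but nudging each weight a little downhill on the error surface, over and over.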
Further on, the chapter applies a simple neural network to the MNIST (digit recognition) dataset. I’m yet to reach there, though.
You know how a baby learns, all by itself, without anyone providing it any information whatsoever? Like learning that if it throws a ball, the ball is gonna roll forward along the floor. It’s the same with a program. The algorithm just takes random autonomous decisions –> the environment changes according to its actions –> it reacts, and the process goes on and on… Since this sort of learning does get better and has been shown to work, it raises important questions. If we train such a model enough, is it going to behave just as intelligently as human beings? Because here we’re not going to hand it parameters and weights like in other machine learning algorithms. Here, the program has free will and makes sense of the world on its own. It could perhaps see the world in ways we humans never have.
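The act –> environment changes –> react loop above can be sketched in a few lines. The “environment” here is a toy I made up (guess a hidden number, get rewarded for closeness); real setups are far richer, but the loop has the same shape:

```python
import random

target = 7  # hidden from the agent; only rewards leak information about it
best_action, best_reward = None, float("-inf")

for step in range(200):
    action = random.randint(0, 10)  # the agent acts (randomly, like the baby)
    reward = -abs(target - action)  # the environment reacts
    if reward > best_reward:        # the agent remembers what worked
        best_action, best_reward = action, reward

print(best_action)  # almost surely 7 after this many tries
```

Nobody told the program what 7 was; it only ever saw the consequences of its own actions.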
It sounded very interesting and I found this MOOC course that I will complete in the upcoming future, or at least give it a try – http://liris.cnrs.fr/ideal/mooc/
Preventing machines from outsmarting us
Google’s DeepMind people have come up with a framework which can make sure that humans have access to a sort of red emergency button, which they can always hit to stop a machine from acting on decisions which could be harmful to humans or to these intelligent agents themselves. The framework is described in detail here – http://intelligence.org/files/Interruptibility.pdf
I’m yet to give it a proper read, but there are already some algorithms which support this sort of interrupt from humans whenever they want. These are Q-learning algorithms – a type of reinforcement learning algorithm. I have yet to lay my eyes on how they work, but this seems to be a good place to get to know them – http://mnemstudio.org/path-finding-q-learning-tutorial.htm
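From what I’ve gathered so far, the heart of Q-learning is just a table of values Q(state, action) updated from rewards. Here’s a minimal sketch on a made-up path-finding toy (5 rooms in a row, goal at the end), loosely in the spirit of the tutorial above; all the numbers are my own illustrative choices:

```python
import random

n_states, goal = 5, 4
actions = [-1, +1]                 # move left or move right
Q = [[0.0, 0.0] for _ in range(n_states)]
alpha, gamma, eps = 0.5, 0.9, 0.2  # learning rate, discount, exploration

for episode in range(500):
    s = 0
    while s != goal:
        # Mostly act greedily, sometimes explore at random.
        a = random.randrange(2) if random.random() < eps else Q[s].index(max(Q[s]))
        s2 = min(max(s + actions[a], 0), n_states - 1)
        r = 100.0 if s2 == goal else 0.0
        # The Q-learning update:
        # Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

# The learned policy: from every room, "go right" (action index 1) wins.
print([Q[s].index(max(Q[s])) for s in range(goal)])
```

The interruptibility angle, as far as I understand the paper’s abstract, is that this kind of algorithm can be made to not learn to avoid the human pressing the button.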
Meet Vi – a Bluetooth-headphone personal assistant and fitness coach
https://www.kickstarter.com/projects/1050572498/vi-the-first-true-artificial-intelligence-personal/description We had to come to this. For how long will you wear a band for activity tracking when all you need are your earphones? This brilliant product gives you beats to run to, tracks all your activity, plays your favourite music, and lets you take your calls too. It’ll mostly work better than all the Fitbits, and doesn’t cost that much either. Sadly, they haven’t hinted anywhere on the web at what algorithms they’re using on the backend for natural language processing. But the good thing is, it can be bought now. Another hope this raises is of the day when you won’t need a smartphone at all, thanks to earphones, lenses and augmented reality. It is not that far away.
Recurrent Neural Networks
Humans don’t start their thinking from scratch every second. As you read this essay, you understand each word based on your understanding of previous words. You don’t throw everything away and start thinking from scratch again. Your thoughts have persistence.
Traditional neural networks can’t do this, and it seems like a major shortcoming. Recurrent neural networks (RNNs) are neural networks which have loops in them. The traditional neural networks (perceptrons, sigmoid neurons, etc.) are feed-forward networks: you give them inputs and weights, and they give you outputs. You change the inputs and/or weights by a certain margin, the output differs by a certain margin, and so on. It’s all going left-to-right. Recurrent neural networks, however, feed their outputs back in as inputs. A great introduction to RNNs is this article – One such recurrent neural network that recently got famous is the Long Short Term Memory network (henceforth called LSTM).
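To make that “loop” concrete, here’s a minimal numpy sketch of one vanilla recurrent step. The sizes and random weights are arbitrary, just to show the hidden state being fed back in:

```python
import numpy as np

rng = np.random.default_rng(0)
input_size, hidden_size = 3, 4

W_xh = rng.normal(size=(hidden_size, input_size)) * 0.1   # input -> hidden
W_hh = rng.normal(size=(hidden_size, hidden_size)) * 0.1  # hidden -> hidden: the loop
b = np.zeros(hidden_size)

def rnn_step(x, h):
    # The new state depends on the current input AND the previous state.
    return np.tanh(W_xh @ x + W_hh @ h + b)

h = np.zeros(hidden_size)
sequence = [rng.normal(size=input_size) for _ in range(5)]
for x in sequence:
    h = rnn_step(x, h)  # h is fed back in: this is the recurrence

print(h.shape)  # (4,)
```

That single `h` being threaded through the loop is the “persistence of thought” the quote above is about.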
The LSTM is widely used because of its generally successful performance. It is especially used in cases where data is sequential – time series, say, or text-to-speech sorts of problems. This reminds me: it is the building block behind the latest Google Translate engine. Google recently rewrote their translate engine and found it had achieved something they didn’t even expect: it translated between two languages for which no direct path had been trained. This can be (exaggeratedly) hypothesized as their translate engine creating its own intermediate language, through which it could translate from any language to any other. This was the latest shock given by LSTM. More details here – http://www.inc.com/justin-bariso/the-ai-behind-google-translate-recently-did-something-extraordinary.html
Another recent one was when an LSTM created a screenplay on its own for a sci-fi movie. You can watch that movie and read more about it here – http://arstechnica.com/the-multiverse/2016/06/an-ai-wrote-this-movie-and-its-strangely-moving/
The range of creative things the LSTM can do is astounding. There’s a freakin’ app which suggests which emoji to use when you’re conversing with another human in any messaging app. There’s a whole article on how they built it, why neural networks were used, and why, again, it came down to LSTMs for them. Check it out here – http://getdango.com/emoji-and-deep-learning/
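Before moving on, here’s roughly what one step of an LSTM cell computes, as far as I’ve understood it so far – a minimal untrained numpy sketch, with illustrative names and biases omitted for brevity:

```python
import numpy as np

rng = np.random.default_rng(1)
input_size, hidden_size = 3, 4

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One weight matrix per gate, each acting on [h, x] concatenated.
W_f, W_i, W_o, W_c = (rng.normal(size=(hidden_size, hidden_size + input_size)) * 0.1
                      for _ in range(4))

def lstm_step(x, h, c):
    z = np.concatenate([h, x])
    f = sigmoid(W_f @ z)               # forget gate: what to drop from memory
    i = sigmoid(W_i @ z)               # input gate: what new info to store
    o = sigmoid(W_o @ z)               # output gate: what to reveal
    c = f * c + i * np.tanh(W_c @ z)   # cell state: the "long" memory
    h = o * np.tanh(c)                 # hidden state: the "short" memory
    return h, c

h = c = np.zeros(hidden_size)
for x in [rng.normal(size=input_size) for _ in range(5)]:
    h, c = lstm_step(x, h, c)
```

The gates are what let it choose what to remember and what to forget over long sequences – hence the name.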
By now, you probably have some surface idea about RNNs and more specifically LSTMs. So, this is the time to build our own chatbot. And I think nothing other than this video does it best – https://www.youtube.com/watch?v=5_SAroSvC0E&feature=youtu.be
It’s short, it talks about the various ways to solve chatbot problems (in turn making you realize how difficult it must’ve been to build Siri or Google Assistant), and ultimately shows you how to build a chatbot using Torch and Python.
That’s all for today. More coming soon. 🙂