Saturday, August 11, 2012

Getting Started


My supervisor asked me to do some background reading on bio-inspired vision topics, and I'd like to share some details of the material I've read so far.

Experiments trying to find out how the brain extracts structure from the light that falls on the retina started a long time back. Hubel and Wiesel started their experiments about half a century ago, and a great summary of their findings over the years is presented in the book Eye, Brain and Vision (Click!). It describes the human visual system from the eye to the visual cortex, and introduces many hypotheses that have become the foundations of later studies. To summarize the contents in a few sentences: the book describes how the retina picks up light, and the transformations and pre-processing that happen while the signals are being transferred to the visual cortex. It then describes the functionality of simple and complex cells (two terms coined by the authors) and the basic functionality of V1 and V2. The book gives some information about stereo vision and color vision as well. They did their experiments on live subjects, and I simply don't know how many monkey and cat brains they had to cut open to write this book. But this is a piece of work that has had a major impact on science, so I think it was worth it!

As I hadn't taken Artificial Neural Networks as an undergraduate course (thanks to the bureaucracy and the cat fighting between the EE and CSE departments at my first uni), I had to do a lot of reading on ANNs as well. But since there is a lot of information on this topic, I won't spend much time on it here. The important areas I identified are back-propagation, Hebbian and associative learning, and self-organizing maps.
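To make the Hebbian idea concrete ("cells that fire together wire together"), here is a minimal sketch using Oja's stabilized variant of the Hebbian rule, so the weights don't grow without bound. The toy data and parameters are my own; only the update rule itself is the textbook one. A single neuron's weight vector ends up pointing along the dominant correlation direction of its input:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy 2-d inputs that are strongly correlated along the direction (1, 1)
x1 = rng.standard_normal(2000)
X = np.vstack([x1, x1 + 0.1 * rng.standard_normal(2000)]).T

w = rng.standard_normal(2)
eta = 0.01                         # learning rate (arbitrary choice)
for x in X:
    y = w @ x                      # post-synaptic activity
    w += eta * y * (x - y * w)     # Oja's rule: Hebbian term minus a decay term

# w converges to the dominant correlation direction, here roughly (1, 1)/sqrt(2)
print(w / np.linalg.norm(w))
```

The plain Hebbian rule would be just `w += eta * y * x`; the extra `- y * w` term is what keeps the weight norm bounded while preserving the "fire together, wire together" behaviour.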

Reinforcement learning is a concept where, unlike in standard supervised learning, no teacher is present to initially 'teach' the system the correct behavior. Instead, an ultimate goal is introduced, and the agent is given the ability to evaluate its own actions in attaining that goal. Positive actions are rewarded so that the system dynamically evolves along the correct path. The work done by Sutton and Barto (Click!) provides a good introduction to the topic. The convergence times are slow, the assumptions involved are stricter, and of course the mathematics required might be a step ahead of classic supervised learning systems, but in a biological framework the reinforcement learning concept is more plausible. There is no teacher in the brain training the neurons with a 100% correct set of training samples!
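As a toy illustration of the reward-driven idea (a made-up example of mine, not taken from Sutton and Barto), here is a minimal tabular Q-learning sketch. The agent is never told the correct action for any state; it only receives a reward at the goal, and value propagates backwards through its own experience:

```python
import random

# Toy chain world: the agent starts in cell 0 of a 5-cell chain and is
# rewarded only for reaching cell 4.
N_STATES = 5
ACTIONS = [-1, +1]                      # step left or right
alpha, gamma, epsilon = 0.5, 0.9, 0.1   # learning rate, discount, exploration

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
random.seed(0)

def greedy(s):
    # pick the best-known action, breaking ties randomly
    best = max(Q[(s, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(s, a)] == best])

for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        a = random.choice(ACTIONS) if random.random() < epsilon else greedy(s)
        s_next = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s_next == N_STATES - 1 else 0.0
        # Q-learning update: actions leading towards the goal are rewarded,
        # and the value of the goal propagates backwards through the chain
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s_next, b)] for b in ACTIONS) - Q[(s, a)])
        s = s_next

# after training, the learned policy is to move right in every state
policy = [greedy(s) for s in range(N_STATES - 1)]
print(policy)
```

Note how all the "teaching" comes from a single scalar reward signal plus the agent's own exploration, which is exactly what makes this framework feel biologically plausible.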

One big question in bio-inspired vision is how the brain comprehends structure from raw light information. Although the brain is massive in terms of the number of interconnections, it is extremely slow compared to modern-day computers (millisecond range, compared to the micro/nanosecond range in computers). Hubel points out that simple cells in V1 (the primary visual cortex) are sensitive to structure. In their work, Olshausen and Field built up a theory on how the brain has adapted to understand visual stimuli. Their argument is that the brain breaks down what we see into simple structures called basis vectors, and using a dictionary of basis vectors it can represent the images received from the eye. More importantly, although the total dictionary size is large, a single image can be represented by a small number of bases, thus reducing the overall activity required. It is similar to compressing an image using a few high-energy Fourier components! Thus the name 'sparse coding' came into existence.
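Here is a rough sketch of the representation side of this idea (my own toy construction, not Olshausen and Field's actual learning algorithm): given an overcomplete dictionary of basis vectors, a greedy method such as matching pursuit can represent a signal using only a handful of them:

```python
import numpy as np

rng = np.random.default_rng(0)

dim, n_atoms = 16, 64                     # 64 basis vectors for a 16-d signal
D = rng.standard_normal((dim, n_atoms))
D /= np.linalg.norm(D, axis=0)            # unit-norm basis vectors ("atoms")

# Build a signal that truly is a combination of just 3 atoms
true_atoms = [5, 20, 41]
x = D[:, true_atoms] @ np.array([1.0, -0.7, 0.4])

# Matching pursuit: greedily pick the atom most correlated with the residual
residual, coeffs = x.copy(), np.zeros(n_atoms)
for _ in range(3):
    k = np.argmax(np.abs(D.T @ residual))
    c = D[:, k] @ residual
    coeffs[k] += c
    residual -= c * D[:, k]

print(np.nonzero(coeffs)[0])        # indices of the few atoms actually used
print(np.linalg.norm(residual))     # how much of the signal is left unexplained
```

The dictionary here is 4x overcomplete (64 atoms for a 16-dimensional signal), yet the code vector stays almost entirely zero, which is the "low overall activity" point made above.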

The theory of sparse coding is closely related to Independent Component Analysis (ICA), developed some time back. ICA deals with the problem of identifying a mutually independent set of basis vectors and a specific mixing matrix for each compound signal we encounter; in other words, how to set apart the original source signals in a mixed signal (see the Cocktail Party Problem (Click!)). Hyvärinen has contributed a lot to this particular field, and there are many resources we can use to learn from him.
(Tutorial )
(Video lecture)
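To see the cocktail-party idea in action, here is a small sketch (toy signals and a brute-force rotation search of my own devising, not Hyvärinen's FastICA algorithm) that unmixes two signals by whitening the mixtures and then finding the rotation that makes the outputs maximally non-Gaussian:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
t = np.linspace(0, 8, n)

# Two independent, non-Gaussian sources ("two speakers at the party")
s1 = np.sign(np.sin(3 * t))        # speaker 1: a square wave
s2 = rng.uniform(-1, 1, n)         # speaker 2: uniform noise
S = np.vstack([s1, s2])

A = np.array([[1.0, 0.6],          # unknown mixing matrix
              [0.4, 1.0]])
X = A @ S                          # what the two "microphones" record

# Whitening: zero mean, identity covariance
X = X - X.mean(axis=1, keepdims=True)
eigvals, eigvecs = np.linalg.eigh(np.cov(X))
Z = np.diag(eigvals ** -0.5) @ eigvecs.T @ X

def kurtosis(y):
    return np.mean(y ** 4) - 3     # zero for a unit-variance Gaussian

# After whitening, only a rotation remains unknown; pick the angle whose
# output is maximally non-Gaussian
theta = max(np.linspace(0, np.pi, 400),
            key=lambda th: abs(kurtosis(np.cos(th) * Z[0] + np.sin(th) * Z[1])))
R = np.array([[np.cos(theta), np.sin(theta)],
              [-np.sin(theta), np.cos(theta)]])
Y = R @ Z                          # recovered sources (up to order, sign, scale)

# Each recovered row should correlate strongly with one original source
for y in Y:
    print(max(abs(np.corrcoef(y, s)[0, 1]) for s in S))
```

The key assumption, as in real ICA, is non-Gaussianity of the sources: a mixture of independent signals looks "more Gaussian" than the signals themselves, so undoing the mixing means pushing away from Gaussianity. FastICA does this far more efficiently than a grid search, but the principle is the same.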

As for very general information about making machines intelligent, there is a nice book by Jeff Hawkins titled On Intelligence. It tries to address the AI problem from a different perspective than the classical analysis. It doesn't have any complex mathematics, and would be an interesting casual read for anyone interested in the topic.

I'm barely scratching the surface, and the content I find on this topic is intellectually interesting but mathematically overwhelming. Given how little humans have achieved so far, I can't help but wonder how evolution and nature can be so innovative. (Yes, it is evolution, and not an almighty!)
