Hinton graduated from Cambridge in 1970 with a Bachelor of Arts in Experimental Psychology and from Edinburgh in 1978 with a PhD in Artificial Intelligence. He has worked at Sussex, UCSD, Cambridge, Carnegie Mellon University, and University College London. He was the founding director of the Gatsby Computational Neuroscience Unit at University College London and is currently a professor in the computer science department at the University of Toronto. He holds a Canada Research Chair in Machine Learning and is the director of the program on "Neural Computation and Adaptive Perception", which is funded by the Canadian Institute for Advanced Research.
An accessible introduction to Geoffrey Hinton's research can be found in his Scientific American articles of September 1992 and October 1993. He investigates ways of using neural networks for learning, memory, perception, and symbol processing, and has over 200 publications in these areas. He was one of the researchers who introduced the back-propagation algorithm for training multi-layer neural networks, which has been widely used in practical applications, and he co-invented Boltzmann machines with Terry Sejnowski. His other contributions to neural network research include distributed representations, time-delay neural networks, mixtures of experts, Helmholtz machines, and products of experts. His current main interest is unsupervised learning procedures for neural networks with rich sensory input.
Hinton was the first winner of the David E. Rumelhart Prize. He was elected a Fellow of the Royal Society in 1998.
Hinton was the 2005 recipient of the IJCAI Award for Research Excellence, a lifetime-achievement award.
He has also been awarded the 2011 Herzberg Canada Gold Medal for Science and Engineering.
Hinton is the great-great-grandson of the logician George Boole, whose work eventually became one of the foundations of modern computer science, and of the surgeon and author James Hinton.