Richard Feynman, winner of the 1965 Nobel Prize in Physics and world-renowned “curious character,” gives us an insightful 1985 lecture on computer heuristics: how computers work, how they file and handle data, how they use allocated processing in a finite amount of time to solve problems, and how they actually compute values of interest to human beings. These topics are central to the study of which processes reduce the work a computer must do to solve a given problem, giving machines problem-solving speeds that outmatch humans in certain fields but have not yet reached the complexity of human intelligence.
The question of whether human thought is a series of fixed processes that could, in principle, be imitated by a computer is a major theme of this lecture. In his trademark teaching style, Feynman gives clear yet powerful answers for a field that has gone on to consume so much of our lives today. This lecture will no doubt be of great interest to anyone who has ever wondered about the process of human or machine thinking, and whether a synthesis of the two can be made without violating logic.
I’ve been a Feynman groupie for many years, having consumed just about every form of Feynman memorabilia available, so I was delighted to find this lecture, which closely matches my interest in machine learning. I see this discussion of computer heuristics as a precursor to what we call machine learning today. Enjoy!