I decided to go through Andrew Trask’s A Neural Network in 11 lines of Python to really learn how every line works, and it’s been very helpful. I had to review some matrix math and look up several of the numpy function calls he uses, but it was worth it.

My title here calls it a “modern neural network” because, while neural nets have been around since the 1950s, the use of backpropagation, a sigmoid function, and the sigmoid’s derivative in Andrew’s script highlights the advances that have made neural nets so popular in machine learning today.

The story includes a bit more drama than you might expect: early AI pioneers Marvin Minsky and Seymour Papert convinced the community that limitations in the perceptron model would prevent neural nets from getting very far. For some excellent background on how we got from Frank Rosenblatt’s 1957 hard-wired Mark I Perceptron (pictured here) to the derivatives and backpropagation that addressed the limitations of those early neural nets, see Andrey Kurenkov’s A ‘Brief’ History of Neural Nets and Deep Learning, Part 1.

(Image: Frank Rosenblatt’s Mark I Perceptron)

When you learn a new technology, it’s common to hear “don’t worry about the low-level details; use the tools!” That’s a good long-term strategy, but learning the lower-level details of how the tools work gives you a fuller understanding of what they can do for you.
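In that spirit, here is a minimal sketch of the ideas the script exercises: a small two-layer network trained with backpropagation, using the sigmoid and its derivative. This is my own paraphrase rather than Trask’s exact code; the XOR-style toy dataset, the layer sizes, and the seeded random generator are illustrative choices on my part.

```python
import numpy as np

def sigmoid(x):
    # Squash any real input into (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_deriv(s):
    # Derivative of the sigmoid, written in terms of its own output s
    return s * (1.0 - s)

# Toy dataset (illustrative, XOR-style): 4 samples, 3 input features
X = np.array([[0, 0, 1],
              [0, 1, 1],
              [1, 0, 1],
              [1, 1, 1]])
y = np.array([[0, 1, 1, 0]]).T  # target column vector

rng = np.random.default_rng(1)          # seeded so runs are repeatable
w0 = 2 * rng.random((3, 4)) - 1         # input -> hidden weights in [-1, 1)
w1 = 2 * rng.random((4, 1)) - 1         # hidden -> output weights

for _ in range(60000):
    # Forward pass: matrix products followed by sigmoid activations
    hidden = sigmoid(X @ w0)
    output = sigmoid(hidden @ w1)

    # Backpropagation: scale the error by the sigmoid's derivative,
    # then push it back through the weights via the chain rule
    output_delta = (y - output) * sigmoid_deriv(output)
    hidden_delta = (output_delta @ w1.T) * sigmoid_deriv(hidden)

    # Gradient-style weight updates
    w1 += hidden.T @ output_delta
    w0 += X.T @ hidden_delta

print(output.round(3))  # should approach [[0], [1], [1], [0]]
```

Note the trick that keeps scripts like this so compact: because the sigmoid’s derivative can be written in terms of the sigmoid’s own output, `s * (1 - s)`, the backward pass simply reuses the activations already computed in the forward pass.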