Improving Deep Neural Networks: Hyperparameter tuning, Regularization and Optimization is the second course in the new deep learning specialization on Coursera, taught by Coursera co-founder Andrew Ng. The 3-week course is a direct follow-up to the first course and builds on the basics of neural networks by introducing a variety of methods to make them work better in practice. Key topics include regularization and dropout to reduce overfitting, normalization to improve gradient flow, and optimization methods to speed up training. You should complete the first course in the series before starting this one. As with the first course, the quizzes and programming assignments are accessible to everyone, but you need to pay for the certificate to get credit for completion.
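To give a flavor of the techniques covered, here is a minimal numpy sketch of inverted dropout, one of the regularization methods from the course; the array values and the `keep_prob` setting are illustrative choices of mine, not taken from the assignments.

```python
import numpy as np

np.random.seed(0)

a = np.random.randn(3, 4)          # activations from some hidden layer
keep_prob = 0.8                    # probability of keeping each unit

# Randomly zero out units, then scale the survivors up by 1/keep_prob
# so the expected activation going into the next layer is unchanged.
mask = (np.random.rand(*a.shape) < keep_prob).astype(float)
a_dropped = (a * mask) / keep_prob
```

The scaling step is what makes this the "inverted" variant: no adjustment is needed at test time, since the expected magnitude already matches.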
Each week of Improving Deep Neural Networks consists of roughly 75 to 120 minutes of lecture discussing tips and techniques you can use to make neural networks work better. At the start of each lecture, Andrew Ng appears on screen and gives a brief overview of the topics to be covered, but most of the content consists of slides and handwritten notes backed by voice-overs. There is a lot of notation on screen at times, but if you made it through the first course, you will be well positioned to get through this one as well, and the variety of topics covered each week keeps things interesting. The video production value and style feel a bit dated for a brand new course, but the teaching quality itself is excellent.
Improving Deep Neural Networks has 5 programming assignments that cover weight initialization, regularization, gradient checking, optimization methods and the TensorFlow package. The programming assignments have comprehensive instructions and code skeletons that only require you to write specific key lines of code to finish implementing the various methods they cover. As a result, the programming assignments move along quite quickly, letting you focus on understanding how different techniques affect the networks you build rather than spending most of your effort structuring code, aligning matrix dimensions, and tracking down bugs. The final assignment gives a nice introduction to TensorFlow, which is likely a building block for future courses on convolutional and recurrent nets.
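As an example of the optimization methods the assignments implement, here is a sketch of gradient descent with momentum on a toy objective; the function f(w) = (w - 3)^2 and the hyperparameter values are my own illustrative picks, not from the course materials.

```python
# Gradient descent with momentum: keep an exponentially weighted average
# of past gradients (the "velocity") and step in that smoothed direction.
w, v = 0.0, 0.0        # parameter and velocity
beta, lr = 0.9, 0.1    # momentum coefficient and learning rate

for _ in range(200):
    grad = 2 * (w - 3)              # df/dw for f(w) = (w - 3)^2
    v = beta * v + (1 - beta) * grad
    w = w - lr * v                  # momentum-smoothed update
```

The velocity term damps oscillations across steep directions while accumulating speed along consistent ones, which is the intuition the lectures build before moving on to RMSprop and Adam.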
Improving Deep Neural Networks is a great course that builds upon the foundation laid in the first course in the deep learning specialization by introducing many methods that can improve network performance. It was a wise decision to separate the basics of neural nets covered in the first course from the supplementary methods covered in this course, as introducing too much complexity too soon would just make it harder to understand the basics. This course does have some faults, such as occasionally cluttered lecture slides and skipping a few lecture topics in the programming assignments, but they are mostly minor nitpicks.
I give Improving Deep Neural Networks: Hyperparameter tuning, Regularization and Optimization 4.5 stars out of 5 stars: Great.