
Machine Learning and Deep Learning Optimizers Implementation

Numerical Optimization, Machine Learning, Deep Learning, Gradient Descent
1,085 Students enrolled

In this course you will learn:

1- How to implement the batch (vanilla) gradient descent (GD) optimizer to obtain the optimal model parameters of single and multi-variable linear regression (LR) models.

2- How to implement mini-batch and stochastic GD for single and multi-variable LR models (a sketch contrasting the three variants follows this list).
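To make the distinction concrete, here is a minimal sketch (not the course notebook's actual code) of gradient descent for single-variable LR with a hypothetical batch_size parameter: leaving it unset uses all samples per update (batch GD), setting it to 1 gives stochastic GD, and intermediate values give mini-batch GD. It assumes NumPy arrays and a mean-squared-error cost.

```python
import numpy as np

def gradient_descent(x, y, lr=0.01, epochs=100, batch_size=None, seed=None):
    """Fit y ~ w*x + b by gradient descent on the mean-squared-error cost.

    batch_size=None      -> batch (vanilla) GD (all samples per update)
    batch_size=1         -> stochastic GD
    1 < batch_size < m   -> mini-batch GD
    """
    rng = np.random.default_rng(seed)
    m = len(x)
    if batch_size is None:
        batch_size = m
    w, b = 0.0, 0.0
    cost_history = []
    for _ in range(epochs):
        order = rng.permutation(m)              # shuffle samples each epoch
        for start in range(0, m, batch_size):
            idx = order[start:start + batch_size]
            xb, yb = x[idx], y[idx]
            err = (w * xb + b) - yb             # prediction error on the batch
            w -= lr * (err @ xb) / len(xb)      # dJ/dw averaged over the batch
            b -= lr * err.mean()                # dJ/db averaged over the batch
        cost_history.append(np.mean(((w * x + b) - y) ** 2) / 2)
    return w, b, cost_history

# Example on synthetic data: y = 3x + 2 plus a little noise
x = np.linspace(0, 1, 100)
y = 3 * x + 2 + 0.05 * np.random.default_rng(0).normal(size=100)
w, b, costs = gradient_descent(x, y, lr=0.5, epochs=200)        # batch GD
w_sgd, b_sgd, _ = gradient_descent(x, y, lr=0.05, epochs=50, batch_size=1)  # stochastic GD
```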

You will do this by following the guided steps presented in the attached notebook.

In addition, a video series describes each step.

You will also implement the cost function and the stop conditions, and plot the learning curves.
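For illustration, a possible form of the cost function, a simple stop condition, and the learning-curve plot might look like the following sketch (names such as mse_cost and train_with_stop are hypothetical, not the notebook's):

```python
import numpy as np
import matplotlib.pyplot as plt

def mse_cost(w, b, x, y):
    # J(w, b) = (1 / 2m) * sum_i (w*x_i + b - y_i)^2
    return np.mean(((w * x + b) - y) ** 2) / 2

def train_with_stop(x, y, lr=0.5, max_epochs=1000, tol=1e-8):
    """Batch GD with a simple stop condition: quit when the cost stops improving."""
    w, b, costs = 0.0, 0.0, []
    for epoch in range(max_epochs):
        err = (w * x + b) - y
        w -= lr * (err @ x) / len(x)
        b -= lr * err.mean()
        costs.append(mse_cost(w, b, x, y))
        if epoch > 0 and abs(costs[-2] - costs[-1]) < tol:
            break                               # converged: cost change below tolerance
    return w, b, costs

x = np.linspace(0, 1, 100)
y = 3 * x + 2
w, b, costs = train_with_stop(x, y)

plt.plot(costs)                                 # the learning curve: cost per epoch
plt.xlabel("epoch")
plt.ylabel("cost J(w, b)")
plt.show()
```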

You will understand the power of a vectorized implementation of the optimizer.
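For example, in the multi-variable case all partial derivatives can be computed with a single matrix product instead of explicit loops over samples and features. The sketch below is illustrative only, assuming a design matrix X with a leading bias column of ones:

```python
import numpy as np

def vectorized_gd(X, y, lr=0.1, epochs=500):
    """Vectorized batch GD for multi-variable LR; X has shape (m, n), theta shape (n,)."""
    m, n = X.shape
    theta = np.zeros(n)
    for _ in range(epochs):
        grad = X.T @ (X @ theta - y) / m        # all partial derivatives in one matrix product
        theta -= lr * grad
    return theta

rng = np.random.default_rng(0)
X = np.hstack([np.ones((200, 1)), rng.normal(size=(200, 2))])   # bias column + 2 features
true_theta = np.array([1.0, 2.0, -3.0])
y = X @ true_theta
theta = vectorized_gd(X, y)                     # should approach [1, 2, -3]
```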

This implementation will help you solidify the concepts and build intuition for how the optimizers work during the training phase.

By the end of this course you will have a balanced theoretical and practical understanding of the optimizers that are widely used in both machine learning (ML) and deep learning (DL).

In this course we will focus on the main numerical optimization concepts and techniques used in ML and DL.

Although we apply these techniques to single and multi-variable LR, the concepts are the same for other ML and DL models.

We use LR here for simplification and to focus on the optimizers rather than the models.

In the subsequent practical works we will scale this vision to implement more advanced optimizers (a brief momentum sketch appears after this list), such as:

– Momentum-based GD.

– Nesterov accelerated gradient (NAG).

– Adaptive gradient (AdaGrad).

– RMSProp.

– Adam.

– BFGS.
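As a brief preview of where the series is heading, a momentum-based GD update can be sketched as follows (an illustrative toy example, not the course's implementation):

```python
import numpy as np

def momentum_gd_step(theta, grad, velocity, lr=0.01, beta=0.9):
    """One momentum-based GD update:
    v     <- beta * v + grad
    theta <- theta - lr * v
    """
    velocity = beta * velocity + grad
    theta = theta - lr * velocity
    return theta, velocity

# Usage: keep a running velocity across iterations
theta = np.zeros(3)
velocity = np.zeros(3)
for _ in range(100):
    grad = 2 * theta - np.array([1.0, 2.0, -3.0])   # gradient of a toy quadratic
    theta, velocity = momentum_gd_step(theta, grad, velocity)
```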

You will be provided with the following:

– Course material (slides) for "Master Numerical Optimization for Machine Learning and Deep Learning in 5 Days".

– Notebooks of the guided steps you should follow.

– Notebooks with the ideal solutions (the implementation) of the practical works.

– Data files.

You should do the implementation by yourself and compare your code to the practical session solution provided in a separate notebook.

A video series explaining the solution is provided. However, do not watch the solution until you have finished your own implementation.

Curriculum highlights:

– Introduction to Single Variable Linear Regression Optimization using GD
– Single Variable LR Optimization Problem Definition
– Single Variable LR Optimization Practical Work 1 Solution
– The Main Idea of Multi-Variable LR Optimization Using GD
– Stochastic GD (SGD) for Multi-Variable LR Practical Work 3 Part 2 Solution
How long do I have access to the course materials?
You can view and review the lecture materials indefinitely, like an on-demand channel.
Can I take my courses with me wherever I go?
Definitely! If you have an internet connection, courses on Udemy are available on any device at any time. If you don't have an internet connection, some instructors also let their students download course lectures. That's up to the instructor though, so make sure you get on their good side!
Rating: 4.9 (11 reviews)
5 stars: 9
4 stars: 2
3 stars: 0
2 stars: 0
1 star: 0
Course details
– 7 hours of video
– Certificate of Completion
