Modern Deep Learning in Python

4.6
2,357 reviews
Payment
Paid course
Certificate
Free certificate
Duration
9.5 hours of course content
About the course

This course continues where my first course, Deep Learning in Python, left off. You already know how to build an artificial neural network in Python, and you have a plug-and-play script that you can use for TensorFlow. Neural networks are one of the staples of machine learning, and they are always a top contender in Kaggle contests. If you want to improve your skills with neural networks and deep learning, this is the course for you.

You already learned about backpropagation, but there were a lot of unanswered questions. How can you modify it to improve training speed? In this course you will learn about batch and stochastic gradient descent, two commonly used techniques that allow you to train on just a small sample of the data at each iteration, greatly speeding up training time.
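
As a rough illustration of the difference, here is a minimal Numpy sketch comparing one full-gradient update per pass over the data against many mini-batch updates per pass. This is not the course's own code; the synthetic data, learning rate, and batch size are made up for the example.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Synthetic binary classification data (purely illustrative)
np.random.seed(0)
N, D = 1000, 5
X = np.random.randn(N, D)
true_w = np.random.randn(D)
y = (X @ true_w > 0).astype(float)

def grad(w, Xb, yb):
    # Gradient of the average cross-entropy loss for logistic regression
    return Xb.T @ (sigmoid(Xb @ w) - yb) / len(yb)

lr, epochs, batch_size = 0.1, 20, 100

# Full gradient descent: one update per pass over ALL the data
w_full = np.zeros(D)
for _ in range(epochs):
    w_full -= lr * grad(w_full, X, y)

# Mini-batch (stochastic) gradient descent: many cheap updates per pass
w_mb = np.zeros(D)
for _ in range(epochs):
    idx = np.random.permutation(N)
    for start in range(0, N, batch_size):
        b = idx[start:start + batch_size]
        w_mb -= lr * grad(w_mb, X[b], y[b])

for name, w in [("full", w_full), ("mini-batch", w_mb)]:
    acc = np.mean((sigmoid(X @ w) > 0.5) == y)
    print(f"{name} gradient descent training accuracy: {acc:.3f}")
```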

You will also learn about momentum, which can help carry you through local minima and keep you from having to be too conservative with your learning rate, and about adaptive learning rate techniques like AdaGrad, RMSprop, and Adam, which can also help speed up your training.
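
For reference, here is a minimal Numpy sketch of the momentum and Adam update rules themselves. The function names and default hyperparameters are illustrative, not taken from the course code.

```python
import numpy as np

def momentum_step(w, g, v, lr=0.01, mu=0.9):
    # v is a running "velocity": an exponentially decaying sum of past gradients
    v = mu * v - lr * g
    return w + v, v

def adam_step(w, g, m, s, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    # m: estimate of the mean gradient; s: estimate of the mean squared gradient
    # t: 1-based step count, used for bias correction
    m = beta1 * m + (1 - beta1) * g
    s = beta2 * s + (1 - beta2) * g * g
    m_hat = m / (1 - beta1 ** t)
    s_hat = s / (1 - beta2 ** t)
    w = w - lr * m_hat / (np.sqrt(s_hat) + eps)
    return w, m, s
```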

Because you already know about the fundamentals of neural networks, we are going to talk about more modern techniques, like dropout regularization and batch normalization, which we will implement in both TensorFlow and Theano. The course is constantly being updated and more advanced regularization techniques are coming in the near future.
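
As a quick preview of what these layers look like in practice, here is a minimal sketch using the Keras API on top of TensorFlow 2. The course itself builds dropout and batch normalization at a lower level in TensorFlow and Theano; the layer sizes here are arbitrary.

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(300, input_shape=(784,)),   # flattened 28x28 image in
    tf.keras.layers.BatchNormalization(),             # normalize activations per mini-batch
    tf.keras.layers.Activation('relu'),
    tf.keras.layers.Dropout(0.5),                     # randomly drop 50% of units during training
    tf.keras.layers.Dense(10, activation='softmax'),  # 10 digit classes
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.summary()
```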

In my last course, I just wanted to give you a little sneak peek at TensorFlow. In this course we are going to start from the basics so you understand exactly what's going on - what are TensorFlow variables and expressions, and how can you use these building blocks to create a neural network? We are also going to look at a library that's been around much longer and is very popular for deep learning - Theano. With this library we will also examine the basic building blocks - variables, expressions, and functions - so that you can build neural networks in Theano with confidence.
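
To give a flavor of those building blocks, here is a minimal sketch in the TensorFlow 2 eager style (the course's original TensorFlow material uses the older graph-and-session API, alongside a parallel Theano version; the shapes below are just an example): variables hold the trainable weights, and expressions built from them define the forward pass.

```python
import numpy as np
import tensorflow as tf

# Variables: the trainable parameters of one hidden layer
W = tf.Variable(np.random.randn(784, 300) * 0.01, dtype=tf.float32)
b = tf.Variable(np.zeros(300), dtype=tf.float32)

# An expression: the forward pass through that hidden layer
def hidden(X):
    return tf.nn.relu(tf.matmul(X, W) + b)

X = tf.constant(np.random.randn(32, 784), dtype=tf.float32)  # a fake mini-batch
print(hidden(X).shape)  # (32, 300)
```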

Theano was the predecessor of all modern deep learning libraries. Today, we have almost TOO MANY options: Keras, PyTorch, CNTK (Microsoft), MXNet (Amazon / Apache), and more. In this course, we cover all of these! Pick and choose the one you love best.

Because one of the main advantages of TensorFlow and Theano is the ability to use the GPU to speed up training, I will show you how to set up a GPU-instance on AWS and compare the speed of CPU vs GPU for training a deep neural network.
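
As a sanity check once such an instance is running, a sketch like the following (TensorFlow 2 API; the matrix sizes are arbitrary) can confirm that the GPU is visible and give a rough CPU-vs-GPU timing comparison:

```python
import time
import tensorflow as tf

print("GPUs visible:", tf.config.list_physical_devices('GPU'))

def time_matmul(device):
    # Time one large matrix multiplication on the given device
    with tf.device(device):
        A = tf.random.normal((4000, 4000))
        B = tf.random.normal((4000, 4000))
        t0 = time.time()
        C = tf.matmul(A, B)
        _ = C.numpy()  # force the computation to finish before stopping the clock
        return time.time() - t0

print("CPU seconds:", time_matmul('/CPU:0'))
print("GPU seconds:", time_matmul('/GPU:0'))  # requires a CUDA-enabled GPU instance
```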

With all this extra speed, we are going to look at a real dataset - the famous MNIST dataset (images of handwritten digits) and compare against various benchmarks. This is THE dataset researchers look at first when they want to ask the question, "does this thing work?"

These images are an important part of deep learning history and are still used for testing today. Every deep learning expert should know them well.
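
For readers who want to poke at the data right away, here is a minimal sketch that loads MNIST through Keras and flattens it for a feedforward network (the course obtains the dataset from a different source and loads it with its own util.py):

```python
import tensorflow as tf

(X_train, y_train), (X_test, y_test) = tf.keras.datasets.mnist.load_data()
print(X_train.shape, y_train.shape)  # (60000, 28, 28) (60000,)

# Flatten each 28x28 image into a 784-dimensional vector and scale to [0, 1]
X_train = X_train.reshape(-1, 784).astype('float32') / 255.0
X_test = X_test.reshape(-1, 784).astype('float32') / 255.0
```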

This course focuses on "how to build and understand", not just "how to use". Anyone can learn to use an API in 15 minutes after reading some documentation. It's not about "remembering facts", it's about "seeing for yourself" via experimentation. It will teach you how to visualize what's happening in the model internally. If you want more than just a superficial look at machine learning models, this course is for you.

Suggested Prerequisites:

  • Know about gradient descent
  • Probability and statistics
  • Python coding: if/else, loops, lists, dicts, sets
  • Numpy coding: matrix and vector operations, loading a CSV file
  • Know how to write a neural network with Numpy

TIPS (for getting through the course):

  • Watch it at 2x.
  • Take handwritten notes. This will drastically increase your ability to retain the information.
  • Write down the equations. If you don't, I guarantee it will just look like gibberish.
  • Ask lots of questions on the discussion board. The more the better!
  • Realize that most exercises will take you days or weeks to complete.
  • Write code yourself, don't just sit there and look at my code.

WHAT ORDER SHOULD I TAKE YOUR COURSES IN?:

  • Check out the lecture "What order should I take your courses in?" (available in the Appendix of any of my courses, including the free Numpy course)

Curriculum
Introduction and Outline
Outline - what did you learn previously, and what will you learn in this course?
In the previous course you learned about softmax and backpropagation. What will you learn in this course?
Where does this course fit into your deep learning studies?
Review
Review of basic neural network concepts, downloading MNIST, and using a linear classifier on it
Review of Basic Concepts
Where to get the MNIST dataset and Establishing a Linear Benchmark
Where to get the MNIST dataset, and where to put it so that the code from this course runs correctly. I run through util.py, which contains functions we'll be using throughout the course. I run a logistic regression benchmark to show the accuracy we should aim to beat with deep learning.
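
For orientation, a comparable linear benchmark can be sketched with scikit-learn as follows; this is an illustrative stand-in, not the course's util.py or benchmark script.

```python
import tensorflow as tf
from sklearn.linear_model import LogisticRegression

(X_train, y_train), (X_test, y_test) = tf.keras.datasets.mnist.load_data()
X_train = X_train.reshape(-1, 784) / 255.0
X_test = X_test.reshape(-1, 784) / 255.0

clf = LogisticRegression(max_iter=200)     # multinomial logistic regression
clf.fit(X_train[:10000], y_train[:10000])  # subset to keep the fit quick
print("Linear benchmark accuracy:", clf.score(X_test, y_test))  # roughly 0.9
```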
Gradient Descent: Full vs Batch vs Stochastic
Know the difference between full, batch, and stochastic gradient descent, and their advantages and disadvantages
What are full, batch, and stochastic gradient descent?
Full vs Batch vs Stochastic Gradient Descent in code
Momentum and adaptive learning rates
Know how to use momentum and adaptive learning rates to improve backpropagation
Using Momentum to Speed Up Training
How can you use momentum to speed up neural network training and get out of local minima?
Nesterov Momentum
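
A minimal Numpy sketch of the Nesterov momentum update, where the gradient is evaluated at the "look-ahead" point; grad_fn is an illustrative callback supplied by the caller, not part of the course code:

```python
import numpy as np

def nesterov_step(w, v, grad_fn, lr=0.01, mu=0.9):
    # grad_fn(w) is assumed to return the gradient of the loss at w
    g = grad_fn(w + mu * v)   # look ahead along the current velocity first
    v = mu * v - lr * g
    return w + v, v
```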
Requirements
  • Be comfortable with Python, Numpy, and Matplotlib
  • If you do not yet know about gradient descent, backprop, and softmax, take my earlier course, Deep Learning in Python, and then return to this course.
What will you learn?
  • Apply momentum to backpropagation to train neural networks
  • Apply adaptive learning rate procedures like AdaGrad, RMSprop, and Adam to backpropagation to train neural networks
  • Understand the basic building blocks of Theano
  • Build a neural network in Theano
  • Understand the basic building blocks of TensorFlow
  • Build a neural network in TensorFlow
  • Build a neural network that performs well on the MNIST dataset
  • Understand the difference between full gradient descent, batch gradient descent, and stochastic gradient descent
  • Understand and implement dropout regularization in Theano and TensorFlow
  • Understand and implement batch normalization in Theano and TensorFlow
  • Write a neural network using Keras
  • Write a neural network using PyTorch
  • Write a neural network using CNTK
  • Write a neural network using MXNet
Instructors
Lazy Programmer Inc.
Artificial intelligence and machine learning engineer

Today, I spend most of my time as an artificial intelligence and machine learning engineer with a focus on deep learning, although I have also been known as a data scientist, big data engineer, and full stack software engineer.

I received my master's degree in computer engineering with a specialization in machine learning and pattern recognition.

My experience includes online advertising and digital media, as both a data scientist (optimizing click and conversion rates) and a big data engineer (building data processing pipelines). Some big data technologies I frequently use are Hadoop, Pig, Hive, MapReduce, and Spark.

I've created deep learning models to predict click-through rate and user behavior, as well as for image and signal processing and modeling text.

My work in recommendation systems has applied Reinforcement Learning and Collaborative Filtering, and we validated the results using A/B testing.

I have taught undergraduate and graduate courses in data science, statistics, machine learning, algorithms, calculus, computer graphics, and physics to students attending universities such as Columbia University, NYU, Hunter College, and The New School.

Multiple businesses have benefitted from my web programming expertise. I do all the backend (server), frontend (HTML/JS/CSS), and operations/deployment work. Some of the technologies I've used are: Python, Ruby/Rails, PHP, Bootstrap, jQuery (Javascript), Backbone, and Angular. For storage/databases I've used MySQL, Postgres, Redis, MongoDB, and more.

Platform
Udemy
Udemy courses are well suited for professional development. The platform is built so that experts launch courses themselves. All materials come with lifetime access. You can find a course on this platform about, without exaggeration, any topic - from a tutorial for a particular camera to a theoretical course on financial risk management. The language and format of instruction are set by the instructor, so study the course information carefully before purchasing.
$116.99 (regular price $179.99)
Rating
4.6