What is Deep Learning?
Neural networks are a type of machine learning model that has been around for at least 50 years. The fundamental unit of a neural network is a node, which is loosely based on the biological neuron in the mammalian brain. The connections between nodes are also modeled on biological brains, as is the way these connections develop over time (through “training”).
Deep Learning Methods
Deep learning can be defined as neural networks with a large number of parameters and layers, arranged in one of four fundamental network architectures. Common deep learning methods include back-propagation, stochastic gradient descent, learning rate decay, dropout, max pooling, batch normalization, long short-term memory (LSTM), skip-gram, continuous bag of words (CBOW), and transfer learning.
Back-propagation computes the partial derivatives (the gradient) of a function that has the form of a function composition, as neural networks do. When you solve an optimization problem with a gradient-based method, you need to compute the gradient of the objective function at each iteration; for a neural network, that objective function is exactly such a composition.
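As a minimal sketch of this idea, consider a two-layer composition f(x) = sigmoid(w2 · sigmoid(w1 · x)); the function, weights, and names here are illustrative, not any particular library's API. Back-propagation applies the chain rule to the composition, reusing the activations from the forward pass:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(x, w1, w2):
    # Forward pass: keep the intermediate activation for the backward pass.
    a1 = sigmoid(w1 * x)
    a2 = sigmoid(w2 * a1)
    return a1, a2

def backward(x, w1, w2):
    # Backward pass: chain rule applied layer by layer, from output to input.
    a1, a2 = forward(x, w1, w2)
    da2 = a2 * (1 - a2)           # sigmoid'(w2 * a1)
    da1 = a1 * (1 - a1)           # sigmoid'(w1 * x)
    grad_w2 = da2 * a1            # d f / d w2
    grad_w1 = da2 * w2 * da1 * x  # d f / d w1, chained through both layers
    return grad_w1, grad_w2
```

Each factor in `grad_w1` is one link of the chain rule; deeper networks just multiply in more such factors.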
How does deep learning attain such impressive results?
There are two common ways to compute these gradients. The first is analytic differentiation: you know the form of the function and compute its derivatives using the chain rule (basic calculus). The second is approximate differentiation using finite differences, which is expensive compared to the analytic approach. In practice, finite differences are mainly used to validate a back-propagation implementation when debugging.
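A small sketch of the two approaches side by side, on an illustrative function chosen here (not taken from the original text). The central finite difference needs two function evaluations per parameter, which is what makes it expensive on large networks, but it is a handy sanity check:

```python
def f(x):
    return x ** 3 + 2 * x

def analytic_grad(x):
    # Analytic differentiation via basic calculus: d/dx (x^3 + 2x) = 3x^2 + 2.
    return 3 * x ** 2 + 2

def finite_diff_grad(func, x, eps=1e-5):
    # Central finite difference: two evaluations per parameter,
    # used to validate an analytic (back-prop) gradient when debugging.
    return (func(x + eps) - func(x - eps)) / (2 * eps)
```

If the analytic and finite-difference values disagree beyond numerical noise, the back-prop implementation has a bug.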
Gradient descent is prone to getting stuck in a local minimum, depending on the nature of the terrain (the function, in ML terms). But when you have a special kind of terrain shaped like a bowl (in ML terms, a convex function), the algorithm is guaranteed to find the optimum. You can visualize this by picturing a river flowing down the bowl to its lowest point.
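The bowl analogy can be sketched in a few lines; the function f(x) = (x - 3)^2 and the step size are illustrative. Because the function is convex, repeatedly stepping downhill along the negative gradient converges to the unique minimum:

```python
def grad(x):
    # Gradient of the convex "bowl" f(x) = (x - 3)^2, minimized at x = 3.
    return 2 * (x - 3)

x = 10.0   # starting point on the slope
lr = 0.1   # learning rate (step size)
for _ in range(200):
    x -= lr * grad(x)  # step downhill along the negative gradient
```

On a non-convex terrain the same loop could settle into whichever valley the starting point happens to sit above, which is the local-minimum problem described above.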
Source: Deep learning definition, algorithms, models, applications & advantages