*This post is part of my #100DaysOfMLCode challenge: learning about Machine Learning for 100 days continuously. The images are taken from the specialization course I am taking on Coursera.
*Please excuse my writing skills, and also the information is not well organized. This post is about what I have understood from the videos I watched.
Coursera’s Deep Learning Specialization course homepage: https://www.coursera.org/learn/neural-networks-deep-learning#
Python and Vectorization
Vectorization
- Vectorization is basically a method of getting rid of explicit for-loops.
- In the deep learning era, code often runs on relatively large datasets, so it needs to perform well. Vectorization plays a key role here: vectorized code runs much faster than non-vectorized code.
- Much of the speed-up comes from SIMD (Single Instruction, Multiple Data): numpy's built-in functions let the CPU (or GPU) apply one instruction to many data elements in parallel.
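For instance, here is the classic timing comparison; a toy sketch, and the exact numbers vary by machine, but the vectorized version is typically hundreds of times faster:

```python
import time
import numpy as np

a = np.random.rand(1000000)
b = np.random.rand(1000000)

# Vectorized dot product (uses optimized SIMD instructions under the hood)
tic = time.time()
c = np.dot(a, b)
toc = time.time()
print("Vectorized: " + str(1000 * (toc - tic)) + " ms")

# Explicit for-loop version of the same computation
tic = time.time()
c = 0
for i in range(1000000):
    c += a[i] * b[i]
toc = time.time()
print("For-loop:   " + str(1000 * (toc - tic)) + " ms")
```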
More Vectorization Examples
- By avoiding explicit for loops, your code will run significantly faster.
- The rule of thumb to keep in mind: “whenever you are programming a neural network or logistic regression, avoid explicit for-loops whenever possible.”
Whenever you want to compute a vector u as the product of a matrix A and another vector v, the definition of matrix multiplication gives u_i = Σ_j A_ij * v_j.
In the non-vectorized version we need two for-loops, one over i and one over j; the vectorized version, u = np.dot(A, v), eliminates both, as sketched below.
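A minimal sketch of both versions, with toy dimensions chosen only for illustration:

```python
import numpy as np

A = np.random.rand(3, 4)
v = np.random.rand(4)

# Non-vectorized: two explicit for-loops, over rows and columns
u = np.zeros(3)
for i in range(3):
    for j in range(4):
        u[i] += A[i][j] * v[j]

# Vectorized: one call to np.dot replaces both loops
u_vec = np.dot(A, v)
```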
Suppose you need to apply the exponential operation to every element of a vector/matrix.
The non-vectorized implementation loops over the elements one at a time, whereas the vectorized implementation, np.exp(v), eliminates the explicit for-loop entirely.
Using numpy's built-in functions, we can apply the same operation to all the elements of a vector without an explicit for-loop, for example:
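A few of numpy's element-wise functions, applied to a toy vector:

```python
import numpy as np

v = np.array([1.0, 2.0, 3.0])

np.exp(v)         # element-wise exponential
np.log(v)         # element-wise natural log
np.abs(v)         # element-wise absolute value
np.maximum(v, 0)  # element-wise max against 0
v ** 2            # element-wise square
1 / v             # element-wise reciprocal
```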
In the logistic regression gradient computation, these built-in functions let us eliminate the second (inner) for-loop from the algorithm: instead of looping over the features to accumulate dw_1, dw_2, ..., we use a single vector dw.
Vectorizing Logistic Regression
- Vectorizing Logistic Regression means implementing Logistic Regression without a single for-loop.
Predictions for all m training examples can be calculated in a single step rather than one example at a time, by stacking the examples as columns of a matrix X.
With one line of code we can calculate capital Z, the row vector holding z for every example: Z = w^T X + b. For example:
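A small sketch with made-up sizes, following the course convention that X is (n_x, m) with one example per column, w is (n_x, 1), and b is a scalar:

```python
import numpy as np

n_x, m = 4, 10              # toy sizes: 4 features, 10 examples
X = np.random.rand(n_x, m)  # one training example per column
w = np.zeros((n_x, 1))      # weights
b = 0.0                     # bias (a scalar, broadcast below)

Z = np.dot(w.T, X) + b      # shape (1, m): all m values of z in one line
A = 1 / (1 + np.exp(-Z))    # sigmoid applied element-wise: all m predictions
```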
Vectorizing Logistic Regression’s Gradient Output
We have done forward propagation and backward propagation, computing the predictions and derivatives on all m training examples, without using a for-loop: Z = w^T X + b, A = sigmoid(Z), dZ = A - Y.
Gradient descent is then updated as follows: dw = (1/m) X dZ^T, db = (1/m) Σ dZ, then w := w - α dw and b := b - α db.
An explicit for-loop is still needed whenever we want to run gradient descent for multiple iterations.
- So a single iteration of gradient descent can be computed without using any for-loop at all, as the sketch below shows.
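Putting the whole iteration together; the toy data here is made up, but the update equations follow the course:

```python
import numpy as np

# Toy setup; shapes follow the course convention
n_x, m, alpha = 4, 10, 0.01
X = np.random.rand(n_x, m)            # inputs, one example per column
Y = np.random.randint(0, 2, (1, m))   # labels in {0, 1}
w, b = np.zeros((n_x, 1)), 0.0

# One full iteration of gradient descent, with no for-loops
Z = np.dot(w.T, X) + b                # forward propagation
A = 1 / (1 + np.exp(-Z))
dZ = A - Y                            # backward propagation
dw = np.dot(X, dZ.T) / m
db = np.sum(dZ) / m
w = w - alpha * dw                    # parameter update
b = b - alpha * db
```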
- Broadcasting is a technique through which certain parts of the code can be made more efficient and faster.
Broadcasting in Python
Can you calculate the percentage of calories from carbs, protein, and fat in each food without an explicit for-loop?
axis=0 sums the elements vertically, i.e. down each column (axis=1 would sum horizontally, across each row).
This is an example of broadcasting in Python: a (3, 4) matrix is divided by a (1, 4) row vector, which numpy stretches to match.
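The code below reproduces the lecture's food example; the calorie numbers are as I recall them from the slide:

```python
import numpy as np

# Calories from carbs, protein, fat (rows) for four foods (columns)
A = np.array([[56.0,   0.0,  4.4, 68.0],
              [ 1.2, 104.0, 52.0,  8.0],
              [ 1.8, 135.0, 99.0,  0.9]])

cal = A.sum(axis=0)                        # total calories per food, shape (4,)
percentage = 100 * A / cal.reshape(1, 4)   # (3,4) / (1,4): row is broadcast
print(percentage)
```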
The basic process of broadcasting: when the shapes don't match, numpy copies (“stretches”) the smaller array along the missing dimension before performing the element-wise operation.
General principle of broadcasting: an (m, n) matrix combined with a (1, n) row vector or an (m, 1) column vector via +, -, * or / has that vector copied m (or n) times into an (m, n) matrix first; a scalar is expanded the same way.
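A few toy lines showing the general principle:

```python
import numpy as np

M = np.array([[1, 2, 3],
              [4, 5, 6]])          # shape (2, 3)

M + 100                            # scalar is stretched to shape (2, 3)
M + np.array([[100, 200, 300]])    # (1, 3) row is copied down to each row
M + np.array([[100], [200]])       # (2, 1) column is copied across columns
```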
A Note on python/numpy vectors
Whenever you create a vector, don't use “rank 1” arrays such as np.random.randn(5): their shape is (5,), they behave neither like a row vector nor a column vector, and transposing them does nothing.
If you are not sure about the dimensions, you can use an assert statement, or fix the shape with reshape.
- Use either an explicit column vector (n, 1) or an explicit row vector (1, n), as below.
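A short sketch of the pitfall and the fix:

```python
import numpy as np

a = np.random.randn(5)    # rank-1 array: shape (5,) -- avoid this
print(a.shape)            # (5,)
print(np.dot(a, a.T))     # a.T does nothing here; the result is a scalar

a = a.reshape(5, 1)       # turn it into an explicit column vector
assert a.shape == (5, 1)  # cheap sanity check on the dimensions
print(np.dot(a, a.T))     # now a genuine (5, 5) outer product
```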
Logistic Regression Cost Function
A single equation captures both forms of the logistic regression likelihood: since p(y=1|x) = ŷ and p(y=0|x) = 1 - ŷ, we can write p(y|x) = ŷ^y (1 - ŷ)^(1-y). Taking the log and negating gives the loss L(ŷ, y) = -(y log ŷ + (1 - y) log(1 - ŷ)), and averaging over all m training examples gives the cost J(w, b) = (1/m) Σ L(ŷ^(i), y^(i)).
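A minimal vectorized implementation of that cost, assuming A holds the predictions and Y the true labels, both with shape (1, m):

```python
import numpy as np

def compute_cost(A, Y):
    """Cross-entropy cost averaged over all m examples."""
    m = Y.shape[1]
    cost = -np.sum(Y * np.log(A) + (1 - Y) * np.log(1 - A)) / m
    return cost
```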
Python Basics with Numpy
- In this particular Programming exercise we will learn
- How to use Numpy
- Implement some basic core deep learning functions such as the softmax, sigmoid, dsigmoid, etc.
- Learn how to handle data by normalizing inputs and reshaping images.
- Recognize the importance of vectorization.
- Understand how python broadcasting works.
- I have learnt about building basic functions with numpy.
- The data structures we use in numpy to represent shapes (vectors, matrices, etc.) are called numpy arrays.
- Implemented logistic sigmoid function using numpy.
- Implemented sigmoid gradient using numpy.
- Two common numpy functions used in deep learning are np.shape and np.reshape.
- Learnt how to reshape the numpy arrays.
- Implemented broadcasting and softmax function.
- Saw how vectorization plays an important role in computations on vectors and matrices in deep learning: it provides both computational efficiency and clarity.
- Implemented the basic_sigmoid, sigmoid, sigmoid_derivative, image2vector, normalize_rows, softmax, L1, and L2 functions (a few of these are sketched below).
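A rough sketch of three of those functions, as I remember implementing them; treat it as illustrative rather than the assignment's exact code:

```python
import numpy as np

def sigmoid(x):
    # works element-wise on scalars, vectors, and matrices
    return 1 / (1 + np.exp(-x))

def sigmoid_derivative(x):
    # derivative of the sigmoid: s * (1 - s)
    s = sigmoid(x)
    return s * (1 - s)

def softmax(x):
    # row-wise softmax: exponentiate, then normalize each row
    x_exp = np.exp(x)
    x_sum = np.sum(x_exp, axis=1, keepdims=True)  # (n, 1), broadcasts below
    return x_exp / x_sum
```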
Logistic Regression with a Neural Network Mindset
- In this programming assignment we work with logistic regression in a way that builds intuition relevant to neural networks.
- By doing this assignment you will get to know how to build the general architecture of a learning algorithm.
- You will learn about initializing parameters, calculating the cost function and its gradient, and using an optimization algorithm (gradient descent).
- numpy is the fundamental package for scientific computing with Python.
- h5py is a common package to interact with datasets stored in H5 files.
- matplotlib is a famous library to plot graphs in Python.
- PIL and scipy are used here to test your model with your own picture at the end.
- Many software bugs in deep learning come from having matrix/vector dimensions that don’t fit. If you can keep your matrix/vector dimensions straight you will go a long way toward eliminating many bugs.
Common steps for pre-processing a new dataset are:
- Figure out the dimensions and shapes of the problem (m_train, m_test, num_px, …)
- Reshape the datasets such that each example is now a vector of size (num_px * num_px * 3, 1)
- “Standardize” the data (for image data, dividing by 255 is enough), as sketched below
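A sketch of those pre-processing steps on made-up data; the variable names follow the assignment's convention as I remember it:

```python
import numpy as np

# Made-up image data shaped like the assignment's:
# (m examples, num_px, num_px, 3 color channels)
m, num_px = 209, 64
train_set_x_orig = np.random.randint(0, 256, (m, num_px, num_px, 3))

# Reshape so each example becomes one column of size (num_px*num_px*3, 1)
train_x_flatten = train_set_x_orig.reshape(train_set_x_orig.shape[0], -1).T
print(train_x_flatten.shape)   # (12288, 209)

# "Standardize": for images, dividing by the max pixel value is enough
train_set_x = train_x_flatten / 255.0
```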
The main steps for building a Neural Network are:
- Define the model structure (such as the number of input features)
- Initialize the model’s parameters.
- Loop: calculate the current loss (forward propagation), calculate the current gradient (backward propagation), and update the parameters (gradient descent); see the sketch after this list.
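A condensed sketch of how those three loop steps fit together in one function; the actual assignment splits this into separate propagate and optimize functions:

```python
import numpy as np

def optimize(w, b, X, Y, num_iterations, learning_rate):
    # X: (n_x, m) inputs, Y: (1, m) labels, w: (n_x, 1), b: scalar
    m = X.shape[1]
    for i in range(num_iterations):
        # forward propagation: current predictions and loss
        A = 1 / (1 + np.exp(-(np.dot(w.T, X) + b)))
        cost = -np.sum(Y * np.log(A) + (1 - Y) * np.log(1 - A)) / m
        # backward propagation: current gradients
        dw = np.dot(X, (A - Y).T) / m
        db = np.sum(A - Y) / m
        # gradient descent: update the parameters
        w = w - learning_rate * dw
        b = b - learning_rate * db
    return w, b, cost
```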
Implemented the following functions by doing this exercise.
- Initializing the values of w,b.
- Optimize the loss iteratively to learn the parameters (w,b)
- Computing the cost and its gradient.
- Updating the parameters using gradient descent.
- Used the learned (w, b) to predict the labels for a given set of examples (a minimal sketch follows this list).
- Different learning rates give different costs (and different prediction accuracies).
- In deep learning it is recommended to choose the learning rate that minimizes the overall cost. But if the model overfits, use other techniques to reduce overfitting.
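And a minimal sketch of the prediction step from the list above, using the same shape conventions as before:

```python
import numpy as np

def predict(w, b, X):
    # probabilities for all m examples at once, shape (1, m)
    A = 1 / (1 + np.exp(-(np.dot(w.T, X) + b)))
    # threshold at 0.5 to turn probabilities into hard 0/1 labels
    return (A > 0.5).astype(int)
```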