
Artificial Intelligence A to Z

Level: Beginner
Topics covered: Python programming, NumPy, Pandas, Matplotlib, Seaborn
Prerequisites: None
This course teaches everything from scratch; no coding background is required. You only need a working laptop and a good internet connection.
Date: New batch will start soon
Only 30 Students Per Batch

Price: $2499 (50% off the regular price of $4999)

Syllabus
Data Science Essentials
  • In this module, you will learn the Python programming language, one of the most popular languages for data science
  • You will use the Pandas and NumPy libraries for data manipulation and analysis
  • Learn to visualize data with Python
  • This module also covers mathematical fundamentals such as linear algebra, probability, and statistics
  • Apply what you have learned in a project

1. Introduction to data science

  • What is data science

  • What are companies looking for

  • How it helps businesses make the right decisions

  • What tools are used

  • Installing Python and Jupyter with Anaconda


2. Python Fundamentals - I

  • Importing libraries

  • Variables and data types

  • Lists

  • Dictionary


3. Python Fundamentals - II 

  • Conditional statements

  • Functions

  • Loops


4. Pandas and NumPy Fundamentals

  • Why Pandas and NumPy

  • Importing Pandas and NumPy

  • Exploring data with Pandas

  • Boolean indexing


5. Statistics and Probability

  • Descriptive statistics

  • Probability concepts

  • Random variables

  • Probability distribution functions

  • Central Limit Theorem (CLT)

6. Linear Algebra

  • Linear combinations

  • Vectors and Matrices

  • Matrix Decomposition

  • Eigenvectors and Eigenvalues

 

7. Data Cleaning and Analysis

  • Missing values

  • Duplicate data 

  • Working with strings

  • Grouping and combining


8. Data visualization in Python - I

  • Matplotlib basics

  • Line charts

  • Bar plots

  • Scatter Plots

  • Histogram and box plots


9. Data visualization in Python - II

  • Seaborn Basics

  • Line plots

  • Bar plots

  • Scatter plots

  • Histogram and box plots


10. Project

  • Problem Statement

  • Exploratory Data Analysis

  • Data Cleaning

  • Data Visualization

  • Insights and Conclusion

Machine Learning
  • This module will familiarize you with machine learning concepts such as regression and its variants
  • You will learn popular machine learning algorithms, along with the mathematical intuition required to implement them
  • Understand how to prevent model overfitting, handle missing data to improve model accuracy, and apply many other model improvement techniques
  • Apply what you have learned in a project

1. Linear Regression
Our course starts with the most basic regression model: just fitting a line to data. This simple model, which forms predictions from a single, univariate feature of the data, is appropriately called "simple linear regression".
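To give a flavor of what this looks like in code, here is a minimal sketch (not from the course materials) of simple linear regression with NumPy, using made-up house data:

import numpy as np

# Toy data: square footage vs. price in $1000s (made-up numbers)
x = np.array([850, 1200, 1500, 2100, 2600], dtype=float)
y = np.array([120, 175, 210, 290, 360], dtype=float)

# Closed-form least-squares estimates for the slope and intercept
slope = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
intercept = y.mean() - slope * x.mean()

print(f"price = {intercept:.1f} + {slope:.4f} * sqft")
print("predicted price for 1800 sqft:", intercept + slope * 1800)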


2. Multiple Regression
The next step beyond simple linear regression is "multiple regression", where multiple features of the data are used to form predictions. More specifically, in this module you will learn how to build models of more complex relationships between a single variable (e.g., 'square feet') and the observed response (like 'house sale price'), for example by including polynomial terms as extra features.
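As an illustrative sketch (the syllabus does not name a library, but scikit-learn is the standard Python choice), a multiple regression with two toy features might look like this:

import numpy as np
from sklearn.linear_model import LinearRegression

# Toy data: [square feet, bedrooms] per house; prices in $1000s
X = np.array([[850, 2], [1200, 3], [1500, 3], [2100, 4], [2600, 4]], dtype=float)
y = np.array([120, 175, 210, 290, 360], dtype=float)

model = LinearRegression().fit(X, y)
print("one coefficient per feature:", model.coef_)
print("intercept:", model.intercept_)
print("prediction for an 1800 sqft, 3-bedroom house:", model.predict([[1800, 3]]))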


3. Ridge Regression
You have examined how the performance of a model varies with increasing model complexity, and can describe the potential pitfall of complex models becoming overfit to the training data. In this module, you will explore a very simple, but extremely effective technique for automatically coping with this issue. This method is called "ridge regression".
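A hedged sketch of the idea, again using scikit-learn and synthetic data: a high-degree polynomial would badly overfit 20 noisy points, and the ridge penalty (alpha) shrinks the coefficients to tame the fit.

import numpy as np
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 20).reshape(-1, 1)
y = np.sin(2 * np.pi * x).ravel() + rng.normal(0, 0.2, 20)

# Degree-15 polynomial features with an L2 penalty on the coefficients
model = make_pipeline(PolynomialFeatures(degree=15), Ridge(alpha=1e-3))
model.fit(x, y)
print("training R^2:", model.score(x, y))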


4. Lasso
A fundamental machine learning task is to select amongst a set of features to include in a model. In this module, you will explore this idea in the context of multiple regression, and describe how such feature selection is important for both interpretability and efficiency of forming predictions.
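For illustration (synthetic data, scikit-learn assumed), the lasso's L1 penalty drives most coefficients exactly to zero, performing feature selection automatically:

import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))                            # 10 candidate features
y = 3 * X[:, 0] - 2 * X[:, 3] + rng.normal(0, 0.1, 100)   # only 2 actually matter

model = Lasso(alpha=0.1).fit(X, y)
print("coefficients:", np.round(model.coef_, 2))  # most come out exactly 0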


5. Nearest Neighbors & Kernel Regression
Up to this point, we have focused on methods that fit parametric functions (like polynomials and hyperplanes) to the entire dataset. In this module, we instead turn our attention to a class of "nonparametric" methods. These methods allow the complexity of the model to increase as more data are observed, and result in fits that adapt locally to the observations.
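As a rough sketch of the nonparametric idea (toy data, Gaussian kernel assumed), Nadaraya-Watson kernel regression predicts at a query point with a locally weighted average of the training targets:

import numpy as np

def kernel_regression(x_train, y_train, x_query, bandwidth=0.5):
    # Gaussian kernel weights: nearby training points count more
    weights = np.exp(-0.5 * ((x_train - x_query) / bandwidth) ** 2)
    return np.sum(weights * y_train) / np.sum(weights)

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.0, 2.2, 2.9, 4.1, 5.0])
print(kernel_regression(x, y, x_query=2.5))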


6. Linear Classifiers & Logistic Regression
Linear classifiers are amongst the most practical classification methods. For example, in our sentiment analysis case-study, a linear classifier associates a coefficient with the counts of each word in the sentence. In this module, you will become proficient in this type of representation. You will focus on a particularly useful type of linear classifier called logistic regression, which, in addition to allowing you to predict a class, provides a probability associated with the prediction. 
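A minimal sketch of the sentiment idea (tiny made-up reviews; scikit-learn assumed, since the syllabus does not name a library): word counts become features, and logistic regression returns a probability alongside the predicted class.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

reviews = ["great food and great service", "awful food, terrible service",
           "great experience overall", "terrible, never again"]
labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(reviews)   # one count feature per word

clf = LogisticRegression().fit(X, labels)
test = vectorizer.transform(["great service"])
print("P(negative), P(positive):", clf.predict_proba(test)[0])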


7. Learning Linear Classifiers
Once familiar with linear classifiers and logistic regression, you can now dive in and write your first learning algorithm for classification. In particular, you will use gradient ascent to learn the coefficients of your classifier from data. You will first need to define the quality metric for these tasks, using an approach called maximum likelihood estimation (MLE).
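Here is a compact sketch (toy data; not the course's own code) of learning logistic regression coefficients by gradient ascent on the log-likelihood:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logistic(X, y, step=0.1, iters=1000):
    # Maximize the log-likelihood; its gradient is X^T (y - p)
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        p = sigmoid(X @ w)           # predicted P(y = 1) per example
        w += step * X.T @ (y - p)    # gradient ascent update
    return w

# Toy 1-D data with a bias column prepended
X = np.array([[1, 0.5], [1, 1.5], [1, 2.5], [1, 3.5]], dtype=float)
y = np.array([0, 0, 1, 1], dtype=float)
w = fit_logistic(X, y)
print("coefficients:", w)
print("P(y=1 | x=2.0):", sigmoid(np.array([1, 2.0]) @ w))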


8. Overfitting & Regularization in Logistic Regression

As we saw in the regression modules, overfitting is perhaps the most significant challenge you will face as you apply machine learning approaches in practice. This challenge can be particularly significant for logistic regression, as you will discover in this module, since we not only risk an overly complex decision boundary, but the classifier can also become overly confident about the probabilities it predicts. In this module, you will investigate overfitting in classification in significant detail, and obtain broad practical insights from some interesting visualizations of the classifiers' outputs.

9. Decision Trees
Along with linear classifiers, decision trees are amongst the most widely used classification techniques in the real world. This method is extremely intuitive, simple to implement, and provides interpretable predictions. In this module, you will become familiar with the core decision tree representation. You will then design a simple, recursive greedy algorithm to learn decision trees from data.
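To see the interpretability claim concretely, here is a small sketch (scikit-learn assumed, with its built-in iris data) that prints a learned tree as if/else rules:

from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
tree = DecisionTreeClassifier(max_depth=2).fit(iris.data, iris.target)

# The fitted tree reads directly as nested if/else rules
print(export_text(tree, feature_names=iris.feature_names))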


10. Preventing Overfitting in Decision Trees
Out of all machine learning techniques, decision trees are amongst the most prone to overfitting. No practical implementation is possible without approaches that mitigate this challenge. In this module, through various visualizations and explorations, you will investigate why decision trees suffer from significant overfitting problems.
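A quick hedged experiment (scikit-learn's built-in breast-cancer data) showing the symptom, and one simple mitigation, capping tree depth:

from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for depth in (None, 3):  # unlimited depth vs. a depth cap
    tree = DecisionTreeClassifier(max_depth=depth, random_state=0).fit(X_tr, y_tr)
    print(f"max_depth={depth}: train accuracy {tree.score(X_tr, y_tr):.3f}, "
          f"test accuracy {tree.score(X_te, y_te):.3f}")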


11. Handling Missing Data
Real-world machine learning problems are fraught with missing data. That is, very often some of the inputs are not observed for all data points. This challenge is significant, arises in most applications, and needs to be addressed carefully to obtain good performance, yet it is rarely discussed in machine learning courses.
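For a flavor of the mechanics, a minimal Pandas sketch (made-up table) of spotting, imputing, and dropping missing values:

import numpy as np
import pandas as pd

df = pd.DataFrame({"sqft": [850, 1200, np.nan, 2100],
                   "bedrooms": [2, np.nan, 3, 4]})

print(df.isna().sum())                                           # missing count per column
df["bedrooms"] = df["bedrooms"].fillna(df["bedrooms"].median())  # impute a typical value
df = df.dropna(subset=["sqft"])                                  # drop rows we cannot repair
print(df)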

12. Nearest Neighbor Search
We start by considering a retrieval task: fetching a document similar to one someone is currently reading. We cast this problem as one of nearest neighbor search, a concept we have seen in earlier modules. Here, however, you will take a deep dive into two critical components of these algorithms: the data representation and the metric for measuring similarity between pairs of data points.
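A compact sketch of both components (TF-IDF as the representation, cosine similarity as the metric; scikit-learn assumed, documents made up):

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = ["the economy grew last quarter",
        "new species discovered in the rainforest",
        "stocks rallied as the economy improved"]

tfidf = TfidfVectorizer().fit_transform(docs)        # data representation
sims = cosine_similarity(tfidf[0], tfidf).ravel()    # similarity to doc 0
sims[0] = -1                                         # exclude the query itself
print("nearest neighbor of doc 0:", docs[sims.argmax()])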


13. Clustering with k-means
In clustering, our goal is to group the data points in our dataset into disjoint sets. Motivated by our document analysis case study, you will use clustering to discover thematic groups of articles by "topic". These topics are not provided in this unsupervised learning task; rather, the idea is to output cluster labels that can post facto be associated with known topics like "Science", "World News", etc.
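An illustrative sketch (four made-up headlines, scikit-learn assumed): k-means only outputs numeric cluster labels, and naming them is the post-facto step described above.

from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

docs = ["nasa launches new space telescope",
        "rover sends first images from mars",
        "election results announced today",
        "parliament debates new election law"]

X = TfidfVectorizer().fit_transform(docs)
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print("cluster labels:", km.labels_)  # e.g., [0 0 1 1]; topic names come later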


14. Case Study I

Regression: Predicting House Prices
In this case study, you will build your first intelligent application that makes predictions from data. We will explore this idea in the context of predicting house prices, where you will create models that predict a continuous value (price) from input features (square footage, number of bedrooms and bathrooms, ...).


15. Case Study II
Classification: Analyzing Sentiment
How do you guess whether a person felt positively or negatively about an experience, just from a short review they wrote? In our second case study, analyzing sentiment, you will create models that predict a class (positive/negative sentiment) from input features (text of the reviews, user profile information, ...). This task is an example of classification, one of the most widely used areas of machine learning, with a broad array of applications including ad targeting, spam detection, medical diagnosis, and image classification.

Deep Learning
  • This module covers neural networks, convolutional networks, and other state-of-the-art deep learning techniques like RNNs and LSTMs
  • Model tuning, normalization techniques, and other best practices to improve model performance
  • Sequence modelling techniques, which are extensively used in speech recognition, chatbots, and numerous other applications

1. Introduction to Deep Learning

Be able to explain the major trends driving the rise of deep learning, and understand where and how it is applied today.


2. Neural Networks Basics
Learn to set up a machine learning problem with a neural network mindset.
Learn to use vectorization to speed up your models.
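To preview why vectorization matters, here is a small timing sketch (the exact numbers will vary by machine):

import time
import numpy as np

a = np.random.rand(1_000_000)
b = np.random.rand(1_000_000)

t0 = time.time()
dot_loop = sum(a[i] * b[i] for i in range(len(a)))  # explicit Python loop
t1 = time.time()
dot_vec = np.dot(a, b)                              # vectorized version
t2 = time.time()

print(f"loop: {t1 - t0:.3f}s   vectorized: {t2 - t1:.5f}s")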
 


3. Shallow Neural Networks

Learn to build a neural network with one hidden layer, using forward propagation and backpropagation.
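A from-scratch sketch of those two steps on toy data (NumPy only; the architecture and hyperparameters are arbitrary choices for illustration):

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(float).reshape(-1, 1)  # toy labels

# One hidden layer of 8 tanh units, sigmoid output
W1, b1 = rng.normal(0, 0.5, (2, 8)), np.zeros(8)
W2, b2 = rng.normal(0, 0.5, (8, 1)), np.zeros(1)

for _ in range(2000):
    # Forward propagation
    H = np.tanh(X @ W1 + b1)
    p = 1 / (1 + np.exp(-(H @ W2 + b2)))
    # Backpropagation of the cross-entropy loss
    dZ2 = (p - y) / len(X)
    dW2, db2 = H.T @ dZ2, dZ2.sum(axis=0)
    dZ1 = (dZ2 @ W2.T) * (1 - H ** 2)     # tanh'(z) = 1 - tanh(z)^2
    dW1, db1 = X.T @ dZ1, dZ1.sum(axis=0)
    # Gradient-descent parameter update
    for param, grad in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        param -= 1.0 * grad

print("training accuracy:", ((p > 0.5) == y).mean())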


4. Deep Neural Networks

Understand the key computations underlying deep learning, use them to build and train deep neural networks, and apply them to computer vision.


5. Optimization Algorithms
We will look into the evolution of optimization algorithms, up to state-of-the-art algorithms like Adagrad that are used today.
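For a concrete taste, a hedged sketch of the Adagrad update rule on a one-parameter problem:

import numpy as np

# Adagrad: accumulate squared gradients and scale each step by
# 1/sqrt(accumulated sum), so step sizes shrink over time.
w, cache, lr, eps = np.array([0.0]), np.array([0.0]), 0.5, 1e-8
for _ in range(500):
    grad = 2 * (w - 3)                    # gradient of f(w) = (w - 3)^2
    cache += grad ** 2
    w -= lr * grad / (np.sqrt(cache) + eps)
print("w after 500 steps:", w)            # approaches the minimizer 3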

6. Hyperparameter tuning, Batch Normalization and Programming Frameworks

Hyperparameter tuning is at the heart of any deep learning or machine learning project, so we will look into techniques that help us do it well.
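As one illustrative technique (a scikit-learn grid search over a small neural network; the course may use other tools):

from sklearn.datasets import load_digits
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
grid = GridSearchCV(
    MLPClassifier(max_iter=500, random_state=0),
    param_grid={"hidden_layer_sizes": [(32,), (64,)],
                "learning_rate_init": [1e-2, 1e-3]},
    cv=3,
)
grid.fit(X, y)  # tries every combination with cross-validation
print("best hyperparameters:", grid.best_params_)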


7. Convolutional Neural Network

This module will teach you how to build convolutional neural networks and apply them to image data. Thanks to deep learning, computer vision now works far better than it did just a few years ago, enabling numerous exciting applications ranging from safe autonomous driving, to accurate face recognition, to automatic reading of radiology images.
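The syllabus does not name a framework; assuming TensorFlow/Keras for illustration, a minimal convolutional network for 28x28 grayscale images might look like this:

import tensorflow as tf

# Convolution -> pooling -> dense classifier for 10 classes
model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28, 1)),
    tf.keras.layers.Conv2D(16, kernel_size=3, activation="relu"),
    tf.keras.layers.MaxPooling2D(pool_size=2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()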

8. Sequence Modelling

This module will teach you how to build models for natural language, audio, and other sequence data. Thanks to deep learning, sequence algorithms now work far better than they did just a few years ago, enabling numerous exciting applications in speech recognition, music synthesis, chatbots, machine translation, natural language understanding, and many others. We will look into Recurrent Neural Networks (RNNs) and commonly used variants such as GRUs and LSTMs.
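Again assuming TensorFlow/Keras for illustration, a minimal LSTM for binary sequence classification (e.g., sentiment over token ids):

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(None,), dtype="int32"),       # variable-length token ids
    tf.keras.layers.Embedding(input_dim=10_000, output_dim=32),
    tf.keras.layers.LSTM(64),                           # reads the whole sequence
    tf.keras.layers.Dense(1, activation="sigmoid"),     # P(positive)
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.summary()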
