Advanced Deep Learning & Artificial Intelligence Program

The Advanced Deep Learning and Artificial Intelligence Program is a three-part program. The first part covers the foundational quantitative and programming skills for AI. The second part begins with the Deep Learning Course designed and co-delivered by Professor Bhiksha Raj of Carnegie Mellon University, with exactly the same content as the official CMU course. It is a unique opportunity for students and working professionals to attend the same program in person in India, with the same content, assignments, and TAs selected and trained by Professor Bhiksha Raj. In the final part of the program, we teach three branches of AI (NLP, Vision, and Speech) and cover their applications in detail: you will learn how to build conversational systems, perform image analysis on pathology databases, build question-answering systems, and more.

Program Duration: 6 months

The Outcome

After successfully completing the program, you will be comfortable designing, implementing, and communicating the results of an applied AI project, with knowledge of advanced deep learning and its applications in speech recognition, computer vision, and image classification. You will develop end-to-end models in these areas and submit them for evaluation, as they form a key part of your grade.

The Details

The Certificate of Excellence in Data Science and Machine Learning runs for 24 weeks and is subdivided into multiple courses.

  • Includes 500 hours of in-class instruction and hands-on sessions: 360 hours of in-person classes and 140 hours of webcast classes with TAs
  • Four 4-day and four 2-day in-person immersive sessions will be held during the program
  • In-person classes will be held one day every weekend
  • Webcast classes will be held for 4 hours on weekdays

Applied Deep Learning & Artificial Intelligence Program Curriculum

Application Projects

Phase I

  • Computer Vision (auto-colorization of b/w images)
  • Video Summarization
  • Attention Models and End-to-End Language Modelling
  • Speech Recognition

Phase II

  • Medical Image Analysis
  • Spam Detector
  • Tracking suspicious movement for Security
  • Q & A Systems

Phase III

  • Conversational Systems
  • Stock Broking with Deep Q Learning
  • Recommender Systems with Deep Learning
  • Detecting Command and Control (C&C) centres using ML and DL models

Advanced Deep Learning & Artificial Intelligence Program Curriculum

Course II: Natural Language Processing and Text Mining

Application Projects

Natural Language Processing Projects

  • Spam Detector
  • Dialogue Systems
  • News Recommendation
  • Q & A Systems
  • Sentiment Analysis
  • Machine translation
  • Text Summarization

Speech Recognition Projects

  • Building end-to-end Speech Models
  • Classical Automatic Speech Recognition System
  • Modern Automatic Speech Recognition System
  • Conversational Systems
  • Building ASR on mobile and IoT devices

Computer Vision Projects

  • Face Recognition, Age, Gender Detection
  • Emotion Recognition
  • IRIS Recognition, Fingerprint Recognition
  • Large Scale Object Detection & Classification
  • Scene Understanding
  • Autonomous Vehicle Vision System
  • Medical Image Analysis — Pathology
  • Tracking Suspicious movement for security

Professor Bhiksha Raj, Fellow IEEE

Language Technologies Institute, School of Computer Science, Carnegie Mellon University

Professor Bhiksha Raj is an expert in deep learning and speech recognition with two decades of experience. He was named to the 2017 class of IEEE Fellows for his "contributions to speech recognition," according to IEEE. He is the main instructor of 11-785, Carnegie Mellon University's official deep learning course, which is followed by thousands of researchers worldwide.

Dr. Sarabjot Singh Anand

Co-Founder and Chief Data Scientist at Tatras Data

Dr. Sarabjot Singh Anand is a Data Geek. He has been involved in the field of data mining since the early 1990s and has derived immense pleasure in developing algorithms, applying them to real-world problems and training a host of data analysts in the capacity of being an academic and data analytics consultant.

Dr. Vikas Agrawal

Senior Principal Data Scientist @ Oracle Analytics Cloud

Vikas Agrawal works as a Senior Principal Data Scientist in Cognitive Computing for Oracle Analytics Cloud. His current interests are in automated discovery, adaptive anomaly detection in streaming data, intelligent context-aware systems, and explaining black-box model predictions.

Mr. Mukesh Jain

Analytics, AI, ML & DL Leader (ex-Microsoft, ex-Jio)

Mukesh Jain has been a practitioner of Analytics, AI, ML & DL since 1995.

He is a technologist, techno-biz leader, data scientist, author, coach, and teacher.

Professor Joao Gama

University of Porto, Director LIAAD

Joao Gama is an Associate Professor at the Faculty of Economy, University of Porto. He is a researcher and the Director of LIAAD, a group belonging to INESC TEC. He received his PhD from the University of Porto in 2000. He has worked on projects and authored papers in areas related to machine learning, data streams, and adaptive learning systems, and is a member of the editorial boards of international journals in his areas of expertise.

Professor Ashish Ghosh

Indian Statistical Institute, Kolkata

Ashish Ghosh is a Professor in the Machine Intelligence Unit and In-charge of the Center for Soft Computing Research at the Indian Statistical Institute, Calcutta.

Professor Jaime Carbonell

Director LTI, Carnegie Mellon University, USA

Jaime is Director and Founder of the Language Technologies Institute and Allen Newell Professor of Computer Science at Carnegie Mellon University. He is a world-renowned expert in the areas of information retrieval, data mining and machine translation. Jaime co-founded and took public Carnegie Group, a company in the IT services market employing advanced artificial-intelligence techniques.

Dr. Derick Jose

Co-founder, Flutura Decision Sciences & Analytics

Derick is the co-founder of Flutura Decision Sciences, a niche AI & IIoT company focused on impacting outcomes for the engineering and energy industries. Flutura has been rated by Bloomberg as one of the fastest-growing machine intelligence companies, and its AI platform Cerebra has been certified to work with Halliburton's and Hitachi's platforms.

Mr. Joy Mustafi, Director and Principal Researcher at Salesforce

Visiting Scientist, Innosential

Winner of the Zinnov Award 2017 for Technical Role Model in Emerging Technologies (Senior Level). He has collaborated with the ecosystem by visiting around twenty-five leading universities in India as visiting faculty, guest speaker, advisor, mentor, project supervisor, panelist, academic board member, curricula moderator, paper setter and evaluator, and judge of events such as hackathons. He holds more than twenty-five patents and fifteen publications on artificial intelligence from recent years.

Dr. Vijay Gabale

Co-founder and CTO Infilect

Deep-learning-enabled computer vision forms the core competence of Infilect's products. Prior to co-founding Infilect, Vijay was a research scientist with IBM Research. He obtained his PhD in Computer Science from IIT Bombay in 2012. He has worked extensively on intelligent networks and systems, applying machine learning and deep learning techniques, has published research papers in top-tier conferences such as SIGCOMM and KDD, and has several patents to his name.

Mr. Dipanjan Sarkar

Intel AI

Dipanjan (DJ) holds a master of technology degree with specializations in Data Science and Software Engineering. He is also an avid supporter of self-learning and massive open online courses. He plans to venture soon into the world of open-source products to improve the productivity of developers across the world.

Mr. Ajit Jaokar

Director of the Data Science Program, University of Oxford

Ajit Jaokar's work is based on identifying and researching cross-domain technology trends in Telecoms, Mobile and the Internet.

Ajit conducts a course at Oxford University on Big Data and Telecoms and also teaches at City Sciences (Technical University of Madrid) on Big Data Algorithms for Future Cities and the Internet of Things.

Dr. Pratibha Moogi

ex-Samsung R&D

Dr. Pratibha Moogi holds a PhD from OGI, School of Engineering, OHSU, Portland, and a Masters from IIT Kanpur. She has worked at SRI International and many R&D groups, including Texas Instruments, Nokia, and Samsung. Currently she serves as a Director in the Data Science Group (DSG) of a leading B2B customer operations & journey analytics company, [24]

Applied Deep Learning & Artificial Intelligence Program

Preparatory Course: Foundations of Learning AI

1. Mathematical Foundations of Data Science:

A. Linear Algebra:

  1. Vectors, Matrices
  2. Tensors
  3. Matrix Operations
  4. Projections
  5. Eigenvalue decomposition of a matrix
  6. LU Decomposition
  7. QR Decomposition/Factorization
  8. Symmetric Matrices
  9. Orthogonalization & Orthonormalization
  10. Real and Complex Analysis (Sets and Sequences, Topology, Metric Spaces, Single-Valued and Continuous Functions, Limits, Cauchy Kernel, Fourier Transforms)
  11. Information Theory (Entropy, Information Gain)
  12. Function Spaces and Manifolds
  13. Relational Algebra and SQL
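
Several of the decompositions above can be tried directly in NumPy. A minimal sketch with an illustrative symmetric matrix (not part of the official coursework):

```python
import numpy as np

# A small symmetric matrix: its eigenvalues are real and its
# eigenvectors are orthogonal.
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])

# Eigenvalue decomposition: for symmetric A, A = V diag(w) V^T.
w, V = np.linalg.eigh(A)
A_rebuilt = V @ np.diag(w) @ V.T

# QR decomposition: A = Q R with Q orthonormal, R upper-triangular.
Q, R = np.linalg.qr(A)
```

Checking that `A_rebuilt` equals `A` and that `Q.T @ Q` is the identity is a good way to verify you understand what each factorization promises.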

B. Multivariate Calculus

  1. Differential and Integral Calculus
  2. Partial Derivatives
  3. Vector-Valued Functions
  4. Directional Gradient
  5. Hessian
  6. Jacobian
  7. Laplacian and Lagrangian
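
Partial derivatives and gradients can be checked numerically, which is also how gradient implementations are debugged in practice. A sketch using an illustrative function f(x, y) = x²y:

```python
import numpy as np

def f(v):
    x, y = v
    return x**2 * y            # f(x, y) = x^2 * y

def grad_numeric(f, v, h=1e-5):
    """Central-difference approximation of the gradient of f at v."""
    v = np.asarray(v, dtype=float)
    g = np.zeros_like(v)
    for i in range(len(v)):
        e = np.zeros_like(v)
        e[i] = h
        g[i] = (f(v + e) - f(v - e)) / (2 * h)
    return g

v0 = np.array([2.0, 3.0])
g = grad_numeric(f, v0)
# Analytic gradient is (2xy, x^2) = (12, 4) at (2, 3).
```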

2: Probability for Data Scientists

  1. Probability Theory and Statistics
  2. Combinatorics
  3. Random Variables
  4. Probability Rules & Axioms
  5. Bayes' Theorem
  6. Variance and Expectation
  7. Conditional and Joint Distributions
  8. Standard Distributions (Bernoulli, Binomial, Multinomial, Uniform and Gaussian)
  9. Moment Generating Functions
  10. Maximum Likelihood Estimation (MLE)
  11. Prior and Posterior
  12. Maximum a Posteriori Estimation (MAP) and Sampling Methods
  13. Descriptive Statistics
  14. Hypothesis Testing
  15. Goodness of Fit
  16. Analysis of Variance
  17. Correlation
  18. Chi-squared test
  19. Design of Experiments
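
Two of the topics above, Bayes' theorem and maximum likelihood estimation, fit in a few lines of Python. The numbers below are illustrative only:

```python
# Bayes' theorem on a classic diagnostic-test example.
p_d = 0.01          # prior P(disease)
p_pos_d = 0.95      # sensitivity, P(+ | disease)
p_pos_nd = 0.05     # false-positive rate, P(+ | no disease)

# Total probability of a positive test, then the posterior.
p_pos = p_pos_d * p_d + p_pos_nd * (1 - p_d)
p_d_pos = p_pos_d * p_d / p_pos   # P(disease | +), about 0.16

# The MLE of a Bernoulli parameter is simply the sample mean.
samples = [1, 0, 1, 1, 0, 1, 1, 0, 1, 1]
theta_mle = sum(samples) / len(samples)   # 0.7
```

Note how a 95%-accurate test still gives only a ~16% posterior when the prior is 1%: exactly the kind of reasoning Bayes' theorem makes precise.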

3: Algorithms and Data Structures:

A. Graph Theory: Basic Concepts and Algorithms
B. Algorithmic Complexity

  1. Algorithm Analysis
  2. Greedy Algorithms
  3. Divide and Conquer and Dynamic Programming

C. Data Structures

  1. Arrays, Lists, Hashing, Binary Trees, Heaps, Stacks, etc.
  2. Dynamic Programming
  3. Randomized & Sublinear Algorithm
  4. Graphs
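
As a taste of dynamic programming, the same recurrence can be solved top-down with memoization or bottom-up with constant extra space (illustrative example, not an assignment):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    """Top-down dynamic programming (memoization): O(n), not O(2^n)."""
    return n if n < 2 else fib(n - 1) + fib(n - 2)

def fib_bottom_up(n):
    """Bottom-up dynamic programming with O(1) extra space."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a
```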

4: R and Python

A. R Programming Language

  1. Vectors
  2. Matrices
  3. Lists
  4. Data frame
  5. Basic Syntax
  6. Basic Statistics
  7. Data Manipulation (dplyr)
  8. Visualization (ggplot2)
  9. Connecting to databases (RJDBC)

B. Python Programming Language

  1. Python language fundamentals
  2. Data Structures
  3. Beautiful Soup
  4. Regular Expressions
  5. JSON
  6. Restful Web Services (Flask)
  7. NumPy
  8. Plots in matplotlib, seaborn
  9. Pandas
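
Several of these Python topics combine naturally in one small pipeline. A sketch with made-up data, assuming pandas is installed:

```python
import json
import re
import pandas as pd

# Parse a JSON payload, clean a field with a regular expression,
# and aggregate with pandas.
raw = ('[{"city": "Bengaluru ", "temp": 24},'
       ' {"city": " Delhi", "temp": 31},'
       ' {"city": "Bengaluru", "temp": 26}]')
records = json.loads(raw)
for r in records:
    r["city"] = re.sub(r"\s+", "", r["city"])   # strip stray whitespace

df = pd.DataFrame(records)
mean_temp = df.groupby("city")["temp"].mean()
# Bengaluru -> 25.0, Delhi -> 31.0
```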

Course I: Introduction to AI & Nature of Intelligence

In this course, Danko Nikolic, brain scientist and AI inventor, explains the fundamentals of intelligence for everyone interested in creating ambitious AI solutions. You will learn the differences between machine intelligence and human intelligence, and understand why and when AI fails. AI does not have a narrowly limited working memory (a.k.a. short-term memory), but we humans do. How does our working memory make us more intelligent than machines? Why do we understand the world when machines don't? You will also learn fundamental theorems for machine learning and see how they apply to machine intelligence and human intelligence. Having learned that, you will be able to judge whether an ML project is too ambitious or likely to succeed, and to identify the fundamental problems that plagued some of the ambitious AI projects of the past. You will understand why it is nearly impossible for machines to reach human levels of intelligence, why some tricks in machine learning work sometimes and not others, and why it is so difficult to build self-driving cars.

The course offers fundamentals that you cannot find in any other course or book. These fundamentals will be invaluable for your future work in ML and AI.

  1. This course explores the nature of intelligence, ranging from machines to the biological brain. Information provided in the course is useful when undergoing ambitious projects in machine learning and AI. It will help you avoid pitfalls in those projects.
  2. What are the differences between the real brain and machine intelligence and how can you use this knowledge to prevent failures in your work? What are the limits of today's AI technology? How to assess early in your AI project whether it has chances of success?
  3. What are the most fundamental mathematical theorems in machine learning, and how are they relevant to your everyday work?

Course II: Introduction to Deep Learning


  1. Introduction to deep learning
  2. Course logistics
  3. History and cognitive basis of neural computation
  4. The perceptron / multi-layer perceptron


  1. The neural net as a universal approximator


  1. Training a neural network
  2. Perceptron learning rule
  3. Empirical Risk Minimization
  4. Optimization by gradient descent
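
The training loop covered in this unit, empirical risk minimization by gradient descent, can be sketched in miniature on a one-parameter least-squares problem (illustrative data, not a course assignment):

```python
import numpy as np

# Minimize L(w) = mean((x*w - y)^2) by gradient descent.
rng = np.random.default_rng(0)
x = rng.normal(size=100)
y = 3.0 * x                    # true weight is 3.0

w = 0.0
lr = 0.1
for _ in range(200):
    grad = np.mean(2 * (x * w - y) * x)   # dL/dw over the batch
    w -= lr * grad                        # gradient descent step
# w converges to the true weight, 3.0
```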


  1. Single hidden layered networks and universal approximation
  2. Motivation for more than 1 hidden layer
  3. Feature engineering versus co-learning features and estimation


  1. Backpropagation
  2. Calculus of backpropagation


  1. Convergence in neural networks
  2. Rates of convergence
  3. Loss surfaces, Learning rates, and optimization methods
  4. RMSProp, Adagrad, Momentum


  1. Stochastic gradient descent
  2. Acceleration
  3. Overfitting and regularization
  4. Tricks of the trade:
  5. Choosing a divergence (loss) function
  6. Batch normalization
  7. Dropout
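
Of the regularization tricks above, dropout is the easiest to sketch. A minimal "inverted dropout" in NumPy (illustrative only; deep learning frameworks provide this built in):

```python
import numpy as np

def dropout(h, p_drop, rng, train=True):
    """Inverted dropout: zero each unit with probability p_drop and
    rescale the survivors so the expected activation is unchanged.
    At test time it is the identity."""
    if not train:
        return h
    mask = (rng.random(h.shape) >= p_drop) / (1.0 - p_drop)
    return h * mask

rng = np.random.default_rng(0)
h = np.ones((4, 5))
out = dropout(h, p_drop=0.5, rng=rng)
# surviving activations are scaled from 1.0 to 2.0, the rest are 0.0
```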


  1. Recap of Training Q&A session for students


  1. Optimization continued

Course III: Core Deep Learning


  1. Models of vision
  2. Neocognitron
  3. Mathematical details of CNNs, Alexnet, Inception, VGG


  1. Architecture
  2. Convolution, Pooling, Normalization
  3. Training strategies
  4. Visualizing and understanding convolutional networks
  5. Some more well-known convolutional networks (LeNet/ZFNet/GoogLeNet)

UNIT 10:

  1. Recurrent Neural Networks (RNNs)
  2. Modeling series
  3. Backpropagation through time
  4. Bidirectional RNNs

UNIT 11:

  1. Stability
  2. Exploding/vanishing gradients
  3. Long Short-Term Memory Units (LSTMs) and variants
  4. Resnets

UNIT 12:

  1. Loss functions for recurrent networks
  2. Sequence Prediction

UNIT 13:

  1. Sequence To Sequence Methods
  2. Connectionist Temporal Classification (CTC)

UNIT 14:

  1. Sequence-to-sequence models
  2. Attention models, examples from speech and language

UNIT 15:

  1. What do networks represent
  2. Autoencoders and dimensionality reduction
  3. Learning representations

UNIT 16:

  1. Variational Autoencoders (VAEs)

UNIT 17:

  1. Generative Adversarial Networks (GANs) Part 1

UNIT 18:

  1. Generative Adversarial Networks (GANs) Part 2

Course IV: Advanced Deep Learning Models

UNIT 19:

  1. Regularization

UNIT 20:

  1. Transfer Learning

UNIT 21:

  1. Hopfield Networks
  2. Boltzmann Machines

UNIT 22:

  1. Training Hopfield Networks
  2. Stochastic Hopfield Networks

UNIT 23:

  1. Restricted Boltzmann Machines
  2. Deep Boltzmann Machines

UNIT 24:

  1. Reinforcement Learning 1

UNIT 25:

  1. Reinforcement Learning 2

UNIT 26:

  1. Reinforcement Learning 3

UNIT 27:

  1. Reinforcement Learning 4

UNIT 28:

  1. Q-Learning and Deep Q-Learning
  1. Case Study: Computer Vision (auto-colorization of b/w images; attention)
  2. Case Study: Natural Language Processing (caption generation, Word2Vec)

Course V: Artificial Intelligence & its Applications

  1. Image Processing:
    • Projects: Medical image analysis -- pathology, tracking suspicious movement for security.
  2. Natural Language Processing:
    • Projects: Spam detector, Q&A/Conversational systems.
  3. Speech:
    • Project: Conversational systems.
  4. Deep Learning in Cyber Security

Advanced Deep Learning & Artificial Intelligence Program

Preparatory Course I: Foundations of Signal, Speech and Image Processing

1. Signal Processing

  • Fourier transform, Short-time Fourier Transform
  • Filtering
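
The Fourier transform ideas above are easy to explore with NumPy's FFT routines. A sketch with an illustrative signal:

```python
import numpy as np

# One second of a 5 Hz sine wave sampled at 100 Hz.
fs = 100
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 5 * t)

# Magnitude spectrum via the real FFT; the peak lands in the 5 Hz bin.
spectrum = np.abs(np.fft.rfft(x))
peak_hz = np.argmax(spectrum) * fs / len(x)   # frequency of strongest bin
```

The short-time Fourier transform applies the same idea to overlapping windows of the signal, which is how spectrograms for speech are produced.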

2. Speech Processing

  • Mel-frequency cepstral coefficients
  • Perceptual linear prediction
  • Probabilistic linear discriminant analysis

3. Image processing

  • Edge detection and linking
  • Texture
  • Morphological features
  • Scale-invariant feature transform
  • Histogram of Gaussians
  • Color spaces

Preparatory Course II: Statistics of Natural Language Processing

  1. Statistical Language Modeling
  2. Computational Linguistics
  3. Statistical Decision Making and the Source-Channel Paradigm
  4. Sparseness; Smoothing
  5. Measuring Success: Information Theory, Entropy and Perplexity
  6. Maximum Entropy Models, Whole-Sentence Models, Semantic Modeling
  7. EM for sound separation
  8. Probabilistic Context Free Grammars (PCFG), the Inside-Outside Algorithm
  9. Syntactic Language Models
  10. Decision Tree Language Models

Course I: Neural Network for Natural Language Processing

1. Introduction

  • Introduction to Neural Networks
  • Example Tasks and Their Difficulties
  • What Neural Nets Can Do To Help

2. Predicting the Next Word in a Sentence

  • Computational Graphs
  • Feed-forward Neural Network Language Models
  • Measuring Model Performance: Likelihood and Perplexity
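
Perplexity, the standard language-model metric named above, is just the exponential of the average negative log-likelihood. A sketch with illustrative probabilities:

```python
import math

# Model probabilities assigned to each observed word in a test sequence.
probs = [0.25, 0.1, 0.5, 0.2]
nll = -sum(math.log(p) for p in probs) / len(probs)
perplexity = math.exp(nll)   # lower is better

# Sanity check: a uniform model over a 1000-word vocabulary
# has perplexity exactly 1000.
uniform_ppl = math.exp(-math.log(1 / 1000))
```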

3. Distributional Semantics and Word Vectors

  • Describing a word by the company that it keeps
  • Counting and predicting
  • Skip-grams and CBOW
  • Evaluating/Visualizing Word Vectors
  • Advanced Methods for Word Vectors
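
"Describing a word by the company it keeps" can be made concrete with toy co-occurrence counts and cosine similarity (illustrative numbers; real word vectors are learned from large corpora):

```python
import numpy as np

# Co-occurrence counts over context words [drink, eat, road, wheel].
# Words used in similar contexts get similar rows.
vocab = ["coffee", "tea", "car"]
cooc = np.array([[10, 2, 0, 0],    # coffee
                 [ 9, 3, 0, 0],    # tea
                 [ 0, 0, 8, 7]])   # car

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

sim_coffee_tea = cosine(cooc[0], cooc[1])   # high: shared contexts
sim_coffee_car = cosine(cooc[0], cooc[2])   # zero: no shared contexts
```

Skip-gram and CBOW replace the raw counts with learned dense vectors, but the evaluation idea (nearby vectors mean similar words) is the same.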

4. Why is word2vec So Fast?: Speed Tricks for Neural Nets

  • Softmax Approximations: Negative Sampling, Hierarchical Softmax
  • Parallel Training
  • Tips for Training on GPUs

5. Convolutional Networks for Text

  • Bag of Words, Bag of n-grams, and Convolution
  • Applications of Convolution: Context Windows and Sentence Modeling
  • Stacked and Dilated Convolutions
  • Structured Convolution
  • Convolutional Models of Sentence Pairs
  • Visualization for CNNs

6. Recurrent Networks for Sentence or Language Modeling

  • Recurrent Networks
  • Vanishing Gradient and LSTMs
  • Strengths and Weaknesses of Recurrence in Sentence Modeling
  • Pre-training for RNNs

7. Using/Evaluating Sentence Representations

  • Sentence Similarity
  • Textual Entailment
  • Paraphrase Identification
  • Retrieval

8. Conditioned Generation

  • Encoder-Decoder Models
  • Conditional Generation and Search
  • Ensembling
  • Evaluation
  • Types of Data to Condition On

9. Attention

  • Attention
  • What do We Attend To?
  • Improvements to Attention
  • Specialized Attention Varieties
  1. Case Study: "Attention is All You Need"

Course II: Natural Language Processing and Text Mining

1. Foundations of Natural Language Processing

  • Word embedding
  • Named entity recognition
  • Parts-of-Speech tagging
  • Language modeling
  • Segmentation
  • Paraphrasing
  • Machine translation
  • Information Extraction
  • Text Summarization
  • Conditional Random Fields
  • Dimensionality Reduction: Matrix Factorization, Topic Models

2. Text Classification

  • Tokenization
  • Lemmatization
  • Vectorization
  • Bag of Words representation
  • Language Models
  • TF-IDF
  • Singular Value Decomposition
  • Topic Models
  • Discourse Modelling
  • Coreference Resolution
  • Question Answering Systems
  • Visualizing complex and high dimensional data
  • Sentiment Analysis
  1. Case Study: Spam detector, Q&A, conversational systems
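
TF-IDF, the vectorization scheme listed above, weighs a term by how often it appears in a document and how rare it is across documents. A minimal pure-Python sketch on toy spam-like data (libraries such as scikit-learn provide production versions):

```python
import math
from collections import Counter

docs = [["free", "money", "now"],
        ["meeting", "tomorrow", "morning"],
        ["free", "meeting", "now"]]

def tfidf(doc, docs):
    """Term frequency times inverse document frequency, idf = log(N/df)."""
    tf = Counter(doc)
    n = len(docs)
    return {t: (c / len(doc)) * math.log(n / sum(t in d for d in docs))
            for t, c in tf.items()}

weights = tfidf(docs[0], docs)
# "money" occurs in only one document, so it outweighs "free" and "now".
```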

Course III: Speech Recognition

  1. Acoustic modeling
  2. Highway deep neural network for low resource acoustic models
  3. Temporal classification
  4. Concentrating information in time
  5. Speech recognition
  6. Speaker invariant recognition
  7. Robustness to noise
  8. Speech synthesis
  1. Case Study: Conversational Systems

Course IV: Computer Vision and Image Classification

  • Object detection
    • Selective search
    • R-CNN
    • Fast R-CNN
    • YOLO
  • Video analysis and object tracking
  • Face Recognition
  • Emotion Recognition
  1. Case Study: Medical image analysis -- Pathology
  2. Case Study: Tracking Suspicious movement for security
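
Detection pipelines such as R-CNN and YOLO score candidate boxes with intersection-over-union (IoU). A self-contained sketch with illustrative boxes:

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2),
    the standard overlap score for matching detections to ground truth."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

score = iou((0, 0, 10, 10), (5, 5, 15, 15))   # 25 / 175
```

A detection is typically counted as correct when its IoU with a ground-truth box exceeds a threshold such as 0.5.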

Our Address :

2/3, 2nd Floor, 80 Feet Road, Barleyz Junction, Sony World Crossing, above KFC, Koramangala, Venkappa Garden, Ejipura, Bengaluru, Karnataka 560034

Phone Number :

+91 9582510786
