Showing posts with label ANALYTICS. Show all posts

Tuesday, July 14, 2020

Logistic Regression

SMART SUBU


 



It is important to understand the different machine learning algorithms. Keeping abreast of how each algorithm works helps the data science enthusiast understand their problem, their data set, and the procedure to apply in order to derive the intended results.

We will start our discussion with Logistic Regression.

Logistic regression is one of the most popular mathematical modelling procedures and underlies many data analysis algorithms. It is essentially a regression analysis in which the value of the outcome (dependent) variable is restricted to the range between 0 and 1. To achieve this, a logistic function is used, which is the mathematical function on which the logistic model is based. The beauty of the logistic model is that any output value, irrespective of its range, is mapped into the comprehensible range of 0 to 1.

If you are interested in how to apply logistic regression and the mathematical intuition behind it, you can express your interest in joining the data science boot camp for free by email to smartsubu2020@gmail.com.

Monday, July 13, 2020

K Nearest Neighbours

SMART SUBU


It is important to understand the different machine learning algorithms. We have already discussed Logistic Regression (see the earlier post if you would like a refresher).

We will discuss K Nearest Neighbours.

In the arena of classification algorithms, K Nearest Neighbours, popularly known as KNN, is one of the most widely used. The algorithm classifies objects at the local level and defers all computation until a prediction is actually requested; hence it is also known as a lazy learning algorithm.

The algorithm works by selecting the k nearest neighbours of the test sample from the training samples according to a distance measure, and then predicting for the test sample the majority class among those k (as specified by the user) nearest training samples.
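The procedure above can be sketched in a few lines of plain Python. This is a hypothetical two-class example with made-up points, not taken from the boot camp material:

```python
import math
from collections import Counter

def knn_predict(training_samples, test_point, k=3):
    # Lazy learning: nothing is precomputed; all distances are
    # measured only when a prediction is requested.
    nearest = sorted(training_samples,
                     key=lambda sample: math.dist(sample[0], test_point))
    # Majority vote among the k nearest training samples.
    votes = Counter(label for _, label in nearest[:k])
    return votes.most_common(1)[0][0]

# Hypothetical training data: two clusters labelled "A" and "B".
training = [((1.0, 1.0), "A"), ((1.2, 0.8), "A"), ((0.9, 1.1), "A"),
            ((5.0, 5.0), "B"), ((5.2, 4.9), "B"), ((4.8, 5.1), "B")]

prediction = knn_predict(training, (1.1, 0.9), k=3)
```

A point near the first cluster is voted into class "A"; the choice of k and of the distance measure are the main knobs the user turns.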

If you are interested in how to apply the K Nearest Neighbours classification algorithm and the mathematical intuition behind it, you can express your interest in joining the data science boot camp for free by email to smartsubu2020@gmail.com.


Sunday, July 12, 2020

Naïve Bayes

SMART SUBU


It is important to understand the different machine learning algorithms. We have already discussed the K Nearest Neighbours classification algorithm (see the earlier post if you would like a refresher).

We will discuss Naïve Bayes.

Based on the famous Bayes theorem, this algorithm calculates the conditional probability that an object with a given feature vector belongs to a particular class. The algorithm assumes that the features occur independently of one another, hence the name "Naïve". The basic principle of Bayes' theorem is used to obtain the conditional probability of a class given the observed features, under this independence assumption.
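To make the conditional-probability idea concrete, here is a small worked example with made-up numbers (a hypothetical one-word spam filter, not from the post):

```python
# Hypothetical priors and likelihoods for a one-word spam filter.
p_spam = 0.3                 # P(spam)
p_word_given_spam = 0.6      # P(word | spam)
p_word_given_ham = 0.1       # P(word | not spam)

# Total probability of seeing the word at all.
p_word = p_word_given_spam * p_spam + p_word_given_ham * (1 - p_spam)

# Bayes' theorem: P(spam | word) = P(word | spam) * P(spam) / P(word)
p_spam_given_word = p_word_given_spam * p_spam / p_word
```

With these numbers the posterior works out to 0.72: observing the word raises the probability of spam from the 30% prior to 72%. Naïve Bayes applies the same update once per feature, multiplying the likelihoods as if the features were independent.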

If you are interested in how to apply the Naïve Bayes algorithm and the mathematical intuition behind it, you can express your interest in joining the data science boot camp for free by email to smartsubu2020@gmail.com.

Friday, July 10, 2020

Support Vector Machine

SMART SUBU


It is important to understand the different machine learning algorithms. We have already discussed several machine learning algorithms (see the earlier posts).

We will discuss Support Vector Machine.

Support Vector Machine is one of the best classification algorithms; the data sets are represented as points in space. It is a supervised learning algorithm in which the machine constructs the hyperplane that divides the data set into the different categories. The hyperplane is constructed on the principle of providing the maximum margin between the categories. This maximum-margin principle is what helps Support Vector Machine reduce overfitting and achieve better accuracy as a classification algorithm.
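As an illustrative sketch using scikit-learn (assuming `sklearn` is available; the toy points are made up, not from the boot camp), a linear-kernel SVM finds the maximum-margin hyperplane between two clusters:

```python
from sklearn.svm import SVC

# Two hypothetical, linearly separable clusters in 2-D.
X = [[0, 0], [0, 1], [1, 0],
     [4, 4], [4, 5], [5, 4]]
y = [0, 0, 0, 1, 1, 1]

# Fit a linear-kernel SVM; it constructs the separating hyperplane
# with the maximum margin between the two categories.
clf = SVC(kernel="linear", C=1.0)
clf.fit(X, y)

predictions = clf.predict([[0.5, 0.5], [4.5, 4.5]])
```

Only the points closest to the boundary (the support vectors) determine the hyperplane; the rest of the training data could be removed without changing it.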

If you are interested in how to apply Support Vector Machine algorithm and the mathematical intuition behind it, you can express your interest in joining the data science boot camp for free by email to smartsubu2020@gmail.com.

Thursday, July 9, 2020

Random Forests

SMART SUBU



It is important to understand the different machine learning algorithms. We have already discussed several machine learning algorithms (see the earlier posts).

We will discuss Random Forests.

Just as a forest contains many trees, a combination of different decision tree predictors constitutes a random forest. Each tree is grown from a random vector sampled independently, with the same distribution for all trees in the forest. The popularity of random forests is due to their flexibility and ease of use in both classification and regression problems.
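A minimal sketch with scikit-learn (assuming `sklearn` is installed; the built-in iris data set stands in for any real classification problem):

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An ensemble of 100 decision trees, each grown from an
# independently drawn bootstrap sample of the training data.
forest = RandomForestClassifier(n_estimators=100, random_state=0)
forest.fit(X_train, y_train)

accuracy = forest.score(X_test, y_test)
```

Each tree votes on the class of a test sample, and the forest reports the majority; averaging over many independently grown trees is what gives the method its robustness.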

If you are interested in how to apply the Random Forests algorithm and the mathematical intuition behind it, you can express your interest in joining the data science boot camp for free by email to smartsubu2020@gmail.com.