Hello guys, hope you are doing well. In this tutorial, we are going to understand logistic regression from the ground up and take our understanding to the next level. Logistic regression is usually the very first algorithm people encounter when they start learning classification algorithms. Beyond that, logistic regression is a building block of all of deep learning: if you understand logistic regression well, your deep learning foundation will be strong, and it is very easy to learn.

**Table Of Contents**

- Approach to Learn Logistic Regression
- What Does Logistic Regression Do?
- Perceptron Trick for Solving Logistic Regression
- How Do Transformations Happen?
- Unraveling the Algorithm
- Simplification
- Implementation of the Perceptron Trick with Python
- Problem with the Perceptron Trick
- Why Have We Studied the Perceptron Trick?
- Conclusion

**Approach to learn Logistic Regression**

There are a huge number of articles and videos already available on the internet about logistic regression. If you search, you will find two approaches with which people teach and understand it: the first is geometric intuition and the second is the probability approach.

We will start with the probability approach, because the probability intuition presents a clear picture of everything and is easy to understand in detail; after that, you can go for the geometric intuition.

**What Does Logistic Regression Do?**

Logistic regression is a classification algorithm used for binary classification tasks. Like linear regression, it works with a line: it separates the two classes by drawing a boundary line between them, or a hyperplane in higher-dimensional data.

Logistic regression only works when the data is linearly separable, or almost linearly separable. With a non-linear relationship, logistic regression will not perform well, because we cannot separate the two classes just by drawing a line between them; you can understand this better with the figures below.

This is linearly separable data, and logistic regression separates these two classes by drawing a boundary line between them.

**What is Perceptron Trick for solving Logistic Regression?**

The perceptron trick is very simple to understand. First, you might assume that since we are drawing a line to separate the two classes, the line is found the same way as in linear regression; no, that is not the case in logistic regression.

The way the perceptron trick works is that it aims to reach the separating line by applying some transformations to the line equation. Consider the example in the diagram above, where using CGPA we have to predict whether a student is placed or not: green dots represent placed students, and blue dots students who were not placed. Now, how will we separate the two classes with the perceptron trick?

We start with a randomly drawn line and compare it against random points. Suppose we pick the first point and ask whether it is correctly classified; it is. Then we pick the second point and ask the same; it is misclassified.

If a misclassification occurs, we move the line toward the misclassified point by applying some transformation to the equation, obtaining a new line. By running several iterations like this, we obtain the best separating line.

**How do Transformations Happen?**

As we already know, the transformation of the line only happens by changing the values of A, B, and C in its equation. Each of the three values has its own importance and affects a different aspect of the line:

- Changing C moves the line up or down, parallel to itself.
- Changing A changes the slope while the Y-intercept stays fixed, so the line rotates about the point where it crosses the Y-axis.
- Changing B changes the slope while the X-intercept stays fixed, so the line rotates about the point where it crosses the X-axis.

Whenever we want to move the line in a positive direction we decrease the coefficient values, and vice versa. Concretely, if a negative point is misclassified in the positive region, we append a value of 1 to its coordinates (to pair with the constant term) and subtract the result from the line's coefficients. And if a positive point is misclassified in the negative region, we append the 1 in the same way and add the result to the line's coefficients.
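A minimal sketch of one such update, with made-up numbers. The coefficients are stored constant-term-first as [C, A, B] so that the appended 1 lines up with C; this ordering is an assumption for illustration.

```python
import numpy as np

# Coefficients [C, A, B] of the line A*x + B*y + C = 0, with the
# constant term first so it pairs with the appended 1.
# (These numbers are made up for illustration.)
coeffs = np.array([2.0, 3.0, 4.0])

# A negative point (x=1, y=2) misclassified in the positive region:
# append a 1 and SUBTRACT from the coefficients.
neg_point = np.array([1.0, 1.0, 2.0])   # [1, x, y]
print(coeffs - neg_point)               # [1. 2. 2.]

# A positive point (x=0.5, y=1.5) misclassified in the negative region:
# append a 1 and ADD to the coefficients.
pos_point = np.array([1.0, 0.5, 1.5])   # [1, x, y]
print(coeffs + pos_point)
```

Each such nudge moves the boundary toward the misclassified point.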

**Unraveling the Algorithm of the Perceptron Trick**

Now we will dive into the mathematical intuition of how the transformation happens. After understanding the algorithm, we will convert it into code and implement it practically.

That's it. We can also write it in the form of a matrix and take a dot product; as a result, we will get the same equation.

- **Step 1)** Decide the number of iterations (epochs) for which to run a loop.
- **Step 2)** From the training data, randomly select any training data point.
- **Step 3)** Check whether the picked point is correctly classified or not.
- **Step 4)** If a negative point is wrongly placed in the positive region, update the coefficients using the above transformation equation, and vice versa.

**Simplification of the equation**

We will simplify the above algorithm. Instead of comparing twice (once for each kind of misclassification), after a few simplifications we only need to compare once, and both update equations fall out automatically.

Have a look at the figure below carefully, and then go through the explanation.

**Explanation -** Look carefully at the table in the figure above. These are the only four cases that occur in prediction. The top two cases are correctly classified: no change happens, and since the difference is zero, the new coefficients equal the old coefficients.

The last two cases are the wrong predictions, and if you put their values into the equation, the addition and subtraction updates we compared above form automatically.
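The four cases can be checked in a few lines: the single factor (y_true − y_pred) is 0 for both correct cases, +1 when a positive point is predicted negative (addition), and −1 when a negative point is predicted positive (subtraction).

```python
# The four prediction cases from the table, unified by (y_true - y_pred).
cases = [
    (1, 1),  # positive point, predicted positive -> no change
    (0, 0),  # negative point, predicted negative -> no change
    (1, 0),  # positive point, predicted negative -> addition
    (0, 1),  # negative point, predicted positive -> subtraction
]
factors = [y_true - y_pred for y_true, y_pred in cases]
print(factors)  # [0, 0, 1, -1]
```

Multiplying the point by this factor before adding it to the coefficients replaces the two separate comparisons with one update rule.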

**Getting Our Hands Dirty with the Perceptron Trick**

**Step-1) Create data**
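One way this step might look, as a sketch: two linearly separable blobs standing in for the placement example (the feature names, blob centers, and seed are all assumptions for illustration).

```python
import numpy as np

# A hypothetical toy dataset: two linearly separable blobs,
# e.g. CGPA and a test score, with placed = 1 and not placed = 0.
rng = np.random.default_rng(42)
placed = rng.normal(loc=[7.5, 7.5], scale=0.5, size=(50, 2))      # class 1
not_placed = rng.normal(loc=[5.0, 5.0], scale=0.5, size=(50, 2))  # class 0

X = np.vstack([placed, not_placed])
y = np.array([1] * 50 + [0] * 50)
print(X.shape, y.shape)  # (100, 2) (100,)
```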

**Step-2) Implement the perceptron function**
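A sketch of the perceptron function following the four steps above (the function name, defaults, and return format are assumptions, not a fixed API):

```python
import numpy as np

def perceptron(X, y, epochs=1000, lr=0.1):
    """Perceptron trick: nudge the line toward each misclassified point."""
    X = np.insert(X, 0, 1, axis=1)      # prepend a 1 for the constant term C
    weights = np.ones(X.shape[1])       # start from an arbitrary line
    for _ in range(epochs):
        i = np.random.randint(len(X))                    # pick a random point
        y_hat = 1 if np.dot(X[i], weights) >= 0 else 0   # step activation
        # (y - y_hat) is 0 when correct and +1/-1 when misclassified,
        # so this one line covers both the addition and subtraction cases.
        weights = weights + lr * (y[i] - y_hat) * X[i]
    return weights[0], weights[1:]      # intercept C, coefficients [A, B]
```

The learning rate `lr` scales each nudge so a single point does not swing the line too far.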

**Step-3) Calculate m and b**
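To plot the learned line, we can convert A*x + B*y + C = 0 into slope-intercept form y = m*x + b by solving for y, which gives m = -A/B and b = -C/B (assuming B is non-zero; the numbers below are made up):

```python
# Convert the learned coefficients of A*x + B*y + C = 0
# into slope m and intercept b of y = m*x + b.
def line_params(C, A, B):
    m = -A / B
    b = -C / B
    return m, b

m, b = line_params(C=2.0, A=4.0, B=2.0)
print(m, b)  # -2.0 -1.0
```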

Think it over: the line you are getting does classify the classes correctly, and that is exactly how it happens here. But the way logistic regression works is slightly different; the perceptron trick is only a simple method to achieve this.

**Problem with Perceptron Trick**

If you observe the changes, the perceptron stops as soon as it classifies all points correctly, but the logistic regression line does not stop there: it keeps improving and draws the line in a symmetric way between the classes. That is why logistic regression performs better than the perceptron trick.

**Why Have We Studied the Perceptron Trick?**

Because it builds the intuition for what comes next: logistic regression trained with gradient descent follows the same loop of picking points and updating coefficients, only with a smoother, probability-based update instead of a hard step.

**Conclusion**

Thank you for following the article till the end. I hope it was easy to grasp the working of the perceptron algorithm for logistic regression. If you have any doubts, please post them in the comment section below; I will be happy to help you out. In the next article, we will understand complete logistic regression with gradient descent and the sigmoid function, so stay tuned and keep reading.

*happy learning, keep learning*