<![CDATA[ProgrammingDroid ]]>https://programmingdroid.comRSS for NodeFri, 04 Oct 2024 04:41:22 GMT60<![CDATA[Guide for Over-fitting vs Under-fitting]]>https://programmingdroid.com/guide-for-over-fitting-vs-under-fittinghttps://programmingdroid.com/guide-for-over-fitting-vs-under-fittingWed, 11 Nov 2020 12:52:41 GMT<![CDATA[<p>Hi everyone!! This is my first article on this website, hope it helps you all! :)</p><p>The terms Over-fitting and Under-fitting are quite common among people in the Machine Learning and Data Science field. In this article, we will look into these two terms, and a few more, to understand them better.</p><h3 id="what-is-over-fitting">What is Over-fitting?</h3><p>Over-fitting is when the model has high variance and low bias.</p><h3 id="what-is-under-fitting">What is Under-fitting?</h3><p>Under-fitting is when the model has high bias and low variance.</p><h4 id="oh-ho-what-is-this-bias-and-variance-now">Oh ho, what is this Bias and Variance now?</h4><ul><li><strong>Bias</strong>: In simple language, understand this as when our model makes very <strong>simple</strong> assumptions about the data.</li><li><strong>Variance</strong>: In contrast to bias, variance is when our model is too <strong>complex</strong>, fitting the training data too closely.
</li></ul><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1604900578359/9Iuplhrpe.png" alt="image.png" /></p><p>As we can see in the above image, the first one is an example of high bias (under-fitting) and the last one is an example of high variance (over-fitting).</p><h4 id="okay-now-tell-us-how-would-we-know-our-model-is-over-fittinghigh-bias-and-underfittinghigh-variance">Okay, now tell us how would we know our model is over-fitting (high variance) or under-fitting (high bias)?</h4><p>One way is to look at the error in the model's predictions.</p><ul><li><p><strong>In the case of Under-fitting</strong>: We have high error on the training data as well as the testing data.</p></li><li><p><strong>In the case of Over-fitting</strong>: We have low error on the training data but high error on the testing data.</p></li></ul><p>Now that we know all the required terms, let's conclude and define Under-fitting and Over-fitting again.</p><h3 id="conclusion">Conclusion:</h3><p>A model is said to be under-fit when it has high bias and low variance; this can be verified if the model gives high error on both the training and the test dataset.</p><p>On the other hand, a model is said to be over-fit if it has high variance and low bias; for verification, an over-fit model has high accuracy (low error) on the training data but high error on the test data.</p><h5 id="author-satyampdhttpswwwkagglecomsatyampd">Author: <a target="_blank" href="https://www.kaggle.com/satyampd">Satyampd</a></h5>]]>https://cdn.hashnode.com/res/hashnode/image/upload/v1605099178176/i8Sp39wZX.png<![CDATA[L1 and L2 Regularization Guide: Lasso and Ridge Regression]]>https://programmingdroid.com/l1-and-l2-regularization-guide-lasso-and-ridge-regressionhttps://programmingdroid.com/l1-and-l2-regularization-guide-lasso-and-ridge-regressionWed, 11 Nov 2020 04:18:09 GMT<![CDATA[<p>This article is about Lasso Regression and Ridge Regression, aka L1 and L2 regularization respectively. Here we will learn and discuss <strong>L1 vs L2 Regularization: Lasso and Ridge Regression.</strong></p><p>The <strong>key difference between L1 and L2 regularization is the penalty term</strong>: L2 penalizes the sum of the <strong>squares</strong> of the weights, while L1 penalizes the sum of the <strong>absolute</strong> values of the weights. Using these techniques we can avoid over-fitting.</p><h2 id="l1-regularization-or-lasso-regression">L1 Regularization or Lasso Regression</h2><p>In L1 Regularization or Lasso Regression, the cost function is modified by adding an L1 penalty term: <strong>lambda times the sum of the absolute (mod) values of the weights, added to the usual sum of squared differences between the actual and the predicted values.</strong></p><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1605068224338/8-80c48DB.png" alt="image.png" />Cost Function for Lasso Regression</p><h2 id="l2-regularization-or-ridge-regression">L2 Regularization or Ridge Regression</h2><p>In L2 Regularization or Ridge Regression, the cost function is modified by adding an L2 penalty term: <strong>lambda times the sum of the squares of the weights, added to the usual sum of squared differences between the actual and the predicted values</strong>.</p><p><img
src="https://cdn.hashnode.com/res/hashnode/image/upload/v1605068241968/uWqgNVdjt.png" alt="image.png" />Cost function for Ridge Regression</p><h3 id="end-notes">End Notes:</h3><ol><li><p>Lasso Regression is useful for feature selection, as it drives the slopes (weights) of less important features towards exactly zero during model fitting, effectively removing them from the model.</p></li><li><p>It is very important to choose the right value of lambda; if it is too large, the model can under-fit.</p></li></ol><h5 id="author-satyampdhttpswwwkagglecomsatyampd">Author: <a target="_blank" href="https://www.kaggle.com/satyampd">Satyampd</a></h5>]]>https://cdn.hashnode.com/res/hashnode/image/upload/v1605069266151/d29Jgx2M9.png
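The L2 (ridge) cost function described in the regularization article above can be sketched in a few lines of pure Python. This is a minimal illustration, not the article's own code: it fits a degree-4 polynomial by solving the ridge normal equations (X^T X + lambda*I) w = X^T y, and shows that a larger lambda shrinks the weights while giving up some training accuracy. The data points and lambda values here are made up for the demo, and for simplicity the intercept is penalized along with the slopes.

```python
def solve(A, b):
    """Solve A w = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    w = [0.0] * n
    for r in range(n - 1, -1, -1):
        w[r] = (M[r][n] - sum(M[r][c] * w[c] for c in range(r + 1, n))) / M[r][r]
    return w

def ridge_fit(xs, ys, degree, lam):
    """Fit a polynomial with an L2 penalty: minimise sum (y - Xw)^2 + lam * sum w^2."""
    X = [[x ** d for d in range(degree + 1)] for x in xs]  # Vandermonde features
    n = degree + 1
    # Normal equations: (X^T X + lam*I) w = X^T y
    XtX = [[sum(X[i][a] * X[i][b] for i in range(len(xs))) + (lam if a == b else 0.0)
            for b in range(n)] for a in range(n)]
    Xty = [sum(X[i][a] * ys[i] for i in range(len(xs))) for a in range(n)]
    return solve(XtX, Xty)

def norm(w):
    """Sum of squared weights -- the quantity the L2 penalty shrinks."""
    return sum(wi * wi for wi in w)

xs = [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]
ys = [0.1, 0.9, 0.3, 1.1, 0.6, 1.4]   # noisy, roughly increasing data

w_small = ridge_fit(xs, ys, degree=4, lam=1e-6)   # almost unregularized: wiggly fit
w_large = ridge_fit(xs, ys, degree=4, lam=10.0)   # strong L2 penalty: shrunk weights

print(norm(w_small) > norm(w_large))  # the L2 penalty shrinks the weights
```

The L1 (Lasso) penalty has no closed-form solution like the normal equations above, which is why Lasso is usually fit with iterative methods such as coordinate descent; its absolute-value penalty is what pushes individual weights all the way to zero.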