Cost Functions
Cost Function: The Goal of Optimization
The cost function, or loss function, is the function to be minimized (or maximized) by varying the decision variables. Many machine learning methods solve optimization problems under the hood. They typically minimize the difference between actual and predicted outputs by adjusting the model parameters (like the weights and biases of a neural network, the decision rules of a random forest or gradient boosting model, and so on).
In a regression problem, you typically have the vectors of input variables 𝐱 = (𝑥₁, …, 𝑥ᵣ) and the actual outputs 𝑦. You want to find a model that maps 𝐱 to a predicted response 𝑓(𝐱) so that 𝑓(𝐱) is as close as possible to 𝑦. For example, you might want to predict an output such as a person's salary given inputs like the person's number of years at the company or level of education.
Your goal is to minimize the difference between the prediction 𝑓(𝐱) and the actual data 𝑦. This difference is called the residual.
In this type of problem, you want to minimize the sum of squared residuals (SSR), where SSR = Σᵢ(𝑦ᵢ − 𝑓(𝐱ᵢ))² for all observations 𝑖 = 1, …, 𝑛, where 𝑛 is the total number of observations. Alternatively, you could use the mean squared error (MSE = SSR / 𝑛) instead of SSR.
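As a quick check of these formulas, here's a minimal NumPy sketch that computes SSR and MSE; the values of y and y_pred are made up for illustration:

    import numpy as np

    # Hypothetical actual outputs yᵢ and model predictions f(𝐱ᵢ)
    y = np.array([5.0, 20.0, 14.0, 32.0, 22.0])
    y_pred = np.array([8.0, 18.0, 15.0, 30.0, 21.0])

    residuals = y - y_pred        # yᵢ − f(𝐱ᵢ)
    ssr = np.sum(residuals ** 2)  # sum of squared residuals
    mse = ssr / y.size            # MSE = SSR / n

    print(f"SSR = {ssr:.2f}, MSE = {mse:.2f}")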
Both SSR and MSE use the square of the difference between the actual and predicted outputs. The smaller the difference, the more accurate the prediction, and a difference of zero means the prediction is exactly equal to the actual data.
SSR or MSE is minimized by adjusting the model parameters. For example, in linear regression, you want to find the function 𝑓(𝐱) = 𝑏₀ + 𝑏₁𝑥₁ + ⋯ + 𝑏ᵣ𝑥ᵣ, so you need to determine the weights 𝑏₀, 𝑏₁, …, 𝑏ᵣ that minimize SSR or MSE.
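For instance, here's a minimal sketch with one input variable, where np.linalg.lstsq finds the weights 𝑏₀ and 𝑏₁ that minimize SSR; the x and y values are arbitrary example data:

    import numpy as np

    # Hypothetical data: one input variable x and the actual outputs y
    x = np.array([5.0, 15.0, 25.0, 35.0, 45.0, 55.0])
    y = np.array([5.0, 20.0, 14.0, 32.0, 22.0, 38.0])

    # Design matrix with a leading column of ones for the intercept b₀
    X = np.column_stack([np.ones_like(x), x])

    # lstsq returns the least-squares solution, i.e. the weights minimizing SSR
    weights, ssr, *_ = np.linalg.lstsq(X, y, rcond=None)
    b0, b1 = weights
    print(f"f(x) = {b0:.2f} + {b1:.2f}·x  (SSR = {ssr[0]:.2f})")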
In a classification problem, the outputs 𝑦 are categorical, often either 0 or 1. For example, you might try to predict whether an email is spam or not. In the case of binary outputs, it's convenient to minimize the cross-entropy function, which also depends on the actual outputs 𝑦ᵢ and the corresponding predictions 𝑝(𝐱ᵢ):

−Σᵢ(𝑦ᵢ log(𝑝(𝐱ᵢ)) + (1 − 𝑦ᵢ) log(1 − 𝑝(𝐱ᵢ)))
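Here's a minimal sketch of this formula, with made-up binary labels y and predicted probabilities p:

    import numpy as np

    # Hypothetical binary labels yᵢ and predicted probabilities p(𝐱ᵢ)
    y = np.array([0, 1, 1, 0, 1])
    p = np.array([0.1, 0.8, 0.6, 0.3, 0.9])

    # Cross-entropy: −Σᵢ(yᵢ log(p(𝐱ᵢ)) + (1 − yᵢ) log(1 − p(𝐱ᵢ)))
    cross_entropy = -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
    print(f"cross-entropy = {cross_entropy:.4f}")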
In logistic regression, which is often used to solve classification problems, the functions 𝑓(𝐱) and 𝑝(𝐱) are defined as the following:

𝑓(𝐱) = 𝑏₀ + 𝑏₁𝑥₁ + ⋯ + 𝑏ᵣ𝑥ᵣ

𝑝(𝐱) = 1 / (1 + exp(−𝑓(𝐱)))
Again, you need to find the weights 𝑏₀, 𝑏₁, …, 𝑏ᵣ, but this time they should minimize the cross-entropy function.
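Putting the pieces together, here's a sketch that fits a logistic regression on made-up data by minimizing the cross-entropy with scipy.optimize.minimize, a general-purpose minimizer (in practice you would typically use a dedicated library such as scikit-learn):

    import numpy as np
    from scipy.optimize import minimize

    # Hypothetical data: one input variable x and binary labels y
    x = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0])
    y = np.array([0, 0, 0, 1, 0, 1, 1, 1])
    X = np.column_stack([np.ones_like(x), x])  # intercept column + x

    def cross_entropy(b):
        """Cross-entropy as a function of the weights b = (b₀, b₁)."""
        p = 1.0 / (1.0 + np.exp(-X @ b))  # p(𝐱) = 1 / (1 + exp(−f(𝐱)))
        p = np.clip(p, 1e-12, 1 - 1e-12)  # guard against log(0)
        return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

    # Minimize the cross-entropy over the weights, starting from zeros
    result = minimize(cross_entropy, x0=np.zeros(2))
    print("weights b₀, b₁:", result.x)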