Bruce Haydon
2 min read · May 9, 2022

Cost Functions

Cost Function: The Goal of Optimization

The cost function, or loss function, is the function to be minimized (or maximized) by varying the decision variables. Many machine learning methods solve optimization problems under the hood. They minimize the difference between actual and predicted outputs by adjusting the model parameters (such as weights and biases in neural networks, or decision rules in random forests and gradient boosting).

In a regression problem, you typically have the vectors of input variables 𝐱 = (π‘₯₁, …, π‘₯α΅£) and the actual outputs 𝑦. You want to find a model that maps 𝐱 to a predicted response 𝑓(𝐱) so that 𝑓(𝐱) is as close as possible to 𝑦. For example, you might want to predict an output such as a person’s salary given inputs like the person’s number of years at the company or level of education.

Your goal is to minimize the difference between the prediction 𝑓(𝐱) and the actual data 𝑦. This difference is called the residual.

In this type of problem, you want to minimize the sum of squared residuals (SSR), where SSR = Ξ£α΅’(𝑦ᡒ βˆ’ 𝑓(𝐱ᡒ))Β² for all observations 𝑖 = 1, …, 𝑛, where 𝑛 is the total number of observations. Alternatively, you could use the mean squared error (MSE = SSR / 𝑛) instead of SSR.
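As a quick illustration, SSR and MSE can be computed in a few lines of NumPy. The actual outputs and predictions below are made-up numbers, used only to show the arithmetic:

```python
import numpy as np

# Hypothetical data: actual outputs y and model predictions f(x)
y = np.array([5.0, 20.0, 14.0, 32.0, 22.0, 38.0])
y_pred = np.array([8.0, 18.0, 16.0, 30.0, 25.0, 35.0])

residuals = y - y_pred           # yᵢ − f(𝐱ᵢ) for each observation
ssr = np.sum(residuals ** 2)     # sum of squared residuals
mse = ssr / y.size               # mean squared error = SSR / n

print(ssr, mse)
```

Note that MSE is just SSR rescaled by the number of observations, so minimizing one minimizes the other.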

Both SSR and MSE use the square of the difference between the actual and predicted outputs. The lower the difference, the more accurate the prediction. A difference of zero indicates that the prediction is equal to the actual data.

SSR or MSE is minimized by adjusting the model parameters. For example, in linear regression, you want to find the function 𝑓(𝐱) = 𝑏₀ + 𝑏₁π‘₯₁ + β‹― + 𝑏ᡣπ‘₯α΅£, so you need to determine the weights 𝑏₀, 𝑏₁, …, 𝑏ᡣ that minimize SSR or MSE.
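For ordinary linear regression, the weights that minimize SSR can be found in closed form. A minimal sketch with NumPy's least-squares solver, using invented data for the salary example (years at the company vs. salary in thousands of dollars):

```python
import numpy as np

# Hypothetical data: years at the company (x) and salary in $1000s (y)
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([45.0, 50.0, 60.0, 65.0, 70.0])

# Design matrix with a column of ones for the intercept b0
X = np.column_stack([np.ones_like(x), x])

# The least-squares solution minimizes SSR = ||y - Xb||^2
b, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
print(b)  # weights [b0, b1]
```

With more features, you would simply add more columns to the design matrix; the solver still returns the SSR-minimizing weights.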

In a classification problem, the outputs 𝑦 are categorical, often either 0 or 1. For example, you might try to predict whether an email is spam or not. In the case of binary outputs, it’s convenient to minimize the cross-entropy function, which depends on the actual outputs 𝑦ᵢ and the corresponding predicted probabilities 𝑝(𝐱ᵢ):

𝐶 = −Σᵢ (𝑦ᵢ log(𝑝(𝐱ᵢ)) + (1 − 𝑦ᵢ) log(1 − 𝑝(𝐱ᵢ)))
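Assuming 𝑦 holds 0/1 labels and 𝑝 holds predicted probabilities, the binary cross-entropy can be computed directly with NumPy (the labels and probabilities below are made up):

```python
import numpy as np

def cross_entropy(y, p):
    """Binary cross-entropy: -sum(y*log(p) + (1-y)*log(1-p))."""
    y, p = np.asarray(y, float), np.asarray(p, float)
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

# Hypothetical labels and predicted probabilities
y = [1, 0, 1, 1]
p = [0.9, 0.1, 0.8, 0.7]
print(cross_entropy(y, p))
```

For each observation, only one of the two terms is nonzero: confident, correct predictions contribute little to the sum, while confident wrong predictions are penalized heavily.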

In logistic regression, which is often used to solve classification problems, the functions 𝑝(𝐱) and 𝑓(𝐱) are defined as follows:

𝑝(𝐱) = 1 / (1 + exp(−𝑓(𝐱)))

𝑓(𝐱) = 𝑏₀ + 𝑏₁π‘₁ + ⋯ + 𝑏ᵣπ‘ᵣ

Again, you need to find the weights 𝑏₀, 𝑏₁, …, 𝑏ᡣ, but this time they should minimize the cross-entropy function.
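Unlike linear regression, there is no closed-form solution here, so the weights are typically found iteratively. A minimal gradient-descent sketch on toy one-feature data (the data, learning rate, and iteration count are all arbitrary choices for illustration):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical data: one feature, binary labels
x = np.array([0.5, 1.0, 1.5, 3.0, 3.5, 4.0])
y = np.array([0, 0, 0, 1, 1, 1])
X = np.column_stack([np.ones_like(x), x])  # intercept column plus feature

b = np.zeros(2)            # weights b0, b1
lr = 0.05                  # learning rate (arbitrary)
for _ in range(5000):
    p = sigmoid(X @ b)     # predicted probabilities p(x)
    grad = X.T @ (p - y)   # gradient of the cross-entropy w.r.t. b
    b -= lr * grad         # gradient-descent step

print(sigmoid(X @ b))      # probabilities should track the labels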

(DRAFT β€” Bruce Haydon v1.3.1)

Bruce Haydon
Bruce Haydon

Written by Bruce Haydon

Global Banking: New York, New York

Responses (1)