김태오


Gradient Descent vs. Normal Equation

ystc1247 2023. 4. 23. 00:59

Gradient Descent and the Normal Equation are two popular techniques in machine learning for finding the model parameters that minimize a cost function.

Gradient Descent is an iterative optimization algorithm that minimizes the cost function by repeatedly adjusting the model parameters in the direction of steepest descent, i.e., opposite the gradient of the cost function. It starts from an initial guess of the parameters and updates them until convergence. Gradient Descent is widely used in machine learning because it scales to large datasets and works for complex, non-linear models.
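As a minimal sketch, batch gradient descent for linear regression might look like this (the toy data, learning rate, and iteration count are illustrative assumptions, not from the post):

```python
import numpy as np

# Toy data: y = 3x + 2 plus a little noise (assumed example, not from the post).
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(100, 1))
y = 3 * X[:, 0] + 2 + rng.normal(0, 0.1, size=100)

# Design matrix with a bias column of ones, so theta = [intercept, slope].
A = np.hstack([np.ones((100, 1)), X])

# Batch gradient descent on mean squared error.
theta = np.zeros(2)
lr = 0.5          # learning rate (illustrative choice)
for _ in range(2000):
    grad = (2 / len(y)) * A.T @ (A @ theta - y)  # gradient of MSE
    theta -= lr * grad                           # step opposite the gradient

print(theta)  # approximately [2., 3.]
```

Each iteration costs one pass over the data, which is why the method scales to problems where a closed-form solve would be impractical.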
On the other hand, the Normal Equation is a closed-form solution that directly computes the optimal parameters minimizing the cost function. Unlike Gradient Descent, it requires no iteration and solves a linear regression problem in one step. However, it involves inverting an n x n matrix (where n is the number of features), which costs roughly O(n^3), so it becomes expensive when there are many features, and it only applies to models that are linear in their parameters.
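The closed form in question is theta = (X^T X)^(-1) X^T y. A sketch on toy data (the data here is an assumed example; `lstsq` is used instead of an explicit inverse for numerical stability):

```python
import numpy as np

# Same kind of toy data: y = 3x + 2 plus noise (assumed example).
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(100, 1))
y = 3 * X[:, 0] + 2 + rng.normal(0, 0.1, size=100)

# Design matrix with a bias column of ones.
A = np.hstack([np.ones((100, 1)), X])

# Normal equation: theta = (A^T A)^{-1} A^T y.
# np.linalg.lstsq solves the same least-squares problem more stably
# than forming the inverse explicitly.
theta, *_ = np.linalg.lstsq(A, y, rcond=None)

print(theta)  # approximately [2., 3.]
```

One solve, no learning rate to tune, no iteration count to pick: the trade-off against gradient descent is entirely in the cost of that solve as the feature count grows.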

[Figure: Gradient Descent and Normal Equation]
