김태오
Gradient Descent vs. Normal Equation
Gradient Descent and the Normal Equation are two common techniques in machine learning for finding the parameters that minimize a model's cost function.
Gradient Descent is an iterative optimization algorithm that minimizes the cost function by moving the model parameters in the direction of steepest descent, i.e., along the negative gradient. It starts from an initial guess and repeatedly updates the parameters until convergence. Gradient Descent is widely used because it scales to large datasets and applies to complex, non-linear models.
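As a minimal sketch of the update rule described above, here is batch gradient descent on a toy linear-regression problem. The data, learning rate, and iteration count are illustrative assumptions, not from the post:

```python
import numpy as np

# Toy data: y = 1 + 2x plus a little noise (illustrative, not from the post).
rng = np.random.default_rng(0)
X = np.c_[np.ones(100), rng.uniform(0, 1, 100)]  # bias column + one feature
y = X @ np.array([1.0, 2.0]) + rng.normal(0, 0.1, 100)

def gradient_descent(X, y, lr=0.5, n_iters=1000):
    """Minimize the MSE cost J(theta) = (1/2m)||X @ theta - y||^2."""
    m, n = X.shape
    theta = np.zeros(n)                   # initial guess
    for _ in range(n_iters):
        grad = X.T @ (X @ theta - y) / m  # gradient of the cost
        theta -= lr * grad                # step along the negative gradient
    return theta

theta = gradient_descent(X, y)  # should land near [1, 2]
```

With a suitable learning rate the iterates converge to the same least-squares solution the Normal Equation gives in one step; too large a rate makes the updates diverge.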
The Normal Equation, on the other hand, is a closed-form solution that computes the optimal parameters directly: for linear regression, θ = (XᵀX)⁻¹Xᵀy. Unlike Gradient Descent, it requires no iteration or learning-rate tuning and solves the problem in one step. However, it becomes computationally expensive when the number of features is large (solving the system is roughly cubic in the feature count), and it applies only to linear least-squares problems, not to general non-linear models.
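The one-step solution can be sketched as follows, on the same kind of toy data (again an illustrative setup); solving the linear system is preferred over forming the explicit inverse for numerical stability:

```python
import numpy as np

# Toy data: y = 1 + 2x plus noise (illustrative, not from the post).
rng = np.random.default_rng(0)
X = np.c_[np.ones(100), rng.uniform(0, 1, 100)]  # bias column + one feature
y = X @ np.array([1.0, 2.0]) + rng.normal(0, 0.1, 100)

# Normal Equation: solve (X^T X) theta = X^T y in one step,
# equivalent to theta = (X^T X)^{-1} X^T y but numerically more stable.
theta = np.linalg.solve(X.T @ X, X.T @ y)  # should land near [1, 2]
```

For ill-conditioned or rank-deficient design matrices, `np.linalg.lstsq` is the more robust choice.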