CE Loss
- Cross-Entropy (CE) loss: a cost function commonly used to evaluate classification models, particularly those that produce probabilistic outputs.
- Calculation: CE loss measures the dissimilarity between the predicted probability distribution (y_pred) and the true probability distribution (y_true) over the classes; the exact form is given below.
- Probabilistic models: particularly suitable for models that output probabilities, such as neural networks with a softmax activation in the output layer.
- Minimization: training aims to minimize the CE loss, which brings the predicted probabilities into closer alignment with the true class labels.
- Interpretation: lower CE loss indicates better model performance; a CE loss of 0 means the predicted and true probability distributions match exactly.
CE Loss Equation
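In the notation above, for a single sample with $C$ classes:

$$\mathcal{L}_{CE} = -\sum_{i=1}^{C} y_{\text{true},\,i}\,\log\left(y_{\text{pred},\,i}\right)$$

For a one-hot y_true, this reduces to the negative log of the probability the model assigns to the correct class.

A minimal NumPy sketch of the same computation (the function name, the eps guard, and the example values are illustrative, not from the original post):

```python
import numpy as np

def cross_entropy_loss(y_true, y_pred, eps=1e-12):
    """CE loss between a target distribution and predicted probabilities."""
    y_pred = np.clip(y_pred, eps, 1.0)      # avoid log(0)
    return -np.sum(y_true * np.log(y_pred))

# 3-class example: the true class is index 1 (one-hot target)
y_true = np.array([0.0, 1.0, 0.0])
y_pred = np.array([0.1, 0.8, 0.1])          # e.g. a softmax output
print(cross_entropy_loss(y_true, y_pred))   # -log(0.8) ≈ 0.223

# A perfect prediction drives the loss to 0
print(cross_entropy_loss(y_true, np.array([0.0, 1.0, 0.0])))  # ≈ 0
```

Note that only the term for the true class contributes when y_true is one-hot, which is why confident, correct predictions yield a small loss and confident, wrong predictions are penalized heavily.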