김태오
CE Loss

Cross-Entropy (CE) loss: a cost function commonly used to evaluate the performance of classification models, particularly those with probabilistic outputs.

- Calculation: CE loss measures the dissimilarity between the predicted probability distribution (y_pred) and the true probability distribution (y_true) over the classes.
- Probabilistic models: it is particularly suitable for models that output probabilities, such as neural networks with a softmax activation in the output layer.
- Minimization: training aims to minimize the CE loss, which brings the predicted probabilities into closer alignment with the true class labels.
- Interpretation: lower CE loss values indicate better model performance; a CE loss of 0 means the predicted and true probability distributions match exactly.
CE Loss Equation

For a single example with $C$ classes, using the y_true / y_pred notation above:

$$\mathrm{CE}(y_{\text{true}}, y_{\text{pred}}) = -\sum_{i=1}^{C} y_{\text{true},i}\,\log y_{\text{pred},i}$$

With one-hot labels, only the term for the correct class survives, so the loss reduces to the negative log of the probability assigned to the true class.
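As a minimal sketch of the calculation described above (the function name and example arrays are illustrative, not from the original post), the mean CE loss over a batch of one-hot labels can be computed with NumPy:

```python
import numpy as np

def cross_entropy_loss(y_true, y_pred, eps=1e-12):
    """Mean cross-entropy between one-hot labels and predicted probabilities."""
    y_pred = np.clip(y_pred, eps, 1.0)  # avoid log(0)
    # Sum over classes per sample, then average over the batch
    return -np.mean(np.sum(y_true * np.log(y_pred), axis=1))

# One-hot true labels for 3 samples over 3 classes (illustrative data)
y_true = np.array([[1, 0, 0],
                   [0, 1, 0],
                   [0, 0, 1]])
# Softmax-style predicted probabilities (each row sums to 1)
y_pred = np.array([[0.7, 0.2, 0.1],
                   [0.1, 0.8, 0.1],
                   [0.2, 0.2, 0.6]])

loss = cross_entropy_loss(y_true, y_pred)
```

Note that a perfect prediction (`y_pred` equal to the one-hot `y_true`) gives a loss of 0, matching the interpretation above.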