Extending Linear Regression for Binary Classification
Examples of the extension:
- Email -> Spam/Non-Spam
- Price -> Low/High
- Tumor -> Malignant/Benign
Threshold
- a value that is used to make a binary decision based on a continuous value.
- It is commonly used in binary classification problems, where the model outputs a probability score between 0 and 1 and the threshold determines which of the two classes the input belongs to.
- The threshold value can be adjusted to control the trade-off between precision and recall, which are two important metrics for evaluating classification models.
- The choice of an appropriate threshold depends on the specific problem and the costs associated with false positives and false negatives (see the sketch after this list).
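A minimal sketch of thresholding in plain Python; the probability scores and cutoff values below are made-up numbers for illustration:

```python
# Turning continuous probability scores into binary labels with a threshold.
# Scores and cutoffs are made-up values for illustration.

def classify(scores, threshold=0.5):
    """Return 1 (positive class) when a score meets the threshold, else 0."""
    return [1 if s >= threshold else 0 for s in scores]

scores = [0.10, 0.45, 0.55, 0.92]  # model outputs in [0, 1]

print(classify(scores))                 # default 0.5 cutoff -> [0, 0, 1, 1]
print(classify(scores, threshold=0.9))  # stricter cutoff favors precision -> [0, 0, 0, 1]
print(classify(scores, threshold=0.3))  # looser cutoff favors recall -> [0, 1, 1, 1]
```

Raising the threshold makes the model predict the positive class only when it is very confident (fewer false positives, more false negatives); lowering it does the opposite.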
Activation Function
- An activation function is a mathematical function that is applied to the output of a neural network layer to introduce nonlinearity into the model.
- It is used to determine the output of each neuron in the layer, based on the weighted sum of the inputs and the bias term.
- There are several activation functions commonly used in machine learning, including the sigmoid function, the ReLU function, and the tanh function, each with its own advantages and disadvantages.
- The choice of an appropriate activation function depends on the specific problem and the architecture of the neural network being used (the three common functions are sketched below).
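As a minimal sketch, here are the three activation functions mentioned above, implemented with nothing but the standard library:

```python
import math

def sigmoid(x):
    """Squashes any real number into (0, 1); useful as a probability."""
    return 1.0 / (1.0 + math.exp(-x))

def relu(x):
    """Passes positive inputs through unchanged, zeroes out negatives."""
    return max(0.0, x)

def tanh(x):
    """Squashes any real number into (-1, 1); zero-centered, unlike sigmoid."""
    return math.tanh(x)

for x in (-2.0, 0.0, 2.0):
    print(f"x={x:+.1f}  sigmoid={sigmoid(x):.3f}  relu={relu(x):.3f}  tanh={tanh(x):.3f}")
```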
** A Binary Step Function is the classic example of a hard threshold, whereas a Sigmoid Function is an example of a soft threshold.
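A small sketch of the hard/soft distinction: the step function jumps abruptly at the cutoff, while the sigmoid moves smoothly from 0 to 1 (which is what makes it differentiable and usable with gradient descent):

```python
import math

def binary_step(x):
    """Hard threshold: output jumps from 0 to 1 exactly at x = 0."""
    return 1 if x >= 0 else 0

def sigmoid(x):
    """Soft threshold: output rises smoothly from 0 to 1 around x = 0."""
    return 1.0 / (1.0 + math.exp(-x))

for x in (-4.0, -0.5, 0.0, 0.5, 4.0):
    print(f"x={x:+.1f}  step={binary_step(x)}  sigmoid={sigmoid(x):.3f}")
```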
** The MSE Cost is not used here, as we are dealing with classification, not regression. Instead, we use CE (Cross-Entropy Loss), which I will cover in a later post.
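As a quick preview of that later post, here is a minimal sketch of binary cross-entropy for a single example with true label y in {0, 1} and predicted probability p; the example predictions are made-up values:

```python
import math

def binary_cross_entropy(y_true, y_pred, eps=1e-12):
    """BCE for one example: -[y*log(p) + (1-y)*log(1-p)].
    eps guards against log(0) when a prediction is exactly 0 or 1."""
    p = min(max(y_pred, eps), 1.0 - eps)
    return -(y_true * math.log(p) + (1 - y_true) * math.log(1 - p))

print(binary_cross_entropy(1, 0.9))  # confident and correct -> small loss (~0.105)
print(binary_cross_entropy(1, 0.1))  # confident and wrong -> large loss (~2.303)
```

Unlike MSE, this loss punishes confident wrong predictions very heavily, which is what makes it a better fit for classification.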