In the previous post, we discussed Bayesian learning.
The Bayesian formula is as follows:
P(y=c∣x) = P(x∣y=c)⋅P(y=c) / P(x)
In this formula:
- P(x∣y=c) is called the likelihood
- P(y=c) is called the prior
- P(x) is called the evidence; it does not depend on the class c, so it can be dropped when we only compare classes
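As a quick sanity check with made-up numbers: suppose there are two classes with priors P(y=1)=0.6 and P(y=2)=0.4, and the observed x has likelihoods P(x∣y=1)=0.2 and P(x∣y=2)=0.5. The unnormalized posteriors are 0.6⋅0.2=0.12 and 0.4⋅0.5=0.20, so class 2 wins even though class 1 has the larger prior.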
Let’s dive into an example to better understand this concept.
Suppose we want to predict the class label for a given input feature vector.
For example, imagine the input features are:
x=[0.5, 0.3, 0.7, 0.8]
When this feature vector is passed into the model, how does the model decide which class it belongs to?
This is our goal: to classify the input x.
According to Bayesian inference, the classification is based on the product of the likelihood and the prior:
P(y=c∣x,θ)∝P(x∣y=c,θ)⋅P(y=c∣θ)
In this formula, θ represents the parameters of the model.
But what exactly does θ mean?
It depends on the specific model you’re using.
- In Naive Bayes, for example, θ includes things like the mean and variance of each feature per class (if you're using Gaussian Naive Bayes), or per-class word probabilities (if you're using Multinomial Naive Bayes); a small sketch follows after this list.
- In general, θ refers to any parameters needed to compute the likelihood and the prior.
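To make this concrete, here is a minimal sketch of what θ could look like for Gaussian Naive Bayes: the per-class feature means and variances, plus the class priors, all estimated from training data. The function name `fit_gaussian_nb` and the dictionary layout are my own illustration, not a standard API.

```python
import numpy as np

def fit_gaussian_nb(X, y):
    """Estimate theta for Gaussian Naive Bayes from training data.

    X: array of shape (n_samples, n_features); y: array of class labels.
    Returns a dict mapping each class c to its prior P(y=c) and the
    per-feature mean and variance used to compute the likelihood.
    """
    theta = {}
    for c in np.unique(y):
        Xc = X[y == c]
        theta[c] = {
            "prior": len(Xc) / len(X),     # P(y = c)
            "mean": Xc.mean(axis=0),       # per-feature mean for class c
            "var": Xc.var(axis=0) + 1e-9,  # per-feature variance (epsilon for stability)
        }
    return theta
```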
So to summarize:
- You compute the likelihood: P(x∣y=c,θ)
- You multiply it by the prior: P(y=c∣θ)
- Then you compare the results across the different classes c, and choose the class with the highest value (a minimal end-to-end sketch follows below)
This is how Bayesian classification works.
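Putting the steps together, here is a hedged end-to-end sketch that reuses the hypothetical `theta` dictionary from above. It scores each class with the log of the prior plus the sum of per-feature Gaussian log-likelihoods (working in log space is just a numerically stable way of multiplying the likelihood by the prior), and returns the class with the highest score.

```python
import numpy as np

def predict_gaussian_nb(x, theta):
    """Return argmax_c [ log P(y=c) + sum_i log N(x_i | mean_c_i, var_c_i) ]."""
    best_class, best_score = None, -np.inf
    for c, params in theta.items():
        # log prior: log P(y = c)
        score = np.log(params["prior"])
        # log likelihood: sum of per-feature Gaussian log densities
        score += np.sum(
            -0.5 * np.log(2 * np.pi * params["var"])
            - (x - params["mean"]) ** 2 / (2 * params["var"])
        )
        if score > best_score:
            best_class, best_score = c, score
    return best_class
```

Here x would be the feature vector from the example above, e.g. `np.array([0.5, 0.3, 0.7, 0.8])`, and `theta` the dictionary returned by the `fit_gaussian_nb` sketch; both names are only for illustration.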