[ML_8] Ensemble: Voting, Bagging, Boosting (+)
Ensemble Method Definition: An ensemble method is a machine learning approach that combines multiple weak learners (individual models) to produce a stronger and more accurate predictive model. The key idea is that by aggregating the predictions of several models, the ensemble can reduce variance, reduce bias, or generalize better than a single model. Representative Types of Ensemble Methods: Vo..
2025.08.22 -
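A minimal sketch of the three ensemble styles the excerpt names, using scikit-learn; the toy data and estimator choices here are illustrative assumptions, not taken from the post.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier, BaggingClassifier, AdaBoostClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=200, random_state=0)  # toy data (assumption)

# Voting: aggregate different model families by majority vote
voting = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("dt", DecisionTreeClassifier(max_depth=3))],
    voting="hard")

# Bagging: many trees trained on bootstrap resamples (mainly reduces variance)
bagging = BaggingClassifier(DecisionTreeClassifier(), n_estimators=50, random_state=0)

# Boosting: sequential weak learners that focus on hard examples (mainly reduces bias)
boosting = AdaBoostClassifier(n_estimators=50, random_state=0)

for name, model in [("voting", voting), ("bagging", bagging), ("boosting", boosting)]:
    print(name, model.fit(X, y).score(X, y))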
[python] class; single underscore, double underscore
When we write a class, we often see _ or __ prefixes on names. But what do they mean? A single underscore (_name) signals that the attribute is intended for use inside the class, but this is not enforced. A double underscore (__name) also signals internal use, but with stronger protection: a name like __hidden triggers name mangling, so __hidden becomes _ClassName__hidden. class nn: def __init__(sel..
2025.08.17 -
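A short illustration of the convention described above; the class and attribute names are made up for the demo.

class Account:
    def __init__(self):
        self._internal = "convention only"   # single underscore: internal by convention
        self.__hidden = "name-mangled"       # double underscore: triggers name mangling

a = Account()
print(a._internal)            # accessible, just discouraged
# print(a.__hidden)           # AttributeError: the attribute name was mangled
print(a._Account__hidden)     # the mangled form is still reachable, so not true privacy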
[ML] MNIST_Hand Digit with CrossEntropy and Matrix for
In the previous post, we trained an MNIST network with the quadratic (MSE) cost and a per-sample backprop loop. In this post, we switch to the cross-entropy (CE) cost and a mini-batch (vectorized) update. With sigmoid + CE (binary) or softmax + CE (multiclass), the output error is δ^L = a^L − y. There is no extra σ′(z) factor multiplying the error at the output layer. This avoids the severe gradient shrinkage you..
2025.08.12 -
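A hedged sketch of the key point: with sigmoid (or softmax) + cross-entropy, the output-layer error for a mini-batch is simply activations minus targets, with no σ′(z) factor. The layer sizes, batch size, and column-per-sample shape convention below are assumptions for illustration.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W = np.random.randn(10, 784) * 0.1   # output-layer weights (toy sizes, assumed)
b = np.zeros((10, 1))
A_prev = np.random.rand(784, 32)     # previous-layer activations, mini-batch of 32
Y = np.eye(10)[:, np.random.randint(0, 10, 32)]  # one-hot targets, one column per sample

Z = W @ A_prev + b
A = sigmoid(Z)

delta = A - Y                 # CE output error: no sigmoid_prime(Z) factor here
dW = delta @ A_prev.T / 32    # weight gradient averaged over the mini-batch
db = delta.mean(axis=1, keepdims=True)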
[ML] MNIST_Hand written code
This code is based on Michael Nielsen's implementation and runs on the MNIST handwritten digit recognition dataset. The key components of this code are the class structure, the feedforward process, stochastic gradient descent (SGD), and backpropagation. The code is as follows:

import numpy as np
import random

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_prime(z):
    """Derivative of..
2025.08.09 -
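For context, a self-contained sketch of the helpers the excerpt cuts off; the sigmoid_prime body follows the standard derivative and the feedforward loop follows Nielsen's overall structure, so treat both as a reconstruction rather than a quote of the post.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_prime(z):
    """Derivative of the sigmoid: sigma(z) * (1 - sigma(z))."""
    return sigmoid(z) * (1 - sigmoid(z))

def feedforward(a, weights, biases):
    """Propagate the input activation a through the network, layer by layer."""
    for w, b in zip(weights, biases):
        a = sigmoid(w @ a + b)
    return a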
[ML] Bayesian Concept learning
Before this chapter, we discussed Bayesian learning. Bayes' formula is as follows: P(y=c∣x) = P(x∣y=c)·P(y=c) / P(x). In this formula, P(x∣y=c) is called the likelihood and P(y=c) is called the prior. Let's dive into an example to better understand this concept. Suppose we want to predict the class label for a given input feature vector. For example, imagine the input features are x = [0.5, 0.3, 0.7, 0.8]. When this..
2025.08.04 -
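A minimal numeric sketch of the formula above; the prior and likelihood values below are invented for illustration, not taken from the post.

import numpy as np

priors = np.array([0.6, 0.4])         # P(y=c) for two classes (assumed values)
likelihoods = np.array([0.02, 0.05])  # P(x | y=c) for the observed x (assumed values)

joint = likelihoods * priors          # numerator of Bayes' rule for each class
posterior = joint / joint.sum()       # divide by P(x), the sum over classes
print(posterior)                      # [0.375 0.625] -> predict the second class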
[Probability] Bayes Rule
According to the definition of conditional probability, P(X = x | Y = y) = p(X = x, Y = y) / p(Y = y). We can rewrite the numerator using the product rule: p(X = x, Y = y) = p(X = x) × p(Y = y | X = x). We can also rewrite the denominator using the law of total probability: p(Y = y) = Σ over x' [ p(X = x') × p(Y = y | X = x') ]. For example, if the event Y = y corresponds to "ate melon", then the total..
2025.08.03
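A hedged numeric sketch of the derivation above; the "ate melon" probabilities are invented for illustration.

# p(X = x') and p(Y = "ate melon" | X = x') for two groups (assumed values)
p_x = {"kid": 0.3, "adult": 0.7}
p_y_given_x = {"kid": 0.5, "adult": 0.1}

# Denominator via the law of total probability:
# p(Y = y) = sum over x' of p(X = x') * p(Y = y | X = x')
p_y = sum(p_x[x] * p_y_given_x[x] for x in p_x)

# Posterior for X = "kid" given Y = "ate melon", by Bayes' rule
posterior_kid = p_x["kid"] * p_y_given_x["kid"] / p_y
print(p_y, posterior_kid)  # 0.22 and roughly 0.682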