Kernel Density Estimation
We want a non-parametric method to estimate a distribution given a dataset; a histogram can be seen as a simple form of kernel density estimation. To formulate the problem more generally, a vector \(x\) drawn from a distribution \(p(x)\) falls into a given region \(R\) with probability \(P = \int_R p(x') dx'\). Suppose we draw \(N\) vectors from the distribution; the probability that exactly \(k\) of them fall into the region is \(P(k) = {N \choose k} P^k (1-P)^{N-k}\). From the properties of the binomial distribution, \(E[k/N] = P\) and the variance of \(k/N\) shrinks as \(N\) grows, so we can estimate \(P \approx \frac{k}{N}\). On the other hand, if the region is very small, we can approximate \(P \approx p(x) V\), where \(V\) is the volume enclosed by region \(R\). Combining the two gives the basic estimator \(p(x) = \frac{k}{NV}\), where \(V\) is the volume surrounding \(x\), \(N\) is the total number of samples, and \(k\) is the number of samples inside \(V\).
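As a quick sanity check of the \(p(x) \approx \frac{k}{NV}\) idea, here is a minimal sketch in one dimension: count how many standard-normal samples fall in a small interval of width \(h\) around a point \(x_0\) and compare with the true density. The choices of \(x_0\), \(h\), and \(N\) are illustrative, not from the text.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000
samples = rng.standard_normal(N)

x0, h = 0.5, 0.1                            # estimation point and region width (V = h in 1D)
k = np.sum(np.abs(samples - x0) <= h / 2)   # number of samples inside the region R
estimate = k / (N * h)                      # p(x0) ~ k / (N V)

true_density = np.exp(-x0**2 / 2) / np.sqrt(2 * np.pi)
print(f"estimate = {estimate:.4f}, true = {true_density:.4f}")
```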
Parzen window
To count how many samples fall inside \(V\), we can define a kernel function \(K(u) = 1\) if \(|u_j| \le \frac{1}{2}\) for every coordinate \(j\), and \(0\) otherwise. This kernel is known as the Parzen window. The quantity \(K(\frac{x - x_n}{h})\) equals one if \(x_n\) lies inside the hypercube of side \(h\) centered at \(x\), and zero otherwise. Therefore, \(p_{kde}(x) = \frac{1}{Nh^D} \sum_{n=1}^N K(\frac{x - x_n}{h})\). Taking expectations, \(E[p_{kde}(x)] = \frac{1}{Nh^D} \sum_{n=1}^N E[K(\frac{x - x_n}{h})] = \frac{1}{h^D} E[K(\frac{x - x_n}{h})] = \frac{1}{h^D} \int K(\frac{x - x'}{h}) p(x') dx'\). We can see that the expectation of \(p_{kde}(x)\) is a convolution of the true density \(p(x)\) with the kernel function. The wider \(h\) is, the smoother the estimate \(p_{kde}(x)\) is.
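A minimal sketch of the Parzen-window estimator \(p_{kde}(x) = \frac{1}{Nh^D} \sum_n K(\frac{x - x_n}{h})\) with the hypercube kernel above, assuming data stored as an \(N \times D\) array; the function and variable names (`parzen_kde`, `X`, `h`) are my own, not from the text.

```python
import numpy as np

def parzen_kde(x, X, h):
    """Estimate the density at point x (shape (D,)) from samples X (shape (N, D))."""
    N, D = X.shape
    u = (x - X) / h                              # shape (N, D)
    inside = np.all(np.abs(u) <= 0.5, axis=1)    # K((x - x_n)/h) for each sample
    return inside.sum() / (N * h**D)

# Example: estimate a 2-D standard normal density at the origin.
rng = np.random.default_rng(0)
X = rng.standard_normal((50_000, 2))
print(parzen_kde(np.zeros(2), X, h=0.3))         # true value is 1/(2*pi) ~ 0.159
```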
We can measure the MSE at the estimation point \(x\), given by
\(E[(p_{kde}(x) - p(x))^2] = (E[p_{kde}(x)] - p(x))^2 + \mathrm{var}(p_{kde}(x)) = \mathrm{bias}^2 + \mathrm{variance}\).
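A quick Monte-Carlo check of this decomposition at a single point: repeatedly draw datasets, compute the Parzen estimate at \(x_0\), and verify that \(\mathrm{bias}^2 + \mathrm{variance}\) matches the empirical MSE. The setup (1-D standard normal, box kernel, values of \(N\), \(h\), and the number of trials) is an illustrative choice of mine.

```python
import numpy as np

rng = np.random.default_rng(0)
x0, h, N, trials = 0.5, 0.4, 200, 2000
true = np.exp(-x0**2 / 2) / np.sqrt(2 * np.pi)

estimates = np.empty(trials)
for t in range(trials):
    X = rng.standard_normal(N)
    estimates[t] = np.mean(np.abs((x0 - X) / h) <= 0.5) / h   # p_kde(x0) with box kernel

bias2 = (estimates.mean() - true) ** 2
variance = estimates.var()
mse = np.mean((estimates - true) ** 2)
print(f"bias^2 + variance = {bias2 + variance:.6f}, MSE = {mse:.6f}")
```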
Bandwidth selection
To select the bandwidth, we may use subjective methods: plot the density estimate for several values of \(h\) and pick the one that looks reasonable. But when we are dealing with high-dimensional data, it is better to minimize the mean integrated squared error, \(h_{MISE} = \arg\min_h E[\int (p_{kde}(x)- p(x))^2 dx]\). If we assume the true distribution is Gaussian and we use a Gaussian kernel, it can be shown that the optimal value of \(h\) is \(h = 1.06 \sigma N^{-\frac{1}{5}}\). We can also use a cross-validation approach to compute the best bandwidth: leave one sample out at a time and choose the \(h\) that maximizes the pseudo-likelihood of the left-out samples.
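A minimal sketch of the leave-one-out approach in 1-D with a Gaussian kernel: for each candidate \(h\), score every sample by the density the other \(N-1\) samples assign to it, and keep the \(h\) that maximizes the log pseudo-likelihood; the Gaussian rule of thumb \(h = 1.06 \sigma N^{-1/5}\) is printed for comparison. The candidate grid and data are illustrative choices of mine.

```python
import numpy as np

def loo_log_pseudo_likelihood(X, h):
    N = X.shape[0]
    diff = (X[:, None] - X[None, :]) / h                 # pairwise (x_i - x_j)/h
    K = np.exp(-0.5 * diff**2) / np.sqrt(2 * np.pi)      # Gaussian kernel values
    np.fill_diagonal(K, 0.0)                             # leave each point out of its own estimate
    loo_density = K.sum(axis=1) / ((N - 1) * h)
    return np.sum(np.log(loo_density))

rng = np.random.default_rng(0)
X = rng.standard_normal(500)

candidates = np.linspace(0.05, 1.0, 40)
h_cv = max(candidates, key=lambda h: loo_log_pseudo_likelihood(X, h))
h_rule = 1.06 * X.std() * X.shape[0] ** (-1 / 5)         # Gaussian rule of thumb from the text
print(f"h_cv = {h_cv:.3f}, h_rule = {h_rule:.3f}")
```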
KNN density estimation
When we instead fix \(k\) and find the smallest \(V\) that contains \(k\) samples, we obtain the KNN density estimate \(p(x) = \frac{k}{N c_D R_k^D(x)}\), where \(c_D = \frac{2 \pi^{D/2}}{D \Gamma(D/2)}\) is the volume of the unit hypersphere in \(D\) dimensions and \(R_k(x)\) is the distance between the estimation point \(x\) and its \(k\)-th closest neighbor, so that \(c_D R_k^D(x)\) is the volume of the smallest hypersphere around \(x\) containing \(k\) samples.
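A minimal sketch of this estimator, \(p(x) = \frac{k}{N c_D R_k^D(x)}\); the function name `knn_density` is mine, not from the text.

```python
import numpy as np
from math import gamma, pi

def knn_density(x, X, k):
    """KNN density estimate at x (shape (D,)) from samples X (shape (N, D))."""
    N, D = X.shape
    dists = np.linalg.norm(X - x, axis=1)
    R_k = np.sort(dists)[k - 1]                     # distance to the k-th closest sample
    c_D = 2 * pi ** (D / 2) / (D * gamma(D / 2))    # volume of the unit hypersphere in D dims
    return k / (N * c_D * R_k ** D)

# Example: 2-D standard normal, estimate the density at the origin (true value ~ 0.159).
rng = np.random.default_rng(0)
X = rng.standard_normal((20_000, 2))
print(knn_density(np.zeros(2), X, k=50))
```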
However, KNN density estimation is not ideal: it is very prone to local noise, and because \(R_k(x)\) is not differentiable, the density estimate has discontinuities. On the other hand, it gives a very simple Bayes classifier. The discriminant function of a Bayes classifier is the posterior \(p(w_i \mid x)\), and plugging in the KNN density estimates gives
\(p(w_i \mid x) = \frac{p(x \mid w_i) p(w_i)}{p(x)} = \frac{\frac{k_i}{N_i V} \cdot \frac{N_i}{N}}{\frac{k}{NV}} = \frac{k_i}{k}\). This is very intuitive: given an unlabeled sample, KNN assigns it to the class that appears most frequently among its \(k\) nearest neighbors.
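A minimal sketch of the resulting classifier, using \(p(w_i \mid x) \approx \frac{k_i}{k}\) to pick the class most frequent among the \(k\) nearest neighbors; the function and variable names are mine, not from the text.

```python
import numpy as np

def knn_classify(x, X, y, k):
    """X: (N, D) training points, y: (N,) integer labels, x: (D,) query point."""
    dists = np.linalg.norm(X - x, axis=1)
    nearest = np.argsort(dists)[:k]                # indices of the k nearest neighbors
    counts = np.bincount(y[nearest])               # k_i for each class
    return counts.argmax()                         # class with the largest k_i / k

# Example: two Gaussian blobs labeled 0 and 1.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 1, (100, 2)), rng.normal(2, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)
print(knn_classify(np.array([1.5, 1.5]), X, y, k=5))   # -> 1
```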