III. LEARNING CLASSIFIER
SVM was first proposed by [20] and has been successfully applied to many classification problems, often producing better results than other algorithms. Suppose we have a dataset {xi, yi}, i = 1, ..., n, where xi is the input feature vector and yi is the class label. SVM finds a hyperplane such that samples with yi = -1 fall into the region f(x) < 0 and samples with yi = +1 fall into the region f(x) > 0. The linear function f(x) can be expressed as

f(x) = w · x + b

where w is the normal vector of the hyperplane, x is the input feature vector, and b is the offset of the hyperplane from the origin (the perpendicular distance from the origin to the hyperplane is |b|/||w||).
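As a minimal sketch of this decision rule, the snippet below evaluates f(x) = w · x + b for a linear SVM and assigns the class from its sign; the parameter values are hypothetical stand-ins for a trained model, not values from this work.

```python
import numpy as np

# Hypothetical trained parameters: w is the hyperplane normal, b the offset.
w = np.array([2.0, -1.0])
b = 0.5

def svm_decision(x, w, b):
    """Linear SVM decision value f(x) = w . x + b; its sign gives the class."""
    return np.dot(w, x) + b

def classify(x, w, b):
    """Assign class +1 if f(x) >= 0, otherwise class -1."""
    return 1 if svm_decision(x, w, b) >= 0 else -1
```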
SVM is basically a two-class classifier, but its formulation can be changed to allow multiclass classification. Most commonly, the dataset is divided into two parts "intelligently" in different ways, and a separate SVM is trained for each way of dividing it. Multiclass classification is then done by combining the outputs of all the SVMs.
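One common way to combine binary SVMs into a multiclass decision is one-vs-rest: each class gets its own binary SVM, and the class whose SVM scores highest wins. The sketch below assumes hypothetical, already-trained linear models (w, b) per class; the part names merely echo the classes used later in this work.

```python
import numpy as np

# Hypothetical per-class linear SVMs (w, b), each trained one-vs-rest.
models = {
    "full_body": (np.array([1.0, 0.0]), -0.2),
    "torso":     (np.array([0.0, 1.0]), -0.2),
}

def predict_multiclass(x, models):
    """Combine binary SVM outputs: the class whose SVM scores highest wins."""
    scores = {c: np.dot(w, x) + b for c, (w, b) in models.items()}
    return max(scores, key=scores.get)
```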
In this work, images from the TUD-Training dataset are used in the training phase. The dataset contains 400 training images, and the persons in each image do not overlap. Two example training images are shown in Figure 2. Each person in a training image is split into four parts: full body, torso, left foot, and right foot. HOG feature extraction is performed for each component separately, and the results are given as positive examples to the LIBSVM classifier. Negative samples are randomly taken from non-human images in the first stage of learning. LIBSVM is used to train a classifier for each of the four body parts. Using an SVM with a Gaussian or polynomial kernel improves the results; however, the computational cost and memory requirements increase.
Figure 2 Example of training image
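The per-part training pipeline above can be sketched as follows. The HOG extractor here is a crude gradient-histogram placeholder (a real system would use a proper HOG implementation, e.g. from OpenCV or scikit-image), and all function names are illustrative, not the authors' code.

```python
import numpy as np

PARTS = ["full_body", "torso", "left_foot", "right_foot"]

def extract_hog(patch):
    """Placeholder feature extractor: a histogram of horizontal gradient
    magnitudes standing in for a real HOG descriptor."""
    gx = np.diff(patch, axis=1)                      # horizontal gradients
    hist, _ = np.histogram(np.abs(gx), bins=8, range=(0.0, 1.0))
    return hist / max(hist.sum(), 1)                 # normalize the histogram

def build_training_set(part_patches, negative_patches):
    """Positive features come from one body part, negatives from non-human
    images, matching the first-stage LIBSVM training described above."""
    X = [extract_hog(p) for p in part_patches] + \
        [extract_hog(n) for n in negative_patches]
    y = [1] * len(part_patches) + [-1] * len(negative_patches)
    return np.array(X), np.array(y)
```

In a full system, one such (X, y) set would be built and trained per part in PARTS.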
Since the outputs of the first SVM classifier have a very high false-positive rate for all four classes, a second classifier is applied to all training images to stabilize detection and reduce the false positives. The results are then compared with the training images. If the similarity between a detected part and the real, known parts falls below a given threshold (determined experimentally), that part is considered a false positive. These false positives are used as negative examples for the next stage of learning.
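The false-positive mining step can be sketched as below. The paper does not specify its similarity measure, so intersection-over-union of bounding boxes is assumed here purely for illustration, and the threshold value is a placeholder for the experimentally chosen one.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def mine_false_positives(detections, ground_truth, threshold=0.5):
    """Detections whose best overlap with every true part stays below the
    threshold are kept as hard negatives for the next training stage."""
    return [d for d in detections
            if max((iou(d, g) for g in ground_truth), default=0.0) < threshold]
```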
For the second-stage SVM training, we considered the following solution (Figure 3).

A. Second SVM with different negative examples for each class
In this method, the false positives of each class are obtained separately. Then, for each part (full body, torso, left foot, and right foot), a two-class SVM is trained. The negative instances for each class are the false positives associated with that class, and the positive instances are the positive images of that part. Thus, four two-class SVMs are used to reduce the false positives of the four parts. Each SVM is applied to the output of the first-stage SVM associated with its class.
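The resulting two-stage cascade can be sketched as follows: each part's second-stage SVM filters the candidate detections passed on by the first stage for that same part. The linear model here is a hypothetical stand-in for a trained second-stage classifier.

```python
import numpy as np

# Hypothetical second-stage linear SVMs, one per part, each trained with
# that part's first-stage false positives as negative examples.
second_stage = {
    "torso": (np.array([1.0, -1.0]), 0.0),
}

def filter_detections(part, candidates, second_stage):
    """Keep only first-stage candidate feature vectors that the
    part-specific second-stage SVM also scores as positive."""
    w, b = second_stage[part]
    return [x for x in candidates if np.dot(w, x) + b >= 0]
```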
The results show the high precision of this method compared with previous work.