Today, I integrated the linear classifier code into the cascade architecture. The weak classifier that gets boosted at each node is now a Perceptron instead of a Decision Stump. The source code can be found in the following directory.
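The actual code in that directory is Matlab; as a rough illustration only, the idea of boosting a perceptron weak learner inside one cascade node can be sketched in Python along the following lines (the function names, epoch count, and round count here are all my own illustrative choices, not taken from `ML_Boosting4_Lin1`):

```python
import numpy as np

def train_perceptron(X, y, weights, epochs=20, rng=None):
    """Weighted perceptron weak learner; y in {-1, +1}.
    The weight vector is randomly initialized, so repeated runs
    on the same data can produce different classifiers."""
    rng = rng or np.random.default_rng()
    w = rng.normal(size=X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            if y[i] * (X[i] @ w + b) <= 0:
                w += weights[i] * y[i] * X[i]  # update scaled by boosting weight
                b += weights[i] * y[i]
    return w, b

def adaboost_node(X, y, rounds=10):
    """One cascade node: discrete AdaBoost over perceptron weak learners."""
    n = len(X)
    d = np.full(n, 1.0 / n)           # example weights
    ensemble = []
    for _ in range(rounds):
        w, b = train_perceptron(X, y, d)
        pred = np.sign(X @ w + b)
        err = np.sum(d[pred != y])
        if err >= 0.5:                # weak learner must beat chance
            break
        alpha = 0.5 * np.log((1 - err) / max(err, 1e-10))
        d *= np.exp(-alpha * y * pred)
        d /= d.sum()
        ensemble.append((alpha, w, b))
    return ensemble

def node_predict(ensemble, X):
    """Weighted vote of the boosted perceptrons."""
    score = sum(a * np.sign(X @ w + b) for a, w, b in ensemble)
    return np.sign(score)
```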
/usr/sci/crcnsdata/CRCNS/Synapses/Code/Matlab/ML_Boosting4_Lin1/
The results of the experiments run on various datasets are shown below.
10D Gaussian dataset: The classifier achieved 100% accuracy (with a 0.99 true positive target) with the linear classifier.
Brodatz dataset: The classifier achieved the 0.9 true positive target in a few of the nodes. Note that even though nodes 1 & 2 and nodes 6 & 7 use the same training examples, their performance differs because the perceptron weights are initialized randomly on each run.
Synapses dataset: This experiment failed several times because the weak classifier could not reach even 50% accuracy. I think this is due to the randomization of the weight vectors and of the data points chosen for training. In one run I managed to train a single node but could not construct the subsequent nodes, so there is no graph for this dataset.
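One standard guard against a below-chance weak learner (I do not know whether the cascade code does this, so this is only a suggestion) is to negate its output when its weighted error exceeds 0.5, since a binary classifier that is wrong more than half the time becomes better than chance when flipped:

```python
import numpy as np

def fit_with_flip(train_fn, X, y, weights):
    """Train a linear weak learner via the supplied train_fn
    (returning weight vector w and bias b); if its weighted error
    exceeds 0.5, flip its sign so the error drops below 0.5."""
    w, b = train_fn(X, y, weights)
    pred = np.sign(X @ w + b)
    err = np.sum(weights[pred != y])
    sign = 1.0
    if err > 0.5:
        sign, err = -1.0, 1.0 - err   # flipped classifier's error
    return (sign, w, b), err
```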
The predicted Y values of each node classifier took on only 5 or 6 unique values. They should be closer to continuous, since the sample set is quite large. So in the next experiment the moment values were removed from the dataset, the experiment was repeated with a very low true positive rate of 0.7, and the following graphs show the result of this approach.
The next step will be to verify the moments again and normalize their values before starting work on any other attribute.
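For the normalization step, a minimal z-score sketch (in Python rather than the project's Matlab, and with illustrative function names of my own) would fit the column statistics on the training data only and reuse them elsewhere:

```python
import numpy as np

def zscore_fit(X_train):
    """Column-wise mean and std computed from training data only."""
    mu = X_train.mean(axis=0)
    sigma = X_train.std(axis=0)
    sigma[sigma == 0] = 1.0           # guard against constant features
    return mu, sigma

def zscore_apply(X, mu, sigma):
    """Apply the stored training statistics to any data split."""
    return (X - mu) / sigma
```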