Monday, October 6, 2008

ROC Curve

On visual inspection the results actually looked better than the confusion matrix values suggested. The reason was that there were multiple SIFT points near each expert markup, and of those only a few were detected (say 2 of 10). That is why the true positive rate came out so low. So instead of taking the true positives directly from the kNN classifier, we use the definition below.

#TP = number of ground truth positives (synapses marked by the Mark lab) with at least one marking done by the classifier within some radius (10 pixels).
#GTP = number of ground truth points (synapses marked by the Mark lab)
=> TP_rate = #TP / #GTP
#FP = number of positions marked by the classifier - #TP.
=> FP_rate = #FP / #{markings by classifier}

These are the definitions of the true positive rate and the false positive rate. Using them, ROC curves were generated for different values of k (1, 3, 5, ..., 51) in the kNN classifier and different weights for the positive class (1.0, 1.1, 1.2, ..., 2.0). The figure below is one such example. The weird looking graph plotted for all ks :)
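The definitions above can be sketched in a few lines of Python. This is a minimal illustration, not the actual evaluation code: the function name `roc_point`, the point lists, and the use of plain Euclidean pixel distance are all my assumptions.

```python
import math

def roc_point(ground_truth, detections, radius=10.0):
    """Compute (TP_rate, FP_rate) per the post's definitions.

    ground_truth: list of (x, y) expert-marked synapse positions
    detections:   list of (x, y) positions marked by the classifier
    A ground-truth point counts as a TP if at least one detection
    lies within `radius` pixels of it (10 pixels in the post).
    """
    def near(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1]) <= radius

    # #TP = ground-truth positives with at least one nearby marking
    tp = sum(1 for g in ground_truth if any(near(g, d) for d in detections))
    tp_rate = tp / len(ground_truth) if ground_truth else 0.0

    # #FP = markings by the classifier minus #TP
    fp = len(detections) - tp
    fp_rate = fp / len(detections) if detections else 0.0
    return tp_rate, fp_rate
```

Note the last comment on this post proposes normalizing #FP by (#SIFT - #GTP) instead of by the number of classifier markings; swapping the denominator in the `fp_rate` line would implement that variant.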

3 comments:

Antonio said...

ROC is all caps...

kannanuv said...

Antonio: Thanks for pointing out. Done.

kannanuv said...

False positive has to be calculated as (#FP/(#SIFT - #GTP))