I am sorry, I will try to be clearer this time.
I have a binary classification problem, and the metric I want to maximize is the positive predictive value (PPV). Say I trained a classifier whose confusion matrix summarizes to:
accuracy: 74.84%, PPV: 57.46%
This is quite good, as the PPV is 57.46%.
Now have a look at a second example:
accuracy: 74.25%, PPV: 100.0%
Here the PPV is 100% (i.e. perfect from the PPV point of view), so judged by PPV alone this second classifier would be considered better than the first.
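For reference, PPV (precision) is TP / (TP + FP). Here is a minimal sketch of how I compute these numbers, assuming scikit-learn; the labels below are placeholders, not my actual data:

```python
# Minimal sketch: accuracy and PPV from a confusion matrix (scikit-learn).
# The labels below are placeholders, not my actual data.
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.array([0, 0, 0, 1, 1, 1, 1, 0])  # hypothetical ground truth
y_pred = np.array([0, 1, 0, 1, 0, 1, 1, 0])  # hypothetical predictions

# For binary labels {0, 1}, ravel() yields tn, fp, fn, tp in that order.
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()

ppv = tp / (tp + fp)                        # PPV (precision) = TP / (TP + FP)
accuracy = (tp + tn) / (tn + fp + fn + tp)
print(f"accuracy: {accuracy:.2%}, PPV: {ppv:.2%}")
```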
Unfortunately, that 100% comes from just a single positively classified sample, which carries very little statistical weight. There is a high probability that when deployed on validation data the results will be very bad.
And indeed, the results on a validation set are: 1) PPV: 55.9%, 2) PPV: 0.0% (two samples classified as positive, both misclassified).
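To make the sample-size problem concrete: a PPV estimated from a single positive prediction has an enormous confidence interval, while the same kind of point estimate from many positive predictions is much tighter. A rough sketch using a Wilson score interval; the counts are illustrative (chosen to match the PPVs above), not my actual confusion matrices:

```python
# Rough sketch: Wilson score 95% interval for PPV, treating PPV as a
# binomial proportion (true positives out of all positive predictions).
import math

def wilson_interval(successes: int, trials: int, z: float = 1.96):
    """Wilson score interval for a binomial proportion (z=1.96 -> ~95%)."""
    p = successes / trials
    denom = 1 + z**2 / trials
    center = (p + z**2 / (2 * trials)) / denom
    spread = (z / denom) * math.sqrt(p * (1 - p) / trials + z**2 / (4 * trials**2))
    return center - spread, center + spread

# Illustrative counts only: 104/181 gives PPV ~= 57.46%, 1/1 gives 100%.
for name, tp, predicted_pos in [("model 1", 104, 181), ("model 2", 1, 1)]:
    lo, hi = wilson_interval(tp, predicted_pos)
    print(f"{name}: PPV = {tp / predicted_pos:.2%}, 95% CI = [{lo:.2%}, {hi:.2%}]")
# model 1's interval is fairly tight (~[50%, 64%]); model 2's spans
# roughly [21%, 100%], so its 100% point estimate tells us almost nothing.
```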
And here is my question: is there any way to objectively compare these two results?
Thanks in advance.