Table 2 Results of the evaluation of Algorithm 1 over the Steve.Museum dataset

From: Efficient semi-automated assessment of annotations trustworthiness

# Tags per reputation   % Training set covered   Accuracy   Precision   Recall   F-measure   Time (sec.)
5                       18%                      0.68       0.79        0.80     0.80        1254
10                      27%                      0.70       0.79        0.83     0.81        1957
15                      33%                      0.71       0.80        0.84     0.82        2659
20                      39%                      0.70       0.79        0.84     0.81        2986
25                      43%                      0.71       0.79        0.85     0.82        3350
30                      47%                      0.72       0.81        0.85     0.83        7598
  1. Results of the evaluation of Algorithm 1 over the Steve.Museum dataset for training sets formed by aggregating 5, 10, 15, 20, 25, and 30 reputations per user. We report the percentage of the dataset actually covered by each training set, the accuracy, precision, recall, and F-measure of our predictions, and the computation time in seconds.
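
For readers who want to sanity-check the table, the reported F-measure values are consistent with the standard F1 score, i.e. the harmonic mean of precision and recall. This is an assumption: the table does not state a weighting, so a balanced F1 is taken here. A minimal Python sketch recomputing the F-measure column from the published precision and recall:

```python
# Sanity check: recompute the F-measure column of Table 2 from the
# published precision and recall, assuming the standard (balanced) F1
# score; the table itself does not state a different weighting.

def f1(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# (precision, recall, reported F-measure) per row of Table 2
rows = [
    (0.79, 0.80, 0.80),  # 5 tags per reputation
    (0.79, 0.83, 0.81),  # 10
    (0.80, 0.84, 0.82),  # 15
    (0.79, 0.84, 0.81),  # 20
    (0.79, 0.85, 0.82),  # 25
    (0.81, 0.85, 0.83),  # 30
]

for p, r, reported in rows:
    print(f"P={p:.2f}  R={r:.2f}  F1={f1(p, r):.2f}  (reported {reported:.2f})")
```

All rows agree to two decimals except the first, which recomputes to 0.79 against a reported 0.80; that gap is within the rounding of the two-decimal precision and recall values the recomputation starts from.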