Table 4 Results of the evaluation of Algorithm 2 over the Steve.Museum dataset

From: Efficient semi-automated assessment of annotations trustworthiness

| # Tags per reputation | % Training set covered | Accuracy | Precision | Recall | F-measure | Time (sec.) |
|---|---|---|---|---|---|---|
| **Clustered results (cut = 0.3)** | | | | | | |
| 5 | 18% | 0.71 | 0.80 | 0.84 | 0.82 | 707 |
| 10 | 27% | 0.70 | 0.79 | 0.83 | 0.81 | 1004 |
| 15 | 33% | 0.70 | 0.79 | 0.84 | 0.82 | 1197 |
| 20 | 39% | 0.70 | 0.79 | 0.84 | 0.82 | 1286 |
| 25 | 43% | 0.71 | 0.79 | 0.85 | 0.82 | 3080 |
| 30 | 47% | 0.72 | 0.79 | 0.86 | 0.82 | 3660 |

  1. Results of the evaluation of Algorithm 2 over the Steve.Museum dataset for training sets formed by aggregating 5, 10, 15, 20, 25 and 30 reputations per user. We report the percentage of the dataset actually covered by the training set, together with the accuracy, precision, recall and F-measure of our predictions.
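As a sanity check on the table (an illustrative sketch, not code from the paper), the F-measure column is the harmonic mean of the precision and recall columns; for instance, the first clustered row (precision 0.80, recall 0.84) yields 0.82:

```python
def f_measure(precision: float, recall: float) -> float:
    """F1 score: harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# First clustered row of Table 4: precision 0.80, recall 0.84
print(round(f_measure(0.80, 0.84), 2))  # → 0.82
```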