Evaluation



Submissions will be managed through a CodaLab competition.



Schemes


There will be three evaluation scenarios; a sketch of the corresponding inputs follows the list:

  1. Only plain text is given (Subtasks A, B, C)

  2. Plain text with manually annotated keyphrase boundaries is given (Subtasks B, C)

  3. Plain text with manually annotated keyphrases and their types is given (Subtask C)
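
To make the scenarios concrete, the sketch below shows one plausible way the three input configurations could be represented in code. The class name, field names, and span format are illustrative assumptions, not the official data format of the task.

```python
from dataclasses import dataclass

# Illustrative only: the official corpus distribution may use a different
# format; this dataclass and its fields are assumptions for the sketch.

@dataclass
class EvaluationInput:
    text: str                                             # scenarios 1-3
    keyphrase_spans: list[tuple[int, int]] | None = None  # scenarios 2-3
    keyphrase_types: list[str] | None = None              # scenario 3 only

# Scenario 1: only plain text is given (Subtasks A, B, C).
s1 = EvaluationInput(text="Graphene oxide was reduced via annealing.")

# Scenario 2: gold keyphrase boundaries are also given (Subtasks B, C).
s2 = EvaluationInput(text=s1.text, keyphrase_spans=[(0, 14), (31, 40)])

# Scenario 3: boundaries and types are given (Subtask C).
s3 = EvaluationInput(
    text=s1.text,
    keyphrase_spans=[(0, 14), (31, 40)],
    keyphrase_types=["Material", "Process"],
)
```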



Metrics


System output is matched exactly against the gold standard. The traditional metrics of precision, recall, and F1-score are computed, and the micro-average of these metrics is calculated across publications of the three genres. These metrics are reported for Subtasks A, B, and C.
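
As a concrete illustration of exact matching with micro-averaging, here is a minimal sketch. The span representation (doc_id, start, end, type) and the function name micro_prf are assumptions for this example, not the official scorer.

```python
def micro_prf(gold, predicted):
    """Micro-averaged precision, recall, and F1-score over exact matches.

    gold, predicted: sets of hashable items, e.g. tuples of
    (doc_id, start, end) for keyphrase spans, or
    (doc_id, start, end, type) when types are also evaluated.
    A prediction counts only if it matches a gold item exactly.
    """
    true_positives = len(gold & predicted)
    precision = true_positives / len(predicted) if predicted else 0.0
    recall = true_positives / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Pooling items from all documents (across all three genres) before
# counting is what makes the average "micro": every keyphrase carries
# equal weight, regardless of which publication it comes from.
gold = {("doc1", 0, 12, "Material"), ("doc1", 30, 41, "Process"),
        ("doc2", 5, 18, "Task")}
pred = {("doc1", 0, 12, "Material"), ("doc2", 5, 18, "Process")}
print(micro_prf(gold, pred))  # (0.5, 0.3333..., 0.4)
```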



Additional Resources


Participants may use additional external resources, as long as they declare this at submission time. However, they may not manually annotate the test data.