Human Evaluation of NLP Systems Workshop

File Type: PDF
Item Type: Proceedings of a Conference
Date: 2021
Access: openAccess
Citation: Belz, Anya; Agarwal, Shubham; Graham, Yvette; Reiter, Ehud; Shimorina, Anastasia (eds.), Proceedings of the Human Evaluation of NLP Systems Workshop, Online, 19 April 2021, Association for Computational Linguistics, 2021
Abstract:
Human evaluation plays an important role in NLP, from large-scale crowd-sourced evaluations to the much smaller experiments routinely encountered in conference papers. With this workshop we wish to create a forum for current human evaluation research: a space for researchers working with human evaluations to exchange ideas and begin to address the issues that human evaluation in NLP currently faces, including aspects of experimental design, reporting standards, meta-evaluation, and reproducibility.
Author's Homepage: http://people.tcd.ie/ygraham
Other Titles: Proceedings of the Human Evaluation of NLP Systems Workshop
Publisher: Association for Computational Linguistics
Type of material: Proceedings of a Conference
Availability: Full text available
ISBN: 978-1-954085-10-7