dc.contributor.editor: Anya Belz; Shubham Agarwal; Yvette Graham; Ehud Reiter; Anastasia Shimorina
dc.date.accessioned: 2022-03-09T09:26:29Z
dc.date.available: 2022-03-09T09:26:29Z
dc.date.created: 19/4/21
dc.date.issued: 2021
dc.date.submitted: 2021
dc.identifier.citation: Belz, Anya; Agarwal, Shubham; Graham, Yvette; Reiter, Ehud; Shimorina, Anastasia (eds.), Proceedings of the Human Evaluation of NLP Systems Workshop, Online, 19 April 2021, Association for Computational Linguistics, 2021
dc.identifier.isbn: 978-1-954085-10-7
dc.identifier.other: N
dc.description.abstract: Human evaluation plays an important role in NLP, from large-scale crowd-sourced evaluations to the much smaller experiments routinely encountered in conference papers. With this workshop we wish to create a forum for current human evaluation research: a space for researchers working with human evaluations to exchange ideas and begin to address the issues that human evaluation in NLP currently faces, including aspects of experimental design, reporting standards, meta-evaluation and reproducibility.
dc.language.iso: en
dc.publisher: Association for Computational Linguistics
dc.rights: Y
dc.title: Human Evaluation of NLP Systems Workshop
dc.title.alternative: Proceedings of the Human Evaluation of NLP Systems Workshop
dc.type: Proceedings of a Conference
dc.type.supercollection: scholarly_publications
dc.identifier.peoplefinderurl: http://people.tcd.ie/ygraham
dc.identifier.rssinternalid: 239137
dc.rights.ecaccessrights: openAccess
dc.identifier.orcid_id: 0000-0001-6741-4855
dc.identifier.uri: http://hdl.handle.net/2262/98275

