dc.contributor.author | Graham, Yvette | |
dc.date.accessioned | 2021-04-20T17:05:07Z | |
dc.date.available | 2021-04-20T17:05:07Z | |
dc.date.created | 1/7/13 | en |
dc.date.issued | 2013 | |
dc.date.submitted | 2013 | en |
dc.identifier.citation | Yvette Graham, Timothy Baldwin, Alistair Moffat, Justin Zobel, Continuous Measurement Scales in Human Evaluation of Machine Translation, Proceedings of the 7th Linguistic Annotation Workshop and Interoperability with Discourse, 7th Linguistic Annotation Workshop and Interoperability with Discourse, Sofia, Bulgaria, 1/7/13, Association for Computational Linguistics, 2013, 33 - 41 | en |
dc.identifier.other | Y | |
dc.description.abstract | We explore the use of continuous rating scales for human evaluation in the context of machine translation evaluation, comparing two assessor-intrinsic quality-control techniques that do not rely on agreement with expert judgments. Experiments employing Amazon's Mechanical Turk service show that quality-control techniques made possible by the use of the continuous scale yield dramatic improvements in intra-annotator agreement of up to +0.101 in the kappa coefficient, with inter-annotator agreement increasing by up to +0.144 when additional standardization of scores is applied. | en |
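The abstract mentions "additional standardization of scores" as the step that further improves inter-annotator agreement. A minimal sketch of one common form of such standardization, per-annotator z-scoring of continuous ratings, is shown below; the function name and the example ratings are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch: per-annotator z-score standardization of
# continuous (e.g. 0-100) ratings, so that each assessor's scores are
# expressed relative to their own mean and spread. The data are
# invented for illustration.
from statistics import mean, stdev

def standardize(scores_by_annotator):
    """Map each annotator's raw ratings to z-scores using that
    annotator's own mean and standard deviation."""
    standardized = {}
    for annotator, scores in scores_by_annotator.items():
        mu, sigma = mean(scores), stdev(scores)
        standardized[annotator] = [(s - mu) / sigma for s in scores]
    return standardized

ratings = {
    "worker_a": [70.0, 80.0, 90.0],  # a lenient rater
    "worker_b": [20.0, 30.0, 40.0],  # a harsh rater
}
# After standardization both raters produce identical z-scores,
# removing the systematic offset between them.
print(standardize(ratings))
```

This kind of normalization removes systematic differences in how leniently individual crowd workers use the scale, which is one plausible reason standardization raises inter-annotator agreement.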
dc.format.extent | 33 | en |
dc.format.extent | 41 | en |
dc.language.iso | en | en |
dc.publisher | Association for Computational Linguistics | en |
dc.rights | Y | en |
dc.title | Continuous Measurement Scales in Human Evaluation of Machine Translation | en |
dc.title.alternative | Proceedings of the 7th Linguistic Annotation Workshop and Interoperability with Discourse | en |
dc.title.alternative | 7th Linguistic Annotation Workshop and Interoperability with Discourse | en |
dc.type | Conference Paper | en |
dc.type.supercollection | scholarly_publications | en |
dc.type.supercollection | refereed_publications | en |
dc.identifier.peoplefinderurl | http://people.tcd.ie/ygraham | |
dc.identifier.rssinternalid | 227763 | |
dc.rights.ecaccessrights | openAccess | |
dc.identifier.orcid_id | 0000-0001-6741-4855 | |
dc.identifier.uri | https://www.aclweb.org/anthology/W13-2305 | |
dc.identifier.uri | http://hdl.handle.net/2262/96112 | |