|Title||Personalised Automated Assessments|
|Publication Type||Conference Paper|
|Year of Publication||2015|
|Authors||Gutierrez P, Osman N, Sierra C|
|Conference Name||Proc. of the First International Workshop on AI and Feedback|
|Conference Location||Buenos Aires|
Consider an evaluator, or assessor, who needs to assess a large body of information. For instance, think of a tutor in a massive open online course with thousands of enrolled students, a senior program committee member in a large peer-review process who must decide the final marks of reviewed papers, or a user in an e-commerce scenario who needs to form an opinion about products evaluated by others. When assessing a large number of objects, evaluating them all is often simply unfeasible, and one may need to rely on the opinions of others. In this paper we provide a model that uses peer assessments to generate expected assessments and tunes them for a particular assessor. Furthermore, we provide a measure of the uncertainty of our computed assessments, and a ranking of the objects that should be assessed next in order to decrease the overall uncertainty of the calculated assessments.
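The abstract does not detail the model itself, but the idea of aggregating peer assessments into expected assessments with an uncertainty measure can be illustrated with a minimal sketch. The sketch below is an assumption-laden toy, not the paper's actual model: it uses per-peer trust weights (standing in for the paper's tuning to a particular assessor), a trust-weighted mean as the expected assessment, trust-weighted variance as the uncertainty, and a ranking that proposes the most uncertain objects for direct assessment first.

```python
def expected_assessments(peer_marks, trust):
    """For each object, combine peer marks into an expected assessment
    (trust-weighted mean) and an uncertainty score (weighted spread).
    `peer_marks`: {object: {peer: mark}}; `trust`: {peer: weight > 0}.
    These names and the aggregation rule are illustrative assumptions,
    not the model from the paper."""
    results = {}
    for obj, marks in peer_marks.items():
        total = sum(trust[p] for p in marks)
        expected = sum(trust[p] * m for p, m in marks.items()) / total
        # Uncertainty: trust-weighted variance of peer marks around the mean.
        uncertainty = sum(trust[p] * (m - expected) ** 2
                          for p, m in marks.items()) / total
        results[obj] = (expected, uncertainty)
    return results

def next_to_assess(results):
    """Rank objects so the most uncertain ones would be assessed first,
    mirroring the idea of reducing overall uncertainty fastest."""
    return sorted(results, key=lambda obj: results[obj][1], reverse=True)
```

For example, with two equally trusted peers, an object whose peer marks disagree (8 and 6) gets the same expected mark as one where they agree (7 and 7), but a higher uncertainty, so it is ranked first for direct assessment.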