| Title | Practical reasoning using values: an argumentative approach based on a hierarchy of values |
| Publication Type | Journal Article |
| Year of Publication | 2019 |
| Authors | Teze JC, Perello-Moragues A, Godo L, Noriega P |
| Journal | Annals of Mathematics and Artificial Intelligence |
Values are at the heart of human decision-making. They are used to decide whether something or some state of affairs is good or not, and they are also used to address the moral dilemma of the right thing to do under given circumstances. Both uses are present in many everyday situations, from the design of a public policy to the negotiation of employee benefit packages. Both uses of values are especially relevant when one intends to design or validate artificially intelligent systems so that they behave in a morally correct way. In real life, the choice of policy components or the agreed-upon benefit package are processes that involve argumentation. Likewise, the design and deployment of value-driven artificial entities may be well served by embedding practical reasoning capabilities in these entities, or by using argumentation in their design and certification processes. In this paper, we propose a formal framework to support the choice of actions of a value-driven agent and to arrange them into plans that reflect the agent's preferences. The framework is based on defeasible argumentation. It presumes that the agent's values are partially ordered in a hierarchy that is used to resolve conflicts between incommensurable values.
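To make the core idea concrete, here is a minimal sketch, not the paper's formalism, of how a partial order over values can arbitrate between two conflicting arguments: the argument promoting the strictly higher value wins, and incomparable values leave the conflict unresolved. All names (`HIERARCHY`, `prefers`, `resolve`) and the example values are illustrative assumptions.

```python
# Value hierarchy as a set of edges (higher, lower): the first value
# is strictly preferred to the second. Preference is the transitive
# closure of these edges; values not connected are incomparable.
HIERARCHY = {
    ("safety", "efficiency"),
    ("efficiency", "cost"),
}

def prefers(a, b, edges=HIERARCHY):
    """True iff value `a` is strictly preferred to `b` (transitive closure)."""
    frontier, seen = {a}, set()
    while frontier:
        v = frontier.pop()
        seen.add(v)
        for hi, lo in edges:
            if hi == v:
                if lo == b:
                    return True
                if lo not in seen:
                    frontier.add(lo)
    return False

def resolve(arg1, arg2):
    """Given two conflicting arguments (action, promoted_value), return the
    winning argument, or None when the promoted values are incomparable."""
    _, v1 = arg1
    _, v2 = arg2
    if prefers(v1, v2):
        return arg1
    if prefers(v2, v1):
        return arg2
    return None  # incommensurable values: the hierarchy cannot decide

# safety > efficiency in the hierarchy, so the first argument prevails:
print(resolve(("stay home", "safety"), ("go out", "efficiency")))
```

Note that `resolve` returns `None` for values outside any chain of the hierarchy, mirroring the role a partial (rather than total) order plays in the framework: some conflicts between incommensurable values are simply not settled by value preference alone.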