3 results for Review Model

in AMS Tesi di Laurea - Alm@DL - Università di Bologna


Relevance:

30.00%

Publisher:

Abstract:

Group work allows participants to pool their thoughts and examine problems from several angles. In these settings it is possible to attempt things that an individual could not achieve alone, combining a variety of abilities and knowledge to tackle larger and more complex challenges. This is why collaborative work is becoming increasingly widespread as a way to solve complex innovation dilemmas. Since innovation is not a tangible thing, most innovation teams tend to base their decisions on performance KPIs such as forecasted engagement, projected profitability, required investments, cultural impact, and so on. Have you ever wondered why innovation group processes sometimes produce decisions that are not the optimal meeting point of all the KPIs? Has the decision been influenced by other factors? Some researchers attribute part of this phenomenon to the emotions arising in group-based interaction between participants. I develop a literature review split into three parts: first, I consider some theories of emotion from an individual perspective; secondly, I provide a wider view of theories of collective interaction; lastly, I present some recent empirical studies of collective interaction. After the theoretical and empirical gaps have been addressed, the study moves forward from a methodological point of view with the Circumplex Model, the model I used to evaluate emotions in my research. This model has been applied to the SUGAR project, the largest design thinking academy worldwide.
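For readers unfamiliar with it, the Circumplex Model places each emotion as a point on a two-dimensional plane with valence (pleasant vs. unpleasant) on one axis and arousal (activation vs. deactivation) on the other. The following is only a minimal illustrative sketch of that idea, assuming rough, commonly cited coordinates and made-up function names; it is not code or data from the thesis.

    # Minimal sketch of the Circumplex Model of affect: each emotion is a point
    # on a valence (x) / arousal (y) plane. Coordinates are rough illustrative
    # placements, not values taken from the thesis.
    from math import atan2, degrees

    CIRCUMPLEX = {
        "excited":    ( 0.7,  0.7),
        "happy":      ( 0.9,  0.2),
        "calm":       ( 0.6, -0.6),
        "bored":      (-0.5, -0.6),
        "frustrated": (-0.7,  0.5),
        "tense":      (-0.4,  0.8),
    }

    def group_affect(observed_emotions):
        """Average the (valence, arousal) coordinates of the emotions observed
        during a group interaction, giving one point on the circumplex."""
        points = [CIRCUMPLEX[e] for e in observed_emotions]
        valence = sum(p[0] for p in points) / len(points)
        arousal = sum(p[1] for p in points) / len(points)
        return valence, arousal

    if __name__ == "__main__":
        v, a = group_affect(["excited", "frustrated", "happy"])
        angle = degrees(atan2(a, v))  # angular position around the circumplex
        print(f"valence={v:.2f}, arousal={a:.2f}, angle={angle:.1f} deg")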

Relevance:

30.00%

Publisher:

Abstract:

Planning is an important sub-field of artificial intelligence (AI) focusing on letting intelligent agents deliberate on the most adequate course of action to attain their goals. Thanks to the recent boost in the number of critical domains and systems which exploit planning for their internal procedures, there is an increasing need for planning systems to become more transparent and trustworthy. Along this line, planning systems are now required to produce not only plans but also explanations about those plans, or the way they were attained. To address this issue, a new research area is emerging in the AI panorama: eXplainable AI (XAI), within which explainable planning (XAIP) is a pivotal sub-field. As a recent domain, XAIP is far from mature. No consensus has been reached in the literature about what explanations are, how they should be computed, and what they should explain in the first place. Furthermore, existing contributions are mostly theoretical, and software implementations are rarely more than preliminary. To overcome such issues, in this thesis we design an explainable planning framework bridging the gap between theoretical contributions from literature and software implementations. More precisely, taking inspiration from the state of the art, we develop a formal model for XAIP, and the software tool enabling its practical exploitation. Accordingly, the contribution of this thesis is four-folded. First, we review the state of the art of XAIP, supplying an outline of its most significant contributions from the literature. We then generalise the aforementioned contributions into a unified model for XAIP, aimed at supporting model-based contrastive explanations. Next, we design and implement an algorithm-agnostic library for XAIP based on our model. Finally, we validate our library from a technological perspective, via an extensive testing suite. Furthermore, we assess its performance and usability through a set of benchmarks and end-to-end examples.
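To give a concrete feel for model-based contrastive explanation, the sketch below shows one common way the question "why action A rather than foil B?" can be answered: force the foil into the model, re-plan, and compare the resulting plan with the original. All names (Plan, Planner, contrastive_explanation) and the constraint encoding are hypothetical illustrations, not the API of the library developed in the thesis.

    # Hypothetical sketch of a model-based contrastive explanation for planning:
    # answer "why not use the foil action?" by re-planning with the foil forced
    # in and comparing costs. Names and types are illustrative only.
    from dataclasses import dataclass
    from typing import Callable, List, Optional

    @dataclass
    class Plan:
        actions: List[str]
        cost: float

    # A planner is any function from (model, extra constraints) to a plan,
    # which keeps the explanation procedure algorithm-agnostic.
    Planner = Callable[[dict, List[str]], Optional[Plan]]

    def contrastive_explanation(model: dict, planner: Planner,
                                original: Plan, foil: str) -> str:
        """Explain why `original` was preferred over any plan using `foil`."""
        alternative = planner(model, [f"must-use:{foil}"])
        if alternative is None:
            return f"No valid plan uses '{foil}': it violates the planning model."
        if alternative.cost > original.cost:
            return (f"Using '{foil}' is possible but costs {alternative.cost}, "
                    f"whereas the chosen plan costs {original.cost}.")
        return f"'{foil}' yields a plan at least as good; the choice is not cost-driven."

Keeping the planner behind a plain callable is what makes such a procedure algorithm-agnostic: the same explanation logic can wrap any planner that accepts additional constraints.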