978 results for Information structures


Relevance: 100.00%

Abstract:

In everyday economic interactions, it is not clear whether sequential choices are visible to other participants: agents might be deluded about opponents' capacity to acquire, interpret, or keep track of data, or might simply and unexpectedly forget what they previously observed (but did not choose). Following this idea, this paper drops the assumption that the information structure of extensive-form games is commonly known; that is, it introduces uncertainty into players' capacity to observe each other's past choices. Using this approach, our main result provides the following epistemic characterisation: if players (i) are rational, (ii) have strong belief in both opponents' rationality and opponents' capacity to observe others' choices, and (iii) have common belief in both opponents' future rationality and opponents' future capacity to observe others' choices, then the backward induction outcome obtains. Consequently, we do not require perfect information, and players observing each other's choices is often irrelevant from a strategic point of view. The analysis extends, from generic games with perfect information to games with not necessarily perfect information, the work by Battigalli and Siniscalchi (2002) and Perea (2014), who provide different sufficient epistemic conditions for the backward induction outcome.
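The backward-induction outcome that these epistemic conditions characterise can be computed mechanically on any finite perfect-information game tree. The following is a minimal sketch; the tree, payoffs, and move labels are illustrative assumptions, not an example taken from the paper.

```python
# Minimal backward-induction sketch for a finite perfect-information game tree.
# Node layout, payoffs, and player labels are illustrative assumptions.

def backward_induction(node):
    """Return (outcome_payoffs, chosen_path) for a game-tree node.

    A node is either a terminal payoff tuple or a dict:
    {"player": i, "moves": {move_label: child_node}}.
    """
    if isinstance(node, tuple):          # terminal node: payoff profile
        return node, []
    i = node["player"]
    best = None
    for move, child in node["moves"].items():
        payoffs, path = backward_induction(child)
        if best is None or payoffs[i] > best[0][i]:
            best = (payoffs, [move] + path)
    return best

# A two-stage centipede-style example (player 0 moves first, then player 1).
game = {"player": 0, "moves": {
    "take": (1, 0),
    "pass": {"player": 1, "moves": {"take": (0, 2), "pass": (3, 1)}},
}}

payoffs, path = backward_induction(game)
print(payoffs, path)  # → (1, 0) ['take']
```

The recursion resolves payoffs from the leaves upward, so each mover best-responds to the already-solved continuation, which is exactly the backward-induction procedure.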

Relevance: 100.00%

Abstract:

"UIUCDCS-R-74-615"

Relevance: 70.00%

Abstract:

This thesis is a collection of three articles in the economics of information. The first chapter serves as an introduction, and Chapters 2 through 4 constitute the core of the work. Chapter 2 deals with the acquisition of information on the Internet through consumer reviews. In particular, I determine whether reviews left by buyers can still transmit information to other consumers when it is known that sellers can post fake reviews about their products. To understand whether this manipulation of reviews is problematic, I show that the platform on which the reviews are posted (e.g., TripAdvisor, Yelp) is as important a third party to consider as the sellers attempting to falsify the reviews. Indeed, the design adopted by the platform has an indirect effect on the sellers' level of manipulation. In particular, I show that the platform, by hiding part of the review content it holds, can sometimes improve the quality of the information obtained by consumers. Finally, the design chosen by the platform may be linked to the way it generates revenue. I show that a platform generating revenue through sales commissions may be more tolerant of manipulation than a platform that generates revenue through advertising. Chapter 3 is written in collaboration with Marc Santugini. In this chapter, we study the effects of third-degree price discrimination in the presence of uninformed consumers who learn about the quality of a product through its price. In a stochastic environment with two market segments, we show that price discrimination can hurt the firm and benefit consumers.
On the one hand, price discrimination reduces the uncertainty faced by consumers, i.e., the variance of posterior beliefs is lower with discrimination than with a uniform price. Indeed, observing two prices (under discrimination) provides more information to consumers, even if each of these prices is individually less informative than the uniform price. On the other hand, it is not always optimal for the firm to practice price discrimination, since the presence of uninformed consumers gives it an incentive to engage in signaling. If the advantage afforded by the flexibility of setting two different prices is outweighed by the cost of signaling with two different prices, then it is optimal for the firm to set a uniform price on the market. Finally, Chapter 4 is written in collaboration with Sidartha Gordon. In this chapter, we study a class of games in which players are constrained in the number of information sources they can choose to learn about a parameter of the game, but have some freedom over the degree of dependence of their signals before taking an action. Introducing a new dependence order between signals, we show that a player prefers information that is as dependent as possible on the information obtained by players whose actions are either strategic complements and isotonic, or strategic substitutes and antitonic, with his own. Likewise, a player prefers information that is as independent as possible of the information obtained by players whose actions are either strategic substitutes and isotonic, or strategic complements and antitonic, with his own. We also establish sufficient conditions for a given information structure, public or private information for example, to be possible in equilibrium.
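The posterior-variance claim in Chapter 3 (two individually less informative prices can jointly be more informative than one uniform price) can be illustrated with a standard normal-normal Bayesian updating sketch. The Gaussian setting and all numbers below are illustrative assumptions, not the thesis's actual model.

```python
# Normal-normal Bayesian updating sketch: with independent Gaussian signals,
# posterior precision is prior precision plus the sum of signal precisions.
# Two individually noisier price signals can jointly beat one cleaner one.
# All variances here are illustrative assumptions.

def posterior_variance(prior_var, signal_vars):
    """Posterior variance after observing independent Gaussian signals."""
    precision = 1.0 / prior_var + sum(1.0 / v for v in signal_vars)
    return 1.0 / precision

prior_var = 1.0
uniform = posterior_variance(prior_var, [0.5])        # one uniform price
discrim = posterior_variance(prior_var, [0.8, 0.8])   # two discriminatory prices
single = posterior_variance(prior_var, [0.8])         # each price alone

print(round(uniform, 3), round(discrim, 3), round(single, 3))
# → 0.333 0.286 0.444
# Each discriminatory price alone is less informative (single > uniform),
# yet the pair together is more informative (discrim < uniform).
assert single > uniform > discrim
```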

Relevance: 60.00%

Abstract:

Expert searchers engage with information as information brokers, researchers, reference librarians, information architects, faculty who teach advanced search, and in a variety of other information-intensive professions. Their experiences are characterized by a profound understanding of information concepts and skills and they have an agile ability to apply this knowledge to interacting with and having an impact on the information environment. This study explored the learning experiences of searchers to understand the acquisition of search expertise. The research question was: What can be learned about becoming an expert searcher from the learning experiences of proficient novice searchers and highly experienced searchers? The key objectives were: (1) to explore the existence of threshold concepts in search expertise; (2) to improve our understanding of how search expertise is acquired and how novice searchers, intent on becoming experts, can learn to search in more expertlike ways. The participant sample drew from two population groups: (1) highly experienced searchers with a minimum of 20 years of relevant professional experience, including LIS faculty who teach advanced search, information brokers, and search engine developers (11 subjects); and (2) MLIS students who had completed coursework in information retrieval and online searching and demonstrated exceptional ability (9 subjects). Using these two groups allowed a nuanced understanding of the experience of learning to search in expertlike ways, with data from those who search at a very high level as well as those who may be actively developing expertise. The study used semi-structured interviews, search tasks with think-aloud narratives, and talk-after protocols. Searches were screen-captured with simultaneous audio-recording of the think-aloud narrative. Data were coded and analyzed using NVivo9 and manually. Grounded theory allowed categories and themes to emerge from the data. 
Categories represented conceptual knowledge and attributes of expert searchers. In accord with grounded theory method, once theoretical saturation was achieved, during the final stage of analysis the data were viewed through the lenses of existing theoretical frameworks. For this study, threshold concept theory (Meyer & Land, 2003) was used to explore which concepts might be threshold concepts. Threshold concepts have been used to explore transformative learning portals in subjects ranging from economics to mathematics. A threshold concept has five defining characteristics: transformative (causing a shift in perception), irreversible (unlikely to be forgotten), integrative (unifying separate concepts), troublesome (initially counter-intuitive), and possibly bounded. Themes that emerged provided evidence of four concepts which had the characteristics of threshold concepts. The first three were: information environment (the total information environment is perceived and understood); information structures (content, index structures, and retrieval algorithms are understood); and information vocabularies (fluency in search behaviors related to language, including natural language, controlled vocabulary, and finesse using proximity, truncation, and other language-based tools). The fourth threshold concept was concept fusion: the integration of the other three threshold concepts, further defined by three properties: visioning (anticipating next moves), being light on one's 'search feet' (the dancing property), and a profound ontological shift (identity as searcher). In addition to the threshold concepts, findings were reported that were not concept-based, including praxes and traits of expert searchers. A model of search expertise is proposed with the four threshold concepts at its core that also integrates the traits and praxes elicited from the study, attributes long recognized in LIS research as present in professional searchers.
The research provides a deeper understanding of the transformative learning experiences involved in the acquisition of search expertise. It adds to our understanding of search expertise in the context of today's information environment and has implications for teaching advanced search, for research more broadly within library and information science, and for methodologies used to explore threshold concepts.

Relevance: 60.00%

Abstract:

Because of limited sensor and communication ranges, designing efficient mechanisms for cooperative tasks is difficult. In this article, several negotiation schemes for multiple agents performing a cooperative task are presented. The negotiation schemes provide suboptimal solutions but have the attractive features of fast decision-making and scalability to a large number of agents without increasing the complexity of the algorithm. A software agent architecture of the decision-making process is also presented. The effect of the magnitude of information flow during the negotiation process is studied by using different models of the negotiation scheme. The performance of the various negotiation schemes, using different information structures, is studied based on the uncertainty reduction achieved for a specified number of search steps. The negotiation schemes perform comparably to the optimal strategy in terms of uncertainty reduction and also require very low computational time, about 7% of that of the optimal strategy. Finally, an analysis of the computational and communication requirements of the negotiation schemes is carried out.
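As a rough illustration of the uncertainty-reduction objective, the sketch below has agents greedily claim distinct cells of an uncertainty map, with a search step scaling each claimed cell's uncertainty by an assumed sensor factor `beta`. The greedy conflict resolution is a simplified stand-in for the article's negotiation schemes, not their actual mechanism.

```python
# Simplified uncertainty-map step: each agent claims its highest-uncertainty
# cell (distinct cells per agent), and searching a cell scales its uncertainty
# by beta. The greedy claiming rule and beta are illustrative assumptions.

def assign_and_update(unc, n_agents, beta=0.5):
    """One decision step over a 1-D uncertainty map; returns the updated map."""
    claimed = set()
    for _ in range(n_agents):                 # one distinct claim per agent
        best = max((c for c in range(len(unc)) if c not in claimed),
                   key=lambda c: unc[c])
        claimed.add(best)
    return [u * beta if c in claimed else u for c, u in enumerate(unc)]

unc = [0.9, 0.4, 0.8, 0.1]
unc = assign_and_update(unc, n_agents=2)      # agents search cells 0 and 2
print(unc)  # → [0.45, 0.4, 0.4, 0.1]
```

Total map uncertainty after a fixed number of such steps is the kind of metric the article uses to compare negotiation schemes against the optimal strategy.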

Relevance: 60.00%

Abstract:

In this paper, we present self-assessment schemes (SAS) for multiple agents performing a search mission on an unknown terrain. The agents are subject to limited communication and sensor ranges. The agents communicate and coordinate with their neighbours to arrive at route decisions. The self-assessment schemes proposed here have very low communication and computational overhead. SAS also has attractive features such as scalability to a large number of agents and fast decision-making capability. SAS can be used with partial or complete information-sharing schemes during the search mission. We validate the performance of SAS using simulation on a large search space consisting of 100 agents with different information structures and self-assessment schemes. We also compare the results obtained using SAS with those of a previously proposed negotiation scheme. The simulation results show that SAS is scalable to a large number of agents and can perform as well as the negotiation schemes with reduced communication requirements (almost 20% of that required for negotiation).

Relevance: 60.00%

Abstract:

This paper addresses a search problem with multiple limited-capability search agents in a partially connected, dynamic networked environment under different information structures. A self-assessment-based decision-making scheme for multiple agents is proposed that uses a modified negotiation scheme with low communication overheads. The scheme has the attractive features of fast decision-making and scalability to a large number of agents without increasing the complexity of the algorithm. Two models of the self-assessment scheme are developed to study the effect of increased information exchange during decision-making. Some analytical results on the maximum number of self-assessment cycles, the effect of increasing communication range, the completeness of the algorithm, and lower and upper bounds on the search time are also obtained. The performance of the various self-assessment schemes, in terms of total uncertainty reduction in the search region using different information structures, is studied. It is shown that the communication requirement of the self-assessment scheme is almost half that of the negotiation schemes, while its performance is close to the optimal solution. Comparisons with different sequential search schemes are also carried out. Note to Practitioners: In futuristic military and civilian applications such as search and rescue, surveillance, patrol, and oil-spill monitoring, a swarm of UAVs can be deployed to carry out information-collection missions. These UAVs have limited sensor and communication ranges. In order to enhance the performance of the mission and to complete it quickly, cooperation between UAVs is important. Designing cooperative search strategies for multiple UAVs under these constraints is a difficult task. A further requirement in hostile territory is to minimize communication while making decisions, which adds complexity to the decision-making algorithms.
In this paper, a self-assessment-based decision-making scheme for multiple UAVs performing a search mission is proposed. The agents make their decisions based on the information acquired through their sensors and on cooperation with neighbors. The complexity of the decision-making scheme is very low: it can arrive at decisions quickly with low communication overheads, while accommodating various information structures used to increase the fidelity of the uncertainty maps. Theoretical results proving the completeness of the algorithm, together with lower and upper bounds on the search time, are also provided.

Relevance: 60.00%

Abstract:

A classical argument of de Finetti holds that Rationality implies Subjective Expected Utility (SEU). In contrast, the Knightian distinction between Risk and Ambiguity suggests that a rational decision maker would obey the SEU paradigm when the information available is in some sense good, and would depart from it when the information available is not good. Unlike de Finetti's, however, this view does not rely on a formal argument. In this paper, we study the set of all information structures that might be available to a decision maker, and show that they are of two types: those compatible with SEU theory and those for which SEU theory must fail. We also show that the former correspond to "good" information, while the latter correspond to information that is not good. Thus, our results provide a formalization of the distinction between Risk and Ambiguity. As a consequence of our main theorem (Theorem 2, Section 8), behavior not conforming to SEU theory is bound to emerge in the presence of Ambiguity. We give two examples of situations of Ambiguity: one concerns uncertainty about the class of measure-zero events; the other is a variation on Ellsberg's three-color urn experiment. We also briefly link our results to two other strands of literature: the study of ambiguous events and the problem of unforeseen contingencies. We conclude the paper by re-considering de Finetti's argument in light of our findings.
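The Ellsberg three-color urn mentioned above can be checked exhaustively: with 30 red balls and 60 black-or-yellow balls in unknown proportion, no single subjective belief rationalizes the modal preference pattern (betting on red over black, yet on black-or-yellow over red-or-yellow). A small enumeration, indexing candidate beliefs by the number of black balls:

```python
# Worked check of the Ellsberg three-color urn: 30 red balls and 60 balls
# that are black or yellow in unknown proportion. The modal preferences
# cannot be rationalized by any single subjective probability, which is
# why SEU fails under Ambiguity.
from fractions import Fraction

p_red = Fraction(1, 3)
consistent = []
for k in range(61):  # candidate beliefs: k black balls out of 60
    p_black = Fraction(k, 90)
    p_yellow = Fraction(60 - k, 90)
    prefers_red = p_red > p_black                       # "red" over "black"
    prefers_by = p_black + p_yellow > p_red + p_yellow  # "black or yellow" over "red or yellow"
    if prefers_red and prefers_by:
        consistent.append(k)

print(consistent)  # → [] : no belief supports both Ellsberg preferences
```

The first preference forces k < 30 and the second forces k > 30, so the set of consistent beliefs is empty; exact rational arithmetic avoids any floating-point ties.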

Relevance: 60.00%

Abstract:

The concept of Ambiguity designates those situations where the information available to the decision maker is insufficient to form a probabilistic view of the world. Thus, it has provided the motivation for departing from the Subjective Expected Utility (SEU) paradigm. Yet, a formalization of the concept has been missing. This is a grave omission, as it leaves non-expected utility models on shaky ground. In particular, it leaves unanswered such basic questions as: (1) Does Ambiguity exist? (2) If so, which situations should be labeled as "ambiguous"? (3) Why should one depart from Subjective Expected Utility (SEU) in the presence of Ambiguity? and (4) If so, what kind of behavior should emerge in the presence of Ambiguity? The present paper fills these gaps. Specifically, it identifies those information structures that are incompatible with SEU theory, and shows that their mathematical properties are the formal counterpart of the intuitive idea of insufficient information. These are used to give a formal definition of Ambiguity and, consequently, to distinguish between ambiguous and unambiguous situations. Finally, the paper shows that behavior not conforming to SEU theory must emerge in the presence of insufficient information, and identifies the class of non-EU models that emerge in the face of Ambiguity. The paper also proposes a new comparative definition of Ambiguity and discusses its relation to some of the existing literature.

Relevance: 60.00%

Abstract:

A body of knowledge in Software Engineering requires replications of experiments. The knowledge generated by a study is registered in a so-called lab package, which must be reviewed by any research group intending to replicate the study. However, researchers face difficulties reviewing lab packages, which leads to problems in sharing knowledge among research groups. Besides that, the lack of standardization is an obstacle to integrating the knowledge from an isolated study into a common body of knowledge. In this sense, ontologies can be applied, since they can be seen as a standard that promotes a shared understanding of the structure of experiment information. In this paper, we present a workflow to generate lab packages based on EXPEiiQntology, an ontology of the controlled-experiments domain. In addition, by means of lab-package instantiation, it is possible to evolve the ontology in order to deal with new concepts that may appear in different lab packages. The iterative ontology evolution aims to achieve a standard that can accommodate different lab packages and, hence, make it easier to review and understand their content.

Relevance: 60.00%

Abstract:

The design and development of spoken interaction systems has been a thoroughly studied research area for the last decades. The aim is to obtain systems with the ability to interact with human agents with a high degree of naturalness and efficiency, allowing users to carry out the actions they desire using speech, as it is the most natural means of communication between humans. To achieve that degree of naturalness, it is not enough to endow systems with the ability to accurately understand the user's utterances and to react to them properly, even considering the information provided by the user in his or her previous interactions. The system also has to be aware of the evolution of the conditions under which the interaction takes place, in order to act in the most coherent way possible at each moment. Consequently, one of the most important features of the system is that it has to be context-aware. This context awareness can be reflected in the modification of the system's behaviour taking into account the current situation of the interaction. For instance, the system should decide which action it has to carry out, or the way to perform it, depending on the user that requests it, on the way that the user addresses the system, on the characteristics of the environment in which the interaction takes place, and so on. In other words, the system has to adapt its behaviour to these evolving elements of the interaction. Moreover, that adaptation has to be carried out, if possible, in such a way that the user: i) does not perceive that the system has to make any additional effort, or to devote interaction time to tasks other than carrying out the requested actions, and ii) does not have to provide the system with any additional information to carry out the adaptation, which could imply a lower efficiency of the interaction, since users would have to devote several interactions only to allowing the system to become adapted.
In state-of-the-art spoken dialogue systems, researchers have proposed several disparate strategies to adapt the elements of the system to different conditions of the interaction (such as the acoustic characteristics of a specific user's speech, the actions previously requested, and so on). Nevertheless, to our knowledge there is no consensus on the procedures for carrying out this adaptation. The approaches are to an extent unrelated to one another, in the sense that each one considers different pieces of information, and the treatment of that information differs depending on the adaptation carried out. In this regard, the main contributions of this Thesis are the following ones: Definition of a contextualization framework. We propose a unified approach that can cover any strategy to adapt the behaviour of a dialogue system to the conditions of the interaction (i.e. the context). In our theoretical definition of the contextualization framework we consider the system's context as all the sources of variability present at any time of the interaction, either those related to the environment in which the interaction takes place, or to the human agent that addresses the system at each moment. Our proposal relies on three aspects that any contextualization approach should fulfill: plasticity (i.e. the system has to be able to modify its behaviour in the most proactive way taking into account the conditions under which the interaction takes place), adaptivity (i.e. the system has also to be able to consider the most appropriate sources of information at each moment, both environmental and user- and dialogue-dependent, to effectively adapt to the aforementioned conditions), and transparency (i.e. the system has to carry out the contextualization-related tasks in such a way that the user neither perceives them nor has to make any effort in providing the system with any information that it needs to perform that contextualization).
Additionally, we could include a generality aspect to our proposed framework: the main features of the framework should be easy to adopt in any dialogue system, regardless of the solution proposed to manage the dialogue. Once we define the theoretical basis of our contextualization framework, we propose two cases of study on its application in a spoken dialogue system. We focus on two aspects of the interaction: the contextualization of the speech recognition models, and the incorporation of user-specific information into the dialogue flow. One of the modules of a dialogue system that is more prone to be contextualized is the speech recognition system. This module makes use of several models to emit a recognition hypothesis from the user’s speech signal. Generally speaking, a recognition system considers two types of models: an acoustic one (that models each of the phonemes that the recognition system has to consider) and a linguistic one (that models the sequences of words that make sense for the system). In this work we contextualize the language model of the recognition system in such a way that it takes into account the information provided by the user in both his or her current utterance and in the previous ones. These utterances convey information useful to help the system in the recognition of the next utterance. The contextualization approach that we propose consists of a dynamic adaptation of the language model that is used by the recognition system. We carry out this adaptation by means of a linear interpolation between several models. Instead of training the best interpolation weights, we make them dependent on the conditions of the dialogue. 
In our approach, the system itself will obtain these weights as a function of the reliability of the different elements of information available, such as the semantic concepts extracted from the user's utterance, the actions that he or she wants to carry out, the information provided in the previous interactions, and so on. One of the aspects most frequently addressed in Human-Computer Interaction research is the inclusion of user-specific characteristics into the information structures managed by the system. The idea is to take into account the features that make each user different from the others in order to offer each particular user different services (or the same service, but in a different way). We could consider this approach as a user-dependent contextualization of the system. In our work we propose the definition of a user model that contains all the information of each user that could be potentially useful to the system at a given moment of the interaction. In particular we will analyze the actions that each user carries out throughout his or her interaction. The objective is to determine which of these actions become the preferences of that user. We represent the specific information of each user as a feature vector. Each of the characteristics that the system will take into account has an associated confidence score. With these elements, we propose a probabilistic definition of a user preference, as the action whose likelihood of being addressed by the user is greater than that of the rest of the actions. To include the user-dependent information in the dialogue flow, we modify the information structures on which the dialogue manager relies to retrieve information that could be needed to solve the actions addressed by the user.
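The dynamic language-model interpolation described above can be sketched as a context-weighted mixture. The component models and the reliability heuristic below are illustrative assumptions, not the thesis's actual models.

```python
# Sketch of dynamic LM interpolation: the final probability is a weighted
# mix of component models, with weights derived from dialogue conditions
# rather than trained offline. Component LMs and the reliability heuristic
# are illustrative assumptions.

def interpolate(models, weights, word, history):
    """P(word | history) as a linear interpolation of component LMs."""
    assert abs(sum(weights) - 1.0) < 1e-9
    return sum(w * m(word, history) for w, m in zip(weights, models))

# Toy component models: a generic LM and a semantic-context LM.
generic_lm = lambda w, h: {"flight": 0.02, "hotel": 0.01}.get(w, 0.001)
context_lm = lambda w, h: {"flight": 0.20, "hotel": 0.02}.get(w, 0.0005)

def weights_from_context(semantic_confidence):
    """Reliability heuristic: trust the context LM in proportion to the
    confidence of the semantic concepts extracted from prior turns."""
    return [1.0 - semantic_confidence, semantic_confidence]

models = [generic_lm, context_lm]
p_low = interpolate(models, weights_from_context(0.1), "flight", [])
p_high = interpolate(models, weights_from_context(0.9), "flight", [])
assert p_high > p_low   # more reliable context shifts mass toward its LM
```

Making the weights a function of per-turn reliability, rather than fixed trained constants, is what lets the recognizer adapt to each dialogue without an offline tuning step.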
Usage preferences become another source of contextual information that will be considered by the system towards a more efficient interaction (since the new information source will help to decrease the need of the system to ask users for additional information, thus reducing the number of turns needed to carry out a specific action). To test the benefits of the contextualization framework that we propose, we carry out an evaluation of the two strategies aforementioned. We gather several performance metrics, both objective and subjective, that allow us to compare the improvements of a contextualized system against the baseline one. We will also gather the user’s opinions as regards their perceptions on the behaviour of the system, and its degree of adaptation to the specific features of each interaction. Resumen El diseño y el desarrollo de sistemas de interacción hablada ha sido objeto de profundo estudio durante las pasadas décadas. El propósito es la consecución de sistemas con la capacidad de interactuar con agentes humanos con un alto grado de eficiencia y naturalidad. De esta manera, los usuarios pueden desempeñar las tareas que deseen empleando la voz, que es el medio de comunicación más natural para los humanos. A fin de alcanzar el grado de naturalidad deseado, no basta con dotar a los sistemas de la abilidad de comprender las intervenciones de los usuarios y reaccionar a ellas de manera apropiada (teniendo en consideración, incluso, la información proporcionada en previas interacciones). Adicionalmente, el sistema ha de ser consciente de las condiciones bajo las cuales transcurre la interacción, así como de la evolución de las mismas, de tal manera que pueda actuar de la manera más coherente en cada instante de la interacción. En consecuencia, una de las características primordiales del sistema es que debe ser sensible al contexto. 
Esta capacidad del sistema de conocer y emplear el contexto de la interacción puede verse reflejada en la modificación de su comportamiento debida a las características actuales de la interacción. Por ejemplo, el sistema debería decidir cuál es la acción más apropiada, o la mejor manera de llevarla a término, dependiendo del usuario que la solicita, del modo en el que lo hace, etcétera. En otras palabras, el sistema ha de adaptar su comportamiento a tales elementos mutables (o dinámicos) de la interacción. Dos características adicionales son requeridas a dicha adaptación: i) el usuario no ha de percibir que el sistema dedica recursos (temporales o computacionales) a realizar tareas distintas a las que aquél le solicita, y ii) el usuario no ha de dedicar esfuerzo alguno a proporcionar al sistema información adicional para llevar a cabo la interacción. Esto último implicaría una menor eficiencia de la interacción, puesto que los usuarios deberían dedicar parte de la misma a proporcionar información al sistema para su adaptación, sin ningún beneficio inmediato. En los sistemas de diálogo hablado propuestos en la literatura, se han propuesto diferentes estrategias para llevar a cabo la adaptación de los elementos del sistema a las diferentes condiciones de la interacción (tales como las características acústicas del habla de un usuario particular, o a las acciones a las que se ha referido con anterioridad). Sin embargo, no existe una estrategia fija para proceder a dicha adaptación, sino que las mismas no suelen guardar una relación entre sí. En este sentido, cada una de ellas tiene en cuenta distintas fuentes de información, la cual es tratada de manera diferente en función de las características de la adaptación buscada. Teniendo en cuenta lo anterior, las contribuciones principales de esta Tesis son las siguientes: Definición de un marco de contextualización. 
Proponemos un criterio unificador que pueda cubrir cualquier estrategia de adaptación del comportamiento de un sistema de diálogo a las condiciones de la interacción (esto es, el contexto de la misma). En nuestra definición teórica del marco de contextualización consideramos el contexto del sistema como todas aquellas fuentes de variabilidad presentes en cualquier instante de la interacción, ya estén relacionadas con el entorno en el que tiene lugar la interacción, ya dependan del agente humano que se dirige al sistema en cada momento. Nuestra propuesta se basa en tres aspectos que cualquier estrategia de contextualización debería cumplir: plasticidad (es decir, el sistema ha de ser capaz de modificar su comportamiento de la manera más proactiva posible, teniendo en cuenta las condiciones en las que tiene lugar la interacción), adaptabilidad (esto es, el sistema ha de ser capaz de considerar la información oportuna en cada instante, ya dependa del entorno o del usuario, de tal manera que adecúe su comportamiento de manera eficaz a las condiciones mencionadas), y transparencia (que implica que el sistema ha de desarrollar las tareas relacionadas con la contextualización de tal manera que el usuario no perciba la manera en que dichas tareas se llevan a cabo, ni tampoco deba proporcionar al sistema con información adicional alguna). De manera adicional, incluiremos en el marco propuesto el aspecto de la generalidad: las características del marco de contextualización han de ser portables a cualquier sistema de diálogo, con independencia de la solución propuesta en los mismos para gestionar el diálogo. Una vez hemos definido las características de alto nivel de nuestro marco de contextualización, proponemos dos estrategias de aplicación del mismo a un sistema de diálogo hablado. 
Nos centraremos en dos aspectos de la interacción a adaptar: los modelos empleados en el reconocimiento de habla, y la incorporación de información específica de cada usuario en el flujo de diálogo. Uno de los módulos de un sistema de diálogo más susceptible de ser contextualizado es el sistema de reconocimiento de habla. Este módulo hace uso de varios modelos para generar una hipótesis de reconocimiento a partir de la señal de habla. En general, un sistema de reconocimiento emplea dos tipos de modelos: uno acústico (que modela cada uno de los fonemas considerados por el reconocedor) y uno lingüístico (que modela las secuencias de palabras que tienen sentido desde el punto de vista de la interacción). En este trabajo contextualizamos el modelo lingüístico del reconocedor de habla, de tal manera que tenga en cuenta la información proporcionada por el usuario, tanto en su intervención actual como en las previas. Estas intervenciones contienen información (semántica y/o discursiva) que puede contribuir a un mejor reconocimiento de las subsiguientes intervenciones del usuario. La estrategia de contextualización propuesta consiste en una adaptación dinámica del modelo de lenguaje empleado en el reconocedor de habla. Dicha adaptación se lleva a cabo mediante una interpolación lineal entre diferentes modelos. En lugar de entrenar los mejores pesos de interpolación, proponemos hacer los mismos dependientes de las condiciones actuales de cada diálogo. El propio sistema obtendrá estos pesos como función de la disponibilidad y relevancia de las diferentes fuentes de información disponibles, tales como los conceptos semánticos extraídos a partir de la intervención del usuario, o las acciones que el mismo desea ejecutar. Uno de los aspectos más comúnmente analizados en la investigación de la Interacción Persona-Máquina es la inclusión de las características específicas de cada usuario en las estructuras de información empleadas por el sistema. 
The goal is to take into account the aspects that differentiate each user, so that the system can offer each of them the most appropriate service (or the same service, delivered in the way best suited to each user). We can regard this strategy as user-dependent contextualization. In this work we propose the definition of a user model containing all the information about each user that the system could potentially use at a given moment of the interaction. In particular, we analyse the actions each user chooses to execute throughout their dialogues with the system. Our goal is to determine which of those actions become each user's preferences. Each user's information is represented by a feature vector, where each feature has an associated confidence value. With these two elements we propose a probabilistic definition of a usage preference, as the action whose likelihood is greater than that of the remaining actions requested by the user. In order to include user-dependent information in the dialogue flow, we modify the information structures on which the dialogue manager relies to retrieve the information needed to resolve certain dialogues. With this modification, each user's preferences become an additional source of contextual information, which the system takes into account in the interest of a more efficient interaction (since the new information source reduces the system's need to ask the user for additional information, thereby reducing the number of turns required to carry out a given action).
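The probabilistic notion of a usage preference can be illustrated with a small sketch. The action names, counts, confidence values, and the particular way counts and confidences are combined are assumptions made for this example only; the thesis defines its own model.

```python
# Sketch: a user's requested actions as a feature vector with attached
# confidence values, and a "usage preference" defined as the action
# whose likelihood strictly exceeds that of every other action.

def action_likelihoods(counts, confidences):
    """Estimate P(action) from observed request counts, weighting each
    count by the confidence attached to that feature."""
    scores = {a: counts[a] * confidences.get(a, 1.0) for a in counts}
    total = sum(scores.values())
    return {a: s / total for a, s in scores.items()}

def preference(likelihoods):
    """Return the preferred action (maximal likelihood), or None if no
    action strictly dominates the rest."""
    best = max(likelihoods, key=likelihoods.get)
    if all(likelihoods[best] > p
           for a, p in likelihoods.items() if a != best):
        return best
    return None

# Example user: action request counts and per-feature confidences.
counts = {"set_alarm": 7, "play_music": 3, "weather": 2}
confidences = {"set_alarm": 0.9, "play_music": 0.8, "weather": 0.7}
probs = action_likelihoods(counts, confidences)
pref = preference(probs)   # "set_alarm"
```

Once such a preference is established, the dialogue manager can consult it as one more contextual information source instead of asking the user an extra clarification question, which is precisely the turn-count reduction the abstract describes.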
To determine the benefits of applying the proposed contextualization framework, we evaluate a dialogue system that includes the strategies described above. We collected several metrics, both objective and subjective, which allow us to quantify the improvements provided by the contextualized system with respect to its non-contextualized counterpart. Likewise, we gathered the evaluation participants' opinions about their perception of the system's behaviour, and about its ability to adapt to the specific conditions of each interaction.

Relevância:

60.00% 60.00%

Publicador:

Resumo:

The value of knowing about data availability and system accessibility is analyzed through theoretical models of Information Economics. When a user submits a query for information, it is important for the user to learn whether the system is inaccessible or the data is unavailable, rather than receive no response at all. In practice, the system may produce various outcomes: nothing is displayed to the user (e.g., a traffic light that does not operate, a browser that keeps loading, a telephone that is not answered); random noise is displayed (e.g., a traffic light showing random signals, a browser returning disorderly results, an automatic voice message that does not clarify the situation); or a special signal indicates that the system is not operating (e.g., a blinking amber showing that the traffic light is down, a browser responding that the site is unavailable, a voice message regretting that the service is not available). This article develops a model to assess the value of the information for the user in such situations, employing the information structure model prevailing in Information Economics. Examples related to data accessibility in centralized and in distributed systems are provided for illustration.
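The kind of calculation such an information-structure model performs can be sketched numerically. The states, actions, payoffs, and probabilities below are illustrative assumptions, not figures from the article: the point is only to show that a fully informative outage signal (the "blinking amber" case) has a quantifiable value over receiving no response at all.

```python
# Sketch: value of a failure signal. The user chooses between "wait"
# and "give_up"; a perfectly informative outage signal lets her
# condition that choice on the true state of the system.

PRIOR = {"up": 0.8, "down": 0.2}
PAYOFF = {("wait", "up"): 10, ("wait", "down"): -5,
          ("give_up", "up"): 0, ("give_up", "down"): 0}

def expected(action, belief):
    return sum(belief[s] * PAYOFF[(action, s)] for s in belief)

def best_payoff(belief):
    return max(expected(a, belief) for a in ("wait", "give_up"))

# No signal: the user acts on the prior alone.
v_uninformed = best_payoff(PRIOR)

# Perfect signal (the blinking amber): the user learns the true state
# before acting, so she optimizes state by state.
v_informed = sum(PRIOR[s] * best_payoff({"up": float(s == "up"),
                                         "down": float(s == "down")})
                 for s in PRIOR)

value_of_information = v_informed - v_uninformed   # 1.0 in this example
```

Noisy signals (the "random noise" outcomes in the abstract) would fall between these two bounds: the user updates the prior via Bayes' rule on the signal received, and the value of the resulting information structure is correspondingly smaller.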

Relevância:

60.00% 60.00%

Publicador:

Resumo:

In this concluding chapter, we bring together the threads and reflections on the chapters contained in this text and show how they relate to multi-level issues. The book has focused on the world of Human Resource Management (HRM) and the systems and practices it must put in place to foster innovation. Many of the contributions argue that in order to bring innovation about, organisations have to think carefully about the way in which they will integrate what is, in practice, organisationally relevant — but socially distributed — knowledge. They need to build a series of knowledge-intensive activities and networks, both within their own boundaries and across other important external inter-relationships. In so doing, they help to co-ordinate important information structures. They have, in effect, to find ways of enabling people to collaborate with each other at lower cost, by reducing both the costs of their co-ordination and the levels of unproductive search activity. They have to engineer these behaviours by reducing the risks for people that might be associated with incorrect ideas and help individuals, teams and business units to advance incomplete ideas that are so often difficult to codify. In short, a range of intangible assets must flow more rapidly throughout the organisation and an appropriate balance must be found between the rewards and incentives associated with creativity, novelty and innovation, versus the risks that innovation may also bring.