849 results for time, team, task and context
Abstract:
According to the broaden-and-build theory of positive emotions, positive emotions broaden while negative emotions narrow thought-action repertoires. These processes reflect changes in attentional scope, which is the focus of this research. The present study tested the hypothesis that participants in a negative mood would be better able to focus on a target figure and separate it from its context in a perceptual task, and would also be better able to focus on the task amid a distracting environment, than participants in a positive mood. An undergraduate sample of 77 participants watched video clips selected to induce either fear or amusement, and completed an Embedded Figures Test in either a quiet or a noisy setting. A higher-order ANOVA revealed that Mood had a marginally significant effect on task performance, F(1, 73) = 3.94, p = .051, and that Distraction, F(1, 72) = 4.61, p = .035, and the Mood × Distraction interaction, F(1, 73) = 9.12, p = .003, significantly affected task performance. However, contrary to the hypothesis, the effect of the distraction manipulation was greater for participants in a negative mood than for participants in a positive mood. The author suggests future directions to clarify the relationship between emotions, attentional scope, and susceptibility to environmental distraction.
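As a quick sanity check on the reported statistics, the p-value for an F-statistic can be recovered from the F-distribution's survival function. A sketch using scipy, plugging in the F values and degrees of freedom reported above:

```python
from scipy.stats import f

# P(F > observed) for the reported mood effect, F(1, 73) = 3.94
p_mood = f.sf(3.94, dfn=1, dfd=73)

# Same for the reported Mood x Distraction interaction, F(1, 73) = 9.12
p_inter = f.sf(9.12, dfn=1, dfd=73)

print(round(p_mood, 3))   # close to the reported p = .051
print(round(p_inter, 3))  # close to the reported p = .003
```

The recovered values agree with the abstract's marginal (p ≈ .05) and significant (p < .01) effects.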
Abstract:
A time series is a sequence of observations made over time. Examples in public health include daily ozone concentrations, weekly admissions to an emergency department or annual expenditures on health care in the United States. Time series models are used to describe the dependence of the response at each time on predictor variables including covariates and possibly previous values in the series. Time series methods are necessary to account for the correlation among repeated responses over time. This paper gives an overview of time series ideas and methods used in public health research.
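A minimal illustration of the serial correlation such models must account for: simulating a first-order autoregressive (AR(1)) series and estimating its lag-1 autocorrelation. This is a sketch with numpy; the coefficient and series length are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
phi, n = 0.7, 5000  # hypothetical AR(1) coefficient and series length

# x_t = phi * x_{t-1} + noise: each observation depends on the previous one
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.standard_normal()

# Lag-1 sample autocorrelation; for an AR(1) process it estimates phi
r1 = np.corrcoef(x[:-1], x[1:])[0, 1]
print(round(r1, 2))  # near 0.7 -- i.i.d. methods would wrongly ignore this dependence
```

Methods that assume independent observations understate uncertainty on data like this, which is why dedicated time series methods are needed.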
Abstract:
The numerical solution of the incompressible Navier-Stokes equations offers an effective alternative to the experimental analysis of fluid-structure interaction (FSI), i.e. the dynamical coupling between a fluid and a solid, which is otherwise very complex, time-consuming and expensive. A method that can accurately model these types of mechanical systems through numerical solution is therefore a great option, and its advantages are even more obvious for huge structures such as bridges, high-rise buildings, or wind turbine blades with diameters as large as 200 meters. The modeling of such processes, however, involves complex multiphysics problems along with complex geometries. This thesis focuses on a novel vorticity-velocity formulation, called the KLE, to solve the incompressible Navier-Stokes equations for such FSI problems. This scheme allows the implementation of robust adaptive ODE time-integration schemes and thus lets us tackle the various multiphysics problems as separate modules. The current algorithm for the KLE employs a structured or unstructured mesh for spatial discretization and allows the use of a self-adaptive or fixed-time-step ODE solver when dealing with unsteady problems. This research analyzes the effects of the Courant-Friedrichs-Lewy (CFL) condition on the KLE when applied to the unsteady Stokes problem. The objective is to conduct a numerical analysis for stability and, hence, for convergence. Our results confirm that the time step Δt is constrained by the CFL-like condition Δt ≤ const · h^α, where h is the spatial discretization parameter.
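The stated constraint Δt ≤ const · h^α can be checked mechanically. A sketch of what the bound implies for mesh refinement; the constant, exponent, and mesh sizes below are invented for illustration, not taken from the thesis:

```python
# CFL-like time-step bound: dt_max = C * h**alpha, so refining the mesh
# (smaller h) forces a smaller admissible time step.
def max_time_step(h, C=0.5, alpha=2.0):
    """Largest stable dt for spatial resolution h under dt <= C * h**alpha."""
    return C * h ** alpha

coarse = max_time_step(0.1)   # h = 0.1  -> dt_max = 0.5 * 0.1**2  = 0.005
fine = max_time_step(0.05)    # h = 0.05 -> dt_max = 0.5 * 0.05**2 = 0.00125

# With alpha = 2, halving h shrinks the admissible step by a factor of 4,
# which is why an adaptive ODE solver must respect the spatial resolution.
print(coarse, fine)
```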
Abstract:
One of the original ocean-bottom time-lapse seismic studies was performed at the Teal South oil field in the Gulf of Mexico during the late 1990s. This work reexamines some aspects of previous work using modern analysis techniques to provide improved quantitative interpretations. Using three-dimensional volume visualization of legacy data and the two phases of post-production time-lapse data, I provide additional insight into the fluid migration pathways and the pressure communication between different reservoirs, separated by faults. This work supports a conclusion from previous studies that production from one reservoir caused regional pressure decline that in turn resulted in liberation of gas from multiple surrounding unproduced reservoirs. I also provide an explanation for unusual time-lapse changes in amplitude-versus-offset (AVO) data related to the compaction of the producing reservoir which, in turn, changed an isotropic medium to an anisotropic medium. In the first part of this work, I examine regional changes in seismic response due to the production of oil and gas from one reservoir. The previous studies primarily used two post-production ocean-bottom surveys (Phase I and Phase II), and not the legacy streamer data, due to the unavailability of legacy prestack data and very different acquisition parameters. In order to incorporate the legacy data in the present study, all three poststack data sets were cross-equalized and examined using instantaneous amplitude and energy volumes. This approach appears quite effective and helps to suppress changes unrelated to production while emphasizing those large-amplitude changes that are related to production in this noisy (by current standards) suite of data. I examine the multiple data sets first by using the instantaneous amplitude and energy attributes, and then also examine specific apparent time-lapse changes through direct comparisons of seismic traces.
In so doing, I identify time-delays that, when corrected for, indicate water encroachment at the base of the producing reservoir. I also identify specific sites of leakage from various unproduced reservoirs, the result of regional pressure blowdown as explained in previous studies; those earlier studies, however, were unable to identify direct evidence of fluid movement. Of particular interest is the identification of one site where oil apparently leaked from one reservoir into a “new” reservoir that did not originally contain oil, but was ideally suited as a trap for fluids leaking from the neighboring spill-point. With continued pressure drop, oil in the new reservoir increased as more oil entered the reservoir and expanded, liberating gas from solution. Because of the limited volume available for oil and gas in that temporary trap, oil and gas also escaped from it into the surrounding formation. I also note that some of the reservoirs demonstrate time-lapse changes only in the “gas cap” and not in the oil zone, even though gas must be coming out of solution everywhere in the reservoir. This is explained by the interplay between pore-fluid modulus reduction caused by decreasing gas saturation and dry-frame modulus increase caused by frame stiffening. In the second part of this work, I examine various rock-physics models in an attempt to quantitatively account for frame-stiffening that results from reduced pore-fluid pressure in the producing reservoir, searching for a model that would predict the unusual AVO features observed in the time-lapse prestack and stacked data at Teal South. While several rock-physics models are successful at predicting the time-lapse response for initial production, most fail to match the observations for continued production between Phase I and Phase II.
Because the reservoir was initially overpressured and unconsolidated, reservoir compaction was likely significant, and is probably accomplished largely by uniaxial strain in the vertical direction; this implies that an anisotropic model may be required. Using Walton’s model for anisotropic unconsolidated sand, I successfully model the time-lapse changes for all phases of production. This observation may be of interest for application to other unconsolidated overpressured reservoirs under production.
Abstract:
Background Cardiac arrests are handled by teams rather than by individual health-care workers. Recent investigations demonstrate that adherence to CPR guidelines can be less than optimal, that deviations from treatment algorithms are associated with lower survival rates, and that deficits in performance are associated with shortcomings in the process of team-building. The aim of this study was to explore and quantify the effects of ad-hoc team-building on adherence to the algorithms of CPR among two types of physicians who play an important role as first responders during CPR: general practitioners and hospital physicians. Methods To unmask the effects of team-building, this prospective randomised study compared the performance of preformed teams, i.e. teams that had undergone their process of team-building prior to the onset of a cardiac arrest, with that of teams that had to form ad-hoc during the cardiac arrest. 50 teams consisting of three general practitioners each and 50 teams consisting of three hospital physicians each were randomised to two different versions of a simulated witnessed cardiac arrest: the arrest occurred either in the presence of only one physician while the remaining two physicians were summoned to help ("ad-hoc"), or it occurred in the presence of all three physicians ("preformed"). All scenarios were videotaped and performance was analysed post-hoc by two independent observers. Results Compared to preformed teams, ad-hoc forming teams had less hands-on time during the first 180 seconds of the arrest (93 ± 37 vs. 124 ± 33 sec, P < 0.0001), delayed their first defibrillation (67 ± 42 vs. 107 ± 46 sec, P < 0.0001), and made fewer leadership statements (15 ± 5 vs. 21 ± 6, P < 0.0001). Conclusion Hands-on time and time to defibrillation, two performance markers of CPR with proven relevance for medical outcome, are negatively affected by shortcomings in the process of ad-hoc team-building, particularly by deficits in leadership.
Team-building must thus be regarded as an additional task imposed on teams forming ad-hoc during CPR. All physicians should be aware that early structuring of one's own team is a prerequisite for timely and effective execution of CPR.
Abstract:
The effects of electron beam irradiation, anaerobic packaging, and storage times on the aroma of raw ground beef patties were investigated. Patties were coarse ground at three days postmortem, and then fine ground and packaged at three, six, and nine days postmortem. Patties were irradiated immediately after packaging, or three days after packaging, at 2 kGy, and then stored at 2.5 °C ± 1.5 °C for four days. Non-irradiated controls were held under similar conditions. After four days of storage for each postmortem time (three, six, and nine days), sensory aroma evaluations were performed on all samples. Irradiated and non-irradiated patties with the shortest postmortem storage times had the most desirable aroma scores. Controls had significantly (p ≤ .05) more desirable aroma scores than irradiated patties.
Abstract:
The purpose of the present study was to investigate whether amnesic patients show a bivalency effect. The bivalency effect refers to the performance slowing that occurs when switching tasks and bivalent stimuli appear occasionally among univalent stimuli. According to the episodic context binding account, bivalent stimuli create a conflict-loaded context that is re-activated on subsequent trials; the effect is thus assumed to depend on memory binding processes. Given the profound memory deficit in amnesia, we hypothesized that the bivalency effect would be largely reduced in amnesic patients. We tested sixteen severely amnesic patients and a control group with a paradigm requiring predictable alternations between three simple cognitive tasks, with bivalent stimuli occasionally occurring on one of these tasks. The results showed the typical bivalency effect for the control group, that is, a generalized slowing for each task. In contrast, for amnesic patients, only a short-lived slowing was present on the task that followed immediately after a bivalent stimulus, indicating that the binding between tasks and context was impaired in amnesic patients.
Abstract:
People often use tools to search for information. In order to improve the quality of an information search, it is important to understand how internal information, stored in the user's mind, and external information, represented by the interface of a tool, interact with each other. How information is distributed between internal and external representations significantly affects information search performance. However, few studies have examined the relationship between interface types and search task types in the context of information search. For a distributed information search task, how data are distributed, represented, and formatted significantly affects user search performance in terms of response time and accuracy. Guided by UFuRT (User, Function, Representation, Task), a human-centered process, I propose a search model and a task taxonomy. The model defines its relationship with other existing information models. The taxonomy clarifies the legitimate operations for each type of search task over relational data. Based on the model and taxonomy, I have also developed interface prototypes for the search tasks of relational data. These prototypes were used for experiments. The experiments described in this study are of a within-subject design with a sample of 24 participants recruited from the graduate schools located in the Texas Medical Center. Participants performed one-dimensional nominal search tasks over nominal, ordinal, and ratio displays, and searched one-dimensional nominal, ordinal, interval, and ratio tasks over table and graph displays. Participants also performed the same task and display combinations for two-dimensional searches. Distributed cognition theory has been adopted as a theoretical framework for analyzing and predicting the search performance of relational data. It has been shown that the representation dimensions and data scales, as well as the search task types, are the main factors determining search efficiency and effectiveness.
In particular, the more external representations used, the better the search task performance, and the results suggest that ideal search performance occurs when the question type and the corresponding data-scale representation match. The implications of the study lie in contributing to the effective design of search interfaces for relational data, especially laboratory results, which are often used in healthcare activities.
Abstract:
This study investigated the empirical differentiation of prospective memory, executive functions, and metacognition and their structural relationships in 119 elementary school children (M = 95 months, SD = 4.8 months). These cognitive abilities share many characteristics on the theoretical level and are all highly relevant in many everyday contexts when intentions must be executed. Nevertheless, their empirical relationships have not been examined on the latent level, although an empirical approach would contribute to our knowledge concerning the differentiation of cognitive abilities during childhood. We administered a computerized event-based prospective memory task, three executive function tasks (updating, inhibition, shifting), and a metacognitive control task in the context of spelling. Confirmatory factor analysis revealed that the three cognitive abilities are already empirically differentiable in young elementary school children. At the same time, prospective memory and executive functions were found to be strongly related, and there was also a close link between prospective memory and metacognitive control. Furthermore, executive functions and metacognitive control were marginally significantly related. The findings are discussed within a framework of developmental differentiation and conceptual similarities and differences.
Abstract:
It is well accepted that the hippocampus (HIP) is important for spatial and contextual memories; however, it is not clear whether the entorhinal cortex (EC), the main input/output structure for the hippocampus, is also necessary for memory storage. Damage to the EC in humans results in memory deficits. However, animal studies report conflicting results on whether the EC is necessary for spatial and contextual memory. Memory consolidation requires gene expression and protein synthesis, mediated by signaling cascades and transcription factors. Extracellular signal-regulated kinase (ERK) cascade activity is necessary for long-term memory in several tasks, including those that test spatial and contextual memory. In this work, we explore the role of ERK-mediated plasticity in the EC in spatial and contextual memory. To evaluate this role, post-training infusions of reversible pharmacological inhibitors specific for the ERK cascade, which do not affect normal neuronal activity, were targeted directly to the EC of awake, behaving animals. This technique provides spatial and temporal control over the inhibition of the ERK cascade without affecting performance during training or testing. Using the Morris water maze to study spatial memory, we found that ERK inhibition in the EC resulted in long-term memory deficits consistent with a loss of spatial-strategy information. When animals were allowed to learn and consolidate a spatial strategy for solving the task prior to training and ERK inhibition, the deficit was alleviated. To study contextual memory, we trained animals in a cued fear-conditioning task and observed an increase in the activation of ERK in the EC 90 minutes after training. ERK inhibition in the EC over this time point, but not at an earlier time point, resulted in increased freezing to the context, but not to the tone, during a 48-hour retention test.
In addition, animals froze maximally at the time the shock was given during training, similar to naïve animals receiving additional training, suggesting that ERK-mediated plasticity in the EC normally suppresses the temporal nature of the freezing response. These findings demonstrate that plasticity in the EC is necessary for both spatial and contextual memory, specifically in the retention of behavioral strategies.
Abstract:
Considering the broader context of school reform that is seeking education strategies that might deliver substantial impact, this article examines four questions related to the policy and practice of expanding learning time: (a) why do educators find the standard American school calendar insufficient to meet students’ educational needs, especially those of disadvantaged students? (b) how do educators implement a longer day and/or year, addressing concerns about both educational quality and costs? (c) what does research report about outcomes of expanding time in schools? and (d) what are the future prospects for increasing the number of expanded-time schools? The paper examines these questions by considering research, policy, and practice at the national level and, throughout, by drawing upon additional evidence from Massachusetts, one of the leading states in the expanded-time movement. In considering the latter two questions, the article explores the knowns and unknowns related to expanded learning time and offers suggestions for further research.
Abstract:
This paper presents some fundamental properties of independent and-parallelism and extends its applicability by enlarging the class of goals eligible for parallel execution. A simple model of (independent) and-parallel execution is proposed, and issues of correctness and efficiency are discussed in the light of this model. Two conditions, "strict" and "non-strict" independence, are defined and then proved sufficient to ensure the correctness and efficiency of parallel execution: if goals which meet these conditions are executed in parallel, the solutions obtained are the same as those produced by standard sequential execution. Also, in the absence of failure, the parallel proof procedure does not generate any additional work (with respect to standard SLD-resolution), while the actual execution time is reduced. Finally, in case of failure of any of the goals, no slowdown will occur. For strict independence the results are shown to hold independently of whether the parallel goals execute in the same environment or in separate environments. In addition, a formal basis is given for the automatic compile-time generation of independent and-parallelism: compile-time conditions to efficiently check goal independence at run time are proposed and proved sufficient. Also, rules are given for constructing simpler conditions if information regarding the binding context of the goals to be executed in parallel is available to the compiler.
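The notion of strict independence can be phrased operationally: two goals may safely run in parallel if they share no run-time variables, so neither can affect the other's bindings. A toy sketch in Python; the goal representation as explicit variable sets is invented for illustration, whereas real systems derive such sets from compile-time analysis plus cheap run-time checks:

```python
def strictly_independent(goal_vars_a, goal_vars_b):
    """Strict independence check: the two goals share no variables,
    so parallel execution cannot interfere with either goal's bindings."""
    return goal_vars_a.isdisjoint(goal_vars_b)

# p(X, Y) and q(Z): no shared variables -> eligible for parallel execution
print(strictly_independent({"X", "Y"}, {"Z"}))       # True
# p(X, Y) and q(Y, W): both may bind Y -> must run sequentially
print(strictly_independent({"X", "Y"}, {"Y", "W"}))  # False
```

Non-strict independence relaxes this by also admitting shared variables that the parallel goals cannot both bind, which is the enlargement of the eligible class the paper develops.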
Abstract:
This Thesis presents two related lines of research that contribute to the fields of Human-Technology (or Machine) Interaction (HTI or HMI), computational linguistics, and user experience evaluation. The two lines in question are the design and the user-centred evaluation of advanced Human-Machine Interaction systems. The first part of the Thesis (Chapters 2 to 4) addresses fundamental questions in the design of advanced HMI systems. Chapter 2 presents an overview of the state of the art of research on multimodal conversational systems, which frames the research work presented in the rest of the Thesis. Chapters 3 and 4 focus on two major aspects of HMI design: a generalised dialogue manager for handling context-aware multimodal Human-Machine Interaction, and the use of embodied conversational agents (ECAs) to improve dialogue robustness, respectively. Chapter 3, on dialogue management, deals with handling the heterogeneity of information coming from the communication modalities and from external sensors. At a high level of abstraction, this chapter proposes an architecture for dialogue management with heterogeneous inflows of information, built on State Chart XML. Chapter 4 presents a contribution to the internal representation of communicative intentions, and their translation into gesture sequences to be executed by an ECA, designed specifically to improve robustness in critical dialogue situations that may arise, for example, when understanding errors occur in the communication between the human user and the machine. An extension of the Functional Mark-up Language defined within the SAIBA conceptual framework is proposed here.
This extension makes it possible to represent communicative acts that realise intentions of the sender (the machine) that are not meant to be consciously perceived by the receiver (the human user), but that are intended to influence the receiver and the course of the dialogue. This is achieved through an object called the Communication Intention Base (CIB). Representing "non-declared" intentions in the CIB, in addition to the explicit ones, allows the construction of communicative acts that realise several communicative intentions simultaneously. Chapter 4 also describes an experimental system for the (simulated) remote control of a home automation assistant, with speaker authentication to grant access, and with an ECA in the interface for each of these tasks. A description is included of the verbal and non-verbal behaviour sequences of the ECAs, which were designed specifically for certain situations in order to improve dialogue robustness. Chapters 5 to 7 make up the part of the Thesis devoted to evaluation. Chapter 5 reviews relevant precedents in the literature on information technologies in general, and on spoken interaction systems in particular. The main precedents in the field of interaction evaluation on which the work presented in this Thesis builds are the Technology Acceptance Model (TAM), the Subjective Assessment of Speech System Interfaces (SASSI) tool, and ITU-T Recommendation P.851. Chapter 6 describes an evaluation framework and methodology applied to the user experience with multimodal HMI systems. For this purpose, a novel framework was developed for the subjective evaluation of the quality of the user experience and its relation to user acceptance of HMI technology (the Subjective Quality Evaluation Framework).
This framework articulates a class structure of subjective factors related to user satisfaction and acceptance of the proposed HMI technology. As proposed in the present Thesis, this structure has two orthogonal dimensions. First, three broad classes of parameters related to user acceptance are identified: likeability (those that have to do with the experience of use, without entering into judgements of usefulness), rejection (which can only have a negative valence), and perceived usefulness. Second, this set of classes is reproduced for different "levels, or foci, of user perception". These include, at a minimum, a level of overall assessment of the system, levels corresponding to the tasks to be performed and the goals to be achieved, and an interface level (in the cases proposed in this Thesis, the interface is a dialogue system with or without an ECA). Chapter 7 presents an empirical evaluation of the system described in Chapter 4. The study builds on the aforementioned precedents in the literature, extended with parameters for the specific study of the animated agents (the ECAs), the users' self-assessment of their emotions, and certain rejection factors (specifically, concerns about privacy and security). The subjective quality evaluation framework proposed in the previous chapter is also evaluated. The factor analyses performed reveal a parameter structure conceptually very close to the usefulness-likeability-rejection class division proposed in that framework, a result that lends the framework some empirical validity. Analyses based on linear regressions reveal structures of dependencies and interrelations between the subjective and objective parameters considered.
The central mediation effect described in the Technology Acceptance Model, of perceived usefulness on the dependency relationship between intention to use and perceived ease of use, is confirmed in the study presented in this Thesis. Moreover, in the particular study presented here, this structure of relationships was found to strengthen when the variables considered are generalised to cover more broadly the likeability and usefulness categories contemplated in the subjective quality evaluation framework. It was also observed that the rejection factors appear as a component of their own in the factor analyses, and are further distinguished by their behaviour: they moderate the relationship between intention to use (the main indicator of user acceptance) and its strongest predictor, perceived usefulness. Results of lesser importance are also presented concerning the effects of ECAs on the interfaces of dialogue systems and on the perception parameters and user judgements that play a role in shaping user acceptance of the technology. Although slightly better dialogue interaction performance is observed with ECAs, subjective opinions are very similar between the two experimental groups (one interacting with a dialogue system with an ECA, the other without). Among the small differences found between the two groups, the following stand out: in the experimental group without an ECA (that is, with a voice-only interface), a more direct effect of dialogue problems (for example, recognition errors) on the perception of robustness was observed, whereas the group with an ECA had a more positive emotional response when problems occurred. The ECAs appear to generate higher initial expectations regarding the system's capabilities, and users in this group report being more self-confident in their interaction.
Finally, some indications of social effects of the ECAs are observed: the perceived "friendliness" of the ECAs was correlated with an increase in security concerns. Likewise, users of the system with ECAs tended more to blame themselves, rather than the system, for any dialogue problems that arose, while a slight opposite tendency was observed among users of the voice-only system. ABSTRACT This Thesis presents two related lines of research work contributing to the general fields of Human-Technology (or Machine) Interaction (HTI, or HMI), computational linguistics, and user experience evaluation. These two lines are the design and user-focused evaluation of advanced Human-Machine (or Technology) Interaction systems. The first part of the Thesis (Chapters 2 to 4) is centred on advanced HMI system design. Chapter 2 provides a background overview of the state of research in multimodal conversational systems. This sets the stage for the research work presented in the rest of the Thesis. Chapters 3 and 4 focus on two major aspects of HMI design in detail: a generalised dialogue manager for context-aware multimodal HMI, and embodied conversational agents (ECAs, or animated agents) to improve dialogue robustness, respectively. Chapter 3, on dialogue management, deals with how to handle information heterogeneity, whether from the communication modalities or from external sensors. A highly abstracted architectural contribution based on State Chart XML is proposed. Chapter 4 presents a contribution for the internal representation of communication intentions and their translation into gestural sequences for an ECA, especially designed to improve robustness in critical dialogue situations such as when miscommunication occurs. We propose an extension of the functionality of Functional Mark-up Language, as envisaged in much of the work in the SAIBA framework.
Our extension allows the representation of communication acts that carry intentions that are not for the interlocutor to know of, but which are made to influence him or her as well as the flow of the dialogue itself. This is achieved through a design element we have called the Communication Intention Base. Such representation of "non-declared" intentions allows the construction of communication acts that carry several communication intentions simultaneously. Also in Chapter 4, an experimental system is described which allows (simulated) remote control of a home automation assistant, with biometric (speaker) authentication to grant access, featuring embodied conversational agents for each of the tasks. The discussion includes a description of the behavioural sequences for the ECAs, which were designed for specific dialogue situations with particular attention given to the objective of improving dialogue robustness. Chapters 5 to 7 form the evaluation part of the Thesis. Chapter 5 reviews evaluation approaches in the literature for information technologies, as well as for speech-based interaction systems in particular, that are useful precedents to the contributions of the present Thesis. The main evaluation precedents on which the work in this Thesis has built are the Technology Acceptance Model (TAM), the Subjective Assessment of Speech System Interfaces (SASSI) tool, and ITU-T Recommendation P.851. Chapter 6 presents the author’s work in establishing an evaluation framework and methodology applied to the users’ experience with multimodal HMI systems. A novel user-acceptance Subjective Quality Evaluation Framework was developed by the author specifically for this purpose. A class structure arises from two orthogonal sets of dimensions.
First, we identify three broad classes of parameters related to user acceptance: likeability factors (those that have to do with the experience of using the system), rejection factors (which can only have a negative valence), and perception of usefulness. Secondly, the class structure is further broken down into several "user perception levels"; at the very least: an overall system-assessment level, task- and goal-related levels, and an interface level (e.g., a dialogue system with or without an ECA). An empirical evaluation of the system described in Chapter 4 is presented in Chapter 7. The study was based on the abovementioned precedents in the literature, expanded with categories covering the inclusion of an ECA, the users' self-assessed emotions, and particular rejection factors (privacy and security concerns). The Subjective Quality Evaluation Framework proposed in the previous chapter was also scrutinised. Factor analyses revealed an item structure closely related, conceptually, to the usefulness-likeability-rejection class division introduced above, thus giving it some empirical weight. Regression-based analysis revealed structures of dependencies, paths of interrelations, between the subjective and objective parameters considered. The central mediation effect in the Technology Acceptance Model, of perceived usefulness on the dependency relationship of intention-to-use with perceived ease of use, was confirmed in this study. Furthermore, the pattern of relationships was stronger for variables covering the likeability and usefulness categories of the Subjective Quality Evaluation Framework more broadly. Rejection factors were found to have a distinct presence as components in factor analyses, as well as distinct behaviour: they were found to moderate the relationship between intention-to-use (the main measure of user acceptance) and its strongest predictor, perceived usefulness.
Insights of secondary importance are also given regarding the effect of ECAs on the interface of spoken dialogue systems, and the dimensions of user perception and judgement attitude that may play a role in determining user acceptance of the technology. Despite slightly better performance values observed for the system with the ECA, subjective opinions regarding both systems were, overall, very similar. Minor differences between the two experimental groups (one interacting with an ECA, the other only through speech) include a more direct effect of dialogue problems (e.g., non-understandings) on perceived dialogue robustness for the voice-only test group, and a more positive emotional response for the ECA test group. Our findings further suggest that the ECA generates higher initial expectations, and users seem slightly more confident in their interaction with the ECA than those interacting without it. Finally, mild evidence of social effects of ECAs was also found: the perceived friendliness of the ECA increased security concerns, and ECA users may tend to blame themselves rather than the system when dialogue problems are encountered, while the opposite may be true for voice-only users.
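The moderation finding above (rejection factors weakening the link between perceived usefulness and intention-to-use) is typically tested with a moderated regression that includes an interaction term. The following is a minimal sketch of that analysis on synthetic data, not the thesis data; all variable names and effect sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Hypothetical standardised questionnaire scores (synthetic, not thesis data).
usefulness = rng.normal(size=n)   # perceived usefulness
rejection = rng.normal(size=n)    # a rejection factor, e.g. security concern

# Simulate a moderation effect: the usefulness -> intention slope
# weakens as the rejection score grows (negative interaction term).
intention = (0.6 * usefulness
             - 0.2 * rejection
             - 0.3 * usefulness * rejection
             + rng.normal(scale=0.5, size=n))

# Moderated regression: intention ~ usefulness + rejection + interaction.
X = np.column_stack([np.ones(n), usefulness, rejection,
                     usefulness * rejection])
beta, *_ = np.linalg.lstsq(X, intention, rcond=None)

# A clearly non-zero interaction coefficient indicates moderation:
# the effect of usefulness on intention depends on the rejection score.
print(f"interaction coefficient estimate: {beta[3]:.2f}")
```

In practice such coefficients are reported with significance tests (e.g. via a statistics package), but the structure of the model is as shown.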
Resumo:
As embedded systems evolve, problems inherent to the technology become important limitations. In less than ten years, chips will exceed the maximum allowed power consumption, affecting performance: even though the resources available per chip keep increasing, operating frequency has stalled. Moreover, as the level of integration increases, it becomes difficult to keep defect density under control, so new fault-tolerance techniques are required. In this demo work, a new dynamically adaptable virtual architecture (ARTICo3), which allows a dynamic and context-aware use of resources, is implemented on a high-performance wireless sensor node (HiReCookie) to perform an image processing application.
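The trade-off such an architecture manages can be pictured as deciding, at run time, how a fixed pool of reconfigurable slots is spent on parallelism, redundancy, or energy savings. The sketch below is a hypothetical illustration of that decision logic, not the actual ARTICo3 API; the mode names and slot model are assumptions for illustration.

```python
# Hypothetical sketch of context-aware slot allocation on a
# reconfigurable fabric (illustrative only, not the ARTICo3 interface).

def allocate_slots(total_slots: int, mode: str) -> dict:
    """Decide how to use a fixed pool of reconfigurable slots.

    - "performance": every slot processes a different data block in parallel.
    - "fault_tolerant": slots are grouped in triples for triple modular
      redundancy (TMR), with a voter comparing the three results.
    - "low_power": a single slot stays active; the rest are switched off.
    """
    if mode == "performance":
        return {"parallel_units": total_slots, "redundancy": 1}
    if mode == "fault_tolerant":
        return {"parallel_units": total_slots // 3, "redundancy": 3}
    if mode == "low_power":
        return {"parallel_units": 1, "redundancy": 1}
    raise ValueError(f"unknown mode: {mode}")

print(allocate_slots(6, "fault_tolerant"))
# -> {'parallel_units': 2, 'redundancy': 3}
```

The point of making this decision dynamic is that the same hardware can favour throughput for image processing and switch to TMR when fault tolerance matters more.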
Resumo:
Hippocampal slices are used to show that, as a temporal input pattern of activity flows through a neuronal layer, a temporal-to-spatial transformation takes place. That is, neurons can respond selectively to the first or second of a pair of input pulses, thus transforming different temporal patterns of activity into the activity of different neurons. This is demonstrated using associative long-term potentiation of polysynaptic CA1 responses as an activity-dependent marker: by depolarizing a postsynaptic CA1 neuron exclusively with the first or second of a pair of pulses from the dentate gyrus, it is possible to “tag” different subpopulations of CA3 neurons. This technique allows sampling of a population of neurons without recording simultaneously from multiple neurons. Furthermore, it reflects a biologically plausible mechanism by which single neurons may develop selective responses to time-varying stimuli and permits the induction of context-sensitive synaptic plasticity. These experimental results support the view that networks of neurons are intrinsically able to process temporal information and that it is not necessary to invoke the existence of internal clocks or delay lines for temporal processing on the time scale of tens to hundreds of milliseconds.
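The core idea of the temporal-to-spatial transformation can be captured by a deliberately simple toy model, not the authors' biophysical experiment: with short-term synaptic dynamics (depression in one pathway, facilitation in another), one unit fires only on the first pulse of a train and another only from the second pulse on, so different points in a temporal pattern activate different neurons.

```python
# Toy illustration (an assumption-laden caricature, not the slice experiment):
# neuron A's input synapse depresses after one use, so A fires only on the
# first pulse; neuron B's input needs prior facilitation, so B fires only
# from the second pulse onwards.

def responses(n_pulses: int) -> list[str]:
    """Return which of two model neurons fire on each pulse of a train."""
    fired = []
    depressed = False    # state of neuron A's depressing synapse
    facilitated = False  # state of neuron B's facilitating synapse
    for _ in range(n_pulses):
        active = []
        if not depressed:
            active.append("A")   # responds to the first pulse only
            depressed = True
        if facilitated:
            active.append("B")   # responds from the second pulse on
        facilitated = True
        fired.append("+".join(active) or "-")
    return fired

print(responses(2))  # ['A', 'B']: the pair of pulses "tags" different neurons
```

The mapping from pulse position to neuron identity is exactly the temporal-to-spatial transformation the abstract describes, here reduced to two state variables.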