928 results for "user study"
Abstract:
The process of developing software that takes advantage of multiple processors is commonly referred to as parallel programming. For various reasons, this process is much harder than the sequential case. For decades, parallel programming was a problem for only a small niche: engineers parallelizing mostly numerical applications in High Performance Computing. This has changed with the advent of multi-core processors in mainstream computer architectures. Today, parallel programming is a problem for a much larger group of developers. The main objective of this thesis was to find ways to make parallel programming easier for them. Three aims were identified in order to reach this objective: research the state of the art of parallel programming, improve the education of software developers on the topic, and provide programmers with powerful abstractions to make their work easier. To reach these aims, several key steps were taken. To start with, a survey was conducted among parallel programmers to establish the state of the art. More than 250 people participated, yielding results about the parallel programming systems and languages in use, as well as about common problems with these systems. Furthermore, a study was conducted in university classes on parallel programming. It resulted in a list of frequently made mistakes that were analyzed and used to create a programmers' checklist for avoiding them in the future. For programmers' education, an online resource was set up to collect experience and knowledge in the field of parallel programming, called the Parawiki. Another key step in this direction was the creation of the Thinking Parallel weblog, where more than 50,000 readers to date have read essays on the topic. For the third aim (powerful abstractions), it was decided to concentrate on one parallel programming system: OpenMP. Its ease of use and high level of abstraction were the most important reasons for this decision.
Two different research directions were pursued. The first one resulted in a parallel library called AthenaMP. It contains so-called generic components, derived from design patterns for parallel programming. These include functionality to enhance the locks provided by OpenMP, to perform operations on large amounts of data (data-parallel programming), and to enable the implementation of irregular algorithms using task pools. AthenaMP itself serves a triple role: the components are well-documented and can be used directly in programs, it enables developers to study the source code and learn from it, and it is possible for compiler writers to use it as a testing ground for their OpenMP compilers. The second research direction was targeted at changing the OpenMP specification to make the system more powerful. The main contributions here were a proposal to enable thread-cancellation and a proposal to avoid busy waiting. Both were implemented in a research compiler, shown to be useful in example applications, and proposed to the OpenMP Language Committee.
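The task-pool component described above follows a widely documented pattern: workers draw tasks from a shared pool, and processing a task may spawn new tasks, which suits irregular algorithms. As a rough illustration of the pattern only (this is not AthenaMP code, which targets C++ and OpenMP; the Python stand-in and all names here are hypothetical):

```python
# Hypothetical sketch of the task-pool pattern for irregular algorithms.
# NOT AthenaMP code; a Python stand-in for the C++/OpenMP original.
import threading
from queue import Queue

def run_task_pool(initial_tasks, work_fn, num_workers=4):
    """Run work_fn over a shared pool of tasks. work_fn returns
    (result, new_tasks); spawning new tasks mid-run is what makes
    the pattern suit irregular algorithms such as tree traversals."""
    tasks = Queue()
    for t in initial_tasks:
        tasks.put(t)
    results, lock = [], threading.Lock()

    def worker():
        while True:
            task = tasks.get()
            if task is None:              # poison pill: shut down
                tasks.task_done()
                return
            result, new_tasks = work_fn(task)
            with lock:
                results.append(result)
            for nt in new_tasks:          # enqueue before task_done,
                tasks.put(nt)             # so join() cannot fire early
            tasks.task_done()

    threads = [threading.Thread(target=worker) for _ in range(num_workers)]
    for th in threads:
        th.start()
    tasks.join()                          # wait until all tasks processed
    for _ in threads:
        tasks.put(None)
    for th in threads:
        th.join()
    return results

# Example: sum the values of an irregular tree given as (value, children).
def visit(node):
    value, children = node
    return value, children

tree = (1, [(2, []), (3, [(4, [])])])
total = sum(run_task_pool([tree], visit))  # 1 + 2 + 3 + 4 == 10
```

The ordering inside the worker matters: new tasks are enqueued before `task_done()` is called on their parent, so the queue's unfinished-task count can never reach zero while spawned work is still pending.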
Abstract:
The use of fibre-reinforced polymer (FRP) bars is proposed as an effective alternative to traditional steel-reinforced concrete structures, which suffer from corrosion in aggressive environments. Acceptance of these materials in the construction industry is conditional on an understanding of their structural behaviour. This work studies the bond behaviour between FRP bars and concrete through two experimental programmes. The first characterizes the bond between FRP bars and concrete by means of pull-out tests; the second studies the cracking process of concrete ties reinforced with GFRP bars under direct tension testing. The work concludes with the development of a numerical model for simulating the behaviour of reinforced concrete elements under tensile loads. The model's flexibility makes it a suitable tool for a parametric study of the variables that influence the cracking process.
Abstract:
In recent years, Artificial Intelligence has contributed to solving problems encountered in the tasks performed by computing units, whether the computers are distributed so as to interact with one another or operate in any other environment (Distributed Artificial Intelligence). Information Technologies enable the creation of novel solutions to specific problems by applying findings from diverse research areas. Our work is directed at building user models through a multidisciplinary approach, employing principles from psychology, distributed artificial intelligence and machine learning to create user models for open environments; one such environment is Ambient Intelligence based on user models with incremental and distributed learning functions (known as Smart User Models). Building on these user models, we direct this research at acquiring the user characteristics that are important and that determine the user's dominant scale of values in the topics that interest them most, developing a methodology to obtain the user's Scale of Human Values with respect to their objective, subjective and emotional characteristics (particularly in Recommender Systems). One area that has received little research attention is the inclusion of the scale of human values in information systems. Recommender systems, user models and information systems take only the user's preferences and emotions into account [Velásquez, 1996, 1997; Goldspink, 2000; Conte and Paolucci, 2001; Urban and Schmidt, 2001; Dal Forno and Merlone, 2001, 2002; Berkovsky et al., 2007c]. Therefore, the main focus of our research is the creation of a methodology that allows a scale of human values to be generated for the user from the user model.
We present results from a case study using objective, subjective and emotional characteristics in the areas of banking and restaurant services, in which the methodology proposed in this research was put to the test. The main contribution of this thesis is the development of a methodology that, given a user model with objective, subjective and emotional attributes, obtains the user's Scale of Human Values. The proposed methodology builds on existing applications, in which all the connections between users, agents and domains are characterized by these particularities and attributes; therefore, no extra effort is required from the user.
Abstract:
The primary objective of this study was to document the benefits and possible detriments of combining ipsilateral acoustic hearing in the cochlear implant ear of a patient with preserved low-frequency residual hearing after cochlear implantation. The secondary aim was to examine the efficacy of various cochlear implant mapping and hearing aid fitting strategies in relation to electro-acoustic benefits.
Abstract:
This paper describes the user modeling component of EPIAIM, a consultation system for data analysis in epidemiology. The component is aimed at representing knowledge of concepts in the domain, so that their explanations can be adapted to user needs. The first part of the paper describes two studies aimed at analysing user requirements. The first is a questionnaire study which examines the respondents' familiarity with concepts. The second is an analysis of concept descriptions in textbooks and from expert epidemiologists, which examines how discourse strategies are tailored to the level of experience of the expected audience. The second part of the paper describes how the results of these studies were used to design the user modeling component of EPIAIM. This module works in two steps. In the first step, a few trigger questions allow the activation of a stereotype that includes a "body" and an "inference component". The body represents the body of knowledge that a class of users is expected to know, along with the probability that each item of knowledge is known. In the inference component, the process of learning concepts is represented as a belief network. Hence, in the second step the belief network is used to refine the initial default information in the stereotype's body. This is done by asking a few questions about those concepts for which it is uncertain whether or not they are known to the user, and propagating this new evidence to revise the whole situation. The system has been implemented on a workstation under UNIX. An example of the system in operation is presented, and the advantages and limitations of the approach are discussed.
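The two-step mechanism described above (stereotype priors, then targeted questions whose answers are propagated to related concepts) can be sketched in miniature. This is an illustrative stand-in, not EPIAIM's belief network: the concept names, the priors and the single propagation weight are all hypothetical, and the "propagation" is a crude nudge rather than full belief-network inference.

```python
# Simplified sketch of the stereotype-plus-refinement idea described
# above. NOT the EPIAIM system: concepts, priors and the propagation
# rule are hypothetical illustrations only.

def most_uncertain(beliefs, k=2):
    """Pick the k concepts whose 'known' probability is closest to 0.5,
    i.e. the ones most worth asking the user about."""
    return sorted(beliefs, key=lambda c: abs(beliefs[c] - 0.5))[:k]

def propagate(beliefs, links, concept, known):
    """Clamp the queried concept to the answer, then nudge linked
    concepts toward the observed evidence (a crude stand-in for
    propagation in a belief network)."""
    beliefs[concept] = 1.0 if known else 0.0
    target = 1.0 if known else 0.0
    for neighbour, weight in links.get(concept, []):
        beliefs[neighbour] += weight * (target - beliefs[neighbour])
    return beliefs

# Step 1: stereotype priors for a hypothetical class of users.
beliefs = {"odds_ratio": 0.5, "relative_risk": 0.6, "confounding": 0.4}
# Pairwise links: knowing one concept raises belief in related ones.
links = {"odds_ratio": [("relative_risk", 0.5)]}

# Step 2: query the most uncertain concept and propagate the answer.
for concept in most_uncertain(beliefs, k=1):
    propagate(beliefs, links, concept, known=True)  # user answers "yes"
```

With these numbers the system asks about `odds_ratio` (probability exactly 0.5), clamps it to 1.0 on a "yes", and raises `relative_risk` from 0.6 toward 1.0 while leaving the unlinked `confounding` untouched.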
Abstract:
Physical rehabilitation after brain injury and stroke is a time-consuming and costly process. Over the past decade, several studies have looked at the use of highly sophisticated technologies, such as robotics and virtual reality, to meet the needs of clinicians and patients. While such technologies can be valuable tools for facilitating intensive movement practice in a motivating and engaging environment, the success of therapy also depends on self-administered therapy beyond the hospital stay. With the emergence of low-cost gaming consoles such as the Nintendo Wii, new opportunities arise for home-therapy paradigms centred on social interactions and values, which could reduce the sense of isolation and other depression-related complications. In this paper we examine the potential, user acceptance and usability of an unmodified Nintendo Wii gaming console as a low-cost treatment alternative to complement current rehabilitation programmes.
Abstract:
Meeting the demand for independent living from the increasing number of older people presents a major challenge for society, government and the building industry. Older people's experience of disabling conditions can be affected by the design and layout of their accommodation. Adaptations and assistive technology (AT) are a major way of addressing this gap between functional capacity and the built environment. Properties differ widely in their degree of adaptability and in the average cost of adaptation, and there is major variation within property type. Based on a series of user profiles, it was found that a comprehensive package of adaptations and AT is likely to result in significant economies arising from a reduced need for formal care services. This finding is sensitive to assumptions about how long an individual would use the adaptations and AT, as well as to the input of informal care and the nature of the accommodation. The present study, which focused on social housing, has implications for how practitioners specify ways of meeting individual needs, as well as providing a case to support the substantial increase in demand for specialist adaptation work.
Abstract:
The paper is an investigation of the exchange of ideas and information between an architect and building users in the early stages of the building design process, before the design brief or any drawings have been produced. The purpose of the research is to gain insight into the type of information users exchange with architects in early design conversations, and to better understand the influence that the format of design interactions and interactional behaviours have on the exchange of information. We report an empirical study of pre-briefing conversations in which the overwhelming majority of the exchanges were about the functional or structural attributes of space; discussion that touched on the phenomenological, perceptual and symbolic meanings of space was rare. We explore the contextual features of the meetings and the conversational strategies the architect took to prompt the users for information, and the influence these had on the information provided. Recommendations are made on the format and structure of pre-briefing conversations and on designers' strategies for raising the level of information provided by the user beyond the functional or structural attributes of space.
Abstract:
Research in the late 1980s showed that many corporate real estate users were not fully aware of the full extent of their property holdings. In many cases, not only was the value of the holdings unknown, but there was uncertainty over the actual extent of ownership within the portfolio. This led a large number of corporate occupiers to review their property holdings during the 1990s, initially to create a definitive asset register, but also to benefit from a more efficient use of space. Good management of corporately owned property assets is as important as the management of the other principal resources within the company. A comprehensive asset register can be seen as the first step towards a rational property audit. For the effective, efficient and economic delivery of services, it is vital that all property holdings are utilised to best advantage. This requires that the property provider and the property user are both fully conversant with the value of the property holding, and that an asset/internal rent charge is made accordingly. The advantages of internal rent charging are twofold. First, it requires the occupying department to "contribute" an amount to the business equivalent to the open market rental value of the space it occupies. This prevents space being treated as a free good and, as individual profit centres, each department will then rationalise its holdings to minimise its costs. The second advantage is strategic. By charging an asset rent, the holding department can identify the performance of its real estate holdings. This can then be compared to an internal or external benchmark to help determine whether the company has adopted the most efficient tenure pattern for its properties. This paper investigates the use of internal rents by UK-based corporate businesses and explains internal rents as a form of transfer pricing in the context of management and responsibility accounting.
The research finds that the majority of charging organisations introduced internal rents primarily to help calculate true profits at the business unit level. However, fewer than 10% of the charging organisations introduced internal rents primarily to capture the return on assets within the business. There was also a sizeable element of the market that had no plans to introduce internal rents. It appears that, despite academic and professional views that internal rents improve the efficient use of property, opinion at the business and operational level has not universally accepted this proposition.
Abstract:
Developed in response to the new challenges of the social Web, this study investigates how involvement with brand-related user-generated content (UGC) affects consumers' perceptions of brands. The authors develop a model that provides new insights into the links between the drivers of UGC creation, involvement, and consumer-based brand equity. Expert opinions were sought on a hypothesized model, which was then tested with data from an online survey of 202 consumers. The results provide guidance for managerial initiatives involving UGC campaigns for brand building. The findings indicate that consumer perceptions of co-creation, community, and self-concept have a positive impact on UGC involvement which, in turn, positively affects consumer-based brand equity. These empirical results have significant implications for avoiding problems and building deeper relationships between consumers and brands in the age of social media.
Abstract:
This article presents the results of a study that explored the human side of the multimedia experience. We propose a model that assesses quality variation at three distinct levels: the network, media and content levels; and from two perspectives: the technical and the user perspective. By varying parameters at each quality level and from each perspective, we were able to examine their impact on user quality perception. Results show that a significant reduction in frame rate does not proportionally reduce the user's understanding of the presentation, independent of the technical parameters; that multimedia content type significantly affects user information assimilation, user level of enjoyment and user perception of quality; and that the display device type affects user information assimilation and user perception of quality. Finally, to ensure the transfer of information, low-level abstraction (network-level) parameters, such as delay and jitter, should be adapted; to maintain the user's level of enjoyment, high-level abstraction (content-level) quality parameters, such as the appropriate use of display screens, should be adapted.
Abstract:
Distributed multimedia supports a symbiotic infotainment duality, i.e. the ability to transfer information to the user while also providing the user with a level of satisfaction. As multimedia is ultimately produced for the education and/or enjoyment of viewers, the user's perspective on presentation quality is surely as important to defining distributed multimedia quality as objective Quality of Service (QoS) technical parameters. In order to measure the user perspective of multimedia video quality extensively, we introduce an extended model of distributed multimedia quality that segregates quality into three discrete levels: the network level, the media level and the content level, using two distinct quality perspectives: the user perspective and the technical perspective. Since experimental questionnaires do not provide continuous monitoring of user attention, eye tracking was used in our study to provide a better understanding of the role the human element plays in the reception, analysis and synthesis of multimedia data. Results showed that video content adaptation results in disparity in user video eye-paths when: i) no single or obvious point of focus exists; or ii) the point of attention changes dramatically. Accordingly, appropriate technical- and user-perspective parameter adaptation was implemented for all quality abstractions of our model, i.e. the network level (via simulated delay and jitter), the media level (via a technical- and user-perspective manipulated region-of-interest attentive display) and the content level (via display type and video clip type). Our work has shown that user-perceived distributed multimedia quality cannot be achieved by means of purely technical-perspective QoS parameter adaptation.
Abstract:
Introduction: Continuity of care has been demonstrated to be important for service users and carer groups have voiced major concerns over disruptions of care. We aimed to assess the experienced continuity of care in carers of patients with both psychotic and non-psychotic disorders and explore its association with carer characteristics and psychological well-being. Methods: Friends and relatives caring for two groups of service users in the care of community mental health teams (CMHTs), 69 with psychotic and 38 with non-psychotic disorders, were assessed annually at three and two time points, respectively. CONTINUES, a measure specifically designed to assess continuity of care for carers themselves, was utilized along with assessments of psychological well-being and caregiving. Results: One hundred and seven carers participated. They reported moderately low continuity of care. Only 22 had had a carer’s assessment and just under a third recorded psychological distress on the GHQ. For those caring for people with psychotic disorders, reported continuity was higher if the carer was male, employed, lived with the user and had had a carer’s assessment; for those caring for people with non-psychotic disorders, it was higher if the carer was from the service user’s immediate family, lived with them and had had a carer’s assessment. Conclusion: The vast majority of the carers had not had a carer’s assessment provided by the CMHT despite this being a clear national priority and being an intervention with obvious potential to increase carers’ reported low levels of continuity of care. Improving continuity of contact with carers may have an important part to play in the overall improvement of care in this patient group and deserves greater attention.
Abstract:
Sri Lanka's participation rates in higher education are low and have risen only slightly in the last few decades; the number of places in the state university system caters for only around 3% of the university entrant age cohort. The literature reveals that the highly competitive global knowledge economy increasingly favours workers with high levels of education who are also lifelong learners. This lack of access to higher education for a sizable proportion of the labour force is identified as a severe impediment to Sri Lanka's competitiveness in the global knowledge economy. The literature also suggests that Information and Communication Technologies (ICTs) are increasingly relied upon in many contexts to deliver flexible learning, catering especially for the needs of lifelong learners in today's higher educational landscape. The government of Sri Lanka invested heavily in ICTs for distance education during the period 2003-2009 in a bid to increase access to higher education, but there has been little research into the impact of this investment. To address this gap, this study investigated the impact of ICTs on distance education in Sri Lanka with respect to increasing access to higher education. To achieve this aim, the research examined Sri Lanka's effort from three perspectives: the policy perspective, the implementation perspective and the user perspective. A multiple case study using an ethnographic approach was conducted to observe Orange Valley University's and Yellow Fields University's (pseudonymous) implementation of distance education programmes, using questionnaires, qualitative interviews and document analysis. In total, data for the analysis were collected from 129 questionnaires, 33 individual interviews and 2 group interviews. The research revealed that ICTs have indeed increased opportunities for higher education, but mainly for people from affluent families in the Western Province.
The issues identified were categorized under the themes of quality assurance, location, language, digital literacies and access to resources. Recommendations were offered to tackle the identified issues in accordance with the study findings. The study also revealed the strong presence of a multifaceted digital divide in the country. In conclusion, this research has shown that although ICT-enabled distance education has the potential to increase access to higher education, the present implementation of the system in Sri Lanka has been less than successful.
Abstract:
The increasing use of social media, applications or platforms that allow users to interact online, ensures that this environment will provide a useful source of evidence for the forensics examiner. Current tools for the examination of digital evidence find this data problematic as they are not designed for the collection and analysis of online data. Therefore, this paper presents a framework for the forensic analysis of user interaction with social media. In particular, it presents an inter-disciplinary approach for the quantitative analysis of user engagement to identify relational and temporal dimensions of evidence relevant to an investigation. This framework enables the analysis of large data sets from which a (much smaller) group of individuals of interest can be identified. In this way, it may be used to support the identification of individuals who might be ‘instigators’ of a criminal event orchestrated via social media, or a means of potentially identifying those who might be involved in the ‘peaks’ of activity. In order to demonstrate the applicability of the framework, this paper applies it to a case study of actors posting to a social media Web site.
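The kind of relational and temporal reduction the framework describes (narrowing a large data set down to the actors involved in peaks of activity) can be sketched briefly. This is an illustrative stand-in, not the paper's framework: the windowing scheme, the peak threshold and the sample data are all hypothetical.

```python
# Illustrative sketch (not the paper's framework): bucket posts into
# time windows, flag windows of peak activity, and rank the actors
# posting inside those peaks. Window size, threshold factor and the
# sample data below are hypothetical.
from collections import Counter

def peak_actors(posts, window=3600, factor=1.5):
    """posts: list of (actor, unix_time) pairs. Returns actors ranked
    by how often they post inside windows whose post count exceeds
    factor times the mean window volume."""
    buckets = Counter(t // window for _, t in posts)
    mean = sum(buckets.values()) / len(buckets)
    peaks = {w for w, n in buckets.items() if n >= factor * mean}
    return Counter(a for a, t in posts if t // window in peaks).most_common()

posts = [("alice", 10), ("bob", 20), ("alice", 30), ("carol", 40),
         ("alice", 50), ("dave", 7200)]
# The first hour holds five posts and the third holds one, so only the
# first window is flagged as a peak, and alice dominates it.
ranked = peak_actors(posts)
```

In an investigation this reduction would run over a much larger data set; the point is that actors outside the flagged windows (here, "dave") drop out of the candidate group entirely.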