971 results for Computer-generated stimuli
Abstract:
The purpose of this study is to examine the psychographic characteristics (product attributes, motivations, opinions, interests, lifestyle, values) of wine tourists along the Niagara wine route, located in Ontario, Canada, using a multiple case study method. Four wineries were selected, two each on the East and West sides of the wine route, during the shoulder season (January–February 2004). Using a computer-generated survey technique, tourists were approached to fill out a questionnaire on one of the available laptop computers, yielding a sample of N=321. The study findings revealed that there are three distinct wine tourist segments in the Niagara region. The segments were determined using an exploratory factor analysis (EFA) and a K-means cluster analysis: Wine Lovers, Wine Interested, and Wine Curious wine tourists. These three segments displayed significant differences in their motivation for visiting a winery, lifestyles, values, and wine purchasing behaviour. This study also examined differences between winery locations on the East and West sides of the Niagara wine route with respect to the aforementioned variables. The results indicated that there were significant differences between the regions with respect to these variables. The findings suggest that these differences present opportunities for more effective marketing strategies based on the uniqueness of each region. The results of this study provide insight for academia into a method of psychographic market segmentation of wine tourists and consumer behaviour. This study also contributes to the literature on wine tourism and the identification of psychographic characteristics of wine tourists, an area where little research has taken place.
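As a concrete illustration of the two-step segmentation the abstract describes (EFA followed by K-means), here is a minimal sketch in Python. The data, item names, factor count, and k=3 cluster labels are assumptions for illustration only; scikit-learn's FactorAnalysis stands in for the rotated exploratory factor analysis a study like this would typically report.

```python
# Minimal sketch of the two-step segmentation described above: factor
# analysis to reduce psychographic survey items, then K-means with k=3.
# The data, column names and parameters are illustrative assumptions,
# not taken from the study itself.
import numpy as np
import pandas as pd
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Hypothetical survey data: one row per respondent, Likert-scale items.
rng = np.random.default_rng(0)
items = [f"item_{i}" for i in range(12)]
df = pd.DataFrame(rng.integers(1, 8, size=(321, 12)), columns=items)

# Standardize, then extract latent psychographic factors (EFA step).
scores = FactorAnalysis(n_components=4, random_state=0).fit_transform(
    StandardScaler().fit_transform(df))

# Cluster respondents on their factor scores (K-means step).
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(scores)
df["segment"] = labels  # e.g. 0/1/2 ~ "Wine Lovers", "Interested", "Curious"
print(df["segment"].value_counts())
```

In practice the segment names come from inspecting each cluster's factor profile, not from the numeric labels themselves.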
Abstract:
The purpose of this research is to take stock of the current state of Canadian law, and of its future prospects, in relation to computer-generated works. The terminological tool chosen for our purpose is multilingual machine-translation software which, because of its complexity, is furthest removed from the programmer as "creator" and comes closest to producing works that cannot be directly attributed to linguists and programmers. These tools and their creations will, in our view, be the next technological tools to confront the law. Indeed, in the foreseeable future, given technological developments, such software will produce texts carrying added commercial value, and it is then that some will assert their "rights", not only over the texts but also over the technology. To reach this objective, we begin with a historical review of the technology and its origins. We then analyse the protection currently granted to software, to databases, and to the translations they produce. We then determine who will be liable for the texts produced, in relation to the source text and to its output, under copyright law and under civil liability. This research leads us to conclude that current law is ill-suited both with regard to protection and with regard to liability. These conclusions should, in our view, compel a return to the fundamental principles of law. This legal fundamentalism will, for us, be the price to pay for legitimacy. Indeed, with regard to copyright in particular, we conclude that it must cease to be the "catch-all" of intellectual property law and once again become what it ought to be: a right that protects creativity. This forward-looking approach is rooted in the fact that we are compelled to conclude that Canadian jurists have refused, wrongly in our view, to refer new and inventive methods and processes to the world of patents, which has introduced needless problems that exacerbate uncertainty. Finally, our path leads us to the law of liability, where we argue that the supplier cannot currently be held liable for the text produced, since it does not participate directly in the choices made and does not interfere with the content. This, in a few words, is the heart of our research, which opens a Pandora's box.
Abstract:
By enhancing a real scene with computer-generated objects, Augmented Reality (AR) has proven itself a valuable Human-Computer Interface (HCI) in numerous application areas such as medicine, the military, entertainment, and manufacturing. It enables higher performance of on-site tasks through the seamless presentation of up-to-date, task-related information to users during operation. AR has potential in design because the current interfaces provided by Computer-aided Design (CAD) packages are less intuitive, and reports show that the presence of physical objects helps design thinking and communication. This research explores the use of AR to improve the efficiency of the design process, specifically in mechanical design.
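The core technical step behind overlaying computer-generated objects on a real scene is registering virtual geometry with the camera view. Below is a minimal sketch assuming a calibrated pinhole camera with known pose; all numbers are illustrative, not taken from this research.

```python
# Minimal sketch of AR registration: projecting a virtual 3D point into
# image coordinates with a pinhole camera model. The intrinsics, pose and
# point are illustrative assumptions, not values from the research above.
import numpy as np

K = np.array([[800.0, 0.0, 320.0],   # focal lengths and principal point
              [0.0, 800.0, 240.0],
              [0.0,   0.0,   1.0]])
R = np.eye(3)                         # camera rotation (world -> camera)
t = np.array([0.0, 0.0, 2.0])         # camera translation

def project(X_world):
    """Project a 3D world point onto the image plane (pixel coords)."""
    X_cam = R @ X_world + t           # world -> camera coordinates
    u, v, w = K @ X_cam               # perspective projection
    return u / w, v / w

# A vertex of a computer-generated object placed 2 m in front of the camera.
print(project(np.array([0.1, -0.05, 0.0])))  # -> pixel (x, y) to draw at
```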
Abstract:
Introduction: The ECG is a basic tool in the study of chest pain, yet there is no evidence showing whether the electrocardiographic interpretation of internal medicine and emergency specialists is comparable to that of cardiologists in cases of acute coronary syndrome (ACS). The purpose of this study is to determine whether there is concordance in the interpretation of the most frequent electrocardiographic findings in the acute phase of coronary syndromes. Methods: Retrospective study of diagnostic electrocardiographic concordance, conducted at a fourth-level university hospital. The most frequent electrocardiographic findings in acute coronary syndromes were selected for evaluation by three different specialties, and the concordance analysis was performed using the kappa statistic. Results: 200 randomised electrocardiograms from patients with ACS between November 2012 and April 2013 were analysed. The mean age was 65.14 years, most patients were men (62.5%), and arterial hypertension and coronary disease were the most frequent comorbidities. A moderate degree of concordance (k = 0.61–0.80, p < 0.001) was found between cardiologists vs emergency physicians and between cardiologists vs internists, except for subendocardial injury (k = 0.11 and 0.24, respectively); there was a weak degree of concordance (k = 0.41–0.60, p < 0.001) between emergency physicians and internists. The finding with a very good degree of concordance (k > 0.81) was left bundle branch block. Conclusion: There is a moderate degree of concordance in electrocardiographic reading for most variables related to acute coronary syndrome between internal medicine and emergency specialists when compared with cardiologists.
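The concordance measure named above is the kappa statistic. Here is a minimal sketch of computing Cohen's kappa between two raters' ECG readings; the labels below are fabricated for illustration only.

```python
# Minimal sketch of the concordance analysis: Cohen's kappa between two
# raters' per-ECG findings. The labels are illustrative assumptions.
from sklearn.metrics import cohen_kappa_score

# Hypothetical findings coded by two specialists for the same six ECGs.
cardiologist = ["STEMI", "LBBB", "normal", "STEMI", "subendocardial", "LBBB"]
internist    = ["STEMI", "LBBB", "normal", "normal", "STEMI",          "LBBB"]

kappa = cohen_kappa_score(cardiologist, internist)
print(f"kappa = {kappa:.2f}")  # 1 = perfect agreement, 0 = chance level
```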
Abstract:
Explanations are an important by-product of medical decision-support activities, as they have been shown to favour compliance and correct treatment performance. To achieve this purpose, these texts should have strong argumentation content and should adapt to the emotional, as well as the rational, attitudes of the Addressee. This paper describes how Rhetorical Sentence Planning can contribute to this aim: rule-based discourse-plan revision is introduced between Text Planning and Linguistic Realization, and exploits knowledge about the user's personality and emotions and about the potential impact of domain items on user compliance and memory recall. The proposed approach originates from analytical and empirical evaluation studies of computer-generated explanation texts in the domain of drug prescription. This work was partially supported by a British-Italian Collaboration in Research and Higher Education Project, which involved the Universities of Reading and Bari, in 1996.
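A minimal sketch of the kind of rule-based revision step the paper situates between Text Planning and Linguistic Realization: a discourse plan is reordered and pruned according to a user model. The plan structure, rules, and user-model fields are illustrative assumptions, not the system described above.

```python
# Minimal sketch of rule-based discourse-plan revision keyed to a user
# model. All fields and rules are illustrative assumptions.
plan = [
    {"item": "dosage", "importance": "high"},
    {"item": "severe_side_effects", "importance": "high"},
    {"item": "storage", "importance": "low"},
]
user = {"anxiety": "high", "recall": "low"}

def revise(plan, user):
    revised = list(plan)
    if user["anxiety"] == "high":
        # Soften: move alarming content after the core instructions.
        revised.sort(key=lambda m: m["item"] == "severe_side_effects")
    if user["recall"] == "low":
        # Shorten: drop low-importance material to aid memory recall.
        revised = [m for m in revised if m["importance"] != "low"]
    return revised

print([m["item"] for m in revise(plan, user)])
# -> ['dosage', 'severe_side_effects'] under this user model
```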
Abstract:
This study evaluates computer-generated written explanations about drug prescriptions that are based on an analysis of both patient and doctor informational needs. Three experiments examine the effects of varying the type of information given about the possible side effects of the medication, and the order of information within the explanation. Experiment 1 investigated the effects of these two factors on people's ratings of how good they consider the explanations to be and of their perceived likelihood of taking the medication, as well as on their memory for the information in the explanation. Experiment 2 further examined the effects of varying information about side effects by separating out the contribution of number and severity of side effects. It was found that participants in this study did not “like” explanations that described severe side effects, and also judged that they would be less likely to take the medication if given such explanations. Experiment 3 therefore investigated whether information about severe side effects could be presented in such a way as to increase judgements of how good explanations are thought to be, as well as the perceived likelihood of adherence. The results showed some benefits of providing additional explanatory information.
Abstract:
In this paper we describe how we generated written explanations for 'indirect users' of a knowledge-based system in the domain of drug prescription. We call 'indirect users' the intended recipients of explanations, to distinguish them from the prescriber (the 'direct' user) who interacts with the system. The Explanation Generator was designed after several studies of indirect users' information needs and physicians' explanatory attitudes in this domain. It integrates text planning techniques with ATN-based surface generation. A double modeling component enables the information content, order, and style to be adapted to the indirect user to whom the explanation is addressed. Several examples of computer-generated texts are provided and contrasted with physicians' explanations to discuss the advantages and limits of the adopted approach.
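To make the adaptation idea concrete, here is a toy generator in which a user model drives content selection and ordering before simple template realization. The templates and model fields are assumptions; the original system integrated text planning with ATN-based surface generation rather than flat templates.

```python
# Toy sketch of user-model-driven explanation generation: a user model
# selects and orders content items, then templates realize the text.
# Templates and model fields are illustrative assumptions.
CONTENT = {
    "purpose": "This medicine lowers your blood pressure.",
    "dosage": "Take one tablet each morning.",
    "side_effects": "It may cause dizziness in the first week.",
}

def generate(user):
    order = ["purpose", "dosage", "side_effects"]
    if user.get("expertise") == "high":
        order = ["dosage", "side_effects", "purpose"]  # specifics first
    chosen = [k for k in order if k not in user.get("suppress", ())]
    return " ".join(CONTENT[k] for k in chosen)

print(generate({"expertise": "low"}))
print(generate({"expertise": "high", "suppress": {"purpose"}}))
```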
Abstract:
Three experiments examine the effect of different forms of computer-generated advice on concurrent and subsequent performance of individuals controlling a simulated intensive-care task. Experiment 1 investigates the effect of optional and compulsory advice and shows that both result in an improvement in subjects' performance while receiving the advice, and also in an improvement in subsequent unaided performance. However, although the advice compliance displayed by the optional advice group shows a strong correlation with subsequent unaided performance, compulsory advice has no extra benefit over the optional use of advice. Experiment 2 examines the effect of providing users with on-line explanations of the advice, as well as providing less specific advice. The results show that both groups perform at the same level on the task as the advice groups from Experiment 1, although subjects receiving explanations scored significantly higher on a written post-task questionnaire. Experiment 3 investigates in more detail the relationship between advice compliance and performance. The results reveal a complex relationship between natural ability on the task and the following of advice, in that people who use the advice more tend to perform either better or worse than the more moderate users. The theoretical and practical implications of these experiments are discussed.
Abstract:
Background: Medication errors are an important cause of morbidity and mortality in primary care. The aims of this study are to determine the effectiveness, cost-effectiveness and acceptability of a pharmacist-led, information-technology-based complex intervention, compared with simple feedback, in reducing the proportions of patients at risk from potentially hazardous prescribing and medicines management in general (family) practice. Methods: Research subject group: "At-risk" patients registered with computerised general practices in two geographical regions in England. Design: Parallel group pragmatic cluster randomised trial. Interventions: Practices will be randomised to either: (i) computer-generated feedback; or (ii) a pharmacist-led intervention comprising computer-generated feedback, educational outreach and dedicated support. Primary outcome measures: The proportion of patients in each practice at six and 12 months post-intervention: (i) with a computer-recorded history of peptic ulcer being prescribed non-selective non-steroidal anti-inflammatory drugs; (ii) with a computer-recorded diagnosis of asthma being prescribed beta-blockers; (iii) aged 75 years and older receiving long-term prescriptions for angiotensin converting enzyme inhibitors or loop diuretics without a recorded assessment of renal function and electrolytes in the preceding 15 months. Secondary outcome measures: These relate to a number of other examples of potentially hazardous prescribing and medicines management. Economic analysis: An economic evaluation will be done of the cost per error avoided, from the perspective of the UK National Health Service (NHS), comparing the pharmacist-led intervention with simple feedback. Qualitative analysis: A qualitative study will be conducted to explore the views and experiences of health care professionals and NHS managers concerning the interventions, and to investigate possible reasons why the interventions prove effective, or conversely prove ineffective. Sample size: 34 practices in each of the two treatment arms would provide at least 80% power (two-tailed alpha of 0.05) to demonstrate a 50% reduction in error rates for each of the three primary outcome measures in the pharmacist-led intervention arm, compared with an 11% reduction in the simple feedback arm. Discussion: At the time of submission of this article, 72 general practices have been recruited (36 in each arm of the trial) and the interventions have been delivered. Analysis has not yet been undertaken.
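A minimal sketch of the style of sample-size reasoning quoted above (80% power, two-tailed alpha 0.05, a 50% vs 11% reduction in error rates), using statsmodels for a two-proportion comparison inflated by a cluster design effect. The baseline error rate, cluster size, and intracluster correlation are assumptions of this sketch; the abstract does not state them.

```python
# Sketch of a cluster-trial power calculation. Baseline rate, cluster size
# and ICC are assumed values, not figures from the protocol above.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

baseline = 0.10                      # assumed pre-intervention error rate
p_control = baseline * (1 - 0.11)    # 11% relative reduction (feedback arm)
p_interv = baseline * (1 - 0.50)     # 50% relative reduction (pharmacist arm)

es = abs(proportion_effectsize(p_interv, p_control))
n_individual = NormalIndPower().solve_power(effect_size=es, alpha=0.05,
                                            power=0.80, ratio=1.0)

m, icc = 50, 0.05                    # assumed patients/practice and ICC
deff = 1 + (m - 1) * icc             # variance inflation for clustering
n_clustered = n_individual * deff
print(f"~{n_clustered / m:.0f} practices per arm under these assumptions")
```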
Abstract:
This paper describes a new method for reconstructing 3D surface points and a wireframe on the surface of a freeform object using a small number, e.g. 10, of 2D photographic images. The images are taken from different viewing directions by a perspective camera with full prior knowledge of the camera configurations. The reconstructed surface points are frontier points and the wireframe is a network of contour generators. Both are reconstructed by pairing apparent contours in the 2D images. Unlike previous works, we empirically demonstrate that if the viewing directions are uniformly distributed around the object's viewing sphere, the reconstructed 3D points automatically cluster closely on highly curved parts of the surface and spread widely over smooth or flat parts. The advantage of this property is that the reconstructed points along a surface or a contour generator are not under-sampled or under-represented, because surfaces and contours should be sampled more densely where their curvature is high. The more complex the contour's shape, the greater the number of points required, and the greater the number of points automatically generated by the proposed method. Given that the viewing directions are uniformly distributed, the number and distribution of the reconstructed points depend on the shape or curvature of the surface, regardless of the size of the surface or of the object. The unique pattern of the reconstructed points and contours may be used in 3D object recognition and measurement without computationally intensive full surface reconstruction. The results are obtained from both computer-generated and real objects. (C) 2007 Elsevier B.V. All rights reserved.
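One ingredient of the method is the assumption of viewing directions uniformly distributed over the object's viewing sphere. Below is a minimal sketch of generating such directions with a Fibonacci spiral; the paper's contour pairing and frontier-point reconstruction steps are not shown.

```python
# Minimal sketch: approximately uniform viewing directions on the viewing
# sphere via a Fibonacci spiral. Illustrative only; the reconstruction
# itself (pairing apparent contours across views) is omitted.
import numpy as np

def fibonacci_sphere(n=10):
    """Return n roughly uniformly distributed unit vectors (viewing dirs)."""
    i = np.arange(n)
    phi = np.pi * (3.0 - np.sqrt(5.0)) * i        # golden-angle increments
    z = 1.0 - 2.0 * (i + 0.5) / n                 # evenly spaced heights
    r = np.sqrt(1.0 - z * z)
    return np.stack([r * np.cos(phi), r * np.sin(phi), z], axis=1)

views = fibonacci_sphere(10)   # e.g. 10 camera directions around the object
print(views.round(3))
```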
Abstract:
Background: Medication errors are common in primary care and are associated with considerable risk of patient harm. We tested whether a pharmacist-led, information technology-based intervention was more effective than simple feedback in reducing the number of patients at risk on measures related to hazardous prescribing and inadequate blood-test monitoring of medicines 6 months after the intervention. Methods: In this pragmatic, cluster randomised trial, general practices in the UK were stratified by research site and list size, and randomly assigned by a web-based randomisation service in block sizes of two or four to one of two groups. The practices were allocated to either computer-generated simple feedback for at-risk patients (control) or a pharmacist-led information technology intervention (PINCER), composed of feedback, educational outreach, and dedicated support. The allocation was masked to general practices, patients, pharmacists, researchers, and statisticians. Primary outcomes were the proportions of patients at 6 months after the intervention who had had any of three clinically important errors: non-selective non-steroidal anti-inflammatory drugs (NSAIDs) prescribed to those with a history of peptic ulcer without co-prescription of a proton-pump inhibitor; β blockers prescribed to those with a history of asthma; long-term prescription of angiotensin converting enzyme (ACE) inhibitors or loop diuretics to those 75 years or older without assessment of urea and electrolytes in the preceding 15 months. The cost per error avoided was estimated by incremental cost-effectiveness analysis. This study is registered with Controlled-Trials.com, number ISRCTN21785299. Findings: 72 general practices with a combined list size of 480 942 patients were randomised. At 6 months' follow-up, patients in the PINCER group were significantly less likely to have been prescribed a non-selective NSAID if they had a history of peptic ulcer without gastroprotection (OR 0.58, 95% CI 0.38–0.89); a β blocker if they had asthma (0.73, 0.58–0.91); or an ACE inhibitor or loop diuretic without appropriate monitoring (0.51, 0.34–0.78). PINCER has a 95% probability of being cost-effective if the decision-maker's ceiling willingness to pay reaches £75 per error avoided at 6 months. Interpretation: The PINCER intervention is an effective method for reducing a range of medication errors in general practices with computerised clinical records. Funding: Patient Safety Research Portfolio, Department of Health, England.
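For readers unfamiliar with the reported effect measures, here is a minimal sketch of how an odds ratio with a Wald 95% CI is obtained from a 2x2 table. The counts are fabricated for illustration; the trial's actual analysis also accounted for clustering by practice.

```python
# Minimal sketch: odds ratio and Wald 95% CI from a 2x2 table.
# Counts are made up for illustration, not taken from the trial above.
import math

# rows: intervention / control; cols: error present / absent (hypothetical)
a, b = 30, 970    # PINCER arm: at-risk patients with / without the error
c, d = 50, 950    # feedback arm

or_ = (a * d) / (b * c)
se = math.sqrt(1/a + 1/b + 1/c + 1/d)             # SE of log(OR), Wald
lo, hi = (math.exp(math.log(or_) + s * 1.96 * se) for s in (-1, 1))
print(f"OR {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```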
Abstract:
Augmented Reality systems overlay computer-generated information onto a user's natural senses. Where this additional information is visual, it is overlaid on the user's natural field of view through a head-mounted (or "head-up") display device. Integrated Home Systems provide a network that links every electrical device in the home, giving the user both control and data transparency across the network.
Abstract:
Three topics are discussed. First, an issue in the epistemology of computer simulation - that of the chess endgame 'becoming' what computer-generated data says it is. Secondly, the endgames of the longest known games are discussed, and the concept of a Bionic Game is defined. Lastly, the set of record-depth positions published by Bourzutschky and Konoval are evaluated by the new MVL tables in Moscow - alongside the deepest known mate of 549 moves.
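The 'computer-generated data' at issue here are endgame tablebases. As an analogous, publicly available example, the python-chess library can probe Syzygy WDL/DTZ tables (the MVL depth-to-mate tables mentioned above are a separate resource, not queried this way). The tablebase path is a placeholder; the table files must exist locally for the probe to succeed.

```python
# Sketch: querying computer-generated endgame data with python-chess.
# Requires Syzygy tablebase files on disk; "path/to/syzygy" is a placeholder.
import chess
import chess.syzygy

board = chess.Board("8/8/8/4q3/8/2k5/8/K7 w - - 0 1")  # K vs KQ, White to move
with chess.syzygy.open_tablebase("path/to/syzygy") as tb:
    print(tb.probe_wdl(board))  # win/draw/loss for the side to move (-2..2)
    print(tb.probe_dtz(board))  # distance to zeroing (capture/pawn) move
```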
Abstract:
Autism spectrum conditions (autism) affect ~1% of the population and are characterized by deficits in social communication. Oxytocin has been widely reported to affect social-communicative function and its neural underpinnings. Here we report the first evidence that intranasal oxytocin administration improves a core problem that individuals with autism have in using eye contact appropriately in real-world social settings. A randomized, double-blind, placebo-controlled, within-subjects design is used to examine how intranasal administration of 24 IU of oxytocin affects gaze behavior for 32 adult males with autism and 34 controls in a real-time interaction with a researcher. This interactive paradigm bypasses many of the limitations encountered with conventional static or computer-based stimuli. Eye movements are recorded using eye tracking, providing an objective measurement of looking patterns. The measure is shown to be sensitive to the reduced eye contact commonly reported in autism, with the autism group spending less time looking at the eye region of the face than controls. Oxytocin administration selectively enhanced gaze to the eyes in both the autism and control groups (transformed mean eye-fixation difference per second = 0.082; 95% CI: 0.025–0.14, P = 0.006). Within the autism group, oxytocin had the greatest effect on fixation duration in individuals with impaired levels of eye contact at baseline (Cohen's d = 0.86). These findings demonstrate that the potential benefits of oxytocin in autism extend to a real-time interaction, providing evidence of a therapeutic effect on a key aspect of social communication.
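A minimal sketch of the within-subjects effect-size computation behind a figure like Cohen's d = 0.86: the paired d (often written d_z) on drug-minus-placebo differences. The fixation values below are fabricated, and the paper's 'transformed' eye-fixation measure involves a transformation not reproduced here.

```python
# Sketch: paired Cohen's d for a within-subjects (drug vs placebo) design.
# All numbers are fabricated for illustration.
import numpy as np

rng = np.random.default_rng(1)
placebo = rng.normal(0.30, 0.10, size=32)             # eye fixations/sec
oxytocin = placebo + rng.normal(0.08, 0.09, size=32)  # same subjects, drug

diff = oxytocin - placebo
d = diff.mean() / diff.std(ddof=1)          # paired Cohen's d (d_z)
ci = diff.mean() + np.array([-1, 1]) * 1.96 * diff.std(ddof=1) / np.sqrt(len(diff))
print(f"d_z = {d:.2f}, 95% CI for mean difference: {ci.round(3)}")
```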
Abstract:
The emergence and development of digital imaging technologies and their impact on mainstream filmmaking is perhaps the most familiar special effects narrative associated with the years 1981-1999. This is in part because some of the questions raised by the rise of the digital still concern us now, but also because key milestone films showcasing advancements in digital imaging technologies appear in this period, including Tron (1982) and its computer-generated image elements, the digital morphing in The Abyss (1989) and Terminator 2: Judgment Day (1991), computer animation in Jurassic Park (1993) and Toy Story (1995), digital extras in Titanic (1997), and 'bullet time' in The Matrix (1999). As a result it is tempting to characterize 1981-1999 as a 'transitional period' in which digital imaging processes grow in prominence and technical sophistication, and what we might call 'analogue' special effects processes correspondingly become less common. But such a narrative risks eliding the other practices that also shape effects sequences in this period. Indeed, the 1980s and 1990s are striking for the diverse range of effects practices in evidence in both big-budget films and lower-budget productions, and for the extent to which analogue practices persist independently of, or alongside, digital effects work in a range of production and genre contexts. The chapter seeks to document and celebrate this diversity and plurality, this sustaining of earlier traditions of effects practice alongside newer processes, this experimentation with materials and technologies old and new in the service of aesthetic aspirations alongside budgetary and technical constraints. The common characterization of the period as a series of rapid transformations in production workflows, practices and technologies will be interrogated in relation to the persistence of certain key figures such as Douglas Trumbull, John Dykstra, and James Cameron, but also through a consideration of the contexts for and influences on creative decision-making. Comparative analyses of the processes used to articulate bodies, space and scale in effects sequences drawn from different generic sites of special effects work, including science fiction, fantasy, and horror, will provide a further frame for the chapter's mapping of the commonalities and specificities, continuities and variations in effects practices across the period. In the process, the chapter seeks to reclaim analogue processes' contribution both to moments of explicit spectacle and to diegetic verisimilitude, in the decades most often associated with the digital's 'arrival'.