931 results for User interfaces (Computer systems)
Abstract:
Aim: To determine the prevalence and nature of prescribing errors in general practice, to explore their causes, and to identify defences against error. Methods: 1) Systematic reviews; 2) retrospective review of unique medication items prescribed over a 12-month period to a 2% sample of patients from 15 general practices in England; 3) interviews with 34 prescribers regarding 70 potential errors, 15 root cause analyses, and six focus groups involving 46 primary health care team members. Results: The study involved examination of 6,048 unique prescription items for 1,777 patients. Prescribing or monitoring errors were detected for one in eight patients, involving around one in 20 of all prescription items. The vast majority of the errors were of mild to moderate severity, with one in 550 items being associated with a severe error. The following factors were associated with increased risk of prescribing or monitoring errors: male gender, age less than 15 years or greater than 64 years, number of unique medication items prescribed, and being prescribed preparations in the following therapeutic areas: cardiovascular, infections, malignant disease and immunosuppression, musculoskeletal, eye, ENT and skin. Prescribing or monitoring errors were not associated with the grade of GP or whether prescriptions were issued as acute or repeat items. A wide range of underlying causes of error was identified, relating to the prescriber, the patient, the team, the working environment, the task, the computer system and the primary/secondary care interface. Many defences against error were also identified, including strategies employed by individual prescribers and primary care teams, and making best use of health information technology. Conclusion: Prescribing errors in general practices are common, although severe errors are unusual. Many factors increase the risk of error. Strategies for reducing the prevalence of error should focus on GP training, continuing professional development for GPs, clinical governance, effective use of clinical computer systems, and improving safety systems within general practices and at the interface with secondary care.
Abstract:
This chapter aims to provide an overview of building simulation in a theoretical and practical context. The following sections demonstrate the importance of simulation programs at a time when society is shifting towards a low-carbon future and the practice of sustainable design is becoming mandatory. The initial sections acquaint the reader with basic terminology and comment on the capabilities and categories of simulation tools before discussing the historical development of programs. The main body of the chapter considers the primary benefits and users of simulation programs, looks at the role of simulation in the construction process and examines the validity and interpretation of simulation results. The latter half of the chapter looks at program selection and discusses software capability, product characteristics, input data and output formats. The inclusion of a case study demonstrates the simulation procedure and key concepts. Finally, the chapter closes with a look into the future, commenting on the development of simulation capability and user interfaces, and on how simulation will continue to empower building professionals as society faces new challenges in a rapidly changing landscape.
Abstract:
The P-found protein folding and unfolding simulation repository is designed to allow scientists to perform data mining and other analyses across large, distributed simulation data sets. There are two storage components in P-found: a primary repository of simulation data that is used to populate the second component, and a data warehouse that contains important molecular properties. These properties may be used for data mining studies. Here we demonstrate how grid technologies can support multiple, distributed P-found installations. In particular, we look at two aspects: firstly, how grid data management technologies can be used to access the distributed data warehouses; and secondly, how the grid can be used to transfer analysis programs to the primary repositories — this is an important and challenging aspect of P-found, due to the large data volumes involved and the desire of scientists to maintain control of their own data. The grid technologies we are developing with the P-found system will allow new large data sets of protein folding simulations to be accessed and analysed in novel ways, with significant potential for enabling scientific discovery.
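The "transfer analysis programs to the primary repositories" aspect is an instance of the general move-the-computation-to-the-data pattern. The Python sketch below illustrates that pattern only: the Repository class and its run_analysis method are hypothetical stand-ins, not the P-found or grid middleware API.

```python
# Minimal sketch of the "move the computation to the data" pattern.
# Repository and run_analysis are hypothetical illustrations only.
from statistics import mean

class Repository:
    """Stands in for one primary repository holding large trajectory files."""
    def __init__(self, name, trajectories):
        self.name = name
        self.trajectories = trajectories  # {id: list of per-frame energies}

    def run_analysis(self, analysis_fn):
        # The analysis code travels to the repository; only small results return.
        return {tid: analysis_fn(frames) for tid, frames in self.trajectories.items()}

def mean_energy(frames):
    return mean(frames)

repos = [
    Repository("site-A", {"traj1": [-120.3, -118.9, -121.5]}),
    Repository("site-B", {"traj2": [-98.7, -99.2, -97.4]}),
]

# Instead of pulling large raw data sets, each site returns a small summary.
results = {r.name: r.run_analysis(mean_energy) for r in repos}
print(results)
```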
Abstract:
Body Sensor Networks (BSNs) have recently been introduced for the remote monitoring of human activities in a broad range of application domains, such as health care, emergency management, fitness and behaviour surveillance. BSNs can be deployed in a community of people and can generate large amounts of contextual data that require a scalable approach to storage, processing and analysis. Cloud computing can provide a flexible storage and processing infrastructure to perform both online and offline analysis of the data streams generated in BSNs. This paper proposes BodyCloud, a SaaS approach for community BSNs that supports the development and deployment of Cloud-assisted BSN applications. BodyCloud is a multi-tier application-level architecture that integrates a Cloud computing platform with BSN data-stream middleware. BodyCloud provides programming abstractions that allow the rapid development of community BSN applications. This work describes the general architecture of the proposed approach and presents a case study for the real-time monitoring and analysis of the cardiac data streams of many individuals.
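As an illustration of the kind of per-user stream analysis such an architecture supports, the following Python sketch maintains a sliding window over heart-rate samples and flags sustained anomalies. The class, threshold and data are invented for illustration; they are not part of the BodyCloud API.

```python
# Illustrative per-user cardiac stream analysis; not the BodyCloud API.
from collections import deque

class CardiacStreamAnalyser:
    """Keeps a sliding window of heart-rate samples for one community member."""
    def __init__(self, window_size=30, alarm_bpm=120):
        self.window = deque(maxlen=window_size)
        self.alarm_bpm = alarm_bpm

    def push(self, bpm):
        self.window.append(bpm)
        avg = sum(self.window) / len(self.window)
        # Online decision: flag sustained tachycardia for this user.
        return {"mean_bpm": avg, "alert": avg > self.alarm_bpm}

analyser = CardiacStreamAnalyser()
for sample in [72, 75, 140, 150, 155]:
    status = analyser.push(sample)
print(status)
```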
Abstract:
Embedded computer systems equipped with wireless communication transceivers are nowadays used in a vast number of application scenarios. Energy consumption is important in many of these scenarios, as systems are battery operated and long maintenance-free operation is required. To achieve this goal, embedded systems employ low-power communication transceivers and protocols. However, currently used protocols cannot operate efficiently when communication channels are highly erroneous. In this study, we show how average diversity combining (ADC) can be used in state-of-the-art low-power communication protocols. This novel approach improves transmission reliability and consequently reduces energy consumption and transmission latency in the presence of erroneous channels. Using a testbed, we show that highly erroneous channels are indeed a common occurrence in situations where low-power systems are used, and we demonstrate that ADC improves low-power communication dramatically.
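The core idea behind diversity combining can be shown in a few lines: soft received values from repeated transmissions of the same frame are averaged before the hard bit decision, raising the effective signal-to-noise ratio. This is a minimal numpy sketch with invented modulation and noise parameters, not the protocol integration described in the study.

```python
# Average diversity combining (ADC) sketch: average soft values from
# repeated receptions before the hard decision. Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
bits = rng.integers(0, 2, 64)
symbols = 2.0 * bits - 1.0          # BPSK mapping: 0 -> -1, 1 -> +1

def receive(symbols, noise_std=1.2):
    return symbols + rng.normal(0.0, noise_std, symbols.shape)

copies = [receive(symbols) for _ in range(3)]         # three noisy receptions

single = (copies[0] > 0).astype(int)                  # decide from one copy
combined = (np.mean(copies, axis=0) > 0).astype(int)  # ADC: average first

print("errors, single copy:", int(np.sum(single != bits)))
print("errors, ADC combined:", int(np.sum(combined != bits)))
```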
Abstract:
Objective. This study was designed to determine the precision and accuracy of angular measurements using three-dimensional computed tomography (3D-CT) volume rendering by computer systems. Study design. The study population consisted of 28 dried skulls that were scanned with a 64-row multislice CT, and 3D-CT images were generated. Angular measurements (n = 6), based upon conventional craniometric anatomical landmarks (n = 9), were identified independently by 2 radiologists, twice each, and were then performed on the 3D-CT images. Subsequently, physical measurements were made by a third examiner using a Beyond Crysta-C9168 series 900 device. Results. The results demonstrated no statistically significant difference between interexaminer and intraexaminer analyses. The mean differences between the physical and 3D-based angular measurements were -1.18% and -0.89%, respectively, for the two examiners, demonstrating high accuracy. Conclusion. Maxillofacial analysis of angular measurements using 3D-CT volume rendering by 64-row multislice CT is established and can be used for orthodontic and dentofacial orthopedic applications.
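The basic operation behind such measurements is computing the angle defined by three landmark coordinates. The sketch below shows this in Python with numpy; the landmark names and coordinates are hypothetical, not taken from the study.

```python
# Hedged sketch: one craniometric angle from three 3D landmark positions.
# Landmark names and coordinates are made up for illustration.
import numpy as np

def angle_deg(a, b, c):
    """Angle at vertex b (in degrees) formed by points a-b-c."""
    u, v = np.asarray(a) - np.asarray(b), np.asarray(c) - np.asarray(b)
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

nasion = [0.0, 85.2, 32.1]   # hypothetical landmark positions (mm)
sella  = [0.0, 60.4, 55.7]
basion = [0.0, 40.8, 10.3]
print(f"angle at sella: {angle_deg(nasion, sella, basion):.1f} deg")
```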
Abstract:
The aim of this work was to encapsulate casein hydrolysate by complex coacervation with soybean protein isolate (SPI)/pectin. Three treatments were studied, with wall material to core ratios of 1:1, 1:2 and 1:3. The samples were evaluated for morphological characteristics, moisture, hygroscopicity, solubility, hydrophobicity, surface tension, encapsulation efficiency and bitter taste, the latter with a trained sensory panel using a paired comparison test. The samples were very stable in cold water. The hydrophobicity varied inversely with the hydrolysate content in the microcapsule. Encapsulated samples had lower hygroscopicity values than the free hydrolysate. The encapsulation efficiency varied from 91.62% to 78.8%. The encapsulated samples had similar surface tensions, with higher values than the free hydrolysate. The sensory panel judged the encapsulated samples less bitter (P < 0.05) than the free hydrolysate, showing that complex coacervation with SPI/pectin as wall material is an efficient method for microencapsulation and for attenuating the bitter taste of the hydrolysate. (C) 2009 Elsevier Ltd. All rights reserved.
Abstract:
The evolution of commodity computing led to the possibility of efficiently using interconnected machines to solve computationally intensive tasks that were previously solvable only by expensive supercomputers. This, however, required new methods for process scheduling and distribution that consider network latency, communication cost, heterogeneous environments and distributed computing constraints. An efficient distribution of processes over such environments requires an adequate scheduling strategy, as the cost of inefficient process allocation is unacceptably high. Therefore, knowledge and prediction of application behavior are essential for effective scheduling. In this paper, we survey the evolution of scheduling approaches, focusing on distributed environments. We also evaluate current approaches to process behavior extraction and prediction, aiming at selecting an adequate technique for online prediction of application execution. Based on this evaluation, we propose a novel model for application behavior prediction that considers the chaotic properties of such behavior and automatically detects critical execution points. The proposed model is applied and evaluated for process scheduling in cluster and grid computing environments. The obtained results demonstrate that prediction of process behavior is essential for efficient scheduling in large-scale and heterogeneous distributed environments, outperforming conventional scheduling policies by a factor of 10, and even more in some cases. Furthermore, the proposed approach proves to be efficient for online prediction due to its low computational cost and good precision. (C) 2009 Elsevier B.V. All rights reserved.
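To make the role of prediction concrete, here is a minimal Python sketch in which a simple exponentially smoothed forecast of each process's CPU demand drives placement onto the least-loaded node. The smoothing model and numbers are illustrative assumptions; the paper's own predictor is based on the chaotic properties of process behavior.

```python
# Prediction-driven placement sketch; model and numbers are invented.
def exp_smooth(history, alpha=0.5):
    """Exponentially smoothed prediction of a process's next CPU burst."""
    pred = history[0]
    for x in history[1:]:
        pred = alpha * x + (1 - alpha) * pred
    return pred

nodes = {"node1": 0.0, "node2": 0.0}            # predicted load per node

processes = {
    "p1": [10, 12, 11, 13],                     # past CPU bursts (ms)
    "p2": [40, 38, 42, 41],
    "p3": [5, 6, 5, 7],
}

for pid, bursts in processes.items():
    demand = exp_smooth(bursts)
    target = min(nodes, key=nodes.get)          # least predicted load wins
    nodes[target] += demand
    print(f"{pid}: predicted {demand:.1f} ms -> {target}")
```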
Abstract:
Canonical Monte Carlo simulations for the Au(210)/H2O interface, using a force field recently proposed by us, are reported. The results exhibit the main features normally observed in simulations of water molecules in contact with different noble metal surfaces. The calculations also assess the influence of the surface topography on the structural aspects of the adsorbed water and on the distribution of the water molecules in the direction normal to the metal surface plane. The adsorption process is preferential at sites in the first layer of the metal. The analysis of the density profiles and dipole moment distributions points to two predominant orientations. Most of the molecules are adsorbed with the molecular plane parallel to the surface, while others adsorb with one of the O-H bonds parallel to the surface and the other bond pointing towards the bulk liquid phase. There is also evidence of hydrogen bond formation between the first and second solvent layers at the interface. (c) 2007 Elsevier B.V. All rights reserved.
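For readers unfamiliar with the method, the following Python sketch shows the Metropolis acceptance rule at the heart of any canonical (NVT) Monte Carlo simulation. The one-dimensional toy energy function is a stand-in, not the authors' Au(210)/H2O force field.

```python
# Metropolis step of a canonical Monte Carlo simulation; toy energy only.
import math, random

def metropolis_step(positions, energy_fn, beta, max_disp=0.1):
    i = random.randrange(len(positions))
    old = positions[i]
    trial = old + random.uniform(-max_disp, max_disp)
    dE = energy_fn(trial) - energy_fn(old)
    # Accept downhill moves always; uphill with Boltzmann probability.
    if dE <= 0 or random.random() < math.exp(-beta * dE):
        positions[i] = trial

# Toy 1-D example: independent particles in a harmonic well.
random.seed(1)
pos = [0.5, -0.3, 1.2]
for _ in range(1000):
    metropolis_step(pos, lambda x: 0.5 * x * x, beta=2.0)
print(pos)
```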
Abstract:
This paper presents the development and evaluation of a method for enabling quantitative and automatic scoring of alternating tapping performance in patients with Parkinson's disease (PD). Ten healthy elderly subjects and 95 patients in different clinical stages of PD used a touch-pad handheld computer to perform alternate tapping tests in their home environments. First, a neurologist used a web-based system to visually assess impairments in four tapping dimensions ('speed', 'accuracy', 'fatigue' and 'arrhythmia') and a global tapping severity (GTS). Second, tapping signals were processed with time series analysis and statistical methods to derive 24 quantitative parameters. Third, principal component analysis was used to reduce the dimensionality of these parameters and to obtain scores for the four dimensions. Finally, a logistic regression classifier was trained using 10-fold stratified cross-validation to map the reduced parameters to the corresponding visually assessed GTS scores. Results showed that the computed scores correlated well with the visually assessed scores and were significantly different across Unified Parkinson's Disease Rating Scale scores of upper limb motor performance. In addition, they had good internal consistency, discriminated well between healthy elderly subjects and patients in different disease stages, were sensitive to treatment interventions and reflected the natural disease progression over time. In conclusion, the automatic method can be useful for objectively assessing the tapping performance of PD patients and can be included in telemedicine tools for remote monitoring of tapping.
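A compact way to see the scoring pipeline is as PCA followed by a cross-validated logistic regression. The scikit-learn sketch below reproduces that structure on synthetic data; the array shapes, number of components and labels are assumptions for illustration, not the study's data.

```python
# PCA + logistic regression with 10-fold stratified CV; data are synthetic.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(105, 24))            # 24 quantitative tapping parameters
y = rng.integers(0, 4, size=105)          # stand-in for visually assessed GTS

pipe = make_pipeline(StandardScaler(), PCA(n_components=4),
                     LogisticRegression(max_iter=1000))
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(pipe, X, y, cv=cv)
print(f"mean CV accuracy: {scores.mean():.2f}")
```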
Abstract:
User interface design is a large part of a website's impression and changes rapidly with technological developments and trends. But a site must also attract the right users. How does a website attract the right target group? The goal of this report was to analyse the target group of gamers based on how they perceive the graphical user interface of a game-related website. The report also briefly considers whether current web design trends appeal to gamers. The study was carried out in two parts. In the first part, the user interfaces of 20 of the largest game-related websites were examined. In the second part, a questionnaire and interviews were conducted with the target group of gamers. In both the questionnaire and the interviews, respondents were asked to assess different mockups of a fictional website. There was a large difference between what gamers considered appealing and how the analysed websites actually looked. For example, only 25% of the analysed websites were dark, while 71.9% of the respondents preferred a dark layout.
Abstract:
The demands on image processing related systems are robustness, high recognition rates, the capability to handle incomplete digital information, and great flexibility in capturing the shape of an object in an image. It is exactly here that convex hulls come into play. The objective of this paper is twofold. First, we summarize the state of the art in computational convex hull development for researchers interested in using convex hulls in image processing to build their intuition or generate nontrivial models. Second, we present several applications involving convex hulls in image processing related tasks. By this, we have striven to show researchers the rich and varied set of applications they can contribute to. This paper also makes a humble effort to enthuse prospective researchers in this area. We hope that the resulting awareness will result in new advances for specific image recognition applications.
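As a starting point for intuition, the sketch below computes the convex hull of a synthetic 2-D point set with scipy's ConvexHull; in practice the points would be the pixel coordinates of an object extracted from an image.

```python
# Convex hull of a synthetic 2-D point set via scipy.
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(0)
points = rng.random((50, 2))              # stand-in for object pixel coords

hull = ConvexHull(points)
print("hull vertices (indices):", hull.vertices)
print(f"hull area: {hull.volume:.3f}")    # in 2-D, .volume is the area
```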
Abstract:
This paper elaborates the routing of a cable cycle through the available routes in a building in order to link a set of devices in the most reasonable way. Despite the similarities to other NP-hard routing problems, the goal is not only to minimize the cost (length of the cycle) but also to increase the reliability of the path (in case of a cable cut), which is assessed by a risk factor. Since there is often a trade-off between the risk and length factors, a criterion for ranking candidates and deciding on the most reasonable solution is defined. A set of techniques is proposed to perform an efficient and exact search among candidates. A novel graph is introduced to reduce the search space and navigate the search toward feasible and desirable solutions. Moreover, an admissible heuristic length estimation helps in the early detection of partial cycles that lead to unreasonable solutions. The results show that the method provides solutions which are both technically and financially reasonable. Furthermore, it is proved that the proposed techniques are very efficient in reducing the computational time of the search to a reasonable amount.
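To illustrate the ranking idea, here is a small Python sketch that blends normalized cycle length with a risk factor into a single score and picks the lowest-scoring candidate. The weighting scheme and candidate data are invented for illustration; the paper defines its own criterion.

```python
# Length/risk trade-off ranking sketch; weights and data are illustrative.
candidates = [
    {"id": "A", "length_m": 120.0, "risk": 0.8},
    {"id": "B", "length_m": 150.0, "risk": 0.3},
    {"id": "C", "length_m": 135.0, "risk": 0.5},
]

def score(c, w_len=0.5, w_risk=0.5, max_len=200.0):
    # Lower is better: normalized length blended with the risk factor.
    return w_len * (c["length_m"] / max_len) + w_risk * c["risk"]

best = min(candidates, key=score)
print("most reasonable cycle:", best["id"])
```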
Abstract:
The Open Provenance Model is a model of provenance that is designed to meet the following requirements: (1) To allow provenance information to be exchanged between systems, by means of a compatibility layer based on a shared provenance model. (2) To allow developers to build and share tools that operate on such a provenance model. (3) To define provenance in a precise, technology-agnostic manner. (4) To support a digital representation of provenance for any 'thing', whether produced by computer systems or not. (5) To allow multiple levels of description to coexist. (6) To define a core set of rules that identify the valid inferences that can be made on provenance representation. This document contains the specification of the Open Provenance Model (v1.1) resulting from a community-effort to achieve inter-operability in the Provenance Challenge series.
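The model's core vocabulary is small enough to sketch directly: artifacts, processes and agents as nodes, connected by causal edges such as used, wasGeneratedBy and wasControlledBy. The Python data structures below illustrate those concepts as a reading aid; they are not an official OPM implementation.

```python
# OPM-style provenance graph as plain data structures; illustrative only.
from dataclasses import dataclass, field

@dataclass
class OPMGraph:
    nodes: dict = field(default_factory=dict)   # id -> kind
    edges: list = field(default_factory=list)   # (effect, relation, cause)

    def add(self, node_id, kind):
        self.nodes[node_id] = kind              # 'artifact'|'process'|'agent'

    def relate(self, effect, relation, cause):
        self.edges.append((effect, relation, cause))

g = OPMGraph()
g.add("raw.csv", "artifact")
g.add("clean.csv", "artifact")
g.add("cleanup-run", "process")
g.add("alice", "agent")
g.relate("cleanup-run", "used", "raw.csv")
g.relate("clean.csv", "wasGeneratedBy", "cleanup-run")
g.relate("cleanup-run", "wasControlledBy", "alice")
print(g.edges)
```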
Abstract:
A description of a data item's provenance can be provided in different forms, and which form is best depends on the intended use of that description. Because of this, different communities have made quite distinct underlying assumptions in their models for electronically representing provenance. Approaches deriving from the library and archiving communities emphasise agreed vocabulary by which resources can be described and, in particular, assert their attribution (who created the resource, who modified it, where it was stored, etc.). The primary purpose here is to provide intuitive metadata by which users can search for and index resources. In comparison, models for representing the results of scientific workflows have been developed with the assumption that each event or piece of intermediary data in a process' execution can and should be documented, to give a full account of the experiment undertaken. These occurrences are connected together by stating where one derived from, triggered, or otherwise caused another, and so form a causal graph. Mapping between the two approaches would be beneficial in integrating systems and exploiting the strengths of each. In this paper, we specify such a mapping between Dublin Core and the Open Provenance Model. We further explain the technical issues to overcome and the rationale behind the approach, to allow the same method to apply in mapping similar schemes.
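The flavour of such a mapping can be sketched in a few lines: Dublin Core attribution terms (creator, contributor) become OPM agents attached to a creation process that generates the resource. The Python sketch below is one plausible illustrative mapping, an assumption on our part, not the formal mapping specified in the paper.

```python
# Illustrative Dublin Core -> OPM-style translation; not the paper's mapping.
dc_record = {
    "title": "Survey dataset",
    "creator": "J. Smith",
    "contributor": "K. Jones",
    "date": "2009-05-01",
}

def dc_to_opm(resource_id, dc):
    """Map simple DC attribution onto OPM-style nodes and edges."""
    nodes = {resource_id: "artifact", "creation": "process",
             dc["creator"]: "agent"}
    edges = [
        (resource_id, "wasGeneratedBy", "creation"),
        ("creation", "wasControlledBy", dc["creator"]),
    ]
    if "contributor" in dc:
        nodes[dc["contributor"]] = "agent"
        edges.append(("creation", "wasControlledBy", dc["contributor"]))
    return nodes, edges

nodes, edges = dc_to_opm("survey.csv", dc_record)
print(edges)
```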