908 results for user-friendliness


Relevance: 100.00%

Abstract:

OBJECTIVE: To explore the user-friendliness and ergonomics of seven new-generation intensive care ventilators. DESIGN: Prospective task-performing study. SETTING: Intensive care research laboratory, university hospital. METHODS: Ten physicians experienced in mechanical ventilation, but without prior knowledge of the ventilators, were asked to perform eight specific tasks [turning the ventilator on; recognizing mode and parameters; recognizing and setting alarms; changing mode; finding and activating the pre-oxygenation function; setting pressure support; stand-by; finding and activating the non-invasive ventilation (NIV) mode]. The time needed for each task was compared to a reference time established by a trained physiotherapist familiar with the devices. A time >180 s was considered a task failure. RESULTS: For each of the tests on the ventilators, all physicians' times were significantly higher than the reference time (P < 0.001). A mean of 13 ± 8 task failures (16%) was observed per ventilator. The most frequently failed tasks were mode and parameter recognition, starting pressure support and finding the NIV mode. The least often failed tasks were turning on the pre-oxygenation function and alarm recognition and management. Overall, there was substantial heterogeneity between machines, some exhibiting better user-friendliness than others for certain tasks, but no ventilator was clearly better than the others on all points tested. CONCLUSIONS: The present study adds to the available literature outlining the ergonomic shortcomings of mechanical ventilators. These results suggest that closer ties between end-users and manufacturers should be promoted at an early phase in the development of these machines, based on scientific evaluation of the cognitive processes engaged by users in the clinical setting.

Relevance: 60.00%

Abstract:

Based on the paper presented at the International Conference “Autonomous Systems: inter-relations of technical and societal issues”, organized by IET with the support of the Portuguese-German collaboration project on “Technology Assessment of Autonomous Robotics” (DAAD/CRUP) at FCT-UNL, Biblioteca da UNL, Campus de Caparica on 5-6 November 2009.

Relevance: 60.00%

Abstract:

Therapeutic drug monitoring (TDM) aims to optimize treatments by individualizing dosage regimens based on the measurement of blood concentrations. Dosage individualization to maintain concentrations within a target range requires pharmacokinetic and clinical capabilities. Bayesian calculations currently represent the gold standard TDM approach but require computation assistance. In recent decades computer programs have been developed to assist clinicians in this assignment. The aim of this survey was to assess and compare computer tools designed to support TDM clinical activities. The literature and the Internet were searched to identify software. All programs were tested on personal computers. Each program was scored against a standardized grid covering pharmacokinetic relevance, user friendliness, computing aspects, interfacing and storage. A weighting factor was applied to each criterion of the grid to account for its relative importance. To assess the robustness of the software, six representative clinical vignettes were processed through each of them. Altogether, 12 software tools were identified, tested and ranked, representing a comprehensive review of the available software. Numbers of drugs handled by the software vary widely (from two to 180), and eight programs offer users the possibility of adding new drug models based on population pharmacokinetic analyses. Bayesian computation to predict dosage adaptation from blood concentration (a posteriori adjustment) is performed by ten tools, while nine are also able to propose a priori dosage regimens, based only on individual patient covariates such as age, sex and bodyweight. Among those applying Bayesian calculation, MM-USC*PACK© uses the non-parametric approach. The top two programs emerging from this benchmark were MwPharm© and TCIWorks. Most other programs evaluated had good potential while being less sophisticated or less user friendly. Programs vary in complexity and might not fit all healthcare settings. Each software tool must therefore be regarded with respect to the individual needs of hospitals or clinicians. Programs should be easy and fast for routine activities, including for non-experienced users. Computer-assisted TDM is gaining growing interest and should further improve, especially in terms of information system interfacing, user friendliness, data storage capability and report generation.
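
As a concrete illustration of the weighted scoring grid described above, the following minimal sketch shows how per-criterion scores and weighting factors combine into an overall score for one program. The criterion names, weights and scores are invented for illustration and do not reproduce the grid actually used in the survey.

```python
# Illustrative weighted scoring of a TDM program against an evaluation grid.
# Criteria, weights and scores below are placeholders, not the survey's grid.

criteria = {
    # criterion: (weight, score on a 0-10 scale)
    "pharmacokinetic relevance": (0.35, 8),
    "user friendliness":         (0.25, 6),
    "computing aspects":         (0.20, 7),
    "interfacing":               (0.10, 5),
    "storage":                   (0.10, 9),
}

# Weighted overall score = sum(weight * score) / sum(weights)
total_weight = sum(w for w, _ in criteria.values())
overall = sum(w * s for w, s in criteria.values()) / total_weight

print(f"overall weighted score: {overall:.2f} / 10")
```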

Relevance: 60.00%

Abstract:

Cloud computing has recently become very popular, and several bioinformatics applications already exist in that domain. The aim of this article is to analyse a current cloud system with respect to usability, benchmark its performance and compare its user-friendliness with that of a conventional cluster job submission system. Given the current hype around the theme, user expectations are rather high, but the present results show that neither the price/performance ratio nor the usage model is very satisfactory for large-scale embarrassingly parallel applications. However, for small- to medium-scale applications that require CPU time at certain peak times, the cloud is a suitable alternative.
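
As an illustration of the kind of price/performance comparison mentioned above, the sketch below computes the cost of one embarrassingly parallel run on a cloud deployment versus a local cluster. All workload sizes and prices are hypothetical placeholders, not figures from the article.

```python
# Hypothetical price/performance comparison between a cloud deployment and a
# local cluster; all figures are illustrative placeholders, not measurements.

def cost_per_run(runtime_hours: float, nodes: int, price_per_node_hour: float) -> float:
    """Total cost of one embarrassingly parallel run."""
    return runtime_hours * nodes * price_per_node_hour

# Assumed workload: 1,000 independent tasks of 0.5 CPU-hours each, spread over 100 nodes.
cpu_hours = 1000 * 0.5
runtime = cpu_hours / 100

cloud = cost_per_run(runtime, nodes=100, price_per_node_hour=0.40)
cluster = cost_per_run(runtime, nodes=100, price_per_node_hour=0.15)

print(f"cloud:   {cloud:.2f} per run")
print(f"cluster: {cluster:.2f} per run")
print(f"price/performance ratio (cloud / cluster): {cloud / cluster:.2f}")
```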

Relevance: 60.00%

Abstract:

BACKGROUND: DNA sequence integrity, mRNA concentrations and protein-DNA interactions have been subject to genome-wide analyses based on microarrays with ever increasing efficiency and reliability over the past fifteen years. However, very recently, novel technologies for Ultra High-Throughput DNA Sequencing (UHTS) have been harnessed to study these phenomena with unprecedented precision. As a consequence, the extensive bioinformatics environment available for array data management, analysis, interpretation and publication must be extended to include these novel sequencing data types. DESCRIPTION: MIMAS was originally conceived as a simple, convenient and local Microarray Information Management and Annotation System focused on GeneChips for expression profiling studies. MIMAS 3.0 enables users to manage data from high-density oligonucleotide SNP Chips, expression arrays (both 3'UTR and tiling), promoter arrays, BeadArrays, as well as UHTS data, using a MIAME-compliant standardized vocabulary. Importantly, researchers can export data in MAGE-TAB format and upload them to the EBI's ArrayExpress certified data repository using a one-step procedure. CONCLUSION: We have vastly extended the capability of the system such that it processes the data output of six types of GeneChips (Affymetrix), two different BeadArrays for mRNA and miRNA (Illumina) and the Genome Analyzer (a popular Ultra-High Throughput DNA Sequencer, Illumina), without compromising on its flexibility and user-friendliness. MIMAS, appropriately renamed the Multiomics Information Management and Annotation System, is currently used by scientists working in approximately 50 academic laboratories and genomics platforms in Switzerland and France. MIMAS 3.0 is freely available via http://multiomics.sourceforge.net/.
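
The MAGE-TAB export mentioned above ultimately amounts to writing tab-delimited tables. The sketch below writes a minimal SDRF-style file for two hypothetical samples; the column names follow general MAGE-TAB conventions, but the fields, values and file name are illustrative only and are not MIMAS's actual export code.

```python
import csv

# Minimal, illustrative SDRF-style (MAGE-TAB) export for two hypothetical samples.
# This is not MIMAS code; it only shows the tab-delimited structure of the format.
samples = [
    {"Source Name": "sample_1", "Characteristics[organism]": "Homo sapiens",
     "Array Data File": "sample_1.CEL"},
    {"Source Name": "sample_2", "Characteristics[organism]": "Homo sapiens",
     "Array Data File": "sample_2.CEL"},
]

with open("experiment.sdrf.txt", "w", newline="") as handle:
    writer = csv.DictWriter(handle, fieldnames=list(samples[0]), delimiter="\t")
    writer.writeheader()
    writer.writerows(samples)
```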

Relevance: 60.00%

Abstract:

Current applications of cardiac magnetic resonance (CMR) imaging offer a wide spectrum of indications in the setting of acute cardiac care. In particular, CMR is helpful for the differential diagnosis of chest pain by detecting myocarditis and pericarditis. Takotsubo cardiomyopathy and acute aortic diseases can also be evaluated by CMR and are important differential diagnoses in patients with acute chest pain. In patients with restricted windows for echocardiography, CMR is the method of choice to evaluate complications of acute myocardial infarction (AMI). In AMI, CMR allows for a unique characterization of myocardial damage by quantifying necrosis, microvascular obstruction, oedema (the area at risk), and haemorrhage. These capabilities will help us better understand the pathophysiological events during infarction and will also allow assessment of new treatment strategies in AMI. To what extent the information on tissue damage will guide patient management is not yet clear, and further research in this field is warranted. In the near future, CMR will certainly become more routine in acute cardiac care units, as manufacturers are now focusing strongly on this aspect of user-friendliness. Finally, in the next decade or so, MRI of other nuclei such as fluorine and carbon might become a clinical reality, which would allow for metabolic and targeted molecular imaging with excellent sensitivity and specificity.

Relevance: 60.00%

Abstract:

The purposes of the study were: (1) to describe how nursing students experienced their clinical learning environment and the supervision given by staff nurses working in hospital settings; and (2) to develop and test an evaluation scale of the Clinical Learning Environment and Supervision (CLES). The study was carried out in several phases. The pilot study (n=163) explored the association between the characteristics of a ward and its evaluation as a learning environment by students. The second version of the research instrument (developed from the results of this pilot study) was tested by an expert panel (n=9 nurse teachers) and a test-retest group of student nurses (n=38). After this evaluative phase, the CLES was established as the basic research instrument for this study and was tested with the Finnish main sample (n=416). In this phase, a concurrent validity instrument (Dunn & Burnett 1995) was used to confirm the validation process of the CLES. The international comparative study compared the Finnish main sample with a British sample (n=142). This comparison was necessary for two reasons: a new instrument needs to be tested in another nursing culture as part of its development, and the comparison reflects the impact of open employment markets in the European Union (EU) on the need to evaluate and integrate EU health care educational systems. The results showed that the individualised supervision system is the most widely used supervision model and that the supervisory relationship with a personal mentor is the most meaningful single element of supervision evaluated by nursing students. The ward atmosphere and the management style of the ward manager are the most important environmental factors of the clinical ward. The study integrates two theoretical elements, learning environment and supervision, in developing a preliminary theoretical model. The international comparison showed that Finnish students were more satisfied and rated their clinical placements and supervision higher than students in the United Kingdom (UK). The difference between the groups was statistically highly significant (p = 0.000). In the UK, clinical placements were longer, but students met their nurse teachers less frequently than students in Finland. Arrangements for supervision were similar. This research process has produced the evaluation scale (CLES), which can be used in research and quality assessments of clinical learning environments and supervision in Finland and in the UK. The CLES consists of 27 items sub-divided into five sub-dimensions. Cronbach's alpha coefficients ranged from a high of 0.94 to a marginal 0.73. The CLES is a compact evaluation scale whose user-friendliness makes it suitable for continuing evaluation.
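
Cronbach's alpha, reported above for the CLES sub-dimensions, is computed from the item variances and the variance of the total score. The sketch below computes it for an invented respondents-by-items response matrix; the data are placeholders, not the study's data.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) score matrix."""
    k = scores.shape[1]                          # number of items
    item_vars = scores.var(axis=0, ddof=1)       # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Invented 5-point Likert responses for 6 respondents on 4 items.
responses = np.array([
    [4, 5, 4, 5],
    [3, 4, 3, 4],
    [5, 5, 4, 5],
    [2, 3, 2, 3],
    [4, 4, 5, 4],
    [3, 3, 3, 4],
])

print(f"Cronbach's alpha: {cronbach_alpha(responses):.2f}")
```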

Relevance: 60.00%

Abstract:

The Internet and new communication technologies are deeply affecting healthcare systems and the provision of care. The purpose of this article is to evaluate the possibility that cyberhealth, via the development of widespread easy access to wireless personal computers, tablets and smartphones, can effectively influence intake of medication and long-term medication adherence, which is a complex, difficult and dynamic behaviour to adopt and to sustain over time. Because of its novelty, the impact of cyberhealth on drug intake has not yet been well explored. Initial results have provided some evidence, but more research is needed to determine the impact of cyberhealth resources on long-term adherence and health outcomes, its user-friendliness and its adequacy in meeting e-patient needs. The purpose of such Internet-based interventions, which provide different levels of customisation, is not to take over the roles of healthcare providers; on the contrary, cyberhealth platforms should reinforce the alliance between healthcare providers and patients by filling time-gaps between visits and allowing patients to upload and/or share feedback material to be used during the visits. This shift, however, is not easily endorsed by healthcare providers, who must master new eHealth skills, but healthcare systems have a unique opportunity to invest in the Internet and to use this powerful tool to design the future of integrated care. Before this can occur, however, important issues must be addressed and resolved, for example ethical considerations, the scientific quality of programmes, reimbursement of activity, data security and the ownership of uploaded data.

Relevance: 60.00%

Abstract:

Introduction: Therapeutic drug monitoring (TDM) aims at optimizing treatment by individualizing dosage regimens based on the measurement of blood concentrations. Maintaining concentrations within a target range requires pharmacokinetic and clinical capabilities. Bayesian calculation represents the gold standard TDM approach but requires computing assistance. In recent decades, computer programs have been developed to assist clinicians in this assignment. The aim of this benchmarking was to assess and compare computer tools designed to support TDM clinical activities. Method: The literature and the Internet were searched to identify software. All programs were tested on a common personal computer. Each program was scored against a standardized grid covering pharmacokinetic relevance, user-friendliness, computing aspects, interfacing, and storage. A weighting factor was applied to each criterion of the grid to account for its relative importance. To assess the robustness of the software, six representative clinical vignettes were also processed through all of them. Results: Twelve software tools were identified, tested and ranked, representing a comprehensive review of the available software's characteristics. The number of drugs handled varies widely, and eight programs allow users to add their own drug models. Ten programs are able to compute Bayesian dosage adaptation based on a blood concentration (a posteriori adjustment), while nine are also able to suggest a priori dosage regimens (prior to any blood concentration measurement) based on individual patient covariates such as age, gender and weight. Among those applying Bayesian analysis, one uses the non-parametric approach. The top two programs emerging from this benchmark are MwPharm and TCIWorks. The other programs evaluated also have good potential but are less sophisticated (e.g. in terms of storage or report generation) or less user-friendly. Conclusion: Whereas two integrated programs are at the top of the ranked list, such complex tools may not fit all institutions, and each software tool must be regarded with respect to the individual needs of hospitals or clinicians. Interest in computer tools to support therapeutic monitoring is still growing. Although developers have put effort into them in recent years, there is still room for improvement, especially in terms of institutional information system interfacing, user-friendliness, data storage capacity and report generation.

Relevance: 60.00%

Abstract:

Objectives: Therapeutic drug monitoring (TDM) aims at optimizing treatment by individualizing dosage regimens based on blood concentration measurements. Maintaining concentrations within a target range requires pharmacokinetic (PK) and clinical capabilities. Bayesian calculation represents the gold standard TDM approach but requires computing assistance. The aim of this benchmarking was to assess and compare computer tools designed to support TDM clinical activities. Methods: The literature and the Internet were searched to identify software. Each program was scored against a standardized grid covering pharmacokinetic relevance, user-friendliness, computing aspects, interfacing, and storage. A weighting factor was applied to each criterion of the grid to account for its relative importance. To assess the robustness of the software, six representative clinical vignettes were also processed through all of them. Results: Twelve software tools were identified, tested and ranked, representing a comprehensive review of the available software characteristics. The number of drugs handled varies from 2 to more than 180, and integration of different population types is available in some programs. Eight programs offer the ability to add new drug models based on population PK data. Ten computer tools incorporate Bayesian computation to predict dosage regimens (individual parameters are calculated based on population PK models). All of them are able to compute Bayesian a posteriori dosage adaptation based on a blood concentration, while nine are also able to suggest a priori dosage regimens based only on individual patient covariates. Among those applying Bayesian analysis, MM-USC*PACK uses a non-parametric approach. The top two programs emerging from this benchmark are MwPharm and TCIWorks. The other programs evaluated also have good potential but are less sophisticated or less user-friendly. Conclusions: Whereas two software packages are ranked at the top of the list, such complex tools may not fit all institutions, and each program must be regarded with respect to the individual needs of hospitals or clinicians. Programs should be easy and fast to use for routine activities, including for non-experienced users. Although interest in TDM tools is growing and efforts have been made in recent years, there is still room for improvement, especially in terms of institutional information system interfacing, user-friendliness, data storage capability and automated report generation.

Relevance: 60.00%

Abstract:

This study aims to improve the accuracy and usability of Iowa Falling Weight Deflectometer (FWD) data by incorporating significant enhancements into the fully automated software system for rapid processing of FWD data. These enhancements include: (1) refined prediction of the backcalculated pavement layer modulus through deflection basin matching/optimization, (2) temperature correction of the backcalculated Hot-Mix Asphalt (HMA) layer modulus, (3) computation of the effective structural number (SNeff) and effective k-value (keff) related to the 1993 AASHTO design guide, (4) computation of the Structural Rating (SR) and k-value (k) related to Iowa DOT asphalt concrete (AC) overlay design, and (5) improved user-friendliness of the software tool's input and output. A high-quality, easy-to-use backcalculation software package, referred to as I-BACK (the Iowa Pavement Backcalculation Software), was developed to achieve the project goals and requirements. This report presents the theoretical background behind the incorporated enhancements as well as guidance on the use of I-BACK. The developed tool provides more finely tuned ANN pavement backcalculation results by implementing a deflection basin matching optimizer for conventional flexible, full-depth, rigid, and composite pavements. Implementation of this tool within the Iowa DOT will facilitate accurate pavement structural evaluation and rehabilitation designs for pavement/asset management purposes. This research has also set the framework for the development of a simplified FWD deflection-based HMA overlay design procedure, which is one of the recommended areas for future research.
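
The deflection basin matching/optimization enhancement described above amounts to searching for the layer modulus whose computed deflection basin best matches the measured FWD deflections, typically by minimizing a root-mean-square error. The sketch below illustrates that idea with a purely hypothetical forward model (predict_basin) and invented deflection values; it is not the ANN or optimizer actually implemented in I-BACK.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Measured FWD deflections (mils) at the sensor offsets; illustrative values only.
measured = np.array([10.2, 8.1, 6.0, 4.3, 3.0, 2.1, 1.5])

def predict_basin(modulus_ksi: float) -> np.ndarray:
    """Hypothetical stand-in for a pavement response model: deflections shrink
    as the backcalculated layer modulus increases. Not a real forward model."""
    base = np.array([10.0, 8.0, 6.0, 4.4, 3.1, 2.2, 1.6])
    return base * (500.0 / modulus_ksi)

def rms_error(modulus_ksi: float) -> float:
    """Root-mean-square error between measured and computed basins."""
    return float(np.sqrt(np.mean((predict_basin(modulus_ksi) - measured) ** 2)))

# Search for the modulus that minimizes the basin mismatch.
result = minimize_scalar(rms_error, bounds=(50.0, 2000.0), method="bounded")
print(f"best-fit modulus: {result.x:.0f} ksi, RMS error: {result.fun:.3f} mils")
```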


Relevance: 60.00%

Abstract:

Volumes of data used in science and industry are growing rapidly. When researchers face the challenge of analyzing them, the data format is often the first obstacle. The lack of standardized ways of exploring different data layouts requires an effort to solve the problem from scratch each time. The possibility of accessing data in a rich, uniform manner, e.g. using Structured Query Language (SQL), would offer expressiveness and user-friendliness. Comma-separated values (CSV) are one of the most common data storage formats. Despite the format's simplicity, handling becomes non-trivial as file size grows. Importing CSVs into existing databases is time-consuming and troublesome, or even impossible if the horizontal dimension reaches thousands of columns. Most databases are optimized for handling a large number of rows rather than columns, so performance for datasets with non-typical layouts is often unacceptable. Other challenges include schema creation, updates and repeated data imports. To address the above-mentioned problems, I present a system for accessing very large CSV-based datasets by means of SQL. It is characterized by: a "no copy" approach (data stay mostly in the CSV files); "zero configuration" (no need to specify a database schema); implementation in C++ with boost [1], SQLite [2] and Qt [3], requiring no installation and having a very small size; query rewriting, dynamic creation of indices for appropriate columns and static data retrieval directly from CSV files, which ensure efficient plan execution; effortless support for millions of columns; per-value typing, which makes mixed text/number data easy to use; and a very simple network protocol that provides an efficient interface for MATLAB and reduces implementation time for other languages. The software is available as freeware, along with educational videos, on its website [4]. It needs no prerequisites to run, as all of the libraries are included in the distribution package. I test it against existing database solutions using a battery of benchmarks and discuss the results.
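
One way to convey the "query rewriting plus dynamic index creation" idea described above is sketched below: the CSV columns referenced in a query are loaded into an in-memory SQLite table and an index is created on demand for the filtered column. This is a conceptual Python sketch, not the C++/SQLite implementation the abstract describes (which, unlike this sketch, avoids copying the data at all); the file and column names in the usage comment are hypothetical.

```python
import csv
import sqlite3

# Conceptual sketch: expose selected CSV columns to SQL and index them on demand.
# The real system keeps data in the CSV ("no copy"); here, for simplicity, only
# the queried columns are copied into an in-memory SQLite table.

def query_csv(path: str, columns: list[str], where_sql: str, params: tuple):
    conn = sqlite3.connect(":memory:")
    col_defs = ", ".join(f'"{c}"' for c in columns)
    conn.execute(f"CREATE TABLE data ({col_defs})")

    with open(path, newline="") as handle:
        reader = csv.DictReader(handle)
        rows = ([row[c] for c in columns] for row in reader)
        placeholders = ", ".join("?" for _ in columns)
        conn.executemany(f"INSERT INTO data VALUES ({placeholders})", rows)

    # Dynamically create an index on the first requested column to speed up the filter.
    conn.execute(f'CREATE INDEX idx_filter ON data ("{columns[0]}")')
    return conn.execute(f"SELECT * FROM data WHERE {where_sql}", params).fetchall()

# Usage (assumes a hypothetical measurements.csv with columns "gene" and "value"):
# hits = query_csv("measurements.csv", ["gene", "value"], '"gene" = ?', ("TP53",))
```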

Relevance: 60.00%

Abstract:

Many assays to evaluate the nature, breadth, and quality of antigen-specific T cell responses are currently applied in human medicine. In most cases, assay-related protocols are developed on an individual laboratory basis, resulting in a large number of different protocols being applied worldwide. Together with the inherent complexity of cellular assays, this leads to unnecessary limitations in the ability to compare results generated across institutions. Over the past few years a number of critical assay parameters have been identified which influence test performance irrespective of protocol, material, and reagents used. Describing these critical factors as an integral part of any published report will both facilitate the comparison of data generated across institutions and lead to improvements in the assays themselves. To this end, the Minimal Information About T Cell Assays (MIATA) project was initiated. The objective of MIATA is to achieve a broad consensus on which T cell assay parameters should be reported in scientific publications and to propose a mechanism for reporting these in a systematic manner. To add maximum value for the scientific community, a step-wise, open, and field-spanning approach has been taken to achieve technical precision, user-friendliness, adequate incorporation of concerns, and high acceptance among peers. Here, we describe the past, present, and future perspectives of the MIATA project. We suggest that the approach taken can be generically applied to projects in which a broad consensus has to be reached among scientists working in fragmented fields, such as immunology. An additional objective of this undertaking is to engage the broader scientific community to comment on MIATA and to become an active participant in the project.