823 results for blended workflow
Abstract:
OBJECTIVE: Interruptions are known to have a negative impact on activity performance. Understanding how an interruption contributes to human error is limited because there is not a standard method for analyzing and classifying interruptions. Qualitative data are typically analyzed by either a deductive or an inductive method. Both methods have limitations. In this paper, a hybrid method was developed that integrates deductive and inductive methods for the categorization of activities and interruptions recorded during an ethnographic study of physicians and registered nurses in a Level One Trauma Center. Understanding the effects of interruptions is important for designing and evaluating informatics tools in particular and for improving healthcare quality and patient safety in general. METHOD: The hybrid method was developed using a deductive a priori classification framework with the provision of adding new categories discovered inductively in the data. The inductive process utilized line-by-line coding and constant comparison as stated in Grounded Theory. RESULTS: The categories of activities and interruptions were organized into a three-tiered hierarchy of activity. Validity and reliability of the categories were tested by categorizing a medical error case external to the study. No new categories of interruptions were identified during analysis of the medical error case. CONCLUSIONS: Findings from this study provide evidence that the hybrid model of categorization is more complete than either a deductive or an inductive method alone. The hybrid method developed in this study provides the methodological support for understanding, analyzing, and managing interruptions and workflow.
Abstract:
Cloud computing provides a promising solution to the genomics data deluge problem resulting from the advent of next-generation sequencing (NGS) technology. Based on the concepts of “resources-on-demand” and “pay-as-you-go”, scientists with no or limited infrastructure can have access to scalable and cost-effective computational resources. However, the large size of NGS data causes a significant data transfer latency from the client’s site to the cloud, which presents a bottleneck for using cloud computing services. In this paper, we provide a streaming-based scheme to overcome this problem, where the NGS data is processed while being transferred to the cloud. Our scheme targets the wide class of NGS data analysis tasks, where the NGS sequences can be processed independently from one another. We also provide the elastream package that supports the use of this scheme with individual analysis programs or with workflow systems. Experiments presented in this paper show that our solution mitigates the effect of data transfer latency and saves both time and cost of computation.
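The streaming scheme described above, processing NGS chunks while the transfer is still in progress, can be sketched as a bounded producer/consumer pipeline. This is a generic Python illustration of the overlap idea, not the elastream API; all names are hypothetical:

```python
import threading
import queue

def stream_process(chunks, process, n_workers=2):
    """Process data chunks as they arrive, overlapping 'transfer'
    (the producer loop) with computation (the worker threads),
    instead of waiting for the full transfer to finish."""
    q = queue.Queue(maxsize=4)   # bounded buffer between transfer and compute
    results = []
    lock = threading.Lock()

    def worker():
        while True:
            chunk = q.get()
            if chunk is None:     # sentinel: no more data
                q.task_done()
                break
            r = process(chunk)    # per-chunk analysis, independent of others
            with lock:
                results.append(r)
            q.task_done()

    workers = [threading.Thread(target=worker) for _ in range(n_workers)]
    for w in workers:
        w.start()
    for chunk in chunks:          # simulated incoming transfer
        q.put(chunk)
    for _ in workers:
        q.put(None)               # one sentinel per worker
    for w in workers:
        w.join()
    return results                # unordered, since chunks are independent
```

The bounded queue is what makes this work for the class of tasks the paper targets: each sequence chunk can be analyzed independently, so computation starts as soon as the first chunk lands.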
Abstract:
BACKGROUND: The most effective decision support systems are integrated with clinical information systems, such as inpatient and outpatient electronic health records (EHRs) and computerized provider order entry (CPOE) systems. PURPOSE: The goal of this project was to describe and quantify the results of a study of decision support capabilities in Certification Commission for Health Information Technology (CCHIT) certified electronic health record systems. METHODS: The authors conducted a series of interviews with representatives of nine commercially available clinical information systems, evaluating their capabilities against 42 different clinical decision support features. RESULTS: Six of the nine reviewed systems offered all the applicable event-driven, action-oriented, real-time clinical decision support triggers required for initiating clinical decision support interventions. Five of the nine systems could access all the patient-specific data items identified as necessary. Six of the nine systems supported all the intervention types identified as necessary to allow clinical information systems to tailor their interventions based on the severity of the clinical situation and the user's workflow. Only one system supported all the offered choices identified as key to allowing physicians to take action directly from within the alert. DISCUSSION: The principal finding relates to system-by-system variability. The best system in our analysis had only a single missing feature (from 42 total), while the worst had eighteen. This dramatic variability in CDS capability among commercially available systems was unexpected and is a cause for concern. CONCLUSIONS: These findings have implications for four distinct constituencies: purchasers of clinical information systems, developers of clinical decision support, vendors of clinical information systems, and certification bodies.
Abstract:
BACKGROUND: Early detection of colorectal cancer through timely follow-up of positive Fecal Occult Blood Tests (FOBTs) remains a challenge. In our previous work, we found 40% of positive FOBT results eligible for colonoscopy had no documented response by a treating clinician at two weeks despite procedures for electronic result notification. We determined if technical and/or workflow-related aspects of automated communication in the electronic health record could lead to the lack of response. METHODS: Using both qualitative and quantitative methods, we evaluated positive FOBT communication in the electronic health record of a large, urban facility between May 2008 and March 2009. We identified the source of test result communication breakdown, and developed an intervention to fix the problem. Explicit medical record reviews measured timely follow-up (defined as response within 30 days of positive FOBT) pre- and post-intervention. RESULTS: Data from 11 interviews and tracking information from 490 FOBT alerts revealed that the software intended to alert primary care practitioners (PCPs) of positive FOBT results was not configured correctly and over a third of positive FOBTs were not transmitted to PCPs. Upon correction of the technical problem, lack of timely follow-up decreased immediately from 29.9% to 5.4% (p<0.01) and was sustained at month 4 following the intervention. CONCLUSION: Electronic communication of positive FOBT results should be monitored to avoid limiting colorectal cancer screening benefits. Robust quality assurance and oversight systems are needed to achieve this. Our methods may be useful for others seeking to improve follow-up of FOBTs in their systems.
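The monitoring the authors call for, flagging positive results with no documented clinician response inside the follow-up window, can be sketched in a few lines. The record fields below are hypothetical, not taken from the study's actual system:

```python
from datetime import date, timedelta

def overdue_followups(alerts, today, window_days=30):
    """Flag positive test alerts with no documented clinician response
    within the follow-up window (30 days, matching the study's
    definition of timely follow-up)."""
    window = timedelta(days=window_days)
    return [a["id"] for a in alerts
            if a["responded_on"] is None and today - a["sent_on"] > window]
```

Run periodically over the alert-tracking data, a check like this would have surfaced the misconfigured transmission path well before a chart review did.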
Abstract:
Source materials like fine art, over-sized, fragile maps, and delicate artifacts have traditionally been digitally converted through the use of controlled lighting and high resolution scanners and camera backs. In addition, the capture of items such as general and special collections bound monographs has recently grown both through consortial efforts like the Internet Archive's Open Content Alliance and locally at the individual institution level. These projects, in turn, have introduced increasingly higher resolution consumer-grade digital single lens reflex cameras or "DSLRs" as a significant part of the general cultural heritage digital conversion workflow. Central to the authors' discussion is the fact that both camera backs and DSLRs commonly share the ability to capture native raw file formats. Because these formats include such advantages as access to an image's raw mosaic sensor data within their architecture, many institutions choose raw for initial capture due to its high bit-level and unprocessed nature. However, to date these same raw formats, so important to many at the point of capture, have yet to be considered "archival" within most published still imaging standards, if they are considered at all. Throughout many workflows raw files are deleted and thrown away after more traditionally "archival" uncompressed TIFF or JPEG 2000 files have been derived downstream from their raw source formats [1][2]. As a result, the authors examine the nature of raw anew and consider the basic questions: Should raw files be retained? What might their role be? Might they in fact form a new archival format space? Included in the discussion is a survey of assorted raw file types and their attributes. Also addressed are various sustainability issues as they pertain to archival formats, with a special emphasis on both raw's positive and negative characteristics as they apply to archival practices.
Current common archival workflows versus possible raw-based ones are investigated as well. These comparisons are noted in the context of each approach's differing levels of usable captured image data, various preservation virtues, and the divergent ideas of strictly fixed renditions versus the potential for improved renditions over time. Special attention is given to the DNG raw format through a detailed inspection of a number of its structural components and the roles they play in the format's latest specification. Finally, an evaluation is drawn of both proprietary raw formats in general and DNG in particular as possible alternative archival formats for still imaging.
Abstract:
OBJECTIVE: This Short Communication presents a clinical case in which a novel procedure, the "Individualized Scanbody Technique" (IST), was applied, starting with an intraoral digital impression and using a CAD/CAM process for the fabrication of ceramic reconstructions on bone level implants. MATERIAL AND METHODS: A standardized scanbody was individually modified in accordance with the created emergence profile of the provisional implant-supported restoration. Due to the specific adaptation of the scanbody, the conditioned supra-implant soft tissue complex was stabilized for the intraoral optical scan process. Then, the implant platform position and the supra-implant mucosa outline were transferred into the three-dimensional data set with a digital impression system. Within the technical workflow, the ZrO2 implant-abutment substructure could be designed virtually with predictable margins of the supra-implant mucosa. RESULTS: After finalization of the one-piece screw-retained full ceramic implant crown, the restoration demonstrated an appealing treatment outcome with harmonious soft tissue architecture. CONCLUSIONS: The IST facilitates a simple and fast approach for supra-implant mucosal outline transfer in the digital workflow. Moreover, the IST closes the interfaces in the fully digital pathway.
Abstract:
Multi-objective optimization algorithms aim at finding Pareto-optimal solutions. Recovering Pareto fronts or Pareto sets from a limited number of function evaluations is a challenging problem. A popular approach in the case of expensive-to-evaluate functions is to appeal to metamodels. Kriging has been shown to be efficient as a basis for sequential multi-objective optimization, notably through infill sampling criteria balancing exploitation and exploration, such as the Expected Hypervolume Improvement. Here we consider Kriging metamodels not only for selecting new points, but as a tool for estimating the whole Pareto front and quantifying how much uncertainty remains on it at any stage of Kriging-based multi-objective optimization algorithms. Our approach relies on the Gaussian process interpretation of Kriging and builds upon conditional simulations. Using concepts from random set theory, we propose to adapt the Vorob’ev expectation and deviation to capture the variability of the set of non-dominated points. Numerical experiments illustrate the potential of the proposed workflow, and it is shown on examples how Gaussian process simulations and the estimated Vorob’ev deviation can be used to monitor the ability of Kriging-based multi-objective optimization algorithms to accurately learn the Pareto front.
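The building block behind estimating a Pareto front from sampled or simulated objective values is non-domination filtering. A minimal sketch for minimization problems follows; it is illustrative only, and leaves out the paper's actual machinery (Gaussian process conditional simulations and the Vorob'ev expectation and deviation over random sets of non-dominated points):

```python
def non_dominated(points):
    """Return the non-dominated subset of a list of objective vectors,
    assuming minimization: p dominates q if p <= q componentwise
    and p is strictly better in at least one objective."""
    def dominates(p, q):
        return (all(a <= b for a, b in zip(p, q))
                and any(a < b for a, b in zip(p, q)))
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]
```

In the workflow sketched by the abstract, a filter like this would be applied to each conditional simulation of the Gaussian process, yielding one random Pareto front per simulation, over which set-valued statistics are then computed.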
Abstract:
High throughput discovery of ligand scaffolds for target proteins can enormously accelerate the development of leads and drug candidates. Here we describe an innovative workflow for the discovery of high affinity ligands for the benzodiazepine-binding site on the thus far uncrystallized mammalian GABAA receptors. The procedure includes chemical biology techniques that may be generally applied to other proteins. Prerequisites are a ligand that can be chemically modified with cysteine-reactive groups, knowledge of amino acid residues contributing to the drug-binding pocket, and crystal structures either of proteins homologous to the target protein or, better, of the target itself. Part of the protocol is virtual screening, which, without additional rounds of optimization, in many cases results only in low affinity ligands, even when a target protein has been crystallized. Here we show how the integration of functional data into structure-based screening dramatically improves the performance of the virtual screening. Thus, lead compounds with 14 different scaffolds were identified on the basis of an updated structural model of the diazepam-bound state of the GABAA receptor. Some of these compounds show considerable preference for the α3β2γ2 GABAA receptor subtype.
Abstract:
Answering run-time questions in object-oriented systems involves reasoning about and exploring connections between multiple objects. Developer questions exercise various aspects of an object and require multiple kinds of interactions depending on the relationships between objects, the application domain, and the differing developer needs. Nevertheless, traditional object inspectors, the essential tools often used to reason about objects, favor a generic view that focuses on the low-level details of the state of individual objects. This makes inspection inefficient and increases the time spent in the inspector. To improve the inspection process, we propose the Moldable Inspector, a novel approach for an extensible object inspector. The Moldable Inspector allows developers to look at objects using multiple interchangeable presentations and supports a workflow in which multiple levels of connected objects can be seen together. Both these aspects can be tailored to the domain of the objects and the question at hand. We further exemplify how the proposed solution improves the inspection process, introduce a prototype implementation, and discuss new directions for extending the Moldable Inspector.
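The core idea, selecting presentations by object type while keeping a generic state view as a fallback, can be sketched in a few lines. This is a toy Python illustration, not the actual Moldable Inspector prototype, and all names are hypothetical:

```python
class MoldableInspector:
    """Toy sketch of an extensible inspector: domain-specific
    presentations are registered per type, and a generic raw view
    remains available as a fallback for any object."""
    def __init__(self):
        self._presentations = {}   # type -> list of (name, render_fn)

    def register(self, typ, name, render):
        """Tailor the inspector: add a named presentation for a type."""
        self._presentations.setdefault(typ, []).append((name, render))

    def views(self, obj):
        """Return all applicable (name, rendering) pairs for obj,
        ending with the generic low-level view."""
        out = [(name, render(obj))
               for typ, views in self._presentations.items()
               if isinstance(obj, typ)
               for name, render in views]
        out.append(("raw", repr(obj)))   # generic fallback, always last
        return out
```

An object with a registered presentation shows its domain view first; an object without one degrades gracefully to the generic dump, which is the only view a traditional inspector offers.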
Abstract:
BACKGROUND: Errors in pharmacotherapy are common and probably affect around 2-10% of all prescriptions. Immediately after completing their studies, only a minority of young physicians describe themselves as "competent to prescribe". Our goal is to optimally prepare medical students, in addition to the lectures, for the clinical task of prescribing medication. Two blended learning modules, "Arzneimittelrezepte korrekt schreiben" ("Writing drug prescriptions correctly") and "Polypharmazie im Alter" ("Polypharmacy in old age"), have already been implemented.
Abstract:
This paper describes a general workflow for the registration of terrestrial radar interferometric data with 3D point clouds derived from terrestrial photogrammetry and structure from motion. After the determination of intrinsic and extrinsic orientation parameters, data obtained by terrestrial radar interferometry were projected on point clouds and then on the initial photographs. Visualisation of slope deformation measurements on photographs provides an easily understandable and distributable information product, especially of inaccessible target areas such as steep rock walls or in rockfall run-out zones. The suitability and error propagation of the referencing steps and final visualisation of four approaches are compared: (a) the classic approach using a metric camera and stereo-image photogrammetry; (b) images acquired with a metric camera, automatically processed using structure from motion; (c) images acquired with a digital compact camera, processed with structure from motion; and (d) a markerless approach, using images acquired with a digital compact camera using structure from motion without artificial ground control points. The usability of the completely markerless approach for visualising high-resolution radar interferometry facilitates the production of visualisation products for interpretation.
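The projection step described above, mapping 3D measurements onto photographs via the intrinsic and extrinsic orientation parameters, reduces to the pinhole camera model. A minimal sketch, with an assumed matrix layout and no lens distortion terms:

```python
def project_point(X, K, R, t):
    """Project a 3D world point X into pixel coordinates using the
    pinhole model x ~ K (R X + t), where K holds the intrinsic
    parameters (3x3), and R (3x3 rotation) and t (3-vector
    translation) are the extrinsic orientation."""
    # transform into camera coordinates: Xc = R X + t
    Xc = [sum(R[i][j] * X[j] for j in range(3)) + t[i] for i in range(3)]
    # perspective division, then apply focal lengths and principal point
    u = K[0][0] * Xc[0] / Xc[2] + K[0][2]
    v = K[1][1] * Xc[1] / Xc[2] + K[1][2]
    return u, v
```

Each radar-derived deformation value attached to a 3D point can be colored onto the photograph at the (u, v) returned here, which is how measurements of inaccessible rock walls end up as annotated images.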
Abstract:
The microporous material Ionsiv is used for 137Cs removal from aqueous nuclear waste streams. In the UK, Cs-loaded Ionsiv is classed as an intermediate-level waste; no sentencing and disposal route is yet defined for this material and it is currently held in safe interim storage on several nuclear sites. In this study, the suitability of fly ash and blast furnace slag blended cements for encapsulation of Cs-Ionsiv in a monolithic wasteform was investigated. No evidence of reaction or dissolution of the Cs-Ionsiv in the cementitious environment was found by scanning electron microscopy and X-ray diffraction. However, a small fraction (≤1.6 wt.%) of the Cs inventory was released from the encapsulated Ionsiv during leaching experiments carried out on hydrated samples. Furthermore, it was evident that K and Na present in the cementitious pore water exchanged with Cs and H in the Ionsiv. Therefore, cement systems lower in K and Na, such as slag based cements, showed lower Cs release than the fly ash based cements.
Abstract:
In 2011, the first consensus conference on guidelines for the use of cone-beam computed tomography (CBCT) was convened by the Swiss Society of Dentomaxillofacial Radiology (SGDMFR). This conference covered topics of oral and maxillofacial surgery, temporomandibular joint dysfunctions and disorders, and orthodontics. In 2014, a second consensus conference was convened on guidelines for the use of CBCT in endodontics, periodontology, reconstructive dentistry and pediatric dentistry. The guidelines are intended for all dentists in order to facilitate the decision as to when the use of CBCT is justified. As a rule, the use of CBCT is considered restrictive, since radiation protection reasons do not allow its routine use. CBCT should therefore be reserved for complex cases where its application can be expected to provide further information that is relevant to the choice of therapy. In periodontology, sufficient information is usually available from clinical examination and periapical radiographs; in endodontics alternative methods can often be used instead of CBCT; and for implant patients undergoing reconstructive dentistry, CBCT is of interest for the workflow from implant planning to the superstructure. For pediatric dentistry no application of CBCT is seen for caries diagnosis.
Abstract:
AIM: Virtual patients (VPs) are a one-of-a-kind e-learning resource, fostering clinical reasoning skills through clinical case examples. Their combination with face-to-face teaching, referred to as "blended learning", is important for their successful integration. So far little is known about the use of VPs in the field of continuing medical education and residency training. The pilot study presented here examined the application of VPs in the framework of a pediatric residency revision course. METHODS: Around 200 participants of a pediatric nephrology lecture ('nephrotic and nephritic syndrome in children') were offered two VPs as a wrap-up session at the revision course of the German Society for Pediatrics and Adolescent Medicine (DGKJ) 2009 in Heidelberg, Germany. Using a web-based survey form, different aspects were evaluated concerning the learning experiences with VPs, the combination with the lecture, and the use of VPs for residency training in general. RESULTS: N=40 evaluable survey forms were returned (approximately 21%). The return rate was impaired by a technical problem with the local Wi-Fi firewall. The participants perceived working through the VPs as a worthwhile learning experience that properly prepared them for diagnosing and treating real patients with similar complaints. The case presentations, the interactivity, and the possibility of repeated practice independent of place and time were pointed out in particular. When asked about the use of VPs for residency training in general, there was a distinct demand for more such offerings. CONCLUSION: VPs may reasonably complement existing learning activities in residency training.