939 results for Implementation models
Abstract:
BACKGROUND: Enhanced recovery after surgery (ERAS) is a multimodal approach to perioperative care that combines a range of interventions to enable early mobilization and feeding after surgery. We investigated the feasibility, clinical effectiveness, and cost savings of an ERAS program at a major U.S. teaching hospital. METHODS: Data were collected from consecutive patients undergoing open or laparoscopic colorectal surgery during 2 time periods, before and after implementation of an ERAS protocol. Data collected included patient demographics, operative and perioperative surgical and anesthesia data, need for analgesics, complications, inpatient medical costs, and 30-day readmission rates. RESULTS: There were 99 patients in the traditional care group and 142 in the ERAS group. The median length of stay (LOS) was 5 days in the ERAS group compared with 7 days in the traditional group (P < 0.001). The reduction in LOS was significant for both open procedures (median 6 vs 7 days, P = 0.01) and laparoscopic procedures (4 vs 6 days, P < 0.0001). ERAS patients had fewer urinary tract infections (13% vs 24%, P = 0.03). Readmission rates were lower in ERAS patients (9.8% vs 20.2%, P = 0.02). DISCUSSION: Implementation of an enhanced recovery protocol for colorectal surgery at a tertiary medical center was associated with a significantly reduced LOS and incidence of urinary tract infection. This is consistent with the findings of other studies in the literature and suggests that enhanced recovery programs can be implemented successfully and should be considered in U.S. hospitals.
Abstract:
Gaussian factor models have proven widely useful for parsimoniously characterizing dependence in multivariate data. There is a rich literature on their extension to mixed categorical and continuous variables, using latent Gaussian variables or through generalized latent trait models accommodating measurements in the exponential family. However, when generalizing to non-Gaussian measured variables, the latent variables typically influence both the dependence structure and the form of the marginal distributions, complicating interpretation and introducing artifacts. To address this problem we propose a novel class of Bayesian Gaussian copula factor models which decouple the latent factors from the marginal distributions. A semiparametric specification for the marginals based on the extended rank likelihood yields straightforward implementation and substantial computational gains. We provide new theoretical and empirical justifications for using this likelihood in Bayesian inference. We propose new default priors for the factor loadings and develop efficient parameter-expanded Gibbs sampling for posterior computation. The methods are evaluated through simulations and applied to a dataset in political science. The models in this paper are implemented in the R package bfa.
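As a hedged illustration of the decoupling idea described above, the following Python sketch generates data whose dependence comes from latent Gaussian factors while the margins are set by arbitrary monotone transforms (the sizes, loadings and margins are hypothetical; this is not the bfa implementation):

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n, p, k = 500, 6, 2                                        # observations, variables, latent factors
Lambda = 0.8 * rng.normal(size=(p, k))                     # hypothetical factor loadings
eta = rng.normal(size=(n, k))                              # latent factors
z = eta @ Lambda.T + rng.normal(size=(n, p))               # latent Gaussian layer: z ~ N(Lambda * eta, I)
u = norm.cdf(z / np.sqrt(1.0 + (Lambda**2).sum(axis=1)))   # standardize each margin, map to (0,1)
# the copula (dependence) is fixed by Lambda; the margins below can be any monotone transforms
y_cont = np.exp(3.0 * u[:, :3])                            # arbitrary continuous margins
y_ord = np.digitize(u[:, 3:], [0.3, 0.7])                  # arbitrary 3-level ordinal margins

In the paper's approach the margins are instead handled semiparametrically through the extended rank likelihood, so only the ranks of the observed variables enter the posterior for the loadings.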
Abstract:
BACKGROUND: Patients, clinicians, researchers and payers are seeking to understand the value of using genomic information (as reflected by genotyping, sequencing, family history or other data) to inform clinical decision-making. However, challenges exist to widespread clinical implementation of genomic medicine, a prerequisite for developing evidence of its real-world utility. METHODS: To address these challenges, the National Institutes of Health-funded IGNITE (Implementing GeNomics In pracTicE; www.ignite-genomics.org) Network, composed of six projects and a coordinating center, was established in 2013 to support the development, investigation and dissemination of genomic medicine practice models that seamlessly integrate genomic data into the electronic health record and that deploy tools for point-of-care decision making. IGNITE site projects are aligned in their purpose of testing these models, but individual projects vary in scope and design, including exploring genetic markers for disease risk prediction and prevention, developing tools for using family history data, incorporating pharmacogenomic data into clinical care, refining disease diagnosis using sequence-based mutation discovery, and creating novel educational approaches. RESULTS: This paper describes the IGNITE Network and member projects, including network structure, collaborative initiatives, clinical decision support strategies, methods for return of genomic test results, and educational initiatives for patients and providers. Clinical and outcomes data from individual sites and network-wide projects are expected to be published over the next few years. CONCLUSIONS: The IGNITE Network is an innovative series of projects and pilot demonstrations that aim to enhance the translation of validated, actionable genomic information into clinical settings and to develop and use outcome measures for genome-based clinical interventions, using a pragmatic framework to provide early data and proofs of concept on the utility of these interventions. Through these efforts and collaboration with other stakeholders, IGNITE is poised to have a significant impact on the acceleration of genomic information into medical practice.
Abstract:
Computer-based mathematical models describing aircraft fire have a role to play in the design and development of safer aircraft, in the implementation of safer and more rigorous certification criteria, and in post mortem accident investigation. As the costs involved in performing large-scale fire experiments for the next generation 'Ultra High Capacity Aircraft' (UHCA) are expected to be prohibitively high, the development and use of these modelling tools may become essential if these aircraft are to prove a safe and viable reality. By describing the present capabilities and limitations of aircraft fire models, this paper will examine the future development of these models in the areas of large-scale applications through parallel computing, combustion modelling and extinguishment modelling.
Abstract:
Computer egress simulation has the potential to be used in large-scale incidents to provide live advice to incident commanders. While there are many considerations which must be taken into account when applying such models to live incidents, one of the first concerns the computational speed of simulations. No matter how important the insight provided by the simulation, numerical hindsight will not prove useful to an incident commander. Thus, for this type of application to be useful, it is essential that the simulation can be run many times faster than real time. Parallel processing is a method of reducing run times for very large computational simulations by distributing the workload amongst a number of CPUs. In this paper we examine the development of a parallel version of the buildingEXODUS software. The parallel strategy implemented is based on a systematic partitioning of the problem domain onto an arbitrary number of sub-domains. Each sub-domain is computed on a separate processor and runs its own copy of the EXODUS code. The software has been designed to work on typical office-based networked PCs but will also function on a Windows-based cluster. Two evaluation scenarios using the parallel implementation of EXODUS are described: a large open area and a 50-storey high-rise building scenario. Speed-ups of up to 3.7 are achieved using up to six computers, with the high-rise building evacuation simulation achieving run times 6.4 times faster than real time.
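A minimal sketch of the kind of geometric domain partitioning described above, assuming a rectangular floor-plan grid split into vertical strips (illustrative only, not the buildingEXODUS partitioning code):

import numpy as np

def partition_columns(n_cols, n_subdomains):
    """Split the floor-plan columns into contiguous strips, one per worker process."""
    bounds = np.linspace(0, n_cols, n_subdomains + 1, dtype=int)
    return [(bounds[i], bounds[i + 1]) for i in range(n_subdomains)]

floor_plan = np.zeros((80, 120))                   # hypothetical 80 x 120 cell geometry
strips = partition_columns(floor_plan.shape[1], 6)
print(strips)                                      # [(0, 20), (20, 40), ..., (100, 120)]
# each strip would be simulated by its own copy of the egress code; occupants crossing
# a strip boundary are handed over to the neighbouring process via message passing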
Abstract:
A scalable, large-vocabulary, speaker-independent speech recognition system is being developed using Hidden Markov Models (HMMs) for acoustic modeling and a Weighted Finite State Transducer (WFST) to compile sentence, word, and phoneme models. The system comprises a software backend search and an FPGA-based Gaussian calculation, both of which are covered here. In this paper, we present an efficient pipelined design implemented both as an embedded peripheral and as a scalable, parallel hardware accelerator. Both architectures have been implemented on an Alpha Data XRC-5T1 reconfigurable computer housing a Virtex 5 SX95T FPGA. The core has been tested and is capable of calculating a full set of Gaussian results for 3825 acoustic models in 9.03 ms, which, coupled with a backend search over a 5000-word vocabulary, provided an accuracy of over 80%. Parallel implementations have been designed with up to 32 cores and have been successfully implemented with a clock frequency of 133 MHz.
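For orientation, the per-frame computation such a Gaussian core accelerates can be sketched in software as below, assuming diagonal-covariance Gaussians and a 39-dimensional feature vector (both assumptions are illustrative; only the 3825-model count comes from the abstract):

import numpy as np

def log_gaussians(frame, means, inv_vars, log_consts):
    """Log-likelihood of one feature frame under every acoustic Gaussian (vectorized)."""
    diff = frame - means                               # (n_models, dim)
    return log_consts - 0.5 * np.sum(diff * diff * inv_vars, axis=1)

dim, n_models = 39, 3825
means = np.random.randn(n_models, dim)
variances = 0.1 + np.abs(np.random.randn(n_models, dim))
inv_vars = 1.0 / variances
log_consts = -0.5 * (dim * np.log(2 * np.pi) + np.log(variances).sum(axis=1))
scores = log_gaussians(np.random.randn(dim), means, inv_vars, log_consts)   # one score per model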
Abstract:
The prevalence of multicore processors is bound to drive most kinds of software development towards parallel programming. To limit the difficulty and overhead of parallel software design and maintenance, it is crucial that parallel programming models allow an easy-to-understand, concise and dense representation of parallelism. Parallel programming models such as Cilk++ and Intel TBBs attempt to offer a better, higher-level abstraction for parallel programming than threads and locking synchronization. It is not straightforward, however, to express all patterns of parallelism in these models. Pipelines are an important parallel construct, yet they are difficult to express in Cilk and TBBs without a verbose restructuring of the code. In this paper we demonstrate that pipeline parallelism can be easily and concisely expressed in a Cilk-like language, which we extend with input, output and input/output dependency types on procedure arguments, enforced at runtime by the scheduler. We evaluate our implementation on real applications and show that our Cilk-like scheduler, extended to track and enforce these dependencies, has performance comparable to Cilk++.
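The dependency-typed Cilk-like constructs described above have no direct Python analogue, but a conventional queue-based pipeline conveys the pattern being targeted (stage functions and sizes are illustrative):

import threading, queue

def stage(fn, inbox, outbox):
    # each stage runs concurrently; items flow through the pipeline in order
    while (item := inbox.get()) is not None:
        outbox.put(fn(item))
    outbox.put(None)                                   # propagate end-of-stream

q1, q2, q3 = queue.Queue(), queue.Queue(), queue.Queue()
workers = [threading.Thread(target=stage, args=(lambda x: x * 2, q1, q2)),
           threading.Thread(target=stage, args=(lambda x: x + 1, q2, q3))]
for w in workers:
    w.start()
for i in range(5):
    q1.put(i)
q1.put(None)
results = [q3.get() for _ in range(6)][:-1]            # drop the end-of-stream marker
for w in workers:
    w.join()
print(results)                                         # [1, 3, 5, 7, 9]

In the paper's approach the scheduler infers this stage ordering from input/output annotations on procedure arguments rather than from explicit queues.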
Abstract:
There is a requirement for better integration between design and analysis tools, which is difficult due to their different objectives, separate data representations and workflows. Currently, substantial effort is required to produce a suitable analysis model from design geometry. Robust links are required between these different representations to enable analysis attributes to be transferred between different design and analysis packages for models at various levels of fidelity.
This paper describes a novel approach for integrating design and analysis models by identifying and managing the relationships between the different representations. Three key technologies, Cellular Modeling, Virtual Topology and Equivalencing, have been employed to achieve effective simulation model management. These technologies and their implementation are discussed in detail. Prototype automated tools are introduced, demonstrating how multiple simulation models can be linked and maintained to facilitate seamless integration throughout the design cycle.
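A toy sketch of the equivalencing idea, i.e. recording which analysis entities correspond to which design entities so that attributes can be transferred between representations (entity names and attributes are hypothetical, not taken from the prototype tools):

design_faces = {"face_12": {"pressure": 2.5e5}, "face_13": {"fixed": True}}
equivalence = {"face_12": ["elem_set_A"],              # design entity -> analysis entity groups
               "face_13": ["node_set_B", "node_set_C"]}

def transfer_attributes(design, mapping):
    """Push each design attribute onto its equivalent analysis entities."""
    analysis = {}
    for face, attrs in design.items():
        for target in mapping.get(face, []):
            analysis.setdefault(target, {}).update(attrs)
    return analysis

print(transfer_attributes(design_faces, equivalence))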
Abstract:
Different classes of constitutive models have been proposed to capture the time-dependent behaviour of soft soil (creep, stress relaxation, rate dependency). This paper critically reviews many of the models developed based on understanding of the time-dependent stress-strain-stress rate-strain rate behaviour of soils and viscoplasticity, in terms of their strengths and weaknesses. Some discussion is also made of the numerical implementation aspects of these models. Typical findings from numerical analyses of geotechnical structures constructed on soft soils are also discussed. The general elastic viscoplastic (EVP) models can roughly be divided into two categories: models based on the concept of overstress and models based on non-stationary flow surface theory. Although general in structure, both categories have their own strengths and shortcomings. This review indicates that EVP analysis is yet to be widely used by geotechnical engineers, apparently due to the mathematical complication involved in the formulation of the constitutive models, unconvincing benefit in terms of the accuracy of performance prediction, the requirement for additional soil parameter(s) and difficulties in determining them, and the necessity of excessive computing resources and time. © 2013 Taylor & Francis.
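As a hedged illustration of the first (overstress) category, a Perzyna-type formulation is commonly written as follows (the notation is generic and not taken from any specific model reviewed here):

\dot{\varepsilon}^{vp}_{ij} \;=\; \gamma \,\big\langle \Phi(F) \big\rangle\, \frac{\partial g}{\partial \sigma_{ij}},
\qquad
\big\langle \Phi(F) \big\rangle =
\begin{cases}
\Phi(F), & F > 0,\\
0, & F \le 0,
\end{cases}

where $\gamma$ is a fluidity parameter, $F$ is the overstress function measuring how far the current stress state lies outside the static yield surface, $\Phi$ is a scaling function and $g$ is the viscoplastic potential; viscoplastic strain rates therefore vanish inside the static surface and grow with the overstress.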
Abstract:
This paper introduces hybrid address spaces as a fundamental design methodology for implementing scalable runtime systems on many-core architectures without hardware support for cache coherence. We use hybrid address spaces for an implementation of MapReduce, a programming model for large-scale data processing, and for the implementation of a remote memory access (RMA) model. Both implementations are available on the Intel SCC and are portable to similar architectures. We present the design and implementation of HyMR, a MapReduce runtime system whereby different stages and the synchronization operations between them alternate between a distributed memory address space and a shared memory address space to improve performance and scalability. We compare HyMR to a reference implementation and find that HyMR improves performance by a factor of 1.71× over a set of representative MapReduce benchmarks. We also compare HyMR with Phoenix++, a state-of-the-art implementation for systems with hardware-managed cache coherence, in terms of scalability and sustained-to-peak data processing bandwidth, where HyMR demonstrates improvements by factors of 3.1× and 3.2×, respectively. We further evaluate our hybrid remote memory access (HyRMA) programming model and find its performance to be superior to that of message passing.
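For readers unfamiliar with the programming model, a minimal single-process Python sketch of MapReduce is given below (this illustrates the model that HyMR implements, not the SCC runtime itself):

from collections import defaultdict

def map_phase(records, map_fn):
    intermediate = defaultdict(list)
    for record in records:
        for key, value in map_fn(record):              # each map call emits (key, value) pairs
            intermediate[key].append(value)
    return intermediate                                # grouping by key stands in for the shuffle stage

def reduce_phase(intermediate, reduce_fn):
    return {key: reduce_fn(key, values) for key, values in intermediate.items()}

docs = ["hybrid address spaces", "address spaces on many core chips"]
counts = reduce_phase(map_phase(docs, lambda d: [(w, 1) for w in d.split()]),
                      lambda key, values: sum(values))
print(counts)                                          # e.g. {'address': 2, 'spaces': 2, ...}

In HyMR, as described above, these stages and the synchronization between them would alternate between distributed and shared address spaces to improve performance and scalability.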
Abstract:
We present a Monte Carlo radiative transfer technique for calculating synthetic spectropolarimetry for multidimensional supernova explosion models. The approach utilizes 'virtual-packets' that are generated during the propagation of the Monte Carlo quanta and used to compute synthetic observables for specific observer orientations. Compared to extracting synthetic observables by direct binning of emergent Monte Carlo quanta, this virtual-packet approach leads to a substantial reduction in the Monte Carlo noise. This is not only vital for calculating synthetic spectropolarimetry (since the degree of polarization is typically very small) but also useful for calculations of light curves and spectra. We first validate our approach via application of an idealized test code to simple geometries. We then describe its implementation in the Monte Carlo radiative transfer code ARTIS and present test calculations for simple models for Type Ia supernovae. Specifically, we use the well-known one-dimensional W7 model to verify that our scheme can accurately recover zero polarization from a spherical model, and to demonstrate the reduction in Monte Carlo noise compared to a simple packet-binning approach. To investigate the impact of aspherical ejecta on the polarization spectra, we then use ARTIS to calculate synthetic observables for prolate and oblate ellipsoidal models with Type Ia supernova compositions.
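A toy sketch of the virtual-packet idea, in which each Monte Carlo event also contributes an attenuated packet sent directly toward a chosen observer rather than relying solely on the chance escape of real packets (a uniform, isotropically emitting sphere is assumed here; this is an idealized illustration, not the ARTIS scheme itself):

import numpy as np

rng = np.random.default_rng(1)

def escape_optical_depth(position, direction, radius=1.0, kappa=1.0):
    """Optical depth from an interior point to the sphere surface along `direction`."""
    b = position @ direction
    path = -b + np.sqrt(b * b - position @ position + radius * radius)
    return kappa * path

observer = np.array([0.0, 0.0, 1.0])                  # fixed observer direction
n_packets, total = 2000, 0.0
for _ in range(n_packets):
    pos = rng.normal(size=3)
    pos *= rng.random() ** (1.0 / 3.0) / np.linalg.norm(pos)   # uniform point in the unit sphere
    # virtual packet: weight = isotropic emission probability per steradian times escape probability
    total += np.exp(-escape_optical_depth(pos, observer)) / (4.0 * np.pi)
print("virtual-packet estimate per real packet:", total / n_packets)

Because every packet contributes to the observer estimate, the variance is much lower than counting only the few real packets that happen to escape into a narrow solid-angle bin, which is the noise reduction exploited above.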
Abstract:
Background
Clinically integrated teaching and learning are regarded as the best options for improving evidence-based healthcare (EBHC) knowledge, skills and attitudes. To inform implementation of such strategies, we assessed the experiences, opinions and lessons learnt of those involved in such programmes.
Methods and Findings
We conducted semi-structured interviews with 24 EBHC programme coordinators from around the world, selected through purposive sampling. Following data transcription, a multidisciplinary group of investigators carried out analysis and data interpretation, using thematic content analysis. Successful implementation of clinically integrated teaching and learning of EBHC takes much time. Student learning needs to start in pre-clinical years with consolidation, application and assessment following in clinical years. Learning is supported through partnerships between various types of staff including the core EBHC team, clinical lecturers and clinicians working in the clinical setting. While full integration of EBHC learning into all clinical rotations is considered necessary, this was not always achieved. Critical success factors were pragmatism and readiness to use opportunities for engagement and including EBHC learning in the curriculum; patience; and a critical mass of the right teachers who have EBHC knowledge and skills and are confident in facilitating learning. Role modelling of EBHC within the clinical setting emerged as an important facilitator. The institutional context exerts an important influence; with faculty buy-in, endorsement by institutional leaders, and an EBHC-friendly culture, together with a supportive community of practice, all acting as key enablers. The most common challenges identified were lack of teaching time within the clinical curriculum, misconceptions about EBHC, resistance of staff, lack of confidence of tutors, lack of time, and negative role modelling.
Conclusions
Implementing clinically integrated EBHC curricula requires institutional support, a critical mass of the right teachers and role models in the clinical setting combined with patience, persistence and pragmatism on the part of teachers.
Abstract:
There is a general consensus that new service delivery models are needed for children with developmental coordination disorder (DCD). Emerging principles to guide service delivery include the use of graduated levels of intensity and evidence-based services that focus on function and participation. Interdisciplinary, community-based service delivery models based on best-practice principles are needed. In this case report, we propose the Apollo model as an example of an innovative service delivery model for children with DCD. We describe the context that led to the creation of a program for children with DCD, describe the service delivery model and services, and share lessons learned through implementation. The Apollo model has five components: first contact, service delivery coordination, and community, group and individual interventions. This model guided the development of a streamlined set of services offered to children with DCD, including early intake to share educational information with families, community interventions, interdisciplinary and occupational therapy groups, and individual interventions. Following implementation of the Apollo model, waiting times decreased and the number of children receiving services increased, without compromising service quality. Lessons learned are shared to facilitate the development of other practice models to support children with DCD.
Abstract:
The potential of cloud computing is gaining significant interest in Modeling & Simulation (M&S). The underlying concept of using computing power as a utility is very attractive to users that can access state-of-the-art hardware and software without capital investment. Moreover, the cloud computing characteristics of rapid elasticity and the ability to scale up or down according to workload make it very attractive to numerous applications including M&S. Research and development work typically focuses on the implementation of cloud-based systems supporting M&S as a Service (MSaaS). Such systems are typically composed of a supply chain of technology services. How is the payment collected from the end-user and distributed to the stakeholders in the supply chain? We discuss the business aspects of developing a cloud platform for various M&S applications. Business models from the perspectives of the stakeholders involved in providing and using MSaaS and cloud computing are investigated and presented.
Abstract:
The design and development of simulation models and tools for Demand Response (DR) programs are becoming increasingly important for taking full advantage of DR programs. Moreover, more active consumer participation in DR programs can help improve system reliability and decrease or defer required investments. DemSi, a DR simulator designed and implemented by the authors of this paper, allows studying DR actions and schemes in distribution networks. It undertakes the technical validation of the solution using realistic network simulation based on PSCAD. DemSi considers the players involved in DR actions, and the results can be analyzed from each specific player's point of view.