993 results for Controller implementation


Relevance:

20.00%

Publisher:

Abstract:

BACKGROUND: Risk assessment with a thorough family health history is recommended by numerous organizations and is now a required component of the annual physical for Medicare beneficiaries under the Affordable Care Act. However, there are several barriers to incorporating robust risk assessments into routine care. MeTree, a web-based, patient-facing health risk assessment tool, was developed with the aim of overcoming these barriers. In order to better understand what factors will be instrumental for broader adoption of risk assessment programs like MeTree in clinical settings, we obtained funding to perform a type III hybrid implementation-effectiveness study in primary care clinics at five diverse healthcare systems. Here, we describe the study's protocol. METHODS/DESIGN: MeTree collects personal medical information and a three-generation family health history from patients on 98 conditions. Using algorithms built entirely from current clinical guidelines, it provides clinical decision support to providers and patients on 30 conditions. All adult patients with an upcoming well-visit appointment at one of the 20 intervention clinics are eligible to participate. Patient-oriented risk reports are provided in real time. Provider-oriented risk reports are uploaded to the electronic medical record for review at the time of the appointment. Implementation outcomes are the enrollment rate of clinics, providers, and patients (enrolled vs approached) and their representativeness compared to the underlying population. Primary effectiveness outcomes are the percent of participants newly identified as being at increased risk for one of the clinical decision support conditions and the percent with appropriate risk-based screening. Secondary outcomes include the percent change in those meeting goals for a healthy lifestyle (diet, exercise, and smoking). Outcomes are measured through electronic medical record data abstraction, patient surveys, and surveys/qualitative interviews of clinical staff. DISCUSSION: This study evaluates factors that are critical to successful implementation of a web-based risk assessment tool into routine clinical care in a variety of healthcare settings. The results will identify resource needs and potential barriers and solutions to implementation in each setting, as well as an understanding of potential effectiveness. TRIAL REGISTRATION: NCT01956773.

Relevance:

20.00%

Publisher:

Abstract:

BACKGROUND: Patients, clinicians, researchers and payers are seeking to understand the value of using genomic information (as reflected by genotyping, sequencing, family history or other data) to inform clinical decision-making. However, challenges exist to widespread clinical implementation of genomic medicine, a prerequisite for developing evidence of its real-world utility. METHODS: To address these challenges, the National Institutes of Health-funded IGNITE (Implementing GeNomics In pracTicE; www.ignite-genomics.org) Network, comprising six projects and a coordinating center, was established in 2013 to support the development, investigation and dissemination of genomic medicine practice models that seamlessly integrate genomic data into the electronic health record and that deploy tools for point-of-care decision making. IGNITE site projects are aligned in their purpose of testing these models, but individual projects vary in scope and design, including exploring genetic markers for disease risk prediction and prevention, developing tools for using family history data, incorporating pharmacogenomic data into clinical care, refining disease diagnosis using sequence-based mutation discovery, and creating novel educational approaches. RESULTS: This paper describes the IGNITE Network and member projects, including network structure, collaborative initiatives, clinical decision support strategies, methods for return of genomic test results, and educational initiatives for patients and providers. Clinical and outcomes data from individual sites and network-wide projects are expected to be published over the next few years. CONCLUSIONS: The IGNITE Network is an innovative series of projects and pilot demonstrations that aims to enhance the translation of validated, actionable genomic information into clinical settings and to develop and use outcome measures for genome-based clinical interventions, within a pragmatic framework that provides early data and proofs of concept on the utility of these interventions. Through these efforts and collaboration with other stakeholders, IGNITE is poised to have a significant impact on accelerating the adoption of genomic information in medical practice.

Relevance:

20.00%

Publisher:

Abstract:

Objective: To describe the methodology and selection of quality indicators (QI) to be implemented in the EFFECT (EFFectiveness of Endometrial Cancer Treatment) project. EFFECT aims to monitor the variability in Quality of Care (QoC) of uterine cancer in Belgium, to compare the effectiveness of different treatment strategies in order to improve the QoC, and to check the internal validity of the QI so as to validate the impact of process indicators on outcome. Methods: A QI list was retrieved from the literature, recent guidelines and QI databases. The Belgian Healthcare Knowledge Center methodology was used for the selection process and involved an expert panel rating the QI on four criteria. The resulting scores and further discussion led to a final QI list. An online EFFECT module, including the list of variables required for measuring the QI, was developed by the Belgian Cancer Registry. Three test phases were performed to evaluate the relevance, feasibility and understanding of the variables and to test the compatibility of the dataset. Results: 138 QI were considered for further discussion and 82 QI were eligible for rating. Based on the rating scores and consensus among the expert panel, 41 QI were considered measurable and relevant. Testing of the data collection enabled optimization of the content and user-friendliness of the dataset and online module. Conclusions: This first Belgian initiative for monitoring the QoC of uterine cancer indicates that the previously used QI selection methodology is reproducible for uterine cancer. The QI list could be applied by other research groups for comparison. © 2013 Elsevier Inc.

Relevance:

20.00%

Publisher:

Abstract:

The parallelization of an industrially important in-house computational fluid dynamics (CFD) code for calculating the airflow over complex aircraft configurations using the Euler or Navier–Stokes equations is presented. The code discussed is the flow solver module of the SAUNA CFD suite. This suite uses a novel grid system that may include block-structured hexahedral or pyramidal grids, unstructured tetrahedral grids or a hybrid combination of both. To assist in the rapid convergence to a solution, a number of convergence acceleration techniques are employed, including implicit residual smoothing and a multigrid full approximation storage (FAS) scheme. Key features of the parallelization approach are the use of domain decomposition and encapsulated message passing to enable execution in parallel using a single program multiple data (SPMD) paradigm. In the case where a hybrid grid is used, a unified grid partitioning scheme is employed to define the decomposition of the mesh. The parallel code has been tested using both structured and hybrid grids on a number of different distributed memory parallel systems and is now routinely used to perform industrial-scale aeronautical simulations. Copyright © 2000 John Wiley & Sons, Ltd.
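To make the SPMD/domain-decomposition idea concrete, below is a minimal sketch (not the SAUNA solver, whose internals are not given here) of a 1-D halo exchange between sub-domains using mpi4py; the grid size, the Jacobi-style smoothing update and all names are illustrative assumptions.

```python
# Conceptual sketch only (not the SAUNA solver): SPMD domain decomposition with
# encapsulated message passing, shown as a 1-D halo exchange in mpi4py.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_global = 1024                       # total number of cells (assumption)
n_local = n_global // size            # cells owned by this rank (assumes exact division)
u = np.zeros(n_local + 2)             # +2 halo cells, one at each end
u[1:-1] = float(rank)                 # dummy initial data

left  = rank - 1 if rank > 0 else MPI.PROC_NULL         # PROC_NULL makes the exchange
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL   # a no-op at physical boundaries

for _ in range(100):
    # Exchange halo values with neighbouring sub-domains.
    comm.Sendrecv(u[1:2],   dest=left,  recvbuf=u[-1:], source=right)
    comm.Sendrecv(u[-2:-1], dest=right, recvbuf=u[0:1], source=left)
    # The same update runs on every rank over its own sub-domain (SPMD).
    u[1:-1] = 0.5 * (u[:-2] + u[2:])
```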

Relevance:

20.00%

Publisher:

Abstract:

Virtual manufacturing and design assessment increasingly involve the simulation of interacting phenomena, i.e. multi-physics, an activity which is very computationally intensive. This chapter describes an attempt to address the parallel issues associated with a multi-physics simulation approach based upon a range of compatible procedures operating on one mesh using a single database - the distinct physics solvers can operate separately or coupled on sub-domains of the whole geometric space. Moreover, the finite volume unstructured mesh solvers use different discretization schemes (and, in particular, different ‘nodal’ locations and control volumes). A two-level approach to the parallelization of this simulation software is described: the code is restructured into parallel form on the basis of the mesh partitioning alone, that is, without regard to the physics. However, at run time the mesh is partitioned to achieve a load balance by considering the load per node/element across the whole domain; the latter is, of course, determined by the problem-specific physics at each location.
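The two-level strategy hinges on weighting each mesh element by the physics active at that location before partitioning. The sketch below illustrates that idea only, using a simple greedy heuristic; the weights and function names are assumptions, and a production code would use a dedicated graph partitioner (e.g. METIS or JOSTLE) rather than this heuristic.

```python
# Illustration of weighting mesh elements by the physics active at each location
# and partitioning for load balance (greedy heuristic, for illustration only).
import heapq

def partition_by_weight(element_weights, n_parts):
    """Assign each element to whichever partition currently carries the least load."""
    parts = [[] for _ in range(n_parts)]
    heap = [(0.0, p) for p in range(n_parts)]          # (accumulated load, partition id)
    heapq.heapify(heap)
    # Place heavy elements first so the final imbalance stays small.
    for elem, w in sorted(enumerate(element_weights), key=lambda ew: -ew[1]):
        load, p = heapq.heappop(heap)
        parts[p].append(elem)
        heapq.heappush(heap, (load + w, p))
    return parts

# e.g. flow-only cells cost 1.0; cells also solving stress or heat transfer cost more
weights = [1.0] * 500 + [2.5] * 200 + [4.0] * 100
parts = partition_by_weight(weights, 4)
print([round(sum(weights[e] for e in p), 1) for p in parts])   # per-partition load
```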

Relevance:

20.00%

Publisher:

Abstract:

This paper presents a proactive approach to load sharing and describes the architecture of a scheme, Concert, based on this approach. A proactive approach is characterized by a shift of emphasis from reacting to load imbalance to avoiding its occurrence. In contrast, in a reactive load sharing scheme, activity is triggered when a processing node is either overloaded or underloaded. The main drawback of this approach is that a load imbalance is allowed to develop before costly corrective action is taken. Concert is a load sharing scheme for loosely-coupled distributed systems. Under this scheme, load and task behaviour information is collected and cached in advance of when it is needed. Concert uses Linux as a platform for development. Implemented partially in kernel space and partially in user space, it achieves transparency to users and applications whilst keeping the extent of kernel modifications to a minimum. Non-preemptive task transfers are used exclusively, motivated by lower complexity, lower overheads and faster transfers. The goal is to minimize the average response time of tasks. Concert is compared with other schemes by considering the level of transparency it provides with respect to users, tasks and the underlying operating system.
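A minimal sketch of the proactive principle follows, assuming hypothetical class and field names (this is not the Concert implementation, which lives partly in the Linux kernel): load information is cached ahead of need, and a non-preemptive placement decision consults only the cache when a task arrives.

```python
# Sketch of proactive load sharing: cache load data in advance, then place
# new tasks using the cached view only (no probing at submission time).
import random
import time

class LoadCache:
    def __init__(self, nodes, refresh_s=1.0):
        self.refresh_s = refresh_s
        self.cache = {n: (0.0, 0.0) for n in nodes}    # node -> (load, timestamp)

    def refresh(self):
        # In a real scheme this runs periodically in the background,
        # ahead of any placement decision.
        now = time.time()
        for n in self.cache:
            self.cache[n] = (random.random(), now)     # stand-in for a measured load

    def pick_node(self):
        # Non-preemptive placement: choose the least-loaded node from the cache.
        return min(self.cache, key=lambda n: self.cache[n][0])

lc = LoadCache(["node1", "node2", "node3"])
lc.refresh()
print("placing new task on", lc.pick_node())
```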

Relevance:

20.00%

Publisher:

Abstract:

A simulation program has been developed to calculate the power-spectral density of thin avalanche photodiodes (APDs), which are used in optical networks. The program extends the time-domain analysis of the dead-space multiplication model to compute the autocorrelation function of the APD impulse response. However, the computation requires a large amount of memory and is very time-consuming. We describe our experiences in parallelizing the code using both MPI and OpenMP. Several array partitioning schemes and scheduling policies are implemented and tested. Our results show that the OpenMP code is scalable up to 64 processors on an SGI Origin 2000 machine and has small average errors.
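The array-partitioning pattern can be illustrated as follows: the lags of an autocorrelation estimate are split statically across workers. The original work used MPI and OpenMP; a Python process pool stands in here purely to show the decomposition, and the sizes and the (n - k) normalisation are assumptions.

```python
# Static partitioning of autocorrelation lags across parallel workers.
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def acf_chunk(args):
    """Compute the autocorrelation estimate for one block of lags."""
    x, lags = args
    n = len(x)
    return [(k, float(np.dot(x[:n - k], x[k:]) / (n - k))) for k in lags]

def parallel_acf(x, max_lag, workers=4):
    lag_blocks = np.array_split(np.arange(max_lag + 1), workers)   # static partitioning
    with ProcessPoolExecutor(max_workers=workers) as pool:
        chunks = pool.map(acf_chunk, [(x, list(b)) for b in lag_blocks])
    acf = np.empty(max_lag + 1)
    for block in chunks:
        for k, value in block:
            acf[k] = value
    return acf

if __name__ == "__main__":
    signal = np.random.randn(20000)
    print(parallel_acf(signal, 64)[:5])
```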

Relevance:

20.00%

Publisher:

Abstract:

Parallel processing techniques have been used in the past to provide high-performance computing resources for activities such as fire-field modelling. This has traditionally been achieved using specialized hardware and software, the expense of which would be difficult to justify for many fire engineering practices. In this article we demonstrate how typical office-based PCs attached to a local area network have the potential to offer the benefits of parallel processing with minimal costs associated with the purchase of additional hardware or software. It was found that good speedups could be achieved on homogeneous networks of PCs; for example, a problem composed of ~100,000 cells would run 9.3 times faster on a network of twelve 800 MHz PCs than on a single 800 MHz PC. It was also found that a network of eight 3.2 GHz Pentium 4 PCs would run 7.04 times faster than a single 3.2 GHz Pentium computer. A dynamic load balancing scheme was also devised to allow the effective use of the software on heterogeneous PC networks. This scheme also ensured that the impact of the parallel processing task on other computer users on the network was minimized.
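Taking the reported figures at face value, the implied parallel efficiencies can be checked with a few lines of arithmetic:

```python
# Parallel efficiency = speedup / number of PCs, using the figures quoted above.
cases = {
    "twelve 800 MHz PCs":          (9.30, 12),
    "eight 3.2 GHz Pentium 4 PCs": (7.04, 8),
}
for name, (speedup, n_pcs) in cases.items():
    print(f"{name}: speedup {speedup:.2f}, efficiency {speedup / n_pcs:.0%}")
# -> roughly 78% and 88% efficiency respectively
```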

Relevance:

20.00%

Publisher:

Abstract:

This paper presents work towards generic policy toolkit support for autonomic computing systems in which the policies themselves can be adapted dynamically and automatically. The work is motivated by three needs: the need for longer-term policy-based adaptation, where the policy itself is dynamically adapted to continually maintain or improve its effectiveness despite changing environmental conditions; the need to enable practitioners who are not autonomics experts to embed self-managing behaviours with low cost and risk; and the need for adaptive policy mechanisms that are easy to deploy into legacy code. A policy definition language is presented, designed to permit powerful expression of self-managing behaviours. The language is very flexible, through the use of simple yet expressive syntax and semantics, and facilitates a very diverse policy behaviour space through both hierarchical and recursive uses of language elements. A prototype library implementation of the policy support mechanisms is described. The library reads and writes policies in well-formed XML script. The implementation extends the state of the art in policy-based autonomics through innovations which include support for multiple policy versions of a given policy type, multiple configuration templates, and meta-policies to dynamically select between policy instances and templates. Most significantly, the scheme supports hot-swapping between policy instances. To illustrate the feasibility and generalised applicability of these tools, two dissimilar example deployment scenarios are examined. The first is taken from an exploratory implementation of self-managing parallel processing, and is used to demonstrate the simple and efficient use of the tools. The second example demonstrates more advanced functionality, in the context of an envisioned multi-policy stock trading scheme which is sensitive to environmental volatility.
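Purely as an illustration of the hot-swapping idea (the actual policy definition language is not reproduced here, so the XML schema and class names below are invented), a registry might hold several versions of a policy and switch the active instance at run time:

```python
# Invented XML schema and names, illustrating hot-swapping between multiple
# versions of a policy at run time (not the toolkit's actual language).
import xml.etree.ElementTree as ET

POLICY_XML = """
<policies type="load_balance">
  <policy version="1"><threshold>0.8</threshold></policy>
  <policy version="2"><threshold>0.6</threshold></policy>
</policies>
"""

class PolicyRegistry:
    def __init__(self, xml_text):
        root = ET.fromstring(xml_text)
        self.versions = {p.get("version"): p for p in root.findall("policy")}
        self.active = min(self.versions)          # start from the lowest version number

    def hot_swap(self, version):
        # Swap the live policy instance without restarting the managed system.
        if version in self.versions:
            self.active = version

    def threshold(self):
        return float(self.versions[self.active].findtext("threshold"))

registry = PolicyRegistry(POLICY_XML)
print(registry.threshold())     # 0.8
registry.hot_swap("2")
print(registry.threshold())     # 0.6
```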

Relevance:

20.00%

Publisher:

Abstract:

Fractal video compression is a relatively new video compression method. Its attraction is due to its high compression ratio and simple decompression algorithm, but its computational complexity is high, and as a result parallel algorithms on high-performance machines offer one way forward. In this study we partition the matching search, which accounts for the majority of the work in a fractal video compression process, into small tasks and implement them in two distributed computing environments, one using DCOM and the other using .NET Remoting technology, based on a local area network consisting of loosely coupled PCs. Experimental results show that the parallel algorithm is able to achieve a high speedup in these distributed environments.
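The decomposition can be sketched as follows: each range block's search for its best-matching domain block is an independent task farmed out to workers. The DCOM/.NET Remoting transports of the original are replaced here by a local process pool purely to show the partitioning, and the block size and mean-squared-error metric are assumptions.

```python
# Task decomposition of the fractal matching search: one independent task per
# range block, distributed across a pool of workers.
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def best_match(args):
    """Return the index of the domain block that best matches this range block."""
    range_block, domain_blocks = args
    errors = [float(np.mean((range_block - d) ** 2)) for d in domain_blocks]
    return int(np.argmin(errors))

def parallel_matching(range_blocks, domain_blocks, workers=4):
    tasks = [(r, domain_blocks) for r in range_blocks]     # one task per range block
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(best_match, tasks))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ranges  = [rng.random((8, 8)) for _ in range(32)]      # range blocks of a frame
    domains = [rng.random((8, 8)) for _ in range(256)]     # candidate domain blocks
    print(parallel_matching(ranges, domains)[:5])
```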

Relevance:

20.00%

Publisher:

Abstract:

Computer egress simulation has the potential to be used in large-scale incidents to provide live advice to incident commanders. While there are many considerations which must be taken into account when applying such models to live incidents, one of the first is the computational speed of the simulations. No matter how important the insight provided by the simulation, numerical hindsight will not prove useful to an incident commander. Thus, for this type of application to be useful, it is essential that the simulation can be run many times faster than real time. Parallel processing is a method of reducing run times for very large computational simulations by distributing the workload amongst a number of CPUs. In this paper we examine the development of a parallel version of the buildingEXODUS software. The parallel strategy implemented is based on a systematic partitioning of the problem domain onto an arbitrary number of sub-domains. Each sub-domain is computed on a separate processor and runs its own copy of the EXODUS code. The software has been designed to work on typical office-based networked PCs but will also function on a Windows-based cluster. Two evaluation scenarios using the parallel implementation of EXODUS are described: a large open area and a 50-storey high-rise building scenario. Speed-ups of up to 3.7 are achieved using up to six computers, with the high-rise building evacuation simulation achieving run times 6.4 times faster than real time.
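A minimal sketch of the partitioning step, under the assumption of a simple strip decomposition along one axis (the actual buildingEXODUS partitioning and data structures are not shown here), might look like this: each agent is owned by the sub-domain containing its position, and each sub-domain would run on its own processor.

```python
# Assumed strip decomposition of the geometry along the x-axis; names and data
# structures are illustrative, not the buildingEXODUS internals.
def make_subdomains(x_extent, n_parts):
    """Split the x-axis of the geometry into equal-width strips."""
    width = x_extent / n_parts
    return [(p * width, (p + 1) * width) for p in range(n_parts)]

def owner(x, subdomains):
    """Return the index of the sub-domain that owns position x."""
    for p, (lo, hi) in enumerate(subdomains):
        if lo <= x < hi:
            return p
    return len(subdomains) - 1          # clamp anything on the far boundary

subdomains = make_subdomains(x_extent=100.0, n_parts=6)
agent_positions = [12.0, 37.5, 55.0, 99.0]                 # agent x-positions (metres)
print([owner(x, subdomains) for x in agent_positions])     # owning sub-domain per agent
```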

Relevance:

20.00%

Publisher:

Abstract:

Within the building evacuation context, wayfinding describes the process by which an individual located within an arbitrarily complex enclosure attempts to find a path that leads them to relative safety, usually the exterior of the enclosure. Within most evacuation modelling tools, wayfinding is completely ignored; agents are either assigned the shortest-distance path or use a potential field to find the shortest path to the exits. In this paper a novel wayfinding technique that attempts to represent the manner in which people wayfind within structures is introduced and demonstrated through two examples. The first step is to encode the spatial information of the enclosure in terms of a graph. The second step is to apply search algorithms to the graph to find possible routes to the destination and assign a cost to each route based on the agent's personal route preferences, such as "least time" or "least distance", or a combination of criteria. The third step is route execution and refinement: the agent moves along the chosen route, reassesses it at regular intervals, and may decide to take an alternative path if it determines that another route is more favourable, e.g. because the initial path is highly congested or is blocked due to fire.
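The three steps can be sketched as a small worked example: a toy graph of the enclosure, a Dijkstra search whose edge cost blends distance with congestion according to the agent's preference weights, and a re-plan when an edge is reported blocked. The graph, weights and function names are illustrative assumptions, not the technique's actual implementation.

```python
# Toy graph of an enclosure, a preference-weighted Dijkstra search, and a
# re-plan around a blocked edge (illustrative only).
import heapq

# node -> [(neighbour, distance in metres, congestion in [0, 1]), ...]
GRAPH = {
    "room":      [("corridorA", 10, 0.8), ("corridorB", 18, 0.1)],
    "corridorA": [("exit", 15, 0.9)],
    "corridorB": [("exit", 20, 0.1)],
    "exit":      [],
}

def route(graph, start, goal, w_dist=1.0, w_cong=30.0, blocked=()):
    """Dijkstra with per-edge cost = w_dist * distance + w_cong * congestion."""
    best = {start: (0.0, None)}                  # node -> (cost so far, predecessor)
    queue = [(0.0, start)]
    while queue:
        cost, node = heapq.heappop(queue)
        if node == goal:
            break
        for nxt, dist, cong in graph[node]:
            if (node, nxt) in blocked:
                continue                         # step 3: refinement skips blocked edges
            c = cost + w_dist * dist + w_cong * cong
            if nxt not in best or c < best[nxt][0]:
                best[nxt] = (c, node)
                heapq.heappush(queue, (c, nxt))
    path, node = [], goal                        # walk predecessors back from the goal
    while node is not None:
        path.append(node)
        node = best[node][1]
    return path[::-1]

print(route(GRAPH, "room", "exit"))                                   # prefers low congestion
print(route(GRAPH, "room", "exit", blocked={("corridorB", "exit")}))  # re-plan when blocked
```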

Relevance:

20.00%

Publisher:

Abstract:

An aqueous solution of sucrose was lyophilised, producing amorphous sucrose. This was then stored under different humidities at 25 °C for 1 week, allowing some samples to crystallise. FT-Raman spectroscopy and PXRD have been successfully shown to qualitatively distinguish between amorphous and crystalline samples of sucrose. The data from the two techniques are complementary.

Relevance:

20.00%

Publisher:

Abstract:

Today, the key to commercial success in manufacturing is the timely development of new products that are not only functionally fit for purpose but offer high performance and quality throughout their entire lifecycle. In principle, this demands the introduction of a fully developed and optimised product from the outset. To accomplish this, manufacturing companies must leverage existing knowledge in their current technical, manufacturing and service capabilities. This is especially true in the field of tolerance selection and application, the subject area of this research. Tolerance knowledge must be readily available and deployed as an integral part of the product development process. This paper describes a methodology and framework, currently under development in a UK manufacturer, to achieve this objective.