200 results for cluster algorithms
Abstract:
Background: Potentially inappropriate prescribing (PIP) in older people is common in primary care and can result in increased morbidity, adverse drug events, hospitalizations and mortality. The prevalence of PIP in Ireland is estimated at 36% with an associated expenditure of over €45 million in 2007. The aim of this paper is to describe the application of the Medical Research Council (MRC) framework to the development of an intervention to decrease PIP in Irish primary care.
Methods: The MRC framework for the design and evaluation of complex interventions guided the development of the study intervention. In the development stage, the literature was reviewed and combined with information obtained from experts in the field, using a consensus-based methodology and patient cases to define the main components of the intervention. In the pilot stage, five GPs tested the proposed intervention. Qualitative interviews were conducted with the GPs to inform the development and implementation of the intervention for the main randomised controlled trial.
Results: The literature review identified PIP criteria for inclusion in the study and two initial intervention components - academic detailing and medicines review supported by therapeutic treatment algorithms. Through patient case studies and a focus group with eight GPs, these components were refined and a third component of the intervention identified - patient information leaflets. The intervention was tested in a pilot study. In total, eight medicine reviews were conducted across five GP practices. These reviews addressed ten instances of PIP, nine of which were resolved through either a dose reduction or discontinuation of the targeted medication. Qualitative interviews highlighted that GPs were receptive to the intervention, but patient preference and the time needed both to prepare for and to conduct the medicines review emerged as potential barriers. Findings from the pilot study allowed further refinement to produce the finalised intervention of academic detailing with a pharmacist, medicines review with web-based therapeutic treatment algorithms and tailored patient information leaflets.
Conclusions: The MRC framework was used in the development of the OPTI-SCRIPT intervention to decrease the level of PIP in primary care in Ireland. Its application ensured that the intervention was developed using the best available evidence, was acceptable to GPs and feasible to deliver in the clinical setting. The effectiveness of this intervention is currently being tested in a pragmatic cluster randomised controlled trial.
Trial registration: Current Controlled Trials ISRCTN41694007. © 2013 Clyne et al.; licensee BioMed Central Ltd.
Abstract:
We report on our discovery and observations of the Pan-STARRS1 supernova (SN) PS1-12sk, a transient with properties that indicate atypical star formation in its host galaxy cluster or pose a challenge to popular progenitor system models for this class of explosion. The optical spectra of PS1-12sk classify it as a Type Ibn SN (cf. SN 2006jc), dominated by intermediate-width (3x10^3 km/s) and time-variable He I emission. Our multi-wavelength monitoring establishes a rise time of dt = 9-23 days and shows an NUV-NIR SED with temperature > 17x10^3 K and a peak rise magnitude of Mz = -18.9 mag. SN Ibn spectroscopic properties are commonly interpreted as the signature of a massive star (17 - 100 M_sun) explosion within a He-enriched circumstellar medium. However, unlike previous Type Ibn supernovae, PS1-12sk is associated with an elliptical brightest cluster galaxy, CGCG 208-042 (z = 0.054) in cluster RXC J0844.9+4258. The expected probability of an event like PS1-12sk in such environments is low, given the measured infrequency of core-collapse SNe in red sequence galaxies compounded by the low volumetric rate of SN Ibn. Furthermore, we find no evidence of star formation at the explosion site to sensitive limits (Sigma Halpha
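The host redshift (z = 0.054) and peak absolute magnitude (Mz = -18.9 mag) quoted above can be tied together with a short back-of-the-envelope distance-modulus check. The Python sketch below is illustrative only, assuming a simple Hubble-law luminosity distance with H0 = 70 km/s/Mpc and ignoring K-corrections and extinction; it is not the photometric calibration used in the survey.

```python
# Hypothetical back-of-the-envelope check: relate the reported peak absolute
# magnitude of PS1-12sk (Mz = -18.9) to an apparent z-band magnitude at the
# host redshift z = 0.054, using a simple Hubble-law distance (H0 = 70 km/s/Mpc)
# and ignoring K-corrections and extinction.
import math

C_KM_S = 299792.458      # speed of light [km/s]
H0 = 70.0                # assumed Hubble constant [km/s/Mpc]

def distance_modulus(z: float) -> float:
    """Distance modulus mu = 5*log10(d_L / 10 pc) with a low-z Hubble-law distance."""
    d_mpc = C_KM_S * z / H0          # approximate luminosity distance at low z [Mpc]
    d_pc = d_mpc * 1e6               # convert Mpc -> pc
    return 5.0 * math.log10(d_pc / 10.0)

z_host = 0.054           # redshift of CGCG 208-042 (from the abstract)
M_z = -18.9              # reported peak absolute magnitude

mu = distance_modulus(z_host)
print(f"distance modulus   mu  ~ {mu:.2f} mag")
print(f"apparent peak mag  m_z ~ {M_z + mu:.1f} mag")   # roughly 17.9 mag
```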
Abstract:
Processor architectures have taken a turn towards many-core processors, which integrate multiple processing cores on a single chip to increase overall performance, and there are no signs that this trend will stop in the near future. Many-core processors are harder to program than multi-core and single-core processors due to the need to write parallel or concurrent programs with high degrees of parallelism. Moreover, many-cores have to operate in a mode of strong scaling because of memory bandwidth constraints. In strong scaling, increasingly fine-grained parallelism must be extracted in order to keep all processing cores busy.
Task dataflow programming models have a high potential to simplify parallel programming because they relieve the programmer of precisely identifying all inter-task dependences when writing programs. Instead, the task dataflow runtime system detects and enforces inter-task dependences during execution based on the description of the memory each task accesses. The runtime constructs a task dataflow graph that captures all tasks and their dependences. Tasks are scheduled to execute in parallel, taking into account the dependences specified in the task graph.
Several papers report significant overheads for task dataflow systems, which severely limit the scalability and usability of such systems. In this paper we study efficient schemes to manage task graphs and analyze their scalability. We assume a programming model that supports input, output and in/out annotations on task arguments, as well as commutative in/out and reductions. We analyze the structure of task graphs and identify versions and generations as key concepts for efficient management of task graphs. Then, we present three schemes to manage task graphs building on graph representations, hypergraphs and lists. We also consider a fourth, edge-less scheme that synchronizes tasks using integers. Analysis using micro-benchmarks shows that the graph representation is not always scalable and that the edge-less scheme introduces the least overhead in nearly all situations.
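To make the annotation-driven dependence tracking described above concrete, the sketch below is a minimal, hypothetical Python runtime that derives inter-task dependences from in/out/inout argument annotations by tracking, per memory object, the last writer and the readers since that write (loosely, the versions and generations mentioned above). It is an illustration only, not one of the four schemes evaluated in the paper, and it omits commutative in/out and reduction annotations as well as parallel execution.

```python
# Illustrative task-graph construction from per-argument access annotations.
from collections import defaultdict
from enum import Enum

class Access(Enum):
    IN = "in"
    OUT = "out"
    INOUT = "inout"

class Task:
    def __init__(self, name, args):
        self.name = name
        self.args = args                 # list of (object_id, Access)
        self.successors = set()
        self.pending = 0                 # number of unfinished predecessors

class TaskGraph:
    def __init__(self):
        self.last_writer = {}            # object_id -> Task that last wrote it
        self.readers = defaultdict(list) # object_id -> readers since last write
        self.tasks = []

    def _add_edge(self, pred, succ):
        if pred is not succ and succ not in pred.successors:
            pred.successors.add(succ)
            succ.pending += 1

    def submit(self, task):
        for obj, mode in task.args:
            if mode in (Access.IN, Access.INOUT):
                # true (read-after-write) dependence on the last writer
                if obj in self.last_writer:
                    self._add_edge(self.last_writer[obj], task)
            if mode in (Access.OUT, Access.INOUT):
                # anti/output dependences: wait for readers and last writer
                for r in self.readers[obj]:
                    self._add_edge(r, task)
                if obj in self.last_writer:
                    self._add_edge(self.last_writer[obj], task)
                # this write opens a new version of the object
                self.last_writer[obj] = task
                self.readers[obj] = []
            if mode is Access.IN:
                self.readers[obj].append(task)
        self.tasks.append(task)

    def run(self):
        # execute in a dependence-respecting order (sequential stand-in for a scheduler)
        ready = [t for t in self.tasks if t.pending == 0]
        while ready:
            t = ready.pop()
            print("run", t.name)
            for s in t.successors:
                s.pending -= 1
                if s.pending == 0:
                    ready.append(s)

# Example: t2 reads what t1 wrote; t3 overwrites it and must wait for both.
g = TaskGraph()
g.submit(Task("t1", [("x", Access.OUT)]))
g.submit(Task("t2", [("x", Access.IN)]))
g.submit(Task("t3", [("x", Access.INOUT)]))
g.run()   # prints t1, t2, t3
```

An edge-less scheme along the lines mentioned in the abstract would presumably replace the explicit successor sets with per-object integer synchronization counters, though that detail is only a guess here.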
Abstract:
Many modern networks are reconfigurable, in the sense that the topology of the network can be changed by the nodes in the network. For example, peer-to-peer, wireless and ad-hoc networks are reconfigurable. More generally, many social networks, such as a company's organizational chart; infrastructure networks, such as an airline's transportation network; and biological networks, such as the human brain, are also reconfigurable. Modern reconfigurable networks have a complexity unprecedented in the history of engineering, resembling a dynamic and evolving living animal more than a structure of steel designed from a blueprint. Unfortunately, our mathematical and algorithmic tools have not yet developed enough to handle this complexity and fully exploit the flexibility of these networks. We believe that it is no longer possible to build networks that are scalable and never have node failures. Instead, these networks should be able to admit small, and perhaps periodic, failures and still recover, much as skin heals from a cut. This process, in which the network recovers by maintaining key invariants in response to attack by a powerful adversary, is what we call self-healing. Here, we present several fast and provably good distributed algorithms for self-healing in reconfigurable dynamic networks. Each of these algorithms has different properties and a different set of guarantees and limitations. We also discuss future directions and theoretical questions we would like to answer.
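As a toy illustration of the self-healing idea, and not of the specific algorithms presented in this document, the Python sketch below maintains one key invariant (connectivity of the surviving nodes) by having the neighbors of a deleted node reconnect into a ring; the class and method names are hypothetical.

```python
# Toy self-healing network: after each adversarial node deletion, the orphaned
# neighbors reconnect among themselves so the surviving graph stays connected.
from collections import defaultdict

class SelfHealingNetwork:
    def __init__(self, edges):
        self.adj = defaultdict(set)
        for u, v in edges:
            self.adj[u].add(v)
            self.adj[v].add(u)

    def delete(self, x):
        """Adversary removes node x; its neighbors heal by forming a cycle."""
        neighbors = sorted(self.adj.pop(x, set()))
        for n in neighbors:
            self.adj[n].discard(x)
        # healing step: link the orphaned neighbors in a ring, so the component
        # stays connected while each survivor gains at most two new edges
        for a, b in zip(neighbors, neighbors[1:] + neighbors[:1]):
            if a != b:
                self.adj[a].add(b)
                self.adj[b].add(a)

    def is_connected(self):
        nodes = list(self.adj)
        if not nodes:
            return True
        seen, stack = {nodes[0]}, [nodes[0]]
        while stack:
            for w in self.adj[stack.pop()]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        return len(seen) == len(nodes)

# Star network: deleting the hub would normally disconnect everything.
net = SelfHealingNetwork([("hub", leaf) for leaf in ("a", "b", "c", "d")])
net.delete("hub")
print(net.is_connected())  # True: the leaves healed into a ring
```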
Abstract:
Despite the significant burden of cervical cancer, Malaysia, like many middle-income countries, relies on opportunistic cervical screening as opposed to a more organized population-based program. The aim of this study was to ascertain the effectiveness of a worksite screening initiative on Papanicolaou smear test (Pap test) uptake among educated working women in Malaysia.