998 results for cluster validation
Abstract:
Background: Medication errors are an important cause of morbidity and mortality in primary care. The aims of this study are to determine the effectiveness, cost-effectiveness and acceptability of a pharmacist-led, information-technology-based complex intervention compared with simple feedback in reducing the proportions of patients at risk from potentially hazardous prescribing and medicines management in general (family) practice. Methods: Research subject group: "At-risk" patients registered with computerised general practices in two geographical regions in England. Design: Parallel group pragmatic cluster randomised trial. Interventions: Practices will be randomised to either: (i) computer-generated feedback; or (ii) a pharmacist-led intervention comprising computer-generated feedback, educational outreach and dedicated support. Primary outcome measures: The proportion of patients in each practice at six and 12 months post-intervention: - with a computer-recorded history of peptic ulcer being prescribed non-selective non-steroidal anti-inflammatory drugs; - with a computer-recorded diagnosis of asthma being prescribed beta-blockers; - aged 75 years and older receiving long-term prescriptions for angiotensin converting enzyme inhibitors or loop diuretics without a recorded assessment of renal function and electrolytes in the preceding 15 months. Secondary outcome measures: These relate to a number of other examples of potentially hazardous prescribing and medicines management. Economic analysis: An economic evaluation of the cost per error avoided will be carried out from the perspective of the UK National Health Service (NHS), comparing the pharmacist-led intervention with simple feedback. Qualitative analysis: A qualitative study will be conducted to explore the views and experiences of health care professionals and NHS managers concerning the interventions, and to investigate possible reasons why the interventions prove effective or ineffective. Sample size: 34 practices in each of the two treatment arms would provide at least 80% power (two-tailed alpha of 0.05) to demonstrate a 50% reduction in error rates for each of the three primary outcome measures in the pharmacist-led intervention arm compared with an 11% reduction in the simple feedback arm. Discussion: At the time of submission of this article, 72 general practices have been recruited (36 in each arm of the trial) and the interventions have been delivered. Analysis has not yet been undertaken.
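A rough illustration of the sample-size arithmetic behind such a cluster randomised design is sketched below. The baseline error rate, intracluster correlation coefficient (ICC) and number of at-risk patients per practice are hypothetical placeholders, not figures from the protocol; the sketch only shows how the required number of practices per arm scales with the design effect.

```python
# Hedged sketch: cluster-RCT sample-size arithmetic for comparing two proportions.
# The baseline error rate, ICC and patients-per-practice below are HYPOTHETICAL
# placeholders, not values from the trial protocol.
import math
from scipy.stats import norm

def clusters_per_arm(p_control, p_intervention, m, icc, alpha=0.05, power=0.80):
    """Approximate number of clusters (practices) per arm for a two-proportion test."""
    z_a = norm.ppf(1 - alpha / 2)           # two-tailed alpha
    z_b = norm.ppf(power)
    effect = p_control - p_intervention
    n_individual = (z_a + z_b) ** 2 * (
        p_control * (1 - p_control) + p_intervention * (1 - p_intervention)
    ) / effect ** 2                          # per-arm sample size if individually randomised
    deff = 1 + (m - 1) * icc                 # design effect for cluster randomisation
    return math.ceil(n_individual * deff / m)

# Hypothetical inputs: 10% baseline error rate, 50% vs 11% relative reductions,
# 50 at-risk patients per practice, ICC of 0.05.
p0 = 0.10
print(clusters_per_arm(p0 * (1 - 0.11), p0 * (1 - 0.50), m=50, icc=0.05))
```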
Abstract:
Objective: To examine the properties of the Social Communication Questionnaire (SCQ) in a population cohort of children with autism spectrum disorders (ASDs) and in the general population. Method: SCQ data were collected from three samples: the Special Needs and Autism Project (SNAP) cohort of 9- to 10-year-old children with special educational needs with and without ASD, and two similar but separate age groups of children from the general population (n = 411 and n = 247). Diagnostic assessments were completed on a stratified subsample (n = 255) of the special educational needs group. A sample-weighting procedure enabled us to estimate characteristics of the SCQ in the total ASD population. The diagnostic status of cases in the general population samples was extracted from child health records. Results: The SCQ showed strong discrimination between ASD and non-ASD cases (sensitivity 0.88, specificity 0.72) and between autism and non-autism cases (sensitivity 0.90, specificity 0.86). Findings were not affected by child IQ or parental education. In the general population samples, between 4% and 5% of children scored above the ASD cutoff, including 1.5% who scored above the autism cutoff. Although many of these high-scoring children had an ASD diagnosis, almost all (approximately 90%) of them had a diagnosed neurodevelopmental disorder. Conclusions: This study confirms the utility of the SCQ as a first-level screen for ASD in at-risk samples of school-age children.
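For readers unfamiliar with the screening metrics quoted above, the short sketch below shows how sensitivity and specificity relate to the predictive value of a cutoff at a given population prevalence. Only the 0.88/0.72 figures come from the abstract; the prevalence and counts are hypothetical assumptions.

```python
# Hedged sketch: how screening metrics like those reported for the SCQ are derived.
# Counts and prevalence are HYPOTHETICAL; only the 0.88/0.72 sensitivity/specificity
# targets for ASD vs non-ASD come from the abstract.
def screen_metrics(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)   # proportion of true ASD cases above the cutoff
    specificity = tn / (tn + fp)   # proportion of non-ASD cases below the cutoff
    return sensitivity, specificity

def ppv(sensitivity, specificity, prevalence):
    """Positive predictive value of the cutoff at a given population prevalence."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

sens, spec = 0.88, 0.72
# PPV of the ASD cutoff at a hypothetical 1.5% population prevalence:
print(ppv(sens, spec, prevalence=0.015))
```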
Abstract:
Typically, algorithms for generating stereo disparity maps have been developed to minimise the energy equation of a single image. This paper proposes a method for implementing cross validation in a belief propagation optimisation. When tested using the Middlebury online stereo evaluation, the cross validation improves upon the results of standard belief propagation. Furthermore, it has been shown that regions of homogeneous colour within the images can be used for enforcing the so-called "Segment Constraint". Developing from this, Segment Support is introduced to boost belief between pixels of the same image region and improve propagation into textureless regions.
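The abstract does not spell out how the cross validation is wired into the belief propagation optimisation, but a common form of cross-checking stereo output is the left-right consistency test sketched below. The function name and tolerance are illustrative only, not the paper's method.

```python
# Hedged sketch of a left-right disparity consistency check, one common way of
# cross-validating stereo matches. The paper embeds its check inside belief
# propagation; this standalone version only illustrates the underlying idea.
import numpy as np

def cross_check(disp_left, disp_right, max_diff=1):
    """Mark pixels whose left and right disparities disagree as invalid (-1)."""
    h, w = disp_left.shape
    xs = np.arange(w)
    validated = disp_left.astype(float).copy()
    for y in range(h):
        # Position each left-image pixel projects to in the right image.
        x_right = xs - disp_left[y].astype(int)
        in_bounds = (x_right >= 0) & (x_right < w)
        diff = np.abs(disp_left[y, in_bounds] - disp_right[y, x_right[in_bounds]])
        row_valid = np.zeros(w, dtype=bool)
        row_valid[in_bounds] = diff <= max_diff
        validated[y, ~row_valid] = -1   # inconsistent matches are rejected
    return validated
```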
Abstract:
(Sub)picosecond transient absorption (TA) and time-resolved infrared (TRIR) spectra of the cluster [Os3(CO)10(AcPy-MV)]²⁺ (the dication AcPy-MV²⁺ = [2-pyridylacetimine-N-(2-(1'-methyl-4,4'-bipyridine-1,1'-diium-1-yl)ethyl)](PF6)2) (1²⁺) reveal that photoinduced electron transfer to the electron-accepting 4,4'-bipyridine-1,1'-diium (MV²⁺) moiety competes with the fast relaxation of the initially populated σπ* excited state of the cluster to the ground state and/or cleavage of an Os–Os bond. The TA spectra of cluster 1²⁺ in acetone, obtained by irradiation into its lowest-energy absorption band, show the characteristic absorptions of the one-electron-reduced MV•⁺ unit at 400 and 615 nm, in accordance with population of a charge-separated (CS) state in which a cluster-core electron has been transferred to the lowest π* orbital of the remote MV²⁺ unit. This assignment is confirmed by picosecond TRIR spectra, which show a large shift of the highest-frequency ν(CO) band of 1²⁺ by ca. +40 cm⁻¹, reflecting the photooxidation of the cluster core. The CS state is populated via fast (4.2 × 10¹¹ s⁻¹) and efficient (88%) oxidative quenching of the optically populated σπ* excited state and decays biexponentially with lifetimes of 38 and 166 ps (1:2:1 ratio) with complete regeneration of the parent cluster. About 12% of the cluster molecules in the σπ* excited state form long-lived open-core biradicals. In strongly coordinating acetonitrile, however, the cluster-core-to-MV²⁺ electron transfer in cluster 1²⁺ results in the irreversible formation of secondary photoproducts with a photooxidized cluster core. The photochemical behavior of the [Os3(CO)10(α-diimine-MV)]²⁺ (donor-acceptor) dyad can be controlled by an externally applied electronic bias. Electrochemical one-electron reduction of the MV²⁺ moiety prior to the irradiation reduces its electron-accepting character to such an extent that the photoinduced electron transfer to MV•⁺ is no longer feasible. Instead, the irradiation of the reduced cluster 1•⁺ results in the reversible formation of an open-core zwitterion, the ultimate photoproduct also observed upon irradiation of related nonsubstituted clusters [Os3(CO)10(α-diimine)] in strongly coordinating solvents such as acetonitrile.
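As a rough numerical companion to the kinetics quoted above, the sketch below computes the total decay rate implied by the 88% quenching efficiency and evaluates a biexponential model of the CS-state decay using the 38 and 166 ps lifetimes. The relative amplitudes of the two components are hypothetical.

```python
# Hedged sketch of the kinetics quoted in the abstract: the decay rate implied by the
# electron-transfer rate and quenching efficiency, and a biexponential model for the
# charge-separated state. The relative amplitudes are HYPOTHETICAL.
import numpy as np

k_et = 4.2e11                      # s^-1, electron-transfer rate (from abstract)
efficiency = 0.88                  # fraction of sigma-pi* states quenched (from abstract)
k_total = k_et / efficiency        # implied total decay rate of the sigma-pi* state
print(f"competing decay channels: {k_total - k_et:.2e} s^-1")

def cs_decay(t_ps, a1=0.5, tau1=38.0, a2=0.5, tau2=166.0):
    """Biexponential decay of the CS-state population (lifetimes from the abstract)."""
    return a1 * np.exp(-t_ps / tau1) + a2 * np.exp(-t_ps / tau2)

print(cs_decay(100.0))             # surviving fraction of the CS state after 100 ps
```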
Abstract:
Recent research in multi-agent systems incorporates fault-tolerance concepts, but does not explore the extension and implementation of such ideas for large-scale parallel computing systems. The work reported in this paper investigates a swarm array computing approach, namely 'Intelligent Agents'. A task to be executed on a parallel computing system is decomposed into sub-tasks and mapped onto agents that traverse an abstracted hardware layer. The agents intercommunicate across processors to share information in the event of a predicted core/processor failure and to complete the task successfully. The feasibility of the approach is validated by simulations on an FPGA using a multi-agent simulator, and by implementation of a parallel reduction algorithm on a computer cluster using the Message Passing Interface.
Abstract:
Recent research in multi-agent systems incorporates fault-tolerance concepts. However, the research does not explore the extension and implementation of such ideas for large-scale parallel computing systems. The work reported in this paper investigates a swarm array computing approach, namely ‘Intelligent Agents’. In the approach considered, a task to be executed on a parallel computing system is decomposed into sub-tasks and mapped onto agents that traverse an abstracted hardware layer. The agents intercommunicate across processors to share information in the event of a predicted core/processor failure and to complete the task successfully. The agents hence contribute towards fault tolerance and towards building reliable systems. The feasibility of the approach is validated by simulations on an FPGA using a multi-agent simulator and by implementation of a parallel reduction algorithm on a computer cluster using the Message Passing Interface.
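The parallel reduction mentioned in both abstracts can be illustrated with a minimal MPI example. The sketch below uses mpi4py and contains none of the agent-based fault-tolerance machinery the papers describe; it only shows the reduction pattern that runs on the cluster.

```python
# Hedged sketch of a parallel reduction over MPI, the kind of kernel the paper reports
# implementing on a cluster. This minimal mpi4py version omits the agent-based
# fault-tolerance machinery described in the abstract.
# Run with, e.g.: mpiexec -n 4 python reduce_demo.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Each rank works on its own slice of the data (its "sub-task").
local = np.arange(rank * 1000, (rank + 1) * 1000, dtype=np.float64)
local_sum = local.sum()

# Combine the partial results; only the root rank receives the total.
total = comm.reduce(local_sum, op=MPI.SUM, root=0)
if rank == 0:
    print(f"global sum over {size} ranks: {total}")
```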
Abstract:
A first step in interpreting the wide variation in trace gas concentrations measured over time at a given site is to classify the data according to the prevailing weather conditions. In order to classify measurements made during two intensive field campaigns at Mace Head, on the west coast of Ireland, an objective method of assigning data to different weather types has been developed. Air-mass back trajectories calculated using winds from ECMWF analyses, arriving at the site in 1995–1997, were allocated to clusters based on a statistical analysis of the latitude, longitude and pressure of the trajectory at 12 h intervals over 5 days. The robustness of the analysis was assessed by using an ensemble of back trajectories calculated for four points around Mace Head. Separate analyses were made for each of the 3 years, and for four 3-month periods. The use of these clusters in classifying ground-based ozone measurements at Mace Head is described, including the need to exclude data which have been influenced by local perturbations to the regional flow pattern, for example, by sea breezes. Even with a limited data set, based on 2 months of intensive field measurements in 1996 and 1997, there are statistically significant differences in ozone concentrations in air from the different clusters. The limitations of this type of analysis for classification and interpretation of ground-based chemistry measurements are discussed.
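The abstract does not name the specific clustering method, so the sketch below uses k-means purely as an illustration of grouping back trajectories by their latitude, longitude and pressure at 12-hour intervals over 5 days. The data, number of clusters and scaling choice are hypothetical.

```python
# Hedged sketch: clustering back trajectories by latitude, longitude and pressure at
# 12-hour intervals over 5 days. The abstract does not state the exact statistical
# method, so k-means is used here purely as an illustration on synthetic data.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# trajectories: (n_trajectories, 10, 3) array of (lat, lon, pressure) at each of the
# ten 12-hour steps along a 5-day back trajectory (synthetic placeholder here).
rng = np.random.default_rng(0)
trajectories = rng.normal(size=(500, 10, 3))

features = trajectories.reshape(len(trajectories), -1)   # flatten to 30-D vectors
features = StandardScaler().fit_transform(features)      # put variables on one scale
labels = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(features)
# `labels` assigns each arrival time at the site to an air-mass cluster, which can
# then be used to stratify the ozone measurements.
```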
Abstract:
The overall operation and internal complexity of production machinery can be depicted in terms of clusters of multidimensional points that describe the process states, the value in each dimension representing a measured variable from the machinery. The paper describes a new cluster analysis technique for use with manufacturing processes, to illustrate how machine behaviour can be categorised and how regions of good and poor machine behaviour can be identified. The cluster algorithm presented is the novel mean-tracking algorithm, capable of locating N-dimensional clusters in a large data space in which a considerable amount of noise is present. Implementation of the algorithm on a real-world high-speed machinery application is described, with clusters being formed from machinery data to indicate machinery error regions and error-free regions. This analysis is seen to provide a promising step forward in the field of multivariable control of manufacturing systems.
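The mean-tracking algorithm itself is defined in the paper; as a loose analogue only, the sketch below shows a mean-shift-style iteration that moves a centre towards the local mean of nearby points, which conveys the flavour of locating cluster centres in a noisy N-dimensional space.

```python
# Hedged sketch: a mean-shift-style iteration that tracks a local mean towards a dense
# region of points. Offered only as a loose analogue of the paper's mean-tracking idea;
# the actual algorithm is defined in the paper itself.
import numpy as np

def track_mean(points, start, radius, max_iter=100, tol=1e-6):
    """Move a centre to the mean of nearby points until it stops shifting."""
    centre = np.asarray(start, dtype=float)
    for _ in range(max_iter):
        dist = np.linalg.norm(points - centre, axis=1)
        nearby = points[dist <= radius]
        if len(nearby) == 0:
            break
        new_centre = nearby.mean(axis=0)
        if np.linalg.norm(new_centre - centre) < tol:
            break
        centre = new_centre
    return centre   # an estimate of one cluster centre in the noisy data space
```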
Abstract:
This study details validation of two separate multiplex STR systems for use in paternity investigations. These are the Second Generation Multiplex (SGM) developed by the UK Forensic Science Service and the PowerPlex 1 multiplex commercially available from Promega Inc. (Madison, WI, USA). These multiplexes contain 12 different STR systems (two are duplicated in the two systems). Population databases from Caucasian, Asian and Afro-Caribbean populations have been compiled for all loci. In all but two of the 36 STR/ethnic group combinations, no evidence was obtained to indicate inconsistency with Hardy-Weinberg (HW) proportions. Empirical and theoretical approaches have been taken to validate these systems for paternity testing. Samples from 121 cases of disputed paternity were analysed using established Single Locus Probe (SLP) tests currently in use, and also using the two multiplex STR systems. Results of all three test systems were compared and no non-conformities in the conclusions were observed, although four examples of apparent germ line mutations in the STR systems were identified. The data was analysed to give information on expected paternity indices and exclusion rates for these STR systems. The 12 systems combined comprise a highly discriminating test suitable for paternity testing. 99.96% of non-fathers are excluded from paternity on two or more STR systems. Where no exclusion is found, Paternity Index (PI) values of > 10,000 are expected in > 96% of cases.
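A hedged sketch of the arithmetic behind the quoted paternity index values follows: a likelihood ratio is computed per STR locus and multiplied across the 12 systems. The allele frequencies and genotypes below are invented for illustration; real casework uses the population databases described in the study.

```python
# Hedged sketch of a combined paternity index (PI): per-locus likelihood ratios for a
# simple mother-child-alleged-father trio multiplied across the 12 STR systems.
# Allele frequencies and genotypes below are HYPOTHETICAL.
import math

def locus_pi(obligate_allele_freq, father_is_homozygous):
    """PI = X/Y, where X is the chance the alleged father passes the obligate paternal
    allele (1 if homozygous, 0.5 if heterozygous) and Y is the chance a random man
    does (the allele's population frequency)."""
    x = 1.0 if father_is_homozygous else 0.5
    return x / obligate_allele_freq

# Twelve loci with made-up obligate-allele frequencies and genotypes:
loci = [(0.12, False), (0.08, True), (0.20, False), (0.15, False),
        (0.10, False), (0.05, True), (0.18, False), (0.22, False),
        (0.09, False), (0.11, True), (0.25, False), (0.07, False)]
combined_pi = math.prod(locus_pi(freq, homo) for freq, homo in loci)
print(f"combined PI ~ {combined_pi:.3g}")  # values > 10,000 strongly favour paternity
```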
Abstract:
Clusters of computers can be used together to provide a powerful computing resource. Large Monte Carlo simulations, such as those used to model particle growth, are computationally intensive and take considerable time to execute on conventional workstations. By spreading the work of the simulation across a cluster of computers, the elapsed execution time can be greatly reduced. A user can thus obtain, in effect, the performance of a supercomputer by using the spare cycles of other workstations.
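The speedup comes from the fact that Monte Carlo trials are independent and can be partitioned across machines. The toy sketch below splits trials across worker processes on a single machine; the paper distributes the same kind of work across a cluster of workstations, but the partition-and-combine pattern is the same.

```python
# Hedged sketch of the idea in the abstract: independent Monte Carlo trials are
# partitioned across workers and the partial results combined at the end. This toy
# version uses processes on one machine rather than a cluster of workstations.
import random
from multiprocessing import Pool

def run_trials(args):
    n_trials, seed = args
    rng = random.Random(seed)
    # Count random points falling inside the unit quarter-circle (pi estimation).
    hits = sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0 for _ in range(n_trials))
    return hits   # partial result from this worker's share of the simulation

if __name__ == "__main__":
    total_trials, workers = 1_000_000, 8
    chunks = [(total_trials // workers, seed) for seed in range(workers)]
    with Pool(workers) as pool:
        hits = sum(pool.map(run_trials, chunks))
    print("pi estimate:", 4 * hits / total_trials)
```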