852 results for Critical chain method
Abstract:
The practice of mixed-methods research has increased considerably over the last 10 years. While these studies have been criticized for violating quantitative and qualitative paradigmatic assumptions, the methodological quality of mixed-method studies has not been addressed. The purpose of this paper is to identify criteria to critically appraise the quality of mixed-method studies in the health literature. Criteria for critically appraising quantitative and qualitative studies were generated from a review of the literature. These criteria were organized according to a cross-paradigm framework. We recommend that these criteria be applied to a sample of mixed-method studies which are judged to be exemplary. Further efforts, in consultation with critical appraisal experts and experienced qualitative, quantitative, and mixed-method researchers, are required to revise and prioritize the criteria according to importance.
Abstract:
This paper proposes an optimisation-based method to calculate the critical slip (speed) of dynamic stability and the critical clearing time (CCT) of a self-excited induction generator (SEIG). A simple case study using the Matlab/Simulink environment is included to exemplify the optimisation method. Relationships between terminal voltage, critical slip and transmission-line reactance, and between CCT and the inertia constant, have been determined; based on these, an analysis of the impact on relay settings has been conducted for a further simulation case.
Abstract:
Background: There is growing interest in the potential utility of real-time polymerase chain reaction (PCR) in diagnosing bloodstream infection by detecting pathogen deoxyribonucleic acid (DNA) in blood samples within a few hours. SeptiFast (Roche Diagnostics GmbH, Mannheim, Germany) is a multipathogen probe-based system targeting ribosomal DNA sequences of bacteria and fungi. It detects and identifies the commonest pathogens causing bloodstream infection. As background to this study, we report a systematic review of Phase III diagnostic accuracy studies of SeptiFast, which reveals uncertainty about its likely clinical utility based on widespread evidence of deficiencies in study design and reporting, with a high risk of bias.
Objective: Determine the accuracy of SeptiFast real-time PCR for the detection of health-care-associated bloodstream infection, against standard microbiological culture.
Design: Prospective multicentre Phase III clinical diagnostic accuracy study using the standards for the reporting of diagnostic accuracy studies criteria.
Setting: Critical care departments within NHS hospitals in the north-west of England.
Participants: Adult patients requiring blood culture (BC) when developing new signs of systemic inflammation.
Main outcome measures: SeptiFast real-time PCR results at species/genus level compared with microbiological culture in association with independent adjudication of infection. Metrics of diagnostic accuracy were derived including sensitivity, specificity, likelihood ratios and predictive values, with their 95% confidence intervals (CIs). Latent class analysis was used to explore the diagnostic performance of culture as a reference standard.
Results: Of 1006 new patient episodes of systemic inflammation in 853 patients, 922 (92%) met the inclusion criteria and provided sufficient information for analysis. Index test assay failure occurred on 69 (7%) occasions. Adult patients had been exposed to a median of 8 days (interquartile range 4–16 days) of hospital care, had high levels of organ support activities and recent antibiotic exposure. SeptiFast real-time PCR, when compared with culture-proven bloodstream infection at species/genus level, had better specificity (85.8%, 95% CI 83.3% to 88.1%) than sensitivity (50%, 95% CI 39.1% to 60.8%). When compared with pooled diagnostic metrics derived from our systematic review, our clinical study revealed lower test accuracy of SeptiFast real-time PCR, mainly as a result of low diagnostic sensitivity. There was a low prevalence of BC-proven pathogens in these patients (9.2%, 95% CI 7.4% to 11.2%) such that the post-test probabilities of both a positive (26.3%, 95% CI 19.8% to 33.7%) and a negative SeptiFast test (5.6%, 95% CI 4.1% to 7.4%) indicate the potential limitations of this technology in the diagnosis of bloodstream infection. However, latent class analysis indicates that BC has a low sensitivity, questioning its relevance as a reference test in this setting. Using this analysis approach, the sensitivity of the SeptiFast test was low but also appeared significantly better than BC. Blood samples identified as positive by either culture or SeptiFast real-time PCR were associated with a high probability (> 95%) of infection, indicating higher diagnostic rule-in utility than was apparent using conventional analyses of diagnostic accuracy.
Conclusion: SeptiFast real-time PCR on blood samples may have rapid rule-in utility for the diagnosis of health-care-associated bloodstream infection, but the lack of sensitivity is a significant limiting factor. Innovations aimed at improved diagnostic sensitivity of real-time PCR in this setting are urgently required. Future work recommendations include technology developments to improve the efficiency of pathogen DNA extraction and the capacity to detect a much broader range of pathogens and drug resistance genes, and the application of new statistical approaches able to more reliably assess test performance in situations where the reference standard (e.g. blood culture in the setting of high antimicrobial use) is prone to error.
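The post-test probabilities quoted in this abstract follow directly from Bayes' theorem applied to the reported sensitivity, specificity and prevalence. A minimal sketch (function name is illustrative, not from the study):

```python
def post_test_probs(sens, spec, prev):
    """Probability of infection after a positive / negative test (Bayes' theorem)."""
    p_pos = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
    p_neg = (1 - sens) * prev / ((1 - sens) * prev + spec * (1 - prev))
    return p_pos, p_neg

# Point estimates reported in the abstract: sensitivity 50%,
# specificity 85.8%, prevalence of BC-proven pathogens 9.2%
pos, neg = post_test_probs(sens=0.50, spec=0.858, prev=0.092)
print(f"P(infection | positive test) = {pos:.1%}")  # ~26.3%, as reported
print(f"P(infection | negative test) = {neg:.1%}")  # ~5.6%, as reported
```

Note how the low prevalence drags the positive post-test probability down despite reasonable specificity, which is the limitation the authors highlight.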
Abstract:
Currently there is no reliable objective method to quantify the setting properties of acrylic bone cements within an operating theatre environment. Ultrasonic technology can be used to determine the acoustic properties of the polymerising bone cement, which are linked to material properties and provide indications of the physical and chemical changes occurring within the cement. The focus of this study was the critical evaluation of the pulse-echo ultrasonic test method in determining the setting and mechanical properties of three different acrylic bone cements when prepared under atmospheric and vacuum mixing conditions. Results indicated that the ultrasonic pulse-echo technique provided a highly reproducible and accurate method of monitoring the polymerisation reaction and indicating the principal setting parameters when compared to the ISO 5833 standard, irrespective of the acrylic bone cement or mixing method used. However, applying the same test method to predict the final mechanical properties of acrylic bone cement did not prove a wholly accurate approach. Inhomogeneities within the cement microstructure and specimen geometry were found to have a significant influence on mechanical property predictions. Consideration of all the results suggests that the non-invasive and non-destructive pulse-echo ultrasonic test method is an effective and reliable method for following the full polymerisation reaction of acrylic bone cement in real-time and then determining the setting properties within a surgical theatre environment. However, the application of similar technology for predicting the final mechanical properties of acrylic bone cement on a consistent basis may prove difficult.
Abstract:
This is a Self-study about my role as a teacher, driven by the question: "How do I improve my practice?" (Whitehead, 1989). In this study, I explored the discomfort that I had with the way that I had been teaching. Specifically, I worked to uncover the reasons behind my obsessive (mis)management of my students. I wrote of how I came to give my Self permission for this critique: how I came to know that all knowledge is a construction, and that my practice, too, is a construction. I grounded this journey within my experiences. I constructed these experiences in narrative form in order to reach a greater understanding of how I came to be the teacher I initially was. I explored metaphors that impacted my practice, re-constructed them, and saw more clearly the assumptions and influences that have guided my teaching. I centred my inquiry into my teaching within an Action Reflection methodology, borrowing Jack Whitehead's (1989) term to describe my version of Action Research. I relied upon the embedded cyclical pattern of Action Reflection to understand my teaching Self: beginning from a critical moment, reflecting upon it, and then taking appropriate action, and continuing in this way, working to improve my practice. To understand these critical moments, I developed a personal definition of critical literacy. I then turned this definition inward. In treating my practice as a textual production, I applied critical literacy as a framework in coming to know and understand the construction that is my teaching. I grounded my thesis journey within my Self, positioning my study within my experiences of being a grade 1 teacher struggling to teach critical literacy. I then repositioned my journey to that of a grade 1 teacher struggling to use critical literacy to improve my practice. This journey, then, is about the transition from critical-literacy-as-subject to critical-literacy-as-instructional-method in improving my practice.
I journeyed inwards, using a critical moment to build new understandings, leading me to the next critical moment, and continued in this cyclical way. I worked in this meandering yet deliberate way to reach a new place in my teaching: one that is more inclusive of all the voices in my room. I concluded my journey with a beginning: a beginning of re-visioning my practice. In telling the stories of my journey, of my teaching, of my experiences, I changed into the teacher that I am more comfortable with. "I've come to the frightening conclusion that I am the decisive element in the classroom. It's my personal approach that creates the climate. It's my daily mood that makes the weather. As a teacher, I possess a tremendous power to make a person's life miserable or joyous. I can be a tool of torture or an instrument of inspiration. I can humiliate or humour, hurt or heal. In all situations, it is my response that decides whether a crisis will be escalated or de-escalated and a person humanized or de-humanized." (Ginott, as cited in Buscaglia, 2002, p. 22)
Abstract:
As stated in Aitchison (1986), a proper study of relative variation in a compositional data set should be based on logratios, and dealing with logratios excludes dealing with zeros. Nevertheless, it is clear that zero observations might be present in real data sets, either because the corresponding part is completely absent –essential zeros– or because it is below detection limit –rounded zeros. Because the second kind of zeros is usually understood as “a trace too small to measure”, it seems reasonable to replace them by a suitable small value, and this has been the traditional approach. As stated, e.g. by Tauber (1999) and by Martín-Fernández, Barceló-Vidal, and Pawlowsky-Glahn (2000), the principal problem in compositional data analysis is related to rounded zeros. One should be careful to use a replacement strategy that does not seriously distort the general structure of the data. In particular, the covariance structure of the involved parts –and thus the metric properties– should be preserved, as otherwise further analysis on subpopulations could be misleading. Following this point of view, a non-parametric imputation method is introduced in Martín-Fernández, Barceló-Vidal, and Pawlowsky-Glahn (2000). This method is analyzed in depth by Martín-Fernández, Barceló-Vidal, and Pawlowsky-Glahn (2003) where it is shown that the theoretical drawbacks of the additive zero replacement method proposed in Aitchison (1986) can be overcome using a new multiplicative approach on the non-zero parts of a composition. The new approach has reasonable properties from a compositional point of view. In particular, it is “natural” in the sense that it recovers the “true” composition if replacement values are identical to the missing values, and it is coherent with the basic operations on the simplex. This coherence implies that the covariance structure of subcompositions with no zeros is preserved. 
As a generalization of the multiplicative replacement, the same paper also introduces a substitution method for missing values in compositional data sets.
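The multiplicative replacement of rounded zeros described above can be sketched as follows. This is a minimal illustration, assuming a composition closed to 1 and a single common imputed value `delta` for every zero (the method allows a different value per part); the key property shown is that the non-zero parts are all scaled by the same factor, so their ratios (and hence the subcompositional covariance structure) are preserved:

```python
import numpy as np

def multiplicative_replacement(x, delta):
    """Multiplicative rounded-zero replacement for a composition x summing to 1.

    Zeros are replaced by delta; non-zero parts are rescaled by a common
    factor so that the result still sums to 1.
    """
    x = np.asarray(x, dtype=float)
    zeros = x == 0
    return np.where(zeros, delta, x * (1 - delta * zeros.sum()))

x = np.array([0.6, 0.3, 0.1, 0.0])
r = multiplicative_replacement(x, delta=0.005)
print(r)                          # [0.597, 0.2985, 0.0995, 0.005]
print(r.sum())                    # 1.0 (closure preserved)
print(r[0] / r[1], x[0] / x[1])   # 2.0 2.0 -- ratios of non-zero parts preserved
```

By contrast, the additive replacement criticized in the paper shifts all parts, which distorts these ratios.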
Abstract:
Many well-established statistical methods in genetics were developed in a climate of severe constraints on computational power. Recent advances in simulation methodology now bring modern, flexible statistical methods within the reach of scientists having access to a desktop workstation. We illustrate the potential advantages now available by considering the problem of assessing departures from Hardy-Weinberg (HW) equilibrium. Several hypothesis tests of HW have been established, as well as a variety of point estimation methods for the parameter which measures departures from HW under the inbreeding model. We propose a computational, Bayesian method for assessing departures from HW, which has a number of important advantages over existing approaches. The method incorporates the effects of uncertainty about the nuisance parameters--the allele frequencies--as well as the boundary constraints on f (which are functions of the nuisance parameters). Results are naturally presented visually, exploiting the graphics capabilities of modern computer environments to allow straightforward interpretation. Perhaps most importantly, the method is founded on a flexible, likelihood-based modelling framework, which can incorporate the inbreeding model if appropriate, but also allows the assumptions of the model to be investigated and, if necessary, relaxed. Under appropriate conditions, information can be shared across loci and, possibly, across populations, leading to more precise estimation. The advantages of the method are illustrated by application both to simulated data and to data analysed by alternative methods in the recent literature.
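The kind of computation described can be illustrated with a deliberately simple grid approximation (the paper's own machinery is more sophisticated). The genotype counts below are hypothetical, the prior is flat, and the boundary constraint on the inbreeding coefficient f is enforced by requiring all genotype probabilities under the inbreeding model to be non-negative:

```python
import numpy as np

# Hypothetical genotype counts for one biallelic locus: (n_AA, n_Aa, n_aa)
n = np.array([30, 40, 30])

# Grid over the nuisance parameter p (allele frequency) and f
p = np.linspace(0.01, 0.99, 99)
f = np.linspace(-0.99, 0.99, 199)
P, F = np.meshgrid(p, f)

# Genotype probabilities under the inbreeding model
pq = P * (1 - P)
g = np.stack([P**2 + F * pq,          # P(AA)
              2 * pq * (1 - F),       # P(Aa)
              (1 - P)**2 + F * pq])   # P(aa)

# Boundary constraint on f: every genotype probability must be >= 0
valid = (g >= 0).all(axis=0)

# Multinomial log-likelihood on the valid region; flat prior
with np.errstate(divide="ignore", invalid="ignore"):
    loglik = (n[:, None, None] * np.log(g)).sum(axis=0)
post = np.where(valid, np.exp(loglik - loglik[valid].max()), 0.0)
post /= post.sum()

# Marginal posterior for f: sum out the nuisance parameter p
f_marg = post.sum(axis=1)
print("posterior mean of f:", (f * f_marg).sum())
```

With these counts the observed heterozygosity (0.40) falls below the HW expectation (0.50), so the posterior mass for f concentrates around 0.2, and the full grid `post` could be plotted directly, in the visual spirit the abstract describes.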
The capability-affordance model: a method for analysis and modelling of capabilities and affordances
Abstract:
Existing capability models lack qualitative and quantitative means to compare business capabilities. This paper extends previous work and uses affordance theories to consistently model and analyse capabilities. We use the concept of objective and subjective affordances to model capability as a tuple of a set of resource affordance system mechanisms and action paths, dependent on one or more critical affordance factors. We identify an affordance chain of subjective affordances by which affordances work together to enable an action, and an affordance path that links action affordances to create a capability system. We define the mechanism and path underlying capability. We show how affordance modelling notation, AMN, can represent affordances comprising a capability. We propose a method to quantitatively and qualitatively compare capabilities using efficiency, effectiveness and quality metrics. The method is demonstrated by a medical example comparing the capability of syringe and needleless anaesthetic systems.
Abstract:
We consider the dynamics of a system of interacting spins described by the Ginzburg-Landau Hamiltonian. The method used is Zwanzig's version of the projection-operator method, in contrast to previous derivations in which we used Mori's version of this method. It is proved that both methods produce the same answer for the Green's function. We also make contact between the projection-operator method and critical dynamics.
Abstract:
Technical evaluation of analytical data is of extreme relevance considering it can be used for comparisons with environmental quality standards and decision-making as related to the management of disposal of dredged sediments and the evaluation of salt and brackish water quality in accordance with CONAMA 357/05 Resolution. It is, therefore, essential that the project manager discusses the environmental agency's technical requirements with the laboratory contracted for the follow-up of the analysis underway and even with a view to possible re-analysis when anomalous data are identified. The main technical requirements are: (1) method quantitation limits (QLs) should fall below environmental standards; (2) analyses should be carried out in laboratories whose analytical scope is accredited by the National Institute of Metrology (INMETRO) or qualified or accepted by a licensing agency; (3) chain of custody should be provided in order to ensure sample traceability; (4) control charts should be provided to prove method performance; (5) certified reference material analysis or, if that is not available, matrix spike analysis, should be undertaken and (6) chromatograms should be included in the analytical report. Within this context and with a view to helping environmental managers in analytical report evaluation, this work has as objectives the discussion of the limitations of the application of SW 846 US EPA methods to marine samples, the consequences of having data based on method detection limits (MDL) and not sample quantitation limits (SQL), and the presentation of possible modifications of the principal method applied by laboratories in order to comply with environmental quality standards.
Abstract:
The aim of this work was to perform a systematic study of the parameters that can influence the composition, morphology, and catalytic activity of PtSn/C nanoparticles and compare two different methods of nanocatalyst preparation, namely microwave-assisted heating (MW) and thermal decomposition of polymeric precursors (DPP). An investigation of the effects of the reducing and stabilizing agents on the catalytic activity and morphology of Pt75Sn25/C catalysts prepared by microwave-assisted heating was undertaken for optimization purposes. The effect of short-chain alcohols such as ethanol, ethylene glycol, and propylene glycol as reducing agents was evaluated, and the use of sodium acetate and citric acid as stabilizing agents for the MW procedure was examined. Catalysts obtained from propylene glycol displayed higher catalytic activity compared with catalysts prepared in ethylene glycol. Introduction of sodium acetate enhanced the catalytic activity, but this beneficial effect persisted only until a critical acetate concentration was reached. Optimization of the MW synthesis allowed for the preparation of highly dispersed catalysts with average sizes lying between 2.0 and 5.0 nm. Comparison of the best catalyst prepared by MW with a catalyst of similar composition prepared by the polymeric precursors method showed that the catalytic activity of the material can be improved when a proper condition for catalyst preparation is achieved. (C) 2012 Elsevier B.V. All rights reserved.
Abstract:
A total internal reflection-based differential refractometer, capable of measuring the real and imaginary parts of the complex refractive index in real time, is presented. The device takes advantage of the phase difference acquired by s- and p-polarized light to generate an easily detectable minimum in the reflected profile. The method allows sensitive measurement of transparent and turbid liquid samples. (C) 2012 Optical Society of America
Abstract:
Reproducing Fourier's law of heat conduction from a microscopic stochastic model is a long standing challenge in statistical physics. As was shown by Rieder, Lebowitz and Lieb many years ago, a chain of harmonically coupled oscillators connected to two heat baths at different temperatures does not reproduce the diffusive behaviour of Fourier's law, but instead a ballistic one with an infinite thermal conductivity. Since then, there has been a substantial effort from the scientific community in identifying the key mechanism necessary to reproduce such diffusivity, which usually revolved around anharmonicity and the effect of impurities. Recently, it was shown by Dhar, Venkateshan and Lebowitz that Fourier's law can be recovered by introducing an energy conserving noise, whose role is to simulate the elastic collisions between the atoms and other microscopic degrees of freedom, which one would expect to be present in a real solid. For a one-dimensional chain this is accomplished numerically by randomly flipping - under the framework of a Poisson process with a variable "rate of collisions" - the sign of the velocity of an oscillator. In this poster we present Langevin simulations of a one-dimensional chain of oscillators coupled to two heat baths at different temperatures. We consider both harmonic and anharmonic (quartic) interactions, which are studied with and without the energy conserving noise. With these results we are able to map in detail how the heat conductivity k is influenced by both anharmonicity and the energy conserving noise. We also present a detailed analysis of the behaviour of k as a function of the size of the system and the rate of collisions, which includes a finite-size scaling method that enables us to extract the relevant critical exponents. Finally, we show that for harmonic chains, k is independent of temperature, both with and without the noise.
Conversely, for anharmonic chains we find that k increases roughly linearly with the temperature of a given reservoir, while keeping the temperature difference fixed.
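The setup described, a harmonic chain with Langevin baths at the two ends plus energy-conserving velocity flips, can be sketched in a few lines. This is a simplified Euler-Maruyama illustration with arbitrary parameters (masses and Boltzmann's constant set to 1), not the authors' actual integrator; the flip step conserves kinetic energy because it only changes the sign of a velocity:

```python
import numpy as np

rng = np.random.default_rng(0)

N, k = 16, 1.0          # chain length, harmonic coupling constant
TL, TR = 1.5, 0.5       # left and right bath temperatures
gamma, lam = 1.0, 1.0   # bath friction, Poisson "rate of collisions"
dt, steps = 0.01, 50_000

x = np.zeros(N)
v = np.zeros(N)
T_sum = np.zeros(N)     # accumulator for the kinetic temperature <v_i^2>

for _ in range(steps):
    # Harmonic forces with fixed walls at both ends
    xp = np.pad(x, 1)
    F = k * (xp[:-2] - 2 * x + xp[2:])
    # Langevin baths act only on the first and last oscillator
    F[0]  += -gamma * v[0]  + np.sqrt(2 * gamma * TL / dt) * rng.standard_normal()
    F[-1] += -gamma * v[-1] + np.sqrt(2 * gamma * TR / dt) * rng.standard_normal()
    v += F * dt
    x += v * dt
    # Energy-conserving noise: each velocity flips sign with rate lam
    flips = rng.random(N) < lam * dt
    v[flips] *= -1.0
    T_sum += v**2

T_profile = T_sum / steps
print(T_profile[0], T_profile[-1])  # hot end vs cold end
```

With the flips switched off (`lam = 0`) the harmonic chain develops the flat bulk temperature profile characteristic of ballistic transport; with them on, a finite gradient forms, consistent with the diffusive behaviour discussed above.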
Abstract:
A path integral simulation algorithm which includes a higher-order Trotter approximation (HOA) is analyzed and compared to an approach which includes the correct quantum mechanical pair interaction (effective propagator (EPr)). It is found that the HOA algorithm converges to the quantum limit with increasing Trotter number P as P^{-4}, while the EPr algorithm converges as P^{-2}. The convergence rate of the HOA algorithm is analyzed for various physical systems such as a harmonic chain, a particle in a double-well potential, gaseous argon, gaseous helium and crystalline argon. A new expression for the estimator for the pair correlation function in the HOA algorithm is derived. A new path integral algorithm, the hybrid algorithm, is developed. It combines an exact treatment of the quadratic part of the Hamiltonian and the higher-order Trotter expansion techniques. For the discrete quantum sine-Gordon chain (DQSGC), it is shown that this algorithm works more efficiently than all other improved path integral algorithms discussed in this work. The new simulation techniques developed in this work allow the analysis of the DQSGC and disordered model systems in the highly quantum mechanical regime using path integral molecular dynamics (PIMD) and adiabatic centroid path integral molecular dynamics (ACPIMD). The ground state phonon dispersion relation is calculated for the DQSGC by the ACPIMD method. It is found that the excitation gap at zero wave vector is reduced by quantum fluctuations. Two different phases exist: one phase with a finite excitation gap at zero wave vector, and a gapless phase where the excitation gap vanishes. The reaction of the DQSGC to an external driving force is analyzed at T=0. In the gapless phase the system creeps if a small force is applied, and in the phase with a gap the system is pinned. At a critical force, the systems undergo a depinning transition in both phases and flow is induced.
The analysis of the DQSGC is extended to models with disordered substrate potentials. Three different cases are analyzed: Disordered substrate potentials with roughness exponent H=0, H=1/2,and a model with disordered bond length. For all models, the ground state phonon dispersion relation is calculated.