940 results for Critical chain method
Abstract:
We consider the dynamics of a system of interacting spins described by the Ginzburg-Landau Hamiltonian. The method used is Zwanzig's version of the projection-operator method, in contrast to previous derivations in which we used Mori's version of this method. It is proved that both methods produce the same answer for the Green's function. We also make contact between the projection-operator method and critical dynamics.
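For reference (the abstract does not write it out), the Ginzburg-Landau Hamiltonian for a scalar order-parameter field phi is conventionally taken, up to the normalization of the coefficients r and u, as

    H[\phi] = \int d^d x \left[ \tfrac{1}{2}\, r\, \phi^2(x) + \tfrac{1}{2}\, \big(\nabla \phi(x)\big)^2 + u\, \phi^4(x) \right],

with the Green's function in question being a two-point correlation function of phi computed with respect to this H.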
Abstract:
Technical evaluation of analytical data is of extreme relevance, considering it can be used for comparisons with environmental quality standards and for decision-making related to the management of dredged sediment disposal and the evaluation of salt and brackish water quality in accordance with CONAMA Resolution 357/05. It is therefore essential that the project manager discuss the environmental agency's technical requirements with the contracted laboratory, both to follow up on the analyses underway and with a view to possible re-analysis when anomalous data are identified. The main technical requirements are: (1) method quantitation limits (QLs) should fall below environmental standards; (2) analyses should be carried out in laboratories whose analytical scope is accredited by the National Institute of Metrology (INMETRO) or qualified or accepted by a licensing agency; (3) chain of custody should be provided in order to ensure sample traceability; (4) control charts should be provided to prove method performance; (5) certified reference material analysis or, if that is not available, matrix spike analysis should be undertaken; and (6) chromatograms should be included in the analytical report. Within this context, and with a view to helping environmental managers evaluate analytical reports, this work aims to discuss the limitations of applying SW 846 US EPA methods to marine samples and the consequences of basing data on method detection limits (MDL) rather than sample quantitation limits (SQL), and to present possible modifications of the principal method applied by laboratories in order to comply with environmental quality standards.
Abstract:
The aim of this work was to perform a systematic study of the parameters that can influence the composition, morphology, and catalytic activity of PtSn/C nanoparticles and to compare two different methods of nanocatalyst preparation, namely microwave-assisted heating (MW) and thermal decomposition of polymeric precursors (DPP). An investigation of the effects of the reducing and stabilizing agents on the catalytic activity and morphology of Pt75Sn25/C catalysts prepared by microwave-assisted heating was undertaken for optimization purposes. The effect of short-chain alcohols such as ethanol, ethylene glycol, and propylene glycol as reducing agents was evaluated, and the use of sodium acetate and citric acid as stabilizing agents for the MW procedure was examined. Catalysts obtained from propylene glycol displayed higher catalytic activity compared with catalysts prepared in ethylene glycol. Introduction of sodium acetate enhanced the catalytic activity, but this beneficial effect was observed only up to a critical acetate concentration. Optimization of the MW synthesis allowed for the preparation of highly dispersed catalysts with average particle sizes between 2.0 and 5.0 nm. Comparison of the best catalyst prepared by MW with a catalyst of similar composition prepared by the polymeric precursors method showed that the catalytic activity of the material can be improved when proper preparation conditions are achieved. (C) 2012 Elsevier B.V. All rights reserved.
Abstract:
A total internal reflection-based differential refractometer, capable of measuring the real and imaginary parts of the complex refractive index in real time, is presented. The device takes advantage of the phase difference acquired by s- and p-polarized light to generate an easily detectable minimum in the reflected profile. The method allows sensitive measurement of transparent and turbid liquid samples. (C) 2012 Optical Society of America
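As background not spelled out in the abstract, refractometry based on total internal reflection exploits the critical-angle condition

    \sin\theta_c = n_{\mathrm{sample}} / n_{\mathrm{prism}}, \qquad n_{\mathrm{sample}} < n_{\mathrm{prism}},

so the angular position of the reflectance transition tracks the real part of the sample's refractive index, while absorption (the imaginary part) smooths the transition and reduces the reflectivity beyond \theta_c.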
Abstract:
Reproducing Fourier's law of heat conduction from a microscopic stochastic model is a long-standing challenge in statistical physics. As was shown by Rieder, Lebowitz and Lieb many years ago, a chain of harmonically coupled oscillators connected to two heat baths at different temperatures does not reproduce the diffusive behaviour of Fourier's law, but instead a ballistic one with an infinite thermal conductivity. Since then, there has been a substantial effort from the scientific community to identify the key mechanism necessary to reproduce such diffusivity, usually revolving around anharmonicity and the effect of impurities. Recently, it was shown by Dhar, Venkateshan and Lebowitz that Fourier's law can be recovered by introducing an energy-conserving noise, whose role is to simulate the elastic collisions between the atoms and other microscopic degrees of freedom that one would expect to be present in a real solid. For a one-dimensional chain this is accomplished numerically by randomly flipping the sign of the velocity of an oscillator, according to a Poisson process with a variable "rate of collisions". In this poster we present Langevin simulations of a one-dimensional chain of oscillators coupled to two heat baths at different temperatures. We consider both harmonic and anharmonic (quartic) interactions, which are studied with and without the energy-conserving noise. With these results we are able to map in detail how the heat conductivity k is influenced by both anharmonicity and the energy-conserving noise. We also present a detailed analysis of the behaviour of k as a function of the size of the system and the rate of collisions, which includes a finite-size scaling method that enables us to extract the relevant critical exponents. Finally, we show that for harmonic chains k is independent of temperature, both with and without the noise. Conversely, for anharmonic chains we find that k increases roughly linearly with the temperature of a given reservoir while the temperature difference is kept fixed.
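A minimal sketch of the kind of simulation described above, assuming unit masses, fixed ends, Langevin baths acting only on the two boundary oscillators, and Poissonian velocity flips as the energy-conserving noise (all function and parameter names are illustrative, not the authors' code):

    import numpy as np

    def simulate_chain(N=64, T_left=1.5, T_right=0.5, k=1.0, lam=0.0,
                       gamma=1.0, flip_rate=1.0, dt=0.005, steps=200_000, seed=0):
        """Langevin dynamics of a 1D oscillator chain with fixed walls, harmonic (k)
        and optional quartic (lam) nearest-neighbour couplings; only the first and
        last particles are coupled to Langevin heat baths, and every particle's
        velocity sign is flipped at Poisson rate `flip_rate` (energy-conserving noise)."""
        rng = np.random.default_rng(seed)
        x = np.zeros(N)
        v = np.zeros(N)
        heat_left = 0.0
        for _ in range(steps):
            # nearest-neighbour forces, with fixed walls at both ends
            xp = np.concatenate(([0.0], x, [0.0]))
            dxl = xp[1:-1] - xp[:-2]      # displacement relative to left neighbour
            dxr = xp[2:] - xp[1:-1]       # displacement relative to right neighbour
            f = -k * (dxl - dxr) - lam * (dxl**3 - dxr**3)
            # Langevin baths on the boundary particles only (Euler-Maruyama step)
            f_bath = np.zeros(N)
            f_bath[0] = -gamma * v[0] + np.sqrt(2 * gamma * T_left / dt) * rng.standard_normal()
            f_bath[-1] = -gamma * v[-1] + np.sqrt(2 * gamma * T_right / dt) * rng.standard_normal()
            v += dt * (f + f_bath)
            x += dt * v
            # energy-conserving noise: flip velocity signs as a Poisson process
            flips = rng.random(N) < flip_rate * dt
            v[flips] *= -1.0
            # crude estimate of the heat injected by the left bath
            heat_left += dt * v[0] * f_bath[0]
        return heat_left / (steps * dt)   # average heat flux from the left bath

The harmonic case corresponds to lam = 0 and a quartic interparticle term is switched on by lam > 0; the average flux returned here gives one estimate of the conductivity once combined with the imposed temperature difference and the chain length.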
Abstract:
A path integral simulation algorithm which includes a higher-order Trotter approximation (HOA) is analyzed and compared to an approach which includes the correct quantum mechanical pair interaction (effective propagator (EPr)). It is found that the HOA algorithm converges to the quantum limit with increasing Trotter number P as P^{-4}, while the EPr algorithm converges as P^{-2}. The convergence rate of the HOA algorithm is analyzed for various physical systems such as a harmonic chain, a particle in a double-well potential, gaseous argon, gaseous helium and crystalline argon. A new expression for the estimator for the pair correlation function in the HOA algorithm is derived. A new path integral algorithm, the hybrid algorithm, is developed. It combines an exact treatment of the quadratic part of the Hamiltonian and the higher-order Trotter expansion techniques. For the discrete quantum sine-Gordon chain (DQSGC), it is shown that this algorithm works more efficiently than all other improved path integral algorithms discussed in this work. The new simulation techniques developed in this work allow the analysis of the DQSGC and disordered model systems in the highly quantum mechanical regime using path integral molecular dynamics (PIMD) and adiabatic centroid path integral molecular dynamics (ACPIMD). The ground state phonon dispersion relation is calculated for the DQSGC by the ACPIMD method. It is found that the excitation gap at zero wave vector is reduced by quantum fluctuations. Two different phases exist: one phase with a finite excitation gap at zero wave vector, and a gapless phase where the excitation gap vanishes. The reaction of the DQSGC to an external driving force is analyzed at T=0. In the gapless phase the system creeps if a small force is applied, and in the phase with a gap the system is pinned. At a critical force, the system undergoes a depinning transition in both phases and flow is induced. The analysis of the DQSGC is extended to models with disordered substrate potentials. Three different cases are analyzed: disordered substrate potentials with roughness exponent H=0, H=1/2, and a model with disordered bond length. For all models, the ground state phonon dispersion relation is calculated.
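For orientation (the abstract does not specify the factorization, so this is the standard form rather than necessarily the one used here), the primitive second-order Trotter scheme behind conventional PIMD is

    e^{-\beta \hat H} \approx \Big[ e^{-\frac{\beta}{2P}\hat V}\, e^{-\frac{\beta}{P}\hat T}\, e^{-\frac{\beta}{2P}\hat V} \Big]^P,

whose thermodynamic averages converge as P^{-2}, while a Takahashi-Imada-type higher-order approximation, one common HOA realization, replaces \hat V by an effective potential

    \hat V_{\mathrm{eff}} = \hat V + \frac{\beta^2 \hbar^2}{24\, m\, P^2}\, |\nabla V|^2

(m being the particle mass), which pushes the convergence to P^{-4}.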
Abstract:
Real-time quantitative polymerase chain reaction (qPCR) depends on precise temperature control of the sample during cycling. In the current study, we investigated how temperature variation in plate-based qPCR instruments influences qPCR results. Temperature variation was measured by amplicon melting analysis as a convenient means to assess well-to-well differences. Multiple technical replicates of several SYBR Green I-based qPCR assays allowed correlation of relative well temperature to quantification cycle. We found that inadequate template denaturation results in an inverse correlation and requires increasing the denaturation temperature, adding a DNA-destabilizing agent, or pretreating with a restriction enzyme. In contrast, inadequate primer annealing results in a direct correlation and requires lowering the annealing temperature. Significant correlations were found in 18 of 25 assays. The critical nature of temperature-dependent effects was shown in a blinded study of 29 patients for the diagnosis of Prader-Willi and Angelman syndromes, in which eight diagnoses were incorrect unless temperature-dependent effects were controlled. A method to detect temperature-dependent effects by pairwise comparisons of replicates in routine experiments is presented and applied. Systematic temperature errors in qPCR instruments can be recognized and their effects eliminated when high precision is required in quantitative genetic diagnostics and critical complementary DNA analyses.
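One plausible reading of the pairwise-comparison idea, sketched under the assumption that the apparent amplicon melting temperature serves as the per-well temperature proxy and the quantification cycle as the outcome (the exact statistic is not given in the abstract, and all names are illustrative):

    from itertools import combinations
    import numpy as np
    from scipy.stats import pearsonr

    def temperature_effect_screen(tm, cq):
        """Screen technical replicates of one assay for temperature-dependent bias.
        tm: apparent amplicon melting temperature per well (proxy for relative well temperature)
        cq: quantification cycle per well
        Returns the Pearson correlation (and p-value) between pairwise Tm differences
        and pairwise Cq differences over all replicate pairs. A significant correlation
        flags a temperature-dependent effect; its sign (once apparent Tm is converted to
        a relative well-temperature offset) separates denaturation-limited from
        annealing-limited behaviour, as discussed in the abstract."""
        d_tm, d_cq = [], []
        for i, j in combinations(range(len(tm)), 2):
            d_tm.append(tm[i] - tm[j])
            d_cq.append(cq[i] - cq[j])
        r, p = pearsonr(d_tm, d_cq)
        return r, p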
Abstract:
Markov chain Monte Carlo (MCMC) is a method of producing a correlated sample in order to estimate features of a complicated target distribution via simple ergodic averages. A fundamental question in MCMC applications is: when should the sampling stop? That is, when are the ergodic averages good estimates of the desired quantities? We consider a method that stops the MCMC sampling the first time the width of a confidence interval based on the ergodic averages is less than a user-specified value. Hence, calculating Monte Carlo standard errors is a critical step in assessing the output of the simulation. In particular, we consider the regenerative simulation and batch means methods of estimating the variance of the asymptotic normal distribution. We describe sufficient conditions for the strong consistency and asymptotic normality of both methods and investigate their finite-sample properties in a variety of examples.
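A minimal sketch of the fixed-width stopping rule with a batch-means standard error, assuming a hypothetical `sampler(n)` that advances the chain and returns n further draws of the functional of interest (an illustration of the idea, not the authors' implementation):

    import numpy as np

    def batch_means_se(x):
        """Batch-means estimate of the Monte Carlo standard error of mean(x),
        using roughly sqrt(n) batches of size roughly sqrt(n)."""
        x = np.asarray(x, dtype=float)
        n = len(x)
        b = int(np.sqrt(n))                 # batch size
        a = n // b                          # number of batches
        means = x[:a * b].reshape(a, b).mean(axis=1)
        var_hat = b * np.sum((means - x[:a * b].mean()) ** 2) / (a - 1)
        return np.sqrt(var_hat / n)

    def run_until_fixed_width(sampler, eps=0.01, z=1.96,
                              n_min=10_000, n_step=10_000, n_max=10_000_000):
        """Extend the chain until the confidence-interval half-width z * MCSE of the
        ergodic average falls below the user-specified value eps (or n_max is hit)."""
        x = np.asarray(sampler(n_min), dtype=float)
        while True:
            half_width = z * batch_means_se(x)
            if half_width < eps or len(x) >= n_max:
                return x.mean(), half_width, len(x)
            x = np.concatenate([x, np.asarray(sampler(n_step), dtype=float)])

Regenerative simulation would replace the batch-means variance estimate with one built from independent tours between regeneration times; the stopping criterion itself is unchanged.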
Abstract:
Sideflexing plastic chains are used increasingly in material handling due to their highly flexible conveying design and layout options. These systems are often equipped with so-called modular belts. Because of the specific way these belts transmit force, detailed calculation methods are not yet available for them. In the following, a generally valid calculation approach is derived and its differences from existing solutions are shown by examples.
Abstract:
The Centers for Disease Control estimates that foodborne diseases cause approximately 76 million illnesses, 325,000 hospitalizations, and 5,000 deaths in the United States each year. The American public is becoming more health conscious, and there has been an increase in the dietary intake of fresh fruits and vegetables. Affluence and demand for convenience have allowed consumers to opt for pre-processed packaged fresh fruits and vegetables. These pre-processed foods are considered Ready-to-Eat. They have many of the advantages of fresh produce without the inconvenience of processing at home. After a decline in food-related illnesses between 1996 and 2004, due to improvements in meat and poultry safety, tainted produce has tilted the numbers back. As a result, none of the Healthy People 2010 targets for food-related illness reduction have been reached. Irradiation has been shown to be effective in eliminating many foodborne pathogens. The application of irradiation as a food safety treatment has been widely endorsed by many of the major associations involved with food safety and public health. Despite these endorsements, there has been very little use of this technology to date for reducing the disease burden associated with the consumption of these products. A review of the available literature since the passage of the 1996 Food Quality Protection Act was conducted on the barriers to implementing irradiation as a food safety process for fresh fruits and vegetables. The impediments to widespread adoption of irradiation food processing as a food safety measure involve a complex array of legislative, regulatory, industry, and consumer issues. The FDA's approval process limits the expansion of the list of foods approved for the application of irradiation as a food safety process. There is also a lack of capacity to meet the needs of a geographically dispersed industry.
Abstract:
Erosion potential and the effects of tillage can be evaluated from quantitative descriptions of soil surface roughness. The present study therefore aimed to fill the need for a reliable, low-cost and convenient method to measure that parameter. Based on the interpretation of micro-topographic shadows, this new procedure is primarily designed for use in the field after tillage. The principle underlying shadow analysis is the direct relationship between soil surface roughness and the shadows cast by soil structures under fixed sunlight conditions. The results obtained with this method were compared to the statistical indexes used to interpret field readings recorded by a pin meter. The tests were conducted on 4-m² sandy loam and sandy clay loam plots divided into 1-m² subplots tilled with three different tools: chisel, tiller and roller. The highly significant correlation between the statistical indexes and shadow analysis results obtained in the laboratory as well as in the field for all the soil-tool combinations proved that both variability (CV) and dispersion (SD) are accommodated by the new method. This procedure simplifies the interpretation of soil surface roughness and shortens the time involved in field operations by a factor ranging from 12 to 20.
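For concreteness, the two statistical indexes named above can be read as the usual quantities computed from pin-meter height readings; a minimal sketch under that assumption (any detrending of the readings is omitted, and the function name is illustrative):

    import numpy as np

    def roughness_indexes(heights_mm):
        """Standard pin-meter roughness indexes from a grid of surface heights (mm):
        SD, the standard deviation of heights (dispersion), and
        CV, the coefficient of variation in percent (relative variability)."""
        h = np.asarray(heights_mm, dtype=float).ravel()
        sd = h.std(ddof=1)
        cv = 100.0 * sd / h.mean()
        return sd, cv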
Abstract:
We propose a general procedure for solving incomplete-data estimation problems. The procedure can be used to find the maximum likelihood estimate or to solve estimating equations in difficult cases such as estimation with the censored or truncated regression model, the nonlinear structural measurement error model, and the random effects model. The procedure is based on the general principle of stochastic approximation and the Markov chain Monte Carlo method. Applying the theory of adaptive algorithms, we derive conditions under which the proposed procedure converges. Simulation studies also indicate that the proposed procedure consistently converges to the maximum likelihood estimate for the structural measurement error logistic regression model.
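A generic sketch of the stochastic-approximation / MCMC idea, with hypothetical user-supplied callables: `draw_missing` performs one MCMC transition for the missing data given the observed data and the current parameter, and `score` evaluates the complete-data estimating function (this illustrates the general scheme, not the paper's exact algorithm):

    import numpy as np

    def sa_mcmc_estimate(theta0, draw_missing, score, n_iter=2000,
                         gamma0=1.0, alpha=0.6, seed=0):
        """Stochastic approximation driven by MCMC imputation of the missing data:
        at each step, refresh the missing data via one MCMC transition at the current
        parameter, then move the parameter along the simulated estimating function
        with a decreasing gain gamma_k (Robbins-Monro conditions: sum of gamma_k
        diverges, sum of gamma_k^2 converges, e.g. alpha in (1/2, 1])."""
        rng = np.random.default_rng(seed)
        theta = np.asarray(theta0, dtype=float)
        missing = draw_missing(theta, None, rng)           # initial imputation
        for k in range(1, n_iter + 1):
            missing = draw_missing(theta, missing, rng)    # one MCMC transition
            gamma_k = gamma0 / k**alpha
            theta = theta + gamma_k * score(theta, missing)
        return theta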
Abstract:
Objective: To evaluate the READER model for critical reading by comparing it with a free appraisal, and to explore what factors influence different components of the model.
Abstract:
We developed a real-time detection (RTD) polymerase chain reaction (PCR) assay with rapid thermal cycling to detect and quantify Pseudomonas aeruginosa in wound biopsy samples. This method produced a linear quantitative detection range of 7 logs, with a lower detection limit of 10^3 colony-forming units (CFU)/g tissue, or a few copies per reaction. The time from sample collection to result was less than 1 h. RTD-PCR has potential for rapid quantitative detection of pathogens in critical care patients, enabling early and individualized treatment.
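As background on the quantification step (not detailed in the abstract), real-time PCR copy numbers are typically read off a log-linear standard curve, C_q = m log10(N_0) + b, so that

    N_0 = 10^{(C_q - b)/m}, \qquad E = 10^{-1/m} - 1,

where E is the amplification efficiency; an ideal assay has E close to 1, i.e. a slope m of about -3.32 cycles per decade, and a 7-log linear detection range corresponds to log-linearity of C_q in N_0 over that range.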
Abstract:
Position 57 in the beta chain of HLA class II molecules maintains an Asp/non-Asp dimorphism that has been conserved through evolution and is implicated in susceptibility to some autoimmune diseases. The latter effect may be due to the influence of this residue on the ability of class II alleles to bind specific pathogenic peptides. We utilized highly homologous pairs of both DR and DQ alleles that varied at residue 57 to investigate the impact of this dimorphism on binding of model peptides. Using a direct binding assay of biotinylated peptides on whole cells expressing the desired alleles, we report several peptides that bind differentially to the allele pairs depending on the presence or absence of Asp at position 57. Peptides with negatively charged residues at anchor position 9 bind well to alleles not containing Asp at position 57 in the beta chain but cannot bind well to homologous Asp-positive alleles. By changing the peptides at the single residue predicted to interact with this position 57, we demonstrate a drastically altered or reversed pattern of binding. Ala analog peptides confirm these interactions and identify a limited set of interaction sites between the bound peptides and the class II molecules. Clarification of the impact of specific class II polymorphisms on generating unique allele-specific peptide binding "repertoires" will aid in our understanding of the development of specific immune responses and HLA-associated diseases.