938 results for "Quasi-analytical algorithms"


Relevance: 20.00%

Publisher:

Abstract:

A robust semi-implicit central partial-difference algorithm for the numerical solution of coupled stochastic parabolic partial differential equations (PDEs) is described. It can be used to calculate correlation functions of systems of interacting stochastic fields. Such field equations arise in the description of Hamiltonian and open systems in the physics of nonlinear processes, and may include multiplicative noise sources. The algorithm can be used to study the properties of nonlinear quantum or classical field theories. The general approach is outlined and applied to a specific example, namely the quantum statistical fluctuations of ultra-short optical pulses in chi^(2) parametric waveguides. This example uses a non-diagonal coherent-state representation and correctly predicts the sub-shot-noise spectral fluctuations observed in homodyne detection measurements. The methods are expected to be applicable to higher-order correlation functions and other physical problems as well. A stochastic differencing technique for reducing sampling errors is also introduced. It involves solving nonlinear stochastic parabolic PDEs in combination with a reference process, which uses the Wigner representation in the example presented here. A computer implementation on MIMD parallel architectures is discussed. (C) 1997 Academic Press.
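A semi-implicit central-difference step of the kind described above can be sketched for a single stochastic parabolic PDE with multiplicative noise, du = D u_xx dt + g u dW. The equation, the periodic boundary, the step sizes, and the fixed-point iteration count are illustrative assumptions, not the paper's chi^(2) waveguide model:

```python
import numpy as np

def semi_implicit_step(u, dt, dx, D, g, rng):
    """One semi-implicit (midpoint) step for du = D u_xx dt + g u dW.

    The Laplacian is a central difference with periodic boundaries; the
    midpoint field is found by a short fixed-point iteration, a common
    construction for semi-implicit stochastic PDE integrators.
    """
    dW = rng.normal(0.0, np.sqrt(dt), size=u.shape)  # one noise draw per step
    u_mid = u.copy()
    for _ in range(3):  # a few fixed-point iterations usually suffice
        lap = (np.roll(u_mid, 1) - 2 * u_mid + np.roll(u_mid, -1)) / dx**2
        u_mid = u + 0.5 * (D * lap * dt + g * u_mid * dW)
    lap = (np.roll(u_mid, 1) - 2 * u_mid + np.roll(u_mid, -1)) / dx**2
    return u + D * lap * dt + g * u_mid * dW

# propagate a uniform initial field for 100 steps
rng = np.random.default_rng(0)
u = np.ones(64)
for _ in range(100):
    u = semi_implicit_step(u, dt=1e-4, dx=0.1, D=1.0, g=0.5, rng=rng)
```

In practice, correlation functions would be estimated by averaging such trajectories over many independent noise realisations.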


Objective: We assessed how often patients presenting with a myocardial infarction (MI) would not be considered candidates for intensive lipid-lowering therapy under the current guidelines. Methods: In 355 consecutive patients presenting with ST-elevation MI (STEMI), admission plasma C-reactive protein (CRP) was measured, and the Framingham risk score (FRS), PROCAM risk score, Reynolds risk score, ASSIGN risk score, QRISK and SCORE algorithms were applied. Cardiac computed tomography and carotid ultrasound were performed to assess the coronary artery calcium score (CAC), carotid intima-media thickness (cIMT) and the presence of carotid plaques. Results: Fewer than 50% of STEMI patients would have been identified as high risk before the event by any of these algorithms. With the exception of FRS (9%), all algorithms would have assigned low risk to about half of the enrolled patients. Plasma CRP was <1.0 mg/L in 70% and >2 mg/L in 14% of the patients. The average cIMT was 0.8 ± 0.2 mm and was ≥1.0 mm in only 24% of patients. Carotid plaques were found in 74% of patients, and CAC > 100 in 66%. Adding CAC > 100 or the presence of a carotid plaque to any of the above-mentioned algorithms would have identified a high-risk condition in 100% of the patients. Conclusion: More than half of the patients presenting with STEMI would not be considered candidates for intensive preventive therapy by the current clinical algorithms. The addition of anatomical parameters such as CAC and the presence of carotid plaques can substantially reduce this underestimation of CVD risk. (C) 2010 Elsevier Ireland Ltd. All rights reserved.
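The reclassification idea, flagging a patient as high risk when either the clinical score or an anatomical marker is positive, can be sketched as follows. Only the CAC > 100 cutoff and the carotid-plaque criterion come from the abstract; the function name, the 20% score threshold, and the example patients are illustrative assumptions:

```python
def high_risk(score_pct, cac, has_carotid_plaque,
              score_threshold=20.0, cac_threshold=100):
    """Flag a patient as high risk if the clinical risk score exceeds its
    threshold OR an anatomical marker is present (CAC above threshold, or
    a carotid plaque).  Thresholds here are assumed for illustration."""
    return (score_pct >= score_threshold
            or cac > cac_threshold
            or has_carotid_plaque)

# hypothetical patients: anatomical marker only, score only, neither
patients = [
    {"score_pct": 8.0,  "cac": 150, "plaque": False},
    {"score_pct": 25.0, "cac": 0,   "plaque": False},
    {"score_pct": 5.0,  "cac": 40,  "plaque": False},
]
flagged = [high_risk(p["score_pct"], p["cac"], p["plaque"]) for p in patients]
```

The first patient illustrates the abstract's central point: a low clinical score combined with a positive anatomical marker still yields a high-risk classification.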


Tightly constrained thermogravimetric experimental procedures (particle size < 212 μm, sample mass 15.5 mg, CO2 reactant gas, near-isothermal conditions) allow the reactivity of chars from high-volatile New Zealand coals to be determined to a repeatability of ±0.07 h⁻¹ at 900 °C and ±0.5 h⁻¹ at 1100 °C. The procedure also provides proximate analysis information and affords a quick (<90 min) comparison between different coal types, as well as indicating likely operating conditions and problems associated with a particular coal or blend. A clear difference is evident between the reactivities of the different New Zealand coal ranks: between 900 and 1100 °C, bituminous coals increase thirtyfold in reactivity compared with fourfold for subbituminous coals, with the latter being three to five times more reactive at the higher temperature. (C) 1997 Elsevier Science B.V.
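A char reactivity in h⁻¹ can be extracted from an isothermal TGA mass-loss trace roughly as sketched below. The definition used, R = -(1/m_combustible) dm/dt, is a common convention for char-CO2 gasification; the abstract does not state the paper's exact definition, so treat the function and the synthetic trace as illustrative assumptions:

```python
import numpy as np

def char_reactivity(t_h, m_mg, m_ash_mg):
    """Maximum specific reactivity (h^-1) from an isothermal TGA trace,
    R = -(1/(m - m_ash)) dm/dt, on the ash-free (combustible) mass."""
    m_comb = np.asarray(m_mg, float) - m_ash_mg   # ash-free mass, mg
    dmdt = np.gradient(m_comb, t_h)               # mg / h
    with np.errstate(divide="ignore", invalid="ignore"):
        r = -dmdt / m_comb
    return float(np.nanmax(r[m_comb > 1e-6]))

# synthetic trace: 3 mg ash plus 10 mg of char gasifying at 2 h^-1
t = np.linspace(0.0, 2.0, 2001)                   # hours
m = 3.0 + 10.0 * np.exp(-2.0 * t)                 # total mass, mg
r_max = char_reactivity(t, m, 3.0)                # ~2 h^-1
```

Subtracting the ash mass first matters: computing dm/dt against total mass would understate the reactivity as burnoff proceeds.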


The concept of parameter-space size adjustment is proposed in order to enable the successful application of genetic algorithms to continuous optimization problems. The performance of genetic algorithms with six different combinations of selection and reproduction mechanisms, with and without parameter-space size adjustment, was rigorously tested on eleven multi-minima test functions. The algorithm with the best performance was employed to determine the model parameters of the optical constants of Pt, Ni and Cr.
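One simple reading of "parameter-space size adjustment" is to shrink and re-centre the search box around the best individual each generation; the paper's exact mechanism and its selection/reproduction variants may differ, so the following is a minimal sketch under that assumption:

```python
import numpy as np

def ga_with_space_adjustment(f, lo, hi, pop=40, gens=60, shrink=0.95, seed=0):
    """Minimise f on a box, shrinking the box around the best individual
    each generation (an illustrative parameter-space size adjustment)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    x = rng.uniform(lo, hi, size=(pop, lo.size))
    best = min(x, key=f).copy()
    for _ in range(gens):
        fit = np.array([f(xi) for xi in x])
        parents = x[np.argsort(fit)[:pop // 2]]   # truncation selection
        if f(parents[0]) < f(best):
            best = parents[0].copy()
        # adjust (shrink) the parameter space around the current best
        half = shrink * (hi - lo) / 2
        lo, hi = best - half, best + half
        # uniform crossover + Gaussian mutation, clipped to the new box
        idx = rng.integers(0, parents.shape[0], size=(pop, 2))
        mask = rng.random((pop, lo.size)) < 0.5
        child = np.where(mask, parents[idx[:, 0]], parents[idx[:, 1]])
        child += rng.normal(0.0, 0.1 * (hi - lo), size=child.shape)
        x = np.clip(child, lo, hi)
    return best

f = lambda v: float(np.sum(v * v))                # sphere test function
best = ga_with_space_adjustment(f, [-5.0, -5.0], [5.0, 5.0])
```

Because the mutation scale is tied to the current box size, the search automatically switches from exploration to local refinement as the space contracts.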


We suggest a new notion of behaviour-preserving refinement based on partial-order semantics, called transition refinement, which we introduced earlier for elementary (low-level) Petri nets. For modelling and verifying complex distributed algorithms, high-level (algebraic) Petri nets are usually used, and in this paper we define transition refinement for algebraic Petri nets. This notion is more powerful than transition refinement for elementary Petri nets because it corresponds to the simultaneous refinement of several transitions in an elementary Petri net. Transition refinement is particularly suitable for refinement steps that increase the degree of distribution of an algorithm, e.g. when synchronous communication is replaced by asynchronous message passing. We study how to prove that the replacement of a transition is a transition refinement.
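The syntactic part of such a replacement, substituting one transition by a subnet that reuses its pre- and post-places, can be sketched with a toy net encoding. The encoding, the interface check, and the names are assumptions for illustration; the paper's actual behaviour-preservation conditions rest on partial-order semantics and are not checked here:

```python
def refine_transition(net, t, subnet):
    """net: dict mapping transition name -> (pre, post), both frozensets of
    places.  Replace transition t by subnet (same dict form), requiring the
    subnet's border to cover t's pre- and post-places."""
    pre, post = net[t]
    sub_pre = set().union(*(p for p, _ in subnet.values()))
    sub_post = set().union(*(q for _, q in subnet.values()))
    if not (pre <= sub_pre and post <= sub_post):
        raise ValueError("subnet does not cover the interface of " + t)
    refined = {u: v for u, v in net.items() if u != t}
    refined.update(subnet)
    return refined

# synchronous communication as a single transition ...
net = {"sync": (frozenset({"s_ready", "r_ready"}), frozenset({"done"}))}
# ... refined into asynchronous message passing via a buffer place "msg"
subnet = {
    "send": (frozenset({"s_ready"}), frozenset({"msg"})),
    "recv": (frozenset({"msg", "r_ready"}), frozenset({"done"})),
}
refined = refine_transition(net, "sync", subnet)
```

The example mirrors the abstract's motivating step: one synchronous rendezvous becomes two independently firing transitions, increasing the degree of distribution.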


We compared the lignin contents of tropical forages as measured by different analytical methods and evaluated their correlations with parameters related to the degradation of neutral detergent fiber (NDF). The lignin content was evaluated by five methods: cellulose solubilization in sulfuric acid [Lignin (sa)], oxidation with potassium permanganate [Lignin (pm)], the Klason lignin method (KL), solubilization in acetyl bromide from acid detergent fiber (ABLadf) and solubilization in acetyl bromide from the cell wall (ABLcw). Samples from ten grasses and ten legumes were used. The lignin contents obtained by the gravimetric methods were also corrected for protein contamination, and the corrected values are referred to as Lignin (sa)p, Lignin (pm)p and KLp. The indigestible fraction of NDF (iNDF), the discrete lag (LAG) and the fractional rate of degradation (kd) of NDF were estimated using an in vitro assay. Correcting for protein resulted in reductions (P < 0.05) in the lignin contents measured by the Lignin (sa), Lignin (pm) and, especially, the KL methods. There was an interaction (P < 0.05) between analytical method and forage group for lignin content. In general, the KLp method provided the highest (P < 0.05) lignin contents. The estimates of lignin content obtained by the Lignin (sa)p, Lignin (pm)p and KLp methods were associated (P < 0.05) with all of the NDF degradation parameters, and the strongest correlation coefficients were obtained with Lignin (pm)p and KLp. The lignin content estimated by the ABLcw method did not correlate (P > 0.05) with any parameter of NDF degradation. There was a correlation (P < 0.05) between the lignin content estimated by the ABLadf method and the iNDF content; nonetheless, this correlation was weaker than those found with the gravimetric methods. From these results, we conclude that the gravimetric methods produce residues contaminated by nitrogenous compounds. Adjustment for these contaminants is suggested, particularly for the KL method, to express lignin content with greater accuracy. The relationships between lignin content measurements and NDF degradation parameters are better determined using the KLp and Lignin (pm)p methods. (C) 2011 Elsevier B.V. All rights reserved.
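The correlation analysis underlying these comparisons is a standard Pearson coefficient between a method's lignin estimates and an NDF degradation parameter. A minimal sketch follows; the sample data are invented for illustration, not the paper's measurements:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two paired samples, as used
    to relate lignin content to NDF degradation parameters."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xm, ym = x - x.mean(), y - y.mean()
    return float((xm @ ym) / np.sqrt((xm @ xm) * (ym @ ym)))

# hypothetical paired values: lignin content (% DM) vs iNDF content (% NDF)
lignin = [2.1, 3.4, 4.0, 5.2, 6.8, 7.5]
indf = [18.0, 24.0, 27.0, 33.0, 41.0, 44.0]
r = pearson_r(lignin, indf)   # strongly positive for these data
```

In the paper's setting, a method such as ABLcw would show an r near zero against every degradation parameter, while KLp and Lignin (pm)p would give the strongest coefficients.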