992 results for GRADIENT METHODS
Abstract:
INTRODUCTION: Eddy currents induced by the switching of magnetic field gradients can lead to distortions in short echo-time spectroscopy or diffusion-weighted imaging. In small-bore magnets, such as human head-only systems, minimization of eddy current effects is more demanding because of the proximity of the gradient coil to conducting structures. METHODS: In the present study, the eddy current behavior achievable on a recently installed 7 T, 68-cm-bore head-only magnet was characterized. RESULTS: Residual effects after compensation were shown to be of the same order of magnitude as those measured on two whole-body systems (3 and 4.7 T), while using two- to threefold higher gradient slew rates.
Abstract:
This paper presents and discusses the use of Bayesian procedures - introduced through the use of Bayesian networks in Part I of this series of papers - for 'learning' probabilities from data. The discussion will relate to a set of real data on characteristics of black toners commonly used in printing and copying devices. Particular attention is drawn to the incorporation of the proposed procedures as an integral part in probabilistic inference schemes (notably in the form of Bayesian networks) that are intended to address uncertainties related to particular propositions of interest (e.g., whether or not a sample originates from a particular source). The conceptual tenets of the proposed methodologies are presented along with aspects of their practical implementation using currently available Bayesian network software.
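As a purely illustrative aside (not the specific forensic workflow of the paper), the kind of probability 'learning' described above can be sketched as Dirichlet-multinomial updating of a node's probability table; the variable names, the uniform prior, and the toner-class counts below are assumptions made only for this example.

```python
import numpy as np

def update_dirichlet(prior_counts, observed_counts):
    """Posterior Dirichlet parameters after observing categorical count data."""
    return np.asarray(prior_counts, float) + np.asarray(observed_counts, float)

def posterior_mean(alpha):
    """Posterior mean probabilities of a Dirichlet(alpha) distribution."""
    alpha = np.asarray(alpha, float)
    return alpha / alpha.sum()

# Hypothetical example: three toner categories, a uniform prior, and
# counts from a made-up reference collection (illustrative only).
prior = [1.0, 1.0, 1.0]
counts = [42, 7, 11]
print(posterior_mean(update_dirichlet(prior, counts)))
# The resulting probabilities could then be entered into the node tables
# of a Bayesian network built with the kind of software mentioned above.
```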
Abstract:
This contribution compares existing and newly developed techniques for geometrically representing mean-variance-skewness portfolio frontiers, based on the rather widely adopted methodology of polynomial goal programming (PGP) on the one hand and the more recent approach based on the shortage function on the other. Moreover, we explain the workings of these different methodologies in detail and provide graphical illustrations. Inspired by these illustrations, we prove a generalization of the well-known two-fund separation theorem from traditional mean-variance portfolio theory.
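For orientation, one commonly cited PGP formulation of the mean-variance-skewness problem (a textbook-style sketch under a unit-variance normalization, not necessarily the exact program used in this paper) reads:

\[
\begin{aligned}
\min_{x,\,d_1,\,d_3}\quad & d_1^{\lambda_1} + d_3^{\lambda_3}\\
\text{s.t.}\quad & x^{\top}\mu + d_1 = M^{*},\\
& \mathbb{E}\!\left[\bigl(x^{\top}(R-\mu)\bigr)^{3}\right] + d_3 = S^{*},\\
& x^{\top}\Sigma\,x = 1,\qquad \mathbf{1}^{\top}x = 1,\qquad d_1,\,d_3 \ge 0,
\end{aligned}
\]

where M* and S* are the separately optimized (aspired) mean and skewness levels, d_1 and d_3 the shortfalls from these goals, and λ_1, λ_3 investor-specific preference weights.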
Abstract:
PURPOSE: To objectively compare quantitative parameters related to image quality attained at coronary magnetic resonance (MR) angiography of the right coronary artery (RCA) performed at 7 T and 3 T. MATERIALS AND METHODS: Institutional review board approval was obtained, and volunteers provided signed informed consent. Ten healthy adult volunteers (mean age ± standard deviation, 25 years ± 4; seven men, three women) underwent navigator-gated three-dimensional MR angiography of the RCA at 7 T and 3 T. For 7 T, a custom-built quadrature radiofrequency transmit-receive surface coil was used. At 3 T, a commercial body radiofrequency transmit coil and a cardiac coil array for signal reception were used. Segmented k-space gradient-echo imaging with spectrally selective adiabatic fat suppression was performed, and imaging parameters were similar at both field strengths. Contrast-to-noise ratio between blood and epicardial fat; signal-to-noise ratio of the blood pool; RCA vessel sharpness, diameter, and length; and navigator efficiency were quantified at both field strengths and compared by using a Mann-Whitney U test. RESULTS: The contrast-to-noise ratio between blood and epicardial fat was significantly improved at 7 T when compared with that at 3 T (87 ± 34 versus 52 ± 13; P = .01). Signal-to-noise ratio of the blood pool was increased at 7 T (109 ± 47 versus 67 ± 19; P = .02). Vessel sharpness obtained at 7 T was also higher (58% ± 9 versus 50% ± 5; P = .04). At the same time, RCA vessel diameter and length and navigator efficiency showed no significant field strength-dependent difference. CONCLUSION: In our quantitative and qualitative study comparing in vivo human imaging of the RCA at 7 T and 3 T in young healthy volunteers, parameters related to image quality attained at 7 T equal or surpass those from 3 T.
Abstract:
Nowadays, the joint exploitation of images acquired daily by remote sensing instruments and of images available from archives allows detailed monitoring of the transitions occurring at the surface of the Earth. These modifications of the land cover generate spectral discrepancies that can be detected via the analysis of remote sensing images. Independently of the origin of the images and of the type of surface change, correct processing of such data requires the adoption of flexible, robust, and possibly nonlinear methods to correctly account for the complex statistical relationships characterizing the pixels of the images. This thesis deals with the development and application of advanced statistical methods for multi-temporal optical remote sensing image processing tasks. Three different families of machine learning models have been explored, and fundamental solutions for change detection problems are provided. In the first part, change detection with user supervision is considered. In a first application, a nonlinear classifier is applied with the intent of precisely delineating flooded regions from a pair of images. In a second case study, the spatial context of each pixel is injected into another nonlinear classifier to obtain a precise mapping of new urban structures. In both cases, the user provides the classifier with examples of what he or she believes has or has not changed. In the second part, a completely automatic and unsupervised method for precise binary detection of changes is proposed. The technique allows very accurate mapping without any user intervention, which is particularly useful when the readiness and reaction time of the system are a crucial constraint. In the third part, the problem of statistical distributions shifting between acquisitions is studied. Two approaches to transforming the pair of bi-temporal images and reducing their differences unrelated to changes in land cover are studied. The methods align the distributions of the images, so that the pixel-wise comparison can be carried out with higher accuracy. Furthermore, the second method can deal with images from different sensors, regardless of the dimensionality of the data or the spectral information content. This opens the door to possible solutions for a crucial problem in the field: detecting changes when the images have been acquired by two different sensors.
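To give a concrete feel for what "aligning the distributions" of two acquisitions means, here is an elementary per-band histogram matching sketch; this is a generic stand-in for illustration only, not one of the methods developed in the thesis, and the array shapes and band layout are assumptions.

```python
import numpy as np

def histogram_match_band(source, reference):
    """Remap `source` so its empirical distribution matches `reference`
    (both 2-D arrays holding one spectral band)."""
    src_flat = source.ravel()
    ref_flat = reference.ravel()
    # Quantile of each source pixel within its own distribution
    src_sorted = np.sort(src_flat)
    src_quantiles = np.searchsorted(src_sorted, src_flat) / src_flat.size
    # Look up the reference values at the same quantiles
    matched = np.quantile(np.sort(ref_flat), src_quantiles)
    return matched.reshape(source.shape)

def align_images(img_t1, img_t2):
    """Align each band of the time-2 image to the time-1 image
    (arrays of shape [bands, rows, cols]); pixel-wise comparison
    of img_t1 and the returned array is then more meaningful."""
    return np.stack([histogram_match_band(b2, b1)
                     for b1, b2 in zip(img_t1, img_t2)])
```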
Abstract:
Four methods were tested to assess the fire-blight disease response on grafted pear plants. The leaves of the plants were inoculated with Erwinia amylovora suspensions by pricking with clamps, cutting with scissors, local infiltration, and painting a bacterial suspension onto the leaves with a paintbrush. The effects of the inoculation methods were studied in dose-time-response experiments carried out in climate chambers under quarantine conditions. A modified Gompertz model was used to analyze the disease-time relationships and provided information on the rate of infection progression (rg) and the time delay to the start of symptoms (t0). The disease-pathogen-dose relationships were analyzed according to a hyperbolic saturation model in which the median effective dose (ED50) of the pathogen and the maximum disease level (ymax) were determined. Localized infiltration into the leaf mesophyll resulted in early (short t0) but slow (low rg) development of infection, whereas in leaves pricked with clamps disease symptoms developed late (long t0) but rapidly (high rg). Paintbrush inoculation of the plants resulted in an incubation period of medium length, a moderate rate of infection progression, and low ymax values. In leaves inoculated with scissors, fire-blight symptoms developed early (short t0) and rapidly (high rg), with the lowest ED50 and the highest ymax.
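For reference, the two model families mentioned above are commonly written as follows (a hedged sketch; the exact parameterizations used in the study may differ):

\[
y(t) = y_{\max}\,\exp\!\bigl[-\exp\bigl(-r_g\,(t - t_0)\bigr)\bigr]
\qquad\text{(Gompertz-type disease progress over time)}
\]

\[
y(D) = \frac{y_{\max}\,D}{\mathrm{ED}_{50} + D}
\qquad\text{(hyperbolic saturation of disease with pathogen dose } D\text{)}
\]

so that a short t0 and a high rg correspond to early and fast symptom development, and ED50 is the dose producing half of the maximum disease level.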
Abstract:
A short overview is given of the most important analytical body composition methods. The principles of the methods, as well as their advantages and limitations, are discussed, also in relation to other fields of research such as energy metabolism. Attention is given to some new developments in body composition research, such as chemical multiple-compartment models, computerized tomography or nuclear magnetic resonance imaging (tissue level), and multifrequency bioelectrical impedance. Possible future directions of body composition research in the light of these new developments are discussed.
Abstract:
Two common methods of accounting for electric-field-induced perturbations to molecular vibration are analyzed and compared. The first method is based on a perturbation-theoretic treatment and the second on a finite-field treatment. The relationship between the two, which is not immediately apparent, is established by developing an algebraic formalism for the latter. Some of the higher-order terms in this development are documented here for the first time. As well as considering vibrational dipole polarizabilities and hyperpolarizabilities, we also make mention of the vibrational Stark effect.
Abstract:
A procedure based on quantum molecular similarity measures (QMSM) has been used to compare electron densities obtained from conventional ab initio and density functional methodologies at their respective optimized geometries. This method has been applied to a series of small molecules with experimentally known properties and molecular bonds of diverse degrees of ionicity and covalency. The results show that in most cases the electron densities obtained from density functional methodologies are of a quality similar to that of post-Hartree-Fock generalized densities. For molecules where the Hartree-Fock methodology yields erroneous results, the density functional methodology is shown to usually yield more accurate densities than those provided by second-order Møller-Plesset perturbation theory.
Abstract:
We report here a new empirical density functional that is constructed based on the performance of OPBE and PBE for spin states and SN2 reaction barriers, and on how these are affected by different regions of the reduced gradient expansion. In a previous study [Swart, Solà, and Bickelhaupt, J. Comput. Methods Sci. Eng. 9, 69 (2009)] we already reported how, by switching between OPBE and PBE, one could obtain both the good performance of OPBE for spin states and reaction barriers and that of PBE for weak interactions within one and the same (SSB-sw) functional. Here we have fine-tuned this functional and include a portion of the KT functional and Grimme's dispersion correction to account for π-π stacking. Our new SSB-D functional is found to be a clear improvement and performs very well for biological applications (hydrogen bonding, π-π stacking, spin-state splittings, accuracy of geometries, reaction barriers).
Abstract:
In the present paper we discuss and compare two different energy decomposition schemes: Mayer's Hartree-Fock energy decomposition into diatomic and monoatomic contributions [Chem. Phys. Lett. 382, 265 (2003)], and the Ziegler-Rauk dissociation energy decomposition [Inorg. Chem. 18, 1558 (1979)]. The Ziegler-Rauk scheme is based on a separation of a molecule into fragments, while Mayer's scheme can be used in cases where a fragmentation of the system into clearly separable parts is not possible. In the Mayer scheme, the density of a free atom is deformed to give the one-atom Mulliken density, which subsequently interacts to give rise to the diatomic interaction energy. We give a detailed analysis of the diatomic energy contributions in the Mayer scheme and take a close look at the one-atom Mulliken densities. The Mulliken density ρA has a single large maximum around the nuclear position of atom A, but exhibits slightly negative values in the vicinity of neighboring atoms. The main connecting point between both analysis schemes is the electrostatic energy. Both decomposition schemes utilize the same electrostatic energy expression but differ in how the fragment densities are defined. In the Mayer scheme, the electrostatic component originates from the interaction of the Mulliken densities, while in the Ziegler-Rauk scheme the undisturbed fragment densities interact. The values of the electrostatic energy resulting from the two schemes differ significantly but typically have the same order of magnitude. Both methods are useful and complementary, since Mayer's decomposition focuses on the energy of the finally formed molecule, whereas the Ziegler-Rauk scheme describes the bond formation starting from undeformed fragment densities.
Abstract:
BACKGROUND: Differences in morbidity and mortality between socioeconomic groups constitute one of the most consistent findings of epidemiologic research. However, research on social inequalities in health has yet to provide a comprehensive understanding of the mechanisms underlying this association. In a recent analysis, we showed that health behaviours, assessed longitudinally over the follow-up, explain a major proportion of the association of socioeconomic status (SES) with mortality in the British Whitehall II study. However, whether health behaviours are equally important mediators of the SES-mortality association in different cultural settings remains unknown. In the present paper, we examine this issue in Whitehall II and another prospective European cohort, the French GAZEL study. METHODS AND FINDINGS: We included 9,771 participants from the Whitehall II study and 17,760 from the GAZEL study. Over the follow-up (mean 19.5 y in Whitehall II and 16.5 y in GAZEL), health behaviours (smoking, alcohol consumption, diet, and physical activity) were assessed longitudinally. Occupation (in the main analysis), education, and income (supplementary analysis) were the markers of SES. The socioeconomic gradient in smoking was greater (p<0.001) in Whitehall II (odds ratio [OR] = 3.68, 95% confidence interval [CI] 3.11-4.36) than in GAZEL (OR = 1.33, 95% CI 1.18-1.49); this was also true for unhealthy diet (OR = 7.42, 95% CI 5.19-10.60 in Whitehall II and OR = 1.31, 95% CI 1.15-1.49 in GAZEL, p<0.001). Socioeconomic differences in mortality were similar in the two cohorts: a hazard ratio of 1.62 (95% CI 1.28-2.05) in Whitehall II and 1.94 (95% CI 1.58-2.39) in GAZEL for the lowest versus the highest occupational position. Health behaviours attenuated the association of SES with mortality by 75% (95% CI 44%-149%) in Whitehall II but only by 19% (95% CI 13%-29%) in GAZEL. Analyses using education and income yielded similar results. CONCLUSIONS: Health behaviours were strong predictors of mortality in both cohorts, but their association with SES was remarkably different. Thus, health behaviours are likely to be major contributors to socioeconomic differences in health only in contexts with a marked social characterisation of health behaviours.
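For readers unfamiliar with the convention, attenuation percentages of this kind are typically computed on the log hazard ratio scale (the abstract does not restate the authors' exact formula, so this is the standard form only):

\[
\text{attenuation (\%)} = 100 \times \frac{\ln\mathrm{HR}_{\text{SES}} - \ln\mathrm{HR}_{\text{SES adjusted for behaviours}}}{\ln\mathrm{HR}_{\text{SES}}}
\]

which also explains how values above 100% (as in the upper confidence limit of 149%) can arise when adjustment moves the hazard ratio past the null.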
Abstract:
Background: Cardiac computed tomographic scans, coronary angiograms, and aortographies are routinely performed in transcatheter heart valve therapies. Consequently, all patients are exposed to multiple contrast injections, with an ensuing risk of nephrotoxicity and postoperative renal failure. Transapical aortic valve implantation without angiography can prevent contrast-related complications. Methods: Between November 2008 and November 2009, 30 consecutive high-risk patients (16 female, 53.3%) underwent transapical aortic valve implantation without angiography. Landmark identification, stent-valve positioning, and the postoperative control were routinely performed under transesophageal echocardiographic and fluoroscopic visualization without contrast injections. Results: Mean age was 80.1 ± 8.7 years. Mean valve gradient, aortic orifice area, and ejection fraction were 60.3 ± 20.9 mm Hg, 0.7 ± 0.16 cm², and 0.526 ± 0.128, respectively. Risk factors were pulmonary hypertension (60%), peripheral vascular disease (70%), chronic pulmonary disease (50%), previous cardiac surgery (13.3%), and chronic renal insufficiency (40%) (mean blood creatinine and urea levels: 96.8 ± 54 µg/dL and 8.45 ± 5.15 mmol/L). The average European System for Cardiac Operative Risk Evaluation score was 32.2 ± 13.3%. Valve deployment in the ideal landing zone was successful in 96.7% of cases, and valve embolization occurred once. Thirty-day mortality was 10% (3 patients). Causes of death were the following: intraoperative ventricular rupture (conversion to sternotomy), right ventricular failure, and bilateral pneumonia. Stroke occurred in one patient at postoperative day 9. Renal failure (postoperative mean blood creatinine and urea levels: 91.1 ± 66.8 µg/dL and 7.27 ± 3.45 mmol/L), myocardial infarction, and atrioventricular block were not detected. Conclusions: Transapical aortic valve implantation without angiography requires a short learning curve and can be performed routinely by experienced teams. Our report confirms that this procedure is feasible and safe, and provides good results with a low incidence of postoperative renal disorders. (Ann Thorac Surg 2010; 89: 1925-33)
Abstract:
Background: With increasing computer power, simulating the dynamics of complex systems in chemistry and biology is becoming increasingly routine. The modelling of individual reactions in (bio)chemical systems involves a large number of random events that can be simulated by the stochastic simulation algorithm (SSA). The key quantity is the step size, or waiting time, τ, whose value depends inversely on the size of the propensities of the different channel reactions and which needs to be re-evaluated after every firing event. Such a discrete event simulation may be extremely expensive, in particular for stiff systems where τ can be very short due to the fast kinetics of some of the channel reactions. Several alternative methods have been put forward to increase the integration step size. The so-called τ-leap approach takes a larger step size by allowing all the reactions to fire, with counts drawn from a Poisson or binomial distribution, within that step. Although the expected values of the different species in the reactive system are maintained with respect to more precise methods, the variance at steady state can suffer from large errors as τ grows. Results: In this paper we extend Poisson τ-leap methods to a general class of Runge-Kutta (RK) τ-leap methods. We show that with the proper selection of the coefficients, the variance of the extended τ-leap can be well behaved, leading to significantly larger step sizes. Conclusions: The benefit of adapting the extended method to the use of RK frameworks is clear in terms of speed of calculation, as the number of evaluations of the Poisson distribution is still one set per time step, as in the original τ-leap method. The approach paves the way for exploring new multiscale methods to simulate (bio)chemical systems.
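To make the baseline concrete, a minimal plain Poisson τ-leap step (the method the paper extends to Runge-Kutta form) is sketched below; the two-species toy system, rate constants, and function names are hypothetical and chosen only for illustration.

```python
import numpy as np

def tau_leap_step(x, tau, propensities, stoichiometry, rng):
    """One plain Poisson tau-leap update of the state vector x."""
    a = propensities(x)                      # reaction propensities a_j(x)
    k = rng.poisson(a * tau)                 # firings of each channel in (t, t + tau]
    return x + stoichiometry @ k             # net change of every species

# Hypothetical two-species toy system: A -> B (rate c1) and B -> A (rate c2).
c1, c2 = 0.5, 0.3
S = np.array([[-1,  1],                      # column j = state change of reaction j
              [ 1, -1]])
prop = lambda x: np.array([c1 * x[0], c2 * x[1]])

rng = np.random.default_rng(0)
x = np.array([100, 0])
for _ in range(50):
    # crude guard against negative populations, a known weakness of large tau
    x = np.maximum(tau_leap_step(x, 0.1, prop, S, rng), 0)
print(x)
```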