925 results for Weighted histogram analysis method
Abstract:
Morphological descriptors are practical and essential biomarkers for diagnosis and treatment selection in intracranial aneurysm management according to the current guidelines in use. Nevertheless, relatively little work has been dedicated to improving the three-dimensional quantification of aneurysmal morphology, automating the analysis, and hence reducing the inherent intra- and inter-observer variability of manual analysis. In this paper we propose a methodology for the automated isolation and morphological quantification of saccular intracranial aneurysms based on a 3D representation of the vascular anatomy.
Abstract:
PURPOSE: To evaluate the utility of inversion recovery with on-resonant water suppression (IRON) in combination with injection of the long-circulating monocrystalline iron oxide nanoparticle (MION)-47 for contrast material-enhanced magnetic resonance (MR) angiography. MATERIALS AND METHODS: Experiments were approved by the institutional animal care committee. Eleven rabbits were imaged at baseline before injection of a contrast agent and then serially 5-30 minutes, 2 hours, 1 day, and 3 days after a single intravenous bolus injection of 80 micromol of MION-47 per kilogram of body weight (n = 6) or 250 micromol/kg MION-47 (n = 5). Conventional T1-weighted MR angiography and IRON MR angiography were performed on a clinical 3.0-T imager. Signal-to-noise and contrast-to-noise ratios were measured in the aorta of rabbits in vivo. Venous blood was obtained from the rabbits before and after MION-47 injection for use in phantom studies. RESULTS: In vitro, blood that contained MION-47 appeared signal-attenuated on T1-weighted angiograms, while characteristic signal-enhanced dipolar fields were observed on IRON angiograms. In vivo, the vessel lumen was signal-attenuated on T1-weighted MR angiograms after MION-47 injection, while IRON supported high intravascular contrast by simultaneously providing positive signal within the vessels and suppressing background tissue (mean contrast-to-noise ratio, 61.9 +/- 12.4 [standard deviation] after injection vs 1.1 +/- 0.4 at baseline, P < .001). The contrast-to-noise ratio was higher on IRON MR angiograms than on conventional T1-weighted MR angiograms (9.0 +/- 2.5, P < .001 vs IRON MR angiography) and persisted up to 24 hours after MION-47 injection (76.2 +/- 15.9, P < .001 vs baseline). CONCLUSION: IRON MR angiography in conjunction with superparamagnetic nanoparticle administration provides high intravascular contrast over a long time window and without the need for image subtraction.
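As a rough illustration of how ROI-based signal-to-noise and contrast-to-noise ratios such as those above are typically computed, here is a minimal Python sketch. The ROI values, the convention of estimating noise as the background standard deviation, and the function name snr_cnr are illustrative assumptions, not the study's actual analysis pipeline.

```python
import numpy as np

def snr_cnr(vessel_roi, background_roi, noise_roi):
    """Signal-to-noise and contrast-to-noise ratios from ROI pixel values.

    vessel_roi, background_roi, noise_roi: 1-D arrays of pixel intensities
    sampled from the vessel lumen, adjacent tissue, and image background.
    """
    noise_sd = noise_roi.std(ddof=1)   # noise estimated as background SD
    snr = vessel_roi.mean() / noise_sd
    cnr = (vessel_roi.mean() - background_roi.mean()) / noise_sd
    return snr, cnr

# Example with synthetic ROI samples (values are illustrative only)
rng = np.random.default_rng(0)
vessel = rng.normal(900.0, 30.0, 200)
tissue = rng.normal(150.0, 25.0, 200)
noise = rng.normal(0.0, 12.0, 200)
print("SNR = %.1f, CNR = %.1f" % snr_cnr(vessel, tissue, noise))
```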
Abstract:
Power transformations of positive data tables, prior to applying the correspondence analysis algorithm, are shown to open up a family of methods with direct connections to the analysis of log-ratios. Two variations of this idea are illustrated. The first approach is simply to power the original data and perform a correspondence analysis; this method is shown to converge to unweighted log-ratio analysis as the power parameter tends to zero. The second approach is to apply the power transformation to the contingency ratios, that is, the values in the table relative to expected values based on the marginals; this method converges to weighted log-ratio analysis, or the spectral map. Two applications are described: first, a matrix of population genetic data which is inherently two-dimensional, and second, a larger cross-tabulation with higher dimensionality, from a linguistic analysis of several books.
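The convergence claimed above rests on the Box-Cox-type identity (q^a - 1)/a -> log q as a -> 0, applied to the contingency ratios before weighted double-centring. The following numeric sketch (the table and helper function are invented for illustration) shows the powered transform approaching the log-ratio transform as the power parameter shrinks:

```python
import numpy as np

def weighted_double_center(M, r, c):
    # subtract weighted row means, then weighted column means,
    # using the margins r and c as weights (as in the spectral map)
    M = M - (M * c).sum(1, keepdims=True)
    M = M - (r[:, None] * M).sum(0, keepdims=True)
    return M

N = np.array([[35., 10., 5.], [10., 20., 10.], [5., 15., 40.]])
P = N / N.sum()
r, c = P.sum(1), P.sum(0)
Q = P / np.outer(r, c)                     # contingency ratios

for alpha in (1.0, 0.5, 0.1, 0.01):
    Z = weighted_double_center((Q**alpha - 1.0) / alpha, r, c)
    L = weighted_double_center(np.log(Q), r, c)
    print(alpha, np.abs(Z - L).max())      # gap shrinks as alpha -> 0
```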
Abstract:
Aim This study used data from temperate forest communities to assess: (1) five different stepwise selection methods with generalized additive models, (2) the effect of weighting absences to ensure a prevalence of 0.5, (3) the effect of limiting absences beyond the environmental envelope defined by presences, (4) four different methods for incorporating spatial autocorrelation, and (5) the effect of integrating an interaction factor defined by a regression tree on the residuals of an initial environmental model. Location State of Vaud, western Switzerland. Methods Generalized additive models (GAMs) were fitted using the grasp package (generalized regression analysis and spatial predictions, http://www.cscf.ch/grasp). Results Model selection based on cross-validation appeared to be the best compromise between model stability and performance (parsimony) among the five methods tested. Weighting absences returned models that performed better than models fitted with the original sample prevalence. This appeared to be mainly due to the impact of very low prevalence values on evaluation statistics. Removing zeroes beyond the range of presences on main environmental gradients changed the set of selected predictors, and potentially their response curve shapes. Moreover, removing zeroes slightly improved model performance and stability when compared with the baseline model on the same data set. Incorporating a spatial trend predictor improved model performance and stability significantly. Even better models were obtained when including local spatial autocorrelation. A novel approach to including interactions proved to be an efficient way to account for interactions between all predictors at once. Main conclusions Models and spatial predictions of 18 forest communities were significantly improved by using either: (1) cross-validation as a model selection method, (2) weighted absences, (3) limited absences, (4) predictors accounting for spatial autocorrelation, or (5) a factor variable accounting for interactions between all predictors. The final choice of model strategy should depend on the nature of the available data and the specific study aims. Statistical evaluation is useful in searching for the best modelling practice. However, one should not neglect to consider the shapes and interpretability of response curves, as well as the resulting spatial predictions, in the final assessment.
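One common way to weight absences so that the effective prevalence is 0.5, as examined above, is to keep presences at weight 1 and scale every absence by the presence/absence count ratio. A minimal sketch under that assumption (the grasp package may implement the weighting differently):

```python
import numpy as np

def prevalence_weights(y):
    """Per-observation weights that make the weighted prevalence 0.5.

    y: binary presence/absence vector (1 = presence, 0 = absence).
    Presences keep weight 1; each absence is scaled so both classes
    contribute equally, e.g. when fitting a weighted GAM/GLM.
    """
    y = np.asarray(y)
    n_pres, n_abs = y.sum(), (y == 0).sum()
    return np.where(y == 1, 1.0, n_pres / n_abs)

y = np.array([1] * 30 + [0] * 270)       # raw prevalence 0.1
w = prevalence_weights(y)
print((w * y).sum() / w.sum())            # weighted prevalence = 0.5
```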
Abstract:
We consider two fundamental properties in the analysis of two-way tables of positive data: the principle of distributional equivalence, one of the cornerstones of correspondence analysis of contingency tables, and the principle of subcompositional coherence, which forms the basis of compositional data analysis. For an analysis to be subcompositionally coherent, it suffices to analyse the ratios of the data values. The usual approach to dimension reduction in compositional data analysis is to perform principal component analysis on the logarithms of ratios, but this method does not obey the principle of distributional equivalence. We show that by introducing weights for the rows and columns, the method achieves this desirable property. This weighted log-ratio analysis is theoretically equivalent to spectral mapping, a multivariate method developed almost 30 years ago for displaying ratio-scale data from biological activity spectra. The close relationship between spectral mapping and correspondence analysis is also explained, as well as their connection with association modelling. The weighted log-ratio methodology is applied here to frequency data in linguistics and to chemical compositional data in archaeology.
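A minimal sketch of the weighted log-ratio analysis described above: log-transform the table, double-centre it using the row and column margins as weights, and decompose with a weighted SVD. The example table and the coordinate scalings follow common correspondence-analysis conventions and are illustrative, not the paper's exact implementation.

```python
import numpy as np

def weighted_lra(N):
    """Weighted log-ratio analysis (spectral map) of a positive table N."""
    P = N / N.sum()
    r, c = P.sum(1), P.sum(0)                        # row/column masses
    L = np.log(P)
    L = L - (L * c).sum(1, keepdims=True)            # centre rows (column-weighted)
    L = L - (r[:, None] * L).sum(0, keepdims=True)   # centre columns (row-weighted)
    S = np.sqrt(r)[:, None] * L * np.sqrt(c)         # weighted matrix for the SVD
    U, sv, Vt = np.linalg.svd(S, full_matrices=False)
    rows = U / np.sqrt(r)[:, None] * sv              # principal row coordinates
    cols = Vt.T / np.sqrt(c)[:, None]                # standard column coordinates
    return rows, cols, sv

N = np.array([[35., 10., 5.], [10., 20., 10.], [5., 15., 40.]])
rows, cols, sv = weighted_lra(N)
print(sv)                                            # singular values per axis
```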
Abstract:
This paper establishes a general framework for metric scaling of any distance measure between individuals based on a rectangular individuals-by-variables data matrix. The method allows visualization of both individuals and variables, as well as preserving all the good properties of principal axis methods such as principal components and correspondence analysis, based on the singular-value decomposition, including the decomposition of variance into components along principal axes, which provide the numerical diagnostics known as contributions. The idea is inspired by the chi-square distance in correspondence analysis, which weights each coordinate by an amount calculated from the margins of the data table. In weighted metric multidimensional scaling (WMDS) we allow these weights to be unknown parameters which are estimated from the data to maximize the fit to the original distances. Once this extra weight-estimation step is accomplished, the procedure follows the classical path in decomposing a matrix and displaying its rows and columns in biplots.
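A rough sketch of the weight-estimation step in WMDS: treat the per-variable weights as free nonnegative parameters and fit them so that weighted Euclidean distances reproduce the target distances. The least-squares stress and the optimizer below are illustrative choices, not necessarily the paper's estimation procedure.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.distance import pdist

def fit_variable_weights(X, d_target):
    """Estimate nonnegative variable weights so that weighted Euclidean
    distances between rows of X approximate the target distances d_target
    (condensed form, as returned by scipy.spatial.distance.pdist)."""
    def stress(w):
        d_fit = pdist(X * np.sqrt(w))          # weighted Euclidean distances
        return ((d_fit - d_target) ** 2).sum()
    w0 = np.ones(X.shape[1])
    res = minimize(stress, w0, bounds=[(0, None)] * X.shape[1])
    return res.x

rng = np.random.default_rng(1)
X = rng.normal(size=(20, 4))
d_target = pdist(X * np.sqrt([4.0, 1.0, 0.25, 1.0]))  # hidden generating weights
print(fit_variable_weights(X, d_target).round(2))      # ~ [4, 1, 0.25, 1]
```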
Abstract:
We construct a weighted Euclidean distance that approximates any distance or dissimilarity measure between individuals that is based on a rectangular cases-by-variables data matrix. In contrast to regular multidimensional scaling methods for dissimilarity data, the method leads to biplots of individuals and variables while preserving all the good properties of dimension-reduction methods that are based on the singular-value decomposition. The main benefits are the decomposition of variance into components along principal axes, which provide the numerical diagnostics known as contributions, and the estimation of nonnegative weights for each variable. The idea is inspired by the distance functions used in correspondence analysis and in principal component analysis of standardized data, where the normalizations inherent in the distances can be considered as differential weighting of the variables. In weighted Euclidean biplots we allow these weights to be unknown parameters, which are estimated from the data to maximize the fit to the chosen distances or dissimilarities. These weights are estimated using a majorization algorithm. Once this extra weight-estimation step is accomplished, the procedure follows the classical path in decomposing the matrix and displaying its rows and columns in biplots.
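Once the nonnegative variable weights have been estimated (the paper uses a majorization algorithm, which is not reproduced here), the biplot step reduces to an ordinary SVD of the weighted, centred matrix. A simplified sketch under those assumptions; the column-axis scaling chosen here is one common biplot convention, not necessarily the authors':

```python
import numpy as np

def weighted_euclidean_biplot(X, w):
    """Biplot coordinates after the weight-estimation step: apply the
    estimated nonnegative variable weights, then use the ordinary SVD,
    so variance decomposes along principal axes as usual."""
    Xw = (X - X.mean(0)) * np.sqrt(w)     # centre, then weight the variables
    U, sv, Vt = np.linalg.svd(Xw, full_matrices=False)
    F = U * sv                            # row (individual) coordinates
    G = Vt.T / np.sqrt(w)[:, None]        # variable axes, back on original scale
    return F, G, sv**2 / (sv**2).sum()    # plus proportion of variance per axis

rng = np.random.default_rng(2)
X = rng.normal(size=(15, 3))
F, G, var_explained = weighted_euclidean_biplot(X, np.array([2.0, 1.0, 0.5]))
print(var_explained.round(3))
```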
Abstract:
Monitoring thunderstorm activity is an essential part of operational weather surveillance given their potential hazards, including lightning, hail, heavy rainfall, strong winds or even tornadoes. This study has two main objectives: firstly, the description of a methodology, based on radar and total lightning data, to characterise thunderstorms in real time; secondly, the application of this methodology to 66 thunderstorms that affected Catalonia (NE Spain) in the summer of 2006. An object-oriented tracking procedure is employed, where different observation data types generate four different types of objects (radar 1-km CAPPI reflectivity composites, radar reflectivity volumetric data, cloud-to-ground lightning data and intra-cloud lightning data). In the framework proposed, these objects are the building blocks of a higher-level object, the thunderstorm. The methodology is demonstrated with a dataset of thunderstorms whose main characteristics, along the complete life cycle of the convective structures (development, maturity and dissipation), are described statistically. The development and dissipation stages present similar durations in most cases examined. In contrast, the duration of the maturity phase is much more variable and related to thunderstorm intensity, defined here in terms of lightning flash rate. Most of the IC and CG flash activity is registered in the maturity stage. In the development stage few CG flashes are observed (2% to 5%), while in the dissipation phase a few more CG flashes are observed (10% to 15%). Additionally, a selection of thunderstorms is used to examine general life cycle patterns, obtained from the analysis of thunderstorm parameters normalized with respect to total thunderstorm duration and the maximum value of the variables considered. Among other findings, the study indicates that the normalized duration of the three stages of the thunderstorm life cycle is similar in most thunderstorms, with the longest duration corresponding to the maturity stage (approximately 80% of the total time).
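A small sketch of the normalization described above: rescale a thunderstorm parameter series with respect to total duration and maximum value so that life cycles of storms with different lengths and intensities can be overlaid. The sample flash-rate series is invented for illustration.

```python
import numpy as np

def normalize_life_cycle(t, x):
    """Normalise a thunderstorm parameter series (e.g. total flash rate)
    to [0, 1] in both time (relative to storm duration) and amplitude
    (relative to the series maximum)."""
    t = np.asarray(t, float)
    x = np.asarray(x, float)
    return (t - t[0]) / (t[-1] - t[0]), x / x.max()

t_min = np.arange(0, 90, 6)   # observation times (minutes), 15 samples
flash_rate = np.array([1, 3, 8, 15, 30, 42, 45, 44, 38, 25, 12, 6, 3, 1, 0])
tn, xn = normalize_life_cycle(t_min, flash_rate)
```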
Abstract:
Explicitly correlated coupled-cluster calculations of intermolecular interaction energies for the S22 benchmark set of Jurecka, Sponer, Cerny, and Hobza (Phys. Chem. Chem. Phys. 2006, 8, 1985) are presented. Results obtained with the recently proposed CCSD(T)-F12a method and augmented double-zeta basis sets are found to be in very close agreement with basis-set-extrapolated conventional CCSD(T) results. Furthermore, we propose a dispersion-weighted MP2 (DW-MP2) approximation that combines the good accuracy of MP2 for complexes with predominantly electrostatic bonding and of SCS-MP2 for dispersion-dominated ones. The MP2-F12 and SCS-MP2-F12 correlation energies are weighted by a switching function that depends on the relative HF and correlation contributions to the interaction energy. For the S22 set, this yields a mean absolute deviation of 0.2 kcal/mol from the CCSD(T)-F12a results. The method, which allows accurate results to be obtained at low cost, is also tested for a number of dimers that are not in the training set.
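Schematically, DW-MP2 interpolates between the MP2 and SCS-MP2 correlation contributions with a switching function of the relative HF and correlation contributions. The sketch below uses an assumed tanh switch with placeholder parameters a and b; the published switching function and its fitted parameters differ, so this only illustrates the structure of the method.

```python
import numpy as np

def dw_mp2_interaction(e_hf, e_mp2_corr, e_scsmp2_corr, a=0.0, b=1.0):
    """Dispersion-weighted MP2 sketch: blend MP2 and SCS-MP2 correlation
    contributions to the interaction energy via a switching function of
    the HF-to-correlation ratio.

    The tanh form and the parameters a, b are illustrative placeholders,
    not the fitted function from the DW-MP2 paper.
    """
    ratio = e_hf / e_mp2_corr
    w = 0.5 * (1.0 + np.tanh(a + b * ratio))   # w -> 1: electrostatic, use MP2
    return e_hf + w * e_mp2_corr + (1.0 - w) * e_scsmp2_corr

# Hydrogen-bonded-like case: large attractive HF term relative to correlation
# (all energies in kcal/mol, values invented for illustration)
print(dw_mp2_interaction(e_hf=-4.0, e_mp2_corr=-2.0, e_scsmp2_corr=-1.6))
```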
Abstract:
The study of the thermal behavior of complex packages such as multichip modules (MCMs) is usually carried out by measuring the so-called thermal impedance response, that is, the transient temperature after a power step. From the analysis of this signal, the thermal frequency response can be estimated, and consequently, compact thermal models may be extracted. We present a method to obtain an estimate of the time-constant distribution underlying the observed transient. The method is based on an iterative deconvolution that produces an approximation to the time-constant spectrum while preserving a convenient convolution form. This method is applied to the thermal response of a microstructure analyzed by the finite element method, as well as to the measured thermal response of a transistor array integrated circuit (IC) in an SMD package.
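The convolution structure underlying this kind of time-constant spectrum extraction (as in network identification by deconvolution) can be sketched as follows: on a logarithmic time axis, the derivative of the heating curve is the spectrum convolved with a fixed kernel, and an iterative scheme inverts that relation. The Van Cittert-style iteration below is a generic stand-in for the authors' deconvolution, included only to show the convolution form being preserved.

```python
import numpy as np

# On the logarithmic time axis z = ln(t), the derivative of the heating
# curve a(z) is the convolution of the time-constant spectrum R(z) with
# the fixed kernel w(z) = exp(z - exp(z)); iterative deconvolution then
# recovers an approximation to R(z).  Generic sketch, not the paper's code.
z = np.linspace(-8, 4, 600)
dz = z[1] - z[0]
kernel = np.exp(z - np.exp(z))

# assumed two-lobe time-constant spectrum (illustrative)
R = np.exp(-((z + 3) ** 2) / 0.5) + 0.5 * np.exp(-((z - 1) ** 2) / 0.8)

da_dz = np.convolve(R, kernel, mode="same") * dz   # forward (convolution) model

# simple Van Cittert-style iteration standing in for the paper's scheme
R_est = np.copy(da_dz)
for _ in range(200):
    R_est += da_dz - np.convolve(R_est, kernel, mode="same") * dz
    R_est = np.clip(R_est, 0.0, None)              # spectrum must be nonnegative
```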
Abstract:
Over the last 10 years, diffusion-weighted imaging (DWI) has become an important tool to investigate white matter (WM) anomalies in schizophrenia. Despite technological improvements and the exponential use of this technique, discrepancies remain and little is known about the optimal parameters to apply for diffusion weighting during image acquisition. Specifically, high b-value diffusion-weighted imaging, known to be more sensitive to slow diffusion, is not widely used, even though subtle myelin alterations such as those thought to occur in schizophrenia are likely to affect slow-diffusing protons. Schizophrenia patients and healthy controls were scanned with a high b-value (4000 s/mm(2)) protocol. Apparent diffusion coefficient (ADC) measures turned out to be very sensitive in detecting differences between schizophrenia patients and healthy volunteers, even in a relatively small sample. We speculate that this is related to the sensitivity of high b-value imaging to the slow-diffusing compartment, believed to reflect mainly the intra-axonal and myelin-bound water pool. We also compared these results to a low b-value imaging experiment performed on the same population in the same scanning session. Even though the acquisition protocols are not strictly comparable, we noticed important differences in sensitivity in favor of high b-value imaging, warranting further exploration.
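For reference, the ADC measure discussed above comes from the monoexponential diffusion model S(b) = S0 * exp(-b * ADC); with two acquisitions it is recovered voxel-wise as in this minimal sketch (the signal values are synthetic):

```python
import numpy as np

def adc_map(s0, sb, b=4000.0):
    """Apparent diffusion coefficient from two diffusion weightings.

    s0: signal at b = 0 (or a low-b reference); sb: signal at b s/mm^2.
    Uses the monoexponential model S(b) = S0 * exp(-b * ADC); at high
    b-values the signal is weighted toward the slow-diffusing pool.
    Accepts scalars or whole image arrays.
    """
    s0 = np.asarray(s0, float)
    sb = np.asarray(sb, float)
    return np.log(s0 / sb) / b          # ADC in mm^2/s

print(adc_map(1000.0, 150.0))           # ~4.7e-4 mm^2/s for this synthetic voxel
```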
Abstract:
The rate of carbon dioxide production is commonly used as a measure of microbial activity in the soil. The traditional method of CO2 determination involves trapping CO2 in an alkali solution and then determining the CO2 concentration indirectly by titration of the remaining alkali in the solution. This method is still commonly employed in laboratories throughout the world due to its relative simplicity and the fact that it does not require expensive, specific equipment. However, there are several drawbacks: the method is time-consuming, requires large amounts of chemicals, and the consistency of results depends on the operator's skill. With this in mind, an improved method was developed to analyze CO2 captured in alkali traps, which is cheap and relatively simple, with a substantially shorter sample handling time and reproducibility equivalent to the traditional titration method. A comparison of the concentration values determined by gas phase flow injection analysis (GPFIA) and titration showed no significant difference (p > 0.05), but GPFIA has the advantage that only a tenth of the sample volume of the titration method is required. The GPFIA system does not require the purchase of new, costly equipment, as the device was constructed from items commonly found in laboratories, with suggestions for alternative configurations for other detection units. Furthermore, GPFIA for CO2 analysis can be equally applied to samples obtained from either the headspace of microcosms or from a sampling chamber that allows CO2 to be released from alkali trapping solutions. The optimised GPFIA method was applied to analyse CO2 released from degrading hydrocarbons at a site contaminated by diesel spillage.
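The traditional back-titration calculation mentioned above can be made concrete: CO2 absorbed in a NaOH trap consumes two moles of alkali per mole of CO2, so the blank-minus-sample titrant difference gives the trapped amount. A minimal sketch of that classic calculation (volumes and molarity are illustrative, and endpoint corrections such as BaCl2 precipitation of carbonate are omitted):

```python
def co2_trapped_mg(v_blank_ml, v_sample_ml, hcl_molarity):
    """Classic back-titration estimate of CO2 captured in an alkali trap.

    CO2 + 2 NaOH -> Na2CO3 + H2O, so each mole of trapped CO2 consumes two
    moles of NaOH; the NaOH left over is titrated with HCl.  The difference
    between blank and sample titrant volumes therefore corresponds to the
    alkali neutralised by CO2 (two moles of HCl-equivalent per mole of CO2).
    """
    mol_hcl_diff = (v_blank_ml - v_sample_ml) / 1000.0 * hcl_molarity
    mol_co2 = mol_hcl_diff / 2.0
    return mol_co2 * 44.01 * 1000.0     # molar mass of CO2 -> milligrams

# 0.1 M HCl; blank trap took 25.0 mL, sample trap only 18.4 mL
print(co2_trapped_mg(25.0, 18.4, 0.1))  # ~14.5 mg CO2
```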
Abstract:
Cerebral perfusion-weighted imaging (PWI) in neonates is known to be technically difficult, and there are very few published studies on its use in preterm infants. In this paper, we describe a convenient method to perform PWI in neonates, a method only recently used in newborns. A device was used to manually inject gadolinium contrast material intravenously in an easy, quick and reproducible way. We studied 28 newborn infants with various gestational ages and weights, including both normal infants and those suffering from different brain pathologies. A signal intensity-time curve was obtained for each infant, allowing us to build perfusion maps. This technique offered a fast and easy way to manually inject a bolus of gadolinium contrast material, which is essential in performing PWI in neonates. Cerebral PWI is technically feasible and reproducible in neonates of various gestational ages and with various pathologies.
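A common way to turn the signal intensity-time curve described above into a perfusion map is to convert the bolus-induced signal drop into a delta-R2* concentration proxy and integrate it over the first pass. The sketch below follows that standard dynamic susceptibility contrast recipe; the echo time, sampling interval, and signal values are illustrative assumptions, not necessarily the authors' exact pipeline.

```python
import numpy as np

def relative_cbv(signal, s0, te_s, dt_s):
    """Relative cerebral blood volume from a bolus-passage signal-time curve.

    signal: signal intensity during bolus passage; s0: pre-bolus baseline;
    te_s: echo time (s); dt_s: sampling interval (s).  The signal drop is
    converted to a contrast-agent concentration proxy via
    delta_R2*(t) = -ln(S(t)/S0) / TE and integrated over the curve.
    """
    delta_r2s = -np.log(np.asarray(signal, float) / s0) / te_s
    return delta_r2s.sum() * dt_s       # area under the curve (arbitrary units)

s0 = 500.0                              # baseline signal (illustrative)
signal = np.array([500, 480, 400, 310, 280, 330, 420, 470, 490], float)
print(relative_cbv(signal, s0, te_s=0.05, dt_s=1.5))
```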
Abstract:
Despite the considerable environmental importance of mercury (Hg), given its high toxicity and ability to contaminate large areas via atmospheric deposition, little is known about its activity in soils, especially tropical soils, in comparison with other heavy metals. This lack of information about Hg arises because analytical methods for the determination of Hg are more laborious and expensive than those for other heavy metals. The situation is even more precarious regarding the speciation of Hg in soils, since sequential extraction methods are also inefficient for this metal. The aim of this paper is to present a technique of thermal desorption associated with atomic absorption spectrometry, TDAAS, as an efficient tool for the quantitative determination of Hg in soils. The method consists of the release of Hg by heating, followed by its quantification by atomic absorption spectrometry. It was developed by constructing calibration curves in different soil samples based on increasing volumes of standard Hg2+ solutions. Performance parameters, namely accuracy, precision, and the limits of quantification and detection, were evaluated. No matrix interference was detected. Certified reference samples and comparison with a Direct Mercury Analyzer, DMA (another highly recognized technique), were used to validate the method, which proved to be accurate and precise.
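The calibration-curve validation described above typically reduces to a linear fit of instrument response against added Hg, with detection and quantification limits taken as 3*sigma/slope and 10*sigma/slope of the fit residuals. A minimal sketch with invented numbers:

```python
import numpy as np

def calibration_stats(added_ng, response):
    """Linear calibration for standard additions of Hg2+ to a soil sample,
    plus the usual 3*sigma/slope (LOD) and 10*sigma/slope (LOQ) limits,
    where sigma is the residual standard deviation of the fit."""
    slope, intercept = np.polyfit(added_ng, response, 1)
    resid = response - (slope * added_ng + intercept)
    sigma = resid.std(ddof=2)            # two fitted parameters
    lod = 3.0 * sigma / slope
    loq = 10.0 * sigma / slope
    return slope, intercept, lod, loq

added = np.array([0.0, 10.0, 20.0, 40.0, 80.0])      # ng Hg added (illustrative)
absorb = np.array([0.012, 0.055, 0.101, 0.198, 0.391])  # absorbance (illustrative)
print(calibration_stats(added, absorb))
```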