Abstract:
This paper presents and discusses the use of Bayesian procedures - introduced through the use of Bayesian networks in Part I of this series of papers - for 'learning' probabilities from data. The discussion will relate to a set of real data on characteristics of black toners commonly used in printing and copying devices. Particular attention is drawn to the incorporation of the proposed procedures as an integral part of probabilistic inference schemes (notably in the form of Bayesian networks) that are intended to address uncertainties related to particular propositions of interest (e.g., whether or not a sample originates from a particular source). The conceptual tenets of the proposed methodologies are presented along with aspects of their practical implementation using currently available Bayesian network software.
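The 'learning' step described above can be illustrated with the simplest conjugate case. The sketch below is a hypothetical Beta-Binomial update on invented counts; it is not the paper's actual toner data or network model:

```python
# Hedged sketch: learning a probability from count data with a conjugate
# Beta-Binomial update. The counts below are invented for illustration.

def beta_binomial_update(alpha, beta, successes, failures):
    """Return the posterior Beta parameters after observing count data."""
    return alpha + successes, beta + failures

# Uniform Beta(1, 1) prior; suppose 18 of 20 sampled toners show a trait.
a, b = beta_binomial_update(1.0, 1.0, successes=18, failures=2)
posterior_mean = a / (a + b)  # point estimate of the probability
print(round(posterior_mean, 3))  # prints 0.864
```

The posterior Beta(19, 3) would then feed the corresponding conditional probability table of a Bayesian network node.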
Abstract:
This contribution compares existing and newly developed techniques for geometrically representing mean-variance-skewness portfolio frontiers, based on the rather widely adopted methodology of polynomial goal programming (PGP) on the one hand and the more recent approach based on the shortage function on the other. Moreover, we explain the workings of these different methodologies in detail and provide graphical illustrations. Inspired by these illustrations, we prove a generalization of the well-known two-fund separation theorem from traditional mean-variance portfolio theory.
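As a rough illustration of the three quantities such frontiers trade off, the sketch below computes the mean, variance, and skewness of a portfolio return series for invented data; it is not the PGP or shortage-function procedure itself:

```python
import numpy as np

# Hedged sketch: the three moments traded off in mean-variance-skewness
# portfolio selection, for a given weight vector w over asset returns R
# (rows = observations). The return data below are invented.

def portfolio_moments(R, w):
    r = R @ w                       # portfolio return series
    mean = r.mean()
    var = r.var()
    skew = ((r - mean) ** 3).mean() / var ** 1.5  # standardized skewness
    return mean, var, skew

rng = np.random.default_rng(0)
R = rng.normal(0.01, 0.05, size=(500, 3))  # toy returns for 3 assets
w = np.array([0.5, 0.3, 0.2])
m, v, s = portfolio_moments(R, w)
```

PGP would then maximize the mean and skewness while minimizing the variance via goal deviations; the shortage function measures the distance to the frontier in all three dimensions at once.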
Abstract:
MR connectomics is an emerging framework in neuroscience that combines diffusion MRI and whole brain tractography methodologies with the analytical tools of network science. In the present work we review the current methods enabling structural connectivity mapping with MRI and show how such data can be used to infer new information about both brain structure and function. We also list the technical challenges that should be addressed in the future to achieve high-resolution maps of structural connectivity. From the resulting tremendous amount of data that is going to be accumulated soon, we discuss what new challenges must be tackled in terms of methods for advanced network analysis and visualization, as well as data organization and distribution. This new framework is well suited to investigate key questions on brain complexity and we try to foresee what fields will most benefit from these approaches.
Abstract:
Coronary magnetic resonance angiography (MRA) is a powerful noninvasive technique with high soft-tissue contrast for the visualization of the coronary anatomy without X-ray exposure. Due to the small dimensions and tortuous nature of the coronary arteries, a high spatial resolution and sufficient volumetric coverage have to be obtained. However, this necessitates scanning times that are typically much longer than one cardiac cycle. By collecting image data during multiple RR intervals, one can successfully acquire coronary MR angiograms. However, constant cardiac contraction and relaxation, as well as respiratory motion, adversely affect image quality. Therefore, sophisticated motion-compensation strategies are needed. Furthermore, a high contrast between the coronary arteries and the surrounding tissue is mandatory. In the present article, challenges and solutions of coronary imaging are discussed, and results obtained in both healthy and diseased states are reviewed. This includes preliminary data obtained with state-of-the-art techniques such as steady-state free precession (SSFP), whole-heart imaging, intravascular contrast agents, coronary vessel wall imaging, and high-field imaging. Simultaneously, the utility of electron beam computed tomography (EBCT) and multidetector computed tomography (MDCT) for the visualization of the coronary arteries is discussed.
Abstract:
Nowadays, the joint exploitation of images acquired daily by remote sensing instruments and of images available from archives allows a detailed monitoring of the transitions occurring at the surface of the Earth. These modifications of the land cover generate spectral discrepancies that can be detected via the analysis of remote sensing images. Independently of the origin of the images and of the type of surface change, correct processing of such data implies the adoption of flexible, robust and possibly nonlinear methods, to correctly account for the complex statistical relationships characterizing the pixels of the images. This Thesis deals with the development and the application of advanced statistical methods for multi-temporal optical remote sensing image processing tasks. Three different families of machine learning models have been explored and fundamental solutions for change detection problems are provided. In the first part, change detection with user supervision has been considered. In a first application, a nonlinear classifier has been applied with the intent of precisely delineating flooded regions from a pair of images. In a second case study, the spatial context of each pixel has been injected into another nonlinear classifier to obtain a precise mapping of new urban structures. In both cases, the user provides the classifier with examples of what they believe has changed or not. In the second part, a completely automatic and unsupervised method for precise binary detection of changes has been proposed. The technique allows a very accurate mapping without any user intervention, resulting particularly useful when readiness and reaction times of the system are a crucial constraint. In the third part, the problem of statistical distributions shifting between acquisitions is studied. Two approaches to transforming the pair of bi-temporal images and reducing their differences unrelated to changes in land cover are studied.
The methods align the distributions of the images, so that the pixel-wise comparison can be carried out with higher accuracy. Furthermore, the second method can deal with images from different sensors, regardless of the dimensionality of the data or the spectral information content. This opens the door to possible solutions for a crucial problem in the field: detecting changes when the images have been acquired by two different sensors.
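One standard way to align the distributions of two images before pixel-wise comparison is histogram matching; the thesis' actual transforms may differ. A minimal numpy sketch on invented single-band data:

```python
import numpy as np

# Hedged sketch: map one image band so its empirical CDF matches another's
# (histogram matching), a common way to reduce acquisition-related radiometric
# shifts before change detection. Data below are invented.

def match_histogram(source, reference):
    """Remap source values so their empirical CDF matches the reference's."""
    s_vals, s_idx, s_counts = np.unique(source.ravel(),
                                        return_inverse=True,
                                        return_counts=True)
    r_vals, r_counts = np.unique(reference.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_counts) / source.size
    r_cdf = np.cumsum(r_counts) / reference.size
    matched = np.interp(s_cdf, r_cdf, r_vals)  # invert reference CDF
    return matched[s_idx].reshape(source.shape)

img_t1 = np.random.default_rng(1).normal(100.0, 10.0, (64, 64))
img_t2 = img_t1 + 25.0  # same scene, uniformly shifted radiometry
aligned = match_histogram(img_t2, img_t1)
```

For a pure radiometric offset, as in this toy case, the matching recovers the first acquisition exactly; real bi-temporal pairs would retain residual differences caused by genuine land-cover change.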
Abstract:
Four methods were tested to assess the fire-blight disease response on grafted pear plants. The leaves of the plants were inoculated with Erwinia amylovora suspensions by pricking with clamps, cutting with scissors, local infiltration, and painting a bacterial suspension onto the leaves with a paintbrush. The effects of the inoculation methods were studied in dose-time-response experiments carried out in climate chambers under quarantine conditions. A modified Gompertz model was used to analyze the disease-time relationships and provided information on the rate of infection progression (rg) and time delay to the start of symptoms (t0). The disease-pathogen-dose relationships were analyzed according to a hyperbolic saturation model in which the median effective dose (ED50) of the pathogen and maximum disease level (ymax) were determined. Localized infiltration into the leaf mesophyll resulted in the early (short t0) but slow (low rg) development of infection whereas in leaves pricked with clamps disease symptoms developed late (long t0) but rapidly (high rg). Paintbrush inoculation of the plants resulted in an incubation period of medium length, a moderate rate of infection progression, and low ymax values. In leaves inoculated with scissors, fire-blight symptoms developed early (short t0) and rapidly (high rg), and with the lowest ED50 and the highest ymax.
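The two model forms named above can be sketched as follows, using one common parameterization of the modified Gompertz curve (Zwietering-style, with asymptote ymax, maximum rate rg, and lag t0) and a simple hyperbolic saturation; the paper's exact equations and fitted values may differ:

```python
import math

# Hedged sketch of the two models named in the abstract, in a common
# parameterization; parameter values below are invented for illustration.

def gompertz(t, ymax, rg, t0):
    """Disease level at time t: asymptote ymax, max rate rg, time delay t0."""
    return ymax * math.exp(-math.exp(rg * math.e / ymax * (t0 - t) + 1.0))

def hyperbolic(dose, ymax, ed50):
    """Disease level vs pathogen dose; half of ymax is reached at dose ed50."""
    return ymax * dose / (ed50 + dose)

# At the median effective dose, the response is half the maximum:
half = hyperbolic(1e6, ymax=100.0, ed50=1e6)  # 50.0
```

Fitting rg, t0, ED50, and ymax to observed disease-time and disease-dose data (e.g. by nonlinear least squares) yields the comparisons reported in the abstract.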
Abstract:
A short overview is given of the most important analytical body composition methods. The principles, advantages, and limitations of the methods are discussed, also in relation to other fields of research such as energy metabolism. Attention is given to some new developments in body composition research such as chemical multiple-compartment models, computerized tomography or nuclear magnetic resonance imaging (tissue level), and multifrequency bioelectrical impedance. Possible future directions of body composition research in the light of these new developments are discussed.
Abstract:
Two common methods of accounting for electric-field-induced perturbations to molecular vibration are analyzed and compared. The first method is based on a perturbation-theoretic treatment and the second on a finite-field treatment. The relationship between the two, which is not immediately apparent, is established by developing an algebraic formalism for the latter. Some of the higher-order terms in this development are documented here for the first time. As well as considering vibrational dipole polarizabilities and hyperpolarizabilities, we also make mention of the vibrational Stark effect.
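The finite-field idea can be illustrated by differentiating a toy energy function numerically with respect to the applied field; a real calculation would replace the invented quadratic E(F) with electronic-structure energies at each field strength:

```python
# Hedged sketch of the finite-field treatment: electrical properties appear
# as numerical derivatives of the energy with respect to an applied field F.
# The toy energy E(F) = E0 - mu*F - 0.5*alpha*F**2 and the values of mu and
# alpha are invented stand-ins for real electronic-structure results.

def energy(F, e0=-1.0, mu=0.4, alpha=2.0):
    return e0 - mu * F - 0.5 * alpha * F * F

def central_diff(f, x, h):
    return (f(x + h) - f(x - h)) / (2.0 * h)

h = 1e-3
mu_num = -central_diff(energy, 0.0, h)  # dipole moment = -dE/dF at F = 0
alpha_num = -(energy(h) - 2.0 * energy(0.0) + energy(-h)) / h**2  # -d2E/dF2
```

The perturbation-theoretic route instead expresses the same quantities analytically; the paper's algebraic formalism connects the two expansions term by term.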
Abstract:
A procedure based on quantum molecular similarity measures (QMSM) has been used to compare electron densities obtained from conventional ab initio and density functional methodologies at their respective optimized geometries. This method has been applied to a series of small molecules which have experimentally known properties and molecular bonds of diverse degrees of ionicity and covalency. Results show that in most cases the electron densities obtained from density functional methodologies are of similar quality to post-Hartree-Fock generalized densities. For molecules where Hartree-Fock methodology yields erroneous results, the density functional methodology is shown to usually yield more accurate densities than those provided by second-order Møller-Plesset perturbation theory.
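An overlap-type similarity measure of the QMSM kind can be sketched on a one-dimensional toy grid, with Gaussians standing in for real ab initio or DFT densities (the measures in the paper are three-dimensional integrals over actual electron densities):

```python
import numpy as np

# Hedged sketch: an overlap similarity Z_AB = integral of rho_A * rho_B,
# with a Carbó-like normalized index. The 1-D Gaussian "densities" below
# are invented stand-ins for real computed electron densities.

x = np.linspace(-5.0, 5.0, 2001)
dx = x[1] - x[0]
rho_a = np.exp(-x**2)            # toy density A
rho_b = np.exp(-(x - 0.5)**2)    # toy density B, slightly displaced

z_ab = np.sum(rho_a * rho_b) * dx  # overlap similarity measure
z_aa = np.sum(rho_a * rho_a) * dx
z_bb = np.sum(rho_b * rho_b) * dx
c_ab = z_ab / np.sqrt(z_aa * z_bb)  # normalized index in (0, 1]
```

Comparing c_ab between density pairs (e.g. DFT vs. post-Hartree-Fock densities of the same molecule) quantifies how similar the two descriptions are.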
Abstract:
In the present paper we discuss and compare two different energy decomposition schemes: Mayer's Hartree-Fock energy decomposition into diatomic and monoatomic contributions [Chem. Phys. Lett. 382, 265 (2003)], and the Ziegler-Rauk dissociation energy decomposition [Inorg. Chem. 18, 1558 (1979)]. The Ziegler-Rauk scheme is based on a separation of a molecule into fragments, while Mayer's scheme can be used in the cases where a fragmentation of the system into clearly separable parts is not possible. In the Mayer scheme, the density of a free atom is deformed to give the one-atom Mulliken density that subsequently interacts to give rise to the diatomic interaction energy. We give a detailed analysis of the diatomic energy contributions in the Mayer scheme and a close look at the one-atom Mulliken densities. The Mulliken density ρA has a single large maximum around the nuclear position of the atom A, but exhibits slightly negative values in the vicinity of neighboring atoms. The main connecting point between both analysis schemes is the electrostatic energy. Both decomposition schemes utilize the same electrostatic energy expression, but differ in how fragment densities are defined. In the Mayer scheme, the electrostatic component originates from the interaction of the Mulliken densities, while in the Ziegler-Rauk scheme, the undisturbed fragment densities interact. The values of the electrostatic energy resulting from the two schemes differ significantly but typically have the same order of magnitude. Both methods are useful and complementary since Mayer's decomposition focuses on the energy of the finally formed molecule, whereas the Ziegler-Rauk scheme describes the bond formation starting from undeformed fragment densities.
Abstract:
OBJECTIVE: Ultrasound is a useful tool when looking for indirect evidence in favor of pulmonary embolism. The aim of this study was to determine the incidence of acute cor pulmonale and deep venous thrombosis revealed by ultrasonographic techniques in a population of patients presenting with pulmonary embolism. METHODS: 96 consecutive patients with a mean (+/- SD) age of 65 +/- 15 years, admitted to our hospital for pulmonary embolism, were included in this study. The diagnosis of pulmonary embolism was made either by spiral computed tomography or selective pulmonary angiography. Each patient subsequently underwent both trans-thoracic echocardiography and venous ultrasonography. The diagnostic criterion used for defining acute cor pulmonale by echocardiography was a right to left ventricular end-diastolic area ratio greater than or equal to 0.6. Diagnosis of deep venous thrombosis was supported by the visualization of thrombi or vein incompressibility and/or the absence of venous flow or loss of flow variability by venous ultrasonography. RESULTS: Using ultrasound, acute cor pulmonale was found in 63% of our patients, while 79% were found to have deep venous thrombosis and 92% of the patients had either acute cor pulmonale or deep venous thrombosis or both. All of the patients with proximal pulmonary embolism had acute cor pulmonale and/or deep venous thrombosis. The presence of acute cor pulmonale on echocardiography was significantly higher in patients with proximal pulmonary embolism (p < 0.0001). CONCLUSION: This study emphasizes the potential value of ultrasonographic techniques in the diagnosis of acute pulmonary embolism.
Abstract:
There has been confusion about the subunit stoichiometry of the degenerin family of ion channels. Recently, a crystal structure of acid-sensing ion channel (ASIC) 1a revealed that it assembles as a trimer. Here, we used atomic force microscopy (AFM) to image unprocessed ASIC1a bound to mica. We detected a mixture of subunit monomers, dimers and trimers. In some cases, triple-subunit clusters were clearly visible, confirming the trimeric structure of the channel, and indicating that the trimer sometimes disaggregated after adhesion to the mica surface. This AFM-based technique will now enable us to determine the subunit arrangement within heteromeric ASICs.