987 results for Quantitative Interpretation
Abstract:
The methodology of integrated interpretation of the geological and geophysical data observed along a profile of the Ceará continental margin makes it possible to identify and integrate characteristics peculiar to each type of data. In this way, the most probable location of important structural features can be defined, such as the boundary between the continental and oceanic crusts and the foot of the continental slope, the object of the present study. According to Article 76 (paragraph 4, item b) of the United Nations Convention on the Law of the Sea, the foot of the slope is defined as the point of maximum change in the gradient at its base. However, this definition, although simple in the physiographic context, is not sufficient to locate the foot of the slope as prescribed by the Convention, which is why geophysical methods are applied. Implicit in the geophysical-geological context is the quantitative interpretation of free-air gravity anomalies, which allows the delineation of a geophysical model representing the subsurface, whose purpose is to provide geological support for the integrated interpretation of the aforementioned data. An automatic curve-fitting procedure combining systematic-search and derivative-based inversion techniques was used to generate the geophysical model. The rigorous application of constraints at the outset, and their constant re-evaluation through an interactive process between seismics and gravimetry during the quantitative interpretation of the free-air anomalies, ensured that the final geophysical model remained within the geological standards for the area, notably with respect to isostatic equilibrium (Airy theory). The objective of the present work is to study the geological and geophysical characteristics observed along a profile of the Ceará continental margin (LEPLAC III), particularly regarding the foot of the slope, seeking to establish the applicability of a methodology for the integrated interpretation of these data whose purpose is to define, in a systematic way, the most probable location of this physiographic feature. The integrated interpretation methodology employed proved effective for this purpose. It was possible to integrate: (i) the physiographic location (distance from the coast and depth) of the foot of the slope; (ii) the zone of tectonic instability evidenced by the faulting common in this region; (iii) the end of a disturbed magnetic zone, associated with a minimum in the magnetic anomaly curve, which possibly marks the beginning of a magnetically quiet zone known as the E anomaly; and (iv) an inflection point in the free-air anomaly curve, associated with the gravimetric effect of the density contrasts among the continental crust, the sediments and the seawater, evidenced by the geometry of the slope. It was also possible to define the most probable location of the boundary between the continental and oceanic crusts. Given the rigor with which the inversion techniques and constraints were applied, it is likely that the correlations among the characteristics intrinsic to each type of data drawn in the conclusion of this work are well founded and can be confirmed. The condition for this is the application of the methodology established here to a larger number of profiles.
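As a concrete illustration of the Article 76 criterion above, the following minimal Python sketch locates the foot of the slope on a synthetic bathymetric profile as the point where the seafloor gradient changes fastest at the base of the slope; the profile shape and all numbers are hypothetical, not data from the LEPLAC III survey.

```python
# Sketch: foot of the slope as the point of maximum change in gradient
# (per Article 76 of UNCLOS), on a hypothetical bathymetric profile.
import numpy as np

x = np.linspace(0, 200e3, 401)                      # distance from coast (m)
depth = 4000 / (1 + np.exp(-(x - 100e3) / 10e3))    # idealized shelf-slope-rise (m)

gradient = np.gradient(depth, x)     # first derivative: seafloor slope
curvature = np.gradient(gradient, x) # second derivative: change of gradient

# The foot of the slope lies where the slope flattens into the rise,
# i.e. where the gradient decreases fastest (most negative curvature).
i_fos = np.argmin(curvature)
print(f"foot of slope at x = {x[i_fos]/1e3:.1f} km, depth = {depth[i_fos]:.0f} m")
```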
Abstract:
Free-air gravity anomalies along profiles perpendicular to a passive continental margin exhibit a standard configuration. This configuration is satisfactorily explained by a geophysical model formed by a distribution of two-dimensional horizontal discontinuities. An automatic random-search process is proposed for the quantitative interpretation of the data. Using the flexible-polyhedron (Simplex) method, the main parameters of the model (the density contrast, depth, throw and location of each discontinuity) could be found, provided the ratio of the number of data points to the number of parameters to be determined is favorable. Over the slope region, the free-air anomalies of the continental margin can be explained by a single horizontal discontinuity (a simple step); and since the response of the gravity data in the wavenumber domain contains information about this anomaly, an iterative graphical procedure was proposed for the spectral analysis of this signal. By applying the Fourier transform it is possible to determine the depth and throw of the discontinuity, and once these parameters are known the density is uniquely calculated. The basic objective of using these procedures is to combine the two interpretation methods, in the space and wavenumber domains, in order to obtain constrained solutions that are more plausible with respect to the geological context expected for the study area. Both interpretation procedures were applied to the free-air gravity anomalies of the northern Brazilian continental margin, northeastern sector, spanning the states from Maranhão to Rio Grande do Norte. The resolving power of each procedure was then analyzed. It was shown that inversion performed directly in the space domain is more favorable for interpreting the free-air anomalies, although the spectral treatment is comparatively simpler.
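The step model and Simplex search described above can be sketched as follows; the forward model is the standard thin-plate approximation for a 2D horizontal step, and scipy's Nelder-Mead implementation stands in for the flexible-polyhedron method. All numbers are synthetic, not from the Brazilian margin data.

```python
# Sketch: fitting the free-air anomaly of a single horizontal discontinuity
# (simple step) with the flexible-polyhedron (Nelder-Mead Simplex) search.
import numpy as np
from scipy.optimize import minimize

G = 6.674e-11  # gravitational constant (m^3 kg^-1 s^-2)

def step_anomaly(x, drho, depth, throw, x0):
    """Thin-plate 2D step: anomaly in mGal (1 mGal = 1e-5 m/s^2)."""
    return 2 * G * drho * throw * (np.pi / 2 + np.arctan((x - x0) / depth)) * 1e5

# Synthetic "observed" profile (true model: 400 kg/m^3, 10 km, 2 km, 50 km)
x = np.linspace(0, 100e3, 101)
g_obs = step_anomaly(x, 400.0, 10e3, 2e3, 50e3) + np.random.normal(0, 0.5, x.size)

def misfit(p):
    return np.sum((g_obs - step_anomaly(x, *p)) ** 2)

# Nelder-Mead needs only function values, no derivatives
result = minimize(misfit, x0=[300.0, 8e3, 1.5e3, 40e3],
                  method="Nelder-Mead", options={"maxiter": 5000})
drho, depth, throw, x0 = result.x
print(f"drho={drho:.0f} kg/m^3, depth={depth/1e3:.1f} km, "
      f"throw={throw/1e3:.2f} km, x0={x0/1e3:.1f} km")
```

Note that in this thin-plate approximation the density contrast and throw enter only through their product, so the space-domain fit alone cannot separate them; this is precisely why an independent spectral estimate of depth and throw, from which the density follows uniquely, is valuable.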
Abstract:
A study of maar-diatreme volcanoes has been performed by inversion of gravity and magnetic data. The geophysical inverse problem has been solved by means of the damped nonlinear least-squares method. To ensure stability and convergence of the solution of the inverse problem, a mathematical tool, consisting of data weighting and model scaling, has been worked out. Theoretical gravity and magnetic modeling of maar-diatreme volcanoes has been conducted in order to obtain information that can be used for a rough qualitative and/or quantitative interpretation. This information also serves as a priori information for designing models for the inversion and/or for assisting the interpretation of inversion results. The results of theoretical modeling have been used to roughly estimate the heights and the dip angles of the walls of eight Eifel maar-diatremes, each taken as a whole. Inverse modeling has been conducted for the Schönfeld Maar (magnetics) and the Hausten-Morswiesen Maar (gravity and magnetics). The geometrical parameters of these maars, as well as the density and magnetic properties of the rocks filling them, have been estimated. For a reliable interpretation of the inversion results, besides the knowledge from theoretical modeling, other tools such as field transformations and spectral analysis were used for complementary information. Geologic models, based on the synthesis of the respective interpretation results, are presented for the two maars mentioned above. The results gave more insight into the genesis, physics and posteruptive development of maar-diatreme volcanoes. A classification of the maar-diatreme volcanoes into three main types has been elaborated. Relatively high magnetic anomalies are indicative of scoria cones embedded within maar-diatremes, provided they are not caused by a strong remanent component of the magnetization. Smaller (weaker) secondary gravity and magnetic anomalies superimposed on the main anomaly of a maar-diatreme, especially in the boundary areas, are indicative of subsidence processes, which probably occurred in the late sedimentation phase of the posteruptive development. Contrary to postulates referring to kimberlite pipes, there is no general systematic relationship between diameter and height, nor between the geophysical anomaly and the dimensions of maar-diatreme volcanoes. Although both maar-diatreme volcanoes and kimberlite pipes are products of phreatomagmatism, they probably formed in different thermodynamic and hydrogeological environments. In the case of kimberlite pipes, large amounts of magma and groundwater, certainly supplied by deep and large reservoirs, interacted under high pressure and temperature conditions. This led to a prolonged phreatomagmatic process and hence to the formation of large structures. Concerning the maar-diatreme and tuff-ring-diatreme volcanoes, the phreatomagmatic process takes place through an interaction between magma from small and shallow magma chambers (probably segregated magmas) and small amounts of near-surface groundwater under low pressure and temperature conditions. This leads to shorter eruptions and consequently to structures of smaller size in comparison with kimberlite pipes. Nevertheless, the results show that the diameter-to-height ratio for 50% of the studied maar-diatremes is around 1, while the dip angle of the diatreme walls is similar to that of kimberlite pipes and lies between 70° and 85°. Note that these numerical characteristics, especially the dip angle, hold for the maars whose diatremes, as estimated by modeling, have the shape of a truncated cone. This indicates that the diatreme cannot be completely resolved by inversion.
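The damped nonlinear least-squares machinery with data weighting and model scaling can be sketched as follows; a buried sphere (point mass) stands in for a full diatreme model, and all numbers are synthetic. scipy's Levenberg-Marquardt solver provides the damping, weighting is applied to the residuals, and the x_scale argument plays the role of model scaling.

```python
# Sketch: damped nonlinear least-squares inversion with data weighting
# and model scaling, on a simple buried-sphere gravity model.
import numpy as np
from scipy.optimize import least_squares

G = 6.674e-11

def gz_sphere(x, mass, depth):
    """Vertical gravity of a buried sphere along a surface profile, in mGal."""
    return G * mass * depth / (x**2 + depth**2) ** 1.5 * 1e5

x = np.linspace(-500, 500, 51)
sigma = 0.02  # assumed data standard deviation (mGal)
g_obs = gz_sphere(x, 5e9, 150.0) + np.random.normal(0, sigma, x.size)

def weighted_residuals(p):
    # Data weighting: residuals scaled by their standard deviation
    return (g_obs - gz_sphere(x, *p)) / sigma

# Model scaling via x_scale balances parameters of very different magnitude
# (mass ~1e9 kg vs depth ~1e2 m); 'lm' is Levenberg-Marquardt damping.
fit = least_squares(weighted_residuals, x0=[1e9, 100.0],
                    x_scale=[1e9, 100.0], method="lm")
print(f"mass = {fit.x[0]:.2e} kg, depth = {fit.x[1]:.0f} m")
```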
Abstract:
To improve the quantitative interpretation of ice core aeolian dust records, a systematic methodological comparison has been made of water-insoluble particle counting (Coulter Counter and laser-sensing particle detector), soluble ions (ion chromatography, IC, and continuous flow analysis, CFA), elemental analysis (inductively coupled plasma mass spectrometry, ICP-MS, at pH 1 and after full acid digestion), and water-insoluble elemental analysis (proton-induced X-ray emission, PIXE). Ice core samples covering the last deglaciation have been used from the EPICA Dome C (EDC) and the EPICA Dronning Maud Land (EDML) ice cores. All methods correlate very well with each other. The ratios of glacial-age concentrations to Holocene concentrations, typically a factor of ~100, differ significantly between the methods, but the differences are limited to a factor of < 2 for most methods, with insoluble particles showing the largest change. The recovery of ICP-MS measurements depends on the digestion method and differs between elements and between climatic periods. EDC and EDML samples have similar dust composition, which suggests a common dust source or a common mixture of sources for the two sites. The analysed samples further reveal a change in dust composition during the last deglaciation.
Abstract:
Background: Tissue Doppler may be used to quantify regional left ventricular function but is limited by segmental variation of longitudinal velocity from base to apex and from free to septal walls. We sought to overcome this by developing a composite of longitudinal and radial velocities. Methods and Results: We examined 82 unselected patients undergoing a standard dobutamine echocardiogram. Longitudinal velocity was obtained in the basal and mid segments of each wall using tissue Doppler in the apical views. Radial velocities were derived in the same segments using an automated border detection system and the centerline method, with regional chords grouped according to segment location and temporally averaged. In 25 patients at low probability of coronary disease, the pattern of regional variation in longitudinal velocity (higher in the septum) was the opposite of that in radial velocity (higher in the free wall), and the combination was homogeneous. In 57 patients undergoing angiography, velocity in abnormal segments was lower than in normal segments for both longitudinal (6.0 +/- 3.6 vs 9.0 +/- 2.2 cm/s, P = .01) and radial velocity (6.0 +/- 4.0 vs 8.0 +/- 3.9 cm/s, P = .02). However, the composite velocity permitted better separation of abnormal and normal segments (13.3 +/- 5.6 vs 17.5 +/- 4.2 cm/s, P = .001). There was no significant difference between the accuracy of this quantitative approach and expert visual wall motion analysis (81% vs 84%, P = .56). Conclusion: Regional variation of unidimensional myocardial velocities necessitates site-specific normal ranges, probably because of different fiber directions. Combined analysis of longitudinal and radial velocities allows the derivation of a composite velocity, which is homogeneous across all segments and may allow better separation of normal from abnormal myocardium.
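The abstract does not state the rule used to combine the two velocities; a simple per-segment sum is assumed in the sketch below (consistent with the reported magnitudes, e.g. ~9 + ~8 ≈ 17.5 cm/s in normal segments). The point is that the opposite regional gradients cancel, yielding a homogeneous composite. Segment names and values are hypothetical.

```python
# Sketch: per-segment composite of longitudinal and radial peak velocities.
# The combination rule (a sum) is an assumption, not the paper's stated method.
long_v = {"basal_septal": 9.0, "basal_lateral": 7.5}    # cm/s, hypothetical
radial_v = {"basal_septal": 6.5, "basal_lateral": 8.0}  # cm/s, hypothetical

composite = {seg: long_v[seg] + radial_v[seg] for seg in long_v}
print(composite)  # {'basal_septal': 15.5, 'basal_lateral': 15.5} -> homogeneous
```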
Abstract:
Quantification of stress echocardiography may overcome the training requirements and subjective nature of visual wall motion score (WMS) assessment, but quantitative approaches may be difficult to apply and require significant time for image processing. The integral of long-axis myocardial velocity is displacement, which may be represented as a color map over the left ventricular myocardium. This study was designed to explore the feasibility and accuracy of measuring long-axis myocardial displacement, derived from tissue Doppler, for the detection of coronary artery disease (CAD) during dobutamine stress echocardiography (DBE). One hundred thirty patients underwent standard DBE, including 30 patients at low risk of CAD, 30 patients with normal coronary angiography (both groups studied to define normal ranges of displacement), and 70 patients who underwent coronary angiography in whom the accuracy of normal ranges was tested. Regional myocardial displacement was obtained by analysis of color tissue Doppler apical images acquired at peak stress. Displacement was compared with WMS, and with the presence of CAD by angiography. The analysis time was 3.2 +/- 1.5 minutes per patient. Segmental displacement was correlated with wall motion (normal 7.4 +/- 3.2 mm, ischemia 5.8 +/- 4.2 mm, viability 4.6 +/- 3.0 mm, scar 4.5 +/- 3.5 mm, p <0.001). Reversal of normal base-apex displacement was an insensitive (19%) but specific (90%) marker of CAD. The sum of displacements within each vascular territory had a sensitivity and specificity of 89% and 79%, respectively, for prediction of significant CAD, compared with 86% and 78%, respectively, for WMS (p = NS). The displacements in the basal segments had a sensitivity and specificity of 83% and 78%, respectively (p = NS). Regional myocardial displacement during DBE is feasible and offers a fast and accurate method for the diagnosis of CAD.
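The displacement measure described above is the time integral of the tissue Doppler velocity. A minimal sketch, with a synthetic half-sine systolic velocity trace and an assumed frame rate:

```python
# Sketch: displacement as the time integral of myocardial velocity over
# systole. The velocity trace and frame rate are synthetic assumptions.
import numpy as np

fs = 200.0                           # frame rate (Hz), assumed
t = np.arange(0, 0.35, 1 / fs)       # systolic interval (s)
v = 35.0 * np.sin(np.pi * t / 0.35)  # velocity (mm/s), synthetic half-sine

# Trapezoidal integration of velocity gives displacement (mm)
displacement = np.sum((v[:-1] + v[1:]) / 2) / fs
print(f"systolic displacement = {displacement:.1f} mm")  # ~7.8 mm here
```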
Abstract:
Quantification of calcium in the cuticle of the fly larva Exeretonevra angustifrons was undertaken at the micron scale using wavelength-dispersive X-ray microanalysis, analytical standards, and a full matrix correction. Calcium and phosphorus were found to be present in the exoskeleton in a ratio that indicates amorphous calcium phosphate. This was confirmed through electron diffraction of the calcium-containing tissue. Owing to the practical difficulties of measuring light elements, it is not uncommon in the field of entomology to neglect matrix corrections when performing microanalysis of bulk insect specimens. To determine, first, whether such a strategy affects the outcome and, second, which matrix correction is preferable, the φ(ρz) and ZAF matrix corrections were compared with each other and with uncorrected results. The best estimate of the mineral phase was obtained using the φ(ρz) correction. When no correction was made, the ratio of Ca to P fell outside the range for amorphous calcium phosphate, possibly leading to a flawed interpretation of the mineral form if used on its own.
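For reference, the quantity at issue, the atomic Ca/P ratio compared against amorphous calcium phosphate (roughly 1.5 for a Ca3(PO4)2-like stoichiometry), is obtained from measured weight fractions as in this sketch; the weight fractions are hypothetical, not the paper's values.

```python
# Sketch: atomic Ca/P ratio from microanalysis weight fractions.
CA_MASS, P_MASS = 40.078, 30.974  # atomic masses (g/mol)

wt_ca, wt_p = 12.0, 6.2           # wt%, hypothetical microanalysis result

ca_p_atomic = (wt_ca / CA_MASS) / (wt_p / P_MASS)
print(f"Ca/P (atomic) = {ca_p_atomic:.2f}")  # ~1.50 here, consistent with ACP
```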
Abstract:
Interpretability and power of genome-wide association studies can be increased by imputing unobserved genotypes, using a reference panel of individuals genotyped at higher marker density. For many markers, genotypes cannot be imputed with complete certainty, and the uncertainty needs to be taken into account when testing for association with a given phenotype. In this paper, we compare currently available methods for testing association between uncertain genotypes and quantitative traits. We show that some previously described methods offer poor control of the false-positive rate (FPR), and that satisfactory performance of these methods is obtained only by using ad hoc filtering rules or by using a harsh transformation of the trait under study. We propose new methods that are based on exact maximum likelihood estimation and use a mixture model to accommodate nonnormal trait distributions when necessary. The new methods adequately control the FPR and also have equal or better power compared to all previously described methods. We provide a fast software implementation of all the methods studied here; our new method requires computation time of less than one computer-day for a typical genome-wide scan, with 2.5 M single nucleotide polymorphisms and 5000 individuals.
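A common baseline for this problem, though not necessarily the exact maximum-likelihood mixture method proposed above, is to regress the trait on the expected genotype dosage computed from the imputation posteriors. A minimal sketch on simulated data:

```python
# Sketch: association test between an uncertain (imputed) genotype and a
# quantitative trait via expected-dosage linear regression. All data simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 1000
# Imputation posteriors P(g=0), P(g=1), P(g=2) per individual (hypothetical)
post = rng.dirichlet([8.0, 3.0, 1.0], size=n)
dosage = post @ np.array([0.0, 1.0, 2.0])   # expected genotype dosage
trait = 0.2 * dosage + rng.normal(0, 1, n)  # simulated phenotype

slope, intercept, r, pval, se = stats.linregress(dosage, trait)
print(f"beta = {slope:.3f}, p = {pval:.2e}")
```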
Abstract:
Results of plasma or urinary amino acid analyses are used for the suspicion, confirmation or exclusion of a diagnosis, for monitoring of treatment, and for prevention and prognosis in inborn errors of amino acid metabolism. The concentrations in plasma or whole blood do not necessarily reflect the relevant metabolite concentrations in organs such as the brain or in cell compartments; this is especially the case in disorders that are not solely expressed in the liver and/or in those which also affect nonessential amino acids. Basic biochemical knowledge has added much to the understanding of the zonation and compartmentation of expressed proteins and metabolites in organs, cells and cell organelles. In this paper, selected old and new biochemical findings in PKU, urea cycle disorders and nonketotic hyperglycinaemia are reviewed; the aim is to show that integrating the knowledge gained in recent decades on enzymes and transporters related to amino acid metabolism allows a more extensive interpretation of biochemical results obtained for the diagnosis and follow-up of patients, and may help to pose new questions and to avoid pitfalls. The analysis and interpretation of amino acid measurements in physiological fluids should not be restricted to a few amino acids but should encompass the whole quantitative profile and include other pathophysiological markers. This is important if the patient appears not to respond as expected to treatment, and is needed when investigating new therapies. We suggest that amino acid imbalance in the relevant compartments, caused by over-zealous or protocol-driven treatment that is not adjusted to the individual patient's needs, may prolong catabolism and must be corrected.
Abstract:
A recurring task in the analysis of mass genome annotation data from high-throughput technologies is the identification of peaks or clusters in a noisy signal profile. Examples of such applications are the definition of promoters on the basis of transcription start site profiles, the mapping of transcription factor binding sites based on ChIP-chip data, and the identification of quantitative trait loci (QTL) from whole-genome SNP profiles. Input to such an analysis is a set of genome coordinates associated with counts or intensities. The output consists of a discrete number of peaks with respective volumes, extensions and center positions. We have developed for this purpose a flexible one-dimensional clustering tool, called MADAP, which we make available as a web server and as a standalone program. A set of parameters enables the user to customize the procedure for a specific problem. The web server, which returns results in textual and graphical form, is useful for small- to medium-scale applications, as well as for evaluation and parameter tuning in view of large-scale applications, which require a local installation. The program, written in C++, can be freely downloaded from ftp://ftp.epd.unil.ch/pub/software/unix/madap. The MADAP web server can be accessed at http://www.isrec.isb-sib.ch/madap/.
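To make the input/output convention concrete, the following sketch performs a naive one-dimensional clustering of coordinates with counts, reporting peak centers, volumes and extents; it is not MADAP's algorithm (see the URLs above for the actual tool), and the positions and counts are hypothetical.

```python
# Sketch: naive 1D clustering of genome coordinates with counts.
import numpy as np

coords = np.array([100, 102, 103, 250, 251, 255, 900])  # positions, hypothetical
counts = np.array([5, 12, 7, 3, 9, 4, 2])               # tag counts per position

max_gap = 20  # positions farther apart than this start a new cluster
clusters, current = [], [0]
for i in range(1, len(coords)):
    if coords[i] - coords[i - 1] <= max_gap:
        current.append(i)
    else:
        clusters.append(current)
        current = [i]
clusters.append(current)

for idx in clusters:
    vol = counts[idx].sum()                                # cluster "volume"
    center = np.average(coords[idx], weights=counts[idx])  # weighted center
    print(f"peak at {center:.1f}, volume {vol}, "
          f"extent {coords[idx][0]}-{coords[idx][-1]}")
```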
Abstract:
We have developed a digital holographic microscope (DHM), in transmission mode, especially dedicated to the quantitative visualization of phase objects such as living cells. The method is based on an original numerical algorithm presented in detail elsewhere [Cuche et al., Appl. Opt. 38, 6994 (1999)]. DHM images of living cells in culture are shown for what is, to our knowledge, the first time. They represent the distribution of the optical path length over the cell, which has been measured with subwavelength accuracy. These DHM images are compared with those obtained with the widely used phase contrast and Nomarski differential interference contrast techniques.
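The quantity displayed in such images follows from the standard relation OPL = φλ/2π between the reconstructed phase φ and the optical path length; dividing by the refractive index difference between cell and medium gives an approximate thickness. A minimal sketch with assumed indices and wavelength (not values from the cited work):

```python
# Sketch: converting a reconstructed DHM phase map into optical path length
# and an approximate cell thickness. All numbers are assumed.
import numpy as np

wavelength = 633e-9                          # laser wavelength (m), assumed
phase = np.array([[1.2, 2.5], [3.1, 0.8]])   # unwrapped phase (rad), hypothetical

opl = phase * wavelength / (2 * np.pi)       # optical path length (m)

n_cell, n_medium = 1.38, 1.33                # assumed refractive indices
thickness = opl / (n_cell - n_medium)        # approximate cell thickness (m)
print(thickness * 1e6, "um")                 # a few micrometers, as expected
```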
Abstract:
Several methods and algorithms have recently been proposed that allow for the systematic evaluation of simple neuron models from intracellular or extracellular recordings. Models built in this way generate good quantitative predictions of the future activity of neurons under temporally structured current injection. It is, however, difficult to compare the advantages of the various models and algorithms, since each model is designed for a different set of data. Here, we report on one of the first attempts to establish a benchmark test that permits a systematic comparison of methods and performances in predicting the activity of rat cortical pyramidal neurons. We present early submissions to the benchmark test and discuss implications for the design of future tests and simple neuron models.
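Benchmarks of this kind need a quantitative spike-timing agreement score. The sketch below uses a simple coincidence rate, the fraction of recorded spikes matched by a model spike within a tolerance window; this is a simplification for illustration, not the benchmark's official metric, and all spike times are hypothetical.

```python
# Sketch: fraction of recorded spikes reproduced by a model within +/-delta.
import numpy as np

delta = 0.002  # coincidence window (s), assumed
data_spikes = np.array([0.012, 0.045, 0.078, 0.130])   # s, hypothetical
model_spikes = np.array([0.013, 0.050, 0.079, 0.128])  # s, hypothetical

hits = sum(np.any(np.abs(model_spikes - t) <= delta) for t in data_spikes)
print(f"coincidence rate = {hits / len(data_spikes):.2f}")  # 0.75 here
```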
Abstract:
PURPOSE: Atherosclerosis results in a considerable medical and socioeconomic impact on society. We sought to evaluate novel magnetic resonance imaging (MRI) angiography and vessel wall sequences to visualize and quantify different morphologic stages of atherosclerosis in a Watanabe hereditary hyperlipidemic (WHHL) rabbit model. MATERIALS AND METHODS: Aortic 3D steady-state free precession angiography and subrenal aortic 3D black-blood fast spin-echo vessel wall imaging pre- and post-gadolinium (Gd) were performed in 14 WHHL rabbits (3 normal, 6 high-cholesterol diet, and 5 high-cholesterol diet plus endothelial denudation) on a commercial 1.5 T MR system. Angiographic lumen diameter, vessel wall thickness, signal- and contrast-to-noise ratios, total vessel area, lumen area, and vessel wall area were analyzed semiautomatically. RESULTS: Pre-Gd, both lumen and wall dimensions (total vessel area, lumen area, vessel wall area) of groups 2 and 3 were significantly increased compared with those of group 1 (all P < 0.01). Group 3 animals had significantly thicker vessel walls than groups 1 and 2 (P < 0.01), whereas angiographic lumen diameter was comparable among all groups. Post-Gd, only the diseased animals of groups 2 and 3 showed a significant (>100%) signal-to-noise and contrast-to-noise increase. CONCLUSIONS: The combination of novel 3D magnetic resonance angiography and high-resolution 3D vessel wall MRI enabled quantitative characterization of various atherosclerotic stages, including positive arterial remodeling and Gd uptake, in a WHHL rabbit model using a commercially available 1.5 T MRI system.
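The signal-to-noise and contrast-to-noise analysis mentioned above reduces to simple region-of-interest statistics; a sketch with hypothetical numbers (the exact ROI definitions used in the study are not given in the abstract):

```python
# Sketch: SNR and CNR from ROI statistics on a vessel wall image.
wall_mean = 148.0   # mean signal in vessel-wall ROI (a.u.), hypothetical
lumen_mean = 62.0   # mean signal in lumen ROI (a.u.), hypothetical
noise_sd = 9.5      # standard deviation in background/noise ROI, hypothetical

snr = wall_mean / noise_sd                  # signal-to-noise ratio
cnr = (wall_mean - lumen_mean) / noise_sd   # contrast-to-noise ratio
print(f"SNR = {snr:.1f}, CNR = {cnr:.1f}")
```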
Abstract:
Geophysical techniques can help to bridge the inherent gap, with regard to spatial resolution and range of coverage, that plagues classical hydrological methods. This has led to the emergence of the new and rapidly growing field of hydrogeophysics. Given the differing sensitivities of various geophysical techniques to hydrologically relevant parameters, and their inherent trade-off between resolution and range, the fundamental usefulness of multi-method hydrogeophysical surveys for reducing uncertainties in data analysis and interpretation is widely accepted. A major challenge arising from such endeavors is the quantitative integration of the resulting vast and diverse database in order to obtain a unified model of the probed subsurface region that is internally consistent with all available data. To address this problem, we have developed a strategy for hydrogeophysical data integration based on Monte-Carlo-type conditional stochastic simulation, which we consider particularly suitable for local-scale studies characterized by high-resolution, high-quality datasets. Monte-Carlo-based optimization techniques are flexible and versatile, can accommodate a wide variety of data and constraints of differing resolution and hardness, and thus have the potential to provide, in a geostatistical sense, highly detailed and realistic models of the pertinent target parameter distributions. Compared to more conventional approaches of this kind, our approach provides significant advancements in the way that the larger-scale deterministic information resolved by the hydrogeophysical data can be accounted for, which represents an inherently problematic, and as yet unresolved, aspect of Monte-Carlo-type conditional simulation techniques. We present the results of applying our algorithm to the integration of porosity log and tomographic crosshole georadar data to generate stochastic realizations of the local-scale porosity structure. Our procedure is first tested on pertinent synthetic data and then applied to corresponding field data collected at the Boise Hydrogeophysical Research Site near Boise, Idaho, USA.
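One standard ingredient of such Monte-Carlo-type conditional simulation, shown here purely as a sketch and not as the authors' algorithm, is "conditioning by kriging": an unconditional realization is corrected by the kriged difference between the measured values and the realization at the data locations, so that the result honors the porosity logs exactly. A 1D grid and an exponential covariance are assumed.

```python
# Sketch: conditioning an unconditional Gaussian realization to porosity
# measurements via simple kriging. Grid, covariance and data are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 100, 201)          # depth grid (m)
xd = np.array([10.0, 40.0, 75.0])     # porosity-log locations (on the grid)
zd = np.array([0.22, 0.31, 0.26])     # measured porosity

sill, corr_len, mean = 0.002, 15.0, 0.26   # covariance parameters, assumed
cov = lambda h: sill * np.exp(-np.abs(h) / corr_len)

# Unconditional Gaussian realization with the same covariance
C = cov(x[:, None] - x[None, :])
z_unc = mean + np.linalg.cholesky(C + 1e-10 * np.eye(x.size)) @ rng.normal(size=x.size)

# Simple kriging weights: lambda = C_dd^-1 c_d(x)
Cdd = cov(xd[:, None] - xd[None, :])
Cdx = cov(xd[:, None] - x[None, :])
lam = np.linalg.solve(Cdd, Cdx)            # shape (n_data, n_grid)

krig_data = mean + lam.T @ (zd - mean)     # kriging of the measurements
idx = np.searchsorted(x, xd)
krig_sim = mean + lam.T @ (z_unc[idx] - mean)  # kriging of the realization

z_cond = z_unc + (krig_data - krig_sim)    # conditioned realization
print(np.round(z_cond[idx], 3), "vs data", zd)  # matches at the log locations
```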
Abstract:
For the detection and management of osteoporosis and osteoporosis-related fractures, quantitative ultrasound (QUS) is emerging as a relatively low-cost and readily accessible alternative to dual-energy X-ray absorptiometry (DXA) measurement of bone mineral density (BMD) in certain circumstances. The following is a brief but thorough review of the existing literature on the use of QUS in six settings: 1) assessing fragility fracture risk; 2) diagnosing osteoporosis; 3) initiating osteoporosis treatment; 4) monitoring osteoporosis treatment; 5) osteoporosis case finding; and 6) quality assurance and control. Many QUS devices exist that differ considerably with respect to the parameters they measure and the strength of the empirical evidence supporting their use. In general, heel QUS appears to be the most tested and most effective. Overall, some, but not all, heel QUS devices are effective in assessing fracture risk in some, but not all, populations, the evidence being strongest for Caucasian females over 55 years old. Otherwise, the evidence is fair with respect to certain devices allowing for the accurate diagnosis of the likelihood of osteoporosis, and generally fair to poor regarding the use of QUS for initiating or monitoring osteoporosis treatment. A reasonable protocol is proposed herein for case-finding purposes, relying on a combined assessment of clinical risk factors (CRF) and heel QUS. Finally, several recommendations are made for quality assurance and control.