976 results for topographic analysis


Relevance:

30.00%

Abstract:

"January 1985."

Relevance:

30.00%

Abstract:

Mode of access: Internet.

Relevance:

30.00%

Abstract:

I present results of my evaluation to identify topographic lineaments that are potentially related to post-glacial faulting, using bare-earth LiDAR topographic data near Ridley Island, British Columbia. The purpose of this evaluation has been to review bare-earth LiDAR data for evidence of post-glacial faulting in the area surrounding Ridley Island and to provide a map of the potential faults for review and possible field checking. My work consisted of an extensive literature review to understand the tectonic, geologic, glacial and sea-level history of the area, and analysis of bare-earth LiDAR data for Ridley Island and the surrounding region.

Ridley Island and the surrounding north coast of British Columbia have a long and complex tectonic and geologic history. The north coast of British Columbia consists of a series of accreted terranes and some post-accretionary deposits. The accreted terranes were attached to the North American continent during subduction of the Pacific Plate between approximately 200 Ma and 10 Ma. The terrane and post-accretionary deposits are metamorphosed sedimentary, volcanic and intrusive rocks that have experienced significant deformation and been intruded by plutonic bodies. At approximately 10 Ma, subduction of the Pacific Plate beneath the North America Plate ceased along the central and north coast of British Columbia and the Queen Charlotte Fault Zone formed. The Queen Charlotte Fault Zone is a transform fault that separates the Pacific Plate from the North America Plate. Within the past 1 million years, the area has experienced multiple glacial/interglacial cycles; the most recent glacial cycle occurred approximately 23,000 to 13,500 years ago. Few Quaternary deposits have been mapped in the area.

The vast majority of seismicity around the northwest coast of British Columbia occurs along the Queen Charlotte Fault Zone. Numerous faults have been mapped in the area, but there is currently no evidence to suggest these faults are active (i.e. show post-glacial surface displacement or deformation). No earthquakes have been recorded within 50 km of Ridley Island. Several small earthquakes (less than magnitude 6) have been recorded within 100 km of the island, but these have not been correlated with active faults. GPS data suggest there is ongoing strain in the vicinity of Ridley Island. This strain has the potential to be released along faults, but the calculated strain may be a result of erroneous data or may be accommodated aseismically. Currently, the greatest known seismic hazard to Ridley Island is the Queen Charlotte Fault Zone.

LiDAR data for Ridley Island, Digby Island, Lelu Island and portions of Kaien Island, Smith Island and the British Columbia mainland were reviewed and analyzed for evidence of post-glacial faulting. The data showed a strong fabric across the landscape with a northwest-southeast trend that appears to mirror the observed foliation in the area. A total of 80 potential post-glacial faults were identified: three lineaments are categorized as high, forty-one as medium and thirty-six as low. The identified features should be examined in the field to further assess potential activity. My analysis did not include areas outside the LiDAR coverage; however, faulting may be present there. LiDAR data analysis is only useful for detecting faults with surficial expressions, so faulting without obvious surficial expression may also be present in the study area.
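As a hedged illustration of one common step in this kind of bare-earth LiDAR screening (the processing steps themselves are not listed in this summary), lineaments such as the northwest-southeast fabric described above are typically checked on hillshades rendered under several illumination azimuths, because a single light angle can hide features aligned with it. The DEM, cell size and angles below are hypothetical stand-ins, not data from this study:

# Illustrative sketch only: multi-azimuth hillshading of a gridded DEM.
import numpy as np

def hillshade(dem, cellsize, azimuth_deg=315.0, altitude_deg=45.0):
    """Standard Lambertian hillshade of a DEM (2-D array of elevations)."""
    az = np.radians(azimuth_deg)
    alt = np.radians(altitude_deg)
    dz_dy, dz_dx = np.gradient(dem, cellsize)   # axis 0 = rows (y)
    slope = np.arctan(np.hypot(dz_dx, dz_dy))
    aspect = np.arctan2(-dz_dx, dz_dy)          # one common aspect convention
    shaded = (np.sin(alt) * np.cos(slope)
              + np.cos(alt) * np.sin(slope) * np.cos(az - aspect))
    return np.clip(shaded, 0.0, 1.0)

# Light the same terrain from several directions; linear features stand out
# most when lit obliquely to their trend.
dem = np.random.rand(100, 100).cumsum(axis=0)   # stand-in for real LiDAR data
shades = {az: hillshade(dem, cellsize=1.0, azimuth_deg=az)
          for az in (45.0, 135.0, 315.0)}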

Relevance:

30.00%

Abstract:

Columnar cell lesions (CCLs) of the breast are a spectrum of lesions that have posed difficulties to pathologists for many years, prompting discussion concerning their biologic and clinical significance. We present a study of CCL in the context of hyperplasia of usual type (HUT) and the more advanced lesions ductal carcinoma in situ (DCIS) and invasive ductal carcinoma. A total of 81 lesions from 18 patients were subjected to a comprehensive morphologic review based upon a modified version of Schnitt's classification system for CCL, immunophenotypic analysis (estrogen receptor [ER], progesterone receptor [PgR], Her2/neu, cytokeratin 5/6 [CK5/6], cytokeratin 14 [CK14], E-cadherin, p53) and, for the first time, a whole-genome molecular analysis by comparative genomic hybridization. Multiple CCLs from 3 patients were studied in particular detail, with topographic information and/or showing a morphologic spectrum of CCL within individual terminal duct lobular units. CCLs were ER and PgR positive and CK5/6 and CK14 negative, and exhibited low numbers of genetic alterations and recurrent 16q loss, features that are similar to those of low grade in situ and invasive carcinoma. The molecular genetic profiles closely reflect the degree of proliferation and atypia in CCL, indicating that some of these lesions represent both a morphologic and molecular continuum. In addition, overlapping chromosomal alterations between CCL and more advanced lesions within individual terminal duct lobular units suggest a commonality in molecular evolution. These data further support the hypothesis that CCLs are a nonobligate, intermediary step in the development of some forms of low grade in situ and invasive carcinoma. Copyright: © 2005 Lippincott Williams & Wilkins, Inc.

Relevance:

30.00%

Abstract:

Latent variable models represent the probability density of data in a space of several dimensions in terms of a smaller number of latent, or hidden, variables. A familiar example is factor analysis, which is based on a linear transformation between the latent space and the data space. In this paper we introduce a form of non-linear latent variable model called the Generative Topographic Mapping (GTM), for which the parameters of the model can be determined using the EM algorithm. GTM provides a principled alternative to the widely used Self-Organizing Map (SOM) of Kohonen (1982), and overcomes most of the significant limitations of the SOM. We demonstrate the performance of the GTM algorithm on a toy problem and on simulated data from flow diagnostics for a multi-phase oil pipeline.
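For orientation, the density model in question (summarising the standard GTM formulation rather than quoting this paper) maps a regular grid of K latent points x_k through fixed basis functions phi and a weight matrix W, giving a constrained Gaussian mixture in the D-dimensional data space:

\[
  \mathbf{y}(\mathbf{x};\mathbf{W}) = \mathbf{W}\,\boldsymbol{\phi}(\mathbf{x}),
  \qquad
  p(\mathbf{t}\mid\mathbf{W},\beta)
    = \frac{1}{K}\sum_{k=1}^{K}
      \left(\frac{\beta}{2\pi}\right)^{D/2}
      \exp\!\left(-\frac{\beta}{2}
      \bigl\|\mathbf{y}(\mathbf{x}_k;\mathbf{W})-\mathbf{t}\bigr\|^{2}\right),
\]

where both W and the inverse noise variance beta have closed-form M-step updates, which is what makes training by EM straightforward.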

Relevance:

30.00%

Abstract:

This paper surveys the context of feature extraction by neural network approaches, and compares and contrasts their behaviour as prospective data visualisation tools in a real-world problem. We also introduce and discuss a hybrid approach which allows us to control the degree of discriminatory and topographic information in the extracted feature space.

Relevance:

30.00%

Abstract:

This thesis describes the Generative Topographic Mapping (GTM) --- a non-linear latent variable model, intended for modelling continuous, intrinsically low-dimensional probability distributions, embedded in high-dimensional spaces. It can be seen as a non-linear form of principal component analysis or factor analysis. It also provides a principled alternative to the self-organizing map --- a widely established neural network model for unsupervised learning --- resolving many of its associated theoretical problems. An important, potential application of the GTM is visualization of high-dimensional data. Since the GTM is non-linear, the relationship between data and its visual representation may be far from trivial, but a better understanding of this relationship can be gained by computing the so-called magnification factor. In essence, the magnification factor relates the distances between data points, as they appear when visualized, to the actual distances between those data points. There are two principal limitations of the basic GTM model. The computational effort required will grow exponentially with the intrinsic dimensionality of the density model. However, if the intended application is visualization, this will typically not be a problem. The other limitation is the inherent structure of the GTM, which makes it most suitable for modelling moderately curved probability distributions of approximately rectangular shape. When the target distribution is very different from that, the aim of maintaining an 'interpretable' structure, suitable for visualizing data, may come into conflict with the aim of providing a good density model. The fact that the GTM is a probabilistic model means that results from probability theory and statistics can be used to address problems such as model complexity. Furthermore, this framework provides solid ground for extending the GTM to wider contexts than that of this thesis.
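The magnification factor mentioned above has a compact closed form, stated here from the GTM literature rather than from this thesis: writing J for the Jacobian of the latent-to-data mapping y(x; W), the local ratio between an area element dA in latent space and its image dA' in data space is

\[
  \frac{dA'}{dA} = \sqrt{\det\!\left(\mathbf{J}^{\mathsf{T}}\mathbf{J}\right)},
  \qquad
  \mathbf{J} = \frac{\partial \mathbf{y}}{\partial \mathbf{x}},
\]

so latent regions where the factor is large are strongly stretched by the mapping, and points that appear close there in the visualization may in fact be far apart in data space.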

Relevance:

30.00%

Abstract:

To study the topographic distribution of the pathology in multiple system atrophy (MSA). Pattern analysis was carried out using α-synuclein immunohistochemistry in 10 MSA cases. The glial cytoplasmic inclusions (GCI) were distributed randomly or in large clusters. The neuronal inclusions (NI) and abnormal neurons were distributed in regular clusters. Clusters of the NI and abnormal neurons were spatially correlated, whereas the GCI were not spatially correlated with either the NI or the abnormal neurons. The data suggest that the GCI represent the primary change in MSA and the neuronal pathology develops secondary to the glial pathology.
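The abstract does not state which statistics were used, so the following is only a hedged sketch of one standard way to classify a spatial lesion distribution as random, regular or clustered: the quadrat variance-to-mean ratio, which is near 1 for random (Poisson) placement, below 1 for regular spacing and above 1 for clustering. The counts are hypothetical examples:

# Illustrative sketch: quadrat variance-to-mean ratio for lesion counts.
import numpy as np

def variance_mean_ratio(counts):
    """V/M ratio for lesion counts in contiguous sample fields."""
    counts = np.asarray(counts, dtype=float)
    return counts.var(ddof=1) / counts.mean()

# Hypothetical counts of inclusions in 12 adjacent cortical fields:
gci_counts = [3, 14, 1, 0, 12, 2, 15, 0, 1, 13, 2, 11]
print(variance_mean_ratio(gci_counts))  # ratio well above 1: clustering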

Relevance:

30.00%

Abstract:

The topography of the visual evoked magnetic response (VEMR) to a pattern onset stimulus was studied in five normal subjects using a single-channel BTi magnetometer. Topographic distributions were analysed at regular intervals following stimulus onset (chronotopography). Two distinct field distributions were observed with half-field stimulation: (1) activity corresponding to the CIIm, which remains stable for an average of 34 msec, and (2) activity corresponding to the CIIIm, which remains stable for about 50 msec. However, the full-field topography of the largest peak within the first 130 msec does not have a predictable latency or topography in different subjects. The data suggest that the appearance of this peak is dependent on the amplitude, latency and duration of the half-field CIIm peaks and the efficiency of half-field summation. Hence, topographic mapping is essential to correctly identify the CIIm peak in a full-field response, as waveform morphology, peak latency and polarity are not reliable indicators. © 1993.

Relevance:

30.00%

Abstract:

This thesis presents the results from an investigation into the merits of analysing Magnetoencephalographic (MEG) data in the context of dynamical systems theory. MEG is the study both of methods for measuring the minute magnetic flux variations at the scalp that result from neuro-electric activity in the neocortex, and of the techniques required to process and extract useful information from these measurements. As a result of its unique mode of action - directly measuring neuronal activity via the resulting magnetic field fluctuations - MEG possesses a number of useful qualities which could potentially make it a powerful addition to any brain researcher's arsenal. Unfortunately, MEG research has so far failed to fulfil its early promise, hindered in its progress by a variety of factors.

Conventionally, the analysis of MEG has been dominated by the search for activity in certain spectral bands - the so-called alpha, delta, beta, etc. that are commonly referred to in both academic and lay publications. Other efforts have centred upon generating optimal fits of "equivalent current dipoles" that best explain the observed field distribution. Many of these approaches carry the implicit assumption that the dynamics which result in the observed time series are linear, despite a variety of reasons to suggest that nonlinearity might be present in MEG recordings. By using methods that allow for nonlinear dynamics, the research described in this thesis avoids these restrictive linearity assumptions.

A crucial concept underpinning this project is the belief that MEG recordings are mere observations of the evolution of the true underlying state, which is unobservable and is assumed to reflect some abstract brain cognitive state. Further, we maintain that it is unreasonable to expect these processes to be adequately described in the traditional way: as a linear sum of a large number of frequency generators. One of the main objectives of this thesis is to show that much more effective and powerful analysis of MEG can be achieved if one assumes the presence of both linear and nonlinear characteristics from the outset. Our position is that the combined action of a relatively small number of these generators, coupled with external and dynamic noise sources, is more than sufficient to account for the complexity observed in MEG recordings.

Another problem that has plagued MEG researchers is the extremely low signal-to-noise ratios that are obtained. Because the magnetic flux variations resulting from actual cortical processes can be extremely minute, the measuring devices used in MEG are necessarily extremely sensitive. The unfortunate side-effect of this is that even commonplace phenomena such as the earth's geomagnetic field can easily swamp signals of interest. This problem is commonly addressed by averaging over a large number of recordings, but this has a number of notable drawbacks: in particular, it is difficult to synchronise high-frequency activity which might be of interest, and often these signals will be cancelled out by the averaging process. Other problems include the high cost and low portability of state-of-the-art multichannel machines, which have hitherto restricted the use of MEG to large institutions able to afford their procurement and maintenance.

In this project, we seek to address these issues by working almost exclusively with single-channel, unaveraged MEG data. We demonstrate the applicability of a variety of methods originating from the fields of signal processing, dynamical systems, information theory and neural networks to the analysis of MEG data. It is noteworthy that while modern signal processing tools such as independent component analysis, topographic maps and latent variable modelling have enjoyed extensive success in a variety of research areas, from financial time series modelling to the analysis of sunspot activity, their use in MEG analysis has thus far been extremely limited. It is hoped that this work will help to remedy this oversight.
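As a hedged illustration (the summary names the fields drawn on but not the specific algorithms), one elementary dynamical-systems tool for single-channel recordings is time-delay embedding, which reconstructs a state space from a single observed time series; all parameter values below are arbitrary examples:

# Illustrative sketch: time-delay embedding of a single-channel signal.
import numpy as np

def delay_embed(x, dim=5, lag=10):
    """Embed a 1-D signal into `dim`-dimensional delay vectors."""
    x = np.asarray(x, dtype=float)
    n = len(x) - (dim - 1) * lag
    if n <= 0:
        raise ValueError("signal too short for this (dim, lag)")
    return np.column_stack([x[i * lag : i * lag + n] for i in range(dim)])

# Hypothetical single-channel recording: a noisy oscillation as a stand-in.
t = np.linspace(0, 60, 6000)
meg = np.sin(2 * np.pi * 10 * t) + 0.3 * np.random.randn(t.size)
states = delay_embed(meg, dim=5, lag=10)   # rows are reconstructed states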

Relevance:

30.00%

Abstract:

Loss of optic nerve head (ONH) axons in primary open angle glaucoma (POAG) has been attributed to both mechanical and vascular factors. Confocal scanning laser ophthalmoscopy (cSLO) provides a promising tool for the topographic follow-up of the ONH in glaucoma, while scanning laser Doppler flowmetry (SLDF) facilitates the rapid non-invasive assessment of retinal capillary blood flow. The purposes of these investigations were to optimise the techniques and explore their potential to classify and monitor disease.

Preliminary investigations explored the reproducibility and validity of cSLO and SLDF and showed that, for cSLO, measurements in a model eye are accurate over a range of axial lengths, and that best reproducibility requires seven images per visit, with a contour line located on Elschnig's scleral ring and transferred automatically between images; and, for SLDF, that three perfusion images are required for optimum reproducibility and that physiological changes induced by gas perturbation can be measured.

Cross-sectional comparison of groups of normal subjects and early POAG patients showed that cSLO parameters differentiate the early POAG group, whereas blood volume measured by SLDF showed group differences in the superior nasal retina only.

Longitudinal investigation of ONH topography, haemodynamic and visual field indices in normal subjects and POAG patients showed that cSLO detects topographical change over time more frequently in the POAG group. Important parameters include C:D area ratio, cup and rim area, mean depth in contour, and volumes above and below reference and surface. Factor analysis identified "cup" and "rim" factors that can be used to detect change over time in individual patients. Blood flow changes were most apparent in the inferior nasal peripapillary retina of the POAG group. Perimetry is of clinical value for the identification of glaucoma but is less sensitive than cSLO for monitoring glaucomatous change.

Relevance:

30.00%

Abstract:

Exploratory analysis of data seeks to find common patterns to gain insights into the structure and distribution of the data. In geochemistry it is a valuable means of gaining insight into the complicated processes making up a petroleum system. Typically, linear visualisation methods like principal components analysis, linked plots, or brushing are used. These methods cannot be employed directly when dealing with missing data, and they struggle to capture global non-linear structure in the data, although they can do so locally. This thesis discusses a complementary approach based on a non-linear probabilistic model. The generative topographic mapping (GTM) enables the visualisation of the effects of very many variables on a single plot, which is able to incorporate more structure than a two-dimensional principal components plot. The model can deal with uncertainty and missing data, and allows for the exploration of the non-linear structure in the data. In this thesis a novel approach to initialising the GTM with arbitrary projections is developed. This makes it possible to combine GTM with algorithms like Isomap and to fit complex non-linear structure like the Swiss roll. Another novel extension is the incorporation of prior knowledge about the structure of the covariance matrix. This extension greatly enhances the modelling capabilities of the algorithm, resulting in a better fit to the data and better imputation of missing data. Additionally, an extensive benchmark study of the missing-data imputation capabilities of GTM is performed. Further, a novel approach based on missing data is introduced to benchmark the fit of probabilistic visualisation algorithms on unlabelled data. Finally, the work is complemented by evaluating the algorithms on real-life datasets from geochemical projects.
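As a hedged sketch of the kind of imputation benchmark described above (the exact procedure is not spelled out in this summary, and a simple column-mean imputer stands in for GTM-based imputation, which would need a full GTM implementation): hide a fraction of known entries at random, impute them, and score the reconstruction error on the hidden cells.

# Illustrative sketch: masking-based benchmark for missing-data imputation.
import numpy as np

rng = np.random.default_rng(0)

def benchmark_imputation(X, impute, frac=0.1):
    """Hide a fraction of entries, impute, and return RMSE on hidden cells."""
    X = np.asarray(X, dtype=float)
    mask = rng.random(X.shape) < frac          # entries to hide
    X_missing = np.where(mask, np.nan, X)
    X_hat = impute(X_missing)
    return float(np.sqrt(np.mean((X_hat[mask] - X[mask]) ** 2)))

def mean_impute(X):
    """Baseline imputer: replace NaNs with the column mean."""
    col_means = np.nanmean(X, axis=0)
    return np.where(np.isnan(X), col_means, X)

X = rng.normal(size=(200, 8))                  # stand-in for geochemical data
print(benchmark_imputation(X, mean_impute))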

Relevance:

30.00%

Abstract:

Purpose: To determine whether curve-fitting analysis of the ranked segment distributions of topographic optic nerve head (ONH) parameters, derived using the Heidelberg Retina Tomograph (HRT), provides a more effective statistical descriptor to differentiate the normal from the glaucomatous ONH. Methods: The sample comprised 22 normal control subjects (mean age 66.9 years; S.D. 7.8) and 22 glaucoma patients (mean age 72.1 years; S.D. 6.9) confirmed by reproducible visual field defects on the Humphrey Field Analyser. Three 10°-images of the ONH were obtained using the HRT. The mean topography image was determined and the HRT software was used to calculate the rim volume, rim area to disc area ratio, normalised rim area to disc area ratio and retinal nerve fibre cross-sectional area for each patient at 10°-sectoral intervals. The values were ranked in descending order, and each ranked-segment curve of ordered values was fitted using the least squares method. Results: There was no difference in disc area between the groups. The group mean cup-disc area ratio was significantly lower in the normal group (0.204 ± 0.16) compared with the glaucoma group (0.533 ± 0.083) (p < 0.001). The visual field indices, mean deviation and corrected pattern S.D., were significantly greater (p < 0.001) in the glaucoma group (-9.09 dB ± 3.3 and 7.91 ± 3.4, respectively) compared with the normal group (-0.15 dB ± 0.9 and 0.95 dB ± 0.8, respectively). Univariate linear regression provided the best overall fit to the ranked segment data. The equation parameters of the regression line manually applied to the normalised rim area-disc area and the rim area-disc area ratio data correctly classified 100% of normal subjects and glaucoma patients. In this study sample, the regression analysis of ranked segment parameters was more effective than conventional ranked segment analysis, in which glaucoma patients were misclassified in approximately 50% of cases. Further investigation in larger samples will enable the calculation of confidence intervals for normality. These reference standards will then need to be investigated for an independent sample to fully validate the technique. Conclusions: Using a curve-fitting approach to fit ranked segment curves retains information relating to the topographic nature of neural loss. Such methodology appears to overcome some of the deficiencies of conventional ranked segment analysis, and subject to validation in larger scale studies, may potentially be of clinical utility for detecting and monitoring glaucomatous damage. © 2007 The College of Optometrists.
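A hedged sketch of the ranked-segment curve fitting described above: sectoral values are ranked in descending order and a least-squares line is fitted to the ranked curve, whose slope and intercept then serve as descriptors. The 36 sectoral values below are hypothetical, not HRT output, and the study's classification criteria are not reproduced here:

# Illustrative sketch: least-squares fit to a ranked-segment curve.
import numpy as np

def ranked_segment_fit(sector_values):
    """Rank sectoral values (descending) and fit a line by least squares."""
    ranked = np.sort(np.asarray(sector_values, dtype=float))[::-1]
    ranks = np.arange(1, ranked.size + 1)
    slope, intercept = np.polyfit(ranks, ranked, deg=1)
    return slope, intercept

rng = np.random.default_rng(1)
rim_disc_ratio = np.clip(rng.normal(0.7, 0.1, size=36), 0.0, 1.0)
print(ranked_segment_fit(rim_disc_ratio))  # steeper slope = more uneven loss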

Relevance:

30.00%

Abstract:

Airborne LIDAR (Light Detection and Ranging) is a relatively new technique that rapidly and accurately measures micro-topographic features. This study compares topography derived from LIDAR with subsurface karst structures mapped in three dimensions with ground penetrating radar (GPR). Over 500 km of LIDAR data were collected in 1995 by the NASA ATM instrument. The LIDAR data were processed and analyzed to identify closed depressions. A GPR survey was then conducted at a 200 by 600 m site to determine whether the target features are associated with buried karst structures. The GPR survey resolved two major depressions in the top of a clay-rich layer at ~10 m depth. These features are interpreted as buried dolines and are associated spatially with subtle (<1 m) trough-like depressions in the topography resolved from the LIDAR data. This suggests that airborne LIDAR may be a useful tool for indirectly detecting subsurface features associated with sinkhole hazard.
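A hedged sketch of the closed-depression step (the paper's actual processing chain is not described in this summary): one standard approach fills all sinks in the gridded DEM by morphological reconstruction from the grid border and differences the filled surface against the original, so strictly positive differences flag closed depressions. The grid below is synthetic:

# Illustrative sketch: flag closed depressions via sink filling.
import numpy as np
from scipy import ndimage

def fill_depressions(dem):
    """Fill closed depressions by iterative reconstruction from the border."""
    filled = np.full_like(dem, dem.max())
    filled[0, :] = dem[0, :]; filled[-1, :] = dem[-1, :]
    filled[:, 0] = dem[:, 0]; filled[:, -1] = dem[:, -1]
    while True:
        lowered = np.maximum(dem, ndimage.grey_erosion(filled, size=(3, 3)))
        if np.array_equal(lowered, filled):
            return filled
        filled = lowered

dem = np.random.default_rng(2).normal(10.0, 0.2, size=(80, 80))
dem[30:40, 30:40] -= 1.0                      # synthetic closed depression
depth = fill_depressions(dem) - dem           # > 0 inside closed depressions
print(depth.max())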