974 results for Conformal Mapping Method


Relevance: 30.00%

Abstract:

The Leximancer system is a relatively new method for transforming lexical co-occurrence information from natural language into semantic patterns in an unsupervised manner. It employs two stages of co-occurrence information extraction (semantic and relational), using a different algorithm for each stage. The algorithms used are statistical, but they employ nonlinear dynamics and machine learning. This article is an attempt to validate the output of Leximancer, using a set of evaluation criteria taken from content analysis that are appropriate for knowledge discovery tasks.
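As a toy illustration of the lexical co-occurrence statistic such a system starts from (this is not Leximancer's algorithm, only the raw sentence-level pair counts; the sentences are invented):

```python
from collections import Counter
from itertools import combinations

# Count how often each pair of words appears in the same sentence: the raw
# co-occurrence information that concept-extraction systems build on.
sentences = [
    "the probe maps the optical force field",
    "the force field guides the probe particle",
    "semantic patterns emerge from word co-occurrence",
]
cooc = Counter()
for s in sentences:
    # One count per unordered word pair per sentence (duplicates collapsed).
    cooc.update(combinations(sorted(set(s.split())), 2))

print(cooc[("field", "force")])     # 2: the pair co-occurs in two sentences
print(cooc[("force", "semantic")])  # 0: never in the same sentence
```

Real systems would go on to normalise these counts and learn concepts from them; this sketch only shows the statistic itself.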

Relevance: 30.00%

Abstract:

We present a method for characterizing microscopic optical force fields. Two-dimensional vector force maps are generated by measuring the optical force applied to a probe particle for a grid of particle positions. The method is used to map out the force field created by the beam from a lensed fiber inside a liquid-filled microdevice. We find transverse gradient forces and axial scattering forces on the order of 2 pN per 10 mW laser power which are constant over a considerable axial range (>35 µm). These findings suggest future useful applications of lensed fibers for particle guiding/sorting. The propulsion of a small particle at a constant velocity of 200 µm s⁻¹ is shown.
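The grid-sampling idea behind such a vector force map can be sketched as follows; `measure_force` is a hypothetical stand-in for the actual optical measurement, with illustrative numbers loosely matching the scales quoted above:

```python
import numpy as np

def measure_force(x, y, k=0.1, f_axial=2.0):
    """Hypothetical stand-in for the measured optical force at probe position
    (x axial, y transverse), in pN: a constant axial scattering force plus a
    transverse gradient force pulling the probe toward the beam axis (y = 0)."""
    return np.array([f_axial, -k * y])

# Build the 2-D vector force map by sampling a grid of probe positions.
xs = np.linspace(0, 35, 8)    # axial positions, µm
ys = np.linspace(-5, 5, 5)    # transverse positions, µm
force_map = np.array([[measure_force(x, y) for y in ys] for x in xs])
print(force_map.shape)        # one force vector per grid point: (8, 5, 2)
```

In the experiment each entry would come from an actual force measurement on the trapped probe rather than a model function.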

Relevance: 30.00%

Abstract:

This paper illustrates a method for finding useful visual landmarks for performing simultaneous localization and mapping (SLAM). The method is based loosely on biological principles, using layers of filtering and pooling to create learned templates that correspond to different views of the environment. Rather than using a set of landmarks and reporting range and bearing to each landmark, this system maps views to poses. The challenge is to build a system that reports the same view for small changes in robot pose, but different views for larger changes in pose. The method has been developed to interface with the RatSLAM system, a biologically inspired method of SLAM. The paper describes the method of learning and recalling visual landmarks in detail, and shows the performance of the visual system in real robot tests.
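The learn-or-recall behaviour of such view templates can be sketched as below; this is an illustration of the idea only, not the paper's actual pipeline, and the threshold and views are invented:

```python
import numpy as np

def match_or_learn(view, templates, threshold=0.1):
    """Return the index of the stored template closest to the (filtered,
    pooled) view, learning the view as a new template when none is close
    enough. Small pose changes should recall the same template; large
    changes should create a new one."""
    if templates:
        errors = [np.mean(np.abs(view - t)) for t in templates]
        best = int(np.argmin(errors))
        if errors[best] < threshold:
            return best
    templates.append(view)
    return len(templates) - 1

templates = []
flat = np.zeros((8, 8))     # stand-in for a processed camera view
bright = np.ones((8, 8))
print(match_or_learn(flat, templates))          # 0: first view learned
print(match_or_learn(bright, templates))        # 1: distinct view learned
print(match_or_learn(flat + 0.01, templates))   # 0: small change, same view
```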

Relevance: 30.00%

Abstract:

A new passive shim design method is presented which is based on a magnetization mapping approach. Well-defined regions with similar magnetization values define the optimal number of passive shims, their shape and position. The new design method is applied in a shimming process without prior axial shim localization; this reduces the possibility of introducing new errors. The new shim design methodology reduces the number of iterations and the quantity of material required to shim a magnet. Only a few iterations (1-5) are required to shim a whole-body horizontal-bore magnet with a manufacturing error tolerance larger than 0.1 mm and smaller than 0.5 mm. One numerical example is presented.

Relevance: 30.00%

Abstract:

Government agencies responsible for riparian environments are assessing the combined utility of field survey and remote sensing for mapping and monitoring indicators of riparian zone condition. The objective of this work was to compare the Tropical Rapid Appraisal of Riparian Condition (TRARC) method to a satellite-image-based approach. TRARC was developed for rapid assessment of the environmental condition of savanna riparian zones. The comparison assessed mapping accuracy, representativeness of TRARC assessment, cost-effectiveness, and suitability for multi-temporal analysis. Two multi-spectral QuickBird images captured in 2004 and 2005 and coincident field data covering sections of the Daly River in the Northern Territory, Australia were used in this work. Both field and image data were processed to map riparian health indicators (RHIs) including percentage canopy cover, organic litter, canopy continuity, stream bank stability, and extent of tree clearing. Spectral vegetation indices, image segmentation and supervised classification were used to produce RHI maps. QuickBird image data were used to examine whether the spatial distribution of TRARC transects provided a representative sample of ground-based RHI measurements. Results showed that TRARC transects were required to cover at least 3% of the study area to obtain a representative sample. The mapping accuracy and costs of the image-based approach were compared to those of the ground-based TRARC approach. Results showed that TRARC was more cost-effective at smaller scales (1-100 km), while image-based assessment becomes more feasible at regional scales (100-1000 km). Finally, the ability to use both the image- and field-based approaches for multi-temporal analysis of RHIs was assessed. Change detection analysis demonstrated that image data can provide detailed information on gradual change, while the TRARC method was only able to identify coarser-scale changes. In conclusion, results from both methods were considered to complement each other if used at appropriate spatial scales.
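One standard spectral vegetation index of the kind used for such canopy-cover mapping is the NDVI; a minimal example (the reflectance values are illustrative, not from the study):

```python
import numpy as np

def ndvi(nir, red):
    """Normalised Difference Vegetation Index from near-infrared and red
    reflectance; higher values indicate denser green canopy."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red)

# Illustrative reflectances: dense riparian canopy vs sparse/cleared ground.
print(ndvi([0.50, 0.40], [0.10, 0.30]))   # ≈ [0.667, 0.143]
```

Per-pixel index maps like this are what the segmentation and supervised classification steps would then operate on.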

Relevance: 30.00%

Abstract:

This paper addresses the problem of novelty detection in the case that the observed data is a mixture of a known 'background' process contaminated with an unknown other process, which generates the outliers, or novel observations. The framework we describe here is quite general, employing univariate classification with incomplete information, based on knowledge of the distribution (the 'probability density function', 'pdf') of the data generated by the 'background' process. The relative proportion of this 'background' component (the 'prior' 'background' probability), the 'pdf' and the 'prior' probabilities of all other components are all assumed unknown. The main contribution is a new classification scheme that identifies the maximum proportion of observed data following the known 'background' distribution. The method exploits the Kolmogorov-Smirnov test to estimate the proportions, and afterwards the data are Bayes-optimally separated. Results, demonstrated with synthetic data, show that this approach can produce more reliable results than a standard novelty detection scheme. The classification algorithm is then applied to the problem of identifying outliers in the SIC2004 data set, in order to detect the radioactive release simulated in the 'joker' data set. We propose this method as a reliable means of novelty detection in the emergency situation which can also be used to identify outliers prior to the application of a more general automatic mapping algorithm. © Springer-Verlag 2007.
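A minimal sketch of the proportion-estimation idea on synthetic data (everything here is an assumption of the illustration: the background is taken as standard normal, the noise allowance is a heuristic KS band, and this is not the paper's exact estimator):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic mixture: a known standard-normal 'background' contaminated with
# an unknown 'novel' component (the true background proportion is 0.8).
data = np.concatenate([rng.normal(0.0, 1.0, 800), rng.normal(5.0, 0.5, 200)])

def violation(data, p, background_cdf=stats.norm.cdf):
    """Worst-case amount by which p * F_background exceeds the empirical CDF.

    If a fraction p of the data really follows the background distribution,
    the empirical CDF can only fall below p * F_background by sampling noise.
    """
    x = np.sort(data)
    ecdf = np.arange(1, len(x) + 1) / len(x)
    return np.max(p * background_cdf(x) - ecdf)

# Largest p still consistent with the data, using a heuristic ~95%
# Kolmogorov-Smirnov band as the noise allowance.
tol = 1.36 / np.sqrt(len(data))
ps = np.linspace(0.05, 1.0, 96)
p_hat = max(p for p in ps if violation(data, p) <= tol)
print(f"estimated background proportion: {p_hat:.2f}")
```

Given an estimated proportion, observations could then be split by comparing posterior weights, which is where the Bayes-optimal separation step would come in.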

Relevance: 30.00%

Abstract:

Automatically generating maps of a measured variable of interest can be problematic. In this work we focus on the monitoring network context where observations are collected and reported by a network of sensors, and are then transformed into interpolated maps for use in decision making. Using traditional geostatistical methods, estimating the covariance structure of data collected in an emergency situation can be difficult. Variogram determination, whether by method-of-moments estimators or by maximum likelihood, is very sensitive to extreme values. Even when a monitoring network is in a routine mode of operation, sensors can sporadically malfunction and report extreme values. If this extreme data destabilises the model, causing the covariance structure of the observed data to be incorrectly estimated, the generated maps will be of little value, and the uncertainty estimates in particular will be misleading. Marchant and Lark [2007] propose a REML estimator for the covariance, which is shown to work on small data sets with a manual selection of the damping parameter in the robust likelihood. We show how this can be extended to allow treatment of large data sets together with an automated approach to all parameter estimation. The projected process kriging framework of Ingram et al. [2007] is extended to allow the use of robust likelihood functions, including the two-component Gaussian and the Huber function. We show how our algorithm is further refined to reduce the computational complexity while at the same time minimising any loss of information. To show the benefits of this method, we use data collected from radiation monitoring networks across Europe. We compare our results to those obtained from traditional kriging methodologies and include comparisons with Box-Cox transformations of the data. We discuss the issue of whether to treat or ignore extreme values, making the distinction between the robust methods which ignore outliers and transformation methods which treat them as part of the (transformed) process. Using a case study, based on an extreme radiological event over a large area, we show how radiation data collected from monitoring networks can be analysed automatically and then used to generate reliable maps to inform decision making. We show the limitations of the methods and discuss potential extensions to remedy these.
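The Huber function mentioned above down-weights extreme residuals relative to a Gaussian (squared) loss, which is what makes the likelihood robust to faulty-sensor values; a minimal illustration (the tuning constant 1.345 is the conventional choice, not taken from the paper):

```python
import numpy as np

def huber(r, c=1.345):
    """Huber loss: quadratic for |r| <= c, linear beyond, so an extreme
    residual contributes far less than under a squared (Gaussian) loss."""
    r = np.asarray(r, dtype=float)
    return np.where(np.abs(r) <= c, 0.5 * r**2, c * (np.abs(r) - 0.5 * c))

residuals = np.array([0.2, -0.8, 1.0, 15.0])  # last value mimics a faulty sensor
print("squared:", 0.5 * residuals**2)          # outlier contributes 112.5
print("huber:  ", huber(residuals))            # outlier contributes ~19.3
```

Because the outlier's contribution grows only linearly, a single malfunctioning sensor can no longer dominate the covariance parameter estimates.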

Relevance: 30.00%

Abstract:

In this paper we discuss a fast Bayesian extension to kriging algorithms which has been used successfully for fast, automatic mapping in emergency conditions in the Spatial Interpolation Comparison 2004 (SIC2004) exercise. The application of kriging to automatic mapping raises several issues such as robustness, scalability, speed and parameter estimation. Various ad hoc solutions have been proposed and used extensively but they lack a sound theoretical basis. In this paper we show how observations can be projected onto a representative subset of the data, without losing significant information. This allows the complexity of the algorithm to grow as O(nm^2), where n is the total number of observations and m is the size of the subset of the observations retained for prediction. The main contribution of this paper is to further extend this projective method through the application of space-limited covariance functions, which can be used as an alternative to the commonly used covariance models. In many real-world applications the correlation between observations essentially vanishes beyond a certain separation distance. Thus it makes sense to use a covariance model that encompasses this belief, since this leads to sparse covariance matrices for which optimised sparse matrix techniques can be used. In the presence of extreme values we show that space-limited covariance functions offer an additional benefit: they maintain the smoothness locally but at the same time lead to a more robust, and compact, global model. We show the performance of this technique coupled with the sparse extension to the kriging algorithm on synthetic data and outline a number of computational benefits such an approach brings. To test the relevance to automatic mapping we apply the method to the data used in a recent comparison of interpolation techniques (SIC2004) to map the levels of background ambient gamma radiation. © Springer-Verlag 2007.
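A space-limited covariance can be illustrated with the classical spherical model, which is exactly zero beyond its range and therefore yields a sparse covariance matrix; the locations, range and sill below are illustrative, not from the paper:

```python
import numpy as np
from scipy import sparse
from scipy.spatial.distance import pdist, squareform

def spherical_cov(h, sill=1.0, range_=10.0):
    """Spherical covariance model: positive within the range, exactly zero
    beyond it (compact support)."""
    scaled = h / range_
    return np.where(h < range_, sill * (1 - 1.5 * scaled + 0.5 * scaled**3), 0.0)

rng = np.random.default_rng(1)
pts = rng.uniform(0.0, 100.0, size=(500, 2))   # illustrative sample locations
K = spherical_cov(squareform(pdist(pts)))      # dense covariance matrix
K_sparse = sparse.csr_matrix(K)                # entries beyond the range drop out

print(f"nonzero fraction: {K_sparse.nnz / K.size:.3f}")
```

With a range of 10 in a 100x100 domain only a few per cent of the entries are nonzero, which is the structure that optimised sparse matrix techniques exploit.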

Relevance: 30.00%

Abstract:

SPOT simulation imagery was acquired for a test site in the Forest of Dean in Gloucestershire, U.K. This data was qualitatively and quantitatively evaluated for its potential application in forest resource mapping and management. A variety of techniques are described for enhancing the image with the aim of providing species-level discrimination within the forest. Visual interpretation of the imagery was more successful than automated classification. The heterogeneity within the forest classes, and in particular between the forest and urban classes, resulted in poor discrimination using traditional 'per-pixel' automated methods of classification. Different means of assessing classification accuracy are proposed. Two techniques for measuring textural variation were investigated in an attempt to improve classification accuracy. The first of these, a sequential segmentation method, was found to be beneficial. The second, a parallel segmentation method, resulted in little improvement, though this may be related to a combination of the resolution and the size of the texture extraction area. The effect on classification accuracy of combining the SPOT simulation imagery with other data types is investigated. A grid cell encoding technique was selected as most appropriate for storing digitised topographic (elevation, slope) and ground truth data. Topographic data were shown to improve species-level classification, though with sixteen classes overall accuracies were consistently below 50%. Neither sub-division into age groups nor the incorporation of principal components and a band ratio significantly improved classification accuracy. It is concluded that SPOT imagery will not permit species-level classification within forested areas as diverse as the Forest of Dean. The imagery will be most useful as part of a multi-stage sampling scheme. The use of texture analysis is highly recommended for extracting maximum information content from the data. Incorporation of the imagery into a GIS will both aid discrimination and provide a useful management tool.

Relevance: 30.00%

Abstract:

Magnetoencephalography (MEG) can be used to reconstruct neuronal activity with high spatial and temporal resolution. However, this reconstruction problem is ill-posed, and requires the use of prior constraints in order to produce a unique solution. At present there are a multitude of inversion algorithms, each employing different assumptions, but one major problem when comparing the accuracy of these different approaches is that often the true underlying electrical state of the brain is unknown. In this study, we explore one paradigm, retinotopic mapping in the primary visual cortex (V1), for which the ground truth is known to a reasonable degree of accuracy, enabling the comparison of MEG source reconstructions with the true electrical state of the brain. Specifically, we attempted to localize, using a beamforming method, the induced responses in the visual cortex generated by a high contrast, retinotopically varying stimulus. Although well described in primate studies, it has been an open question whether the induced gamma power in humans due to high contrast gratings derives from V1 rather than the prestriate cortex (V2). We show that the beamformer source estimate in the gamma and theta bands does vary in a manner consistent with the known retinotopy of V1. However, these peak locations, although retinotopically organized, did not accurately localize to the cortical surface. We considered possible causes for this discrepancy and suggest that improved MEG/magnetic resonance imaging co-registration and the use of more accurate source models that take into account the spatial extent and shape of the active cortex may, in future, improve the accuracy of the source reconstructions.

Relevance: 30.00%

Abstract:

This paper presents and demonstrates a method for using magnetic resonance imaging to measure local pressure of a fluid saturating a porous medium. The method is tested both in a static system of packed silica gel and in saturated sintered glass cylinders experiencing fluid flow. The fluid used contains 3% gas in the form of 3-μm average diameter gas-filled 1,2-distearoyl-sn-glycero-3-phosphocholine (C18:0, MW: 790.16) liposomes suspended in 5% glycerol and 0.5% methyl cellulose with water. Preliminary studies at 2.35 T demonstrate relative magnetic resonance signal changes of 20% per bar in bulk fluid for an echo time TE=40 ms, and 6-10% in consolidated porous media for TE=10 ms, over the range 0.8-1.8 bar for a spatial resolution of 0.1 mm³ and a temporal resolution of 30 s. The stability of this solution with respect to applied pressure and methods for improving sensitivity are discussed. © 2007 Elsevier Inc. All rights reserved.

Relevance: 30.00%

Abstract:

Background: The Aston Medication Adherence Study was designed to examine non-adherence to prescribed medicines within an inner-city population using general practice (GP) prescribing data. Objective: To examine non-adherence patterns to prescribed oral medications within three chronic disease states and to compare differences in adherence levels between various patient groups to assist the routine identification of low adherence amongst patients within the Heart of Birmingham teaching Primary Care Trust (HoBtPCT). Setting: Patients within the area covered by HoBtPCT (England) prescribed medication for dyslipidaemia, type-2 diabetes and hypothyroidism, between 2000 and 2010 inclusively. HoBtPCT's population was disproportionately young, with seventy per cent of residents from Black and Minority Ethnic groups. Method: Systematic computational analysis of all medication issue data from 76 GP surgeries dichotomised patients into two groups (adherent and non-adherent) for each pharmacotherapeutic agent within the treatment groups. Dichotomised groupings were further analysed by recorded patient demographics to identify predictors of lower adherence levels. Results were compared to an analysis of a self-report measure of adherence [using the Modified Morisky Scale© (MMAS-8)] and clinical value data (cholesterol values) from GP surgery records. Main outcome: Adherence levels for different patient demographics, for patients within specific long-term treatment groups. Results: Analysis within all three groups showed that for patients with the following characteristics, adherence levels were statistically lower than for others; patients: younger than 60 years of age; whose religion is coded as "Islam"; whose ethnicity is coded as one of the Asian groupings or as "Caribbean", "Other Black" and "African"; whose primary language is coded as "Urdu" or "Bengali"; and whose postcodes indicate that they live within the most socioeconomically deprived areas of HoBtPCT. Statistically significant correlations between adherence status and results from the self-report measure of adherence and of clinical value data analysis were found. Conclusion: Using data from GP prescribing systems, a computerised tool to calculate individual adherence levels for oral pharmacotherapy for the treatment of diabetes, dyslipidaemia and hypothyroidism has been developed. The tool has been used to establish non-adherence levels within the three treatment groups and the demographic characteristics indicative of lower adherence levels, which in turn will enable the targeting of interventional support within HoBtPCT. © Koninklijke Nederlandse Maatschappij ter bevordering der Pharmacie 2013.
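The abstract does not state the computational rule used to dichotomise patients; a common convention for prescribing-data analyses of this kind is a proportion-of-days-covered threshold, sketched hypothetically here:

```python
def is_adherent(days_supplied, observation_days, threshold=0.8):
    """Dichotomise a patient as adherent when issued medication covers at
    least `threshold` of the observation window. The 0.8 cut-off is a common
    convention assumed for this sketch; the study's own rule is not given."""
    return (days_supplied / observation_days) >= threshold

print(is_adherent(300, 365))   # True: ~82% of days covered
print(is_adherent(200, 365))   # False: ~55% of days covered
```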

Relevance: 30.00%

Abstract:

The generalized Wiener-Hopf equation and the approximation methods are used to propose a perturbed iterative method to compute the solutions of a general class of nonlinear variational inequalities.

Relevance: 30.00%

Abstract:

ACM Computing Classification System (1998): J.3.

Relevance: 30.00%

Abstract:

In this project report I analyse how the practice of Body Mapping impacts the bodily performances of women classical musicians. The purpose is to study how the characteristics that define normative gender affect the body and its movement; to interrogate the body as the site where a patriarchal society constructs gender roles (more specifically, femininity); and consequently to assess the effects that these may produce in music performance. Drawing on interviews with six women classical musicians, autoethnography, and Body Mapping as a method, I created a workbook for women Body Mapping students. The goal of my research is to explore how the three fields (music performance, Body Mapping and feminist thought) can connect, thus laying the groundwork for possible future research in this area. Moreover, I seek to apply new approaches to music performance and to contribute, at a practical level, to the development of women classical musicians.