886 results for Noisy 3D data


Relevance:

80.00%

Publisher:

Abstract:

The ultrasonic measurement and imaging of tissue elasticity is currently under wide investigation and development as a clinical tool for the assessment of a broad range of diseases, but little account in this field has yet been taken of the fact that soft tissue is porous and contains mobile fluid. The ability to squeeze fluid out of tissue may have implications for conventional elasticity imaging, and may present opportunities for new investigative tools. When a homogeneous, isotropic, fluid-saturated poroelastic material with a linearly elastic solid phase and incompressible solid and fluid constituents is subjected to stress, the behaviour of the induced internal strain field is influenced by three material constants: the Young's modulus (E_s) and Poisson's ratio (ν_s) of the solid matrix and the permeability (k) of the solid matrix to the pore fluid. New analytical expressions were derived and used to model the time-dependent behaviour of the strain field inside simulated homogeneous cylindrical samples of such a poroelastic material undergoing sustained unconfined compression. A model-based reconstruction technique was developed to produce images of parameters related to the poroelastic material constants (E_s, ν_s, k) from a comparison of the measured and predicted time-dependent, spatially varying radial strain. Tests of the method using simulated noisy strain data showed that it is capable of producing three unique parametric images: an image of the Poisson's ratio of the solid matrix, an image of the axial strain (which was not time-dependent subsequent to the application of the compression) and an image representing the product of the aggregate modulus E_s(1−ν_s)/[(1+ν_s)(1−2ν_s)] of the solid matrix and the permeability of the solid matrix to the pore fluid. The analytical expressions were further used to numerically validate a finite element model and to clarify previous work on poroelastography.
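
Written out, the aggregate modulus quoted in the abstract is (the symbol H_A is the conventional one from the poroelasticity literature, not notation used in the abstract itself):

```latex
H_A = \frac{E_s\,(1 - \nu_s)}{(1 + \nu_s)(1 - 2\nu_s)}
```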

Relevance:

80.00%

Publisher:

Abstract:

This paper deals with Takagi-Sugeno (TS) fuzzy model identification of nonlinear systems using fuzzy clustering. In particular, an extended fuzzy Gustafson-Kessel (EGK) clustering algorithm, using robust competitive agglomeration (RCA), is developed for automatically constructing a TS fuzzy model from system input-output data. The EGK algorithm can automatically determine the 'optimal' number of clusters from the training data set. It is shown that the EGK approach is relatively insensitive to initialization and is less susceptible to local minima, a benefit derived from its agglomerative property. This issue is often overlooked in the current literature on nonlinear identification using conventional fuzzy clustering. Furthermore, the robust statistical concepts underlying the EGK algorithm help to alleviate the difficulty of cluster identification in the construction of a TS fuzzy model from noisy training data. A new hybrid identification strategy is then formulated, which combines the EGK algorithm with a locally weighted least-squares method for the estimation of local sub-model parameters. The efficacy of this new approach is demonstrated through function approximation examples and also by application to the identification of an automatic voltage regulation (AVR) loop for a simulated 3 kVA laboratory micro-machine system.
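
For illustration, a minimal numpy sketch of a locally weighted least-squares step of the kind described above, assuming fuzzy cluster memberships have already been obtained from the clustering stage (all function and variable names are illustrative, not from the paper):

```python
import numpy as np

def local_wls(X, y, memberships):
    """Locally weighted least-squares estimate of TS consequent parameters.

    X: (N, d) input samples; y: (N,) outputs;
    memberships: (N, c) fuzzy membership of each sample in each cluster.
    Returns a (c, d+1) array: affine parameters [a_i, b_i] per sub-model.
    """
    N = X.shape[0]
    Xe = np.hstack([X, np.ones((N, 1))])   # affine regressor [x, 1]
    params = []
    for i in range(memberships.shape[1]):
        W = np.diag(memberships[:, i])     # local weights for cluster i
        # theta_i = (Xe^T W Xe)^{-1} Xe^T W y
        theta = np.linalg.solve(Xe.T @ W @ Xe, Xe.T @ W @ y)
        params.append(theta)
    return np.array(params)
```

Each row of the returned array holds one local sub-model; the fuzzy memberships act as the weights that localise each fit to its cluster.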

Relevance:

80.00%

Publisher:

Abstract:

Three-dimensional reconstruction from volumetric medical images (e.g. CT, MRI) is a well-established technology used in patient-specific modelling. However, there are many cases where only 2D (planar) images may be available, e.g. if radiation dose must be limited or if retrospective data is being used from periods when 3D data was not available. This study aims to address such cases by proposing an automated method to create 3D surface models from planar radiographs. The method consists of (i) contour extraction from the radiograph using an Active Contour (Snake) algorithm, (ii) selection of a closest matching 3D model from a library of generic models, and (iii) warping the selected generic model to improve correlation with the extracted contour.
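
Step (i) of this pipeline can be illustrated with scikit-image's snake implementation; the file name, smoothing level, initial contour and snake parameters below are placeholders, not the study's settings:

```python
import numpy as np
from skimage import io, filters
from skimage.segmentation import active_contour

# Hypothetical input; the study's radiographs are not public.
img = io.imread("radiograph.png", as_gray=True)
smoothed = filters.gaussian(img, sigma=3)  # suppress noise before the snake

# Initialise the snake as an ellipse roughly enclosing the bone contour;
# centre and radii here are arbitrary placeholders (row, col coordinates).
s = np.linspace(0, 2 * np.pi, 400)
init = np.column_stack([200 + 150 * np.sin(s), 150 + 100 * np.cos(s)])

# The snake relaxes onto strong image edges, yielding the extracted contour.
contour = active_contour(smoothed, init, alpha=0.015, beta=10, gamma=0.001)
```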

This method proved to be fully automated, rapid and robust on a given set of radiographs. Measured mean surface distance error values were low when comparing models reconstructed from matching pairs of CT scans and planar X-rays (2.57–3.74 mm) and within the ranges of similar studies. Benefits of the method are that it requires only a single radiographic image to perform the surface reconstruction task and that it is fully automated. Mechanical simulations of loaded bone with different levels of reconstruction accuracy showed that the error in predicted strain fields grows in proportion to the error in geometric precision. In conclusion, models generated by the proposed technique are deemed acceptable for performing realistic patient-specific simulations when 3D data sources are unavailable.

Relevance:

80.00%

Publisher:

Abstract:

REVERIE (REal and Virtual Engagement in Realistic Immersive Environments [1]) targets novel research to address the demanding challenges involved in developing state-of-the-art technologies for online human interaction. The REVERIE framework enables users to meet, socialise and share experiences online by integrating cutting-edge technologies for 3D data acquisition and processing, networking, autonomy and real-time rendering. In this paper, we describe the innovative research that is showcased by the REVERIE integrated framework through richly defined use-cases which demonstrate the validity and potential of natural interaction in a virtual, immersive and safe environment. Previews of the REVERIE demo and its key research components can be viewed at www.youtube.com/user/REVERIEFP7.

Relevance:

80.00%

Publisher:

Abstract:

3D registration (an operation sometimes called alignment) is the process of transforming 3D data sets into a common coordinate system so as to align their common elements. Two data sets aligned together can be partial scans of two different views of the same object. They can also be two complete models, generated at different times, of the same object or of two distinct objects. Depending on the data sets to be processed, alignment methods are classified as rigid or non-rigid registration. In rigid registration, the data are generally acquired from rigid objects. The registration process can be accomplished by finding a single global rigid transformation (rotation, translation) to align the source data set with the target data set. In the non-rigid case, however, where the data are acquired from deformable objects, the registration process is more difficult because it is important to find both a global transformation and local deformations. In this thesis, three methods are proposed to solve the problem of non-rigid registration between two data sets (represented by triangular meshes) acquired from deformable objects. The first method registers two partially overlapping surfaces. It overcomes the limitations of previous methods in finding a large global deformation between two surfaces; however, it is limited to small local deformations on the surface so that the descriptor used remains valid. The second method builds on the framework of the first and is applied to data for which the deformation between the two surfaces consists of both a large global deformation and small local deformations. The third method, which draws on the other two, is proposed for registering data sets that are more complex. Although the quality it delivers is not as good as that of the second method, its computation time is about four times faster because the number of optimised parameters is halved. The effectiveness of the three methods rests on strategies through which correspondences are determined correctly and the deformation model is exploited judiciously. These methods are implemented and compared with other methods on various data sets in order to evaluate their robustness in solving the non-rigid registration problem. The proposed methods are promising solutions that can be applied in applications such as non-rigid multi-view registration, dynamic 3D reconstruction, 3D animation, or 3D model retrieval from databases.
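
The thesis's three non-rigid methods cannot be reconstructed from the abstract alone, but the rigid case it contrasts them with (a single global rotation and translation) has a standard closed-form solution once point correspondences are known. A minimal Kabsch-style sketch, not taken from the thesis:

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid alignment (Kabsch algorithm): find the rotation
    R and translation t that best map source points onto target points.
    src, dst: (N, 3) arrays of corresponding 3D points."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)              # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t
```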

Relevance:

80.00%

Publisher:

Abstract:

Consolidation consists of scheduling multiple virtual machines onto fewer servers in order to improve resource utilization and to reduce operational costs due to power consumption. However, virtualization technologies do not offer performance isolation, causing applications to slow down. In this work, we propose a performance enforcing mechanism composed of a slowdown estimator and an interference- and power-aware scheduling algorithm. The slowdown estimator determines, based on noisy slowdown data samples obtained from state-of-the-art slowdown meters, whether tasks will complete within their deadlines, invoking the scheduling algorithm if needed. When invoked, the scheduling algorithm builds performance- and power-aware virtual clusters to successfully execute the tasks. We conduct simulations injecting synthetic jobs whose characteristics follow the latest version of the Google Cloud tracelogs. The results indicate that our strategy can be efficiently integrated with state-of-the-art slowdown meters to fulfil contracted SLAs in real-world environments, while reducing operational costs by about 12%.
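
As an illustration of the estimator's role (not the paper's actual model), here is a minimal sketch of a deadline check driven by noisy slowdown samples; the function name, the median-based estimate and the safety margin are all assumptions:

```python
import statistics

def should_reschedule(slowdown_samples, ideal_runtime, progress,
                      elapsed, deadline, safety=1.1):
    """Decide whether a task is likely to miss its deadline.

    slowdown_samples: noisy slowdown factors (>= 1.0) from a slowdown meter.
    ideal_runtime: task runtime in isolation, in seconds.
    progress: fraction of the task's work already completed, in (0, 1].
    """
    s = statistics.median(slowdown_samples)           # robust to noisy samples
    remaining = ideal_runtime * (1.0 - progress) * s  # projected time left
    projected_finish = elapsed + remaining * safety
    return projected_finish > deadline  # True -> invoke the scheduler
```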

Relevance:

80.00%

Publisher:

Abstract:

Biometrics deals with the physiological and behavioural characteristics of an individual to establish identity. Fingerprint-based authentication is the most advanced biometric authentication technology. The minutiae-based fingerprint identification method offers a reasonable identification rate. The minutiae feature map consists of about 70-100 minutia points, and matching accuracy drops as the size of the database grows. Hence it is essential to make the fingerprint feature code as small as possible so that identification becomes easier. In this research, a novel global-singularity-based fingerprint representation is proposed. The fingerprint baseline, which is the crease line at the joint between the distal and intermediate phalanges in the fingerprint, is taken as the reference line. A polygon is formed from the singularities and the fingerprint baseline. The feature vectors are the polygon's angles, sides, area and type, and the ridge counts between the singularities. A 100% recognition rate is achieved with this method. The method is compared with the conventional minutiae-based recognition method in terms of computation time, receiver operating characteristic (ROC) and feature vector length. Speech is a behavioural biometric modality and can be used for identification of a speaker. In this work, MFCCs of text-dependent speech are computed and clustered using the k-means algorithm. A backpropagation-based artificial neural network is trained to identify the clustered speech code. The performance of the neural network classifier is compared with that of a VQ-based minimum-Euclidean-distance classifier. Biometric systems that use a single modality are usually affected by problems like noisy sensor data, non-universality and/or lack of distinctiveness of the biometric trait, unacceptable error rates, and spoof attacks. Multi-finger feature-level fusion based fingerprint recognition is developed, and its performance is measured in terms of the ROC curve. Score-level fusion of the fingerprint- and speech-based recognition systems is performed, and 100% accuracy is achieved over a considerable range of matching thresholds.
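
For the speech modality, the MFCC-plus-k-means front end described above can be sketched as follows, assuming the librosa and scikit-learn libraries; the file name and cluster count are placeholders, not values from the thesis:

```python
import librosa
from sklearn.cluster import KMeans

# Hypothetical recording; the thesis corpus is not public.
y, sr = librosa.load("speaker01_phrase.wav", sr=None)

# 13 MFCCs per frame, a conventional choice for speaker modelling.
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).T   # (frames, 13)

# Cluster the frames into a compact per-speaker code; k = 16 is a guess.
codebook = KMeans(n_clusters=16, n_init=10).fit(mfcc).cluster_centers_
```

The resulting codebook is the "clustered speech code" a classifier (neural network or VQ-based) would then be trained to identify.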

Relevance:

80.00%

Publisher:

Abstract:

MapFish is an open-source development framework for building web-mapping applications. MapFish is based on the OpenLayers API and the Geo extension of the Ext library, and extends the Pylons general-purpose web development framework with geo-specific functionality. This presentation first describes what the MapFish development framework provides and how it can help developers implement rich web-mapping applications. It then demonstrates, through real web-mapping projects, what can be achieved using MapFish: Geo Business Intelligence applications, 2D/3D data visualization, online/offline data editing, advanced vector print functionality, an advanced administration suite to build WebGIS applications from scratch, etc. In particular, the web-mapping application for the UN Refugee Agency (UNHCR) and a Regional Spatial Data Infrastructure will be demonstrated.

Relevance:

80.00%

Publisher:

Abstract:

This case study seeks to explain the response mechanisms embedded in Spain's migration policy, specifically Organic Law 4/2000. This type of research makes it possible to observe the degree of interaction between the variables in the context of the binational diplomatic relations between Colombia and Spain, and its objective is to analyse the response mechanisms of Colombian migrants to Spain's migration policy in the period 2005-2010. The actions, projects and dimensions of this migration policy are explained, and the position of Colombian migrants towards it is analysed insofar as it affects aspects related to employment, quality of life and health, drawing on Ernst Georg Ravenstein's push-pull theory, which makes it possible to establish the mechanisms and reasons of those who emigrate. Finally, the study identifies the actions used by the pressure groups that influence the shaping of the binational diplomatic relations.

Relevance:

80.00%

Publisher:

Abstract:

Vegetation and building morphology characteristics are investigated at 19 sites on a north-south LiDAR transect across the megacity of London. Local maxima of mean building height and building plan area density at the city centre are evident. Surprisingly, the mean vegetation height (zv3) is also found to be highest in the city centre. From the LiDAR data, various morphological parameters are derived, as well as shadow patterns. Continuous images of the effects of buildings, and of buildings plus vegetation, on sky view factor (Ψ) are derived. A general reduction of Ψ is found, indicating the importance of including vegetation when deriving Ψ in urban areas. The contribution of vegetation to shadowing at ground level is higher during summer than in autumn. Using these 3D data, the influence on urban climate and mean radiant temperature (Tmrt) is calculated with SOLWEIG. The results from these simulations highlight that vegetation can be most effective at reducing heat stress within dense urban environments in summer. The daytime average Tmrt is found to be lowest in the densest urban environments due to shadowing, foremost from buildings but also from trees. It is clearly shown that this method could be used to quantify the influence of vegetation on Tmrt within the urban environment. The results presented in this paper highlight a number of possible climate-sensitive planning practices for urban areas at the local scale (i.e. 10² to 5 × 10³ m).

Relevance:

80.00%

Publisher:

Abstract:

This paper presents a novel approach to the automatic classification of very large data sets composed of terahertz pulse transient signals, highlighting their potential use in biochemical, biomedical, pharmaceutical and security applications. Two different types of THz spectra are considered in the classification process. First, a binary classification study of poly-A and poly-C ribonucleic acid samples is performed. This is then contrasted with a difficult multi-class classification problem involving spectra from six different powder samples which, although they have fairly indistinguishable features in the optical spectrum, possess a few discernible spectral features in the terahertz part of the spectrum. Classification is performed using a complex-valued extreme learning machine algorithm that takes into account features in both the amplitude and the phase of the recorded spectra. Classification speed and accuracy are contrasted with those achieved using a support vector machine classifier. The study systematically compares the classifier performance achieved after adopting different Gaussian kernels when separating amplitude and phase signatures. The two signatures are presented as feature vectors for both training and testing purposes. The study confirms the utility of complex-valued extreme learning machine algorithms for classification of the very large data sets generated with current terahertz imaging spectrometers. The classifier can take into consideration heterogeneous layers within an object, as would be required within a tomographic setting, and is sufficiently robust to detect patterns hidden inside noisy terahertz data sets. The proposed study opens up the opportunity for the establishment of complex-valued extreme learning machine algorithms as new chemometric tools that will assist the wider proliferation of terahertz sensing technology for chemical sensing, quality control, security screening and clinical diagnosis. Furthermore, the proposed algorithm should also be very useful in other applications requiring the classification of very large datasets.
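
The core of an extreme learning machine is a fixed random hidden layer followed by a least-squares solve for the output weights; here is a minimal complex-valued sketch in numpy, with the hidden-layer size, activation and decision rule as assumptions rather than the paper's exact configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_celm(X, T, n_hidden=200):
    """Minimal complex-valued ELM sketch.

    X: (N, d) complex features (e.g. amplitude and phase of THz spectra).
    T: (N, c) one-hot class targets.
    """
    d = X.shape[1]
    W = (rng.standard_normal((d, n_hidden))
         + 1j * rng.standard_normal((d, n_hidden)))  # fixed random hidden layer
    H = np.tanh(X @ W)                               # complex activation
    beta = np.linalg.pinv(H) @ T                     # least-squares output weights
    return W, beta

def classify(X, W, beta):
    # Decide the class from the real part of the output scores.
    return np.argmax((np.tanh(X @ W) @ beta).real, axis=1)
```

Because only beta is learned (by a pseudo-inverse rather than iterative training), the approach scales to the very large data sets the paper targets.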

Relevance:

80.00%

Publisher:

Abstract:

Developing successful navigation and mapping strategies is an essential part of autonomous robot research. However, hardware limitations often make for inaccurate systems. This project serves to investigate efficient alternatives to mapping an environment by first creating a mobile robot, and then applying machine learning to the robot and controlling systems to increase the robustness of the robot system. My mapping system consists of a semi-autonomous robot drone in communication with a stationary Linux computer system. There are learning systems running on both the robot and the more powerful Linux system. The first stage of this project was devoted to designing and building an inexpensive robot. Utilizing my prior experience from independent studies in robotics, I designed a small mobile robot that was well suited for simple navigation and mapping research. When the major components of the robot base were designed, I began to implement my design. This involved physically constructing the base of the robot, as well as researching and acquiring components such as sensors. Implementing the more complex sensors became a time-consuming task, involving much research and assistance from a variety of sources. A concurrent stage of the project involved researching and experimenting with different types of machine learning systems. I finally settled on using neural networks as the machine learning system to incorporate into my project. Neural nets can be thought of as a structure of interconnected nodes through which information filters. The type of neural net that I chose to use requires a known set of data that serves to train the net to produce the desired output. Neural nets are particularly well suited for use with robotic systems, as they can handle cases that lie at the extreme edges of the training set, such as may be produced by "noisy" sensor data. Through experimenting with available neural net code, I became familiar with the code and its function, and modified it to be more generic and reusable for multiple applications of neural nets.
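
As a concrete illustration of the training idea described above (known input-output pairs driving weight updates), here is a toy supervised network on synthetic sensor-like data; none of it is from the project's code:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic training set: three fake range-sensor readings -> a binary
# steering label (turn left if the left sensor reads farther than the right).
X = rng.uniform(0.0, 1.0, (200, 3))
y = (X[:, 0] > X[:, 2]).astype(float).reshape(-1, 1)

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
W1, b1 = rng.standard_normal((3, 8)), np.zeros(8)
W2, b2 = rng.standard_normal((8, 1)), np.zeros(1)

for _ in range(5000):                          # plain batch gradient descent
    h = sigmoid(X @ W1 + b1)                   # hidden layer
    out = sigmoid(h @ W2 + b2)                 # network output
    d_out = (out - y) * out * (1 - out)        # backpropagate squared error
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out / len(X); b2 -= 0.5 * d_out.mean(0)
    W1 -= 0.5 * X.T @ d_h / len(X);   b1 -= 0.5 * d_h.mean(0)

accuracy = ((out > 0.5) == y).mean()           # training accuracy
```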

Relevance:

80.00%

Publisher:

Abstract:

The Baixa Grande Fault is located on the S-SW edge of the Potiguar Rift. It bounds the southern part of the Umbuzeiro Graben and the Apodi Graben. Although a number of studies have associated the complex deformation styles in the hanging wall of the Baixa Grande Fault with variations in geometry and displacement, none has applied modern computational techniques, such as geometric and kinematic validation, to address this problem. This work proposes a geometric analysis of the Baixa Grande Fault using seismic interpretation. The interpretation was made on 3D seismic data of the Baixa Grande Fault using the software OpendTect (dGB Earth Sciences). Direct structural modeling was also used, including analog direct modeling known as Folding Vectors, and 2D and 3D direct computational modeling. The Folding Vectors modeling showed great similarity to the conventional structural seismic interpretations of the Baixa Grande Fault; thus, the conventional interpretation was validated geometrically. The 2D direct computational modeling was performed on some sections of the 3D data of the Baixa Grande Fault in the software Move (Midland Valley Ltd) using the horizon modeling tool. The modeling confirms the influence of fault geometry on the hanging wall. The Baixa Grande Fault's ramp-flat-ramp geometry generates synforms on the concave segments of the fault and antiforms on the convex segments. In the fault region where the segments show no change in angle, the beds are displaced without deformation, and rollover occurs on the listric faults. In the direct 3D computational modeling, structural attributes were obtained as horizons on the hanging wall of the main fault after the simulation of several levels of deformation along the fault. The occurrence of structures indicating shortening in this modeling also indicates that the antiforms on the Baixa Grande Fault were influenced by the fault geometry.

Relevance:

80.00%

Publisher:

Abstract:

Graduate Program in Movement Sciences (Ciências da Motricidade) - IBRC

Relevance:

80.00%

Publisher:

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)