18 results for Sequence stratigraphy. Reservoir characterization. Isochore maps. Facies maps

at Universidad Politécnica de Madrid


Relevance:

40.00%

Publisher:

Abstract:

The Self-Organizing Map (SOM) is a neural network model that performs an ordered projection of a high-dimensional input space onto a low-dimensional topological structure. The process by which such a mapping is formed is defined by the SOM algorithm, a competitive, unsupervised and nonparametric method, since it makes no assumptions about the input data distribution. The feature maps provided by this algorithm have been successfully applied to vector quantization, clustering and high-dimensional data visualization. However, initializing the network topology and selecting the SOM training parameters are two difficult tasks, because the distribution of the input signals is unknown. A misconfiguration of these parameters can produce a low-quality feature map, so some measure of how well the SOM network adapts to the input data model is needed. Topology preservation is the concept most commonly used to implement such a measure. Several qualitative and quantitative methods have been proposed for measuring the degree of SOM topology preservation, particularly for Kohonen's model. In this work, two methods for measuring the topology preservation of the Growing Cell Structures (GCS) model are proposed: the topographic function and the topology preserving map.
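
A minimal sketch of the topographic-error idea behind such topology-preservation measures: for each input sample, check whether its best and second-best matching units are neighbours on the map lattice. This assumes a fixed rectangular Kohonen grid; the paper's GCS variant adapts the measure to a growing triangular mesh.

```python
import numpy as np

def topographic_error(weights, grid_shape, data):
    """weights: (rows*cols, dim) codebook; grid_shape: (rows, cols)."""
    rows, cols = grid_shape
    # lattice coordinates of each unit, used to test adjacency
    coords = np.array([(r, c) for r in range(rows) for c in range(cols)])
    errors = 0
    for x in data:
        d = np.linalg.norm(weights - x, axis=1)
        bmu1, bmu2 = np.argsort(d)[:2]          # best and second-best units
        # units are neighbours if they touch on the grid (8-connectivity)
        if np.abs(coords[bmu1] - coords[bmu2]).max() > 1:
            errors += 1
    return errors / len(data)                    # 0 = perfect preservation

rng = np.random.default_rng(0)
data = rng.normal(size=(500, 3))
weights = rng.normal(size=(6 * 6, 3))            # untrained 6x6 map
print(topographic_error(weights, (6, 6), data))
```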

Relevance:

40.00%

Publisher:

Abstract:

The adaptation to the European Higher Education Area (EHEA) is becoming a great challenge for the university community, especially for its teaching and research staff, who are actively involved in the teaching-learning process. It is also inducing a paradigm change for lecturers and students. Among the methodologies used in teaching innovation, systems thinking plays an important role, mainly when working with mind maps, and focuses on highlighting the essence of the knowledge, allowing its visual representation. In this paper, a method for using these mind maps to organize a particular subject is explained. This organization is completed with the definition of the duration, precedence relationships and resources for each of its activities, as well as with their corresponding monitoring. Mind maps are generated with the MINDMANAGER package, whilst Ms-PROJECT is used for establishing task relationships, durations, resources and monitoring. In summary, this paper describes a procedure, and the necessary set of applications, for self-organizing and managing timed, scheduled teaching tasks.

Relevance:

40.00%

Publisher:

Abstract:

In this paper, we consider a scenario where 3D scenes are modeled through a View+Depth representation, which is used at the rendering side to generate synthetic views for free-viewpoint video. Both types of data (view and depth) are encoded with two H.264/AVC encoders. In this scenario we address the reduction of the encoding complexity of the depth data. First, an analysis of the Mode Decision and Motion Estimation processes was conducted for both view and depth sequences, in order to capture the correlation between them. Taking advantage of this correlation, we propose a fast mode decision and motion estimation algorithm for depth encoding. Results show that the proposed algorithm reduces the computational burden with a negligible loss in the quality of the rendered synthetic views. Quality was measured using the Video Quality Metric.
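
A hypothetical sketch of the mode-decision shortcut described above: the mode chosen for a texture macroblock restricts the candidate modes tested for the collocated depth macroblock. The mode names and the candidate table are illustrative assumptions, not the H.264/AVC reference-software implementation or the paper's exact mapping.

```python
# texture-view decision -> reduced candidate set for the depth macroblock
TEXTURE_TO_DEPTH_CANDIDATES = {
    "SKIP":  ["SKIP", "16x16"],           # static texture -> flat depth
    "16x16": ["SKIP", "16x16"],
    "16x8":  ["16x16", "16x8", "8x16"],
    "8x16":  ["16x16", "16x8", "8x16"],
    "8x8":   ["8x8", "16x8", "8x16"],     # detailed texture: keep small
    "INTRA": ["INTRA", "16x16"],          # partitions near depth edges
}

def depth_candidate_modes(texture_mode):
    """Candidate depth modes for a macroblock, given the collocated
    texture-view decision; falls back to an exhaustive search."""
    all_modes = ["SKIP", "16x16", "16x8", "8x16", "8x8", "INTRA"]
    return TEXTURE_TO_DEPTH_CANDIDATES.get(texture_mode, all_modes)

print(depth_candidate_modes("SKIP"))   # 2 RD evaluations instead of 6
```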

Relevance:

40.00%

Publisher:

Abstract:

The elemental distribution of as-received (non-charged) and charged Li-ion battery positive electrodes containing LixNi0.8Co0.15Al0.05O2 (0.75 ≤ x ≤ 1.0) microparticles as the active material is characterized by combining μ-PIXE and μ-PIGE techniques. PIGE measurements show that the Li distribution in as-received electrodes is inhomogeneous (Li-rich and Li-depleted regions exist, corresponding to the distribution of secondary particles), but homogeneous within the individual secondary microparticles studied. The dependence of the Li distribution on electrode thickness and on charging conditions is characterized by measuring Li distribution maps on specifically fabricated cross-sectional samples. These data show that decreasing the electrode thickness down to 35 μm and charging the batteries at a slow rate give rise to more homogeneous Li depth profiles.

Relevance:

40.00%

Publisher:

Abstract:

The physico-chemical and organoleptic characteristics of food depend largely on the microscopic-level distribution of gases and water, and on connectivity and mobility through the pores. The microstructural characterization of food can be accomplished by Magnetic Resonance Imaging (MRI) and Nuclear Magnetic Resonance (NMR) spectroscopy combined with diffusion and multidimensional relaxometry methods. In this work, funded by the EC project InsideFood, several artificial food models based on foams and gels were studied using MRI and 2D relaxometry. Two kinds of foam were used, one sugarless and one with sugar; one half of a syringe was filled with the sugarless foam and the other half with the sugar foam. MRI and NMR experiments were then performed, and the evolution of the sample was observed over 3 days in order to quantify macrostructural changes through proton-density images and microstructural ones through T1-T2 maps, acquired with an inversion-CPMG sequence. The proton-density images show that after 16 hours the macrostructural changes could be differentiated, such as the appearance of free water due to a syneresis phenomenon; at the interface, a brighter area is visible after 16 hours owing to this free water. Moreover, the bidimensional (T1-T2) relaxometry made it possible to differentiate microscopic changes: differences in pore size can be observed, as well as the evolution of the microstructure after 30.5 hours, reflected in the redistribution of free water through the larger pores and in capillarity phenomena between the two foams.

Relevance:

40.00%

Publisher:

Abstract:

Novel formulas are proposed for evaluating the accuracy of cartography.

Relevance:

40.00%

Publisher:

Abstract:

We propose a new Bayesian framework for automatically determining the position (location and orientation) of an uncalibrated camera from the observations of moving objects and a schematic map of the passable areas of the environment. Our approach takes advantage of static and dynamic information on the scene structure through prior probability distributions for object dynamics. The proposed approach restricts the plausible positions where the sensor can be located while taking into account the inherent ambiguity of the given setting. The framework samples from the posterior probability distribution of the camera position via data-driven MCMC, guided by an initial geometric analysis that restricts the search space. A Kullback-Leibler divergence analysis then yields the final camera position estimate while explicitly isolating ambiguous settings. The approach is evaluated in synthetic and real environments, showing satisfactory performance in both ambiguous and unambiguous settings.
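
A minimal Metropolis-Hastings skeleton for sampling a camera position (x, y, heading) from a posterior, in the spirit of the data-driven MCMC step described above. The `log_posterior` here is a toy stand-in; the paper builds it from object-dynamics priors and the passable-area map.

```python
import numpy as np

rng = np.random.default_rng(1)

def log_posterior(pose):
    x, y, theta = pose
    # toy example: camera believed near (5, 2), facing ~90 degrees
    return -0.5 * (((x - 5) / 1.0) ** 2 + ((y - 2) / 1.0) ** 2
                   + ((theta - np.pi / 2) / 0.3) ** 2)

def metropolis_hastings(init, n_samples=5000, step=(0.3, 0.3, 0.1)):
    pose = np.asarray(init, dtype=float)
    logp = log_posterior(pose)
    samples = []
    for _ in range(n_samples):
        proposal = pose + rng.normal(scale=step)     # random-walk proposal
        logp_new = log_posterior(proposal)
        if np.log(rng.uniform()) < logp_new - logp:  # accept/reject
            pose, logp = proposal, logp_new
        samples.append(pose.copy())
    return np.array(samples)

samples = metropolis_hastings([0.0, 0.0, 0.0])
print(samples[1000:].mean(axis=0))   # posterior mean after burn-in
```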

Relevance:

40.00%

Publisher:

Abstract:

In the context of aerial imagery, one of the first steps toward a coherent processing of the information contained in multiple images is geo-registration, which consists in assigning geographic 3D coordinates to the pixels of the image. This enables accurate alignment and geo-positioning of multiple images, detection of moving objects, and fusion of data acquired from multiple sensors. Existing approaches to this problem require, in addition to a precise characterization of the camera sensor, high-resolution referenced images or terrain elevation models, which are usually not publicly available or are out of date. Building on the idea of developing technology that does not need a reference terrain elevation model, we propose a geo-registration technique that applies variational methods to obtain a dense and coherent surface elevation model that replaces the reference model. The surface elevation model is built by interpolation of scattered 3D points, which are obtained in a two-step process following a classical stereo pipeline: first, coherent disparity maps between image pairs of a video sequence are estimated, and then image point correspondences are back-projected. The proposed variational method enforces continuity of the disparity map not only along epipolar lines (as done by previous geo-registration techniques) but also across them, over the full 2D image domain. In the experiments, aerial images from synthetic video sequences were used to validate the proposed technique.
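
A minimal sketch of the final interpolation step, turning scattered back-projected 3D points into a dense surface elevation model. scipy's `griddata` is used here as a simple stand-in for the paper's variational interpolation; the synthetic terrain is an assumption for illustration.

```python
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(2)
# scattered (east, north, elevation) points from stereo back-projection
pts = rng.uniform(0, 100, size=(2000, 2))
elev = 50 + 10 * np.sin(pts[:, 0] / 15) + 5 * np.cos(pts[:, 1] / 20)

# dense grid over the mapped area
xi, yi = np.meshgrid(np.linspace(0, 100, 256), np.linspace(0, 100, 256))
dem = griddata(pts, elev, (xi, yi), method="linear")

# fill the few cells outside the convex hull with nearest-neighbour values
mask = np.isnan(dem)
dem[mask] = griddata(pts, elev, (xi[mask], yi[mask]), method="nearest")
print(dem.shape, dem.min(), dem.max())
```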

Relevance:

40.00%

Publisher:

Abstract:

Rating the trust and reputation of individual nodes has been shown to be an effective approach in distributed environments for improving security, supporting decision-making and promoting node collaboration. Nevertheless, these systems are vulnerable to deliberately false or unfair testimonies. In one scenario, attackers collude to give negative feedback on a victim in order to lower or destroy its reputation; this is known as a bad-mouthing attack. In another scenario, a number of entities agree to give positive feedback on an entity (often with adversarial intentions); this is known as ballot stuffing. Both attack types can significantly degrade the performance of the network. Existing solutions for coping with these attacks concentrate mainly on prevention techniques. In this work, we propose a solution that detects and isolates such attackers, thereby preventing them from further spreading their malicious activity. The approach is based on detecting outliers through clustering, in this case with self-organizing maps. An important advantage of this approach is that it places no restrictions on the training data, so no data pre-processing is needed. Testing results demonstrate the capability of the approach to detect both bad-mouthing and ballot-stuffing attacks in various scenarios.
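
A sketch of the outlier-detection idea: feedback-behaviour vectors are clustered with a self-organizing map, and testimonies far from every SOM prototype (large quantization error) are flagged. It uses the third-party `minisom` package (pip install minisom); the two-feature vectors, the training window of presumed-honest feedback, and the threshold rule are illustrative simplifications of the paper's scheme (which claims no restrictions on training data).

```python
import numpy as np
from minisom import MiniSom

rng = np.random.default_rng(3)
# accumulated feedback-behaviour vectors, assumed predominantly honest
honest = rng.normal(loc=[0.8, 0.1], scale=0.05, size=(95, 2))
# colluding bad-mouthers: extreme negative-feedback behaviour
attackers = rng.normal(loc=[0.1, 0.9], scale=0.05, size=(5, 2))

som = MiniSom(5, 5, 2, sigma=1.0, learning_rate=0.5, random_seed=3)
som.train_random(honest, 2000)            # learn the normal behaviour

weights = som.get_weights()
incoming = np.vstack([honest, attackers])
qerr = np.array([np.linalg.norm(v - weights[som.winner(v)])
                 for v in incoming])      # distance to best-matching unit
threshold = qerr.mean() + 2 * qerr.std()  # simple outlier rule
print("suspected attackers:", np.where(qerr > threshold)[0])  # 95..99
```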

Relevance:

40.00%

Publisher:

Abstract:

The aim of this work is to provide the methods needed to register and fuse the endo-epicardial signal intensity (SI) maps extracted from contrast-enhanced magnetic resonance imaging (ceMRI) with X-ray coronary angiograms, using an intrinsic registration-based algorithm, to help in the pre-planning and guidance of catheterization procedures. The fusion of angiograms with SI maps is treated as a 2D-3D pose estimation in which each image point is projected onto a Plücker line, and the screw representation of rigid motions is minimized using a gradient descent method. The resulting transformation is applied to the SI map, which is then projected and fused onto each angiogram. The proposed method was tested on clinical datasets from 6 patients with prior myocardial infarction. The registration procedure is optionally combined with an iterative closest point (ICP) algorithm that aligns the ventricular contours segmented from two ventriculograms.
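
A compact 2D ICP sketch for the optional contour-alignment step: each iteration matches every source point to its nearest target point and solves the best rigid transform in closed form via SVD. The "ventricular contours" here are synthetic stand-ins.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid(src, dst):
    """Closed-form rigid transform (R, t) minimizing ||R src + t - dst||."""
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=30):
    tree = cKDTree(dst)
    cur = src.copy()
    for _ in range(iters):
        _, idx = tree.query(cur)        # nearest-neighbour correspondences
        R, t = best_rigid(cur, dst[idx])
        cur = cur @ R.T + t
    return cur

t = np.linspace(0, 2 * np.pi, 200)
contour = np.c_[1.5 * np.cos(t), np.sin(t)]          # "ventricle" outline
ang = 0.4                                            # unknown misalignment
Rot = np.array([[np.cos(ang), -np.sin(ang)], [np.sin(ang), np.cos(ang)]])
moved = contour @ Rot.T + [0.3, -0.2]
aligned = icp(moved, contour)
print(np.abs(aligned - contour).max())               # ~0 after convergence
```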

Relevance:

40.00%

Publisher:

Abstract:

In this paper we present an adaptive spatio-temporal filter that aims to improve the accuracy and temporal stability of low-cost depth cameras. The proposed system is composed of three blocks that are used to build a reliable depth map of static scenes. An adaptive joint-bilateral filter obtains consistent depth maps by jointly considering depth and video information and by adapting its parameters to different levels of estimated noise. Kalman filters reduce the temporal random fluctuations of the measurements. Finally, an interpolation algorithm produces consistent depth values in the regions where depth information is not available. Results show that this approach considerably improves depth map quality by exploiting spatio-temporal information and by adapting its parameters to different noise levels.
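
A minimal sketch of the temporal block: a per-pixel scalar Kalman filter that damps random fluctuations in successive depth frames. A static scene is assumed (identity state transition), and the noise variances are fixed illustrative values, where the paper adapts them to the estimated noise level.

```python
import numpy as np

class PixelwiseKalman:
    def __init__(self, shape, process_var=1e-4, meas_var=4e-2):
        self.x = np.zeros(shape)          # filtered depth estimate
        self.p = np.full(shape, 1.0)      # estimate variance
        self.q, self.r = process_var, meas_var

    def update(self, z):
        p = self.p + self.q               # predict (static scene)
        k = p / (p + self.r)              # Kalman gain
        self.x = self.x + k * (z - self.x)
        self.p = (1 - k) * p
        return self.x

rng = np.random.default_rng(4)
true_depth = np.full((4, 4), 2.0)         # metres
kf = PixelwiseKalman(true_depth.shape)
for _ in range(60):                       # 60 noisy frames
    frame = true_depth + rng.normal(scale=0.2, size=true_depth.shape)
    est = kf.update(frame)
print(np.abs(est - true_depth).max())     # well below the per-frame noise
```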

Relevance:

40.00%

Publisher:

Abstract:

In this paper we present an efficient hole-filling strategy that improves the quality of the depth maps obtained with the Microsoft Kinect device. The proposed approach is based on a joint-bilateral filtering framework that includes spatial and temporal information. The missing depth values are obtained by iteratively applying a joint-bilateral filter to their neighboring pixels. The filter weights are selected considering three factors: visual data, depth information and a temporal-consistency map. Video and depth data are combined to improve depth map quality in the presence of edges and homogeneous regions. Finally, the temporal-consistency map is generated to track the reliability of the depth measurements near the hole regions. The obtained depth values are included iteratively in the filtering process of successive frames, and the accuracy of the depth values in the hole regions increases as new samples are acquired and filtered.
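
A simplified sketch of the hole-filling loop: each missing depth pixel receives a joint-bilateral average of its valid neighbours, weighted by spatial distance and grayscale similarity, and passes repeat until the hole is closed. The temporal-consistency weighting from the paper is omitted here, and all parameters are illustrative.

```python
import numpy as np

def fill_holes(depth, gray, radius=2, sigma_s=2.0, sigma_c=10.0, passes=10):
    d = depth.astype(float).copy()
    h, w = d.shape
    for _ in range(passes):
        holes = np.argwhere(np.isnan(d))
        if holes.size == 0:
            break
        new = d.copy()
        for y, x in holes:
            ys = slice(max(0, y - radius), min(h, y + radius + 1))
            xs = slice(max(0, x - radius), min(w, x + radius + 1))
            patch, gpatch = d[ys, xs], gray[ys, xs]
            valid = ~np.isnan(patch)
            if not valid.any():
                continue                       # filled in a later pass
            yy, xx = np.mgrid[ys, xs]
            w_s = np.exp(-((yy - y)**2 + (xx - x)**2) / (2 * sigma_s**2))
            w_c = np.exp(-(gpatch - gray[y, x])**2 / (2 * sigma_c**2))
            wgt = (w_s * w_c)[valid]
            new[y, x] = (wgt * patch[valid]).sum() / wgt.sum()
        d = new
    return d

rng = np.random.default_rng(5)
gray = rng.uniform(0, 255, (32, 32))
depth = np.full((32, 32), 1.5)
depth[10:14, 10:14] = np.nan                    # a synthetic hole
print(np.isnan(fill_holes(depth, gray)).sum())  # 0 holes left
```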

Relevance:

40.00%

Publisher:

Abstract:

We propose a new method to automatically refine a facial disparity map obtained with standard cameras under conventional illumination conditions by using a smart combination of traditional computer vision and 3D graphics techniques. Our system takes as input two stereo images acquired with standard (calibrated) cameras, uses dense disparity estimation strategies to obtain a coarse initial disparity map, and uses SIFT to detect and match several feature points on the subject's face. We then use these points as anchors to modify the disparity in the facial area by building a Delaunay triangulation of their convex hull and interpolating their disparity values inside each triangle. We thus obtain a refined disparity map that provides a much more accurate representation of the subject's facial features. This refined facial disparity map may easily be transformed, through the camera calibration parameters, into a depth map that can be used, also automatically, to improve the facial mesh of a 3D avatar to match the subject's real features.
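
A condensed sketch of the refinement step: matched anchor points carry trusted disparity values, and disparities inside their convex hull are re-interpolated over a Delaunay triangulation (scipy's LinearNDInterpolator builds that triangulation internally). The anchor list and the coarse map here are synthetic stand-ins for real SIFT matches.

```python
import numpy as np
from scipy.interpolate import LinearNDInterpolator

rng = np.random.default_rng(6)
coarse = rng.normal(loc=40, scale=3, size=(120, 160))   # noisy initial map

# (x, y) anchor positions and disparities, standing in for SIFT matches
anchors = rng.uniform((20, 20), (140, 100), size=(40, 2))
anchor_disp = 40 + 0.05 * anchors[:, 0]                 # smooth "face" shape

interp = LinearNDInterpolator(anchors, anchor_disp)     # Delaunay inside
xx, yy = np.meshgrid(np.arange(160), np.arange(120))
refined_face = interp(np.c_[xx.ravel(), yy.ravel()]).reshape(120, 160)

# keep the coarse estimate outside the anchors' convex hull
refined = np.where(np.isnan(refined_face), coarse, refined_face)
print(refined.shape)
```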

Relevance:

40.00%

Publisher:

Abstract:

In this paper we provide a method for visualizing the similarity relationships between the items of collaborative filtering recommender systems, as well as the relative importance of each of them. The objective is to offer visual representations of the recommender system's set of items and of their relationships; these graphs show where the most representative information can be found and which items are rated most similarly by the recommender system's community of users. The visual representations take the shape of phylogenetic trees, displaying the numerical similarity and the reliability between each pair of items considered similar. As a case study we provide the results obtained with the public Movielens 1M database, which contains 3900 movies.
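
A small sketch of one way to build such a tree: pairwise item similarities (here Pearson correlations over a toy ratings matrix) are turned into distances and arranged with hierarchical clustering, giving a dendrogram akin in spirit to the paper's phylogenetic-tree layouts; the similarity measure and linkage choice are assumptions, not the paper's exact method.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import squareform

rng = np.random.default_rng(7)
ratings = rng.integers(1, 6, size=(50, 8)).astype(float)  # users x items

sim = np.corrcoef(ratings.T)                 # item-item similarity
dist = 1 - sim                               # similarity -> distance
np.fill_diagonal(dist, 0.0)
dist = np.clip(dist, 0, None)                # guard tiny negatives

tree = linkage(squareform(dist, checks=False), method="average")
info = dendrogram(tree, no_plot=True)        # leaf order of the item tree
print("item ordering in the tree:", info["leaves"])
```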