33 results for Text-Based Image Retrieval

at Universidad Politécnica de Madrid


Relevance:

100.00%

Abstract:

The emergence of cloud datacenters enhances the capability of online data storage. Since massive amounts of data are stored in datacenters, it is necessary to effectively locate and access data of interest in such a distributed system. However, traditional search techniques only allow users to search images over exact-match keywords through a centralized index. These techniques cannot satisfy the requirements of content-based image retrieval (CBIR). In this paper, we propose a scalable image retrieval framework which can efficiently support content similarity search and semantic search in a distributed environment. Its key idea is to integrate image feature vectors into distributed hash tables (DHTs) by exploiting the property of locality sensitive hashing (LSH). Thus, images with similar content are most likely gathered on the same node without knowledge of any global information. For searching semantically close images, relevance feedback is adopted in our system to bridge the gap between low-level visual features and high-level semantics. We show that our approach yields a high recall rate with good load balance and requires only a small number of hops.
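The DHT mapping described above can be sketched with a toy random-hyperplane LSH, where the sign pattern of dot products serves as the hash key that would route a feature vector to a node. This is a minimal illustration of the principle, not the paper's implementation; the dimensions, hyperplane count and vectors below are invented:

```python
# Toy random-hyperplane LSH: similar feature vectors tend to produce
# the same bit-string key, which a DHT would route to a single node.
import random

def make_hyperplanes(dim, n_planes, seed=42):
    rng = random.Random(seed)
    return [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(n_planes)]

def lsh_key(vec, planes):
    # The sign of the dot product with each random hyperplane gives one bit.
    bits = ""
    for p in planes:
        dot = sum(v * w for v, w in zip(vec, p))
        bits += "1" if dot >= 0 else "0"
    return bits

planes = make_hyperplanes(dim=4, n_planes=8)
a = [0.9, 0.1, 0.2, 0.8]
b = [0.88, 0.12, 0.19, 0.81]   # very close to a
c = [-0.9, 0.5, -0.3, -0.7]    # far from a
key_a, key_b, key_c = (lsh_key(v, planes) for v in (a, b, c))
```

Vectors whose signs agree on every hyperplane share a key, so they would land on the same DHT node without any global coordination.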

Relevance:

100.00%

Abstract:

Specialized search engines such as PubMed, MedScape or Cochrane have dramatically increased the visibility of biomedical scientific results. These web-based tools allow physicians to access scientific papers instantly. However, this decisive improvement has not had a proportional impact on clinical practice, due to the lack of advanced search methods. Even queries highly specific to a concrete pathology frequently retrieve too much information, with publications relevant to the patients treated by the physician falling beyond the scope of the results examined. In this work we present a new method to improve scientific article search using patient information. Two pathologies have been used within the project to retrieve literature relevant to patient data and to integrate it with other sources. Promising results suggest the suitability of the approach, highlighting publications that deal with patient features and facilitating literature search for physicians.
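As a purely hypothetical sketch of the idea (the abstract does not describe the actual algorithm), one could re-rank retrieved publications by the overlap between their keywords and terms extracted from the patient record. All article and patient data below are invented:

```python
# Hypothetical re-ranking of search results by patient-record overlap.
def rank_by_patient(articles, patient_terms):
    patient = set(patient_terms)
    def score(article):
        kw = set(article["keywords"])
        return len(kw & patient) / len(kw | patient)   # Jaccard overlap
    return sorted(articles, key=score, reverse=True)

articles = [
    {"title": "Statin therapy in elderly patients",
     "keywords": ["statin", "elderly", "cardiology"]},
    {"title": "Pediatric asthma management",
     "keywords": ["asthma", "pediatric"]},
    {"title": "Hypertension and statins",
     "keywords": ["statin", "hypertension"]},
]
patient_terms = ["elderly", "hypertension", "statin"]
ranked = rank_by_patient(articles, patient_terms)
```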

Relevance:

100.00%

Abstract:

ImageCLEF is a pilot experiment run at CLEF 2003 for cross-language image retrieval using textual captions related to image contents. In this paper, we describe the participation of the MIRACLE research team (Multilingual Information RetrievAl at CLEF), detailing the different experiments and discussing their preliminary results.

Relevance:

100.00%

Abstract:

Classification of a high-resolution Quickbird image using the object-based image analysis technique.

Relevance:

100.00%

Abstract:

Classification of a high-resolution Quickbird image using the object-based image analysis technique.

Relevance:

100.00%

Abstract:

Multi-view microscopy techniques such as Light-Sheet Fluorescence Microscopy (LSFM) are powerful tools for 3D + time studies of live embryos in developmental biology. The sample is imaged from several points of view, acquiring a set of 3D views that are then combined, or fused, to overcome their individual limitations. View fusion is still an open problem despite recent contributions in the field. We developed a wavelet-based multi-view fusion method that, thanks to the properties of the wavelet decomposition, is able to combine the complementary directional information from all available views into a single volume. Our method is demonstrated on LSFM acquisitions of live sea urchin and zebrafish embryos. The fusion results show improved overall contrast and detail when compared with any of the acquired volumes. The proposed method does not need knowledge of the system's point spread function (PSF) and performs better than other existing PSF-independent fusion methods.
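The fusion rule can be illustrated in one dimension with a single-level Haar transform: each view is decomposed, the detail coefficient with the larger magnitude is kept (so the locally sharper view wins), and the result is reconstructed. This is a toy sketch of the principle, not the authors' 3D multi-level method; the signals are invented:

```python
# 1-D wavelet fusion toy: average the coarse (approximation) bands,
# keep the max-magnitude detail coefficients, then reconstruct.
import math

R2 = math.sqrt(2.0)

def haar_forward(x):
    approx = [(x[i] + x[i + 1]) / R2 for i in range(0, len(x), 2)]
    detail = [(x[i] - x[i + 1]) / R2 for i in range(0, len(x), 2)]
    return approx, detail

def haar_inverse(approx, detail):
    out = []
    for a, d in zip(approx, detail):
        out.extend([(a + d) / R2, (a - d) / R2])
    return out

def fuse(view1, view2):
    a1, d1 = haar_forward(view1)
    a2, d2 = haar_forward(view2)
    a = [(p + q) / 2 for p, q in zip(a1, a2)]                   # average coarse content
    d = [p if abs(p) >= abs(q) else q for p, q in zip(d1, d2)]  # sharper view wins
    return haar_inverse(a, d)

# view1 is sharp on the left, view2 on the right (toy example)
view1 = [0.0, 1.0, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5]
view2 = [0.5, 0.5, 0.5, 0.5, 0.0, 1.0, 0.5, 0.5]
fused = fuse(view1, view2)
```

The fused signal keeps the sharp edge contributed by each view, which is the complementary-information behaviour the abstract describes.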

Relevance:

100.00%

Abstract:

Moment invariants have been thoroughly studied and repeatedly proposed as one of the most powerful tools for 2D shape identification. In this paper a set of such descriptors is proposed whose basis functions are discontinuous at a finite number of points. The goal of using discontinuous functions is to avoid the Gibbs phenomenon and thus obtain a better approximation capability for discontinuous signals such as images. Moreover, the proposed set of moments allows the definition of rotation invariants, this being the other main design concern. Translation and scale invariance are achieved by means of standard image normalization. Tests are conducted to evaluate the behavior of these descriptors in noisy environments, where images are corrupted with Gaussian noise at different SNR values. Results are compared with those obtained using Zernike moments, showing that the proposed descriptors achieve the same performance in image retrieval tasks in noisy environments while demanding much less computational power at every stage of the query chain.
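For context, the classical geometric moments (not the discontinuous-basis moments the paper proposes) show how the translation and scale normalization mentioned above works; the shape below is an invented toy example:

```python
# Central moments of a binary shape are translation invariant;
# dividing by mu00**(1 + (p+q)/2) also normalizes scale (approximately,
# on a discrete pixel grid).
def central_moment(pixels, p, q):
    n = len(pixels)
    xbar = sum(x for x, _ in pixels) / n
    ybar = sum(y for _, y in pixels) / n
    return sum((x - xbar) ** p * (y - ybar) ** q for x, y in pixels)

def normalized_moment(pixels, p, q):
    mu00 = float(len(pixels))  # mu_00 = area for a binary shape
    return central_moment(pixels, p, q) / mu00 ** (1 + (p + q) / 2)

# A filled 10x6 rectangle, the same shape translated, and a 2x version.
rect = [(x, y) for x in range(10) for y in range(6)]
moved = [(x + 7, y + 3) for x, y in rect]
scaled = [(x, y) for x in range(20) for y in range(12)]

phi1 = normalized_moment(rect, 2, 0) + normalized_moment(rect, 0, 2)
phi1_moved = normalized_moment(moved, 2, 0) + normalized_moment(moved, 0, 2)
phi1_scaled = normalized_moment(scaled, 2, 0) + normalized_moment(scaled, 0, 2)
```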

Relevance:

100.00%

Abstract:

In this paper, an architecture based on a scalable and flexible set of evolvable processing arrays is presented. FPGA-native Dynamic Partial Reconfiguration (DPR) is used for evolution, which is done intrinsically, letting the system adapt autonomously to variable run-time conditions, including the presence of transient and permanent faults. The architecture supports different modes of operation, namely independent, parallel, cascaded and bypass modes, which can be used during evolution or during normal operation. The evolvability of the architecture is combined with fault-tolerance techniques to enhance the platform with self-healing features, making it suitable for applications that require both high adaptability and reliability. Experimental results show that such a system benefits from accelerated evolution times, increased performance and improved dependability, mainly through increased tolerance of transient and permanent faults, as well as some fault-identification capability. The evolvable hardware array shown is tailored for window-based image processing applications.
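A software analogy of the intrinsic evolution loop (which in the paper runs on the FPGA fabric via DPR) can be sketched as a (1+λ) evolution strategy that evolves a 3x3 window filter toward a known target response; every number here is illustrative, and fitness would really be measured on the circuit's output:

```python
# (1+4) evolution strategy evolving a 3x3 kernel toward a target filter.
import random

rng = random.Random(1)
TARGET = [0, -1, 0, -1, 4, -1, 0, -1, 0]   # Laplacian edge-detection kernel

def fitness(kernel):
    # Sum of squared errors against the target response (lower is better).
    return sum((k - t) ** 2 for k, t in zip(kernel, TARGET))

def mutate(kernel):
    # Nudge one randomly chosen coefficient by +/-1.
    child = kernel[:]
    i = rng.randrange(9)
    child[i] += rng.choice([-1, 1])
    return child

best = [0] * 9
for _ in range(2000):
    children = [mutate(best) for _ in range(4)]
    best = min(children + [best], key=fitness)   # elitist survivor selection
    if fitness(best) == 0:
        break
```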

Relevance:

100.00%

Abstract:

In the last decade, Object Based Image Analysis (OBIA) has been accepted as an effective method for processing high spatial resolution multiband images. This image analysis method is an approach that starts with the segmentation of the image. Image segmentation in general is a procedure to partition an image into homogeneous groups (segments). In practice, visual interpretation is often used to assess the quality of segmentation, so the analysis relies on the experience of an analyst. In an effort to address this issue, in this study we evaluate several seed selection strategies for an automatic image segmentation methodology based on a seeded region growing-merging approach. To evaluate the segmentation quality, segments were subjected to spatial autocorrelation analysis using Moran's I index and to intra-segment variance analysis. We apply the algorithm to the segmentation of an aerial multiband image.
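The seeded region-growing step can be sketched as follows: starting from a seed pixel, 4-connected neighbours are absorbed while their value stays within a tolerance of the running region mean. The grid and threshold are toy values, and the study's merging stage and seed-selection strategies are omitted:

```python
# Minimal seeded region growing on a 2-D grid of pixel values.
from collections import deque

def region_grow(image, seed, tol):
    rows, cols = len(image), len(image[0])
    region = {seed}
    total = image[seed[0]][seed[1]]
    frontier = deque([seed])
    while frontier:
        r, c = frontier.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in region:
                mean = total / len(region)      # running region mean
                if abs(image[nr][nc] - mean) <= tol:
                    region.add((nr, nc))
                    total += image[nr][nc]
                    frontier.append((nr, nc))
    return region

image = [
    [10, 11, 50, 52],
    [12, 10, 51, 53],
    [11, 12, 49, 50],
]
segment = region_grow(image, seed=(0, 0), tol=5)
```

Starting from the dark seed, the region covers the homogeneous left half and stops at the bright right half.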

Relevance:

100.00%

Abstract:

The study of materials, especially biological ones, by non-destructive means is becoming increasingly important in both scientific and industrial applications. The economic advantages of non-destructive methods are numerous. There are many physical procedures capable of extracting detailed information from the surface of wood with little or no prior treatment and minimal intrusion into the material. Among them, optical and acoustic techniques stand out for their great versatility, relative simplicity and low cost. Starting from the application of simple physical principles of direct surface measurement, and through the development of the most suitable statistics-based decision algorithms, this thesis aims to establish simple, essentially minimum-cost technological solutions for determining the species and surface defects of each wood sample while, as far as possible, leaving its working geometry unaltered. Three lines of analysis were developed.

The first optical method uses the properties of the light scattered by the wood surface when it is illuminated by a diffuse laser. This scattering produces a speckle pattern whose statistical properties allow very precise information about both the microscopic and the macroscopic structure of the wood to be extracted. The analysis of the spectral properties of the scattered laser light generates more or less regular patterns related to the anatomical structure, composition, processing and surface texture of the wood under study, revealing characteristics of the material or of the quality of the processes it has undergone. The use of this type of laser also makes it possible to monitor industrial processes remotely and in real time without interfering with other sensors.

The second optical technique relies on the statistical and mathematical study of the properties of digital images of the wood surface obtained with a high-resolution scanner. After isolating the most relevant details of the images, several automatic classification algorithms generate databases of the wood species to which the images belong, together with the error margins of those classifications. A fundamental part of the classification tools is based on a precise study of the colour bands of the different woods.

Finally, a number of acoustic techniques, such as the analysis of pulses produced by acoustic impact, complement and refine the results obtained with the optical methods described, identifying surface and deep structures in the wood as well as pathologies or deformations, aspects of special utility when wood is used in structures. The usefulness of these techniques is more than proven in industry, even though their application is not yet widespread owing to high costs and a lack of standardized procedures, which prevents each analysis from being compared with its theoretical market equivalent. At present, much research effort takes for granted that distinguishing between species is a recognition mechanism inherent to human beings, and concentrates the technology on the definition of physical parameters (moduli of elasticity, electrical or acoustic conductivity, etc.), using very expensive devices that are in many cases complex to use in the field.

Abstract: The study of materials, especially biological ones, by non-destructive techniques is becoming increasingly important in both scientific and industrial applications. The economic advantages of non-destructive methods are clear, given the costs and resources involved. There are many physical processes capable of extracting detailed information from the wood surface with little or no previous treatment and minimal intrusion into the material. Among the various methods, acoustic and optical techniques stand out for their great versatility, relative simplicity and low cost. Starting from the application of simple principles of physics and direct surface measurement, and through the development of the most appropriate statistics-based decision algorithms, this thesis aims to establish simple, minimum-cost technological solutions for determining the species and the surface defects of each wood sample; the main objective is reasonable accuracy without altering the sample's working location or properties. There are three lines of work.

Empirical characterization of wood surfaces by means of iterative autocorrelation of laser speckle patterns: a simple and inexpensive method for the qualitative characterization of wood surfaces is presented. It is based on the iterative autocorrelation of laser speckle patterns produced by diffuse laser illumination of the wood surface, and it exploits the high spatial-frequency content of speckle images; a similar approach with raw conventional photographs taken under ordinary light would be very difficult. A few iterations of the algorithm, typically three or four, are enough to visualize the most important periodic features of the surface. The processed patterns help in the study of surface parameters, in the design of new scattering models and in the classification of wood species.

Fractal-based image enhancement techniques inspired by differential interference contrast microscopy: differential interference contrast microscopy is a very powerful optical technique for microscopic imaging. Inspired by the physics of this type of microscope, we have developed a series of image processing algorithms aimed at the magnification, noise reduction, contrast enhancement and tissue analysis of biological samples. These algorithms use fractal convolution schemes which provide fast and accurate results, with performance comparable to the best current image enhancement algorithms. These techniques can be used as post-processing tools for advanced microscopy or as a means to improve the performance of less expensive visualization instruments. Several examples of the use of these algorithms to visualize microscopic images of raw pine wood samples with a simple desktop scanner are provided.

Wood species identification using stress-wave analysis in the audible range: stress-wave analysis is a powerful and flexible technique for studying the mechanical properties of many materials. We present a simple technique to obtain information about the species of wood samples using stress-wave sounds in the audible range, generated by collision with a small pendulum. Stress-wave analysis has been used for flaw detection and quality control for decades, but its use for material identification and classification is less often reported in the literature. Accurate wood species identification is a time-consuming task even for highly trained human experts, so the development of cost-effective techniques for automatic wood classification is a desirable goal. The proposed approach is fully non-invasive and non-destructive, significantly reducing the cost and complexity of the identification and classification process.
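The first line of work, iterative autocorrelation of speckle patterns, can be illustrated with a 1-D toy signal: periodic surface structure survives repeated autocorrelation while uncorrelated speckle-like noise is suppressed, so a few iterations make the dominant period stand out. The signal length, period and noise level below are invented for the demo:

```python
# Iterated autocorrelation of a noisy periodic "speckle" signal.
import random

rng = random.Random(7)
N, PERIOD = 240, 12
# Periodic grain pattern plus strong uniform noise.
signal = [(1.0 if i % PERIOD == 0 else 0.0) + rng.uniform(-0.4, 0.4)
          for i in range(N)]

def autocorrelation(x):
    # Circular autocorrelation of the mean-centred signal, peak-normalised.
    n = len(x)
    mean = sum(x) / n
    c = [v - mean for v in x]
    ac = [sum(c[i] * c[(i + lag) % n] for i in range(n)) for lag in range(n)]
    peak = max(ac)
    return [v / peak for v in ac]

result = signal
for _ in range(3):          # three iterations, as suggested in the thesis
    result = autocorrelation(result)

# The strongest non-zero lag reveals the grain period (or a multiple of it).
best_lag = max(range(1, N // 2), key=lambda lag: result[lag])
```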

Relevance:

100.00%

Abstract:

This paper presents a new methodology, simple and affordable, for the definition and characterization of objects at different scales in high spatial resolution images. The objects have been generated by integrating texturally and spectrally homogeneous segments. The former have been obtained from the segmentation of the wavelet coefficients of the panchromatic image; the multi-scale character of this transform has yielded texturally homogeneous segments of different sizes for each of the scales. The spectrally homogeneous segments have been obtained by segmenting the classified corresponding multispectral image. In this way, a set of objects characterized by different attributes has been defined; these attributes give the objects a semantic meaning, allowing the similarities and differences between them to be determined. To demonstrate the capabilities of the proposed methodology, different experiments of unsupervised classification of a Quickbird image have been carried out, using different subsets of attributes and a 1-D ascending hierarchical classifier. The results obtained have shown the capability of the proposed methodology to separate semantic objects at different scales, as well as its advantages over pixel-based image interpretation.
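A 1-D ascending hierarchical classification over a single object attribute can be sketched as bottom-up merging of the two closest clusters until the desired number of classes remains. This is a generic single-attribute, mean-linkage sketch, not necessarily the authors' exact classifier, and the attribute values are invented:

```python
# Bottom-up (ascending) hierarchical clustering of 1-D attribute values.
def hierarchical_1d(values, n_classes):
    clusters = [[v] for v in sorted(values)]
    while len(clusters) > n_classes:
        means = [sum(c) / len(c) for c in clusters]
        # In 1-D the closest pair of clusters is always adjacent after sorting.
        gaps = [means[i + 1] - means[i] for i in range(len(means) - 1)]
        i = gaps.index(min(gaps))
        clusters[i:i + 2] = [clusters[i] + clusters[i + 1]]  # merge closest pair
    return clusters

# e.g. a texture attribute measured for eight image objects
attr = [0.11, 0.12, 0.15, 0.48, 0.52, 0.50, 0.91, 0.95]
classes = hierarchical_1d(attr, 3)
```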

Relevance:

100.00%

Abstract:

A methodology is established for evaluating the cartography of GIS layers.

Relevance:

100.00%

Abstract:

Novel formulas are proposed for evaluating the accuracy of cartography.

Relevance:

100.00%

Abstract:

Drought affects all sectors of society, and its frequency and intensity are expected to increase due to climate change. Its management poses major challenges for the future. The risk-based approach, which promotes a proactive response, is identified as an appropriate management framework that is beginning to consolidate internationally. However, studies are needed on the characteristics of drought management under this approach and its practical implications. This thesis evaluates several elements relevant to drought management from different perspectives, with special emphasis on the social component of drought. Five studies were carried out for this research: (1) an analysis of the emergency laws passed during the 2005-2008 drought in Spain; (2) a study of farmers' perception of drought at the local level; (3) an assessment of the management characteristics and approach in six case studies at the European level; (4) a systematic analysis of studies quantifying drought vulnerability worldwide; and (5) an analysis of drought impacts based on a European database. The studies show the importance of institutional capacity as a factor that promotes and facilitates the adoption of the risk-based approach. At the same time, the lack of vulnerability studies, limited knowledge of impacts and a weak culture of post-drought evaluation stand out as important constraints on exploiting the knowledge generated while managing an event. The study of the drought laws reveals inconsistencies between how the drought problem is defined and the solutions proposed, as well as the use of a securitization discourse to pursue objectives beyond drought management.

The perception study identifies the existence of different drought problems and perceptions and shows that irrigators rely mainly on impacts to identify and characterize the severity of an event, which differs from the definitions prevailing at other management levels. This underlines the importance of considering the diversity of definitions and perceptions so that management is better adjusted to the needs of the different sectors and groups. The analysis of drought management in six European case studies identified different degrees of adoption of the risk-based approach in practice. The analytical framework established, based on six dimensions and 21 criteria, proved to be a useful tool for diagnosing which elements work and which need to be improved in drought risk management. The systematic analysis of vulnerability studies revealed the heterogeneity of the conceptual frameworks used, as well as weaknesses in the vulnerability factors usually included, in many cases driven by the lack of data. The systematic collection of information on drought impacts showed the scarcity of such information at the European level and the importance of information management. The impacts database developed has great potential as an exploratory, indicative tool for the type of impacts drought produces in each region, but it still faces challenges regarding its content, management process and practical usefulness. There are important limitations linked to the access to and availability of relevant information and data on drought management and all its components. Participation, management levels, the sectoral perspective and the relationships between the risk-management components considered are critical aspects that need to be improved in the future. Taken together, the five articles present concrete examples that improve our understanding of drought management and may be useful for policy makers, managers and users.

Abstract: Drought affects all sectors, and its frequency and intensity are expected to increase due to climate change. Drought management will pose significant challenges in the future. A drought risk management approach promotes a proactive response, and it is starting to consolidate internationally. However, studies on the characteristics of drought risk management and its practical implications are still needed. This thesis evaluates various relevant aspects of drought management from different perspectives, with special emphasis on the social component of drought. Five studies were carried out for this research: (1) an analysis of the emergency laws adopted during the 2005-2008 drought in Spain; (2) a study of farmers' perception of drought at a local level; (3) an assessment of drought management characteristics and issues in six case studies across Europe; (4) a systematic analysis of drought vulnerability assessments; and (5) an analysis of drought impacts from a European text-based impacts database. The results show the importance of institutional capacity as a factor that promotes and facilitates the adoption of a risk approach.

In contrast, the following issues are identified as the main obstacles to taking advantage of the lessons learnt: (1) the lack of vulnerability studies, (2) limited knowledge about impacts and (3) the limited availability of post-drought assessments. The drought emergency laws show inconsistencies between the definition of the drought problem and the measures proposed as solutions; moreover, the securitization of the discourse pursues goals beyond drought management. Farmers' perception of drought reveals the existence of several definitions of drought and highlights the importance of impacts in defining and characterizing the severity of an event; this definition, however, differs from the one used at other institutional and management levels. This underlines the importance of considering the diversity of definitions and perceptions in order to better tailor drought management to the needs of different sectors and stakeholders. The analysis of drought management in six case studies across Europe shows different levels of adoption of the risk approach in practice. The analytical framework proposed is based on six dimensions and 21 criteria, and has proven to be a useful tool for diagnosing the elements that work and those that need to be improved in drought risk management. The systematic analysis of vulnerability assessment studies demonstrates the heterogeneity of the conceptual frameworks used; driven by the lack of relevant data, the studies also show significant weaknesses in the vulnerability factors typically included. The heterogeneity of the impact data collected at the European level to build the European Drought Impact Reports Database (EDII) highlights the importance of information management. The database has great potential as an exploratory tool and provides useful indicative information on the type of impacts that occur in a particular region. However, it still presents challenges regarding its content, the process of data collection and management, and its practical usefulness. There are significant limitations associated with the access to and availability of relevant information and data related to drought management and its components. The following critical aspects have been identified for improvement in the near future: participation, levels of drought management, the sectoral perspective and an in-depth assessment of the relationships between the components of drought risk management. The five articles presented in this dissertation provide concrete examples of drought management evaluation that help to better understand drought management from a risk-based perspective, which can be useful for policy makers, managers and users.

Relevance:

40.00%

Abstract:

This thesis deals with the problem of efficiently tracking 3D objects in sequences of images. We tackle the efficient 3D tracking problem by using direct image registration. This problem is posed as an iterative optimization procedure that minimizes a brightness error norm. We review the most popular iterative methods for image registration in the literature, turning our attention to those algorithms that use efficient optimization techniques. Two forms of efficient registration algorithms are investigated. The first type comprises the additive registration algorithms: these algorithms incrementally compute the motion parameters by linearly approximating the brightness error function. We centre our attention on Hager and Belhumeur's factorization-based algorithm for image registration. We propose a fundamental requirement that factorization-based algorithms must satisfy to guarantee good convergence, and introduce a systematic procedure that automatically computes the factorization. Finally, we also introduce two warp functions, satisfying the requirement, to register rigid and nonrigid 3D targets. The second type comprises the compositional registration algorithms, where the brightness error function is written by using function composition. We study the current approaches to compositional image alignment, and we emphasize the importance of the Inverse Compositional method, which is known to be the most efficient image registration algorithm. We introduce a new algorithm, the Efficient Forward Compositional image registration: this algorithm avoids the need to invert the warping function, and provides a new interpretation of the working mechanisms of inverse compositional alignment. Using this information, we propose two fundamental requirements that guarantee the convergence of compositional image registration methods. Finally, we support our claims with extensive experimental testing on synthetic and real-world data. We propose a distinction between image registration and tracking when using efficient algorithms. We show that, depending on whether the fundamental requirements hold, some efficient algorithms are eligible for image registration but not for tracking.
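The additive scheme can be illustrated in one dimension: a translation t is estimated by Gauss-Newton minimization of the brightness error between a warped profile and a template. This is a didactic sketch, not the thesis' factorization-based algorithm; the brightness profile and sampling grid are synthetic:

```python
# 1-D additive image registration: estimate a translation t by
# Gauss-Newton minimisation of sum_x (I(x + t) - T(x))^2.
import math

def profile(x):
    # Smooth synthetic brightness profile (a Gaussian blob).
    return math.exp(-0.5 * (x - 5.0) ** 2)

TRUE_SHIFT = 0.7
xs = [0.25 * i for i in range(40)]
template = [profile(x + TRUE_SHIFT) for x in xs]   # T(x), the target image

t = 0.0                                            # initial motion estimate
for _ in range(50):
    # Brightness error and numerical image gradient at the current estimate.
    err = [profile(x + t) - T for x, T in zip(xs, template)]
    h = 1e-4
    grad = [(profile(x + t + h) - profile(x + t - h)) / (2 * h) for x in xs]
    jtj = sum(g * g for g in grad)                 # J^T J (scalar: one parameter)
    jte = sum(g * e for g, e in zip(grad, err))    # J^T e
    t -= jte / jtj                                 # Gauss-Newton update
```

With a single motion parameter the normal equations collapse to a scalar division, which makes the linearize-solve-update cycle of additive registration easy to see.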