848 results for Content Based Image Retrieval (CBIR)
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
This paper is a report about the FuXML project carried out at the FernUniversität Hagen. FuXML is a Learning Content Management System (LCMS) aimed at providing a practical and efficient solution for the issues attributed to authoring, maintenance, production and distribution of online and offline distance learning material. The paper presents the environment for which the system was conceived and describes the technical realisation. We discuss the reasons for specific implementation decisions and also address the integration of the system within the organisational and technical infrastructure of the university.
Abstract:
The shift from host-centric to information-centric networking (ICN) promises seamless communication in mobile networks. However, most existing works either consider well-connected networks with high node density or introduce modifications to ICN message processing for Delay-Tolerant Networking (DTN). In this work, we present agent-based content retrieval, which provides information-centric DTN support as an application module without modifications to ICN message processing. This enables flexible interoperability in changing environments. If no content source can be found via wireless multi-hop routing, requesters may exploit the mobility of neighbor nodes (called agents) by delegating content retrieval to them. Agents that receive a delegation and move closer to content sources can retrieve the data and return it to requesters. We show that agent-based content retrieval may be even more efficient in scenarios where multi-hop communication is possible. Furthermore, we show that broadcast communication is not necessarily the best option, since dynamic unicast requests have little overhead and can better exploit short contact times between nodes (no broadcast delays are required for duplicate suppression).
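The delegation idea above can be illustrated with a minimal one-dimensional sketch; all names and the distance model are hypothetical, not the paper's implementation:

```python
def retrieve(requester_pos, source_pos, agent_path, radius=1):
    """Return how content was fetched: directly if the requester is in
    radio range of the source, or via a mobile agent whose movement path
    brings it within `radius` of the source (hypothetical 1-D model)."""
    def in_range(a, b):
        return abs(a - b) <= radius

    if in_range(requester_pos, source_pos):
        return "direct"
    for pos in agent_path:              # agent moves along its path
        if in_range(pos, source_pos):   # close enough to the source
            return "via-agent"          # agent fetches, returns later
    return None                         # no delivery possible

print(retrieve(0, 10, agent_path=[2, 5, 9]))  # agent reaches the source
```

The point of the sketch is only the control flow: delegation is tried when direct (multi-hop) retrieval fails, exactly as an application-layer module would do without touching ICN message processing.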
Abstract:
The main problem in studying vertical drainage from the moisture distribution of a vertisol profile is finding suitable methods for this purpose. Our aim was to design a digital image processing and analysis methodology to characterize the moisture content distribution of a vertisol profile. In this research, twelve soil pits were excavated on a bare Mazic Pellic Vertisol, six of them on May 13, 2011 and the rest on May 19, 2011, after a moderate rainfall event. Digital RGB images were taken of each vertisol pit using a Kodak camera at a size of 1600x945 pixels. Each soil image was processed to homogenize brightness, and then a spatial filter with several window sizes was applied to select the optimum one. The RGB images obtained were split into their color channels, and the best maximum and minimum thresholds were selected for each channel to produce a digital binary pattern. This pattern was analyzed by estimating two fractal scaling exponents, the box-counting dimension (DBC) and the interface fractal dimension (D). In addition, three pre-fractal scaling coefficients were determined at maximum resolution: the total number of boxes intercepting the foreground pattern (A), the fractal lacunarity (Λ1) and the Shannon entropy (S1). For all the images processed, the 9x9 spatial filter was the optimum based on entropy, cluster and histogram criteria. Thresholds for each color were selected based on bimodal histograms.
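The box-counting dimension used on the binary patterns above can be sketched as follows; this is a generic textbook estimator, not the study's own code, and the box sizes are illustrative:

```python
import numpy as np

def box_counting_dimension(binary, sizes=(2, 4, 8, 16, 32, 64)):
    """Estimate the box-counting dimension D_BC of a 2-D binary pattern.

    For each box size s, count the boxes containing at least one
    foreground pixel; D_BC is the slope of log(count) vs. log(1/s).
    """
    counts = []
    for s in sizes:
        h = binary.shape[0] // s * s
        w = binary.shape[1] // s * s
        # Tile the image into s-by-s boxes and flag the non-empty ones
        boxes = binary[:h, :w].reshape(h // s, s, w // s, s).any(axis=(1, 3))
        counts.append(boxes.sum())
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope

# A completely filled pattern should have dimension close to 2
pattern = np.ones((128, 128), dtype=bool)
print(round(box_counting_dimension(pattern), 2))  # → 2.0
```

The pre-fractal coefficient A in the abstract corresponds to the box count at the finest resolution, i.e. `counts[0]` with the smallest box size.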
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-06
Abstract:
More and more researchers have realized that ontologies will play a critical role in the development of the Semantic Web, the next generation Web in which content is not only consumable by humans, but also by software agents. The development of tools to support ontology management including creation, visualization, annotation, database storage, and retrieval is thus extremely important. We have developed ImageSpace, an image ontology creation and annotation tool that features (1) full support for the standard web ontology language DAML+OIL; (2) image ontology creation, visualization, image annotation and display in one integrated framework; (3) ontology consistency assurance; and (4) storing ontologies and annotations in relational databases. It is expected that the availability of such a tool will greatly facilitate the creation of image repositories as islands of the Semantic Web.
Abstract:
Image database visualisations, in particular mapping-based visualisations, provide an interesting approach to accessing image repositories, as they are able to overcome some of the drawbacks associated with retrieval-based approaches. However, making a mapping-based approach work efficiently on large remote image databases has yet to be explored. In this paper, we present the Web-Based Images Browser (WBIB), a novel system that efficiently employs image pyramids to reduce bandwidth requirements so that users can interactively explore large remote image databases.
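The image pyramids mentioned above reduce bandwidth by letting a client fetch coarse levels first and finer levels only on zoom. A minimal sketch of pyramid construction by repeated 2x2 mean downsampling (generic technique, not the WBIB implementation):

```python
import numpy as np

def build_pyramid(image, levels=4):
    """Build an image pyramid: level 0 is the full image, each further
    level halves the resolution by averaging 2x2 pixel blocks."""
    pyramid = [image]
    for _ in range(levels - 1):
        img = pyramid[-1]
        h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
        # Average each non-overlapping 2x2 block into one pixel
        down = img[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        pyramid.append(down)
    return pyramid

levels = build_pyramid(np.zeros((256, 256)), levels=4)
print([lvl.shape for lvl in levels])  # → [(256, 256), (128, 128), (64, 64), (32, 32)]
```

A remote browser would transmit the 32x32 level for an overview thumbnail (a 64x reduction in pixels) and stream higher levels on demand.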
Abstract:
A circumpolar representative and consistent wetland map is required for a range of applications ranging from upscaling of carbon fluxes and pools to climate modelling and wildlife habitat assessment. Currently available data sets lack sufficient accuracy and/or thematic detail in many regions of the Arctic. Synthetic aperture radar (SAR) data from satellites have already been shown to be suitable for wetland mapping. Envisat Advanced SAR (ASAR) provides global medium-resolution data, which are examined with particular focus on spatial wetness patterns in this study. It was found that winter minimum backscatter values, as well as their differences to summer minimum values, reflect vegetation physiognomy units of certain wetness regimes. Low winter backscatter values are mostly found in areas vegetated by plant communities typical of wet regions in the tundra biome, due to low roughness and low volume scattering caused by the predominant vegetation. Summer-to-winter difference backscatter values, which, in contrast to the winter values, depend almost solely on soil moisture content, show the expected higher values for wet regions. While the approach using difference values would seem more reasonable for delineating wetness patterns, considering its direct link to soil moisture, it was found that a classification of winter minimum backscatter values is more applicable in tundra regions due to its better separability into wetness classes. Previous approaches for wetland detection have investigated the impact of liquid water in the soil on backscatter conditions; in this study the absence of liquid water is utilized. Owing to a lack of comparable regional to circumpolar data with respect to thematic detail, a potential wetland map cannot directly be validated; however, one might claim the validity of such a product by comparison with vegetation maps, which hold some information on the wetness status of certain classes.
It was shown that the Envisat ASAR-derived classes are related to wetland classes of conventional vegetation maps, indicating its applicability; 30% of the land area north of the treeline was identified as wetland while conventional maps recorded 1-7%.
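The classification of winter minimum backscatter into wetness classes can be sketched as simple thresholding; the dB thresholds below are purely illustrative placeholders, not values from the study:

```python
import numpy as np

def classify_wetness(winter_min_db, thresholds=(-18.0, -14.0)):
    """Map winter minimum backscatter (dB) to wetness classes:
    low backscatter indicates smooth, wet tundra vegetation.
    Threshold values are hypothetical, for illustration only."""
    wet, moist = thresholds
    classes = np.full(winter_min_db.shape, "dry", dtype=object)
    classes[winter_min_db < moist] = "moist"
    classes[winter_min_db < wet] = "wet"   # overrides "moist" where darker
    return classes

scene = np.array([-20.0, -16.0, -10.0])
print(classify_wetness(scene))  # → ['wet' 'moist' 'dry']
```

Per the abstract, this single-scene winter thresholding separates wetness classes better than summer-to-winter difference values, despite the latter's more direct link to soil moisture.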
Abstract:
Image (video) retrieval is the problem of retrieving images (videos) similar to a query. Images (videos) are represented in an input (feature) space, and similar images (videos) are obtained by finding nearest neighbors in that representation space. Numerous input representations, in both real-valued and binary spaces, have been proposed for conducting faster retrieval. In this thesis, we present techniques that obtain improved input representations for retrieval in both supervised and unsupervised settings for images and videos. Supervised retrieval is the well-known problem of retrieving images of the same class as the query. In the first part, we address the practical aspects of achieving faster retrieval with binary codes as input representations in the supervised setting, where binary codes are used as addresses into hash tables. In practice, using binary codes as addresses does not guarantee fast retrieval, as similar images are not mapped to the same binary code (address). We address this problem by presenting an efficient supervised hashing (binary encoding) method that aims to explicitly map all images of the same class to a unique binary code. We refer to the binary codes of the images as `Semantic Binary Codes' and the unique code for all same-class images as the `Class Binary Code'. We also propose a new class-based Hamming metric that dramatically reduces retrieval times for larger databases, where the Hamming distance is computed only to the class binary codes. We further propose a deep semantic binary code model, obtained by replacing the output layer of a popular convolutional neural network (AlexNet) with the class binary codes, and show that the hashing functions learned in this way outperform the state of the art while providing fast retrieval times. In the second part, we address the problem of supervised retrieval by taking into account the relationships between classes.
For a given query image, we want to retrieve images that preserve the relative order, i.e., we want to retrieve all same-class images first, and then images of related classes before images of different classes. We learn such relationship-aware binary codes by minimizing the discrepancy between the inner product of the binary codes and the similarity between the classes. We calculate the similarity between classes using output embedding vectors, which are vector representations of classes. Our method deviates from other supervised binary encoding schemes, as it is the first to use output embeddings for learning hashing functions. We also introduce new performance metrics that take related-class retrieval results into account and show significant gains over the state of the art. High-dimensional descriptors like Fisher Vectors or the Vector of Locally Aggregated Descriptors have been shown to improve the performance of many computer vision applications, including retrieval. In the third part, we discuss an unsupervised technique for compressing high-dimensional vectors into high-dimensional binary codes to reduce storage complexity. In this approach, we deviate from traditional hyperplane hashing functions and instead learn hyperspherical hashing functions. The proposed method overcomes the computational challenges of directly applying the spherical hashing algorithm, which is intractable for compressing high-dimensional vectors. A practical hierarchical model that uses divide-and-conquer techniques, via the Random Select and Adjust (RSA) procedure, to compress such high-dimensional vectors is presented. We show that our proposed high-dimensional binary codes outperform the binary codes obtained using traditional hyperplane methods at higher compression ratios. In the last part of the thesis, we propose a retrieval-based solution to the zero-shot event classification problem, a setting where no training videos are available for the event.
To do this, we learn a generic set of concept detectors and represent both videos and query events in the concept space. We then compute similarity between the query event and the video in the concept space and videos similar to the query event are classified as the videos belonging to the event. We show that we significantly boost the performance using concept features from other modalities.
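The class-based Hamming metric described in the first part can be sketched as follows: distances are computed only against the few class binary codes rather than every database image, which is where the speed-up comes from. The codes and names below are illustrative, not the thesis implementation:

```python
import numpy as np

def hamming_retrieve(query_code, class_codes, top_k=2):
    """Rank classes by Hamming distance between the query's binary code
    and each Class Binary Code (one distance per class, not per image)."""
    dists = (query_code[None, :] != class_codes).sum(axis=1)
    return np.argsort(dists)[:top_k]

# Three hypothetical 8-bit class binary codes
class_codes = np.array([
    [0, 0, 0, 0, 0, 0, 0, 0],
    [1, 1, 1, 1, 0, 0, 0, 0],
    [1, 1, 1, 1, 1, 1, 1, 1],
])
query = np.array([0, 0, 0, 1, 0, 0, 0, 0])  # one bit away from class 0
print(hamming_retrieve(query, class_codes))  # → [0 1]
```

With C classes and an n-image database, ranking costs O(C) Hamming distances instead of O(n), after which all images stored under the top-ranked class codes are returned.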
Abstract:
The study of the atmospheric chemical composition is crucial to understanding the climate changes we have been experiencing in recent decades and to monitoring air quality over industrialized areas. Multi-AXis Differential Optical Absorption Spectroscopy (MAX-DOAS) ground-based instruments are particularly suitable for deriving the concentration of trace gases that absorb visible (VIS) and ultraviolet (UV) solar radiation. The zenith-sky spectra acquired by the Gas Analyzer Spectrometer Correlating Optical Differences / New Generation 4 (GASCOD/NG4) instrument are exploited to retrieve the NO2 and O3 total Vertical Column Densities (VCDs) over Lecce. The results show that the NO2 total VCDs are significantly affected by the tropospheric content, a consequence of anthropogenic activity. Indeed, they are systematically lower on Sundays, when there is generally less traffic around the measurement site, and on windy days, especially when the wind direction measured at 2 m height is not from the city of Lecce. Another MAX-DOAS instrument (SkySpec-2D) is used to create the first Italian MAX-DOAS site compliant with the Fiducial Reference Measurements for DOAS (FRM4DOAS) standards, in San Pietro Capofiume (SPC), in the middle of the Po Valley. After the assessment of the SkySpec-2D's performance through two measurement campaigns conducted in Bologna and Rome, SkySpec-2D was installed at SPC on 1 October 2021. Its MAX-DOAS spectra are used to retrieve the NO2 and O3 total VCDs, as well as aerosol extinction and NO2 tropospheric vertical profiles, over the Po Valley using the Bremen Optimal estimation REtrieval for Aerosol and trace gaseS (BOREAS) algorithm. Promising results are found, with high correlations against both in-situ and satellite data. In the future, these data will play an important role in air quality studies over the Po Valley and for satellite validation purposes.
Abstract:
Artificial intelligence is reshaping the fashion industry in different ways. E-commerce retailers exploit their data through AI to enhance their search engines, make outfit suggestions and forecast the success of a specific fashion product. However, this is a challenging endeavour, as the data they possess is huge, complex and multi-modal. The most common way to search for fashion products online is by matching keywords with phrases in the product's description, which are often cluttered, inadequate and differ across collections and sellers. A customer may also browse an online store's taxonomy, although this is time-consuming and doesn't guarantee relevant items. With the advent of deep learning architectures, particularly vision-language models, ad-hoc solutions have been proposed that model both the product image and description to solve these problems. However, the suggested solutions do not effectively exploit the semantic or syntactic information of these modalities, or the unique qualities and relations of clothing items. In this thesis, a novel approach is proposed to address these issues: it models and processes images and text descriptions as graphs in order to exploit the relations within and between the modalities, and employs specific techniques to extract syntactic and semantic information. The results obtained show promising performance on different tasks when compared to present state-of-the-art deep learning architectures.
Abstract:
The present work reports the fabrication of porous alumina structures and a quantitative study of their structural characteristics based on mathematical morphology analysis of SEM images. The algorithm used in this work was implemented in MATLAB 6.2. Using the algorithm, it was possible to obtain the distribution of the maximum, minimum and average pore radii in the porous alumina structures. Additionally, by calculating the area occupied by the pores, it was possible to obtain the porosity of the structures. The quantitative results could be related to the fabrication process characteristics, proving reliable and promising for controlling the pore formation process. This technique could therefore provide a more accurate determination of pore sizes and pore distribution.
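The porosity computation described above reduces to the fraction of image area occupied by pore pixels once the SEM image is binarized. A minimal sketch in Python rather than the paper's MATLAB, with an assumed fixed grey-level threshold (real SEM work would pick the threshold from the histogram):

```python
import numpy as np

def porosity(gray, threshold=128):
    """Estimate porosity as the fraction of image area occupied by
    pores, taking pixels darker than `threshold` as pore pixels."""
    pores = gray < threshold
    return pores.sum() / pores.size

# Synthetic 10x10 image with a 5x5 dark pore region -> porosity 0.25
img = np.full((10, 10), 200, dtype=np.uint8)
img[:5, :5] = 50
print(porosity(img))  # → 0.25
```

The pore radius distributions in the abstract would then come from labelling connected pore regions in the same binary mask and measuring each region's extent.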
Abstract:
Prospective memory (ProM) is memory for future actions. It requires retrieving the content of an action in response to an ambiguous cue. Currently, it is unclear whether ProM is a distinct form of memory or merely a variant of retrospective memory (RetM). While content retrieval in ProM appears analogous to conventional RetM, less is known about the process of cue detection. Using a modified version of the standard ProM paradigm, three experiments manipulated stimulus characteristics known to influence RetM in order to examine their effects on ProM performance. Experiment 1 (N = 80) demonstrated that low-frequency stimuli elicited significantly higher hit rates and lower false alarm rates than high-frequency stimuli, comparable to the mirror effect in RetM. Experiment 2 (N = 80) replicated these results and showed that repetition of distracters during the test phase significantly increased false alarm rates to second and subsequent presentations of low-frequency distracters. Building on these results, Experiment 3 (N = 40) showed that when the study list was strengthened, the repeated presentation of targets and distracters did not significantly affect response rates. These experiments demonstrate more overlap between ProM and RetM than has previously been acknowledged. The implications for theories of ProM are considered.
Abstract:
In this paper, we propose a method based on association-rule mining to enhance the diagnosis of medical images (mammograms). It combines low-level features automatically extracted from images with high-level knowledge from specialists to search for patterns. Our method analyzes medical images and automatically generates diagnosis suggestions by mining association rules. The suggestions are used to accelerate the image analysis performed by specialists as well as to provide them with an alternative opinion to consider. The proposed method uses two new algorithms, PreSAGe and HiCARe. The PreSAGe algorithm combines, in a single step, feature selection and discretization, and reduces the mining complexity. Experiments show that PreSAGe is highly suitable for feature selection and discretization in medical images. HiCARe is a new associative classifier with an important property that makes it unique: it assigns multiple keywords per image to suggest a diagnosis with high accuracy. Our method was applied to real datasets, and the results show high sensitivity (up to 95%) and accuracy (up to 92%), allowing us to claim that association rules are a powerful means of assisting in the diagnosis task.
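The core of association-rule mining as used above can be sketched with a toy one-antecedent miner; the support/confidence definitions are standard, but the feature names and thresholds are invented for illustration and this is not PreSAGe or HiCARe:

```python
from collections import Counter
from itertools import combinations

def mine_rules(transactions, min_support=0.4, min_confidence=0.8):
    """Mine simple one-antecedent rules A -> B.

    support(A -> B) = freq(A, B) / N
    confidence(A -> B) = freq(A, B) / freq(A)
    """
    n = len(transactions)
    item_freq = Counter(i for t in transactions for i in set(t))
    pair_freq = Counter(p for t in transactions
                        for p in combinations(sorted(set(t)), 2))
    rules = []
    for (a, b), f in pair_freq.items():
        for ante, cons in ((a, b), (b, a)):
            conf = f / item_freq[ante]
            if f / n >= min_support and conf >= min_confidence:
                rules.append((ante, cons, conf))
    return rules

# Toy transactions pairing a discretized image feature with a diagnosis
data = [{"dense_mass", "malignant"},
        {"dense_mass", "malignant"},
        {"dense_mass", "malignant"},
        {"smooth_margin", "benign"},
        {"smooth_margin", "benign"}]
print(mine_rules(data))
```

In the paper's setting the transactions would contain PreSAGe-discretized image features plus specialist-assigned keywords, and rules whose consequent is a diagnosis keyword become the classifier's suggestions.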