976 results for image set


Relevance:

30.00%

Publisher:

Abstract:

This paper presents a novel multi-label classification framework for domains with large numbers of labels. Automatic image annotation is such a domain, as the number of available semantic concepts typically runs into the hundreds. The proposed framework comprises an initial clustering phase that breaks the original training set into several disjoint clusters of data. It then trains a multi-label classifier on the data of each cluster. Given a new test instance, the framework first finds the nearest cluster and then applies the corresponding model. Empirical results using two clustering algorithms, four multi-label classification algorithms and three image annotation data sets suggest that the proposed approach can improve the performance and reduce the training time of standard multi-label classification algorithms, particularly when the number of labels is large.
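As a rough illustration of the cluster-then-classify idea described above (a minimal sketch, not the paper's implementation), the code below uses k-means for the clustering phase and a random forest as a stand-in multi-label learner; the arrays X_train, Y_train (a binary label-indicator matrix) and X_test are hypothetical.

```python
# Minimal sketch of a cluster-then-classify multi-label framework.
# Assumptions: k-means clustering, a random forest as the per-cluster
# multi-label learner, and hypothetical arrays X_train, Y_train, X_test.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

def fit_clustered_multilabel(X_train, Y_train, n_clusters=5):
    """Cluster the training set, then fit one multi-label model per cluster."""
    clusterer = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
    cluster_ids = clusterer.fit_predict(X_train)
    models = {}
    for c in range(n_clusters):
        mask = cluster_ids == c
        model = RandomForestClassifier(n_estimators=100, random_state=0)
        model.fit(X_train[mask], Y_train[mask])  # multi-output fit = multi-label here
        models[c] = model
    return clusterer, models

def predict_clustered_multilabel(clusterer, models, X_test):
    """Assign each test instance to its nearest cluster and apply that model."""
    nearest = clusterer.predict(X_test)
    Y_pred = np.zeros((X_test.shape[0], models[0].n_outputs_), dtype=int)
    for c, model in models.items():
        mask = nearest == c
        if mask.any():
            Y_pred[mask] = model.predict(X_test[mask])
    return Y_pred
```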

Relevance:

30.00%

Publisher:

Abstract:

Water and nitrogen (N) are critical inputs for crop production. Remote sensing data collected at multiple scales, including ground-based, aerial, and satellite, can be used to formulate an efficient and cost-effective algorithm for the detection of N and water stress. Formulation and validation of such techniques require continuous acquisition of ground-based spectral data over the canopy, enabling field measurements to coincide exactly with aerial and satellite observations. In this context, an in situ wireless sensor network (WSN) was developed, and this paper describes the results of the first phase of the experiment along with details of sensor development and instrumentation setup. The sensor network was established based on different spatial sampling strategies, and each sensor collected spectral data in seven narrow wavebands (470, 550, 670, 700, 720, 750, 790 nm) critical for monitoring crop growth. Spectral measurements recorded at the required intervals (as short as 30 seconds) were relayed through a multi-hop wireless network to a base computer at the field site. These data were then accessed by the remote sensing centre's computing system through broadband internet. Comparison of data from the WSN with an industry-standard ground-based hyperspectral radiometer indicated no significant differences in the spectral measurements for any waveband except 790 nm. Combining sensor and wireless technologies provides a robust means of calibrating aerial and satellite data and an enhanced understanding of scale-variation issues for effective water and nutrient management in wheat.
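Purely as an illustrative sketch of the kind of per-waveband comparison described above (not the paper's actual analysis), the code below assumes two hypothetical arrays of paired, co-located readings, wsn and radiometer, each of shape (n_samples, 7) with one column per waveband, and runs a paired t-test per band.

```python
# Hedged sketch: paired comparison of WSN readings against a reference
# radiometer, one test per waveband. `wsn` and `radiometer` are hypothetical
# placeholders for time-matched, co-located reflectance measurements.
import numpy as np
from scipy import stats

WAVEBANDS_NM = (470, 550, 670, 700, 720, 750, 790)

def compare_per_waveband(wsn, radiometer, alpha=0.05):
    """Run a paired t-test for each waveband and flag significant differences."""
    results = {}
    for i, band in enumerate(WAVEBANDS_NM):
        t_stat, p_value = stats.ttest_rel(wsn[:, i], radiometer[:, i])
        results[band] = {"t": t_stat, "p": p_value, "different": p_value < alpha}
    return results
```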

Relevance:

30.00%

Publisher:

Abstract:

Content-based image retrieval (CBIR) is a technique for searching an image collection for images relevant to the user's query. Over the last decade, most attention has been paid to improving retrieval performance; however, little effort has been devoted to investigating security concerns in CBIR. Under the query-by-example (QBE) paradigm, the user supplies an image as a query and the system returns a set of retrieved results. If the query image includes the user's private information, an untrusted CBIR service provider may distribute it illegally, infringing the user's rights. In this paper, we propose an interactive watermarking protocol to address this problem. A watermark is inserted into the query image by the user in the encrypted domain, without knowledge of the exact content. The CBIR service provider receives the watermarked query image and uses it to perform image retrieval. Should the user find an unauthorized copy, the watermark it carries can be used as evidence to prove that the user's legal rights have been infringed by the service provider.
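The paper's interactive, encrypted-domain protocol is not reproduced here. Purely as a simplified, plaintext-domain illustration of how an embedded watermark can later serve as evidence of an unauthorized copy, the sketch below embeds and detects an additive spread-spectrum watermark generated from a hypothetical secret key.

```python
# Simplified illustration only: additive spread-spectrum watermarking with
# correlation-based detection, in the plain pixel domain. The encrypted-domain
# protocol described in the abstract is considerably more involved.
import numpy as np

def embed_watermark(image, key, strength=2.0):
    """Add a key-dependent pseudo-random pattern to a grayscale image."""
    rng = np.random.default_rng(key)
    pattern = rng.standard_normal(image.shape)
    return np.clip(image.astype(float) + strength * pattern, 0, 255)

def detect_watermark(suspect, key, threshold=0.05):
    """Correlate a suspect image with the key's pattern; a high correlation
    suggests that the watermarked query image was redistributed."""
    rng = np.random.default_rng(key)
    pattern = rng.standard_normal(suspect.shape)
    s = suspect.astype(float)
    corr = np.corrcoef((s - s.mean()).ravel(), pattern.ravel())[0, 1]
    return corr > threshold, corr
```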

Relevance:

30.00%

Publisher:

Abstract:

Computational Intelligence (CI) models comprise robust computing methodologies with a high level of machine-learning capability. CI models are, in general, useful for designing computerized intelligent systems/machines that mimic human behaviors and capabilities in solving complex tasks, e.g., learning, adaptation, and evolution. Examples of popular CI models include fuzzy systems, artificial neural networks, evolutionary algorithms, multi-agent systems, decision trees, rough set theory, knowledge-based systems, and hybrids of these models. This special issue highlights how different computational intelligence models, coupled with other complementary techniques, can be used to handle problems encountered in image processing and information reasoning.

Relevance:

30.00%

Publisher:

Abstract:

Building on a habitat mapping project completed in 2011, Deakin University was commissioned by Parks Victoria (PV) to apply the same methodology and ground-truth data to a second, more recent and higher-resolution satellite image to create habitat maps for areas within the Corner Inlet and Nooramunga Marine and Coastal Park and Ramsar area. A ground-truth data set of in situ video and still photographs was used to develop and assess predictive models of benthic marine habitat distributions, incorporating data from both RapidEye satellite imagery (corrected for atmospheric and water-column effects by CSIRO) and LiDAR (Light Detection and Ranging) bathymetry. This report describes the results of the mapping effort as well as the methodology used to produce these habitat maps.

Overall accuracies of the habitat classifications were good, similar to or better than the earlier classification (>73% overall accuracy and kappa values >0.58 for both study localities). The RapidEye classification failed to accurately detect the Pyura and reef habitat classes at the Corner Inlet locality, possibly due to differences in spectral frequencies. For comparison, these categories were combined into a 'non-seagrass' category, similar to the one used at the Nooramunga locality in the original classification. Habitats predicted with the highest accuracies differed from the earlier classification and were Posidonia in Corner Inlet (89%) and bare sediment (the no-visible seagrass class) in Nooramunga (90%). In the Corner Inlet locality, the reef and Pyura habitat categories were not distinguishable in the repeated classification and so were combined with bare sediments. The majority of the remaining classification errors were due to the misclassification of Zosteraceae as bare sediment and vice versa. Dominant habitats were the same as those from the 2011 classification, with some differences in extent. For the Corner Inlet study locality, the no-visible seagrass category remained the most extensive (9,059 ha), followed by Posidonia (5,513 ha) and Zosteraceae (5,504 ha). In Nooramunga, the no-visible seagrass (6,294 ha), Zosteraceae (3,122 ha) and wet saltmarsh (1,562 ha) habitat classes were the most dominant.
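For readers unfamiliar with the reported metrics, the short sketch below (using an invented confusion matrix, not data from this report) shows how overall accuracy and Cohen's kappa are computed from a habitat classification error matrix.

```python
# Hedged sketch: overall accuracy and Cohen's kappa from a confusion matrix.
# The example matrix is an invented placeholder, not data from the report.
import numpy as np

def accuracy_and_kappa(confusion):
    """Rows = reference (ground-truth) classes, columns = mapped classes."""
    confusion = np.asarray(confusion, dtype=float)
    total = confusion.sum()
    observed = np.trace(confusion) / total                        # overall accuracy
    expected = (confusion.sum(axis=0) * confusion.sum(axis=1)).sum() / total ** 2
    kappa = (observed - expected) / (1.0 - expected)              # chance-corrected agreement
    return observed, kappa

example = [[50, 5, 2],
           [4, 40, 6],
           [3, 7, 33]]
acc, kappa = accuracy_and_kappa(example)
print(f"overall accuracy = {acc:.2f}, kappa = {kappa:.2f}")
```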

Change detection analyses between the 2009 and 2011 imagery were undertaken as part of this project, following the analyses presented in Monk et al. (2011) and incorporating error estimates from both classifications. These analyses indicated some shifts in classification between Posidonia and Zosteraceae, as well as a general reduction in the area of Zosteraceae. Issues with the classification of mixed beds were apparent, particularly in the main Posidonia bed at Nooramunga, where a mosaic of Zosteraceae and Posidonia was seen that was not evident in the ALOS classification. Results of a reanalysis of the 1998-2009 change detection, illustrating the effects of binning mixed beds, are also provided as an appendix.

This work has been successful in providing baseline maps at an improved level of detail using a repeatable method, meaning that any future changes in intertidal and shallow-water marine habitats can be assessed in a consistent way with quantitative error assessments. In wider use, these maps should also support improved conservation planning, advance fisheries and catchment management, and guide infrastructure planning to limit impacts on the Inlet environment.

Relevance:

30.00%

Publisher:

Abstract:

The self-quotient image is a biologically inspired representation which has been proposed as an illumination-invariant feature for automatic face recognition. Owing to the lack of strong domain-specific assumptions underlying this representation, it can be readily extracted from raw images irrespective of the person's pose, facial expression, etc. What makes the self-quotient image additionally attractive is that it can be computed quickly and in closed form using simple low-level image operations. However, it is generally accepted that the self-quotient is insufficiently robust to large illumination changes, which is why it is mainly used in applications in which low precision is an acceptable compromise for high recall (e.g. retrieval systems). Yet, in this paper we demonstrate that the performance of this representation under challenging illumination has been greatly underestimated. We show that its error rate can be reduced by over an order of magnitude, without any changes to the representation itself. Rather, we focus on the manner in which the dissimilarity between two self-quotient images is computed. By modelling the dominant sources of noise affecting the representation, we propose and evaluate a series of different dissimilarity measures, the best of which reduces the initial error rate of 63.0% down to only 5.7% on the notoriously challenging YaleB data set.
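As a quick illustration of the representation itself, the sketch below computes a self-quotient image as the pixel-wise ratio of a face image to a smoothed copy of itself; the plain Gaussian filter and the sigma/eps values are assumptions standing in for the weighted smoothing kernels of the original formulation.

```python
# Hedged sketch: self-quotient image Q = I / (G_sigma * I). A plain Gaussian
# filter stands in for the weighted kernels of the original formulation;
# sigma and eps are illustrative choices.
import numpy as np
from scipy.ndimage import gaussian_filter

def self_quotient_image(image, sigma=3.0, eps=1e-6):
    """Return the pixel-wise ratio of the image to its smoothed version."""
    img = image.astype(float)
    smoothed = gaussian_filter(img, sigma=sigma)
    return img / (smoothed + eps)  # eps avoids division by zero in dark regions
```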

Relevance:

30.00%

Publisher:

Abstract:

Over the course of the last decade, infrared (IR) and particularly thermal IR imaging based face recognition has emerged as a promising complement to conventional, visible-spectrum based approaches, which continue to struggle when applied in practice. While inherently insensitive to visible-spectrum illumination changes, IR data introduces specific challenges of its own, most notably sensitivity to factors which affect facial heat emission patterns, e.g. emotional state, ambient temperature, and alcohol intake. In addition, facial expression and pose changes are more difficult to correct in IR images because they are less rich in high-frequency detail, which is an important cue for fitting any deformable model. In this paper we describe a novel method which addresses these major challenges. Specifically, when comparing two thermal IR images of faces, we mutually normalize their poses and facial expressions by using an active appearance model (AAM) to generate synthetic images of the two faces with a neutral facial expression and in the same view (the average of the two input views). This is achieved by piecewise affine warping which follows AAM fitting. A major contribution of our work is the use of an AAM ensemble in which each AAM is specialized to a particular range of poses and a particular region of the thermal IR face space. Combined with the contributions from our previous work, which addressed the problem of reliable AAM fitting in the thermal IR spectrum and the development of a person-specific representation robust to transient changes in the pattern of facial temperature emissions, the proposed ensemble framework accurately matches faces across the full range of yaw from frontal to profile, even in the presence of scale variation (e.g. due to the varying distance of a subject from the camera). The effectiveness of the proposed approach is demonstrated on the largest public database of thermal IR images of faces and a newly acquired data set of thermal IR motion videos. Our approach achieved perfect recognition performance on both data sets, significantly outperforming the current state-of-the-art methods even when they are trained with multiple images spanning a range of head views.

Relevance:

30.00%

Publisher:

Abstract:

In many automatic face recognition applications, a set of a person's face images is available rather than a single image. In this paper, we describe a novel method for face recognition using image sets. We propose a flexible, semi-parametric model for learning probability densities confined to highly non-linear but intrinsically low-dimensional manifolds. The model leads to a statistical formulation of the recognition problem in terms of minimizing the divergence between densities estimated on these manifolds. The proposed method is evaluated on a large data set acquired in realistic imaging conditions with severe illumination variation. Our algorithm is shown to match the best, and outperform the other, state-of-the-art algorithms in the literature, achieving a 94% recognition rate on average.
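The semi-parametric manifold density model proposed in the paper is not reproduced here. As a much cruder illustration of the underlying "minimize the divergence between set densities" idea, the sketch below summarizes each image set by a single Gaussian over per-image feature vectors (hypothetical inputs) and compares two sets by symmetric Kullback-Leibler divergence.

```python
# Crude illustration only: each image set is summarized by a Gaussian
# (mean and covariance) in some feature space, and two sets are compared by
# symmetric KL divergence. The paper's model is a semi-parametric density on
# a non-linear manifold, not a single Gaussian.
import numpy as np

def gaussian_from_set(features, reg=1e-3):
    """features: (n_images, d) array of per-image feature vectors."""
    mu = features.mean(axis=0)
    cov = np.cov(features, rowvar=False) + reg * np.eye(features.shape[1])
    return mu, cov

def kl_gaussian(mu0, cov0, mu1, cov1):
    """KL divergence between two multivariate Gaussians N0 and N1."""
    d = mu0.shape[0]
    inv1 = np.linalg.inv(cov1)
    diff = mu1 - mu0
    _, logdet0 = np.linalg.slogdet(cov0)
    _, logdet1 = np.linalg.slogdet(cov1)
    return 0.5 * (np.trace(inv1 @ cov0) + diff @ inv1 @ diff - d + logdet1 - logdet0)

def set_dissimilarity(set_a, set_b):
    """Symmetric KL divergence between the Gaussian summaries of two sets."""
    ga, gb = gaussian_from_set(set_a), gaussian_from_set(set_b)
    return kl_gaussian(*ga, *gb) + kl_gaussian(*gb, *ga)
```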

Relevance:

30.00%

Publisher:

Abstract:

The fact that medical images contain redundant information is exploited by researchers for faster image acquisition: the sample set, or number of measurements, is reduced in order to achieve rapid imaging. However, due to inadequate sampling, noise artefacts are inevitable in Compressive Sensing (CS) MRI. CS utilizes the transform sparsity of MR images to reconstruct images from undersampled data. Locally sparsified compressed sensing is an extension of simple CS: it localizes sparsity constraints to sub-regions rather than using a global constraint. This paper presents a framework that uses local CS to improve image quality without increasing the sampling rate or making the acquisition process any slower. This is achieved by exploiting local constraints: dividing the image into independent sub-regions allows different sampling rates within the image. The energy distribution of MR images is uneven, and most noise arises from under-sampling in high-energy regions. By sampling sub-regions according to the energy distribution, noise artefacts can be minimized. Experiments were performed using the proposed technique, and the results were compared with global CS and are summarized in this paper.
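The local-CS reconstruction itself is not reproduced here. As a small illustration of energy-driven sample allocation, the sketch below builds a variable-density k-space undersampling mask that samples the high-energy central region more densely, under the common assumption that most MR image energy is concentrated near the centre of k-space; the acceleration factor and decay exponent are illustrative choices.

```python
# Hedged sketch: a variable-density random undersampling mask for 2-D k-space
# that keeps roughly 1/accel of the samples, with higher sampling probability
# near the (high-energy) centre. Illustrative only; not the paper's local-CS method.
import numpy as np

def variable_density_mask(shape, accel=4, decay=3.0, seed=0):
    ny, nx = shape
    rng = np.random.default_rng(seed)
    yy, xx = np.mgrid[:ny, :nx]
    r = np.hypot((yy - ny / 2) / (ny / 2), (xx - nx / 2) / (nx / 2))
    prob = (1.0 - np.clip(r, 0.0, 1.0)) ** decay      # denser near the centre
    prob *= (ny * nx / accel) / prob.sum()            # rescale to the target rate
    return rng.random(shape) < np.clip(prob, 0.0, 1.0)

mask = variable_density_mask((256, 256), accel=4)
print(f"sampled fraction: {mask.mean():.2f}")
```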

Relevance:

30.00%

Publisher:

Abstract:

Segmentation is the process of extracting objects from an image. This paper proposes a new algorithm for constructing an intuitionistic fuzzy set (IFS) from multiple fuzzy sets, applied to image segmentation. The hesitation degree in the IFS is formulated as the degree of ignorance (due to a lack of knowledge) about whether the chosen membership function is the best one for image segmentation. The image is thresholded by minimizing the entropy of the IFS generated from the various fuzzy sets. Experimental results are provided to show the effectiveness of the proposed method.
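The paper's IFS construction from multiple fuzzy sets is not reproduced here. As a generic illustration of entropy-driven threshold selection with an intuitionistic fuzzy set, the sketch below uses an assumed distance-to-class-mean membership, a Sugeno-type generator for non-membership, and the mean hesitation degree as the entropy to be minimized; all three choices are illustrative, not the paper's.

```python
# Generic illustration of IFS-entropy thresholding. The membership function,
# the Sugeno-type non-membership generator and the hesitation-based entropy
# are assumed, illustrative choices.
import numpy as np

def ifs_threshold(gray, lam=0.5):
    """gray: 2-D array with intensities in [0, 255]. Returns the threshold
    minimizing the mean hesitation degree of the induced IFS."""
    g = gray.astype(float).ravel()
    best_t, best_entropy = None, np.inf
    for t in range(1, 255):
        fg, bg = g[g > t], g[g <= t]
        if fg.size == 0 or bg.size == 0:
            continue
        class_mean = np.where(g > t, fg.mean(), bg.mean())
        mu = 1.0 / (1.0 + np.abs(g - class_mean) / 255.0)  # fuzzy membership
        nu = (1.0 - mu) / (1.0 + lam * mu)                 # Sugeno-type non-membership
        pi = 1.0 - mu - nu                                 # hesitation degree
        entropy = pi.mean()
        if entropy < best_entropy:
            best_t, best_entropy = t, entropy
    return best_t
```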

Relevance:

30.00%

Publisher:

Abstract:

This paper addresses the task of time-separated aerial image registration. The ability to solve this problem accurately and reliably is important for a variety of subsequent image understanding applications. The principal challenge lies in the extent and nature of transient appearance variation that a land area can undergo, such as that caused by changes in illumination conditions, seasonal variation, or occlusion by non-persistent objects (people, cars). Our work introduces several major novelties: (i) unlike previous work on aerial image registration, we approach the problem using a set-based paradigm; (ii) we show how local, pair-wise constraints in image space can be used to enforce a globally good registration using a constraints-graph structure; (iii) we show how a simple holistic representation derived from raw aerial images can be used as a basic building block of the constraints graph in a manner which achieves both high registration accuracy and speed; (iv) lastly, we introduce a new and, to the best of our knowledge, the only data corpus suitable for the evaluation of set-based aerial image registration algorithms. Using this data set, we demonstrate (i) that the proposed method already outperforms the state of the art for pair-wise registration, achieving greater accuracy and reliability while at the same time reducing the computational cost of the task, and (ii) that increasing the number of available images in a set consistently reduces the average registration error, with a marked difference already from a single additional image.

Relevance:

30.00%

Publisher:

Abstract:

Tests on printed circuit boards and integrated circuits are widely used in industry, resulting in reduced design time and cost of a project. Functional and connectivity tests for this type of circuit soon became a concern for manufacturers, prompting research into a reliable, quick, cheap and universal solution. Initial test schemes were based on a set of needles connected to the inputs and outputs of the integrated circuit board (bed-of-nails), to which signals were applied in order to verify whether the circuit met the specifications and could be assembled on the production line. With the development of projects, circuit miniaturization, improvement of the production processes, improvement of the materials used, as well as the increase in the number of circuits, it became necessary to search for another solution. Thus Boundary-Scan Testing was developed, which operates on the boundary of integrated circuits and allows testing the connectivity of the input and output ports of a circuit. The Boundary-Scan Testing method was standardized by the IEEE in 1990 as the IEEE 1149.1 Standard. Since then a large number of manufacturers have adopted this standard in their products.

The main objective of this master's thesis is the design of Boundary-Scan Testing for an image sensor in CMOS technology: analyzing the standard's requirements and the process used in prototype production, developing the design and layout of the Boundary-Scan circuitry, and analyzing the results obtained after production.

Chapter 1 briefly presents the evolution of testing procedures used in industry, developments and applications of image sensors, and the motivation for using the Boundary-Scan Testing architecture. Chapter 2 explores the fundamentals of Boundary-Scan Testing and image sensors, starting with the Boundary-Scan architecture defined in the Standard, where the functional blocks are analyzed; this understanding is necessary to implement the design on an image sensor. It also explains the architecture of image sensors currently in use, focusing on sensors with a large number of inputs and outputs. Chapter 3 describes the Boundary-Scan design that was implemented, analyzing the design and functions of the prototype, the software used, and the designs and simulations of the functional blocks of the implemented Boundary-Scan. Chapter 4 presents the layout process based on the design developed in Chapter 3, describing the software used for this purpose, the planning of the layout location (floorplan) and its dimensions, the layout of the individual blocks, the layout rule checks, the comparison with the final design and, finally, the simulation. Chapter 5 describes how the functional tests were performed to verify the design's compliance with the specifications of the IEEE 1149.1 Standard; these tests focused on the application of signals to the input and output ports of the produced prototype. Chapter 6 presents the conclusions drawn throughout the execution of the work.

Relevance:

30.00%

Publisher:

Abstract:

AIRES, Kelson R. T.; ARAÚJO, Hélder J.; MEDEIROS, Adelardo A. D. Plane Detection from Monocular Image Sequences. In: Proceedings of Visualization, Imaging and Image Processing (VIIP), Palma de Mallorca, Spain, 2008.

Relevance:

30.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

30.00%

Publisher:

Abstract:

A set of NIH Image macro programs was developed for qualitative and quantitative analyses of digital stereo pictures produced by scanning electron microscopes. These tools were designed for image alignment, anaglyph representation, animation, reconstruction of true elevation surfaces, reconstruction of elevation profiles, true-scale elevation mapping and, for the quantitative approach, surface area and roughness calculations. Limitations related to processing time, scanning techniques and programming concepts are also discussed.
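The NIH Image macros themselves are not reproduced here; as a small Python illustration of one of the listed operations, the sketch below builds a red-cyan anaglyph from an already-aligned stereo pair of grayscale SEM images (the arrays left and right are hypothetical).

```python
# Hedged sketch: red-cyan anaglyph from an aligned grayscale stereo pair.
# `left` and `right` are hypothetical 2-D uint8 arrays of equal shape.
import numpy as np

def anaglyph(left, right):
    """Left view drives the red channel; right view drives green and blue."""
    rgb = np.zeros(left.shape + (3,), dtype=np.uint8)
    rgb[..., 0] = left
    rgb[..., 1] = right
    rgb[..., 2] = right
    return rgb
```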