962 results for "Search image"


Relevance: 30.00%

Abstract:

The What-and-Where filter forms part of a neural network architecture for spatial mapping, object recognition, and image understanding. The Where filter responds to an image figure that has been separated from its background. It generates a spatial map whose cell activations simultaneously represent the position, orientation, and size of all the figures in a scene (where they are). This spatial map may be used to direct spatially localized attention to these image features. A multiscale array of oriented detectors, followed by competitive and interpolative interactions between position, orientation, and size scales, is used to define the Where filter. This analysis discloses several issues that need to be dealt with by a spatial mapping system that is based upon oriented filters, such as the role of cliff filters with and without normalization, the double peak problem of maximum orientation across size scale, and the different self-similar interpolation properties across orientation than across size scale. Several computationally efficient Where filters are proposed. The Where filter may be used for parallel transformation of multiple image figures into invariant representations that are insensitive to the figures' original position, orientation, and size. These invariant figural representations form part of a system devoted to attentive object learning and recognition (what it is). Unlike some alternative models where serial search for a target occurs, a What-and-Where representation can be used to rapidly search in parallel for a desired target in a scene. Such a representation can also be used to learn multidimensional representations of objects and their spatial relationships for purposes of image understanding. The What-and-Where filter is inspired by neurobiological data showing that a Where processing stream in the cerebral cortex is used for attentive spatial localization and orientation, whereas a What processing stream is used for attentive object learning and recognition.
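
A rough sketch of the kind of multiscale oriented filtering described above, assuming Gabor-like kernels and a simple winner-take-all pooling over orientation and size at each position; it illustrates the general idea only, not the authors' What-and-Where filter, which adds competitive and interpolative interactions between scales.

```python
# Hypothetical multiscale oriented filter bank: pool rectified filter
# responses over orientation and size at every image position.
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(size, wavelength, theta, sigma):
    """Real-valued Gabor-like oriented kernel (size should be odd)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + yr**2) / (2.0 * sigma**2))
    carrier = np.cos(2.0 * np.pi * xr / wavelength)
    return envelope * carrier

def where_map(image, thetas, sizes):
    """For each pixel, return the (orientation index, size index) with the
    strongest rectified response, plus that response strength."""
    responses = np.stack([
        np.abs(convolve2d(image, gabor_kernel(s, wavelength=s / 2.0,
                                              theta=t, sigma=s / 4.0),
                          mode='same'))
        for t in thetas for s in sizes
    ])                                    # shape: (n_theta * n_size, H, W)
    best = responses.argmax(axis=0)
    return best // len(sizes), best % len(sizes), responses.max(axis=0)
```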

Relevance: 30.00%

Abstract:

A search for a submerged jet ski and the lost limb of its driver involved in a collision with a speedboat was made in a shallow lake in Northern Ireland. The location of both was crucial to establishing events at the time of the accident. Local intelligence suggested both objects were likely to be partially buried by lacustrine silt. To avoid sediment churning, this required non-invasive, completely non-destructive assessment and mapping of the scene. A MALA RAMAC ground-penetrating radar (GPR) system mounted on floats for surveying from walkways and jetties, or placed in a small rubber dinghy for offshore profiling, was used. A grid was established and each line surveyed with 100, 200 and 400 MHz antennae. In waters over 6 m deep GPR data showed the form of the lake floor but excessive ringing occurred in the data. In waters less than 6 m deep ringing diminished on both 100 and 200 MHz data, the latter displaying the best trade-off between depth penetration and horizontal object resolution. 400 MHz data failed to be of use in waters over 2 m deep and at these depths showed only limited improvement in image quality compared to 200 MHz data. Surface objects such as a wooden walkway caused interference on 200 and 400 MHz data when antennae were oriented both normal and parallel to the survey direction; this may be a function of the low attenuation of radar waves in freshwater, allowing excellent lateral and vertical radar wave penetration. On 200 MHz data the damaged jet ski was clearly imaged in a location that contradicted the speedboat driver's account of the accident.
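
As a back-of-envelope illustration of the depth versus travel-time trade-off discussed above, the short calculation below assumes a typical radar velocity in freshwater; the survey itself reports no velocities, so the permittivity value is an assumption.

```python
# Two-way travel-time estimate for a freshwater GPR survey.
# Assumes a radar velocity of ~0.033 m/ns in freshwater
# (relative permittivity ~81).
C_M_PER_NS = 0.3            # speed of light in vacuum, m/ns
EPS_R_FRESHWATER = 81.0     # approximate relative permittivity of freshwater

def two_way_travel_time_ns(depth_m, eps_r=EPS_R_FRESHWATER):
    velocity = C_M_PER_NS / eps_r**0.5   # ~0.033 m/ns in freshwater
    return 2.0 * depth_m / velocity

# A lake-floor reflector at 6 m returns after roughly 360 ns; at 2 m
# (the practical limit quoted for the 400 MHz antenna) about 120 ns.
print(two_way_travel_time_ns(6.0))   # ~360
print(two_way_travel_time_ns(2.0))   # ~120
```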

Relevance: 30.00%

Abstract:

Some 8000 images obtained with the Solar Eclipse Coronal Imaging System (SECIS) fast-frame CCD camera instrument located at Lusaka, Zambia, during the total eclipse of 21 June 2001 have been analysed to search for short-period oscillations in intensity that could be a signature of solar coronal heating mechanisms by MHD wave dissipation. Images were taken in white-light and Fe XIV green-line (5303 Å) channels over 205 seconds (frame rate 39 s⁻¹), approximately the length of eclipse totality at this location, with a pixel size of four arcseconds square. The data are of considerably better quality than those that we obtained during the 11 August 1999 total eclipse (Rudawy et al.: Astron. Astrophys. 416, 1179, 2004), in that the images are much better exposed and enhancements in the drive system of the heliostat used gave much improved image stability. Classical Fourier and wavelet techniques have been used to analyse the emission at 29 518 locations, of which 10 714 had emission at reasonably high levels, searching for periodic fluctuations with periods in the range 0.1–17 seconds (frequencies 0.06–10 Hz). While a number of possible periodicities were apparent in the wavelet analysis, none of the spatially and time-limited periodicities in the local brightness curves was found to be physically important. This implies that the pervasive Alfvén wave-like phenomena reported by Tomczyk et al. (Science 317, 1192, 2007) from polarimetric observations with the Coronal Multi-Channel Polarimeter (CoMP) instrument do not give rise to significant oscillatory intensity fluctuations.
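
A bare-bones illustration of a Fourier periodicity search applied to a single pixel's brightness curve over the 0.06–10 Hz band quoted above; the mean-subtraction and the significance threshold are illustrative assumptions, not the SECIS analysis pipeline.

```python
# Plain Fourier periodogram search for intensity oscillations in one
# pixel's brightness curve, restricted to a given frequency band.
import numpy as np

def periodogram_peaks(brightness, frame_rate=39.0, f_lo=0.06, f_hi=10.0,
                      n_sigma=4.0):
    """Return (frequency, power) pairs whose power exceeds the band mean
    by n_sigma standard deviations. Threshold choice is illustrative."""
    signal = brightness - brightness.mean()        # remove the DC term
    power = np.abs(np.fft.rfft(signal))**2
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / frame_rate)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    p, f = power[band], freqs[band]
    threshold = p.mean() + n_sigma * p.std()
    return list(zip(f[p > threshold], p[p > threshold]))

# ~205 s of data at 39 frames per second, as in the eclipse observations.
t = np.arange(0, 205, 1.0 / 39.0)
fake_curve = np.random.default_rng(0).normal(size=t.size)
print(periodogram_peaks(fake_curve))
```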

Relevance: 30.00%

Abstract:

Images of the site of the Type Ic supernova (SN) 2002ap taken before explosion were analysed previously by Smartt et al. We have uncovered new unpublished, archival pre-explosion images from the Canada-France-Hawaii Telescope (CFHT) that are vastly superior in depth and image quality. In this paper we present a further search for the progenitor star of this unusual Type Ic SN. Aligning high-resolution Hubble Space Telescope observations of the SN itself with the archival CFHT images allowed us to pinpoint the location of the progenitor site on the ground-based observations. We find that a source visible in the B- and R-band pre-explosion images close to the position of the SN is (1) not coincident with the SN position within the uncertainties of our relative astrometry and (2) still visible ~4.7 yr post-explosion in late-time observations taken with the William Herschel Telescope. We therefore conclude that it is not the progenitor of SN 2002ap. We derived absolute limiting magnitudes for the progenitor of M_B ≥ -4.2 ± 0.5 and M_R ≥ -5.1 ± 0.5. These are the deepest limits yet placed on a Type Ic SN progenitor. We rule out all massive stars with initial masses greater than 7-8 M☉ (the lower mass limit for stars to undergo core collapse) that have not evolved to become Wolf-Rayet stars. This is consistent with the prediction that Type Ic SNe should result from the explosions of Wolf-Rayet stars. Comparing our luminosity limits with stellar models of single stars at appropriate metallicity (Z = 0.008) and with standard mass-loss rates, we find no model that produces a Wolf-Rayet star of low enough mass and luminosity to be classed as a viable progenitor. Models with twice the standard mass-loss rates provide possible single-star progenitors, but all are initially more massive than 30-40 M☉. We conclude that any single-star progenitor must have experienced at least twice the standard mass-loss rates, been initially more massive than 30-40 M☉ and exploded as a Wolf-Rayet star of final mass 10-12 M☉. Alternatively, a progenitor star of lower initial mass may have evolved in an interacting binary system. Mazzali et al. propose such a binary scenario for the progenitor of SN 2002ap in which a star of initial mass 15-20 M☉ is stripped by its binary companion, becoming a 5 M☉ Wolf-Rayet star prior to explosion. We constrain any possible binary companion to a main-sequence star of

Relevance: 30.00%

Abstract:

A bit-level systolic array system for performing a binary tree Vector Quantization codebook search is described. This consists of a linear chain of regular VLSI building blocks and exhibits data rates suitable for a wide range of real-time applications. A technique is described which reduces the computation required at each node in the binary tree to that of a single inner product operation. This method applies to all the common distortion measures (including the Euclidean distance, the Weighted Euclidean distance and the Itakura-Saito distortion measure) and significantly reduces the hardware required to implement the tree search system. © 1990 Kluwer Academic Publishers.
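
A small software sketch of the inner-product reduction for the squared Euclidean distance: at each tree node the left/right decision becomes a single dot product against a precomputed hyperplane. This illustrates the idea only, not the bit-level systolic hardware.

```python
# Binary-tree VQ codebook search where each node decision is one inner
# product: x is closer to the left centroid  <=>  w . x > threshold.
import numpy as np

class TreeNode:
    def __init__(self, left_centroid, right_centroid, left=None, right=None):
        self.w = left_centroid - right_centroid
        self.threshold = 0.5 * (np.dot(left_centroid, left_centroid)
                                - np.dot(right_centroid, right_centroid))
        self.left, self.right = left, right
        self.left_centroid, self.right_centroid = left_centroid, right_centroid

def tree_search(node, x):
    """Descend the binary codebook tree with one inner product per node."""
    while node is not None:
        go_left = np.dot(node.w, x) > node.threshold
        code, node = ((node.left_centroid, node.left) if go_left
                      else (node.right_centroid, node.right))
    return code   # centroid chosen at the last level of the tree
```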

Relevance: 30.00%

Abstract:

A technique is presented for locating and tracking objects in cluttered environments. Agents are randomly distributed across the image, and subsequently grouped around targets. Each agent uses a weightless neural network and a histogram intersection technique to score its location. The system has been used to locate and track a head in 320×240 resolution video at up to 15 fps.
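
A short sketch of the histogram-intersection score an agent might apply to its local image patch; the weightless neural network stage and the agent grouping are not shown, and the bin count below is an assumption.

```python
# Histogram intersection between a patch histogram and a stored target model.
import numpy as np

def histogram_intersection(patch_hist, model_hist):
    """Score in [0, 1]: sum of bin-wise minima, normalised by the model."""
    return np.minimum(patch_hist, model_hist).sum() / model_hist.sum()

def patch_score(patch, model_hist, bins=16):
    """Score a grey-level patch against the target model histogram."""
    hist, _ = np.histogram(patch, bins=bins, range=(0, 256))
    return histogram_intersection(hist.astype(float), model_hist)
```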

Relevance: 30.00%

Abstract:

This paper presents a novel two-pass algorithm for block-based motion compensation, constituted by the Linear Hashtable Motion Estimation Algorithm (LHMEA) and Hexagonal Search (HEXBS). On the basis of research into previous algorithms, especially the hexagonal search (HEXBS) motion estimation algorithm, we propose the LHMEA and the Two-Pass Algorithm (TPA). We introduce the hashtable into video compression. In this paper we employ LHMEA for the first-pass search over all the Macroblocks (MB) in the picture. Motion Vectors (MV) are then generated from the first pass and are used as predictors for the second-pass HEXBS motion estimation, which only searches a small number of MBs. The evaluation of the algorithm considers three important metrics: time, compression rate and PSNR. The performance of the algorithm is evaluated using standard video sequences and the results are compared to current algorithms. Experimental results show that the proposed algorithm can offer the same compression rate as the Full Search. LHMEA with TPA improves significantly on HEXBS and shows a direction for improving other fast motion estimation algorithms, for example Diamond Search.
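
An illustrative sketch of the hexagon-based refinement used in such a second pass, starting from a predicted motion vector (e.g. one produced by a first-pass predictor); the SAD cost and the plain-Python loops are assumptions for illustration, not the paper's implementation.

```python
# Hexagon-based search (HEXBS): coarse large-hexagon steps, then a small-
# hexagon refinement around the best position.
import numpy as np

LARGE_HEX = [(0, 0), (2, 0), (1, 2), (-1, 2), (-2, 0), (-1, -2), (1, -2)]
SMALL_HEX = [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]

def sad(block, ref, x, y):
    h, w = block.shape
    if y < 0 or x < 0 or y + h > ref.shape[0] or x + w > ref.shape[1]:
        return np.inf                       # candidate falls outside the frame
    return np.abs(block.astype(int) - ref[y:y + h, x:x + w].astype(int)).sum()

def hexbs(block, ref, x0, y0):
    """Refine a predicted block position (x0, y0) with hexagonal search."""
    x, y = x0, y0
    while True:                             # coarse search: large hexagon
        costs = [(sad(block, ref, x + dx, y + dy), dx, dy) for dx, dy in LARGE_HEX]
        _, dx, dy = min(costs)
        if (dx, dy) == (0, 0):
            break                           # centre is best: switch patterns
        x, y = x + dx, y + dy
    _, dx, dy = min((sad(block, ref, x + dx, y + dy), dx, dy)
                    for dx, dy in SMALL_HEX)
    return x + dx, y + dy                   # final matched block position
```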

Relevance: 30.00%

Abstract:

We introduce a classification-based approach to finding occluding texture boundaries. The classifier is composed of a set of weak learners, which operate on image intensity discriminative features that are defined on small patches and are fast to compute. A database that is designed to simulate digitized occluding contours of textured objects in natural images is used to train the weak learners. The trained classifier score is then used to obtain a probabilistic model for the presence of texture transitions, which can readily be used for line search texture boundary detection in the direction normal to an initial boundary estimate. This method is fast and therefore suitable for real-time and interactive applications. It works as a robust estimator, which requires a ribbon-like search region and can handle complex texture structures without requiring a large number of observations. We demonstrate results both in the context of interactive 2D delineation and of fast 3D tracking and compare its performance with other existing methods for line search boundary detection.
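
A compact sketch of line-search boundary detection along the normal to an initial estimate, assuming a hypothetical `prob_of_transition` callable that wraps the trained classifier's probabilistic score.

```python
# Sample positions along the normal direction and keep the one where the
# (assumed) classifier gives the highest texture-transition probability.
import numpy as np

def line_search_boundary(prob_of_transition, point, normal, half_range=10,
                         step=1.0):
    """prob_of_transition(xy) -> probability in [0, 1] at image position xy.
    Returns the candidate position with the highest probability."""
    normal = np.asarray(normal, dtype=float)
    normal /= np.linalg.norm(normal)
    offsets = np.arange(-half_range, half_range + step, step)
    candidates = [np.asarray(point, dtype=float) + t * normal for t in offsets]
    scores = [prob_of_transition(c) for c in candidates]
    return candidates[int(np.argmax(scores))]
```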

Relevance: 30.00%

Abstract:

The challenge of moving past the classic Windows, Icons, Menus, Pointer (WIMP) interface, i.e. by turning it ‘3D’, has resulted in much research and development. To evaluate the impact of 3D on the ‘finding a target picture in a folder’ task, we built a 3D WIMP interface that allowed the systematic manipulation of visual depth, visual aids, and the semantic category distribution of targets versus non-targets, as well as the detailed measurement of lower-level stimuli features. Across two separate experiments (one large-sample web-based experiment, to understand associations, and one controlled lab experiment, using eye tracking to understand user focus) we investigated how visual depth, use of visual aids, use of semantic categories, and lower-level stimuli features (i.e. contrast, colour and luminance) impact how successfully participants are able to search for, and detect, the target image. Moreover, in the lab-based experiment we captured pupillometry measurements to allow consideration of the influence of increasing cognitive load, resulting either from an increasing number of items on the screen or from the inclusion of visual depth. Our findings showed that increasing the visible layers of depth, and the inclusion of converging lines, did not impact target detection times, errors, or failure rates. Low-level features, including colour, luminance, and number of edges, did correlate with differences in target detection times, errors, and failure rates. Our results also revealed that semantic sorting algorithms significantly decreased target detection times. Increased semantic contrast between a target and its neighbours correlated with an increase in detection errors. Finally, pupillometric data did not provide evidence of any correlation between the number of visible layers of depth and pupil size; however, using structural equation modelling, we demonstrated that cognitive load does influence detection failure rates when there is a luminance contrast between the target and its surrounding neighbours. The results suggest that WIMP interaction designers should consider stimulus-driven factors, which were shown to influence the efficiency with which a target icon can be found in a 3D WIMP interface.

Relevance: 30.00%

Abstract:

Tests on printed circuit boards and integrated circuits are widely used in industry, resulting in reduced design time and cost of a project. The functional and connectivity tests in this type of circuit soon became a concern for manufacturers, leading to research into solutions that would provide a reliable, quick, cheap and universal answer. Initially, test schemes were based on a set of needles connected to the inputs and outputs of the integrated circuit board (bed-of-nails), to which signals were applied in order to verify whether the circuit met the specifications and could be assembled in the production line. With the development of projects, circuit miniaturization, improvement of production processes and of the materials used, as well as the increase in the number of circuits, it became necessary to search for another solution. Thus Boundary-Scan Testing was developed, which operates on the border of integrated circuits and allows testing the connectivity of the input and output ports of a circuit. The Boundary-Scan Testing method was converted into a standard in 1990 by the IEEE organization, becoming known as the IEEE 1149.1 Standard. Since then a large number of manufacturers have adopted this standard in their products. The main objective of this master's thesis is the design of Boundary-Scan Testing in an image sensor in CMOS technology, analysing the requirements of the standard and the process used in the prototype production, developing the design and layout of the Boundary-Scan, and analysing the results obtained after production. Chapter 1 briefly presents the evolution of testing procedures used in industry, developments and applications of image sensors, and the motivation for using the Boundary-Scan Testing architecture. Chapter 2 explores the fundamentals of Boundary-Scan Testing and image sensors, starting with the Boundary-Scan architecture defined in the standard, where the functional blocks are analysed; this understanding is necessary to implement the design on an image sensor. It also explains the architecture of image sensors currently used, focusing on sensors with a large number of inputs and outputs. Chapter 3 describes the design of the implemented Boundary-Scan, analysing the design and functions of the prototype, the software used, and the designs and simulations of the functional blocks of the implemented Boundary-Scan. Chapter 4 presents the layout process based on the design developed in Chapter 3, describing the software used for this purpose, the planning of the layout location (floorplan) and its dimensions, the layout of the individual blocks, the checks in terms of layout rules, the comparison with the final design and, finally, the simulation. Chapter 5 describes how the functional tests were performed to verify the design's compliance with the specifications of Standard IEEE 1149.1; these tests focused on the application of signals to the input and output ports of the produced prototype. Chapter 6 presents the conclusions drawn throughout the execution of the work.
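
As a small behavioural illustration of the capture/shift/update sequence that IEEE 1149.1 boundary-scan cells implement, the sketch below models only a boundary-scan register; it omits the TAP controller and instruction register and is not the thesis design.

```python
# Minimal behavioural model of a boundary-scan register chain between
# TDI and TDO: Capture-DR loads pin values, Shift-DR moves bits through
# the chain, Update-DR transfers them to the pin-driving latches.
class BoundaryScanRegister:
    def __init__(self, n_cells):
        self.capture_ff = [0] * n_cells   # capture/shift flip-flops
        self.update_ff = [0] * n_cells    # update latches driving the pins

    def capture(self, pin_values):
        """Capture-DR: load the current pin values into the shift path."""
        self.capture_ff = list(pin_values)

    def shift(self, tdi_bit):
        """Shift-DR: shift one bit in from TDI, return the bit sent to TDO."""
        tdo_bit = self.capture_ff[-1]
        self.capture_ff = [tdi_bit] + self.capture_ff[:-1]
        return tdo_bit

    def update(self):
        """Update-DR: move shifted data to the latches that drive the pins
        (e.g. during EXTEST)."""
        self.update_ff = list(self.capture_ff)


# Example: capture pin states, then clock them out on TDO while clocking
# a new test pattern in from TDI.
bsr = BoundaryScanRegister(4)
bsr.capture([1, 0, 1, 1])
out = [bsr.shift(bit) for bit in [0, 1, 1, 0]]   # out == [1, 1, 0, 1]
bsr.update()
```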

Relevance: 30.00%

Abstract:

In this paper we deal with the problem of feature selection by introducing a new approach based on the Gravitational Search Algorithm (GSA). The proposed algorithm combines the optimization behavior of GSA with the speed of the Optimum-Path Forest (OPF) classifier in order to provide a fast and accurate framework for feature selection. Experiments on datasets obtained from a wide range of applications, such as vowel recognition, image classification and fraud detection in power distribution systems, are conducted in order to assess the robustness of the proposed technique against Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA) and a Particle Swarm Optimization (PSO)-based algorithm for feature selection.
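
A brief sketch of the wrapper fitness such approaches optimize: a binary mask selects features and validation accuracy is the objective. A k-nearest-neighbour classifier stands in for OPF here, which is an assumption made purely for illustration (OPF is not part of scikit-learn).

```python
# Wrapper fitness for feature selection: accuracy of a classifier trained
# on the selected columns, measured on a held-out validation set.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

def fitness(mask, X_train, y_train, X_val, y_val):
    """mask: boolean array over features; returns validation accuracy."""
    mask = np.asarray(mask, dtype=bool)
    if not mask.any():
        return 0.0                      # empty subsets are worthless
    clf = KNeighborsClassifier().fit(X_train[:, mask], y_train)
    return accuracy_score(y_val, clf.predict(X_val[:, mask]))
```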

Relevance: 30.00%

Abstract:

The day-to-day of medical practice is marked by a constant search for accurate diagnosis and therapeutic assessment. For this purpose the physician draws on a wide variety of imaging techniques; however, the methods using ionizing radiation are still the most widely used because they are considered cheaper and, above all, very efficient when used with proper control and quality. The optimization of the risk-benefit ratio is considered a major breakthrough in conventional radiology, although this is not yet the reality of computed and digital radiography, for which Brazil has not established standards and protocols. This work aims to optimize computed chest radiographs (anterior-posterior projection, AP). To achieve this objective, homogeneous phantoms were used that simulate the absorption and scattering characteristics of a standard patient's chest. Another factor studied was the subjective evaluation of image quality, carried out by visual grading assessment (VGA) by specialists in radiology, using an anthropomorphic phantom to identify the best image for a particular pathology (fracture or pneumonia). The images indicated by the radiologists were then quantified in terms of physical parameters (Detective Quantum Efficiency, DQE; Modulation Transfer Function, MTF; and Noise Power Spectrum, NPS) using MatLab® software. © 2013 Springer-Verlag.
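
As a small illustration of one of the quantities mentioned above, here is a sketch of a 2-D noise power spectrum (NPS) estimate from uniformly exposed regions of interest; the simple mean-detrending is an assumption, and the full IEC-style procedure involves additional steps.

```python
# Estimate the 2-D NPS by averaging the squared FFT magnitude of
# detrended, uniform-exposure ROIs.
import numpy as np

def nps_2d(rois, pixel_pitch_mm):
    """rois: array of shape (n_rois, ny, nx) with uniform-exposure patches."""
    n_rois, ny, nx = rois.shape
    spectra = []
    for roi in rois:
        noise = roi - roi.mean()                     # crude detrending
        spectra.append(np.abs(np.fft.fft2(noise))**2)
    scale = (pixel_pitch_mm**2) / (nx * ny)
    return scale * np.mean(spectra, axis=0)          # units: mm^2 * signal^2
```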

Relevance: 30.00%

Abstract:

Feature selection aims to find the most important information in a given set of features. As this task can be seen as an optimization problem, the combinatorial growth of the possible solutions may make an exhaustive search infeasible. In this paper we propose a new nature-inspired feature selection technique based on the Charged System Search (CSS), which has never before been applied to this context. The wrapper approach combines the exploration power of CSS with the speed of the Optimum-Path Forest classifier to find the set of features that maximizes the accuracy on a validating set. Experiments conducted on four public datasets have demonstrated that the proposed approach can outperform some well-known swarm-based techniques. © 2013 Springer-Verlag.
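
A simplified, continuous-valued sketch of a Coulomb-inspired update of the kind CSS performs, where better solutions carry larger charges and attract the others; the parameter names, the attraction rule and the binarisation needed for feature selection are assumptions for illustration, not the paper's exact formulation.

```python
# One force-based update step over a population of candidate solutions,
# assuming higher fitness is better (e.g. the wrapper accuracy above).
import numpy as np

def css_step(positions, fitnesses, radius=1.0, ka=0.5, kv=0.5,
             velocities=None, rng=None):
    rng = rng or np.random.default_rng()
    n, d = positions.shape
    velocities = np.zeros((n, d)) if velocities is None else velocities
    worst, best = fitnesses.min(), fitnesses.max()
    charges = (fitnesses - worst) / (best - worst + 1e-12)   # in [0, 1]
    forces = np.zeros((n, d))
    for j in range(n):
        for i in range(n):
            if i == j:
                continue
            diff = positions[i] - positions[j]
            r = np.linalg.norm(diff) + 1e-12
            # Coulomb-like magnitude: linear inside `radius`, inverse-square outside
            mag = charges[i] * (r / radius**3 if r < radius else 1.0 / r**2)
            if fitnesses[i] > fitnesses[j]:      # pulled only toward better solutions
                forces[j] += mag * diff
    new_velocities = (kv * rng.random((n, d)) * velocities
                      + ka * rng.random((n, d)) * forces)
    return positions + new_velocities, new_velocities
```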