962 results for Pattern-matching technique


Relevance:

100.00%

Publisher:

Abstract:

The pattern of illumination on an undulating surface can be used to infer its 3-D form (shape from shading). But the recovery of shape would be invalid if the shading actually arose from reflectance variation. When a corrugated surface is painted with an albedo texture, the variation in local mean luminance (LM) due to shading is accompanied by a similar modulation in texture amplitude (AM). This is not so for reflectance variation, nor for roughly textured surfaces. We used a haptic matching technique to show that modulations of texture amplitude play a role in the interpretation of shape from shading. Observers were shown plaid stimuli comprising LM and AM combined in-phase (LM+AM) on one oblique and in anti-phase (LM-AM) on the other. Stimuli were presented via a modified ReachIN workstation allowing the co-registration of visual and haptic stimuli. In the first experiment, observers were asked to adjust the phase of a haptic surface, which had the same orientation as the LM+AM combination, until its peak in depth aligned with the visually perceived peak. The resulting alignments were consistent with the use of a lighting-from-above prior. In the second experiment, observers were asked to adjust the amplitude of the haptic surface to match that of the visually perceived surface. Observers chose relatively large amplitude settings when the haptic surface was oriented and phase-aligned with the LM+AM cue. When the haptic surface was aligned with the LM-AM cue, amplitude settings were close to zero. Thus the LM/AM phase relation is a significant visual depth cue, and is used to discriminate between shading and reflectance variations. [Supported by the Engineering and Physical Sciences Research Council, EPSRC].
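
A minimal numerical sketch, not the authors' stimulus code, of how a single oblique component can carry the luminance modulation (LM) and the texture amplitude modulation (AM) either in phase (LM+AM) or in anti-phase (LM-AM); the binary noise texture, modulation depth and spatial frequency are illustrative assumptions.

```python
import numpy as np

size = 256
m = 0.2                                   # modulation depth (illustrative)
rng = np.random.default_rng(0)

# Carrier texture: binary noise in [-1, 1], standing in for an albedo texture
texture = rng.choice([-1.0, 1.0], size=(size, size))

# One oblique sinusoidal modulator (45 deg), four cycles across the image
x, y = np.meshgrid(np.arange(size), np.arange(size))
modulator = np.sin(2 * np.pi * 4 * (x + y) / (2 * size))

mean_lum = 0.5
# LM and AM in phase: the texture amplitude envelope follows the luminance modulation
lm_plus_am  = mean_lum * (1 + m * modulator) + 0.25 * (1 + m * modulator) * texture
# LM and AM in anti-phase: the amplitude envelope opposes the luminance modulation
lm_minus_am = mean_lum * (1 + m * modulator) + 0.25 * (1 - m * modulator) * texture
```

In the plaid stimuli described above, the second oblique component would carry the opposite LM/AM pairing.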

Relevance:

100.00%

Publisher:

Abstract:

Many object recognition techniques perform some flavour of point pattern matching between a model and a scene. Such points are usually selected by a feature detection algorithm that is robust to a class of image transformations, and a suitable descriptor is computed over them in order to obtain a reliable matching. Moreover, some approaches take an additional step by casting the correspondence problem as a matching between graphs defined over feature points. The motivation is that the relational model would add more discriminative power; however, the overall effectiveness strongly depends on the ability to build a graph that is stable with respect to both changes in the object appearance and the spatial distribution of interest points. In fact, widely used graph-based representations have been shown to suffer from some limitations, especially with respect to changes in the Euclidean organization of the feature points. In this paper we introduce a technique to build relational structures over corner points that does not depend on the spatial distribution of the features. © 2012 ICPR Org Committee.
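
The abstract does not describe the paper's own graph construction; the sketch below only illustrates the general idea of a relational structure over corner points driven by descriptor similarity rather than by the spatial (Euclidean) layout of the points. The function name, descriptor dimensionality and parameters are illustrative assumptions.

```python
import numpy as np

def descriptor_graph(descriptors, k=3):
    """Connect each feature to its k most similar features in descriptor
    space (not image space), so the resulting graph does not depend on the
    spatial distribution of the corner points."""
    d = np.linalg.norm(descriptors[:, None, :] - descriptors[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)               # no self-loops
    adjacency = np.zeros_like(d, dtype=bool)
    for i, row in enumerate(d):
        for j in np.argsort(row)[:k]:
            adjacency[i, j] = adjacency[j, i] = True
    return adjacency

# Toy usage with random 32-dimensional descriptors for 20 corner points
rng = np.random.default_rng(1)
A = descriptor_graph(rng.normal(size=(20, 32)))
```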

Relevance:

90.00%

Publisher:

Abstract:

Precession electron diffraction (PED) is a hollow-cone, non-stationary illumination technique for collecting electron diffraction patterns under quasi-kinematical conditions (as in X-ray diffraction), which enables “ab initio” solving of the crystalline structures of nanocrystals. The PED technique has recently been used in TEM instruments operating at 100 to 300 kV to turn them into true electron diffractometers, thus enabling electron crystallography. The PED technique, when combined with fast electron diffraction acquisition and pattern-matching software techniques, may also be used for high-magnification, ultra-fast mapping of variable crystal orientations and phases, similarly to what is achieved with the Electron Backscatter Diffraction (EBSD) technique in Scanning Electron Microscopes (SEM) at lower magnifications and longer acquisition times.
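
As a rough illustration of the pattern-matching step mentioned above, and not the actual acquisition or indexing software, the sketch below scores an experimental diffraction pattern against a bank of simulated templates with a normalized cross-correlation index; the template bank and the pattern are placeholders.

```python
import numpy as np

def correlation_index(pattern, template):
    """Normalized cross-correlation between an experimental diffraction
    pattern and one simulated template (both 2-D intensity arrays)."""
    p = pattern - pattern.mean()
    t = template - template.mean()
    return float((p * t).sum() / (np.linalg.norm(p) * np.linalg.norm(t) + 1e-12))

def best_match(pattern, templates):
    """Return the index of the best-matching orientation/phase template."""
    scores = [correlation_index(pattern, t) for t in templates]
    return int(np.argmax(scores)), scores

# Toy usage: in practice the templates would come from kinematical simulations
# over a grid of candidate orientations and phases.
rng = np.random.default_rng(2)
templates = [rng.random((64, 64)) for _ in range(10)]
pattern = templates[3] + 0.1 * rng.random((64, 64))
idx, _ = best_match(pattern, templates)
```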

Relevance:

90.00%

Publisher:

Abstract:

Pedicle screw insertion has revolutionized the surgical treatment of spinal fractures and spinal disorders. Although X-ray fluoroscopy based navigation is popular, it carries the risk of prolonged exposure to X-ray radiation, and systems with lower radiation risk are generally quite expensive. The position and orientation of the drill are clinically very important in pedicle screw fixation. In this paper, the position and orientation of a marker on the drill are determined using pattern-recognition methods based on geometric features obtained from the input video sequence captured by a CCD camera. A search is then performed on the preprocessed video frames to obtain the exact position and orientation of the drill. Animated graphics showing the instantaneous position and orientation of the drill are then overlaid on the processed video for real-time drill control and navigation.
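
A hedged sketch of one way the in-plane position and orientation of such a marker might be recovered from a thresholded video frame, using the centroid and principal axis of the foreground pixels; this is a generic moment-based estimate, not the paper's specific geometric-feature method.

```python
import numpy as np

def marker_pose(binary_frame):
    """Estimate the centroid and in-plane orientation (degrees) of a marker
    from a thresholded frame (True where the marker is visible), using the
    principal axis of the foreground pixel distribution."""
    ys, xs = np.nonzero(binary_frame)
    centroid = np.array([xs.mean(), ys.mean()])
    coords = np.stack([xs - centroid[0], ys - centroid[1]])
    cov = coords @ coords.T / xs.size
    eigvals, eigvecs = np.linalg.eigh(cov)
    major = eigvecs[:, np.argmax(eigvals)]        # principal axis direction
    angle = np.degrees(np.arctan2(major[1], major[0]))
    return centroid, angle

# Toy usage: a synthetic elongated blob standing in for the drill marker
frame = np.zeros((120, 160), dtype=bool)
frame[50:70, 30:130] = True
centre, theta = marker_pose(frame)
```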

Relevance:

90.00%

Publisher:

Abstract:

A traditional method of validating the performance of a flood model when remotely sensed data of the flood extent are available is to compare the predicted flood extent to that observed. The performance measure employed often uses areal pattern-matching to assess the degree to which the two extents overlap. Recently, remote sensing of flood extents using synthetic aperture radar (SAR) and airborne scanning laser altimetry (LIDAR) has made the synoptic measurement of water surface elevations along flood waterlines more straightforward, and this has emphasised the possibility of using alternative performance measures based on height. This paper considers the advantages that can accrue from using a performance measure based on waterline elevations rather than one based on areal patterns of wet and dry pixels. The two measures were compared for their ability to estimate flood inundation uncertainty maps from a set of model runs carried out to span the acceptable model parameter range in a GLUE-based analysis. A 1-in-5-year flood on the Thames in 1992 was used as a test event. As is typical for UK floods, only a single SAR image of observed flood extent was available for model calibration and validation. A simple implementation of a two-dimensional flood model (LISFLOOD-FP) was used to generate model flood extents for comparison with that observed. The performance measure based on height differences of corresponding points along the observed and modelled waterlines was found to be significantly more sensitive to the channel friction parameter than the measure based on areal patterns of flood extent. The former was able to restrict the parameter range of acceptable model runs and hence reduce the number of runs necessary to generate an inundation uncertainty map. As a result, there was less uncertainty in the final flood risk map. The uncertainty analysis included the effects of uncertainties in the observed flood extent as well as in model parameters. The height-based measure was found to be more sensitive when increased heighting accuracy was achieved by requiring that observed waterline heights varied slowly along the reach. The technique allows the reach to be decomposed into sections, with different effective channel friction parameters used in different sections, which in this case resulted in lower r.m.s. height differences between observed and modelled waterlines than those achieved by runs using a single friction parameter for the whole reach. However, a validation of the modelled inundation uncertainty using the calibration event showed a significant difference between the uncertainty map and the observed flood extent. While this was true for both measures, the difference was especially significant for the height-based one. This is likely to be due to the conceptually simple flood inundation model and the coarse application resolution employed in this case. The increased sensitivity of the height-based measure may place an increased onus on the model developer to produce a valid model.
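
The abstract does not give the exact formulations; the sketch below shows a common areal pattern-matching measure (overlap of wet pixels) and an r.m.s. waterline-height measure of the kind discussed, assuming the observed and modelled inputs are already co-registered.

```python
import numpy as np

def areal_fit(observed_wet, modelled_wet):
    """Areal pattern-matching measure: overlap of observed and modelled
    flood extents (boolean wet/dry rasters), F = |A and B| / |A or B|."""
    inter = np.logical_and(observed_wet, modelled_wet).sum()
    union = np.logical_or(observed_wet, modelled_wet).sum()
    return float(inter / union)

def waterline_rmse(observed_z, modelled_z):
    """Height-based measure: r.m.s. difference between observed and modelled
    water surface elevations at corresponding waterline points."""
    d = np.asarray(observed_z, dtype=float) - np.asarray(modelled_z, dtype=float)
    return float(np.sqrt(np.mean(d ** 2)))
```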

Relevance:

90.00%

Publisher:

Abstract:

Stochastic Diffusion Search is an efficient probabilistic best-fit search technique capable of transformation-invariant pattern matching. Although inherently parallel in operation, it is difficult to implement efficiently in hardware because it requires full inter-agent connectivity. This paper describes a lattice implementation which, while qualitatively retaining the properties of the original algorithm, restricts connectivity, enabling simpler implementation on parallel hardware. Diffusion times are examined for different network topologies, ranging from ordered lattices through small-world networks to random graphs.
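
A minimal sketch of Stochastic Diffusion Search with diffusion restricted to a ring lattice of agents, illustrating the test and diffusion phases on a toy string-search task; the agent count, iteration count and search space are illustrative, and this is not the paper's hardware implementation.

```python
import numpy as np

rng = np.random.default_rng(3)
search_space = "xxyxxpatternxyxy"
target = "pattern"
n_agents, n_steps = 30, 50
max_pos = len(search_space) - len(target) + 1

positions = rng.integers(0, max_pos, n_agents)   # each agent's hypothesis
active = np.zeros(n_agents, dtype=bool)

for _ in range(n_steps):
    # Test phase: each agent checks one randomly chosen component of its hypothesis.
    offsets = rng.integers(0, len(target), n_agents)
    for i in range(n_agents):
        active[i] = search_space[positions[i] + offsets[i]] == target[offsets[i]]
    # Diffusion phase: an inactive agent consults only a ring-lattice neighbour
    # (restricted connectivity), copying its hypothesis if that neighbour is active.
    for i in np.where(~active)[0]:
        j = (i + rng.choice([-1, 1])) % n_agents
        if active[j]:
            positions[i] = positions[j]
        else:
            positions[i] = rng.integers(0, max_pos)

# The largest cluster of agents should settle at index 5, where "pattern" starts.
```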

Relevance:

90.00%

Publisher:

Abstract:

Negotiation is the most effective tool for obtaining something one wants; it takes place when there are conflicts and alternatives to be selected that may involve the whole company. The various alternatives present common and conflicting interests, expressing the complexity of the relationships involved. With the growing demand for agility in responding to new demand profiles, organizations need to be more versatile in their processes and faster in reacting to market changes, and Mutual Gains Negotiation (NGM) is a current approach to creating value. This dissertation aims to propose the use of the tools of NGM theory as an instrument to support public procurement managers who purchase IT products and services in achieving the expected results. To this end, descriptive-exploratory research was carried out using a qualitative approach. Theoretical studies were conducted on the following topics: Organizational Flexibility, Public Procurement Management, Information Systems, Strategic Alignment and Negotiation Theory, in order to gain a better understanding of the research problem. An unstructured questionnaire was prepared as the research instrument used in this study. The questionnaire was applied directly to the participants, yielding 10 respondents, all of whom take part in the bidding processes of the public company Delta. Once the data had been collected, the responses were analysed using a variant of content analysis called pattern-matching, with the purpose of comparing the results with the theoretical framework used in the study. As a result, the study identified the use of a distributive approach in the bidding processes.

Relevance:

90.00%

Publisher:

Abstract:

Graduate Program in Materials Science and Technology - FC

Relevance:

90.00%

Publisher:

Abstract:

The Toba eruption that occurred some 74 ka ago in Sumatra, Indonesia, is among the largest volcanic events on Earth over the last 2 million years. Tephra from this eruption has been spread over vast areas in Asia, where it constitutes a major time marker close to the Marine Isotope Stage 4/5 boundary. As yet, no tephra associated with Toba has been identified in Greenland or Antarctic ice cores. Based on new accurate dating of Toba tephra and on accurately dated European stalagmites, the Toba event is known to occur between the onsets of Greenland interstadials (GI) 19 and 20. Furthermore, the existing linking of Greenland and Antarctic ice cores by gas records and by the bipolar seesaw hypothesis suggests that the Antarctic counterpart is situated between Antarctic Isotope Maxima (AIM) 19 and 20. In this work we suggest a direct synchronization of Greenland (NGRIP) and Antarctic (EDML) ice cores at the Toba eruption based on matching of a pattern of bipolar volcanic spikes. Annual layer counting between volcanic spikes in both cores allows for a unique match. We first demonstrate this bipolar matching technique at the already synchronized Laschamp geomagnetic excursion (41 ka BP) before we apply it to the suggested Toba interval. The Toba synchronization pattern covers some 2000 yr in GI-20 and AIM-19/20 and includes nine acidity peaks that are recognized in both ice cores. The suggested bipolar Toba synchronization has decadal precision. It thus allows a determination of the exact phasing of inter-hemispheric climate in a time interval of poorly constrained ice core records, and it allows for a discussion of the climatic impact of the Toba eruption in a global perspective. The bipolar linking gives no support for a long-term global cooling caused by the Toba eruption as Antarctica experiences a major warming shortly after the event. Furthermore, our bipolar match provides a way to place palaeo-environmental records other than ice cores into a precise climatic context.
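
A hedged sketch of the kind of check the bipolar matching relies on: comparing the annual-layer counts between successive volcanic acidity spikes in the two cores, where a small mismatch supports a common spike pattern. The spike ages below are invented for illustration and are not data from NGRIP or EDML.

```python
import numpy as np

def interval_mismatch(spike_years_core_a, spike_years_core_b):
    """r.m.s. disagreement (in counted years) between the inter-spike layer
    counts of two ice cores; a small value supports a bipolar spike match."""
    a = np.diff(np.asarray(spike_years_core_a, dtype=float))
    b = np.diff(np.asarray(spike_years_core_b, dtype=float))
    return float(np.sqrt(np.mean((a - b) ** 2)))

# Toy usage: nine acidity peaks per core, ages relative to an arbitrary zero
ngrip_like = [0, 112, 305, 467, 701, 944, 1230, 1541, 1820]
edml_like  = [0, 110, 309, 463, 705, 940, 1233, 1538, 1824]
score = interval_mismatch(ngrip_like, edml_like)
```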

Relevance:

90.00%

Publisher:

Abstract:

This paper reports on the development of an artificial neural network (ANN) method for detecting laminar defects, following a pattern-matching approach based on dynamic measurements. Although structural health monitoring (SHM) using ANNs has attracted much attention in the last decade, the problem of how to select the optimal class of ANN models has not been investigated in great depth. The lack of a rigorous ANN design methodology is one of the main reasons for the delay in the successful application of this promising technique in SHM. In this paper, a Bayesian method is applied to select the optimal class of ANN models for a given set of input/target training data. The ANN design method is demonstrated for the detection and characterisation of laminar defects in carbon fibre-reinforced beams, using flexural vibration data for beams with and without non-symmetric delamination damage.
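
The paper applies a Bayesian criterion to choose the optimal ANN model class; as a simpler stand-in, the sketch below compares MLP classes of different hidden-layer size using a BIC-style penalized score on toy data. The features, data and network sizes are illustrative assumptions, not the paper's.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)
X = rng.uniform(-1, 1, size=(200, 3))                  # toy vibration-derived features
y = np.sin(3 * X[:, 0]) + 0.5 * X[:, 1] + 0.05 * rng.normal(size=200)

def bic_score(model, X, y):
    """Penalized fit score: smaller is better; k counts network weights."""
    resid = y - model.predict(X)
    n = len(y)
    k = sum(w.size for w in model.coefs_) + sum(b.size for b in model.intercepts_)
    return n * np.log(np.mean(resid ** 2)) + k * np.log(n)

candidates = [2, 5, 10, 20]                            # hidden units defining each model class
scores = {}
for h in candidates:
    net = MLPRegressor(hidden_layer_sizes=(h,), max_iter=2000, random_state=0).fit(X, y)
    scores[h] = bic_score(net, X, y)

best_class = min(scores, key=scores.get)
```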

Relevance:

90.00%

Publisher:

Abstract:

Objective: To introduce a new technique for co-registration of Magnetoencephalography (MEG) with magnetic resonance imaging (MRI). We compare the accuracy of a new bite-bar with fixed fiducials to a previous technique whereby fiducial coils were attached proximal to landmarks on the skull. Methods: A bite-bar with fixed fiducial coils is used to determine the position of the head in the MEG co-ordinate system. Co-registration is performed by a surface-matching technique. The advantage of fixing the coils is that the co-ordinate system is not based upon arbitrary and operator dependent fiducial points that are attached to landmarks (e.g. nasion and the preauricular points), but rather on those that are permanently fixed in relation to the skull. Results: As a consequence of minimizing coil movement during digitization, errors in localization of the coils are significantly reduced, as shown by a randomization test. Displacement of the bite-bar caused by removal and repositioning between MEG recordings is minimal (∼0.5 mm), and dipole localization accuracy of a somatosensory mapping paradigm shows a repeatability of ∼5 mm. The overall accuracy of the new procedure is greatly improved compared to the previous technique. Conclusions: The test-retest reliability and accuracy of target localization with the new design is superior to techniques that incorporate anatomical-based fiducial points or coils placed on the circumference of the head. © 2003 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
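
A generic sketch of surface-matching co-registration by iterative closest point (rigidly aligning digitized head-shape points to an MRI scalp surface); this is a standard ICP included only to illustrate the approach, not the specific algorithm used in the study.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, n_iter=30):
    """Return rotation R and translation t mapping source points (N x 3,
    e.g. digitized head-shape points) onto target points (M x 3, e.g. an
    MRI-derived scalp surface)."""
    tree = cKDTree(target)
    R, t = np.eye(3), np.zeros(3)
    src = source.copy()
    for _ in range(n_iter):
        _, idx = tree.query(src)                 # closest surface point per source point
        matched = target[idx]
        mu_s, mu_m = src.mean(0), matched.mean(0)
        U, _, Vt = np.linalg.svd((src - mu_s).T @ (matched - mu_m))
        R_step = Vt.T @ U.T
        if np.linalg.det(R_step) < 0:            # guard against reflections
            Vt[-1] *= -1
            R_step = Vt.T @ U.T
        t_step = mu_m - R_step @ mu_s
        src = src @ R_step.T + t_step
        R, t = R_step @ R, R_step @ t + t_step   # compose with the running transform
    return R, t
```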

Relevance:

90.00%

Publisher:

Abstract:

The aim of this work was to investigate human contrast perception at contrast levels ranging from the detection threshold to suprathreshold levels using psychophysical techniques. The work consists of two major parts: the first deals with contrast matching, the second with contrast discrimination.

A contrast matching technique was used to determine when the perceived contrasts of different stimuli were equal. The effects of spatial frequency, stimulus area, image complexity and chromatic contrast on contrast detection thresholds and matches were studied. These factors influenced detection thresholds and perceived contrast at low contrast levels. However, at suprathreshold contrast levels perceived contrast became directly proportional to the physical contrast of the stimulus and almost independent of the factors affecting detection thresholds.

Contrast discrimination was studied by measuring contrast increment thresholds, which indicate the smallest detectable contrast difference. The effects of stimulus area, external spatial image noise and retinal illuminance were studied. These factors affected contrast detection thresholds and increment thresholds measured at low contrast levels. At high contrast levels, contrast increment thresholds became very similar, so that the effect of these factors decreased.

Human contrast perception was modelled by regarding the visual system as a simple image-processing system. A visual signal is first low-pass filtered by the ocular optics. This is followed by spatial high-pass filtering by the neural visual pathways and the addition of internal neural noise. Detection is mediated by a local matched filter, a weighted replica of the stimulus whose sampling efficiency decreases with increasing stimulus area and complexity. According to the model, the signals to be compared in a contrast matching task are first transferred through the early image-processing stages mentioned above; they are then filtered by a restoring transfer function which compensates for the low-level filtering and the limited spatial integration at high contrast levels. The perceived contrasts of the stimuli are equal when the restored responses to the stimuli are equal. In a contrast discrimination task, the signals to be discriminated first go through the early image-processing stages, after which signal-dependent noise is added to the matched filter responses. The decision made by the human brain is based on a comparison between the responses of the matched filters to the stimuli, and the accuracy of the decision is limited by pre- and post-filter noises. The model could accurately describe the results of contrast matching and discrimination in various conditions.
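
A compact sketch of the model structure described above, assumed one-dimensional for brevity: low-pass filtering by the ocular optics, neural high-pass filtering, additive internal noise, and a matched-filter decision variable. All filter parameters and the stimulus are illustrative.

```python
import numpy as np

def gaussian_kernel(sigma, n=31):
    x = np.arange(n) - n // 2
    g = np.exp(-x**2 / (2 * sigma**2))
    return g / g.sum()

def model_response(stimulus, rng, optics_sigma=2.0, neural_sigma=8.0, noise_sd=0.05):
    """Decision variable of a matched-filter observer for a 1-D stimulus."""
    blurred = np.convolve(stimulus, gaussian_kernel(optics_sigma), mode="same")      # ocular optics (low-pass)
    highpassed = blurred - np.convolve(blurred, gaussian_kernel(neural_sigma), mode="same")  # neural high-pass
    noisy = highpassed + rng.normal(0, noise_sd, stimulus.size)                       # internal neural noise
    template = stimulus - stimulus.mean()                                             # matched filter (stimulus replica)
    return float(noisy @ template)

# Toy usage: a low-contrast sinusoidal test grating
rng = np.random.default_rng(5)
x = np.arange(256)
grating = 0.1 * np.sin(2 * np.pi * 8 * x / 256)
resp = model_response(grating, rng)
```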