978 results for Edge detectors


Relevance:

20.00%

Publisher:

Abstract:

We studied the visual mechanisms that encode edge blur in images. Our previous work suggested that the visual system spatially differentiates the luminance profile twice to create the 'signature' of the edge, and then evaluates the spatial scale of this signature profile by applying Gaussian derivative templates of different sizes. The scale of the best-fitting template indicates the blur of the edge. In blur-matching experiments, a staircase procedure was used to adjust the blur of a comparison edge (40% contrast, 0.3 s duration) until it appeared to match the blur of test edges at different contrasts (5% - 40%) and blurs (6 - 32 min of arc). Results showed that lower-contrast edges looked progressively sharper. We also added a linear luminance gradient to blurred test edges. When the added gradient was of opposite polarity to the edge gradient, it made the edge look progressively sharper. Both effects can be explained quantitatively by the action of a half-wave rectifying nonlinearity that sits between the first and second (linear) differentiating stages. This rectifier was introduced to account for a range of other effects on perceived blur (Barbieri-Hesse and Georgeson, 2002 Perception 31 Supplement, 54), but it readily predicts the influence of the negative ramp. The effect of contrast arises because the rectifier has a threshold: it not only suppresses negative values but also small positive values. At low contrasts, more of the gradient profile falls below threshold and its effective spatial scale shrinks in size, leading to perceived sharpening.
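The contrast effect described above can be illustrated numerically. The sketch below (with assumed parameter values, not the study's actual model parameters) applies a thresholded half-wave rectifier to the gradient profile of a Gaussian-blurred edge and shows that lowering contrast shrinks the effective spatial scale of what survives the rectifier:

```python
import numpy as np

def gradient_profile(x, contrast, blur):
    # Luminance gradient of a Gaussian-blurred edge: a Gaussian bump
    # whose height scales with contrast and whose width is the blur.
    return contrast * np.exp(-x**2 / (2 * blur**2))

def effective_scale(x, g, threshold):
    # Half-wave rectifier with a threshold: suppresses negative AND
    # small positive values, as the abstract describes.
    r = np.maximum(g - threshold, 0.0)
    return np.sqrt(np.sum(r * x**2) / np.sum(r))  # RMS width of what survives

x = np.linspace(-60, 60, 2001)   # position (min of arc)
blur = 16.0                      # test-edge blur
threshold = 0.02                 # rectifier threshold (assumed value)

high = effective_scale(x, gradient_profile(x, 0.40, blur), threshold)
low = effective_scale(x, gradient_profile(x, 0.05, blur), threshold)
print(low < high)  # True: low contrast -> smaller effective scale -> looks sharper
```

With a fixed threshold, more of the low-contrast gradient falls below it, so the surviving profile is narrower and a smaller template fits best.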



We describe a template model for perception of edge blur and identify a crucial early nonlinearity in this process. The main principle is to spatially filter the edge image to produce a 'signature', and then find which of a set of templates best fits that signature. Psychophysical blur-matching data strongly support the use of a second-derivative signature, coupled to Gaussian first-derivative templates. The spatial scale of the best-fitting template signals the edge blur. This model predicts blur-matching data accurately for a wide variety of Gaussian and non-Gaussian edges, but it suffers a bias when edges of opposite sign come close together in sine-wave gratings and other periodic images. This anomaly suggests a second general principle: the region of an image that 'belongs' to a given edge should have a consistent sign or direction of luminance gradient. Segmentation of the gradient profile into regions of common sign is achieved by implementing the second-derivative 'signature' operator as two first-derivative operators separated by a half-wave rectifier. This multiscale system of nonlinear filters predicts perceived blur accurately for periodic and aperiodic waveforms. We also outline its extension to 2-D images and infer the 2-D shape of the receptive fields.
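The two-stage implementation can be sketched numerically. This minimal check (assumed discretization, not the paper's actual filters) shows that a half-wave rectifier between two first-derivative operators is transparent for an isolated edge, reproducing the plain second derivative, but segments a periodic waveform by gradient sign:

```python
import numpy as np

x = np.linspace(-40, 40, 1601)
dx = x[1] - x[0]
blur = 8.0
gauss = np.exp(-x**2 / (2 * blur**2))
edge = np.cumsum(gauss)
edge /= edge[-1]                            # Gaussian-integral edge profile

d1 = np.gradient(edge, dx)                  # first-derivative operator
rectified = np.maximum(d1, 0.0)             # half-wave rectifier between stages
signature = np.gradient(rectified, dx)      # second first-derivative operator
plain_2nd = np.gradient(d1, dx)             # ordinary second derivative

# An isolated edge has a gradient of consistent sign, so the rectifier
# passes it unchanged and the two-stage signature equals the 2nd derivative.
print(np.allclose(signature, plain_2nd))    # True

# A grating's gradient alternates in sign, so the rectifier keeps only the
# rising-gradient regions and the two operators no longer agree.
grating = np.sin(2 * np.pi * x / 20)
g1 = np.gradient(grating, dx)
seg = np.gradient(np.maximum(g1, 0.0), dx)
print(np.allclose(seg, np.gradient(g1, dx)))  # False
```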


Edge blur is an important perceptual cue, but how does the visual system encode the degree of blur at edges? Blur could be measured by the width of the luminance gradient profile, peak-to-trough separation in the 2nd derivative profile, or the ratio of 1st-to-3rd derivative magnitudes. In template models, the system would store a set of templates of different sizes and find which one best fits the 'signature' of the edge. The signature could be the luminance profile itself, or one of its spatial derivatives. I tested these possibilities in blur-matching experiments. In a 2AFC staircase procedure, observers adjusted the blur of Gaussian edges (30% contrast) to match the perceived blur of various non-Gaussian test edges. In experiment 1, test stimuli were mixtures of 2 Gaussian edges (eg 10 and 30 min of arc blur) at the same location, while in experiment 2, test stimuli were formed from a blurred edge sharpened to different extents by a compressive transformation. Predictions of the various models were tested against the blur-matching data, but only one model was strongly supported. This was the template model, in which the input signature is the 2nd derivative of the luminance profile, and the templates are applied to this signature at the zero-crossings. The templates are Gaussian derivative receptive fields that covary in width and length to form a self-similar set (ie same shape, different sizes). This naturally predicts that shorter edges should look sharper. As edge length gets shorter, responses of longer templates drop more than shorter ones, and so the response distribution shifts towards shorter (smaller) templates, signalling a sharper edge. The data confirmed this, including the scale-invariance implied by self-similarity, and a good fit was obtained from templates with a length-to-width ratio of about 1. The simultaneous analysis of edge blur and edge location may offer a new solution to the multiscale problem in edge detection.
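The template-matching step can be sketched in one dimension. In this toy version (an assumed formulation, not the experiment's implementation), the second-derivative signature of a Gaussian edge is correlated with normalized Gaussian first-derivative templates of various scales; the best-fitting template's scale recovers the edge blur:

```python
import numpy as np

x = np.linspace(-100, 100, 4001)

def gd1(x, scale):
    # Normalized Gaussian first-derivative template at a given spatial scale.
    t = -x * np.exp(-x**2 / (2 * scale**2))
    return t / np.linalg.norm(t)

true_blur = 12.0
signature = gd1(x, true_blur)   # 2nd derivative of a Gaussian edge has this shape
scales = np.arange(4.0, 33.0, 1.0)
responses = [signature @ gd1(x, s) for s in scales]
best = scales[int(np.argmax(responses))]
print(best)  # 12.0: the scale of the best-fitting template signals the blur
```

Because the templates form a self-similar family, the correlation peaks exactly when the template scale matches the signature scale.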


It is well known that optic flow - the smooth transformation of the retinal image experienced by a moving observer - contains valuable information about the three-dimensional layout of the environment. From psychophysical and neurophysiological experiments, specialised mechanisms responsive to components of optic flow (sometimes called complex motion) such as expansion and rotation have been inferred. However, it remains unclear (a) whether the visual system has mechanisms for processing the component of deformation and (b) whether there are multiple mechanisms that function independently from each other. Here, we investigate these issues using random-dot patterns and a forced-choice subthreshold summation technique. In experiment 1, we manipulated the size of a test region that was permitted to contain signal and found substantial spatial summation for signal components of translation, expansion, rotation, and deformation embedded in noise. In experiment 2, little or no summation was found for the superposition of orthogonal pairs of complex motion patterns (eg expansion and rotation), consistent with probability summation between pairs of independent detectors. Our results suggest that optic-flow components are detected by mechanisms that are specialised for particular patterns of complex motion.
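The flow components named above can be made concrete. For a locally linear (affine) flow v(x) = Ax + t, the 2x2 matrix A splits into the standard differential invariants: expansion (divergence), rotation (curl), and deformation (shear). A minimal sketch with an arbitrary example matrix (the values are illustrative, not stimulus parameters from the experiments):

```python
import numpy as np

# Example local flow field v(x) = A @ x + t; only A matters for the components.
A = np.array([[0.3, -0.5],
              [0.7,  0.1]])

expansion = A[0, 0] + A[1, 1]       # divergence: radial expansion/contraction
rotation = A[1, 0] - A[0, 1]        # curl: rigid rotation
deformation = np.hypot(A[0, 0] - A[1, 1],
                       A[0, 1] + A[1, 0])  # magnitude of the two shear components

print(expansion, rotation, deformation)
```

Independent detectors for these components, as the summation results suggest, would each respond to one of these invariants of the local flow.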


This thesis presents a study of how edges are detected and encoded by the human visual system. The study begins with theoretical work on the development of a model of edge processing, and includes psychophysical experiments on humans, and computer simulations of these experiments, using the model. The first chapter reviews the literature on edge processing in biological and machine vision, and introduces the mathematical foundations of this area of research. The second chapter gives a formal presentation of a model of edge perception that detects edges and characterizes their blur, contrast and orientation, using Gaussian derivative templates. This model has previously been shown to accurately predict human performance in blur matching tasks with several different types of edge profile. The model provides veridical estimates of the blur and contrast of edges that have a Gaussian integral profile. Since blur and contrast are independent parameters of Gaussian edges, the model predicts that varying one parameter should not affect perception of the other. Psychophysical experiments showed that this prediction is incorrect: reducing the contrast makes an edge look sharper; increasing the blur reduces the perceived contrast. Both of these effects can be explained by introducing a smoothed threshold to one of the processing stages of the model. It is shown that, with this modification, the model can predict the perceived contrast and blur of a number of edge profiles that differ markedly from the ideal Gaussian edge profiles on which the templates are based. With only a few exceptions, the results from all the experiments on blur and contrast perception can be explained reasonably well using one set of parameters for each subject. In the few cases where the model fails, possible extensions to the model are discussed.


Purpose. To evaluate the influence of soft contact lens midperipheral shape profile and edge design on the apparent epithelial thickness and indentation of the ocular surface with lens movement. Methods. Four soft contact lens designs comprising two different plano midperipheral shape profiles and two edge designs (chiseled and knife edge) of silicone-hydrogel material were examined in 26 subjects aged 24.7 ± 4.6 years, each worn bilaterally in randomized order. Lens movement was imaged en face on insertion, at 2 and 4 hours with a high-speed, high-resolution camera simultaneous to the cross-section of the edge of the contact lens interaction with the ocular surface captured using optical coherence tomography (OCT) nasally, temporally, and inferiorly. Optical imaging distortions were individually corrected for by imaging the apparent distortion of a glass slide surface by the removed lens. Results. Apparent epithelial thickness varied with edge position (P < 0.001). When distortion was corrected for, epithelial indentation decreased with time after insertion (P = 0.010), changed after a blink (P < 0.001), and varied with position on the lens edge (P < 0.001), with the latter being affected by midperipheral lens shape profile and edge design. Horizontal and vertical lens movement did not change with time postinsertion. Vertical motion was affected by midperipheral lens shape profile (P < 0.001) and edge design (P < 0.001). Lens movement was associated with physiologic epithelium thickness for lens midperipheral shape profile and edge designs. Conclusions. Dynamic OCT coupled with high-resolution video demonstrated that soft contact lens movement and image-corrected ocular surface indentation were influenced by both lens edge design and midperipheral lens shape profiles. © 2013 The Association for Research in Vision and Ophthalmology, Inc.


A view has emerged within manufacturing and service organizations that the operations management function can hold the key to achieving competitive edge. This has recently been emphasized by the demands for greater variety and higher quality, which must be set against a background of increasing cost of resources. As nations' trade barriers are progressively lowered and removed, producers of goods and service products are becoming more exposed to competition that may come from virtually anywhere in the world. To simply survive in this climate many organizations have found it necessary to improve their manufacturing or service delivery systems. To become real "winners" some have adopted a strategic approach to operations and completely reviewed and restructured their approach to production system design and operations planning and control. The articles in this issue of the International Journal of Operations & Production Management have been selected to illustrate current thinking and practice in relation to this situation. They are all based on papers presented to the Sixth International Conference of the Operations Management Association-UK, which was held at Aston University in June 1991. The theme of the conference was "Achieving Competitive Edge" and authors from 15 countries around the world contributed more than 80 presented papers. Within this special issue five topic areas are addressed, with two articles relating to each. The topics are: strategic management of operations; managing change; production system design; production control; and service operations. Under strategic management of operations, De Toni, Filippini and Forza propose a conceptual model which considers the performance of an operating system as a source of competitive advantage through the "operation value chain" of design, purchasing, production and distribution. Their model is set within the context of the tendency towards globalization.
New's article is somewhat in contrast to the more fashionable literature on operations strategy. It challenges the validity of the current idea of "world-class manufacturing" and, instead, urges a reconsideration of the view that strategic "trade-offs" are necessary to achieve a competitive edge. The importance of managing change has for some time been recognized within the field of organization studies, but its relevance in operations management is now being realized. Berger considers the use of "organization design", "sociotechnical systems" and change strategies and contrasts these with the more recent idea of the "dialogue perspective". A tentative model is suggested to improve the analysis of different strategies in a situation-specific context. Neely and Wilson look at an essential prerequisite if change is to be effected in an efficient way, namely product goal congruence. Using a case study as its basis, their article suggests a method of measuring goal congruence as a means of identifying the extent to which key performance criteria relating to quality, time, cost and flexibility are understood within an organization. The two articles on production systems design represent important contributions to the debate on flexible production organization and autonomous group working. Rosander uses the results from cases to test the applicability of "flow groups" as the optimal way of organizing batch production. Schuring also examines cases to determine the reasons behind the adoption of "autonomous work groups" in The Netherlands and Sweden. Both these contributions help to provide a greater understanding of the production philosophies which have emerged as alternatives to more conventional systems for intermittent and continuous production. The production control articles are both concerned with the concepts of "push" and "pull", which are the two broad approaches to material planning and control.
Hirakawa, Hoshino and Katayama have developed a hybrid model, suitable for multistage manufacturing processes, which combines the benefits of both systems. They discuss the theoretical arguments in support of the system and illustrate its performance with numerical studies. Slack and Correa's concern is with the flexibility characteristics of push and pull material planning and control systems. They use the case of two plants using the different systems to compare their performance within a number of predefined flexibility types. The two final contributions on service operations are complementary. The article by Voss really relates to manufacturing but examines the application of service industry concepts within the UK manufacturing sector. His studies in a number of companies support the idea of the "service factory" and offer a new perspective for manufacturing. Harvey's contribution, by contrast, is concerned with the application of operations management principles in the delivery of professional services. Using the case of social-service provision in Canada, it demonstrates how concepts such as "just-in-time" can be used to improve service performance. The ten articles in this special issue of the journal address a wide range of issues and situations. Their common aspect is that, together, they demonstrate the extent to which competitiveness can be improved via the application of operations management concepts and techniques.


Aim: Contrast sensitivity (CS) provides important information on visual function. This study aimed to assess differences in clinical expediency of the CS increment-matched new back-lit and original paper versions of the Melbourne Edge Test (MET) to determine the CS of the visually impaired. Methods: The back-lit and paper MET were administered to 75 visually impaired subjects (28-97 years). Two versions of the back-lit MET acetates were used to match the CS increments with the paper-based MET. Measures of CS were repeated after 30 min and again in the presence of a focal light source directed onto the MET. Visual acuity was measured with a Bailey-Lovie chart and subjects rated how much difficulty they had with face and vehicle recognition. Results: The back-lit MET gave a significantly higher CS than the paper-based version (14.2 ± 4.1 dB vs 11.3 ± 4.3 dB, p < 0.001). A significantly higher reading resulted with repetition of the paper-based MET (by 1.0 ± 1.7 dB, p < 0.001), but this was not evident with the back-lit MET (by 0.1 ± 1.4 dB, p = 0.53). The MET readings were increased by a focal light source, in both the back-lit (by 0.3 ± 0.81 dB, p < 0.01) and paper-based (1.2 ± 1.7 dB, p < 0.001) versions. CS as measured by the back-lit and paper-based versions of the MET was significantly correlated to patients' perceived ability to recognise faces (r = 0.71, r = 0.85 respectively; p < 0.001) and vehicles (r = 0.67, r = 0.82 respectively; p < 0.001), and with distance visual acuity (both r = -0.64; p < 0.001). Conclusions: The CS increment-matched back-lit MET gives higher CS values than the old paper-based test by approximately 3 dB and is more repeatable and less affected by external light sources. Clinically, the MET score provides information on patient difficulties with visual tasks, such as recognising faces. © 2005 The College of Optometrists.


DUE TO COPYRIGHT RESTRICTIONS ONLY AVAILABLE FOR CONSULTATION AT ASTON UNIVERSITY LIBRARY AND INFORMATION SERVICES WITH PRIOR ARRANGEMENT


Background: The Melbourne Edge Test (MET) is a portable forced-choice edge detection contrast sensitivity (CS) test. The original externally illuminated paper test has been superseded by a backlit version. The aim of this study was to establish normative values for age and to assess change with visual impairment. Method: The MET was administered to 168 people with normal vision (18-93 years old) and 93 patients with visual impairment (39-97 years old). Distance visual acuity (VA) was measured with a logMAR chart. Results: In those eyes without disease, MET CS was stable until the age of 50 years (23.8 ± 0.7 dB), after which it decreased at a rate of ≈1.5 dB per decade. Compared with normative values, people with low vision were found to have significantly reduced CS, which could not be totally accounted for by reduced VA. Conclusions: The MET provides a quick and easy measure of CS, which highlights a reduction in visual function that may not be detectable using VA measurements. © 2004 The College of Optometrists.
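The normative trend reported above amounts to a simple piecewise-linear rule of thumb. The sketch below encodes only the numbers stated in the abstract; it is not a fitted clinical formula:

```python
def expected_met_cs(age):
    # Normative MET contrast sensitivity (dB) per the abstract:
    # stable at 23.8 dB until age 50, then falling ~1.5 dB per decade.
    if age <= 50:
        return 23.8
    return 23.8 - 1.5 * (age - 50) / 10.0

print(expected_met_cs(40))  # 23.8
print(expected_met_cs(80))  # 19.3
```

A measured MET score well below this age-expected value would flag reduced visual function even when acuity appears normal.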


In this paper, a novel method for edge detection, an application of digital image processing, is developed. Fuzzy logic, a key concept of artificial intelligence, is used to implement fuzzy relative-pixel-value algorithms that find and highlight all the edges associated with an image by checking relative pixel values, thus bridging the concepts of digital image processing and artificial intelligence. The image is scanned exhaustively using a windowing technique, and each window is subjected to a set of fuzzy conditions that compare the centre pixel's value with those of adjacent pixels to check the magnitude of the pixel gradient within the window. Once the fuzzy conditions have been tested, appropriate values are assigned to the pixels in the window under test, producing an image with all the associated edges highlighted.
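The windowing idea can be sketched as a short routine. The paper's actual fuzzy rule set is not reproduced here; this simplified version slides a 3x3 window over the image and maps the largest relative pixel difference through a ramp (fuzzy) membership function, with `low` and `high` as assumed threshold values:

```python
import numpy as np

def fuzzy_edge_map(img, low=10, high=40):
    # Slide a 3x3 window over the image; compare the centre pixel with its
    # neighbours and map the largest absolute difference through a ramp
    # membership function (0 below `low`, 1 above `high`).
    h, w = img.shape
    out = np.zeros((h, w), dtype=float)
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            window = img[i-1:i+2, j-1:j+2].astype(float)
            diff = np.max(np.abs(window - window[1, 1]))
            out[i, j] = np.clip((diff - low) / (high - low), 0.0, 1.0)
    return out

img = np.zeros((8, 8), dtype=np.uint8)
img[:, 4:] = 100                     # vertical step edge
edges = fuzzy_edge_map(img)
print(edges[4, 3], edges[4, 4])      # 1.0 1.0: columns flanking the edge fire
print(edges[4, 1])                   # 0.0: flat region stays quiet
```

The fuzzy ramp gives graded edge strength for intermediate gradients instead of the hard binary decision of a crisp threshold.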


In this paper, we propose a new edge-based matching kernel for graphs by using discrete-time quantum walks. To this end, we commence by transforming a graph into a directed line graph. The reasons for using the line graph structure are twofold. First, for a graph, its directed line graph is a dual representation and each vertex of the line graph represents a corresponding edge in the original graph. Second, we show that the discrete-time quantum walk can be seen as a walk on the line graph and the state space of the walk is the vertex set of the line graph, i.e., the state space of the walk is the edges of the original graph. As a result, the directed line graph provides an elegant way of developing a new edge-based matching kernel based on discrete-time quantum walks. For a pair of graphs, we compute the h-layer depth-based representation for each vertex of their directed line graphs by computing entropic signatures (computed from discrete-time quantum walks on the line graphs) on the family of K-layer expansion subgraphs rooted at the vertex, i.e., we compute the depth-based representations for edges of the original graphs through their directed line graphs. Based on the new representations, we define an edge-based matching method for the pair of graphs by aligning the h-layer depth-based representations computed through the directed line graphs. The new edge-based matching kernel is thus computed by counting the number of matched vertices identified by the matching method on the directed line graphs. Experiments on standard graph datasets demonstrate the effectiveness of our new kernel.
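The directed line-graph construction at the heart of the kernel can be sketched in a few lines. The non-backtracking restriction `w != u` below is an assumption commonly made for discrete-time quantum walks; the paper's exact construction may differ:

```python
def directed_line_graph(edges):
    # Each undirected edge {u, v} yields two arcs (u, v) and (v, u); these
    # arcs are the vertices of the directed line graph.  Arc (u, v) connects
    # to arc (v, w), excluding the backtracking case w == u.
    arcs = [(u, v) for u, v in edges] + [(v, u) for u, v in edges]
    adj = {a: [] for a in arcs}
    for (u, v) in arcs:
        for (p, w) in arcs:
            if p == v and w != u:
                adj[(u, v)].append((p, w))
    return adj

# Triangle: 3 undirected edges -> 6 arcs, each with one non-backtracking successor.
lg = directed_line_graph([(0, 1), (1, 2), (2, 0)])
print(len(lg))        # 6
print(lg[(0, 1)])     # [(1, 2)]
```

A walk on this structure moves from edge to edge of the original graph, which is why the walk's state space is the original edge set.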


2000 Mathematics Subject Classification: 05C55.


Current commercially available mimics contain varying amounts of either the actual explosive/drug or the chemical compound of suspected interest by biological detectors. As a result, there is significant interest in determining the dominant chemical odor signatures of the mimics, often referred to as pseudos, particularly when compared to the genuine contraband material. This dissertation discusses results obtained from the analysis of drug and explosive headspace related to the odor profiles as recognized by trained detection canines. Analysis was performed through the use of headspace solid phase microextraction in conjunction with gas chromatography mass spectrometry (HS-SPME-GC-MS). Upon determination of specific odors, field trials were held using a combination of the target odors with COMPS. Piperonal was shown to be a dominant odor compound in the headspace of some ecstasy samples and a recognizable odor mimic by trained detection canines. It was also shown that detection canines could be imprinted on piperonal COMPS and correctly identify ecstasy samples at a threshold level of approximately 100 ng/s. Isosafrole and/or MDP-2-POH show potential as training aid mimics for non-piperonal based MDMA. Acetic acid was shown to be dominant in the headspace of heroin samples and verified as a dominant odor in commercial vinegar samples; however, no common, secondary compound was detected in the headspace of either. Because of the similarities detected within respective explosive classes, several compounds were chosen for explosive mimics. A single-base smokeless powder with a detectable level of 2,4-dinitrotoluene, a double-base smokeless powder with a detectable level of nitroglycerine, 2-ethyl-1-hexanol, DMNB, ethyl centralite and diphenylamine were shown to be accurate mimics for TNT-based explosives, NG-based explosives, plastic explosives, tagged explosives, and smokeless powders, respectively.
The combination of these six odors represents a comprehensive explosive odor kit with positive results for imprint on detection canines. As a proof of concept, the chemical compound PFTBA showed promise as a possible universal, non-target odor compound for comparison and calibration of detection canines and instrumentation. In a comparison study of shape versus vibration odor theory, the detection of d-methyl benzoate and methyl benzoate was explored using canine detectors. While results did not overwhelmingly substantiate either theory, shape odor theory provides a better explanation of the canine and human subject responses.