890 results for edge-to-edge grain crushing


Relevance: 50.00%

Abstract:

In many models of edge analysis in biological vision, the initial stage is a linear 2nd derivative operation. Such models predict that adding a linear luminance ramp to an edge will have no effect on the edge's appearance, since the ramp has no effect on the 2nd derivative. Our experiments did not support this prediction: adding a negative-going ramp to a positive-going edge (or vice-versa) greatly reduced the perceived blur and contrast of the edge. The effects on a fairly sharp edge were accurately predicted by a nonlinear multi-scale model of edge processing [Georgeson, M. A., May, K. A., Freeman, T. C. A., & Hesse, G. S. (in press). From filters to features: Scale-space analysis of edge and blur coding in human vision. Journal of Vision], in which a half-wave rectifier comes after the 1st derivative filter. But we also found that the ramp affected perceived blur more profoundly when the edge blur was large, and this greater effect was not predicted by the existing model. The model's fit to these data was much improved when the simple half-wave rectifier was replaced by a threshold-like transducer [May, K. A. & Georgeson, M. A. (2007). Blurred edges look faint, and faint edges look sharp: The effect of a gradient threshold in a multi-scale edge coding model. Vision Research, 47, 1705-1720.]. This modified model correctly predicted that the interaction between ramp gradient and edge scale would be much larger for blur perception than for contrast perception. In our model, the ramp narrows an internal representation of the gradient profile, leading to a reduction in perceived blur. This in turn reduces perceived contrast because estimated blur plays a role in the model's estimation of contrast. Interestingly, the model predicts that analogous effects should occur when the width of the window containing the edge is made narrower. This has already been confirmed for blur perception; here, we further support the model by showing a similar effect for contrast perception. © 2007 Elsevier Ltd. All rights reserved.
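As a rough illustration of the distinction drawn above between a plain half-wave rectifier and a threshold-like transducer, the short Python sketch below applies both nonlinearities to the gradient profile of a Gaussian edge with an opposing luminance ramp added. The smooth-threshold form, the parameter values and the "effective scale" measure are illustrative assumptions, not the published model.

import numpy as np

def half_wave(g):
    # Half-wave rectifier: passes positive gradient values, zeroes the rest.
    return np.maximum(g, 0.0)

def threshold_transducer(g, t=0.05, k=40.0):
    # Illustrative smooth threshold-like transducer (assumed form): suppresses
    # negative values and also attenuates small positive values.
    return np.maximum(g, 0.0) / (1.0 + np.exp(-k * (g - t)))

# A blurred edge plus an opposing luminance ramp: the ramp subtracts a constant
# from the 1st-derivative profile, so more of it falls near or below threshold.
x = np.linspace(-4.0, 4.0, 801)
edge_gradient = np.exp(-x**2 / 2.0)      # 1st derivative of a Gaussian edge (blur = 1)
g = edge_gradient - 0.3                  # opposing ramp adds a constant negative gradient

for f in (half_wave, threshold_transducer):
    r = f(g)
    w = r / r.sum()                      # normalised rectified-gradient profile
    scale = np.sqrt(np.sum(w * x**2) - np.sum(w * x)**2)
    print(f.__name__, "effective scale of the gradient profile:", round(float(scale), 3))

The threshold transducer shrinks the effective scale of the internal gradient representation more than the plain rectifier does, which is the direction of effect the abstract describes for blur perception.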

Relevance: 50.00%

Abstract:

We have shown previously that a template model for edge perception successfully predicts perceived blur for a variety of edge profiles (Georgeson, 2001 Journal of Vision 1 438a; Barbieri-Hesse and Georgeson, 2002 Perception 31 Supplement, 54). This study concerns the perceived contrast of edges. Our model spatially differentiates the luminance profile, half-wave rectifies this first derivative, and then differentiates again to create the edge's 'signature'. The spatial scale of the signature is evaluated by filtering it with a set of Gaussian derivative operators. This process finds the correlation between the signature and each operator kernel at each position. These kernels therefore act as templates, and the position and scale of the best-fitting template indicate the position and blur of the edge. Our previous finding, that reducing edge contrast reduces perceived blur, can be explained by replacing the half-wave rectifier with a smooth, biased rectifier function (May and Georgeson, 2003 Perception 32 388; May and Georgeson, 2003 Perception 32 Supplement, 46). With the half-wave rectifier, the peak template response R to a Gaussian edge with contrast C and scale s is given by: R = C p^(-1/4) s^(-3/2). Hence, edge contrast can be estimated from response magnitude and blur: C = R p^(1/4) s^(3/2). Use of this equation with the modified rectifier predicts that perceived contrast will decrease with increasing blur, particularly at low contrasts. Contrast-matching experiments supported this prediction. In addition, the model correctly predicts the perceived contrast of Gaussian edges modified either by spatial truncation or by the addition of a ramp.
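The quoted relation can be checked numerically. In the sketch below, p is kept as the constant printed in the abstract (a fixed model constant, possibly pi); its value cancels in the round trip from contrast to response and back, and the numeric values chosen are purely illustrative.

import math

def template_response(C, s, p=math.pi):
    # R = C * p**(-1/4) * s**(-3/2): peak template response to a Gaussian edge
    # of contrast C and scale s (half-wave rectifier case).
    return C * p ** -0.25 * s ** -1.5

def contrast_from_response(R, s, p=math.pi):
    # Inverse relation: C = R * p**(1/4) * s**(3/2).
    return R * p ** 0.25 * s ** 1.5

C, s = 0.4, 8.0                               # e.g. 40% contrast, 8 min of arc blur
R = template_response(C, s)
print(R, contrast_from_response(R, s))        # recovers C = 0.4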

Relevance: 50.00%

Abstract:

Blurred edges appear sharper in motion than when they are stationary. We proposed a model of this motion sharpening that invokes a local, nonlinear contrast transducer function (Hammett et al, 1998 Vision Research 38 2099-2108). Response saturation in the transducer compresses or 'clips' the input spatial waveform, rendering the edges sharper. To explain the increasing distortion of drifting edges at higher speeds, the degree of nonlinearity must increase with speed or temporal frequency. A dynamic contrast gain control before the transducer can account for both the speed dependence and approximate contrast invariance of motion sharpening (Hammett et al, 2003 Vision Research, in press). We show here that this model also predicts perceived sharpening of briefly flashed and flickering edges, and we show that the model can account fairly well for experimental data from all three modes of presentation (motion, flash, and flicker). At moderate durations and lower temporal frequencies the gain control attenuates the input signal, thus protecting it from later compression by the transducer. The gain control is somewhat sluggish, and so it suffers both a slow onset and a loss of power at high temporal frequencies. Consequently, brief presentations and high temporal frequencies of drift and flicker are less protected from distortion, and show greater perceptual sharpening.
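A minimal sketch of the clipping idea described above: a saturating (compressive) transducer is applied to a Gaussian-blurred edge, and the mid-edge steepness, measured relative to the waveform's own amplitude, increases. The Naka-Rushton-like form and the parameter values are assumptions for illustration; the published model's transducer and its speed-dependent gain control are not reproduced here.

import numpy as np

def transducer(c, semi_sat=0.2):
    # Illustrative saturating (compressive) contrast transducer (assumed form).
    return np.sign(c) * np.abs(c) / (np.abs(c) + semi_sat)

x = np.linspace(-4.0, 4.0, 401)
edge = np.cumsum(np.exp(-x**2 / (2.0 * 1.5**2)))       # Gaussian-blurred edge profile
edge = 0.5 * edge / edge[-1] - 0.25                     # bipolar, 50% contrast
clipped = transducer(edge)

# Saturation compresses the extremes and spares the steep central region, so
# relative to its own amplitude the output rises more steeply: the edge looks sharper.
mid = len(x) // 2
steepness_in = np.gradient(edge, x)[mid] / np.abs(edge).max()
steepness_out = np.gradient(clipped, x)[mid] / np.abs(clipped).max()
print("relative mid-edge steepness gain:", float(steepness_out / steepness_in))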

Relevance: 50.00%

Abstract:

We studied the visual mechanisms that encode edge blur in images. Our previous work suggested that the visual system spatially differentiates the luminance profile twice to create the 'signature' of the edge, and then evaluates the spatial scale of this signature profile by applying Gaussian derivative templates of different sizes. The scale of the best-fitting template indicates the blur of the edge. In blur-matching experiments, a staircase procedure was used to adjust the blur of a comparison edge (40% contrast, 0.3 s duration) until it appeared to match the blur of test edges at different contrasts (5% - 40%) and blurs (6 - 32 min of arc). Results showed that lower-contrast edges looked progressively sharper. We also added a linear luminance gradient to blurred test edges. When the added gradient was of opposite polarity to the edge gradient, it made the edge look progressively sharper. Both effects can be explained quantitatively by the action of a half-wave rectifying nonlinearity that sits between the first and second (linear) differentiating stages. This rectifier was introduced to account for a range of other effects on perceived blur (Barbieri-Hesse and Georgeson, 2002 Perception 31 Supplement, 54), but it readily predicts the influence of the negative ramp. The effect of contrast arises because the rectifier has a threshold: it not only suppresses negative values but also small positive values. At low contrasts, more of the gradient profile falls below threshold and its effective spatial scale shrinks in size, leading to perceived sharpening.
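A minimal 1-D sketch of the encoding scheme described above, assuming a simple correlation-based template fit: differentiate the luminance profile, apply a rectifier with a threshold, differentiate again, and report the scale of the best-fitting Gaussian derivative template as the blur estimate. All parameter values are illustrative.

import numpy as np

def gaussian_deriv(x, scale):
    # Gaussian 1st-derivative template at a given spatial scale.
    return -x / scale**2 * np.exp(-x**2 / (2.0 * scale**2))

def estimate_blur(luminance, x, threshold=0.0, scales=np.arange(2.0, 40.5, 0.5)):
    # Sketch of the pipeline: differentiate, rectify with a threshold,
    # differentiate again, then fit Gaussian derivative templates by correlation.
    d1 = np.gradient(luminance, x)                # first (linear) differentiation
    d1 = np.where(d1 > threshold, d1, 0.0)        # rectifier with a threshold
    signature = np.gradient(d1, x)                # second differentiation -> 'signature'
    responses = [np.dot(signature, gaussian_deriv(x, s)) /
                 np.linalg.norm(gaussian_deriv(x, s)) for s in scales]
    return scales[int(np.argmax(responses))]      # scale of best-fitting template = blur

x = np.arange(-120.0, 120.0, 0.5)                 # position, e.g. in min of arc
for blur in (8.0, 16.0, 32.0):
    edge = np.cumsum(np.exp(-x**2 / (2.0 * blur**2)))
    edge = 0.4 * edge / edge[-1]                  # Gaussian edge, 40% contrast
    print(blur, "->", estimate_blur(edge, x))

With the threshold at zero (pure half-wave rectification) the recovered scale matches the true blur; raising the threshold, or lowering the contrast, removes the skirts of the gradient profile and shifts the estimate towards smaller scales, which is the sharpening effect reported above.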

Relevance: 50.00%

Abstract:

We describe a template model for perception of edge blur and identify a crucial early nonlinearity in this process. The main principle is to spatially filter the edge image to produce a 'signature', and then find which of a set of templates best fits that signature. Psychophysical blur-matching data strongly support the use of a second-derivative signature, coupled to Gaussian first-derivative templates. The spatial scale of the best-fitting template signals the edge blur. This model predicts blur-matching data accurately for a wide variety of Gaussian and non-Gaussian edges, but it suffers a bias when edges of opposite sign come close together in sine-wave gratings and other periodic images. This anomaly suggests a second general principle: the region of an image that 'belongs' to a given edge should have a consistent sign or direction of luminance gradient. Segmentation of the gradient profile into regions of common sign is achieved by implementing the second-derivative 'signature' operator as two first-derivative operators separated by a half-wave rectifier. This multiscale system of nonlinear filters predicts perceived blur accurately for periodic and aperiodic waveforms. We also outline its extension to 2-D images and infer the 2-D shape of the receptive fields.
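The sign-segmentation principle can be illustrated on a grating-like waveform: implementing the second-derivative 'signature' as two first-derivative stages with a half-wave rectifier in between confines each edge's signature to its own region of common gradient sign. A rough sketch, with illustrative numbers only:

import numpy as np

x = np.linspace(0.0, 2.0 * np.pi, 1200, endpoint=False)
f = np.sin(3.0 * x)                                 # periodic waveform: edges of opposite sign nearby

d1 = np.gradient(f, x)
direct = np.gradient(d1, x)                         # direct 2nd derivative
segmented = np.gradient(np.maximum(d1, 0.0), x)     # half-wave rectifier between the two stages

print("fraction of the waveform carrying direct 2nd-derivative signal:",
      float(np.mean(np.abs(direct) > 1e-3)))
print("fraction carrying the rectified (sign-segmented) signature:    ",
      float(np.mean(np.abs(segmented) > 1e-3)))

The direct second derivative is non-zero over essentially the whole waveform, so neighbouring edges of opposite sign contribute to each other's signatures; the rectified version is non-zero only over the half-cycles of positive gradient, i.e. over the regions that 'belong' to the positive-going edges.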

Relevance: 50.00%

Abstract:

Edge blur is an important perceptual cue, but how does the visual system encode the degree of blur at edges? Blur could be measured by the width of the luminance gradient profile, peak-to-trough separation in the 2nd derivative profile, or the ratio of 1st-to-3rd derivative magnitudes. In template models, the system would store a set of templates of different sizes and find which one best fits the 'signature' of the edge. The signature could be the luminance profile itself, or one of its spatial derivatives. I tested these possibilities in blur-matching experiments. In a 2AFC staircase procedure, observers adjusted the blur of Gaussian edges (30% contrast) to match the perceived blur of various non-Gaussian test edges. In experiment 1, test stimuli were mixtures of 2 Gaussian edges (eg 10 and 30 min of arc blur) at the same location, while in experiment 2, test stimuli were formed from a blurred edge sharpened to different extents by a compressive transformation. Predictions of the various models were tested against the blur-matching data, but only one model was strongly supported. This was the template model, in which the input signature is the 2nd derivative of the luminance profile, and the templates are applied to this signature at the zero-crossings. The templates are Gaussian derivative receptive fields that covary in width and length to form a self-similar set (ie same shape, different sizes). This naturally predicts that shorter edges should look sharper. As edge length gets shorter, responses of longer templates drop more than shorter ones, and so the response distribution shifts towards shorter (smaller) templates, signalling a sharper edge. The data confirmed this, including the scale-invariance implied by self-similarity, and a good fit was obtained from templates with a length-to-width ratio of about 1. The simultaneous analysis of edge blur and edge location may offer a new solution to the multiscale problem in edge detection.
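One of the candidate blur codes listed above, the ratio of 1st-to-3rd derivative magnitudes, can be checked directly: for a Gaussian edge of blur sigma the ratio at the edge centre equals sigma squared, independent of contrast. A finite-difference sketch with illustrative values:

import numpy as np

x = np.arange(-200.0, 200.0, 0.1)
centre = len(x) // 2                                 # x = 0, the edge centre
for contrast, sigma in [(0.3, 10.0), (0.05, 25.0)]:
    edge = contrast * np.cumsum(np.exp(-x**2 / (2.0 * sigma**2)))   # Gaussian edge
    d1 = np.gradient(edge, x)
    d3 = np.gradient(np.gradient(d1, x), x)
    # sqrt(|f'(0) / f'''(0)|) recovers sigma, regardless of the contrast factor.
    print(sigma, "->", float(np.sqrt(abs(d1[centre] / d3[centre]))))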

Relevance: 50.00%

Abstract:

This thesis presents a study of how edges are detected and encoded by the human visual system. The study begins with theoretical work on the development of a model of edge processing, and includes psychophysical experiments on humans and computer simulations of these experiments using the model. The first chapter reviews the literature on edge processing in biological and machine vision, and introduces the mathematical foundations of this area of research. The second chapter gives a formal presentation of a model of edge perception that detects edges and characterizes their blur, contrast and orientation, using Gaussian derivative templates. This model has previously been shown to accurately predict human performance in blur-matching tasks with several different types of edge profile. The model provides veridical estimates of the blur and contrast of edges that have a Gaussian integral profile. Since blur and contrast are independent parameters of Gaussian edges, the model predicts that varying one parameter should not affect perception of the other. Psychophysical experiments showed that this prediction is incorrect: reducing the contrast makes an edge look sharper; increasing the blur reduces the perceived contrast. Both of these effects can be explained by introducing a smoothed threshold to one of the processing stages of the model. It is shown that, with this modification, the model can predict the perceived contrast and blur of a number of edge profiles that differ markedly from the ideal Gaussian edge profiles on which the templates are based. With only a few exceptions, the results from all the experiments on blur and contrast perception can be explained reasonably well using one set of parameters for each subject. In the few cases where the model fails, possible extensions to the model are discussed.

Relevance: 50.00%

Abstract:

Purpose. To evaluate the influence of soft contact lens midperipheral shape profile and edge design on the apparent epithelial thickness and indentation of the ocular surface with lens movement. Methods. Four soft contact lens designs, comprising two different plano midperipheral shape profiles and two edge designs (chiseled and knife edge) in silicone-hydrogel material, were examined in 26 subjects aged 24.7 ± 4.6 years, each worn bilaterally in randomized order. Lens movement was imaged en face on insertion and at 2 and 4 hours with a high-speed, high-resolution camera, simultaneously with the cross-section of the contact lens edge's interaction with the ocular surface captured using optical coherence tomography (OCT) nasally, temporally, and inferiorly. Optical imaging distortions were individually corrected for by imaging the apparent distortion of a glass slide surface by the removed lens. Results. Apparent epithelial thickness varied with edge position (P < 0.001). When distortion was corrected for, epithelial indentation decreased with time after insertion (P = 0.010), changed after a blink (P < 0.001), and varied with position on the lens edge (P < 0.001), with the latter being affected by midperipheral lens shape profile and edge design. Horizontal and vertical lens movement did not change with time postinsertion. Vertical motion was affected by midperipheral lens shape profile (P < 0.001) and edge design (P < 0.001). Lens movement was associated with physiologic epithelium thickness for the lens midperipheral shape profiles and edge designs. Conclusions. Dynamic OCT coupled with high-resolution video demonstrated that soft contact lens movement and image-corrected ocular surface indentation were influenced by both lens edge design and midperipheral lens shape profiles. © 2013 The Association for Research in Vision and Ophthalmology, Inc.

Relevance: 50.00%

Abstract:

A view has emerged within manufacturing and service organizations that the operations management function can hold the key to achieving competitive edge. This has recently been emphasized by the demands for greater variety and higher quality, which must be set against a background of increasing cost of resources. As nations' trade barriers are progressively lowered and removed, so producers of goods and service products are becoming more exposed to competition that may come from virtually anywhere around the world. To simply survive in this climate many organizations have found it necessary to improve their manufacturing or service delivery systems. To become real "winners" some have adopted a strategic approach to operations and completely reviewed and restructured their approach to production system design and operations planning and control. The articles in this issue of the International Journal of Operations & Production Management have been selected to illustrate current thinking and practice in relation to this situation. They are all based on papers presented to the Sixth International Conference of the Operations Management Association-UK, which was held at Aston University in June 1991. The theme of the conference was "Achieving Competitive Edge" and authors from 15 countries around the world contributed to more than 80 presented papers. Within this special issue five topic areas are addressed, with two articles relating to each. The topics are: strategic management of operations; managing change; production system design; production control; and service operations. Under strategic management of operations, De Toni, Filippini and Forza propose a conceptual model which considers the performance of an operating system as a source of competitive advantage through the "operation value chain" of design, purchasing, production and distribution. Their model is set within the context of the tendency towards globalization. New's article is somewhat in contrast to the more fashionable literature on operations strategy. It challenges the validity of the current idea of "world-class manufacturing" and, instead, urges a reconsideration of the view that strategic "trade-offs" are necessary to achieve a competitive edge. The importance of managing change has for some time been recognized within the field of organization studies, but its relevance in operations management is now being realized. Berger considers the use of "organization design", "sociotechnical systems" and change strategies and contrasts these with the more recent idea of the "dialogue perspective". A tentative model is suggested to improve the analysis of different strategies in a situation-specific context. Neely and Wilson look at an essential prerequisite if change is to be effected in an efficient way, namely product goal congruence. Using a case study as its basis, their article suggests a method of measuring goal congruence as a means of identifying the extent to which key performance criteria relating to quality, time, cost and flexibility are understood within an organization. The two articles on production systems design represent important contributions to the debate on flexible production organization and autonomous group working. Rosander uses the results from cases to test the applicability of "flow groups" as the optimal way of organizing batch production. Schuring also examines cases to determine the reasons behind the adoption of "autonomous work groups" in The Netherlands and Sweden.
Both these contributions help to provide a greater understanding of the production philosophies which have emerged as alternatives to more conventional systems for intermittent and continuous production. The production control articles are both concerned with the concepts of "push" and "pull", which are the two broad approaches to material planning and control. Hirakawa, Hoshino and Katayama have developed a hybrid model, suitable for multistage manufacturing processes, which combines the benefits of both systems. They discuss the theoretical arguments in support of the system and illustrate its performance with numerical studies. Slack and Correa's concern is with the flexibility characteristics of push and pull material planning and control systems. They use the case of two plants using the different systems to compare their performance within a number of predefined flexibility types. The two final contributions on service operations are complementary. The article by Voss really relates to manufacturing but examines the application of service industry concepts within the UK manufacturing sector. His studies in a number of companies support the idea of the "service factory" and offer a new perspective for manufacturing. Harvey's contribution, by contrast, is concerned with the application of operations management principles in the delivery of professional services. Using the case of social-service provision in Canada, it demonstrates how concepts such as "just-in-time" can be used to improve service performance. The ten articles in this special issue of the journal address a wide range of issues and situations. Their common aspect is that, together, they demonstrate the extent to which competitiveness can be improved via the application of operations management concepts and techniques.

Relevance: 50.00%

Abstract:

Aim: Contrast sensitivity (CS) provides important information on visual function. This study aimed to assess differences in clinical expediency of the CS increment-matched new back-lit and original paper versions of the Melbourne Edge Test (MET) to determine the CS of the visually impaired. Methods: The back-lit and paper MET were administered to 75 visually impaired subjects (28-97 years). Two versions of the back-lit MET acetates were used to match the CS increments with the paper-based MET. Measures of CS were repeated after 30 min and again in the presence of a focal light source directed onto the MET. Visual acuity was measured with a Bailey-Lovie chart and subjects rated how much difficulty they had with face and vehicle recognition. Results: The back-lit MET gave a significantly higher CS than the paper-based version (14.2 ± 4.1 dB vs 11.3 ± 4.3 dB, p < 0.001). A significantly higher reading resulted with repetition of the paper-based MET (by 1.0 ± 1.7 dB, p < 0.001), but this was not evident with the back-lit MET (by 0.1 ± 1.4 dB, p = 0.53). The MET readings were increased by a focal light source, in both the back-lit (by 0.3 ± 0.81 dB, p < 0.01) and paper-based (by 1.2 ± 1.7 dB, p < 0.001) versions. CS as measured by the back-lit and paper-based versions of the MET was significantly correlated with patients' perceived ability to recognise faces (r = 0.71, r = 0.85 respectively; p < 0.001) and vehicles (r = 0.67, r = 0.82 respectively; p < 0.001), and with distance visual acuity (both r = -0.64; p < 0.001). Conclusions: The CS increment-matched back-lit MET gives higher CS values than the old paper-based test by approximately 3 dB and is more repeatable and less affected by external light sources. Clinically, the MET score provides information on patient difficulties with visual tasks, such as recognising faces. © 2005 The College of Optometrists.

Relevance: 50.00%

Abstract:

Background: The Melbourne Edge Test (MET) is a portable forced-choice edge detection contrast sensitivity (CS) test. The original externally illuminated paper test has been superseded by a backlit version. The aim of this study was to establish normative values for age and to assess change with visual impairment. Method: The MET was administered to 168 people with normal vision (18-93 years old) and 93 patients with visual impairment (39-97 years old). Distance visual acuity (VA) was measured with a logMAR chart. Results: In those eyes without disease, MET CS was stable until the age of 50 years (23.8 ± 0.7 dB), after which it decreased at a rate of ≈1.5 dB per decade. Compared with normative values, people with low vision were found to have significantly reduced CS, which could not be totally accounted for by reduced VA. Conclusions: The MET provides a quick and easy measure of CS, which highlights a reduction in visual function that may not be detectable using VA measurements. © 2004 The College of Optometrists.

Relevance: 50.00%

Abstract:

In this paper, a novel method for edge detection, an application of digital image processing, is developed. Fuzzy logic, a key concept of artificial intelligence, is used to implement fuzzy relative-pixel-value algorithms that find and highlight all the edges associated with an image by checking relative pixel values, thereby bridging concepts from digital image processing and artificial intelligence. The image is scanned exhaustively using a windowing technique, and each window is subjected to a set of fuzzy conditions that compare its pixel values with those of adjacent pixels to check the pixel-magnitude gradient within the window. After the fuzzy conditions have been tested, appropriate values are assigned to the pixels in the window under test, producing an image with all the associated edges highlighted.
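The abstract does not give the exact fuzzy rule set, so the sketch below is only an illustrative reconstruction of the general scheme: slide a 3x3 window over the image, convert each centre-to-neighbour difference into a fuzzy degree of "large gradient", and highlight the pixel when that degree exceeds a cut level. The membership function and the spread and cut values are assumptions.

import numpy as np

def fuzzy_edge_map(img, spread=20.0, cut=0.5):
    # Illustrative fuzzy relative-pixel-value edge detector (assumed rule set).
    h, w = img.shape
    edges = np.zeros((h, w), dtype=np.uint8)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = img[y - 1:y + 2, x - 1:x + 2].astype(float)
            diffs = np.abs(window - window[1, 1])          # relative pixel values
            membership = 1.0 - np.exp(-diffs / spread)     # fuzzy degree of 'large difference'
            if membership.max() > cut:
                edges[y, x] = 255                          # highlight edge pixel
    return edges

# Usage on a tiny synthetic image with a vertical step edge:
img = np.zeros((8, 8), dtype=np.uint8)
img[:, 4:] = 200
print(fuzzy_edge_map(img))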

Relevance: 50.00%

Abstract:

In this paper, we propose a new edge-based matching kernel for graphs by using discrete-time quantum walks. To this end, we commence by transforming a graph into a directed line graph. The reasons for using the line graph structure are twofold. First, for a graph, its directed line graph is a dual representation in which each vertex of the line graph represents a corresponding edge in the original graph. Second, we show that the discrete-time quantum walk can be seen as a walk on the line graph and the state space of the walk is the vertex set of the line graph, i.e., the edges of the original graph. As a result, the directed line graph provides an elegant way of developing a new edge-based matching kernel based on discrete-time quantum walks. For a pair of graphs, we compute the h-layer depth-based representation for each vertex of their directed line graphs by computing entropic signatures (computed from discrete-time quantum walks on the line graphs) on the family of K-layer expansion subgraphs rooted at the vertex, i.e., we compute the depth-based representations for edges of the original graphs through their directed line graphs. Based on the new representations, we define an edge-based matching method for the pair of graphs by aligning the h-layer depth-based representations computed through the directed line graphs. The new edge-based matching kernel is thus computed by counting the number of matched vertices identified by the matching method on the directed line graphs. Experiments on standard graph datasets demonstrate the effectiveness of our new kernel.
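The first step of the method, turning each undirected edge into two opposite arcs that become the vertices of a directed line graph, can be sketched as follows. networkx is used here only as a convenience, and the non-backtracking adjacency rule shown is the usual state space for discrete-time quantum walks; the paper's exact construction may differ in detail.

from itertools import permutations
import networkx as nx

def directed_line_graph(g):
    # Each undirected edge becomes two opposite arcs; these arcs are the vertices
    # of the directed line graph. Arc (u, v) connects to arc (v, w) whenever w != u,
    # i.e. the head of one arc meets the tail of the next without immediate reversal.
    arcs = [(u, v) for u, v in g.edges()] + [(v, u) for u, v in g.edges()]
    lg = nx.DiGraph()
    lg.add_nodes_from(arcs)
    for (u, v), (a, b) in permutations(arcs, 2):
        if v == a and b != u:
            lg.add_edge((u, v), (a, b))
    return lg

# Usage: a triangle has 3 edges -> 6 arcs -> 6 line-graph vertices, which form
# the state space a discrete-time quantum walk would move over.
triangle = nx.cycle_graph(3)
print(directed_line_graph(triangle).number_of_nodes())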

Relevance: 50.00%

Abstract:

For all their efforts to avoid a nuclear North Korea, the Clinton and Bush administrations failed to achieve this goal, the most important policy objective of the United States in its relations with North Korea for decades, mainly because of inconsistencies in U.S. policy. This dissertation seeks to explain why both administrations ultimately failed to prevent North Korea from going nuclear. It finds the origins of this failure in the implementation of different U.S. policy options toward North Korea during the Clinton and Bush administrations. To explain the lack of policy consistency, the dissertation investigates how the relations between the executive and the legislative branches and, more specifically, different government types—unified government and divided government—have affected U.S. policy toward North Korea. It particularly emphasizes the role of Congress and partisan politics in the making of U.S. policy toward North Korea. This study finds that divided government played a pivotal role. Partisan politics are also central to the explanation: politics did not stop at the water’s edge. A divided U.S. government produced more status quo policies toward North Korea than a unified U.S. government, while a unified government produced more active policies than a divided government. Moreover, a unified government with a Republican President produced more aggressive policies toward North Korea, whereas a unified government with a Democratic President produced more conciliatory policies. This study concludes that the different government types and intensified partisan politics were the main causes of the inconsistencies in the United States’ North Korea policy that led to a nuclear North Korea.