958 results for Edge Detection


Relevance:

60.00%

Publisher:

Abstract:

The principle of Zernike moments and of Zernike-moment-based sub-pixel image edge detection is introduced. To address the shortcomings of the Zernike-moment-based sub-pixel edge detection algorithm proposed by Ghosal, namely overly thick detected edges and low sub-pixel localization accuracy, an improved algorithm is proposed. The coefficients of the 7×7 Zernike moment templates are derived and a new edge-judgment criterion is put forward. The improved algorithm detects image edges well and achieves higher edge localization accuracy. Finally, three groups of experiments are designed; comparison of the experimental results with the Canny operator and Ghosal's algorithm demonstrates the superiority of the improved algorithm.
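The Ghosal-style sub-pixel localization that the improved algorithm builds on can be sketched as follows. This is a minimal illustration, not the paper's improved algorithm: the 7×7 masks are generated here by numerical sampling rather than taken from the template coefficients derived in the paper, and the step-edge formulas follow the standard Ghosal model.

```python
import numpy as np

N = 7  # window size used by the 7x7 templates

def zernike_masks(n=N):
    """Numerically sample the Zernike basis functions V11 = x + jy and
    V20 = 2(x^2 + y^2) - 1 on an n x n grid mapped onto the unit disk."""
    c = np.linspace(-1 + 1.0 / n, 1 - 1.0 / n, n)
    x, y = np.meshgrid(c, c)
    inside = (x**2 + y**2) <= 1.0
    return (x + 1j * y) * inside, (2 * (x**2 + y**2) - 1) * inside

V11, V20 = zernike_masks()

def subpixel_edge(window):
    """Estimate edge orientation and sub-pixel offset for one NxN window using
    the ideal step-edge model: l = Z20 / Z11', where Z11' is Z11 rotated onto
    the edge normal."""
    z11 = np.sum(window * np.conj(V11))
    z20 = np.sum(window * np.conj(V20))
    phi = np.angle(z11)                      # direction of the edge normal
    z11r = np.real(z11 * np.exp(-1j * phi))  # rotated (real-valued) Z11
    if abs(z11r) < 1e-9:
        return None                          # no clear step edge in this window
    l = np.real(z20) / z11r                  # distance from centre, unit-disk units
    if abs(l) >= 1.0:
        return None
    dx = (N / 2.0) * l * np.cos(phi)         # convert back to pixel offsets
    dy = (N / 2.0) * l * np.sin(phi)
    return dx, dy, phi

# toy usage: vertical step edge half a pixel to the right of the window centre
cols = np.arange(N) - N // 2
window = np.tile((cols > 0).astype(float), (N, 1))
print(subpixel_edge(window))
```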

Relevance:

60.00%

Publisher:

Abstract:

Based on the real-time monitoring system used in tailored-blank laser welding, image-processing techniques from mathematical morphology are applied and a new processing workflow model for weld-pool images is proposed. Three clear weld-pool edge images were obtained experimentally, verifying the correctness of the model. Comparison of the algorithm with classical algorithms demonstrates its applicability and robustness for tailored-blank laser welding.
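As a rough illustration of the morphological tools involved (not the paper's actual pipeline), the sketch below extracts a pool-boundary image with an opening for denoising followed by a morphological gradient (dilation minus erosion) in OpenCV; the file name and structuring-element size are placeholders.

```python
import cv2

# hypothetical input frame from the welding camera
img = cv2.imread("weld_pool.png", cv2.IMREAD_GRAYSCALE)

kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
denoised = cv2.morphologyEx(img, cv2.MORPH_OPEN, kernel)           # suppress spatter/noise
gradient = cv2.morphologyEx(denoised, cv2.MORPH_GRADIENT, kernel)  # dilation - erosion
_, edge = cv2.threshold(gradient, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

cv2.imwrite("weld_pool_edge.png", edge)
```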

Relevance:

60.00%

Publisher:

Abstract:

By improving the Pal-King fuzzy edge detection algorithm, a fast fuzzy edge detection algorithm is proposed. The fast algorithm not only simplifies the complicated G and G⁻¹ transforms of the Pal-King algorithm, but also determines experimentally the optimal membership threshold for the Tr transform, greatly reducing the number of iterations. The experimental results show that the fast algorithm improves the efficiency of the Pal-King algorithm and has a strong ability to detect blurred and fine edges. With its superior performance, it is a highly practical and efficient image-processing algorithm.
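A rough sketch of the Pal-King-style processing chain is given below, assuming the usual membership transform G and intensification operator Tr; the Fd/Fe constants, the 0.5 crossover, the iteration count and the simple local max-min edge measure are all assumptions for illustration, not the optimised threshold or edge-judgment rule of the paper.

```python
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter

def fuzzy_edges(img, fd=50.0, fe=1.0, iterations=2):
    img = img.astype(float)
    # G: map grey levels to fuzzy membership values in [0, 1]
    mu = (1.0 + (img.max() - img) / fd) ** (-fe)
    # Tr: contrast intensification about the (assumed) crossover point 0.5
    for _ in range(iterations):
        mu = np.where(mu <= 0.5, 2.0 * mu**2, 1.0 - 2.0 * (1.0 - mu) ** 2)
    # simple edge measure: local max minus local min of the enhanced membership plane
    return maximum_filter(mu, size=3) - minimum_filter(mu, size=3)
```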

Relevance:

60.00%

Publisher:

Abstract:

This paper studies how to invert seismic data and predict reservoirs more effectively under the complicated sedimentary environments, complex rock-physics relationships and sparse well control of China's offshore areas. Based on rock physics and amplitude-preserving seismic processing, guided by the depositional system and the distribution laws of hydrocarbon reservoirs, and in light of the features of currently applied seismic inversion methods, a series of methods was studied: a joint inversion technology for complex geological conditions is presented, and a process and method system for reservoir prediction is established. The method consists of four key parts. 1) A new concept called generalized wave impedance is presented and a corresponding inversion process is established, providing a technical means for joint lithologic and petrophysical inversion under complex geological conditions. 2) For high-resolution nonlinear joint inversion of seismic wave impedance, the method uses a multistage nonlinear seismic convolution model rather than the conventional single-stage Robinson seismic convolution model, and uses a Caianiello neural network to implement the inversion. Based on the definition of multistage positive and negative wavelets, it adopts both deterministic and statistical physical mechanisms, and both direct and indirect inversion. It integrates geological knowledge, rock-physics theory, well data and seismic data, and improves the resolution and anti-noise ability of wave-impedance inversion. 3) For high-resolution nonlinear joint inversion of reservoir physical properties, the method uses a nonlinear rock-physics model that introduces a convolution model into the relationship between wave impedance and porosity/clay content. Through multistage decomposition, it handles the large- and small-scale components of the impedance-porosity/clay relationships separately to achieve more accurate rock-physics relationships. By means of bidirectional edge detection with wavelets, it uses the Caianiello neural network to perform statistical inversion combining model-based and deconvolution-based methods. The resulting joint inversion scheme integrates seismic data, well data, rock-physics theory and geological knowledge for the estimation of high-resolution petrophysical parameters. 4) For risk assessment of lateral reservoir prediction, the method integrates seismic lithology identification, petrophysical prediction, multi-scale decomposition of petrophysical parameters, P- and H-spectra, and the match between seismic, well-log and geological data, and can thus describe the complexity of the medium well. Application of the joint inversion of seismic data for lithologic and petrophysical parameters in several selected target areas yielded high-resolution lithologic and petrophysical sections (impedance, porosity, clay) showing that the joint inversion can significantly improve the spatial description of reservoirs in data sets involving complex deposits, which demonstrates the validity and practicality of the method.
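For orientation only (this is not the generalized-wave-impedance or Caianiello-network inversion itself), the sketch below shows the seismic convolution forward model that such impedance inversion schemes invert: reflection coefficients computed from an impedance log and convolved with a Ricker wavelet. The layer model, sampling interval and dominant frequency are assumed values.

```python
import numpy as np

def ricker(f0=30.0, dt=0.002, half_len=0.1):
    """Ricker wavelet with dominant frequency f0 (Hz), sampled every dt seconds."""
    t = np.arange(-half_len, half_len + dt, dt)
    return (1.0 - 2.0 * (np.pi * f0 * t) ** 2) * np.exp(-(np.pi * f0 * t) ** 2)

def synthetic_trace(impedance, wavelet):
    # normal-incidence reflection coefficients from the acoustic impedance log
    r = (impedance[1:] - impedance[:-1]) / (impedance[1:] + impedance[:-1])
    return np.convolve(r, wavelet, mode="same")

# toy blocky impedance model: three layers
imp = np.concatenate([np.full(100, 4.0e6), np.full(80, 6.5e6), np.full(120, 5.0e6)])
trace = synthetic_trace(imp, ricker())
```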

Relevance:

60.00%

Publisher:

Abstract:

In the prediction of complex reservoirs with highly heterogeneous lithologic and petrophysical properties, inexact data (e.g., information-overlapping, information-incomplete and noise-contaminated data) and ambiguous physical relationships leave inversion results suffering from non-uniqueness, instability and uncertainty. Reservoir prediction technologies based on linear assumptions are therefore unsuited to these complex areas. Starting from the limitations of conventional technologies, the thesis conducts a series of researches on kernel problems such as inversion from band-limited seismic data, inversion resolution, inversion stability and ambiguous physical relationships. The thesis combines deterministic, statistical and nonlinear geophysical theories, and integrates geological information, rock physics, well data and seismic data to predict lithologic and petrophysical parameters. The joint inversion technology is suited to areas with complex depositional environments and complex rock-physics relationships. Combining the nonlinear multistage Robinson seismic convolution model with the unconventional Caianiello neural network, the thesis unifies deterministic and statistical inversion. Through the Robinson seismic convolution model and a nonlinear self-affine transform, deterministic inversion is implemented by establishing a deterministic relationship between seismic impedance and seismic responses, which ensures inversion reliability. Furthermore, through multistage seismic wavelets (MSW)/seismic inverse wavelets (MSIW) and the Caianiello neural network, statistical inversion is implemented by establishing a statistical relationship between seismic impedance and seismic responses, which ensures anti-noise ability. Direct and indirect inversion modes are used alternately to estimate and revise the impedance: the direct inversion result serves as the initial value for the indirect inversion, and the final high-resolution impedance profile is obtained by indirect inversion, which largely enhances inversion precision. A nonlinear rock-physics convolution model is adopted to establish the relationship between impedance and porosity/clay content; through multistage decomposition and bidirectional edge wavelet detection, it can depict more complex rock-physics relationships. Moreover, the Caianiello neural network is used to combine deterministic inversion, statistical inversion and nonlinear theory. Finally, by the combined application of direct inversion based on vertical edge-detection wavelets and indirect inversion based on lateral edge-detection wavelets, geological information, well data and seismic impedance are integrated to estimate high-resolution petrophysical parameters (porosity/clay content). These inversion results can be used for reservoir prediction and characterization. Multi-well constraints and separate-frequency inversion modes are adopted in the thesis. Analysis of the resulting lithologic and petrophysical sections shows that the low-frequency sections reflect the macro structure of the strata, while the middle/high-frequency sections reflect its detailed structure. Therefore, the high-resolution sections can be used to recognize sand-body boundaries and to predict hydrocarbon zones.
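As a toy illustration of handling the large- and small-scale components of the impedance-porosity relationship separately (not the thesis's Caianiello-network scheme), the sketch below splits an impedance log into a smooth trend plus residual and fits a simple linear relation per component; the synthetic logs and window length are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
depth = np.arange(500)
impedance = 5e6 + 2e3 * depth + 2e5 * np.sin(depth / 15.0) + 5e4 * rng.standard_normal(500)
porosity = 0.35 - 2.5e-8 * impedance + 0.01 * rng.standard_normal(500)

def moving_average(x, win=51):
    return np.convolve(x, np.ones(win) / win, mode="same")

imp_trend = moving_average(impedance)   # large-scale (low-frequency) component
imp_detail = impedance - imp_trend      # small-scale (high-frequency) component
por_trend = moving_average(porosity)
por_detail = porosity - por_trend

# separate linear relationships fitted per scale component
a1, b1 = np.polyfit(imp_trend, por_trend, 1)
a2, b2 = np.polyfit(imp_detail, por_detail, 1)
porosity_pred = (a1 * imp_trend + b1) + (a2 * imp_detail + b2)
```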

Relevance:

60.00%

Publisher:

Abstract:

The problem of detecting intensity changes in images is canonical in vision. Edge detection operators are typically designed to optimally estimate first or second derivative over some (usually small) support. Other criteria such as output signal-to-noise ratio or bandwidth have also been argued for. This thesis is an attempt to formulate a set of edge detection criteria that capture as directly as possible the desirable properties of an edge operator. Variational techniques are used to find a solution over the space of all linear shift invariant operators. The first criterion is that the detector have low probability of error, i.e., failing to mark edges or falsely marking non-edges. The second is that the marked points should be as close as possible to the centre of the true edge. The third criterion is that there should be low probability of more than one response to a single edge. The technique is used to find optimal operators for step edges and for extended impulse profiles (ridges or valleys in two dimensions). The extension of the one-dimensional operators to two dimensions is then discussed. The result is a set of operators of varying width, length and orientation. The problem of combining these outputs into a single description is discussed, and a set of heuristics for the integration is given.
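The operator that results from these criteria for a step edge is well approximated by the first derivative of a Gaussian; a minimal sketch of that approximation is given below, omitting the subsequent thinning and integration stages, with sigma and the threshold chosen arbitrarily here.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gradient_magnitude(img, sigma=2.0):
    gx = gaussian_filter(img.astype(float), sigma, order=(0, 1))  # d/dx of a Gaussian
    gy = gaussian_filter(img.astype(float), sigma, order=(1, 0))  # d/dy of a Gaussian
    return np.hypot(gx, gy)

def mark_edges(img, sigma=2.0, thresh=0.1):
    mag = gradient_magnitude(img, sigma)
    return mag > thresh * mag.max()   # crude global threshold on gradient magnitude
```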

Relevance:

60.00%

Publisher:

Abstract:

A digital differentiator computes the derivative of an input signal. This work presents first-degree and second-degree differentiators designed as both infinite-impulse-response (IIR) and finite-impulse-response (FIR) filters. The proposed differentiators have low-pass magnitude response characteristics, thereby rejecting noise frequencies above the cut-off frequency. Both steady-state frequency-domain characteristics and time-domain analyses are given for the proposed differentiators, and it is shown that they perform well when compared to previously proposed filters. When considering the time-domain characteristics of the differentiators, the processing of quantized signals proved especially enlightening in terms of the filtering effects of the proposed differentiators. The coefficients of the proposed differentiators are obtained using an optimization algorithm whose objectives include the magnitude and phase response; the low-pass characteristic is achieved by minimizing the filter variance. The low-pass differentiators designed show steep roll-off as well as highly accurate magnitude response in the pass-band. Although it has a history of over three hundred years, the design of fractional differentiators has become a 'hot topic' in recent decades. One challenging problem in this area is that there are many different definitions of the fractional model, such as the Riemann-Liouville and Caputo definitions. Through the use of a feedback structure based on the Riemann-Liouville definition, it is shown that the performance of the fractional differentiator can be improved in both the frequency domain and the time domain. Two applications based on the proposed differentiators are described in the thesis: the first involves the application of second-degree differentiators in the estimation of the frequency components of a power system, and the second concerns an image-processing application, edge detection.
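None of the thesis's optimised IIR/FIR designs is reproduced here; instead, a standard low-pass FIR differentiator (a Savitzky-Golay derivative filter) is sketched simply to illustrate differentiation combined with noise rejection. The window length, polynomial order, sampling step and test signal are assumed values.

```python
import numpy as np
from scipy.signal import savgol_coeffs, savgol_filter

dt = 1e-3                                        # sampling interval (assumed)
h = savgol_coeffs(21, 3, deriv=1, delta=dt)      # 21-tap FIR derivative taps

t = np.arange(0, 1, dt)
x = np.sin(2 * np.pi * 5 * t) + 0.05 * np.random.randn(t.size)  # noisy 5 Hz tone
dx = savgol_filter(x, 21, 3, deriv=1, delta=dt)  # smoothed estimate of dx/dt
```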

Relevance:

60.00%

Publisher:

Abstract:

Ecohydrodynamics investigates the hydrodynamic constraints on ecosystems across different temporal and spatial scales. Ecohydrodynamics plays a pivotal role in the structure and functioning of marine ecosystems; however, the lack of integrated complex flow models for deep-water ecosystems beyond the coastal zone prevents further synthesis in these settings. We present a hydrodynamic model for one of Earth's most biologically diverse deep-water ecosystems, cold-water coral reefs. The Mingulay Reef Complex (western Scotland) is an inshore seascape of cold-water coral reefs formed by the scleractinian coral Lophelia pertusa. We applied single-image edge detection and composite front maps using satellite remote sensing to detect oceanographic fronts and peaks of chlorophyll a values that likely affect food supply to corals and other suspension-feeding fauna. We also present a high-resolution 3D ocean model to incorporate salient aspects of the regional and local oceanography. Model validation using in situ current speed, direction and sea elevation data confirmed the model's realistic representation of spatial and temporal aspects of circulation at the reef complex including a tidally driven current regime, eddies, and downwelling phenomena. This novel combination of 3D hydrodynamic modelling and remote sensing in deep-water ecosystems improves our understanding of the temporal and spatial scales of ecological processes occurring in marine systems. The modelled information has been integrated into a 3D GIS, providing a user interface for visualization and interrogation of results that allows wider ecological application of the model and that can provide valuable input for marine biodiversity and conservation applications.
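As a crude stand-in for the single-image edge detection applied to the satellite fields (not the algorithm actually used in the study), the sketch below flags frontal zones as the strongest gradients in a gridded chlorophyll-a or SST field; the input array and percentile threshold are assumptions.

```python
import numpy as np
from scipy.ndimage import sobel

def front_mask(field, percentile=95):
    """Return a boolean mask of candidate fronts in a 2-D gridded field."""
    gx, gy = sobel(field, axis=1), sobel(field, axis=0)
    grad = np.hypot(gx, gy)
    return grad >= np.percentile(grad, percentile)  # strongest gradients = candidate fronts
```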

Relevance:

60.00%

Publisher:

Abstract:

Severity of left ventricular hypertrophy (LVH) correlates with elevated plasma levels of neuropeptide Y (NPY) in hypertension. NPY elicits positive and negative contractile effects in cardiomyocytes through Y(1) and Y(2) receptors, respectively. This study tested the hypothesis that NPY receptor-mediated contraction is altered during progression of LVH. Ventricular cardiomyocytes were isolated from spontaneously hypertensive rats (SHRs) pre-LVH (12 weeks), during development (16 weeks), and at established LVH (20 weeks) and age-matched normotensive Wistar Kyoto (WKY) rats. Electrically stimulated (60 V, 0.5 Hz) cell shortening was measured using edge detection and receptor expression determined at mRNA and protein level. The NPY and Y(1) receptor-selective agonist, Leu(31)Pro(34)NPY, stimulated increases in contractile amplitude, which were abolished by the Y(1) receptor-selective antagonist, BIBP3226 [R-N(2)-(diphenyl-acetyl)-N-(4-hydroxyphenyl)methyl-argininamide)], confirming Y(1) receptor involvement. Potencies of both agonists were enhanced in SHR cardiomyocytes at 20 weeks (2300- and 380-fold versus controls). Maximal responses were not attenuated. BIBP3226 unmasked a negative contraction effect of NPY, elicited over the concentration range (10(-12) to 3 x 10(-9) M) in which NPY and PYY(3-36) attenuated the positive contraction effects of isoproterenol, the potencies of which were increased in cardiomyocytes from SHRs at 20 weeks (175- and 145-fold versus controls); maximal responses were not altered. Expression of NPY-Y(1) and NPY-Y(2) receptor mRNAs was decreased (55 and 69%) in left ventricular cardiomyocytes from 20-week-old SHRs versus age-matched WKY rats; parallel decreases (32 and 80%) were observed at protein level. Enhancement of NPY potency, producing (opposing) contractile effects on cardiomyocytes together with unchanged maximal response despite reduced receptor number, enables NPY to contribute to regulating cardiac performance during compensatory LVH.

Relevance:

60.00%

Publisher:

Abstract:

For modern FPGAs, implementation of memory-intensive processing applications such as high-end image and video processing systems necessitates manual design of complex multi-level memory hierarchies incorporating off-chip DDR and on-chip BRAM and LUT RAM. In fact, automated synthesis of multi-level memory hierarchies is an open problem facing high-level synthesis technologies for FPGA devices. In this paper we describe the first automated solution to this problem. By exploiting a novel dataflow application modelling dialect, known as Valved Dataflow, we show for the first time not only that such architectures can be automatically derived, but also that the resulting implementations support real-time processing for current image processing application standards such as H.264. We demonstrate the viability of this approach by reporting the performance and cost of hierarchies automatically generated for Motion Estimation, Matrix Multiplication and Sobel Edge Detection applications on a Virtex-5 FPGA.
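To make the memory problem concrete, the sketch below emulates in software the kind of two-line-buffer, 3x3-window streaming structure that such an FPGA memory hierarchy typically realises for Sobel edge detection; it illustrates the access pattern only and is not output of the tool described in the paper.

```python
import numpy as np
from collections import deque

KX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])  # Sobel horizontal kernel
KY = KX.T                                             # Sobel vertical kernel

def sobel_stream(frame):
    h, w = frame.shape
    rows = deque(maxlen=3)                   # models two on-chip line buffers + incoming row
    out = np.zeros_like(frame, dtype=float)
    for y in range(h):
        rows.append(frame[y].astype(float))  # stream one row in per step
        if len(rows) < 3:
            continue
        win = np.vstack(list(rows))          # the three most recent rows
        for x in range(1, w - 1):
            patch = win[:, x - 1:x + 2]
            out[y - 1, x] = np.hypot((patch * KX).sum(), (patch * KY).sum())
    return out
```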

Relevance:

60.00%

Publisher:

Abstract:

Realising high performance image and signal processing applications on modern FPGA presents a challenging implementation problem due to the large data frames streaming through these systems. Specifically, to meet the high bandwidth and data storage demands of these applications, complex hierarchical memory architectures must be manually specified at the Register Transfer Level (RTL). Automated approaches which convert high-level operation descriptions, for instance in the form of C programs, to an FPGA architecture, are unable to automatically realise such architectures. This paper presents a solution to this problem. It presents a compiler to automatically derive such memory architectures from a C program. By transforming the input C program to a unique dataflow modelling dialect, known as Valved Dataflow (VDF), a mapping and synthesis approach developed for this dialect can be exploited to automatically create high performance image and video processing architectures. Memory intensive C kernels for Motion Estimation (CIF Frames at 30 fps), Matrix Multiplication (128x128 @ 500 iter/sec) and Sobel Edge Detection (720p @ 30 fps), which are unrealisable by current state-of-the-art C-based synthesis tools, are automatically derived from a C description of the algorithm.
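As an illustration of why these kernels are memory intensive (this is plain software, not code handled by the compiler described here), the sketch below implements a full-search block-matching motion-estimation kernel, whose heavily overlapping reference-frame reads are exactly what a derived memory hierarchy must buffer; the block and search-range sizes are assumed.

```python
import numpy as np

def full_search(cur, ref, bx, by, block=16, search=8):
    """Best motion vector for the block whose top-left corner is (bx, by)."""
    h, w = cur.shape
    target = cur[by:by + block, bx:bx + block].astype(int)
    best, best_mv = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + block > h or x + block > w:
                continue                      # candidate block falls outside the frame
            sad = np.abs(target - ref[y:y + block, x:x + block].astype(int)).sum()
            if best is None or sad < best:    # keep the lowest sum of absolute differences
                best, best_mv = sad, (dx, dy)
    return best_mv, best
```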

Relevance:

60.00%

Publisher:

Abstract:

It is an exciting era for molecular computation because molecular logic gates are being pushed in new directions. The use of sulfur rather than the commonplace nitrogen as the key receptor atom in metal ion sensors is one of these directions; plant cells coming within the jurisdiction of fluorescent molecular thermometers is another, combining photochromism with voltammetry for molecular electronics is yet another. Two-input logic gates benefit from old ideas such as rectifying bilayer electrodes, cyclodextrin-enhanced room-temperature phosphorescence, steric hindrance, the polymerase chain reaction, charge transfer absorption of donor–acceptor complexes and lectin–glycocluster interactions. Furthermore, the concept of photo-uncaging enables rational ways of concatenating logic gates. Computational concepts are also applied to potential cancer theranostics and to the selective monitoring of neurotransmitters in situ. Higher numbers of inputs are also accommodated with the concept of functional integration of gates, where complex input–output patterns are sought out and analysed. Molecular emulation of computational components such as demultiplexers and parity generators/checkers are achieved in related ways. Complexity of another order is tackled with molecular edge detection routines.

Relevance:

60.00%

Publisher:

Abstract:

Field programmable gate array devices boast abundant resources with which custom accelerator components for signal, image and data processing may be realised; however, realising high-performance, low-cost accelerators currently demands manual register transfer level design. Software-programmable 'soft' processors have been proposed as a way to reduce this design burden, but they are unable to support performance and cost comparable to custom circuits. This paper proposes a new soft processing approach for FPGA which promises to overcome this barrier. A high-performance, fine-grained streaming processor, known as a Streaming Accelerator Element, is proposed which realises accelerators as large-scale custom multicore networks. By adopting a streaming execution approach with advanced program control and memory addressing capabilities, typical program inefficiencies can be almost completely eliminated to enable performance and cost which are unprecedented amongst software-programmable solutions. When used to realise accelerators for fast Fourier transform, motion estimation, matrix multiplication and Sobel edge detection, the proposed architecture is shown to enable real-time operation with performance and cost comparable to hand-crafted custom circuit accelerators and up to two orders of magnitude beyond existing soft processors.

Relevance:

60.00%

Publisher:

Abstract:

Realising memory-intensive applications such as image and video processing on FPGA requires the creation of complex, multi-level memory hierarchies to achieve real-time performance; however, commercial High Level Synthesis tools are unable to automatically derive such structures and hence cannot meet the demanding bandwidth and capacity constraints of these applications. Current approaches to solving this problem can derive only single-level memory structures or very deep, highly inefficient hierarchies, leading in either case to high implementation cost, low performance, or both. This paper presents an enhancement to an existing MC-HLS synthesis approach which solves this problem; it exploits and eliminates data duplication at multiple levels of the generated hierarchy, leading to a reduction in the number of levels and ultimately to higher-performance, lower-cost implementations. When applied to the synthesis of C-based Motion Estimation, Matrix Multiplication and Sobel Edge Detection applications, this enables reductions in Block RAM and Look Up Table (LUT) cost of up to 25%, whilst simultaneously increasing throughput.

Relevance:

60.00%

Publisher:

Abstract:

In this paper we present an improved model for line and edge detection in cortical area V1. This model is based on responses of simple and complex cells, and it is multi-scale with no free parameters. We illustrate the use of the multi-scale line/edge representation in different processes: visual reconstruction or brightness perception, automatic scale selection and object segregation. A two-level object categorization scenario is tested in which pre-categorization is based on coarse scales only and final categorization on coarse plus fine scales. We also present a multi-scale object and face recognition model. Processing schemes are discussed in the framework of a complete cortical architecture. The fact that brightness perception and object recognition may be based on the same symbolic image representation is an indication that the entire (visual) cortex is involved in consciousness.
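A minimal sketch of the standard simple/complex-cell front end such a model builds on is given below (not the paper's multi-scale, parameter-free model): quadrature-pair Gabor filters stand in for even/odd simple cells and the complex-cell response is their energy; the scale, filter size and number of orientations are assumed values.

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor_pair(sigma, theta, wavelength, size=31):
    """Even/odd Gabor filters (model simple cells) at one orientation and scale."""
    r = np.arange(size) - size // 2
    x, y = np.meshgrid(r, r)
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    env = np.exp(-(xr**2 + yr**2) / (2 * sigma**2))
    even = env * np.cos(2 * np.pi * xr / wavelength)   # even (cosine) simple cell
    odd = env * np.sin(2 * np.pi * xr / wavelength)    # odd (sine) simple cell
    return even, odd

def complex_cell_energy(img, sigma=4.0, wavelength=10.0, n_orient=8):
    """Complex-cell (energy) responses: magnitude of the quadrature pair, per orientation."""
    img = img.astype(float)
    energies = []
    for k in range(n_orient):
        even, odd = gabor_pair(sigma, k * np.pi / n_orient, wavelength)
        re = fftconvolve(img, even, mode="same")
        im = fftconvolve(img, odd, mode="same")
        energies.append(np.hypot(re, im))
    return np.stack(energies)   # one energy map per orientation
```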