981 results for Exponential smoothing


Relevance:

10.00%

Publisher:

Abstract:

Building on the α-exponent technique proposed in [1], this paper further proposes the line superposition-truncation method, the truncation-superposition method, and the superposition-truncation-superposition method. Applying these methods, single equations for character-shaped figures are established, and it is shown that an approximate single equation can be written for every commonly used character. Finally, applications of the superposition-truncation-superposition method to general engineering problems are pointed out.

Relevance:

10.00%

Publisher:

Abstract:

This paper proposes the α-exponent technique for establishing single equations of various character-shaped figures, and shows that approximate single equations can be written for all commonly used characters. Finally, the agreement between the data and the equations is verified numerically, and several points are discussed.

Relevance:

10.00%

Publisher:

Abstract:

An open CNC system based on a PC and a multi-axis motion controller is an ideal open CNC architecture. This paper introduces the structure of an open CNC system based on PMAC, including PMAC's interpolation, position control, and servo functions. A CNC system was built on a hardware platform consisting of PMAC and a PC, and its hardware composition and software design structure are analyzed. From the software-design perspective, the functions and role of the PTALK control are introduced, the software composition of the CNC system is described in detail, and a user-friendly interface was designed, which is of practical significance in real applications.

Relevance:

10.00%

Publisher:

Abstract:

Theoretical research on the prediction of stratigraphic filtering under complex conditions is carried out, and three key techniques are put forward in this dissertation. Theoretical aspects: prediction equations are formulated both for slant incidence in horizontally layered media and for laterally varying velocity media. Solving these equations yields the linear prediction operator of the overlying layers and, from it, the corresponding reflection/transmission operators. The properties of the linear prediction operator are elucidated, and an event model for generalized Goupillaud layers is put forward. Key technique 1: spectral factorization is introduced to solve the prediction equations under complex conditions, and numerical results are illustrated. Key technique 2: so-called large-step wavefield extrapolation of one-way waves under laterally varying velocity is studied. Based on Lie-algebraic integration and structure-preserving algorithms, a large-step depth-extrapolation scheme is set forth. In this method, the complex phase of the symbol of the wavefield extrapolation operator is expressed as a linear combination of wavenumbers, with coefficients given by integrals over depth of the interval velocity and its derivatives. The exponential transform of the complex phase is implemented through phase shifting, BCH splitting, and orthogonal polynomial expansion. Numerical tests show that the large-step scheme offers many advantages: low accumulated error, low computational cost, good adaptability to lateral velocity variation, and little dispersion. Key technique 3: using the large-step extrapolation scheme and the idea of local harmonic decomposition, the technique for generating angle gathers in the 2D case is generalized to 3D, solving the problems of generating and storing 3D prestack angle gathers. A shot-domain parallel scheme is adopted, in which the worker nodes compute trigonometric expansion coefficients while the master node collects them to produce the desired angle gathers. On the theoretical side, effort has also been devoted to probing the uncertainties within macro-dynamic procedures.
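The spectral factorization named as key technique 1 is not spelled out in the abstract; as an illustrative sketch (my reconstruction, not the dissertation's code), the classical Kolmogorov cepstral method recovers a minimum-phase factor from a given power spectrum:

```python
import numpy as np

def minimum_phase_factor(power_spectrum):
    """Kolmogorov (cepstral) spectral factorization: for a strictly
    positive power spectrum sampled on the FFT grid, return the real
    minimum-phase wavelet w with |FFT(w)|**2 == power_spectrum."""
    n = len(power_spectrum)
    cep = np.fft.ifft(0.5 * np.log(power_spectrum))  # cepstrum of log|W|
    fold = np.zeros(n, dtype=complex)                # keep the causal part
    fold[0] = cep[0]
    fold[1:n // 2] = 2.0 * cep[1:n // 2]
    fold[n // 2] = cep[n // 2]
    return np.real(np.fft.ifft(np.exp(np.fft.fft(fold))))

# check on a known minimum-phase wavelet (1, 0.5)
n = 256
w_true = np.zeros(n)
w_true[0], w_true[1] = 1.0, 0.5
w = minimum_phase_factor(np.abs(np.fft.fft(w_true)) ** 2)
```

Any other minimum-phase factorization (for example Wilson-Burg iteration) would serve the same role in solving the prediction equations.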

Relevance:

10.00%

Publisher:

Abstract:

The Second Round of Oil & Gas Exploration needs more precise imaging methods, velocity-depth models, and geometric descriptions of complicated geological masses. Prestack time migration in inhomogeneous media is the technical basis for velocity analysis, prestack time migration from rugged surfaces, angle gathers, and multi-domain noise suppression. To realize this technique, several critical technical problems need to be solved, such as parallel computation, velocity algorithms on non-uniform grids, and visualization; the key is the organic combination of migration theory and computational geometry. Starting from the technical problems of 3-D prestack time migration in inhomogeneous media and the requirements of non-uniform grids, parallel processing, and visualization, this thesis combines integral migration theory with computational geometry and systematically studies three aspects: the computational infrastructure for Green-function traveltimes with laterally varying velocity on non-uniform grids, parallel computation of Kirchhoff integral migration, and 3-D visualization. The results provide powerful technical support for the implementation of prestack time migration and a convenient computational infrastructure for wavenumber-domain simulation in inhomogeneous media. The main results are as follows: 1. The symbol of the one-way-wave Lie-algebra integral, its phase, and Green-function traveltime expressions were analyzed, and simple 2-D time-domain expressions were given for inhomogeneous media by using the exponential map of pseudo-differential operators and structure-preserving Lie group algorithms. For the asymmetric traveltime equation containing lateral derivatives in 3-D, a computational infrastructure of five parts was put forward: derivatives, commutators, the Lie-algebra root tree, the exponential-map root tree, and traveltime coefficients. 2. By studying the infrastructure for computing asymmetric 3-D traveltimes based on lateral velocity derivatives and combining it with computational geometry, a method was obtained for building a velocity library and interpolating on it by triangulation, which meets the traveltime requirements of parallel time migration and velocity estimation. 3. Combining the triangulated velocity library with computational geometry, a structure was built that is convenient for computing horizontal derivatives, commutators, and vertical integrals. Furthermore, a recursive algorithm for the Lie-algebra integral and exponential-map root trees (the Magnus expansion in mathematics) was built, and the asymmetric traveltime algorithm based on lateral derivatives was realized. 4. Based on graph theory and computational geometry, a minimum-cycle method was proposed for decomposing an area into polygonal blocks, which can serve as a topological representation of migration results and provides a practical approach to block representation and the study of migration interpretation results. 5. Based on the MPI library, a parallel migration algorithm over traces in arbitrary order was put into practice, using the laterally varying traveltime calculation and the Kirchhoff integral method. 6. Visualization of geological and seismic data was studied with OpenGL and Open Inventor on the basis of computational geometry, and a 3-D visualization system for seismic imaging data was designed.
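The triangulated velocity library of result 2 can be imitated with off-the-shelf computational geometry. The sketch below uses hypothetical data, with scipy's Delaunay-based interpolator standing in for the thesis's own structure, to show the build-once, query-many pattern that parallel Kirchhoff traveltime computation relies on:

```python
import numpy as np
from scipy.interpolate import LinearNDInterpolator

# Hypothetical scattered velocity control points (x, z) -> v; the thesis
# builds its own triangulated structure, scipy stands in for it here.
rng = np.random.default_rng(0)
pts = rng.uniform(0.0, 1000.0, size=(200, 2))      # (x, z) in metres
vel = 1500.0 + 0.6 * pts[:, 1] + 0.05 * pts[:, 0]  # a linear v(x, z)

# triangulate once ("build the velocity library") ...
library = LinearNDInterpolator(pts, vel)
# ... then interpolate with barycentric weights at any traveltime grid
# point inside the convex hull; linear fields are reproduced exactly
v = float(library(500.0, 400.0))                   # 1765.0
```

Because the triangulation is computed once and shared, each parallel migration worker can query traveltimes independently without rebuilding it.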

Relevance:

10.00%

Publisher:

Abstract:

Unsaturated expansive soil is a hot and difficult topic in soil mechanics at home and abroad. Expansive soil in China is among the most widely distributed and largest in area in the world, and expansive-soil disasters consequently occur continually. Soil mechanics tests, monitoring, numerical simulation, and engineering practice are used to research the swell-shrink characteristics, boundary-surface strength characteristics, and unsaturated strength characteristics of Mengzi expansive soil. The seepage and stability of expansive-soil slopes with fissures are analyzed and, based on the proposed disaster mechanics of such slopes, two new techniques are put forward for use in expansive-soil areas; the reinforcement technique for road embankments is also optimized. In connection with the engineering-geology study of Mengzi expansive soil, its mineral composition, chemical composition, specific surface area and cation content, soluble salts and cementation, microscopic fabric, origin, and depth of atmospheric influence are analyzed to explain the intrinsic cause and essence of its swelling and shrinkage. The relation between swell-shrink behavior and the initial state, namely initial water content, initial dry density, and initial pressure, can be used for construction control; a dose-response model fits this relation well, based on ternary regression analysis, which is of great significance for expansive-soil engineering in saline or alkaline areas. The mechanics of expansive soil under CD, CU, and GCU conditions is studied through boundary-surface theory to explain the remarkable effects of consolidation pressure, initial dry density, initial water content, shear rate, drainage, and reinforcement on the boundary-surface strength characteristics. The weakly hardening stress-strain curves can be fitted with a hyperbolic model and the weakly softening curves with an exponential model. Normalization theory can be used to reveal the intrinsic unity behind the differences that different test methods introduce into the shear strength of the same kind of samples. The unsaturated strain-softening characteristics and strength envelope of remolded samples are studied by suction-controlled triaxial shear tests, and the results are simulated by an exponential function. The strength parameters of the unsaturated samples are obtained for use in unsaturated seepage analysis under rainfall. The elastic and plastic characteristics of expansive soil are studied to obtain the parameters of a modified G-A model. The humidification failure characteristics of expansive soil are discussed to study the slope disaster mechanism under bias-pressure consolidation as back pressure increases and suction decreases. Indoor and outdoor soil-water characteristic curves (SWCCs) are measured to study the influencing factors and the relations under different stresses and filling environments; the moisture-absorption curves express the field relationship between suction and water content. The SWCCs of Mengzi expansive soil are measured on a GDS stress-path triaxial system, and the unsaturated infiltration function is obtained for studying the seepage and stability of expansive-soil slopes. Rainfall infiltration and slope stability under multifarious factors are studied by analyzing the fissure causes of Mengzi expansive soil, and a slope disaster mechanism is put forward based on the dual controlling effects of suction and fissures. Two new techniques are put forward to mitigate expansive-soil disasters, and the embankment reinforcement technique is optimized, which is of practical help in solving engineering problems.
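As a sketch of the exponential strength model mentioned above (synthetic data and parameter names of my own choosing, not the thesis's measurements), a strain-softening curve can be fitted as:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical softening model: deviatoric stress decays exponentially
# from a peak toward a residual value, q(eps) = q_res + dq * exp(-k*eps).
# The data below are synthetic, not the thesis's measurements.
def softening(eps, q_res, dq, k):
    return q_res + dq * np.exp(-k * eps)

eps = np.linspace(0.0, 0.15, 30)       # post-peak axial strain
rng = np.random.default_rng(1)
q = softening(eps, 80.0, 40.0, 25.0) + rng.normal(0.0, 0.5, eps.size)

popt, _ = curve_fit(softening, eps, q, p0=(50.0, 50.0, 10.0))
q_res, dq, k = popt                    # roughly 80, 40, 25 recovered
```

A hyperbolic model for the hardening curves would be fitted the same way, with only the model function swapped out.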

Relevance:

10.00%

Publisher:

Abstract:

Rock mass is widely recognized as a geologic body consisting of rock blocks and discontinuities. The deformation and failure of rock mass are determined not only by the rock blocks but also, and in fact more importantly, by the discontinuities. The mutual cutting and combination of discontinuities control the mechanical properties of rock mass, and their complex intersections produce intense anisotropy, especially under the effect of in-situ stress. Engineering practice has shown that brittle failure of hard rock often occurs while the working stress is still far below the yield and compressive strengths, and such failure is directly related to the fracture propagation of discontinuities; fracture propagation of discontinuities is the essence of hard rock failure. The discontinuous mechanical properties of rock mass can be studied precisely by combining statistical analysis of discontinuities with fracture mechanics. According to the superposition principle in fracture mechanics, either Problem A or Problem C can be chosen. Problem A calculates the crack-tip stress and displacement fields of internal discontinuities, mainly by numerical methods. Problem C calculates the crack-tip stress and displacement fields under the assumption that the overall stress field in the rock mass is already known; it therefore avoids the complex mutual interference of the stress fields of discontinuities, known in fracture mechanics as the crack-system problem. Solving Problem C requires field measurement of the stress field in the rock mass. The linear superposition of the strain energies of discontinuities is scientifically sound. The difference between fracture mechanics of rock mass and of other materials can mostly be expressed as follows: fracture mechanics of other materials mainly faces Problem A and cannot avoid the multi-crack puzzle, while rock mass fracture mechanics answers Problem C. 
Problem C avoids the puzzle of mutual interference among multiple discontinuities via in-situ stress measurement; on this basis, fracture mechanics can be used conveniently in rock mass. The statistical fracture constitutive relations of rock mass introduced in this article are based on Problem C and on the linear superposition of discontinuity strain energies. These constitutive relations have several merits: first, they are physical rather than empirical; second, they are well suited to describing the anisotropy of rock mass; third, they incorporate exogenous factors such as in-situ stress. The statistical fracture constitutive relation is thus an effective approach to physically based, anisotropic, stress-dependent rock mass problems. Standing on the foundation of predecessors' statistical fracture constitutive relations, this article improves the discontinuity distribution function: it derives the limitation of the negative exponential distribution in the course of regression analysis and advocates using the two-parameter negative exponential distribution instead. To solve two-dimensional stability problems on key engineering cross-sections of rock mass, the article derives the planar flexibility tensor of rock mass and establishes a two-dimensional statistical fracture constitutive relation for through-going fractures on the basis of through-crack fracture mechanics. Based on research on crack-tip plasticity of through-going fractures, for example Irwin's plastic equivalent crack, the article establishes a way to deal with the stress singularity and plastic yielding at discontinuity tips. Research on deformation parameters has always been a highlight of rock mass mechanics. 
After excavation of the dam foundation of the Xiaowan hydroelectric power station, a great number of unloading cracks developed in the foundation rock mass, whose mechanical properties became intricate and strongly anisotropic. The foundation rock mass mainly developed three groups of discontinuities: gently dipping discontinuities, steeply dipping discontinuities, and schistosity planes, and most of them have been partially loosened by unloading. According to in-situ stress data, the stress field of the dam foundation is highly non-uniform, being strongly influenced by the tectonic stress field, the self-weight stress field, the geometric boundary conditions of excavation, and excavation unloading. The complexity of the discontinuities and the heterogeneity of the stress field make the mechanical properties of the foundation rock mass intricate and variable; if rock mass mechanics research does not take every relevant influencing factor into consideration as far as possible, major errors are likely. This article calculated the elastic modulus of the rock mass after excavation of the Xiaowan dam foundation gutter, over a region covering all monoliths of the Xiaowan concrete double-curvature arch dam; different monoliths adopted the through-going or the buried statistical fracture constitutive relation as appropriate. The statistical fracture constitutive relation suits the strongly anisotropic and heterogeneous rock mass of the Xiaowan dam foundation. Comparative analysis of its results against the elastic moduli measured by inclined-plane loading tests and estimated by the RMR method shows that the three sets of moduli agree well, so the statistical fracture constitutive relations are trustworthy. 
Generally speaking, this article finished the following work on the basis of predecessors' research: argumentation of Problem C of the superposition principle in fracture mechanics; establishment of the two-dimensional through-going statistical fracture constitutive relation of rock mass; derivation of the limitation of the negative exponential distribution and its improvement; improvement of the three-dimensional buried statistical fracture constitutive relation; equivalent calculation of the discontinuity-tip plastic zone; and calculation of the rock mass elastic modulus on two-dimensional cross-sections. The overall research thread is inherited from the "statistical rock mass mechanics" of Wu Faquan (1992).
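The two-parameter negative exponential distribution advocated here is the ordinary negative exponential with an added location parameter. A minimal sketch with synthetic trace-length data (hypothetical values, not Xiaowan measurements):

```python
import numpy as np

# Two-parameter (shifted) negative exponential for discontinuity trace
# length: f(l) = lam * exp(-lam * (l - l0)), l >= l0.  The location
# parameter l0 models the lower observation threshold that the
# one-parameter law ignores.  Values here are hypothetical.
rng = np.random.default_rng(2)
lam_true, l0_true = 0.8, 0.5                # 1/m and m
sample = l0_true + rng.exponential(1.0 / lam_true, size=20000)

# simple estimates: l0 ~ min(sample), lam ~ 1 / (mean - l0)
l0_hat = sample.min()
lam_hat = 1.0 / (sample.mean() - l0_hat)
```

Regression of the empirical distribution against the one-parameter law would force l0 = 0 and bias lam, which is the limitation the article derives.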

Relevance:

10.00%

Publisher:

Abstract:

The processes of seismic wave propagation in phase space and one-way wave extrapolation in the frequency-space domain are, in the absence of dissipation, essentially transformations under the action of one-parameter Lie groups. Consequently, numerical methods for the propagation ought to be Lie group transformations too, known as Lie group methods. After a fruitful study of fast methods for matrix inversion, some Lie group methods for seismic numerical modeling and depth migration are presented here. First, a Lie group description and method for seismic wave propagation in phase space is proposed; in other words, a symplectic group description and method, since the symplectic group is a Lie subgroup and symplectic methods are special Lie group methods. In the Hamiltonian framework, the propagation of seismic waves is a one-parameter symplectic transformation, so numerical methods for it ought to be symplectic. After discretizing the wave field in time and phase space, many explicit, implicit, and leap-frog symplectic schemes are deduced for numerical modeling; by comparison, the finite-difference (FD) method is an approximation of a symplectic method. Explicit, implicit, and leap-frog symplectic schemes and the FD method are therefore applied under the same conditions to compute wave fields in a constant-velocity model, a synthetic model, and the Marmousi model, and the results illustrate the potential power of symplectic methods. As an application, a symplectic method is employed to produce a synthetic seismic record of the Qinghai foothills model. Another application is the development of a ray + symplectic reverse-time migration method. 
To strike a reasonable balance between computational efficiency and accuracy, we combine a multi-valued wave field and Green-function algorithm with symplectic reverse-time migration, developing a new ray + wave-equation prestack depth migration method. Marmousi model data and Qinghai foothills model data are processed; the results show that the method is a better alternative to ray migration for imaging complex structures. Similarly, one-way wave extrapolation in the frequency-space domain is a Lie group transformation with one parameter, the depth z, so its numerical methods ought to be Lie group methods. After discretizing the wave field in depth and space, the Lie group transformation takes the form of a matrix exponential, and each approximation of it gives a Lie group algorithm. Although the Padé symmetric series approximation of the matrix exponential yields an extrapolation method traditionally regarded as implicit FD migration, it benefits the theory and application of seismic imaging by representing depth extrapolation and migration in an entirely different way, while the technique of coordinates of the second kind for approximating the matrix exponential opens a new way to develop migration operators. Matrix inversion plays a vital role in the numerical migration method given by the Padé symmetric series approximation. The matrix has a Toeplitz structure with a helical boundary condition and is easy to invert by LU decomposition. An efficient LU decomposition method is spectral factorization: after the minimum-phase correlative function of each array of the matrix is obtained by spectral factorization, all the functions are arranged according to their former locations to form a lower triangular matrix. The major merit of LU decomposition with spectral factorization (SF decomposition) is its efficiency in dealing with a large number of matrices. 
Once a table of the spectral factorization results for each array of the matrix has been set up, SF decomposition can produce the lower triangular matrix simply by reading the table. However, this method ignores the relationships among arrays, which introduces decomposition errors; for numerical calculation in complex models, these errors are fatal. Direct elimination gives the exact LU decomposition, but even simplified for our case, the large number of decompositions costs unendurable computer time. A hybrid method is therefore proposed that combines spectral factorization with direct elimination: its decomposition errors are ten times smaller than those of spectral factorization, and it is much faster than direct elimination, especially when dealing with a large number of matrices. With the hybrid method, 3D implicit migration can be expected to be applied to real seismic data. Finally, the impulse response of the 3D implicit migration operator is presented.
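The structure-preserving behaviour that motivates the symplectic schemes can be seen on a one-degree-of-freedom stand-in; the leap-frog (Störmer-Verlet) step below is a sketch, not the dissertation's wave-field scheme:

```python
# Leap-frog (Stormer-Verlet), the simplest explicit symplectic scheme,
# on a harmonic oscillator q'' = -w^2 q as a one-degree-of-freedom
# stand-in for the discretized wave field: the energy error stays
# bounded for all time rather than drifting.
w, dt, nsteps = 1.0, 0.1, 10000
q, p = 1.0, 0.0
energies = []
for _ in range(nsteps):
    p -= 0.5 * dt * w**2 * q        # half kick
    q += dt * p                     # drift
    p -= 0.5 * dt * w**2 * q        # half kick
    energies.append(0.5 * p**2 + 0.5 * w**2 * q**2)

drift = max(energies) - min(energies)   # bounded, O(dt^2)
```

A non-symplectic explicit scheme of the same order would show a secular energy drift over this many steps, which is the "potential power" the abstract refers to.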

Relevance:

10.00%

Publisher:

Abstract:

Seismic wave-field numerical modeling and wave-equation migration imaging have become useful and indispensable tools for imaging complex geological objects. An important task in numerical modeling is the matrix exponential approximation in wave-field extrapolation. For matrices of small size, the square-root operator in the exponential can be approximated using different splitting algorithms, which are usually applied to the order or the dimension of the one-way wave equation to reduce the complexity of the problem. In this paper, we obtain an approximate equation for the inversion of the 2-D Helmholtz operator using multi-way splitting. Analysis based on Gauss integrals and optimized partial-fraction coefficients shows that splitting algorithms may accumulate dispersion in steep-dip imaging. High-order symplectic Padé approximation can handle this problem; however, approximating the square-root operator in the exponential with splitting algorithms cannot solve the dispersion problem in one-way wave-field migration imaging. We attempt an exact approximation through eigenfunction expansion of the matrix, choosing the Fast Fourier Transform (FFT) for its low computational cost; an 8th-order Laplace matrix splitting is performed with the FFT method to obtain an assemblage of small matrices. With the introduction of Lie group and symplectic methods into seismic wave-field extrapolation, accurate approximation of the matrix exponential based on them has become a hot research field. To solve the matrix exponential approximation problem, the coordinates-of-the-second-kind (SKC) method and the generalized polar decompositions (GPD) method of Lie group theory are methods of choice: the SKC method uses a generalized Strang splitting, while the GPD method uses polar-type and symmetric polar-type splittings. 
Compared with Padé approximation, these two methods require less computation, yet both preserve the Lie group structure. We consider the SKC and GPD methods prospective and attractive in research and practice.
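A minimal sketch of the splitting idea (generic symmetric matrices standing in for the discretized operators, and plain Strang splitting rather than the paper's SKC/GPD variants) shows the second-order convergence that symmetric splittings deliver:

```python
import numpy as np
from scipy.linalg import expm

# Strang (symmetric) splitting: e^{A+B} ~ (e^{A/(2m)} e^{B/m} e^{A/(2m)})^m.
# Generic symmetric matrices stand in for the discretized extrapolation
# operators; the point is the second-order accuracy of symmetric splitting.
rng = np.random.default_rng(3)
A = rng.normal(size=(6, 6)); A = 0.5 * (A + A.T)
B = rng.normal(size=(6, 6)); B = 0.5 * (B + B.T)

def strang(A, B, m):
    h = 1.0 / m
    step = expm(0.5 * h * A) @ expm(h * B) @ expm(0.5 * h * A)
    return np.linalg.matrix_power(step, m)

exact = expm(A + B)
e8 = np.linalg.norm(strang(A, B, 8) - exact)
e64 = np.linalg.norm(strang(A, B, 64) - exact)
# refining the substep 8x cuts the error roughly 64x (second order)
```

The SKC and GPD splittings replace the two exponential factors with cheaper maps that still land exactly in the Lie group, which is their advantage over Padé approximation.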

Relevance:

10.00%

Publisher:

Abstract:

With the improvement of mantle convection theory, the development of computing methods, and the growth of measurement data, we can simulate more clearly the effects of mantle convection on geophysical observables at the Earth's surface, such as global heat flow and the global lithospheric stress field; mantle convection is the primary mechanism transporting heat from the Earth's deep interior to its surface and the underlying force mechanism of the Earth's dynamics. Chapter 1 reviews the historical background and present state of research on mantle convection theory. Chapter 2 presents the basic concepts of thermal convection and the basic theory of mantle flow. The effects of mantle flow on the generation and distribution of the global lithospheric stress field are the subject of Chapter 3. Mantle convection produces normal and tangential stresses at the bottom of the lithosphere; this sublithospheric stress field deforms the lithosphere as a surface force and results in the stress field within it. The simulation shows good agreement between predictions and observations in most regions. Most subduction zones and continental collisions are under compression, while ocean ridges, such as the East Pacific Rise, the Atlantic ridge, and the East African rift valley, are under tension, and most hotspots preferentially occur in regions where the calculated stress is tensile. The calculated directions of the most compressive principal horizontal stress largely accord with observations, except in some regions, such as the NW Pacific subduction zone and the Qinghai-Tibet Plateau, where the directions differ. 
This shows that mantle flow plays an important role in causing or affecting the large-scale stress field within the lithosphere. A global heat flow simulation based on a kinematic model of mantle convection is given in Chapter 4. Mantle convection velocities are first calculated from internal loading theory, and the velocity field is then used as input to solve the thermal problem. Results show that the calculated depth derivatives of the near-surface temperature correlate closely with the observed surface heat flow pattern, and the higher heat flow values around mid-ocean ridge systems are reproduced very well. The predicted average temperature as a function of depth reveals two thermal boundary layers, one close to the surface and one close to the core-mantle boundary, with the rest of the mantle nearly isothermal. Although advection dominates heat transfer in most of the mantle, conduction is still locally important in the boundary layers and plays an important role in the surface heat flow pattern; the existence of surface plates is responsible for the long-wavelength component of that pattern. Chapter 5 introduces the effects of mantle convection on present-day crustal movement in the China mainland. Using a dynamic method, we present a quantitative model of present-day crustal movement in China that considers not only the India-Eurasia collision and the gravitational potential energy difference of the Tibet Plateau, but also the shear traction on the bottom of the lithosphere induced by global mantle convection. Comparison of our results with the velocity field from GPS observations shows that the model satisfactorily reproduces the general picture of crustal deformation in China. 
Numerical modeling reveals that the stress field at the base of the lithosphere induced by mantle flow is probably a considerable factor in the movement and deformation of the lithosphere in continental China, with its effect focused on Eastern China. A numerical study of small-scale convection with variable viscosity in the upper mantle is introduced in Chapter 6. Based on a two-dimensional model, small-scale convection in the mantle-lithosphere system with variable viscosity is studied using the finite element method, with the viscosity varying exponentially with temperature. The results show that if the viscosity is strongly temperature-dependent, the upper part of the system does not take part in the convection, and a stagnant lid, identified as the lithosphere, forms at the top of the system because of its low temperature and high viscosity. The calculated surface heat flow, topography, and gravity anomaly are well correlated with the convection pattern: regions with high heat flow and uplift correspond to upwelling flow, and vice versa. Chapter 7 gives a brief outline of a future research subject: inversion of lateral density heterogeneity in the mantle by minimizing viscous dissipation.

Relevance:

10.00%

Publisher:

Abstract:

Repeated-batch cultures of strawberry cells (Fragaria ananassa cv. Shikinari) subjected to four medium-shift procedures (constant LS medium, constant B5 medium, alternation between LS and B5 starting from LS, and alternation between LS and B5 starting from B5) were investigated for enhanced anthocyanin productivity. To determine the optimum period for repeated-batch cultures, two medium-shift periods of 9 and 14 days were studied, representing the end of the exponential growth phase and the stationary phase, respectively. Compared with the corresponding batch cultures, higher anthocyanin productivity was achieved for all repeated-batch cultures at a 9-day medium-shift period. The average anthocyanin productivity was enhanced 1.7- and 1.76-fold by repeated-batch cultures in constant LS and constant B5 medium, respectively, at a 9-day shift period over 45 days. No further improvement was observed when the medium was alternated between LS (the growth medium) and B5 (the production medium). Anthocyanin production was unstable at a 14-day shift period regardless of the medium-shift procedure. The results show that it is feasible to improve anthocyanin production by repeated-batch culture of strawberry cells.

Relevance:

10.00%

Publisher:

Abstract:

We consider the problem of matching model and sensory data features in the presence of geometric uncertainty, for the purpose of object localization and identification. The problem is to construct sets of model feature and sensory data feature pairs that are geometrically consistent given that there is uncertainty in the geometry of the sensory data features. If there is no geometric uncertainty, polynomial-time algorithms are possible for feature matching, yet these approaches can fail when there is uncertainty in the geometry of data features. Existing matching and recognition techniques which account for the geometric uncertainty in features either cannot guarantee finding a correct solution, or can construct geometrically consistent sets of feature pairs yet have worst-case exponential complexity in the number of features. The major new contribution of this work is to demonstrate a polynomial-time algorithm for constructing sets of geometrically consistent feature pairs given uncertainty in the geometry of the data features. We show that under a certain model of geometric uncertainty the feature matching problem in the presence of uncertainty is of polynomial complexity. This has important theoretical implications by demonstrating an upper bound on the complexity of the matching problem, and by offering insight into the nature of the matching problem itself. These insights prove useful in the solution to the matching problem in higher-dimensional cases as well, such as matching three-dimensional models to either two- or three-dimensional sensory data. The approach is based on an analysis of the space of feasible transformation parameters. This paper outlines the mathematical basis for the method, and describes the implementation of an algorithm for the procedure. Experiments demonstrating the method are reported.
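A toy one-dimensional version of the feasible-transformation-space analysis (my illustration; the paper works in richer transformation spaces) makes the polynomial-time consistency check concrete:

```python
# Toy 1-D version of the feasible-transformation-space analysis:
# translation matching with bounded sensory uncertainty eps.  Each
# (model point, data point) pair constrains the translation t to
# [d - m - eps, d - m + eps]; a pairing is geometrically consistent
# iff the intervals share a point, checkable in linear time.
def consistent(pairs, eps):
    lo = max(d - m - eps for m, d in pairs)
    hi = min(d - m + eps for m, d in pairs)
    return lo <= hi

model = [0.0, 2.0, 5.0]
data = [10.1, 11.9, 15.05]           # model shifted by ~10, with noise
pairs = list(zip(model, data))
ok = consistent(pairs, eps=0.2)      # True: t near 10 fits every pair
bad = consistent(pairs, eps=0.01)    # False: noise exceeds tolerance
```

The point of the construction is that consistency is decided by intersecting feasible regions, not by enumerating pairings, which is where the polynomial bound comes from.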

Relevance:

10.00%

Publisher:

Abstract:

In the principles-and-parameters model of language, the principle known as "free indexation" plays an important part in determining the referential properties of elements such as anaphors and pronominals. This paper addresses two issues. (1) We investigate the combinatorics of free indexation. In particular, we show that free indexation must produce an exponential number of referentially distinct structures. (2) We introduce a compositional free indexation algorithm. We prove that the algorithm is "optimal." More precisely, by relating the compositional structure of the formulation to the combinatorial analysis, we show that the algorithm enumerates precisely all possible indexings, without duplicates.
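Counting indexings up to renaming of indices, the referentially distinct structures over n elements correspond to set partitions, whose count (the Bell number) grows faster than exponentially. A small enumerator (my sketch, not the paper's algorithm):

```python
def partitions(xs):
    """Enumerate all set partitions of xs: each partition is one way of
    freely coindexing the elements (same block = same index)."""
    if not xs:
        yield []
        return
    first, rest = xs[0], xs[1:]
    for part in partitions(rest):
        # put `first` into each existing block, or into a block of its own
        for i in range(len(part)):
            yield part[:i] + [[first] + part[i]] + part[i + 1:]
        yield [[first]] + part

counts = [sum(1 for _ in partitions(list(range(n)))) for n in range(6)]
print(counts)   # Bell numbers: [1, 1, 2, 5, 15, 52]
```

Like the paper's algorithm, this recursion produces each indexing exactly once, with no duplicates to filter out.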

Relevance:

10.00%

Publisher:

Abstract:

Early and intermediate vision algorithms, such as smoothing and discontinuity detection, are often implemented on general-purpose serial, and more recently, parallel computers. Special-purpose hardware implementations of low-level vision algorithms may be needed to achieve real-time processing. This memo reviews and analyzes some hardware implementations of low-level vision algorithms. Two types of hardware implementations are considered: the digital signal processing chips of Ruetz (and Brodersen) and the analog VLSI circuits of Carver Mead. The advantages and disadvantages of these two approaches for producing a general, real-time vision system are considered.

Relevance:

10.00%

Publisher:

Abstract:

Many current recognition systems use constrained search to locate objects in cluttered environments. Previous formal analysis has shown that the expected amount of search is quadratic in the number of model and data features, if all the data is known to come from a single object, but is exponential when spurious data is included. If one can group the data into subsets likely to have come from a single object, then terminating the search once a "good enough" interpretation is found reduces the expected search to cubic. Without successful grouping, terminated search is still exponential. These results apply to finding instances of a known object in the data. In this paper, we turn to the problem of selecting models from a library, and examine the combinatorics of determining that a candidate object is not present in the data. We show that the expected search is again exponential, implying that naïve approaches to indexing are likely to carry an expensive overhead, since an exponential amount of work is needed to weed out each of the incorrect models. The analytic results are shown to be in agreement with empirical data for cluttered object recognition.
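The combinatorial contrast can be stated as a back-of-envelope count (illustrative numbers of my own choosing): an interpretation tree pairs each of m data features with one of d model features, plus a wildcard branch for spurious data:

```python
# An interpretation tree pairs each of m data features with one of d
# model features, or with a "spurious" wildcard branch.  Without the
# wildcard the tree has d**m leaves (consistency pruning keeps the
# expected work quadratic); with it the tree grows like (d + 1)**m,
# which is the exponential cost of ruling a candidate model out.
def leaves(d, m, wildcard):
    return (d + 1) ** m if wildcard else d ** m

print(leaves(5, 10, False))   # 9765625
print(leaves(5, 10, True))    # 60466176
```

The raw leaf counts are worst-case sizes; the paper's point is that pruning rescues the no-wildcard case but not the elimination of incorrect models.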