982 results for ASYMPTOTIC BETHE-ANSATZ


Relevance:

10.00%

Publisher:

Abstract:

A new stability criterion is proposed for networked control systems with bounded time delays and packet dropouts. Based on the Lyapunov method and graph theory, sufficient conditions for the asymptotic stability of nonlinear discrete-time and continuous-time networked control systems are given, the maximum allowable delay bounds that preserve the stability of these two classes of systems are obtained, and a controller design method is derived. Furthermore, using the spectral properties of interval matrices, a sufficient condition for interval stability of networked control systems is given. An algorithm is designed to obtain the gains of a proportional-integral feedback controller. Numerical examples demonstrate the effectiveness of the proposed methods.
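As a hedged sketch of the kind of model and condition involved (the abstract's specific systems and criteria are not reproduced here), a discrete-time networked control system with a bounded, time-varying delay and a Lyapunov-type stability condition can be written as follows; all symbols are illustrative:

```latex
% Plant with state feedback received over the network with delay \tau_k
% (packet dropouts can be absorbed into a longer effective delay):
\[
x_{k+1} = f\bigl(x_k,\; K\,x_{k-\tau_k}\bigr), \qquad 0 \le \tau_k \le \bar{\tau}.
\]
% A Lyapunov--Krasovskii-type sufficient condition for asymptotic stability:
% find V(\chi_k) > 0 on the delayed state \chi_k = (x_k, x_{k-1}, \dots, x_{k-\bar{\tau}}) with
\[
V(\chi_{k+1}) - V(\chi_k) < 0 \qquad \text{for all admissible } \tau_k \text{ and } \chi_k \neq 0.
\]
% The largest \bar{\tau} for which such a V exists plays the role of the
% maximum allowable delay bound referred to in the abstract.
```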

Relevance:

10.00%

Publisher:

Abstract:

Attaining sufficient accuracy and efficiency of the generalized screen propagator and improving the quality of input gathers are common problems in wave-equation prestack depth migration. In this paper, a high-order formula for the generalized screen propagator of the one-way wave equation is derived using the asymptotic expansion of the single-square-root operator. Based on this formula, a new generalized screen propagator is developed, composed of a split-step Fourier propagator and high-order correction terms. The new propagator improves calculation precision without sharply increasing the amount of computation, and makes the generalized screen propagator better suited to media with strong lateral velocity variation. Because wave-equation prestack depth migration is sensitive to the quality of the input gathers, which greatly affects the output, and because available seismic data-processing systems cannot obtain traveltimes corresponding to multiple arrivals, estimate large residual statics, merge seismic data from different projects, or design inverse Q filters, we establish difference equations embodying Huygens' principle for obtaining traveltimes corresponding to multiple arrivals; propose a time-variable matching filter for seismic data merging, using the fast (Mallat tree) algorithm for wavelet transforms; put forward a method for estimating residual statics by applying optimum model parameters estimated by iterative inversion with three combined algorithms, i.e., the CMP intertrace cross-correlation algorithm, the Laplacian image edge-extraction algorithm, and the DFP algorithm; and present a phase-shift inverse Q filter based on Futterman's amplitude and phase-velocity dispersion formula and wave-field extrapolation theory. Numerical and real-data results show that the theory and methods are practical and efficient. Key words: prestack depth migration, generalized screen propagator, residual statics, inverse Q filter, traveltime, 3D seismic data merging
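For orientation, a minimal sketch of the single-square-root operator and the screen-type expansion around a reference slowness, in standard one-way wave-equation notation (not the paper's exact high-order formula):

```latex
% Downward extrapolation of the pressure wavefield P(k_x, z, \omega):
\[
\frac{\partial P}{\partial z} = i\,k_z\,P, \qquad
k_z = \sqrt{\omega^2 s^2(x,z) - k_x^2}, \qquad s = 1/v .
\]
% Split the slowness into a laterally invariant reference s_0(z) plus a perturbation,
% s^2 = s_0^2 + \Delta(s^2), and expand the square root:
\[
k_z \;\approx\;
\underbrace{\sqrt{\omega^2 s_0^2 - k_x^2}}_{\text{phase-shift term}}
\;+\;
\underbrace{\frac{\omega^2\,\Delta(s^2)}{2\sqrt{\omega^2 s_0^2 - k_x^2}}}_{\text{first screen correction}}
\;+\;\cdots
\]
% Replacing the square root in the correction term by \omega s_0 gives roughly the
% split-step Fourier term \omega\,(s - s_0); keeping further terms of the expansion
% yields the higher-order (generalized screen) corrections.
```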

Relevance:

10.00%

Publisher:

Abstract:

Ray tracing is a fast and effective method for approximate seismic numerical simulation, with important theoretical and practical value for seismic simulation, inversion, migration, and imaging. It follows from seismic theory under the geometric (high-frequency asymptotic) approximation, in which the main energy of the seismic wave field propagates along ray paths. Computing ray paths and traveltimes is one of the key steps in seismic simulation, inversion, migration, and imaging. By combining a triangular-grid layout on the wavefront with the wavefront-reconstruction ray tracing method, this thesis puts forward wavefront-reconstruction ray tracing based on triangular grids on the wavefront, achieving accurate and fast calculation of ray paths and traveltimes. The method has a stable and reasonable ray distribution and overcomes problems caused by shadow zones in conventional ray tracing methods. The triangular-grid layout keeps all grids stable and makes grid subdivision and interpolation of new rays convenient; it reduces the number of grids and the memory required, improving efficiency, while enhancing accuracy through an accurate and effective description and division of the wavefront. A ray-traced traveltime table, which has the character of 2-D or 3-D scattered data, contains a great number of data points in seismic simulation, inversion, migration, and imaging; the traveltime-table file is therefore read frequently, and calculation efficiency is low. For these reasons, reasonable traveltime-table compression is necessary. This thesis proposes surface fitting and scattered-data compression with B-spline functions, applied to 2-D and 3-D traveltime-table compression. To compress a 2-D (3-D) traveltime table, we first construct the smallest rectangular (cuboidal) region of regular grids covering all traveltime data points, based on their coordinate ranges in the 2-D surface (3-D space). The values at this finite set of regular grid nodes, which are stored in memory, are then calculated by the least-squares method. The traveltime table is decompressed when needed by interpolation with the 2-D (3-D) B-spline function. In these calculations, the coefficient matrix is stored in sparse form and the linear system is solved by LU decomposition based on the multifrontal method, exploiting the sparsity of the least-squares matrix. The method has been applied successfully to several models; the cubic B-spline proves to be the best basis function for surface fitting, yielding a smooth fitted surface and stable, effective compression with high approximation accuracy on regular grids. By constructing reasonable regular grids to ensure the efficiency and accuracy of compression and surface fitting, the goal of traveltime-table compression is achieved, greatly improving calculation efficiency in seismic simulation, inversion, migration, and imaging.
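A minimal sketch of the compression idea in Python, using SciPy's least-squares bivariate spline as a stand-in for the thesis's own least-squares/multifrontal solver; the scattered traveltimes, knot counts, and velocity are illustrative, not taken from the thesis:

```python
import numpy as np
from scipy.interpolate import LSQBivariateSpline

# Scattered traveltime samples (x, z, t) -- synthetic, for illustration only.
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 5000.0, 20000)        # receiver x (m)
z = rng.uniform(0.0, 3000.0, 20000)        # depth z (m)
t = np.hypot(x - 2500.0, z) / 2000.0       # fake traveltimes (s), v = 2000 m/s

# Coarse regular knot grid covering the data range: this is the "compressed" model.
tx = np.linspace(x.min(), x.max(), 25)[1:-1]   # interior knots in x
tz = np.linspace(z.min(), z.max(), 15)[1:-1]   # interior knots in z

# Least-squares cubic B-spline surface fit to the scattered points.
spl = LSQBivariateSpline(x, z, t, tx, tz, kx=3, ky=3)

# "Decompression": evaluate the spline wherever traveltimes are needed.
t_hat = spl.ev(x, z)
print("max abs error (s):", np.abs(t_hat - t).max())
print("stored coefficients:", spl.get_coeffs().size, "vs raw samples:", t.size)
```

Only the spline coefficients (a few hundred numbers here) need to be kept in memory instead of the full table, which is the point of the compression.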

Relevance:

10.00%

Publisher:

Abstract:

Using approximate high-frequency asymptotic methods to solve the scalar wave equation, we obtain the eikonal equation and the transport equation. Solving the eikonal equation by the method of characteristics provides a mathematical derivation of the ray tracing equations, so the ray tracing system is fully based on the high-frequency asymptotic approximation. If the eikonal is complex, or more precisely real-valued on the rays and complex off them, we can derive the Gaussian beam. This thesis mainly concentrates on the theory of Gaussian beams. Compared with classical ray tracing theory, the Gaussian beam method (GBM) has many advantages. First, rays are no longer required to stop at the exact positions of the receivers; thus time-consuming two-point ray tracing can be avoided. Second, the GBM yields stable results in regions of the wavefield where standard ray theory fails (e.g., caustics, shadow zones, and the critical distance). Third, unlike seismograms computed by conventional ray tracing techniques, GBM synthetic data are less influenced by minor details in the model representation. Here I implement the kinematic and dynamic ray tracing systems and, on that basis, the GBM, and give some numerical examples. These examples show the importance and feasibility of the ray tracing system. In addition, I have studied the reflection coefficient of an inhomogeneous S-polarized electromagnetic wave at the interface between conductive media. Starting from the difference in direction between the phase constant and the attenuation constant when an electromagnetic wave propagates in a conductive medium, and using the boundary conditions at the interface, we derive the reflection coefficient of the inhomogeneous S-polarized wave and plot its curves. The curves show that quasi-total reflection occurs when the wave is incident from the medium with greater conductivity onto the medium with smaller conductivity. There are two peak values, at the critical angles of the phase constant and the attenuation constant, and the reflection coefficient is smaller than 1. This conclusion differs markedly from that for total reflection of light.
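For reference, the standard zeroth-order high-frequency substitution and the resulting eikonal and transport equations for the scalar wave equation, in conventional notation (not copied from the thesis):

```latex
% Trial solution u(\mathbf{x},t) = A(\mathbf{x})\,e^{i\omega(T(\mathbf{x}) - t)} inserted into
% \nabla^2 u = \frac{1}{v^2}\,u_{tt} and sorted by powers of \omega gives, at leading orders,
\[
(\nabla T)^2 = \frac{1}{v^2(\mathbf{x})} \qquad \text{(eikonal equation)},
\]
\[
2\,\nabla T \cdot \nabla A + A\,\nabla^2 T = 0 \qquad \text{(transport equation)}.
\]
% The characteristics of the eikonal equation are the kinematic ray equations,
% e.g. with traveltime T as the parameter and slowness vector \mathbf{p} = \nabla T:
\[
\frac{d\mathbf{x}}{dT} = v^2\,\mathbf{p}, \qquad
\frac{d\mathbf{p}}{dT} = -\,\frac{\nabla v}{v}.
\]
```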

Relevance:

10.00%

Publisher:

Abstract:

A Gaussian beam is an asymptotic solution of the wave equation concentrated around a central ray. The Gaussian beam method has many advantages over standard ray tracing. Because of the prevalence of multipathing and caustics in complex media, Kirchhoff migration usually cannot produce satisfactory images, whereas Gaussian beam migration can obtain better results. The Runge-Kutta method is used to carry out the ray tracing, and the wavefront construction method is used to calculate the multipath wavefield. In this thesis, a new method is proposed to determine the starting point and initial direction of a new ray, taking advantage of the radius of curvature calculated by dynamic ray tracing. The propagation characteristics of Gaussian beams in complex media are investigated. When Gaussian beams are used to calculate the Green's function, the wave field near the source is decomposed into Gaussian beams in different directions, and the wave field at a point is then the superposition of the individual beams. Migration aperture is a key factor in Kirchhoff migration; here, the criterion for choosing the optimum aperture is discussed using stationary-phase analysis. Two equivalent methods are proposed, of which the second is preferable. A Gaussian beam migration based on dip scanning, together with its procedure, is developed: the migration is accomplished using the traveltime, amplitude, and takeoff angle calculated by the Gaussian beam method. Using the proposed migration method, I carry out numerical calculations for a simple theoretical model, the Marmousi model, and field data, and compare the results with those of Kirchhoff migration. The comparison shows that the new Gaussian beam migration obtains better results than Kirchhoff migration, with less migration noise and clearer images of complex structures.
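A minimal sketch of the kinematic part only: Runge-Kutta integration of the 2-D ray equations dx/dT = v²p, dp/dT = -∇v/v in a smooth velocity model. The constant-gradient model, step size, and takeoff angle are illustrative; the dynamic ray tracing and beam construction used in the thesis are not reproduced here:

```python
import numpy as np

def velocity(x, z):
    """Illustrative v(z) model with a constant vertical gradient (not from the thesis)."""
    return 2000.0 + 0.5 * z                       # m/s, z positive downward

def grad_velocity(x, z):
    """Analytic gradient (dv/dx, dv/dz) of the model above."""
    return np.array([0.0, 0.5])

def ray_rhs(y):
    """Right-hand side of the kinematic ray equations, y = (x, z, px, pz),
    parameterized by traveltime T:  dx/dT = v^2 p,  dp/dT = -grad(v)/v."""
    x, z, px, pz = y
    v = velocity(x, z)
    gx, gz = grad_velocity(x, z)
    return np.array([v * v * px, v * v * pz, -gx / v, -gz / v])

def trace_ray(x0, z0, takeoff_angle, dt=1e-3, nsteps=2000):
    """Classic fourth-order Runge-Kutta integration of one ray."""
    v0 = velocity(x0, z0)
    p = np.array([np.sin(takeoff_angle), np.cos(takeoff_angle)]) / v0   # |p| = 1/v
    y = np.array([x0, z0, p[0], p[1]])
    path = [y[:2].copy()]
    for _ in range(nsteps):
        k1 = ray_rhs(y)
        k2 = ray_rhs(y + 0.5 * dt * k1)
        k3 = ray_rhs(y + 0.5 * dt * k2)
        k4 = ray_rhs(y + dt * k3)
        y = y + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
        path.append(y[:2].copy())
        if y[1] < 0.0:                            # stop if the ray returns to the surface
            break
    return np.array(path)

path = trace_ray(0.0, 0.0, np.deg2rad(30.0))
print("ray endpoint (x, z):", path[-1])
```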

Relevance:

10.00%

Publisher:

Abstract:

One of the great puzzles in the psychology of visual perception is that the visual world appears to be a coherent whole despite our viewing it through a temporally discontinuous series of eye fixations. This research attempts to explain the puzzle from the perspective of sequential visual information integration. In recent years, investigators have hypothesized that information maintained in visual short-term memory (VSTM) can gradually become a visual mental image in the visual buffer during a time delay and be integrated with currently perceived information. Some preliminary studies have investigated the integration between VSTM and visual percepts, but further research is required to address several questions about the spatial-temporal characteristics, information representation, and mechanism of integrating sequential visual information. Based on the theory of similarity between visual mental imagery and visual perception, this research (comprising three studies) employed the temporal integration paradigm and an empty-cell localization task to further explore the spatial-temporal characteristics, information representation, and mechanism of integrating sequential visual information (sequential arrays). Study 1 further explored the temporal characteristics of sequential visual information integration by examining the effects of the encoding time of the sequential stimuli on integration. Study 2 further explored the spatial characteristics by investigating the effects of changes in spatial characteristics on integration. Study 3 explored the representation of the information maintained in VSTM and the integration mechanism, using behavioral experiments and eye-tracking technology. The results indicated that: (1) Sequential arrays could be integrated without strategic instruction. Increasing the duration of the first array improved performance, whereas increasing the duration of the second array did not. The temporal correlation model could not explain sequential array integration under long-ISI conditions. (2) Stimulus complexity influenced not only overall performance with sequential arrays but also the ISI value at which performance reached its asymptotic level. Sequential arrays could still be integrated when their spatial characteristics changed. During the ISI, constructing and manipulating the visual mental image of array 1 were two separate processing phases. (3) While integrating sequential arrays, people represented the pattern constituted by the object images maintained in VSTM, and the topological characteristics of those images had some impact on fixation location. The image-perception integration hypothesis was supported when the number of dots in array 1 was less than the number of empty cells, and the convert-and-compare hypothesis was supported when the number of dots in array 1 was equal to or greater than the number of empty cells. These findings not only help us better understand the process of sequential visual information integration but also have practical applications in the design of visual interfaces.

Relevance:

10.00%

Publisher:

Abstract:

We introduce and explore an approach to estimating statistical significance of classification accuracy, which is particularly useful in scientific applications of machine learning where high dimensionality of the data and the small number of training examples render most standard convergence bounds too loose to yield a meaningful guarantee of the generalization ability of the classifier. Instead, we estimate statistical significance of the observed classification accuracy, or the likelihood of observing such accuracy by chance due to spurious correlations of the high-dimensional data patterns with the class labels in the given training set. We adopt permutation testing, a non-parametric technique previously developed in classical statistics for hypothesis testing in the generative setting (i.e., comparing two probability distributions). We demonstrate the method on real examples from neuroimaging studies and DNA microarray analysis and suggest a theoretical analysis of the procedure that relates the asymptotic behavior of the test to the existing convergence bounds.
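A minimal sketch of the permutation-testing idea (not the authors' exact protocol): shuffle the labels many times, retrain and evaluate each time, and take the fraction of permuted-label accuracies that reach the observed accuracy as the p-value. The dataset, classifier, and fold structure below are illustrative assumptions:

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

def permutation_p_value(X, y, n_permutations=1000, seed=0):
    """Estimate how likely the observed cross-validated accuracy is under the
    null hypothesis that features and labels are independent."""
    rng = np.random.default_rng(seed)
    clf = LinearSVC()
    observed = cross_val_score(clf, X, y, cv=5).mean()
    null_accs = np.empty(n_permutations)
    for i in range(n_permutations):
        y_perm = rng.permutation(y)                    # break any true association
        null_accs[i] = cross_val_score(clf, X, y_perm, cv=5).mean()
    # add-one correction so the estimated p-value is never exactly zero
    p = (np.sum(null_accs >= observed) + 1) / (n_permutations + 1)
    return observed, p

# Illustrative high-dimensional, small-sample data (e.g., 40 subjects, 5000 features).
rng = np.random.default_rng(1)
X = rng.normal(size=(40, 5000))
y = np.repeat([0, 1], 20)
acc, p = permutation_p_value(X, y, n_permutations=200)
print(f"accuracy = {acc:.2f}, permutation p-value = {p:.3f}")
```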

Relevance:

10.00%

Publisher:

Abstract:

B.M. Brown, M.S.P. Eastham, I. Wood: Conditions for the spectrum associated with a leaky wire to contain the interval [−α²/4, ∞), Arch. Math., 90, 6 (2008), 554-558

Relevance:

10.00%

Publisher:

Abstract:

Iantchenko, A., (2007) 'Scattering poles near the real axis for two strictly convex obstacles', Annales Henri Poincaré 8 pp.513-568 RAE2008

Relevance:

10.00%

Publisher:

Abstract:

Gohm, Rolf; Kummerer, B.; Lang, T., (2006) 'Non-commutative symbolic coding', Ergodic Theory and Dynamical Systems 26(5) pp.1521-1548 RAE2008

Relevance:

10.00%

Publisher:

Abstract:

Mishuris, Gennady; Movchan, N.V.; Movchan, A.B., (2006) 'Steady-state motion of a Mode-III crack on imperfect interfaces', Quarterly Journal of Mechanics and Applied Mathematics 59(4) pp.487-516 RAE2008

Relevance:

10.00%

Publisher:

Abstract:

This paper analyses the asymptotic properties of nonlinear least squares estimators of the long run parameters in a bivariate unbalanced cointegration framework. Unbalanced cointegration refers to the situation where the integration orders of the observables are different, but their corresponding balanced versions (with equal integration orders after filtering) are cointegrated in the usual sense. Within this setting, the long run linkage between the observables is driven by both the cointegrating parameter and the difference between the integration orders of the observables, which we consider to be unknown. Our results reveal three noticeable features. First, superconsistent (faster than √n-consistent) estimators of the difference between memory parameters are achievable. Next, the joint limiting distribution of the estimators of both parameters is singular, and, finally, a modified version of the "Type II" fractional Brownian motion arises in the limiting theory. A Monte Carlo experiment and the discussion of an economic example are included.
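As a hedged illustration only (not necessarily the paper's exact specification), one common way to write such an unbalanced setup is to filter the higher-order observable down to a balanced order before the usual cointegrating relation:

```latex
% y_t and x_t have different (unknown) integration orders; \gamma is the gap.
\[
y_t \sim I(\delta + \gamma), \qquad x_t \sim I(\delta), \qquad \gamma \neq 0,
\]
% The balanced (filtered) version of y_t cointegrates with x_t in the usual sense:
\[
\Delta^{\gamma} y_t = \beta\, x_t + u_t, \qquad u_t \sim I(\delta - b), \quad b > 0,
\]
% where \beta is the cointegrating parameter; nonlinear least squares then
% estimates (\beta, \gamma) jointly, as described in the abstract.
```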

Relevance:

10.00%

Publisher:

Abstract:

The deposition of ultrasonic energy in tissue can cause tissue damage due to local heating. For pressures above a critical threshold, cavitation will occur in tissue and bubbles will be created. These oscillating bubbles can induce a much larger thermal energy deposition in the local region. Traditionally, clinicians and researchers have not exploited this bubble-enhanced heating since cavitation behavior is erratic and very difficult to control. The present work is an attempt to control and utilize this bubble-enhanced heating. First, by applying appropriate bubble dynamic models, limits on the asymptotic bubble size distribution are obtained for different driving pressures at 1 MHz. The size distributions are bounded by two thresholds: the bubble shape instability threshold and the rectified diffusion threshold. The growth rate of bubbles in this region is also given, and the resulting time evolution of the heating in a given insonation scenario is modeled. In addition, some experimental results have been obtained to investigate the bubble-enhanced heating in an agar- and graphite-based tissue-mimicking material. Heating as a function of dissolved gas concentrations in the tissue phantom is investigated. Bubble-based contrast agents are introduced to investigate the effect on the bubble-enhanced heating, and to control the initial bubble size distribution. The mechanisms of cavitation-related bubble heating are investigated, and a heating model is established using our understanding of the bubble dynamics. By fitting appropriate bubble densities in the ultrasound field, the peak temperature changes are simulated. The results for required bubble density are given. Finally, a simple bubbly liquid model is presented to estimate the shielding effects which may be important even for low void fraction during high intensity focused ultrasound (HIFU) treatment.
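For context, a standard single-bubble dynamic model of the kind referred to above (the dissertation's specific model and parameters are not reproduced here) is the Rayleigh-Plesset equation:

```latex
% Rayleigh--Plesset equation for the radius R(t) of a spherical bubble in an
% incompressible liquid of density \rho and viscosity \mu, with surface tension \sigma,
% gas pressure p_g at the bubble wall, and driving (far-field) pressure p_\infty(t):
\[
\rho\!\left( R\ddot{R} + \tfrac{3}{2}\dot{R}^{2} \right)
  = p_g(t) - p_\infty(t) - \frac{2\sigma}{R} - \frac{4\mu\dot{R}}{R}.
\]
% Rectified diffusion (slow growth over many cycles) and shape instabilities are
% analyzed on top of such a radial model, giving the two thresholds that bound
% the asymptotic size distribution mentioned in the abstract.
```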

Relevance:

10.00%

Publisher:

Abstract:

Sonic boom propagation in a quiet, stratified, lossy atmosphere is the subject of this dissertation. Two questions are considered in detail: (1) Does waveform freezing occur? (2) Are sonic booms shocks in steady state? Both assumptions have been invoked in the past to predict sonic boom waveforms at the ground. A very general form of the Burgers equation is derived and used as the model for the problem. The derivation begins with the basic conservation equations. The effects of nonlinearity, attenuation and dispersion due to multiple relaxations, viscosity, and heat conduction, geometrical spreading, and stratification of the medium are included. When the absorption and dispersion terms are neglected, an analytical solution is available. The analytical solution is used to answer the first question. Geometrical spreading and stratification of the medium are found to slow down the nonlinear distortion of finite-amplitude waves. In certain cases the distortion reaches an absolute limit, a phenomenon called waveform freezing. Judging by the maturity of the distortion mechanism, sonic booms generated by aircraft at 18 km altitude are not frozen when they reach the ground. On the other hand, judging by the approach of the waveform to its asymptotic shape, N waves generated by aircraft at 18 km altitude are frozen when they reach the ground. To answer the second question we solve the full Burgers equation and for this purpose develop a new computer code, THOR. The code is based on an algorithm by Lee and Hamilton (J. Acoust. Soc. Am. 97, 906-917, 1995) and has the novel feature that all its calculations are done in the time domain, including absorption and dispersion. Results from the code compare very well with analytical solutions. In a NASA exercise to compare sonic boom computer programs, THOR gave results that agree well with those of other participants and ran faster. We show that sonic booms are not steady-state waves because they travel through a varying medium, suffer spreading, and fail to approximate step shocks closely enough. Although developed to predict sonic boom propagation, THOR can solve other problems for which the extended Burgers equation is a good propagation model.
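Schematically, an extended Burgers equation of this kind, shown here in a common thermoviscous-plus-spreading form with the relaxation terms omitted and not in the dissertation's exact notation, reads:

```latex
% p: acoustic pressure, s: distance along the ray, \tau = t - s/c_0: retarded time,
% \beta: coefficient of nonlinearity, \delta: diffusivity of sound,
% A(s): ray-tube area, \rho_0(s), c_0(s): ambient density and sound speed.
\[
\frac{\partial p}{\partial s}
  = \frac{\beta}{\rho_0 c_0^{3}}\, p\,\frac{\partial p}{\partial \tau}
  + \frac{\delta}{2 c_0^{3}}\,\frac{\partial^{2} p}{\partial \tau^{2}}
  - \frac{1}{2}\frac{d\ln A}{ds}\, p
  + \frac{1}{2}\frac{d\ln(\rho_0 c_0)}{ds}\, p ,
\]
% plus one additional term per relaxation process. Dropping the second
% (absorption/dispersion) term leaves the lossless equation whose analytical
% solution supports the waveform-freezing analysis described above.
```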

Relevance:

10.00%

Publisher:

Abstract:

Recent work in sensor databases has focused extensively on distributed query problems, notably distributed computation of aggregates. Existing methods for computing aggregates broadcast queries to all sensors and use in-network aggregation of responses to minimize messaging costs. In this work, we focus on uniform random sampling across nodes, which can serve both as an alternative building block for aggregation and as an integral component of many other useful randomized algorithms. Prior to our work, the best existing proposals for uniform random sampling of sensors involve contacting all nodes in the network. We propose a practical method which is only approximately uniform, but contacts a number of sensors proportional to the diameter of the network instead of its size. The approximation achieved is tunably close to exact uniform sampling, and only relies on well-known existing primitives, namely geographic routing, distributed computation of Voronoi regions and von Neumann's rejection method. Ultimately, our sampling algorithm has the same worst-case asymptotic cost as routing a point-to-point message, and thus it is asymptotically optimal among request/reply-based sampling methods. We provide experimental results demonstrating the effectiveness of our algorithm on both synthetic and real sensor topologies.
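A minimal, centralized simulation of the rejection idea (the actual protocol runs distributed over geographic routing, with each node computing its own Voronoi cell locally); the node layout and the Monte Carlo estimate of cell areas below are illustrative assumptions:

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)

# Illustrative deployment: 50 sensors placed uniformly in the unit square.
sensors = rng.uniform(0.0, 1.0, size=(50, 2))
tree = cKDTree(sensors)

def owner(point):
    """Index of the sensor whose Voronoi cell contains `point` (nearest sensor)."""
    return int(tree.query(point)[1])

# Estimate each sensor's Voronoi cell area by Monte Carlo (stand-in for the
# locally computed cell areas used by the distributed protocol).
probe = rng.uniform(0.0, 1.0, size=(200_000, 2))
_, owners = tree.query(probe)
cell_area = np.bincount(owners, minlength=len(sensors)) / len(probe)

def sample_sensor():
    """Von Neumann rejection: route to a random location's owner, then accept
    with probability (smallest cell area) / (owner's cell area), so every
    sensor is selected with (approximately) equal probability."""
    a_min = cell_area.min()
    while True:
        i = owner(rng.uniform(0.0, 1.0, size=2))    # one point-to-point "route"
        if rng.uniform() < a_min / cell_area[i]:
            return i

draws = [sample_sensor() for _ in range(5000)]
hist = np.bincount(draws, minlength=len(sensors)) / len(draws)
print("max / min selection frequency:", hist.max() / hist.min())
```

Each attempt costs roughly one point-to-point route, which is why the overall cost scales with the network diameter rather than its size.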