906 results for "computational cost"
Abstract:
Embedded propulsion systems, such as those used in advanced hybrid-wing-body aircraft, can potentially offer major fuel burn and noise reduction benefits but introduce challenges in the aerodynamic and acoustic integration of the high-bypass-ratio fan system. A novel approach is proposed to quantify the effects of non-uniform flow on the generation and propagation of multiple pure tone (MPT) noise. The new method is validated on a conventional inlet geometry first. The ultimate goal is to conduct a parametric study of S-duct inlets in order to quantify the effects of inlet design parameters on the acoustic signature. The key challenge is that the mechanisms underlying the distortion transfer, noise source generation, and propagation through the non-uniform flow field are inherently coupled, such that a simultaneous computation of the aerodynamics and acoustics is required. The technical approach is based on a body force description of the fan blade row that is able to capture the distortion transfer and the MPT noise generation mechanisms while greatly reducing computational cost. A single 3-D full-wheel unsteady CFD simulation, in which the Euler equations are solved to second-order spatial and temporal accuracy, simultaneously computes the MPT noise generation and its propagation in the distorted mean flow. Several numerical tools were developed to enable the implementation of this new approach. Parametric studies were conducted to determine appropriate grid and time step sizes for the propagation of acoustic waves. The Ffowcs Williams-Hawkings integral method is used to propagate the noise to far-field receivers. Non-reflecting boundary conditions are implemented through the use of acoustic buffer zones. The body force modeling approach is validated, and proof-of-concept studies demonstrate the generation of disturbances at both blade-passing and shaft-order frequencies using the perturbed body force method.
The full methodology is currently being validated using NASA's Source Diagnostic Test (SDT) fan and inlet geometry. Copyright © 2009 by Jeff Defoe, Alex Narkaj & Zoltan Spakovszky.
Abstract:
The study of random dynamic systems usually requires the definition of an ensemble of structures and the solution of the eigenproblem for each member of the ensemble. If the process is carried out using a conventional numerical approach, the computational cost becomes prohibitive for complex systems. In this work, an alternative numerical method is proposed. The results for the response statistics are compared with values obtained from a detailed stochastic FE analysis of plates. The proposed method seems to capture the statistical behaviour of the response with a reduced computational cost.
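The conventional ensemble approach whose cost the abstract's method aims to reduce can be sketched as follows. This is a minimal Monte Carlo illustration, not the proposed method: the grounded 3-DOF spring-mass chain, the lognormal stiffness statistics, and the sample count are all hypothetical choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def natural_freqs(k):
    """Natural frequencies (rad/s) of a grounded 3-DOF spring-mass chain with unit masses."""
    K = np.array([[k[0] + k[1], -k[1],        0.0],
                  [-k[1],        k[1] + k[2], -k[2]],
                  [0.0,         -k[2],         k[2]]])
    return np.sqrt(np.linalg.eigvalsh(K))  # one eigensolve per ensemble member

# The eigenproblem is solved for every member: cost grows linearly with ensemble size,
# which is what becomes prohibitive for complex (e.g. FE plate) models.
ensemble = np.array([natural_freqs(1e4 * rng.lognormal(0.0, 0.1, 3))
                     for _ in range(500)])
mean_f, std_f = ensemble.mean(axis=0), ensemble.std(axis=0)
```

For a detailed stochastic FE model each `natural_freqs` call would be a full eigensolve of a large matrix, which is exactly the cost the proposed alternative avoids.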
Abstract:
We investigate the Student-t process as an alternative to the Gaussian process as a non-parametric prior over functions. We derive closed form expressions for the marginal likelihood and predictive distribution of a Student-t process, by integrating away an inverse Wishart process prior over the covariance kernel of a Gaussian process model. We show surprising equivalences between different hierarchical Gaussian process models leading to Student-t processes, and derive a new sampling scheme for the inverse Wishart process, which helps elucidate these equivalences. Overall, we show that a Student-t process can retain the attractive properties of a Gaussian process - a nonparametric representation, analytic marginal and predictive distributions, and easy model selection through covariance kernels - but has enhanced flexibility, and predictive covariances that, unlike a Gaussian process, explicitly depend on the values of training observations. We verify empirically that a Student-t process is especially useful in situations where there are changes in covariance structure, or in applications such as Bayesian optimization, where accurate predictive covariances are critical for good performance. These advantages come at no additional computational cost over Gaussian processes.
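The closed-form predictive mentioned above can be sketched in a few lines of numpy. This is an illustrative sketch of the standard Student-t process predictive (mean identical to the GP, covariance rescaled by a data-dependent factor); the RBF kernel, jitter, degrees of freedom, and test data are all assumptions for demonstration.

```python
import numpy as np

def rbf(a, b, ell=1.0):
    """Squared-exponential kernel between 1-D input arrays."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)

def tp_predict(x, y, xs, nu=5.0):
    """Student-t process predictive: GP mean, GP covariance scaled by (nu+beta-2)/(nu+n-2)."""
    n = len(y)
    K11 = rbf(x, x) + 1e-8 * np.eye(n)        # jitter for numerical stability
    K12, K22 = rbf(x, xs), rbf(xs, xs)
    alpha = np.linalg.solve(K11, y)
    mean = K12.T @ alpha                      # identical to the GP predictive mean
    beta = y @ alpha                          # y^T K11^{-1} y: depends on observed values
    cov_gp = K22 - K12.T @ np.linalg.solve(K11, K12)
    scale = (nu + beta - 2.0) / (nu + n - 2.0)
    return mean, scale * cov_gp, nu + n       # predictive is multivariate-t with nu+n dof

x = np.linspace(0.0, 1.0, 8)
y = np.sin(2 * np.pi * x)
mean, cov, dof = tp_predict(x, y, np.array([0.25, 0.8]))
```

The `scale` factor is what makes the predictive covariance depend on the training observations `y`, unlike a GP; the extra work over a GP is a single inner product, consistent with the "no additional computational cost" claim.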
Abstract:
This paper addresses devising a reliable model-based Harmonic-Aware Matching Pursuit (HAMP) for reconstructing sparse harmonic signals from their compressed samples. Performance guarantees for HAMP are provided; they show that HAMP requires fewer measurements and has lower computational cost than other greedy techniques. The complexity of formulating a structured sparse approximation algorithm is highlighted, and the inapplicability of the conventional thresholding operator to the harmonic signal model is demonstrated. The harmonic sequential deletion algorithm is subsequently proposed, and other sparse approximation methods are evaluated. The superior performance of HAMP is shown in the presented experiments. © 2013 IEEE.
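HAMP itself is model-based and harmonic-aware; the generic greedy pursuit it is compared against can be sketched as plain orthogonal matching pursuit. This is a minimal numpy baseline with an illustrative random dictionary, not the paper's algorithm.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily select k atoms, re-solving least squares each step."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))  # best-correlated atom
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((60, 100))
A /= np.linalg.norm(A, axis=0)               # unit-norm atoms
x_true = np.zeros(100)
x_true[[5, 40, 77]] = [1.5, -2.0, 1.0]       # 3-sparse ground truth
x_hat = omp(A, A @ x_true, k=3)
```

A harmonic-aware method replaces the unstructured atom selection above with a step that respects the harmonic signal model, which is where the measurement and cost savings claimed in the abstract come from.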
Abstract:
We focus on the relationship between the linearization method and linear complexity and show that the linearization method is another effective technique for calculating linear complexity. We analyze its effectiveness by comparison with the logic circuit method, and we compare the relevant conditions and necessary computational cost with those of the Berlekamp-Massey algorithm and the Games-Chan algorithm. The significant property of the linearization method is that it needs no output sequence from a pseudo-random number generator (PRNG), because it calculates linear complexity from the algebraic expression of the generator's algorithm. When a PRNG has n stages (registers or internal states), the necessary computational cost is smaller than O(2^n). The Berlekamp-Massey algorithm, by contrast, needs O(N^2), where N (≈ 2^n) denotes the period. Since existing methods work from the output sequence, the initial value of the PRNG influences the resulting linear complexity, so the linear complexity is generally given only as an estimate. A linearization method, in contrast, works from the PRNG's algorithm itself and can therefore determine a lower bound on the linear complexity.
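For reference, the Berlekamp-Massey algorithm discussed above computes the linear complexity of an observed output sequence; a minimal GF(2) sketch (the example sequence is illustrative):

```python
def berlekamp_massey(s):
    """Linear complexity of a binary sequence s over GF(2) (Berlekamp-Massey)."""
    b, c = [1], [1]          # previous and current connection polynomials
    L, m = 0, 1              # current complexity, shift since last length change
    for i in range(len(s)):
        # discrepancy between s[i] and the current LFSR's prediction
        d = s[i]
        for j in range(1, L + 1):
            d ^= c[j] & s[i - j]
        if d == 0:
            m += 1
        else:
            t = c[:]
            if len(b) + m > len(c):
                c += [0] * (len(b) + m - len(c))
            for j in range(len(b)):
                c[j + m] ^= b[j]      # c(x) += x^m * b(x)
            if 2 * L <= i:            # complexity must increase
                L, b, m = i + 1 - L, t, 1
            else:
                m += 1
    return L

# Period-7 m-sequence from x^3 + x + 1: linear complexity 3.
seq = [1, 0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0]
print(berlekamp_massey(seq))  # 3
```

Note the inner loop runs over the observed sequence, which is why the cost scales with the period N and why the result depends on the particular output (and hence the initial value) that was observed.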
Abstract:
Designing an effective and efficient video quality metric is important for practical applications. The most reliable assessment method is subjective evaluation, so designing an objective metric that simulates the human visual system (HVS) is a reasonable and practical approach. In this paper, a video quality assessment metric based on visual perception is proposed. A three-dimensional wavelet transform is used to decompose the video and extract features, mimicking the multichannel structure of the HVS. The spatio-temporal contrast sensitivity function (S-T CSF) is employed to weight the wavelet coefficients, simulating the nonlinearity of the human eye. A perceptual threshold is applied after S-T CSF filtering to obtain the visually sensitive coefficients. These coefficients are normalized, and visual sensitivity errors are then calculated between the reference and distorted videos. Finally, a temporal perceptual mechanism is applied when accumulating the video quality scores, reducing computational cost. Experimental results show that the proposed method outperforms most existing methods and is comparable to LHS and PVQM.
Abstract:
The conditional nonlinear optimal perturbation (CNOP), which is a nonlinear generalization of the linear singular vector (LSV), is applied in important problems of atmospheric and oceanic sciences, including ENSO predictability, targeted observations, and ensemble forecast. In this study, we investigate the computational cost of obtaining the CNOP by several methods. Differences and similarities, in terms of the computational error and cost in obtaining the CNOP, are compared among the sequential quadratic programming (SQP) algorithm, the limited memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) algorithm, and the spectral projected gradients (SPG2) algorithm. A theoretical grassland ecosystem model and the classical Lorenz model are used as examples. Numerical results demonstrate that the computational error is acceptable with all three algorithms. The computational cost to obtain the CNOP is reduced by using the SQP algorithm. The experimental results also reveal that the L-BFGS algorithm is the most effective algorithm among the three optimization algorithms for obtaining the CNOP. The numerical results suggest a new approach and algorithm for obtaining the CNOP for a large-scale optimization problem.
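The constrained maximization that defines the CNOP can be sketched on the classical Lorenz model with an off-the-shelf SQP solver. This is an illustrative sketch only: the basic state, optimization horizon, constraint radius, and the use of scipy's SLSQP with numerical gradients (rather than an adjoint model) are all assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def lorenz(x0, dt=0.01, steps=100, s=10.0, r=28.0, b=8.0 / 3.0):
    """Integrate the Lorenz-63 model with RK4; return the state at final time."""
    def f(u):
        return np.array([s * (u[1] - u[0]),
                         u[0] * (r - u[2]) - u[1],
                         u[0] * u[1] - b * u[2]])
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        k1 = f(x); k2 = f(x + 0.5 * dt * k1)
        k3 = f(x + 0.5 * dt * k2); k4 = f(x + dt * k3)
        x = x + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return x

x_base = np.array([1.0, 3.0, 5.0])
ref = lorenz(x_base)
eps = 0.1  # radius of the initial-perturbation constraint

# CNOP: the initial perturbation d with ||d|| <= eps whose nonlinear evolution grows most.
obj = lambda d: -np.linalg.norm(lorenz(x_base + d) - ref)
con = {"type": "ineq", "fun": lambda d: eps**2 - d @ d}
res = minimize(obj, 0.01 * np.ones(3), method="SLSQP", constraints=[con])
cnop, growth = res.x, -res.fun
```

Replacing `method="SLSQP"` with an L-BFGS or SPG-style solver (with the ball constraint handled by projection) gives the other two algorithms compared in the study; for large-scale models the per-iteration cost is dominated by the model integrations inside `obj`.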
Abstract:
Corner detection is widely used and is fundamental to many computer vision tasks. This paper proposes a fast, high-precision corner detection algorithm that is simple and novel, with a uniquely designed corner condition and corner response function. Unlike previous work, the algorithm is designed around the local geometric features of corners, which greatly reduces the amount of data to be processed while preserving detection precision and other performance metrics. Comprehensive comparisons with the widely used SUSAN and Harris algorithms, in terms of correct detection rate, missed detections, precision, noise robustness, and computational complexity, show that the proposed algorithm performs well on both synthetic and natural images.
Abstract:
A high-resolution prestack imaging technique for seismic data is developed in this thesis. With this technique, the reflection coefficients of thin sand sheets can be obtained in order to understand and identify thin oil reservoirs. One-way wave-equation-based migration methods can more accurately model seismic wave propagation effects such as multiple arrivals and recover nearly correct reflection energy in complex inhomogeneous media, and they therefore have advantages in imaging complex structures. It is thus a good choice to apply the proposed high-resolution imaging to prestack depth migration gathers. One of the main shortcomings of one-way wave-equation migration methods, however, is their low computational efficiency, so an improvement in computational efficiency is carried out first. The method presented is a frequency-dependent varying-step depth extrapolation scheme combined with a table-driven, one-point wavefield interpolation technique for wave-equation-based migration methods. The varying-step scheme reduces the computational cost of wavefield depth extrapolation, and the table-driven, one-point interpolation reconstructs the extrapolated wavefield on an equal, desired vertical step with high computational efficiency. The proposed varying-step extrapolation plus one-point interpolation scheme yields a two-thirds reduction in computational cost compared with equal-step depth extrapolation of the wavefield, while giving almost the same image. The frequency-dependent varying-step depth extrapolation scheme is derived using the optimum split-step Fourier method, but it can also be used with other frequency-domain wave-equation migration methods.
The proposed method is demonstrated on impulse responses, the 2-D Marmousi dataset, a 3-D salt dataset, and a 3-D field dataset. A high-resolution prestack imaging method is presented in the second part of this thesis. A seismic interference method for solving for the relative reflection coefficients is presented, and high-resolution imaging is obtained by introducing a sparseness-constrained least-squares inversion into the reflection-coefficient imaging. Gaussian regularization is first imposed, and a smoothed solution is obtained by solving the equations derived from the least-squares inversion. Cauchy regularization is then introduced into the least-squares inversion, from which the sparse, high-resolution solution for the relative reflection coefficients is obtained. The proposed scheme can be used together with other prestack imaging methods when higher resolution is needed in a target zone. The theory of the seismic interference method and the solution of the sparseness-constrained least-squares inversion are presented. The proposed method is demonstrated on synthetic examples and field data.
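A single depth step of the split-step Fourier extrapolation underlying the scheme above can be sketched as follows. This is a minimal sketch for one frequency: the grid, frequency, and velocities are illustrative, and the varying-step bookkeeping and table-driven interpolation of the thesis are omitted.

```python
import numpy as np

def ssf_step(p, dx, dz, w, v, vref):
    """One split-step Fourier depth-extrapolation step at angular frequency w.
    p: complex wavefield along x at depth z; v: velocity v(x); vref: reference velocity."""
    kx = 2.0 * np.pi * np.fft.fftfreq(p.size, d=dx)
    kz2 = (w / vref) ** 2 - kx ** 2
    kz = np.sqrt(kz2.astype(complex))            # evanescent components decay
    P = np.fft.fft(p) * np.exp(1j * kz * dz)     # phase shift in the wavenumber domain
    p = np.fft.ifft(P)
    # split-step correction for lateral velocity variation, applied in space
    return p * np.exp(1j * w * (1.0 / v - 1.0 / vref) * dz)

nx, dx, dz, w = 128, 10.0, 5.0, 2.0 * np.pi * 20.0
v = np.full(nx, 2000.0)                          # homogeneous test medium
out = ssf_step(np.ones(nx, complex), dx, dz, w, v, vref=2000.0)
```

In a frequency-dependent varying-step scheme, `dz` would be chosen per frequency band and the resulting irregularly sampled wavefield interpolated back to a regular vertical grid.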
Abstract:
With the development of seismic exploration, targets have become more and more complex, which places higher demands on the accuracy and efficiency of 3D exploration. The Fourier finite-difference (FFD) method is one of the most valuable methods for imaging complex structures: it keeps the ability of the finite-difference method to handle laterally varying media and inherits the stability and efficiency of the phase-screen method. In this thesis, the accuracy of the FFD operator is greatly improved by using a simulated annealing algorithm. This optimization takes the extrapolation step and bandwidth into account, making it better suited to various bandwidths and discretization scales than the commonly used optimization based on velocity contrast alone. The FFD method is then extended to viscoacoustic modeling. Based on the one-way wave equation, the presented method is implemented in the frequency domain; it is therefore more efficient than two-way methods and more convenient than time-domain methods in handling attenuation and dispersion effects. The proposed method can handle large velocity contrasts with high efficiency, which is helpful for further research on earth absorption and seismic resolution. Starting from the dispersion relation of the acoustic VTI wave equation, this thesis also extends the FFD migration method to acoustic VTI media. Compared with the conventional FFD method, the presented method has similar computational efficiency and keeps the ability to handle large velocity contrasts and steep dips. Numerical experiments on the SEG salt model show that the presented method is a practical migration method for complex acoustic VTI media, because it can handle both large velocity contrasts and large anisotropy variations, and its accuracy remains relatively high even in strongly anisotropic media. In the 3D case, the two-way splitting technique of the FFD operator causes artificial azimuthal anisotropy. These artifacts become more apparent with increasing dip angles and velocity contrasts, which prevents the application of the FFD method in 3D complex media, and existing methods proposed to reduce the azimuthal anisotropy significantly increase the computational cost. In this thesis, an alternating-direction-implicit plus interpolation scheme is incorporated into the 3D FFD method to reduce the azimuthal anisotropy. By carefully exploiting the Fourier-based structure of the FFD method, the improved fast algorithm takes almost no extra computation time. The resulting operator keeps both the accuracy and the efficiency of the FFD method, which helps improve both the accuracy and the efficiency of prestack depth migration. A general comparison between the FFD operator and the generalized-screen operator is also presented, which is valuable for choosing the suitable method in practice. Relative-error curves and migration impulse responses show that the generalized-screen operator is much more sensitive to velocity contrasts than the FFD operator. The FFD operator can handle a wide range of velocity contrasts, while the generalized-screen operator can handle only a limited range; for both large and weak velocity contrasts, the higher-order terms of the generalized-screen operator have little effect on improving accuracy. The FFD operator is more suitable for large velocity contrasts, while the generalized-screen operator is more suitable for moderate velocity contrasts. Both one-way implicit finite-difference migration and two-way explicit finite-difference modeling have been implemented and compared with the corresponding FFD methods, providing a reference for choosing the proper method. FFD migration is shown to be more attractive than the widely used implicit finite-difference migration in accuracy, efficiency, and frequency dispersion. FFD modeling can use relatively coarser grids than the commonly used explicit finite-difference modeling, and it is therefore much faster in 3D modeling, especially for large-scale complex media.
Abstract:
Datuming, which has not been well solved in complex areas, is a long-standing problem in seismic processing and imaging. Theoretically, wave-equation datuming (WED) works well in areas with substantial surface topography and complex velocity structure. In practice, however, many difficulties remain, for three main reasons: (1) it is difficult to obtain the velocity model; (2) the computational cost is high and the efficiency is low; (3) reflection waveform distortions are introduced by the low S/N ratio of the seismic data. The second and third problems are addressed in this paper. To improve computational efficiency, the DP1 operator proposed by Fu Li-Yun is applied in WED. Quantitative and semi-quantitative conclusions on computational accuracy and efficiency are obtained by comparing how well three operators (PS, SSF, DP1) adapt to surface topography and lateral velocity variation. Moreover, the impact on WED of near-surface scattering associated with complex surface topography is analyzed theoretically. The analysis leads to the following conclusions: WED is stable and effective when the field data have a high S/N ratio and the velocity model is accurate, but it does not work well when the S/N ratio is low, so denoising during WED is important for low-S/N data. The paper presents a theoretical analysis of the issues facing WED, which is expected to provide a useful reference for the further development of this technology.
Abstract:
In oil and gas exploration, migration is an effective technique for imaging subsurface structures. Wave-equation migration (WEM) dominates other migration methods in accuracy despite its higher computational cost, and its advantages will continue to emerge with the progress of computer technology. WEM is, however, more sensitive to the velocity model than other methods: small velocity perturbations result in large errors in the image. Currently, the Kirchhoff method is still very popular in the exploration industry because precise velocity models are difficult to provide, so it is urgent to find a practical approach to migration velocity modeling. This dissertation is mainly devoted to migration velocity analysis for WEM. 1. We categorize wave-equation prestack depth migration and introduce the concept of migration. Different kinds of extrapolation operators are analyzed to demonstrate their accuracy and applicability. We derive the DSR and SSR migration methods and apply both to a 2D model. 2. The output of prestack WEM takes the form of common image gathers (CIGs). Angle-domain common image gathers (ADCIGs) obtained by wave-equation methods are proved to be free of artifacts, and they are the most promising candidates for migration velocity analysis. We discuss how to obtain ADCIGs with DSR and SSR, both before and after imaging. The quality of the post-stack image depends on the CIGs: only focused or flattened CIGs generate a correct image. Based on wave-equation migration, the image can be enhanced by special measures; in this dissertation we use both prestack depth residual migration and the time-shift imaging condition to improve image quality. 3. Inaccurate velocities lead to errors in imaging depth and to curvature of coherent events in CIGs. The ultimate goal of migration velocity analysis (MVA) is to focus scattered events at the correct depth and to flatten curved events by updating the velocities.
The kinematic information is implicitly contained in the focus-depth aberration, and the dynamic information in the amplitude. The initial model for wave-equation migration velocity analysis (WEMVA) is the output of residual moveout (RMO) velocity analysis, so for completeness we review the RMO method in this dissertation. We discuss the general idea of RMO velocity analysis for flat and dipping events and the corresponding velocity update formulas. Migration velocity analysis is very time consuming; for computational convenience, we discuss how RMO works for synthesized source-record migration. In some extreme situations the RMO method fails: especially in poorly illuminated areas or beneath steep structures, it is very difficult to obtain enough angle information for RMO. WEMVA is based on wavefield extrapolation theory, which successfully overcomes the drawbacks of ray-based methods; it inverts residual velocities from residual images. Based on migration regression, we study the linearized scattering operator and the linearized residual image, which is the key to WEMVA. Obtaining the residual image by prestack residual migration based on DSR is very inefficient, so we propose obtaining the residual migration through the time-shift imaging condition, allowing WEMVA to be implemented with SSR. This evidently reduces the computational cost of the method.
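The time-shift imaging condition referred to above extends the usual cross-correlation imaging condition with a relative shift τ between source and receiver wavefields. A minimal sketch (the toy wavefields and the use of circular shifts are assumptions for brevity):

```python
import numpy as np

def time_shift_image(src, rec, taus):
    """I(x, tau) = sum_t S(x, t) R(x, t + tau).
    With the correct velocity, energy focuses at tau = 0; a residual
    shift indicates a velocity error usable for velocity analysis."""
    img = np.zeros((src.shape[0], len(taus)))
    for k, tau in enumerate(taus):
        img[:, k] = np.sum(src * np.roll(rec, -tau, axis=1), axis=1)
    return img

rng = np.random.default_rng(0)
src = rng.standard_normal((4, 64))     # toy source wavefield at 4 image points
rec = np.roll(src, 3, axis=1)          # receiver field delayed by 3 samples (velocity error)
img = time_shift_image(src, rec, list(range(8)))
```

Here the stack over image points peaks at τ = 3, the residual shift that a WEMVA-style update would drive toward zero.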
Abstract:
Seismic exploration is the main method of searching for oil and gas. With its development, targets have become more and more complex, which places higher demands on accuracy and efficiency. The Fourier finite-difference (FFD) method is one of the most valuable methods for imaging complex structures and has achieved good results. In complex media with wide propagation angles, however, the performance of the FFD method is not satisfactory. Based on the FFD operator, we extend the two optimized coefficients to four and optimize them globally using a simulated annealing algorithm. Our optimization method takes the solution of the one-way wave equation as the objective function and, in addition to the velocity contrast, considers the effects of both frequency and depth interval. The proposed method improves the accurate propagation angle of the FFD method without additional computation time, reaching 75° in complex media with large lateral velocity contrasts and wide propagation angles. Combining the FFD method with the alternating-direction-implicit plus interpolation (ADIPI) scheme, we obtain a 3D FFD method with higher accuracy. While keeping the efficiency of the FFD method, this approach both removes the azimuthal anisotropy and optimizes the FFD operator, which is helpful for 3D seismic exploration. We also use the multi-parameter global optimization method to optimize the higher-order term of the FFD method. Using a lower-order equation to approximate the effect of a higher-order equation not only decreases the computational cost arising from the higher-order term but also markedly improves the accuracy of the FFD method. We compare the FFD, SAFFD (multi-parameter simulated-annealing globally optimized FFD), PFFD, phase-shift (PS), globally optimized FFD (GOFFD), and higher-order-term optimized FFD methods. The theoretical analyses and impulse responses demonstrate that the higher-order-term optimized FFD method significantly extends the accurate propagation angle of the FFD method, which is useful for complex media with wide propagation angles.
Abstract:
The seismic survey is the most effective geophysical prospecting method in the exploration and development of oil and gas. As the structure and lithology of geological targets become increasingly complex, seismic sections must have higher resolution if the targets are to be described accurately, and a high signal-to-noise (S/N) ratio is the precondition of high resolution. To improve the S/N ratio, we put forward four methods for eliminating random noise, based on a detailed analysis of the technique for noise elimination using prediction filtering in the f-x-y domain; each method addresses a different problem in that technique. For weak noise and large filters, the response of the noise to the filter is small; for strong noise and short filters, it is significant. Because of this noise response, the prediction operators are inaccurate, and inaccurate operators produce incorrect results. We therefore put forward a method using prediction filtering by inversion in the f-x-y domain. The method assumes that the seismic signal consists of a predictable part and an unpredictable part, and prior information about the prediction operator is introduced into the objective function. The method eliminates the response of the noise to the filter operator and ensures that the filter operators are accurate, which effectively improves the filtering results. When the dips of the strata are very complex, the data are generally divided into rectangular patches in order to obtain the prediction operators in the f-x-y domain. These patches usually need significant overlap to obtain a good result; the overlap means the data are used repeatedly, which effectively increases the size of the data, and since the computational cost grows with the data size, computational efficiency suffers. Moreover, the prediction operators obtained by standard prediction filtering in the f-x-y domain cannot describe the change of dip when the dips are very complex, so the filtering results are aliased, and each patch is an independent problem. To settle these problems, we put forward a method of space-varying prediction filtering in the f-x-y domain, in which the prediction operators change with space accordingly, eliminating false events in the result. Prior information about the prediction operator is introduced into the objective function, and obtaining the prediction operators of the patches becomes a set of related problems rather than independent ones; this avoids the repeated use of data and improves computational efficiency. The random noise eliminated by prediction filtering in the f-x-y domain is Gaussian, and the standard method cannot effectively eliminate non-Gaussian noise; a prediction filtering method using the lp norm (especially p=1), described in this paper, can effectively eliminate non-Gaussian noise in the f-x-y domain. Considering that the dips of the strata can be obtained accurately, we also put forward a method of prediction filtering under the constraint of the dip in the f-x-y domain, which effectively increases computational efficiency and improves the result. Tests on theoretical models and field data prove that the four methods effectively solve these different problems of the standard method; they are highly practical, and their effect is obvious.
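The basic prediction filter these methods build on can be sketched in 2-D (f-x): transform to the frequency domain along time, fit a short complex prediction filter across traces at each frequency, and replace each trace by its prediction, which preserves coherent (predictable) events and attenuates random noise. The filter length, toy data, and plain least-squares fit here are illustrative; the paper's inversion-based, space-varying, lp-norm, and dip-constrained variants all modify this fitting step.

```python
import numpy as np

def fx_denoise(data, p=4):
    """f-x prediction filtering: at each temporal frequency, predict trace n
    from the previous p traces with a least-squares complex filter."""
    nt, nx = data.shape
    D = np.fft.rfft(data, axis=0)
    F = D.copy()
    for i in range(D.shape[0]):
        d = D[i]
        # rows n = p .. nx-1; column k holds the lag-(k+1) trace values d[n-k-1]
        A = np.column_stack([d[p - 1 - k: nx - 1 - k] for k in range(p)])
        a, *_ = np.linalg.lstsq(A, d[p:], rcond=None)
        F[i, p:] = A @ a                 # predicted (filtered) traces
    return np.fft.irfft(F, n=nt, axis=0)

rng = np.random.default_rng(0)
nt, nx = 64, 32
wavelet = np.sin(2 * np.pi * np.arange(nt) / 16.0) * np.exp(-0.05 * np.arange(nt))
clean = np.column_stack([np.roll(wavelet, j) for j in range(nx)])  # dipping linear event
noisy = clean + 0.5 * rng.standard_normal((nt, nx))
den = fx_denoise(noisy)
```

A linear event is a complex sinusoid across traces at each frequency, hence exactly predictable; the random noise is not, and is largely rejected by the short filter.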
Abstract:
We consider the problem of detecting a large number of different classes of objects in cluttered scenes. Traditional approaches require applying a battery of different classifiers to the image at multiple locations and scales. This can be slow and can require a lot of training data, since each classifier requires the computation of many different image features. In particular, for independently trained detectors, the (run-time) computational complexity and the (training-time) sample complexity scale linearly with the number of classes to be detected. It seems unlikely that such an approach will scale up to allow recognition of hundreds or thousands of objects. We present a multi-class boosting procedure (joint boosting) that reduces both the computational and sample complexity by finding common features that can be shared across the classes (and/or views). The detectors for each class are trained jointly, rather than independently. For a given performance level, the total number of features required, and therefore the computational cost, is observed to scale approximately logarithmically with the number of classes. The jointly selected features are closer to edges and to generic features typical of many natural structures, rather than specific object parts. These generic features generalize better and considerably reduce the computational cost of multi-class object detection.
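The feature-sharing idea can be sketched with a much-simplified variant: each boosting round selects a single regression stump (one feature and threshold) shared by all classes, with per-class votes. This squared-loss residual-fitting sketch and its toy data are illustrative assumptions, not the paper's procedure (which uses weighted boosting and searches over subsets of classes to share each feature).

```python
import numpy as np

def fit_joint_boost(X, Y, rounds=25):
    """Each round fits ONE stump (feature j, threshold t) shared across all classes,
    with per-class votes a, b, chosen to minimize the summed squared residual."""
    F = np.zeros_like(Y, dtype=float)
    model = []
    for _ in range(rounds):
        R = Y - F                                  # joint residuals over all classes
        best_err, best = np.inf, None
        for j in range(X.shape[1]):
            for t in np.unique(X[:, j])[::4]:      # subsample thresholds for speed
                m = X[:, j] <= t
                if not m.any() or m.all():
                    continue
                a, b = R[m].mean(axis=0), R[~m].mean(axis=0)
                err = ((R[m] - a) ** 2).sum() + ((R[~m] - b) ** 2).sum()
                if err < best_err:
                    best_err, best = err, (j, t, a, b)
        j, t, a, b = best
        m = X[:, j] <= t
        F[m] += a; F[~m] += b
        model.append(best)
    return model

def predict(model, X, n_classes):
    F = np.zeros((X.shape[0], n_classes))
    for j, t, a, b in model:
        m = X[:, j] <= t
        F[m] += a; F[~m] += b
    return F.argmax(axis=1)

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (200, 2))
labels = np.where(X[:, 0] < 0, 0, np.where(X[:, 1] < 0, 1, 2))
Y = -np.ones((200, 3)); Y[np.arange(200), labels] = 1.0  # one-vs-all targets in {-1,+1}
model = fit_joint_boost(X, Y)
acc = (predict(model, X, 3) == labels).mean()
```

Because every round's feature evaluation is shared by all classes at test time, the per-image cost is the number of rounds, not the number of classes times a per-class feature count.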