966 results for Conjugate gradient methods


Relevance: 80.00%

Abstract:

Numerical Linear Algebra (NLA) kernels are at the heart of many computational problems. These kernels require hardware acceleration for increased throughput. NLA solvers for dense and sparse matrices differ in the way the matrices are stored and operated upon, although they exhibit similar computational properties. While ASIC solutions for NLA solvers can deliver high performance, they are not scalable and hence are not commercially viable. In this paper, we show how NLA kernels can be accelerated on REDEFINE, a scalable runtime reconfigurable hardware platform. Compared to a software implementation, the Direct Solver (Modified Faddeev's algorithm) on REDEFINE shows a 29X improvement on average, and the Iterative Solver (Conjugate Gradient algorithm) shows a 15-20% improvement. We further show that the solution on REDEFINE scales to larger problem sizes without any notable degradation in performance.
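
For context, the conjugate gradient iteration referred to above has a compact textbook form. The sketch below is a minimal NumPy implementation of the classical (unpreconditioned) method for a symmetric positive-definite system; the names, tolerances, and test matrix are illustrative and are not taken from the REDEFINE implementation.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-8, max_iter=1000):
    """Solve A x = b for symmetric positive-definite A with the CG iteration."""
    x = np.zeros_like(b)
    r = b - A @ x              # initial residual
    p = r.copy()               # initial search direction
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)      # exact step length along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:      # residual small enough: converged
            break
        p = r + (rs_new / rs_old) * p  # standard beta update of the direction
        rs_old = rs_new
    return x

# Tiny illustrative system
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(conjugate_gradient(A, b))        # ~ [0.0909, 0.6364]
```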

Relevance: 80.00%

Abstract:

Diffuse optical tomography (DOT) using near-infrared (NIR) light is a promising tool for noninvasive imaging of deep tissue. The technique is capable of quantitative reconstruction of absorption-coefficient inhomogeneities in tissue. The motivation for reconstructing the optical property variation is that it, and in particular the absorption-coefficient variation, can be used to diagnose different metabolic and disease states of tissue. In DOT, as in any other medical imaging modality, the aim is to produce a reconstruction with good spatial resolution and accuracy from noisy measurements. We study the performance of a phased-array system for detection of optical inhomogeneities in tissue. Light transport through tissue is diffusive in nature and can be modeled using the diffusion equation if the optical parameters of the inhomogeneity are close to the optical properties of the background. The amplitude cancellation method, which uses dual out-of-phase sources (a phased array), can detect and locate small objects in a turbid medium. The inverse problem is solved using model-based iterative image reconstruction. The diffusion equation is solved using the finite element method to provide the forward model for photon transport. The solution of the forward problem is used for computing the Jacobian, and the resulting system of equations is solved using a conjugate gradient search. Simulation studies have been carried out, and the results show that a phased-array system can resolve inhomogeneities with sizes of 5 mm when the absorption coefficient of the inhomogeneity is twice that of the background tissue. To validate this result, a prototype dual-source system has been developed. Experiments are carried out by inserting an inhomogeneity of high optical absorption coefficient into an otherwise homogeneous phantom while keeping the scattering coefficient the same. High-frequency (100 MHz) modulated, dual out-of-phase laser source light is propagated through the phantom. For a homogeneous object, the interference of these sources creates an amplitude null and a phase shift of 180° along a plane between the two sources. A solid resin phantom with inhomogeneities simulating a tumor is used in our experiment. The amplitude and phase patterns are found to be disturbed by the presence of the inhomogeneity in the object. The experimental data (amplitude and phase measured at the detector) are used for reconstruction. The results show that the method is able to detect multiple inhomogeneities with sizes of 4 mm. The localization error for a 5 mm inhomogeneity is found to be approximately 1 mm.
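
The reconstruction step outlined above (a Jacobian computed from the forward model, followed by a conjugate gradient search) can be sketched schematically. The example below performs one Tikhonov-regularized Gauss-Newton update using SciPy's CG on the normal equations; the Jacobian is synthetic and the regularization weight is a placeholder, so this illustrates only the linear-algebra step, not the authors' FEM-based code.

```python
import numpy as np
from scipy.sparse.linalg import cg, LinearOperator

def gauss_newton_update(J, residual, reg=1e-3):
    """One regularized update: solve (J^T J + reg*I) delta = J^T residual with CG."""
    n = J.shape[1]
    normal = LinearOperator((n, n), matvec=lambda v: J.T @ (J @ v) + reg * v)
    delta, info = cg(normal, J.T @ residual, maxiter=200)
    return delta

# Synthetic sizes: 40 measurements, 25 unknown absorption perturbations
rng = np.random.default_rng(0)
J = rng.standard_normal((40, 25))          # stand-in Jacobian
residual = rng.standard_normal(40)         # stand-in data misfit
print(gauss_newton_update(J, residual)[:5])
```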

Relevance: 80.00%

Abstract:

The multiphase flow of fluids in an unsaturated porous medium is considered as the simultaneous three-phase flow of water, NAPL, and air in the porous medium. An adaptive-solution, fully implicit modified sequential method is used for the numerical modelling. The effects of capillarity and heterogeneity at the interface between media are studied, and it is observed that the interface criterion has to be taken into account for correct prediction of NAPL migration, especially in heterogeneous media. The modified Newton-Raphson method is used for the linearization, and the Hestenes-Stiefel conjugate gradient method is used as the solver.
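
For reference, the Hestenes-Stiefel variant of the conjugate gradient method named above uses the following textbook update of the search direction (this is the standard formula, not a detail quoted from the paper):

```latex
\beta_k^{\mathrm{HS}} = \frac{g_{k+1}^{\mathsf{T}}\,(g_{k+1}-g_k)}{d_k^{\mathsf{T}}\,(g_{k+1}-g_k)},
\qquad
d_{k+1} = -\,g_{k+1} + \beta_k^{\mathrm{HS}}\, d_k,
```

where g_k denotes the gradient (or residual) at iteration k and d_k the search direction.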

Relevance: 80.00%

Abstract:

Artificial Neural Networks (ANNs) have been found to be a robust tool for modelling many non-linear hydrological processes. The present study aims at evaluating the performance of ANNs in simulating and predicting groundwater levels in the uplands of a tropical coastal riparian wetland. The study involves comparison of two network architectures, the Feed Forward Neural Network (FFNN) and the Recurrent Neural Network (RNN), trained under five algorithms, namely the Levenberg-Marquardt algorithm, the Resilient Backpropagation algorithm, the BFGS Quasi-Newton algorithm, the Scaled Conjugate Gradient algorithm, and the Fletcher-Reeves Conjugate Gradient algorithm, by simulating the water levels in a well in the study area. The study is analyzed in two cases: one with four inputs to the networks and the other with eight inputs. The two networks and five algorithms are compared in both cases to determine the best performing combination that can simulate and predict the process satisfactorily. An ad hoc (trial-and-error) method is followed in optimizing the network structure in all cases. On the whole, the results show that the artificial neural networks have simulated and predicted the water levels in the well with fair accuracy. This is evident from the low values of the Normalized Root Mean Square Error and Relative Root Mean Square Error and the high values of the Nash-Sutcliffe Efficiency Index and Correlation Coefficient (which are taken as the performance measures to calibrate the networks) calculated after the analysis. On comparing the predicted groundwater levels with those at the observation well, the FFNN trained with the Fletcher-Reeves Conjugate Gradient algorithm using four inputs outperformed all other combinations.
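
Two quantities mentioned above have standard definitions that may help the reader: the Fletcher-Reeves update used by one of the training algorithms and the Nash-Sutcliffe Efficiency used as a performance measure. These are the usual textbook forms, not expressions reproduced from the study:

```latex
\beta_k^{\mathrm{FR}} = \frac{\lVert g_{k+1} \rVert^{2}}{\lVert g_k \rVert^{2}},
\qquad
\mathrm{NSE} = 1 - \frac{\sum_{i=1}^{n} (O_i - S_i)^{2}}{\sum_{i=1}^{n} (O_i - \bar{O})^{2}},
```

where g_k is the gradient of the training error at iteration k, and O_i and S_i are the observed and simulated water levels.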

Relevance: 80.00%

Abstract:

A hybrid finite difference and vortex method (HDV), based on domain decomposition and proposed by the authors (1992), is improved by using a modified incomplete LU decomposition conjugate gradient method (MILU-CG) and a high-order implicit difference algorithm. The flow around a rotating circular cylinder at Reynolds numbers Re = 1000 and 200 and angular-to-rectilinear speed ratio α ∈ (0.5, 3.25) is studied numerically. The long-time, fully developed features of the vortex patterns in the wake and of the drag and lift forces on the cylinder are given. The calculated streamline contours agree well with the experimentally visualized flow pictures. The existence of critical states, and the vortex patterns at these states, are given for the first time. The maximum lift-to-drag force ratio can be obtained near the critical states.
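
A rough sketch of the kind of preconditioned solve MILU-CG denotes is shown below, using SciPy's incomplete LU factorization as a stand-in for the modified incomplete LU preconditioner (SciPy ships no MILU or incomplete Cholesky routine); the Poisson-type test matrix is illustrative only.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg, spilu, LinearOperator

# Illustrative SPD system: 2D Poisson matrix on an n x n grid.
n = 50
I = sp.identity(n)
T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
A = (sp.kron(I, T) + sp.kron(T, I)).tocsc()
b = np.ones(A.shape[0])

# Incomplete LU factorization used as an approximate-inverse preconditioner
# (plain ILU stands in here for the modified ILU used in the paper).
ilu = spilu(A, drop_tol=1e-4)
M = LinearOperator(A.shape, matvec=ilu.solve)

x, info = cg(A, b, M=M, maxiter=500)
print("converged" if info == 0 else f"info={info}", np.linalg.norm(A @ x - b))
```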

Relevance: 80.00%

Abstract:

A new compact finite difference-Fourier spectral hybrid method for solving the three-dimensional incompressible Navier-Stokes equations is developed in the present paper. The fifth-order upwind compact finite difference schemes for the nonlinear convection terms in physical space and the sixth-order centered compact schemes for the derivatives in spectral space are described, respectively. The fourth-order compact scheme on a single nine-point cell for solving the Helmholtz equations satisfied by the velocities and pressure in spectral space is derived, and its preconditioned conjugate gradient iteration method is studied. The treatment of pressure boundary conditions and of three-dimensional non-reflecting outflow boundary conditions is presented. An application to vortex dislocation evolution in a three-dimensional wake is also reported.

Relevance: 80.00%

Abstract:

The Amazon exhibits a variety of complementary scenarios. Part of this ecosystem undergoes severe annual changes in its hydrological cycle, causing vast stretches of forest to be flooded. This phenomenon, however, is extremely important for the maintenance of natural cycles. In this context, understanding the dynamics of the Amazonian floodplains is important for anticipating the effect of unsustainable actions. With this motivation, this work studies a flow model for Amazonian floodplains based on the Navier-Stokes equations, together with tools that can be applied to the model, favoring a new approach to the problem. The Finite Volume Method is used to discretize the equations, and the Conjugate Gradient Method is the technique chosen to solve the associated linear systems. The Marker and Cell method, an explicit procedure for solving the Navier-Stokes equations, is employed as the numerical solution technique. Finally, the techniques are applied to preliminary simulations using the Autonomous Leaves Graph data structure, which has adaptive resources for handling the mesh that represents the problem domain.

Relevance: 80.00%

Abstract:

The conditional nonlinear optimal perturbation (CNOP), which is a nonlinear generalization of the linear singular vector (LSV), is applied to important problems in the atmospheric and oceanic sciences, including ENSO predictability, targeted observations, and ensemble forecasting. In this study, we investigate the computational cost of obtaining the CNOP by several methods. Differences and similarities, in terms of the computational error and cost in obtaining the CNOP, are compared among the sequential quadratic programming (SQP) algorithm, the limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) algorithm, and the spectral projected gradients (SPG2) algorithm. A theoretical grassland ecosystem model and the classical Lorenz model are used as examples. Numerical results demonstrate that the computational error is acceptable with all three algorithms. The computational cost of obtaining the CNOP is reduced by using the SQP algorithm. The experimental results also reveal that the L-BFGS algorithm is the most effective of the three optimization algorithms for obtaining the CNOP. The numerical results suggest a new approach and algorithm for obtaining the CNOP for large-scale optimization problems.
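
To make the algorithm comparison concrete, the sketch below shows how one of the three optimizers named above, L-BFGS, is typically invoked through SciPy on a toy objective; the Rosenbrock function merely stands in for the (negated) cost function whose constrained maximizer defines the CNOP, and none of the model details of the study are reproduced.

```python
import numpy as np
from scipy.optimize import minimize

# Toy objective: the Rosenbrock function, standing in for the cost whose
# optimizer over a constrained initial-perturbation set defines the CNOP.
def objective(x):
    return (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2

def gradient(x):
    return np.array([
        -2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
        200 * (x[1] - x[0]**2),
    ])

result = minimize(objective, x0=np.array([-1.2, 1.0]), jac=gradient,
                  method="L-BFGS-B", options={"maxiter": 200})
print(result.x, result.fun, result.nit)
```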

Relevance: 80.00%

Abstract:

To extend 2D cross-hole seismic data to the surrounding 3D seismic data, it is necessary to reconstruct high-frequency data from the low-frequency data, and blind deconvolution is a key technique for this. In this paper, an implementation of blind deconvolution is introduced, and an optimized preconditioned conjugate gradient method is used to improve the stability of the algorithm and reduce the computation. The high-frequency retrieved seismic data and the cross-hole seismic data are then combined for constrained inversion. Processing of real data shows that the method is effective. To address the problem that seismic data resolution cannot meet the requirements of reservoir prediction for thin fluvial-facies layers in eastern Chinese oil fields, a high-frequency data reconstruction method is proposed. The extrema of the seismic data are used to obtain a modulation function, which is applied to the original seismic data to produce the high-frequency part of the reconstructed data and rebuild wide-band data. This method greatly reduces computation and makes the parameters easy to adjust. In the output profile the original features of the seismic events are preserved, the common artifact of breaking events and introducing spurious zeros that produce aliasing is avoided, and interbedded details are enhanced compared with the original profiles. The effective band of the seismic data is expanded, and the method is validated by processing field data. To address the problem, in the exploration and development of eastern Chinese oil fields, that high-frequency log data and relatively low-frequency seismic data cannot be merged, a workflow of log-data extrapolation constrained by a time-phase model based on local wave decomposition is proposed. The seismic instantaneous phase is obtained by local wave decomposition to build the time-phase model, layers near the well are matched to establish the relation between log and seismic data, multiple log curves are extrapolated under the constraint of the seismic equiphase map, and high-precision attribute inversion sections are produced. To compute the instantaneous phase, a new local wave decomposition method, Hilbert transform mean mode decomposition (HMMD), is proposed to improve computation speed and noise immunity. The method is applied to high-resolution reservoir prediction in the Mao2 survey of the Daqing oil field, and attribute profiles of wave impedance, gamma ray, electrical resistivity, and sand membership degree are produced, with high resolution and good horizontal continuity. It proves to be an effective method for reservoir prediction and estimation.

Relevance: 80.00%

Abstract:

Impedance inversion is very important in seismic technology, and it is based on the seismic profile: good inversion results are derived from high-quality seismic profiles, which are obtained through high-resolution imaging. High-resolution processing demands a high signal-to-noise ratio, so improving the signal-to-noise ratio is very important for seismic inversion. The main idea is that the physical parameter (wave impedance), which describes the stratigraphy directly, is derived from seismic data that express the structural style only indirectly. The solution of impedance inversion based on the convolution model is non-unique, so applying prior information as a constraint in the inversion is a good approach. An improved impedance inversion technique is presented which overcomes the flaws of the traditional model and highlights the influence of structure. The impedance model is built considering constraints from the sedimentary model, the layer-filling style, and the conformity relations, so that impedance inversion constrained by geological rules can be realized. There are several innovations in this dissertation: 1. The best migration aperture is obtained from the angle between the traveltime surfaces of the diffracted and reflected waves; constrained by the structural model, the dips of the traveltime surfaces of reflected and diffracted waves are given. 2. The conventional F-XY noise-prediction method is improved, and the signal-to-noise ratio is raised. 3. Fully considering the probability distributions of the seismic data and the geological events, an objective function is constructed using Bayesian estimation theory as the criterion, so that mathematics is used to describe the content of the practical theory. 4. Considering the influence of structure, the seismic profile is interpreted to build structural models; a series of structural models, and corresponding impedance models, are built, and the high-frequency content of the inversion is controlled by geological rules. 5. The conjugate gradient method is selected for the solution process because it fits the demands of geophysics, and the efficiency of the algorithm is enhanced. As geological information is fully used, the impedance inversion results are reasonable and complex reservoirs can be forecast more reliably.
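
A generic form of the Bayesian objective function alluded to in innovation 3 above is the maximum a posteriori criterion below, which combines a data-misfit term with a prior on the impedance model and is minimized with a conjugate gradient iteration; the specific weights and priors of the dissertation are not reproduced here.

```latex
J(m) = \tfrac{1}{2}\,(d - Gm)^{\mathsf{T}} C_d^{-1} (d - Gm)
     + \tfrac{1}{2}\,(m - m_0)^{\mathsf{T}} C_m^{-1} (m - m_0),
```

where d is the observed seismic data, G the convolutional forward operator, m the impedance model, m_0 the prior model, and C_d, C_m the data and model covariance matrices.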

Relevance: 80.00%

Abstract:

As the largest and highest plateau on the Earth, the Tibetan Plateau has been a key location for understanding the processes of mountain building and plateau formation during the India-Asia continent-continent collision. As the front end of the collision, the eastern Tibetan Plateau has a very complex geological structure and is an ideal natural laboratory for investigating the formation and evolution of the plateau. The Institute of Geophysics, Chinese Academy of Sciences (CAS), carried out a magnetotelluric (MT) survey from XiaZayii to Qingshuihe in the eastern part of the plateau in 1998. After error analysis and distortion analysis, the Non-linear Conjugate Gradient inversion (NLCG), Rapid Relaxation Inversion (RRI), and 2D OCCAM inversion algorithms were used to invert the data. The three models obtained from the three algorithms show similar electrical structures, and the NLCG model fits the observed data better than the other two. According to the analysis of skin depth, the exploration depth of MT in Tibet is much shallower than in stable continental regions; for example, the Schmucker depth at a period of 100 s is less than 50 km in Tibet but more than 100 km in the Canadian Shield. There is a high-conductivity layer at a depth of several kilometers beneath the middle Qiangtang terrane, and at almost 30 kilometers beneath the northern Qiangtang terrane. Sensitivity analysis of the data indicates that the depth and resistivity of the crustal high-conductivity layer are reliable. The MT results reveal a high-conductivity layer at 20-40 km depth, where the seismic data show a low-velocity zone. Experiments show that rock will dehydrate and partially melt at the relevant temperatures and pressures, and fluids originating from dehydration and partial melting seriously change the rheological characteristics of the rock. Therefore, this crustal layer with low velocity and high conductivity is a weak layer. Seismological results show a low-velocity path at depths of 90-110 km beneath the southeastern Tibetan Plateau and adjacent areas, and analysis of the temperature and rheological properties of the lithosphere shows that this low-velocity path is also weak. GPS measurements and numerical simulation of crust-mantle deformation show that the movement rate differs between terranes. The regional strike derived from decomposition analysis of different frequency bands, together with seismic anisotropy, indicates that the crust and upper mantle move separately rather than as a whole, and there is material flow in the eastern and southeastern Tibetan Plateau. Therefore, the faults and the crustal and upper-mantle weak layers are three different boundaries for relative movement. These results support the "two-layer wedge plates" geodynamic model of Tibetan Plateau formation and evolution.
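
The exploration-depth argument above rests on the standard magnetotelluric skin-depth relation (closely related to the Schmucker depth quoted in the abstract). For a half-space of resistivity ρ in Ω·m and period T in s, the textbook formula is

```latex
\delta = \sqrt{\frac{2\rho}{\omega \mu_0}} = \sqrt{\frac{\rho T}{\pi \mu_0}} \approx 503\,\sqrt{\rho\, T}\ \text{m},
```

so at T = 100 s a conductive (low-resistivity) crust such as that beneath Tibet yields a much smaller penetration depth than a resistive shield.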

Relevance: 80.00%

Abstract:

The content of this paper is based on research carried out while the author took part in a key project of the NSFC and a key project of the Knowledge Innovation Program of CAS. The paper is organized around the boundary problem that inevitably arises in seismic migration and inversion. The boundary problem is a common issue in seismic data processing: in the presence of an artificial boundary, a reflected wave that does not exist in reality appears when the incident seismic wave reaches that boundary. This interferes with the propagation of the seismic wave and introduces spurious information into the processed profile; as a result, the quality of the whole seismic profile decreases and subsequent work suffers. This paper also reviews the development of seismic migration, describes its current status, and predicts possible breakthroughs. Addressing the absorbing boundary problem in migration, we derive a wide-angle absorbing boundary condition and compare it with the boundary effect in the fast approximate computation of Toeplitz systems. For the fast approximate inversion of Toeplitz systems, we introduce a preconditioned conjugate gradient method that employs a circulant extension to construct the preconditioning matrix; in particular, the use of a combined preconditioner reduces the boundary effect during computation. Comparing the boundary problem in seismic migration with that in Toeplitz matrix inversion, we find that changing the boundary condition changes the eigenvalues of the coefficient matrix, and this change in eigenvalues causes the boundary effect. The author gives a qualitative analysis of the relationship between the coefficient-matrix eigenvalues and the boundary effect; a quantitative analysis is worthy of further research.
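
The circulant-extension preconditioning described above can be illustrated in a few lines. The toy example below builds Strang's circulant preconditioner for a symmetric positive-definite Toeplitz system and applies it inside SciPy's CG via FFTs; it is a standard construction used here as a stand-in for the combined preconditioner of the paper, and the matrix entries are illustrative.

```python
import numpy as np
from scipy.linalg import toeplitz
from scipy.sparse.linalg import cg, LinearOperator

# Symmetric positive-definite Toeplitz system (first column chosen so the
# matrix is diagonally dominant); size and values are illustrative only.
n = 256
t = np.zeros(n)
t[0] = 2.5
t[1:] = 1.0 / (np.arange(1, n)**2 + 1.0)   # slowly decaying off-diagonals
A = toeplitz(t)
b = np.random.default_rng(1).standard_normal(n)

# Strang's circulant preconditioner: copy the central band of t into a
# circulant first column, then invert it with FFTs (O(n log n) per apply).
c = t.copy()
c[n // 2 + 1:] = t[1:n - n // 2][::-1]
eigs = np.fft.fft(c).real                   # eigenvalues of the circulant

def precond(v):
    return np.real(np.fft.ifft(np.fft.fft(v) / eigs))

M = LinearOperator((n, n), matvec=precond)
x, info = cg(A, b, M=M, maxiter=200)
print("info:", info, "residual:", np.linalg.norm(A @ x - b))
```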

Relevance: 80.00%

Abstract:

A learning-based framework is proposed for estimating human body pose from a single image. Given a differentiable function that maps from pose space to image feature space, the goal is to invert the process: estimate the pose given only image features. The inversion is an ill-posed problem, as the inverse mapping is one-to-many. Hence multiple solutions exist, and it is desirable to restrict the solution space to a smaller subset of feasible solutions. For example, not all human body poses are feasible, due to anthropometric constraints. Since the space of feasible solutions may not admit a closed-form description, the proposed framework seeks to exploit machine learning techniques to learn an approximation that is smoothly parameterized over such a space. One such technique is Gaussian Process Latent Variable Modelling. Scaled conjugate gradient is then used to find the best matching pose in the space of feasible solutions given an input image. The formulation allows easy incorporation of various constraints, e.g. temporal consistency and anthropometric constraints. The performance of the proposed approach is evaluated on the task of upper-body pose estimation from silhouettes and compared with the Specialized Mapping Architecture. The estimation accuracy of the Specialized Mapping Architecture is at least one standard deviation worse than that of the proposed approach in experiments with synthetic data. In experiments with real video of humans performing gestures, the proposed approach produces qualitatively better estimation results.
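
The inversion-by-optimization idea described above can be sketched with a toy forward map: given image features, search the (latent) pose space for the pose whose predicted features best match the observation. SciPy's nonlinear conjugate gradient ("CG", a Polak-Ribière variant) stands in here for the scaled conjugate gradient used in the paper, and the forward function and feature vector are entirely hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical smooth forward map f: latent pose -> image features.
def forward(z):
    return np.array([np.sin(z[0]) + z[1], np.cos(z[1]) - z[0], z[0] * z[1]])

observed = np.array([0.9, 0.1, 0.5])   # illustrative feature vector

# Pose estimation as inversion: find the latent pose whose predicted
# features best match the observation, by gradient-based search.
def cost(z):
    d = forward(z) - observed
    return d @ d

result = minimize(cost, x0=np.zeros(2), method="CG")
print(result.x, result.fun)
```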

Relevance: 80.00%

Abstract:

The parallelization of existing/industrial electromagnetic software using the bulk synchronous parallel (BSP) computation model is presented. The software employs the finite element method with a preconditioned conjugate gradient-type solution for the resulting linear systems of equations. A geometric mesh-partitioning approach is applied within the BSP framework for the assembly and solution phases of the finite element computation. This is combined with a nongeometric, data-driven parallel quadrature procedure for the evaluation of right-hand-side terms in applications involving coil fields. A similar parallel decomposition is applied to the parallel calculation of electron beam trajectories required for the design of tube devices. The BSP parallelization approach adopted is fully portable, conceptually simple, and cost-effective, and it can be applied to a wide range of finite element applications not necessarily related to electromagnetics.