49 results for Inverse computational method
in CentAUR: Central Archive University of Reading - UK
Abstract:
Finding the smallest eigenvalue of a given square matrix A of order n is a computationally very intensive problem. The most popular method for this problem is the Inverse Power Method, which uses an LU-decomposition and forward and backward solves of the factored system at every iteration step. An alternative to this method is the Resolvent Monte Carlo method, which represents the resolvent matrix [I − qA]^(−m) as a series and then performs Monte Carlo iterations (random walks) on the elements of the matrix. This leads to great savings in computation, but the method has many restrictions and very slow convergence. In this paper we propose a method that combines a fast Monte Carlo procedure for finding the inverse matrix, a refinement procedure to improve the approximation of the inverse if necessary, and Monte Carlo power iterations to compute the smallest eigenvalue. We provide not only theoretical estimates of accuracy and convergence but also results from numerical tests performed on a number of test matrices.
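As a concrete illustration of the baseline this abstract compares against, here is a minimal sketch of the classical Inverse Power Method, assuming NumPy/SciPy; the 3×3 symmetric matrix is illustrative, not one of the paper's test matrices:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def inverse_power_method(A, tol=1e-10, max_iter=500):
    """Smallest-magnitude eigenvalue of A: factor once, solve A y = x each step."""
    lu, piv = lu_factor(A)            # one LU decomposition, reused every iteration
    x = np.ones(A.shape[0]) / np.sqrt(A.shape[0])
    lam = 0.0
    for _ in range(max_iter):
        y = lu_solve((lu, piv), x)    # forward and backward solve of the factored system
        x_new = y / np.linalg.norm(y)
        lam_new = x_new @ A @ x_new   # Rayleigh quotient eigenvalue estimate
        if abs(lam_new - lam) < tol:
            break
        lam, x = lam_new, x_new
    return lam_new, x_new

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
lam, v = inverse_power_method(A)      # smallest eigenvalue here is 3 - sqrt(3)
```

Once the O(n³) factorization is done, each iteration costs only the O(n²) triangular solves; this per-step cost is what the Monte Carlo alternatives aim to undercut.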
Abstract:
Some points of the paper by N.K. Nichols (see ibid., vol. AC-31, p. 643-5, 1986), concerning the robust pole assignment of linear multi-input systems, are clarified. It is stressed that the minimization of the condition number of the closed-loop eigenvector matrix does not necessarily lead to robustness of the pole assignment. It is shown why the computational method, which Nichols claims is robust, is in fact numerically unstable with respect to the determination of the gain matrix. In replying, Nichols presents arguments to support the choice of the conditioning of the closed-loop poles as a measure of robustness and to show that the methods of J. Kautsky, N.K. Nichols and P. Van Dooren (1985) are stable in the sense that they produce accurate solutions to well-conditioned problems.
Abstract:
In this paper a cell-by-cell anisotropic adaptive mesh technique is added to an existing staggered mesh Lagrange plus remap finite element ALE code for the solution of the Euler equations. The quadrilateral finite elements may be subdivided isotropically or anisotropically and a hierarchical data structure is employed. An efficient computational method is proposed, which only solves on the finest level of resolution that exists for each part of the domain, with disjoint or hanging nodes being used at resolution transitions. The Lagrangian, equipotential mesh relaxation and advection (solution remapping) steps are generalised so that they may be applied on the dynamic mesh. It is shown that for a radial Sod problem and a two-dimensional Riemann problem the anisotropic adaptive mesh method runs over eight times faster.
Abstract:
Soil organic carbon (SOC) plays a vital role in ecosystem function, determining soil fertility, water-holding capacity and susceptibility to land degradation. In addition, SOC is related to atmospheric CO2 levels, with soils having the potential for C release or sequestration, depending on land use, land management and climate. The United Nations Framework Convention on Climate Change and its Kyoto Protocol, and the other United Nations Conventions to Combat Desertification and on Biodiversity, all recognize the importance of SOC and point to the need for quantification of SOC stocks and changes. An understanding of SOC stocks and changes at the national and regional scale is necessary to further our understanding of the global C cycle, to assess the responses of terrestrial ecosystems to climate change and to aid policy makers in making land use/management decisions. Several studies have considered SOC stocks at the plot scale, but these are site specific and of limited value in making inferences about larger areas. Some studies have used empirical methods to estimate SOC stocks and changes at the regional scale, but such studies are limited in their ability to project future changes, and most have been carried out using temperate data sets. The computational method outlined by the Intergovernmental Panel on Climate Change (IPCC) has been used to estimate SOC stock changes at the regional scale in several studies, including a recent study considering five contrasting ecoregions. This 'one-step' approach fails to account for the dynamic manner in which SOC changes are likely to occur following changes in land use and land management. A dynamic modelling approach allows estimates to be made in a manner that accounts for the underlying processes leading to SOC change. Ecosystem models designed for site-scale applications can be linked to spatial databases, giving spatially explicit results that allow geographic areas of change in SOC stocks to be identified.
Some studies have used variations on this approach to estimate SOC stock changes at the sub-national and national scale for areas of the USA and Europe and at the watershed scale for areas of Mexico and Cuba. However, a need remained for a national and regional scale, spatially explicit system that is generically applicable and can be applied to as wide a range of soil types, climates and land uses as possible. The Global Environment Facility Soil Organic Carbon (GEFSOC) Modelling System was developed in response to this need. The GEFSOC system allows estimates of SOC stocks and changes to be made for diverse conditions, providing essential information for countries wishing to take part in an emerging C market, and bringing us closer to an understanding of the future role of soils in the global C cycle. (C) 2007 Elsevier B.V. All rights reserved.
Abstract:
It is now possible to assay a large number of genetic markers from patients in clinical trials in order to tailor drugs with respect to efficacy. The statistical methodology for analysing such massive data sets is challenging. The most popular type of statistical analysis is to use a univariate test for each genetic marker, once all the data from a clinical study have been collected. This paper presents a sequential method for conducting an omnibus test for detecting gene-drug interactions across the genome, thus allowing informed decisions at the earliest opportunity and overcoming the multiple testing problems of conducting many univariate tests. We first propose an omnibus test for a fixed sample size. This test is based on combining F-statistics that test for an interaction between treatment and each individual single nucleotide polymorphism (SNP). As SNPs tend to be correlated, we use permutations to calculate a global p-value. We then extend our omnibus test to the sequential case. In order to control the type I error rate, we propose a sequential method that uses permutations to obtain the stopping boundaries. The results of a simulation study show that the sequential permutation method is more powerful than alternative sequential methods that control the type I error rate, such as the inverse-normal method. The proposed method is flexible, as we do not need to assume a mode of inheritance and can also adjust for confounding factors. An application to real clinical data illustrates that the method is computationally feasible for a large number of SNPs. Copyright (c) 2007 John Wiley & Sons, Ltd.
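A minimal sketch of the fixed-sample-size part of this idea, assuming a continuous response, a binary treatment and additively coded SNP genotypes; the combination rule (sum of per-SNP interaction F-statistics), the simulated data and all variable names are illustrative, not the paper's:

```python
import numpy as np

def interaction_F(y, t, g):
    """F-statistic for the treatment-by-SNP interaction in y ~ t + g + t*g."""
    n = len(y)
    X_red = np.column_stack([np.ones(n), t, g])
    X_full = np.column_stack([X_red, t * g])
    rss = lambda X: np.sum((y - X @ np.linalg.lstsq(X, y, rcond=None)[0]) ** 2)
    rss_r, rss_f = rss(X_red), rss(X_full)
    return (rss_r - rss_f) / (rss_f / (n - X_full.shape[1]))

def omnibus_perm_test(y, t, G, n_perm=200, rng=None):
    """Global p-value for any gene-drug interaction: combine per-SNP F-statistics
    by their sum and calibrate against permutations of the treatment labels."""
    rng = np.random.default_rng(rng)
    stat = sum(interaction_F(y, t, g) for g in G.T)
    null = np.empty(n_perm)
    for b in range(n_perm):
        t_perm = rng.permutation(t)
        null[b] = sum(interaction_F(y, t_perm, g) for g in G.T)
    return (1 + np.sum(null >= stat)) / (1 + n_perm)

# Illustrative simulated trial: 120 patients, 5 SNPs, SNP 0 interacts with treatment.
rng = np.random.default_rng(0)
n = 120
G_snp = rng.integers(0, 3, size=(n, 5)).astype(float)   # genotypes coded 0/1/2
treat = rng.integers(0, 2, size=n).astype(float)
y = 2.0 * treat * G_snp[:, 0] + rng.normal(size=n)
p_value = omnibus_perm_test(y, treat, G_snp, n_perm=200, rng=1)
```

Permuting the treatment labels preserves the correlation structure among the SNPs, which is why the global p-value remains valid without assuming independence of the per-SNP tests.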
Abstract:
Virus capsids are primed for disassembly, yet capsid integrity is key to generating a protective immune response. Foot-and-mouth disease virus (FMDV) capsids comprise identical pentameric protein subunits held together by tenuous noncovalent interactions and are often unstable. Chemically inactivated or recombinant empty capsids, which could form the basis of future vaccines, are even less stable than live virus. Here we devised a computational method to assess the relative stability of protein-protein interfaces and used it to design improved candidate vaccines for two poorly stable, but globally important, serotypes of FMDV: O and SAT2. We used a restrained molecular dynamics strategy to rank mutations predicted to strengthen the pentamer interfaces and applied the results to produce stabilized capsids. Structural analyses and stability assays confirmed the predictions, and vaccinated animals generated improved neutralizing-antibody responses to stabilized particles compared to parental viruses and wild-type capsids.
Abstract:
Many recent inverse scattering techniques have been designed for single-frequency scattered fields in the frequency domain. In practice, however, the data are collected in the time domain. Frequency domain inverse scattering algorithms obviously apply to time-harmonic scattering, or nearly time-harmonic scattering, through application of the Fourier transform. Fourier transform techniques can also be applied to non-time-harmonic scattering from pulses. Our goal here is twofold: first, to establish conditions on the time-dependent waves that provide a correspondence between time domain and frequency domain inverse scattering via Fourier transforms without recourse to the conventional limiting amplitude principle; second, to apply the analysis in the first part of this work to the extension of a particular scattering technique, namely the point source method, to scattering from the requisite pulses. Numerical examples illustrate the method and suggest that admissible pulses deliver superior reconstructions compared to straight averaging of multi-frequency data. Copyright (C) 2006 John Wiley & Sons, Ltd.
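The Fourier-transform step that links the two domains can be illustrated in a few lines, assuming NumPy and an illustrative Gaussian-modulated pulse; this sketches only the transform that extracts single-frequency data from a time-domain signal, not the point source method itself:

```python
import numpy as np

fs = 1000.0                                   # sampling rate [Hz] (illustrative)
t = np.arange(0, 2.0, 1 / fs)
f0 = 50.0                                     # carrier frequency of the pulse [Hz]
pulse = np.exp(-(t - 1.0) ** 2 / (2 * 0.05 ** 2)) * np.cos(2 * np.pi * f0 * t)

spec = np.fft.rfft(pulse) / fs                # approximate continuous Fourier transform
freqs = np.fft.rfftfreq(t.size, 1 / fs)

# Single-frequency ("time-harmonic") data at the carrier, of the kind a
# frequency-domain inverse scattering algorithm would take as input:
u_f0 = spec[np.argmin(np.abs(freqs - f0))]
```

Because the pulse is narrow in time, its spectrum spreads around the carrier; the conditions on admissible pulses discussed in the abstract govern when such extracted data are usable in place of genuinely time-harmonic measurements.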
Abstract:
Following a malicious or accidental atmospheric release in an outdoor environment, it is essential for first responders to ensure safety by identifying areas where human life may be in danger. For this to happen quickly, reliable information is needed on the source strength and location, and the type of chemical agent released. We present here an inverse modelling technique that estimates the source strength and location of such a release, together with the uncertainty in those estimates, using a limited number of concentration measurements from a network of chemical sensors, considering a single, steady, ground-level source. The technique is evaluated using data from a set of dispersion experiments conducted in a meteorological wind tunnel, where simultaneous measurements of concentration time series were obtained in the plume from a ground-level point-source emission of a passive tracer. In particular, we analyze the response to the number of sensors deployed and their arrangement, and to sampling and model errors. We find that the inverse algorithm can generate acceptable estimates of the source characteristics with as few as four sensors, provided these are well placed and the sampling error is controlled. Configurations with at least three sensors in a profile across the plume were found to be superior to other arrangements examined. Analysis of the influence of sampling error due to the use of short averaging times showed that the uncertainty in the source estimates grew as the sampling time decreased. This demonstrated that averaging times greater than about 5 min (full-scale time) lead to acceptable accuracy.
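To make the estimation problem concrete, here is a deliberately simplified deterministic sketch: a Gaussian-plume forward model with illustrative dispersion coefficients and a grid search over candidate locations. The paper's technique also quantifies the uncertainty in the estimates, which this sketch omits, and all numerical constants below are assumptions:

```python
import numpy as np

def plume(Q, xs, ys, sensors, u=2.0):
    """Ground-level Gaussian-plume concentration at sensor positions for a
    steady point source of strength Q at (xs, ys); wind along +x at speed u."""
    dx = sensors[:, 0] - xs
    dy = sensors[:, 1] - ys
    c = np.zeros(len(sensors))
    down = dx > 0                              # only downwind sensors see the plume
    sy = 0.08 * dx[down] ** 0.9                # illustrative dispersion coefficients
    sz = 0.06 * dx[down] ** 0.9
    c[down] = Q / (np.pi * u * sy * sz) * np.exp(-dy[down] ** 2 / (2 * sy ** 2))
    return c

def estimate_source(obs, sensors, grid_x, grid_y):
    """Grid search over candidate locations; at each one the best-fit strength
    has a closed form because concentration is linear in Q."""
    best = (np.inf, None)
    for xs in grid_x:
        for ys in grid_y:
            f = plume(1.0, xs, ys, sensors)
            denom = f @ f
            if denom == 0:
                continue
            Q = max((f @ obs) / denom, 0.0)
            misfit = np.sum((obs - Q * f) ** 2)
            if misfit < best[0]:
                best = (misfit, (Q, xs, ys))
    return best[1]

# Synthetic check: eight downwind sensors, true source Q = 5 at the origin.
sensors = np.array([[50., -10.], [50., 0.], [50., 10.], [100., -20.],
                    [100., 0.], [100., 20.], [150., 0.], [200., 10.]])
obs = plume(5.0, 0.0, 0.0, sensors)
Q_hat, xs_hat, ys_hat = estimate_source(obs, sensors,
                                        np.linspace(-20, 20, 5),
                                        np.linspace(-20, 20, 5))
```

Reducing the search to position only, with the strength recovered in closed form, is what keeps such inversions fast enough for emergency response use.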
Abstract:
We present a new method to determine mesospheric electron densities from partially reflected medium frequency radar pulses. The technique uses an optimal estimation inverse method and retrieves both an electron density profile and a gradient electron density profile. As well as accounting for the absorption of the two magnetoionic modes formed by ionospheric birefringence of each radar pulse, the forward model of the retrieval parameterises possible Fresnel scatter of each mode by fine electronic structure, phase changes of each mode due to Faraday rotation and the dependence of the amplitudes of the backscattered modes upon pulse width. Validation results indicate that known profiles can be retrieved and that χ² tests upon retrieval parameters satisfy validity criteria. Application to measurements shows that retrieved electron density profiles are consistent with accepted ideas about seasonal variability of electron densities and their dependence upon nitric oxide production and transport.
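For orientation, optimal-estimation retrievals of this kind typically iterate a maximum a posteriori update of the standard Rodgers-style form below; the notation is generic rather than taken from the paper: $x_a$ is the a priori profile with covariance $S_a$, $y$ the measurements with error covariance $S_\epsilon$, $F$ the forward model and $K = \partial F / \partial x$ its Jacobian:

```latex
\hat{x} = x_a
  + \left( K^{\top} S_{\epsilon}^{-1} K + S_a^{-1} \right)^{-1}
    K^{\top} S_{\epsilon}^{-1} \left( y - F(x_a) \right)
```

The same matrices also yield the retrieval covariance, which is what makes the χ² validity tests mentioned above possible.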
Abstract:
Inverse methods are widely used in various fields of atmospheric science. However, such methods are not commonly used within the boundary-layer community, where robust observations of surface fluxes are a particular concern. We present a new technique for deriving surface sensible heat fluxes from boundary-layer turbulence observations using an inverse method. Doppler lidar observations of vertical velocity variance are combined with two well-known mixed-layer scaling forward models for a convective boundary layer (CBL). The inverse method is validated using large-eddy simulations of a CBL with increasing wind speed. The majority of the estimated heat fluxes agree within error with the prescribed heat flux, across all wind speeds tested. The method is then applied to Doppler lidar data from the Chilbolton Observatory, UK. Heat fluxes are compared with those from a mast-mounted sonic anemometer. Errors in estimated heat fluxes are on average 18%, an improvement on previous techniques. However, a significant negative bias is observed (on average −63%) that is more pronounced in the morning. Results are improved for the fully developed CBL later in the day, which suggests that the bias is largely related to the choice of forward model, which is kept deliberately simple for this study. Overall, the inverse method provided reasonable flux estimates for the simple case of a CBL. Results shown here demonstrate that this method has promise in utilizing ground-based remote sensing to derive surface fluxes. Extension of the method is relatively straightforward, and could include more complex forward models, or other measurements.
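A minimal sketch of such an inversion, assuming a single Lenschow-type mixed-layer scaling profile for the vertical-velocity variance and noise-free data; the constants, the profile shape and the conversion from the convective velocity scale w* to a kinematic heat flux follow the usual convective scaling definitions and are assumptions here, not the paper's exact forward models:

```python
import numpy as np

G, THETA = 9.81, 300.0   # gravity [m s^-2]; reference potential temperature [K] (assumed)

def sigw2_profile(z_over_zi, w_star):
    """Mixed-layer scaling (Lenschow-type) vertical-velocity variance profile."""
    return w_star ** 2 * 1.8 * z_over_zi ** (2 / 3) * (1 - 0.8 * z_over_zi) ** 2

def invert_heat_flux(z, sigw2_obs, zi):
    """Least-squares fit of w*^2 (the model is linear in it), then convert to a
    kinematic surface heat flux via the definition w* = (g zi H / theta)^(1/3)."""
    shape = 1.8 * (z / zi) ** (2 / 3) * (1 - 0.8 * z / zi) ** 2
    w_star2 = (shape @ sigw2_obs) / (shape @ shape)
    return w_star2 ** 1.5 * THETA / (G * zi)

# Synthetic check: a 0.1 K m s^-1 flux in a 1000 m deep CBL should be recovered.
zi, H_true = 1000.0, 0.1
w_star_true = (G * zi * H_true / THETA) ** (1 / 3)
z = np.linspace(100.0, 900.0, 9)                  # lidar range gates [m]
H_hat = invert_heat_flux(z, sigw2_profile(z / zi, w_star_true), zi)
```

Because the heat flux enters through w*³, small errors in the fitted variance are amplified in the flux estimate, which is one reason error characterization matters in this kind of retrieval.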
Abstract:
We consider the problem of scattering of a time-harmonic acoustic incident plane wave by a sound-soft convex polygon. For standard boundary or finite element methods, with a piecewise polynomial approximation space, the computational cost required to achieve a prescribed level of accuracy grows linearly with respect to the frequency of the incident wave. Recently, Chandler-Wilde and Langdon proposed a novel Galerkin boundary element method for this problem for which, by incorporating the products of plane wave basis functions with piecewise polynomials supported on a graded mesh into the approximation space, they were able to demonstrate that the number of degrees of freedom required to achieve a prescribed level of accuracy grows only logarithmically with respect to the frequency. Here we propose a related collocation method, using the same approximation space, for which we demonstrate via numerical experiments a convergence rate identical to that achieved with the Galerkin scheme, but with a substantially reduced computational cost.
Abstract:
In this paper we consider the problem of time-harmonic acoustic scattering in two dimensions by convex polygons. Standard boundary or finite element methods for acoustic scattering problems have a computational cost that grows at least linearly as a function of the frequency of the incident wave. Here we present a novel Galerkin boundary element method, which uses an approximation space consisting of the products of plane waves with piecewise polynomials supported on a graded mesh, with smaller elements closer to the corners of the polygon. We prove that the best approximation from the approximation space requires a number of degrees of freedom to achieve a prescribed level of accuracy that grows only logarithmically as a function of the frequency. Numerical results demonstrate the same logarithmic dependence on the frequency for the Galerkin method solution. Our boundary element method is a discretization of a well-known second kind combined-layer-potential integral equation. We provide a proof that this equation and its adjoint are well-posed and equivalent to the boundary value problem in a Sobolev space setting for general Lipschitz domains.
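For orientation, a standard textbook form of a second-kind combined-layer-potential equation for the sound-soft (Dirichlet) problem takes the unknown to be the normal derivative of the total field on the boundary Γ, with S the single-layer operator, K′ the adjoint double-layer operator, u^i the incident wave and η > 0 a coupling parameter; this generic form is given as an assumption for context, not as the paper's exact equation:

```latex
\left( \tfrac{1}{2} I + K' - i \eta S \right) \frac{\partial u}{\partial n}
  = \frac{\partial u^{i}}{\partial n} - i \eta \, u^{i}
  \quad \text{on } \Gamma .
```

The coupling parameter η ensures uniqueness at all frequencies, which is what makes the combined equation well-posed where the single- and double-layer equations alone are not.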
Abstract:
We develop a new multiwave version of the range test for shape reconstruction in inverse scattering theory. The range test [R. Potthast, et al., A ‘range test’ for determining scatterers with unknown physical properties, Inverse Problems 19(3) (2003) 533–547] was originally proposed to obtain knowledge about an unknown scatterer when the far field pattern for only one plane wave is given. Here, we extend the method to the case of multiple waves and show that the full shape of the unknown scatterer can be reconstructed. We will further clarify the relation between the range test methods, the potential method [A. Kirsch, R. Kress, On an integral equation of the first kind in inverse acoustic scattering, in: Inverse Problems (Oberwolfach, 1986), Internationale Schriftenreihe zur Numerischen Mathematik, vol. 77, Birkhäuser, Basel, 1986, pp. 93–102] and the singular sources method [R. Potthast, Point sources and multipoles in inverse scattering theory, Habilitation Thesis, Göttingen, 1999]. In particular, we propose a new version of the Kirsch–Kress method using the range test and a new approach to the singular sources method based on the range test and potential method. Numerical examples of reconstructions for all four methods are provided.