13 results for Conjugate gradient methods
in CentAUR: Central Archive University of Reading - UK
Abstract:
We consider the linear equality-constrained least squares problem (LSE) of minimizing $\|c - Gx\|_2$, subject to the constraint $Ex = p$. A preconditioned conjugate gradient method is applied to the Kuhn–Tucker equations associated with the LSE problem. We show that our method is well suited for structural optimization problems in reliability analysis and optimal design. Numerical tests are performed on an Alliant FX/8 multiprocessor and a Cray X-MP using some practical structural analysis data.
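For reference, one standard way to write the Kuhn–Tucker equations for this LSE problem introduces a Lagrange multiplier $\lambda$ for the constraint (this is the generic form; the paper's particular augmented system and preconditioner are not reproduced here):

$$\begin{pmatrix} G^T G & E^T \\ E & 0 \end{pmatrix} \begin{pmatrix} x \\ \lambda \end{pmatrix} = \begin{pmatrix} G^T c \\ p \end{pmatrix},$$

a symmetric indefinite saddle-point system to which preconditioned conjugate gradients can then be applied.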
Abstract:
We consider conjugate-gradient-like methods for solving block symmetric indefinite linear systems that arise from saddle-point problems or, in particular, regularizations thereof. Such methods require preconditioners that preserve certain sub-blocks from the original systems but allow considerable flexibility for the remaining blocks. We construct a number of families of implicit factorizations that are capable of reproducing the required sub-blocks and (some of) the remainder. These generalize known implicit factorizations for the unregularized case. Improved eigenvalue clustering is possible if, additionally, some of the noncrucial blocks are reproduced. Numerical experiments confirm that these implicit-factorization preconditioners can be very effective in practice.
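To sketch the structure involved (generic block names, not the paper's notation): for a regularized saddle-point matrix $K$, a constraint-style preconditioner $P$ preserves the constraint block $A$ and the regularization block $C$ while replacing the leading block $H$ by a cheaper approximation $M$,

$$K = \begin{pmatrix} H & A^T \\ A & -C \end{pmatrix}, \qquad P = \begin{pmatrix} M & A^T \\ A & -C \end{pmatrix},$$

with the implicit factorizations of the abstract supplying $P$ in factored form so that the required sub-blocks are reproduced exactly.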
Abstract:
We consider the application of the conjugate gradient method to the solution of large, symmetric indefinite linear systems. Special emphasis is put on the use of constraint preconditioners and a new factorization that can reduce the number of flops required by the preconditioning step. Results concerning the eigenvalues of the preconditioned matrix and its minimum polynomial are given. Numerical experiments validate these conclusions.
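For orientation, a minimal textbook sketch of the preconditioned conjugate gradient iteration itself is given below; the constraint preconditioner and the flop-saving factorization studied in the paper are not reproduced, and A and apply_prec are placeholder callables:

    import numpy as np

    def pcg(A, b, apply_prec, x0=None, tol=1e-8, maxiter=500):
        """Textbook preconditioned conjugate gradients for A x = b.

        A          -- callable returning the matrix-vector product A @ v
        apply_prec -- callable returning an approximate solve P^{-1} v
        """
        x = np.zeros_like(b) if x0 is None else x0.copy()
        r = b - A(x)                    # initial residual
        z = apply_prec(r)               # preconditioned residual
        p = z.copy()
        rz = r @ z
        for k in range(maxiter):
            Ap = A(p)
            alpha = rz / (p @ Ap)       # step length along search direction
            x += alpha * p
            r -= alpha * Ap
            if np.linalg.norm(r) < tol * np.linalg.norm(b):
                break
            z = apply_prec(r)
            rz_new = r @ z
            beta = rz_new / rz          # conjugacy-preserving update
            p = z + beta * p
            rz = rz_new
        return x, k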
Abstract:
Quasi-Newton-Raphson minimization and conjugate gradient minimization have been used to solve the crystal structures of famotidine form B and capsaicin from X-ray powder diffraction data and characterize the χ² agreement surfaces. One million quasi-Newton-Raphson minimizations found the famotidine global minimum with a frequency of ca 1 in 5000 and the capsaicin global minimum with a frequency of ca 1 in 10 000. These results, which are corroborated by conjugate gradient minimization, demonstrate the existence of numerous pathways from some of the highest points on these χ² agreement surfaces to the respective global minima, which are passable using only downhill moves. This important observation has significant ramifications for the development of improved structure determination algorithms.
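The multistart protocol lends itself to a short sketch (with a toy stand-in objective, since the real χ² agreement surface requires the measured diffraction pattern; all names and values below are illustrative):

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)

    def chi2(params):
        # Placeholder for the chi-squared agreement between a structural
        # model and the observed powder pattern (a toy surface with many
        # local minima, standing in for the real objective).
        return float(np.sum((np.sin(3.0 * params) + 0.1 * params**2) ** 2))

    # Many random starting models, each relaxed strictly downhill by
    # conjugate gradient minimization; count how often the best minimum
    # found (taken as the global minimum) is reached.
    best, hits, n_starts = np.inf, 0, 200
    for _ in range(n_starts):
        x0 = rng.uniform(-5.0, 5.0, size=4)        # random starting point
        res = minimize(chi2, x0, method="CG")      # conjugate gradient descent
        if res.fun < best - 1e-6:
            best, hits = res.fun, 1
        elif abs(res.fun - best) < 1e-6:
            hits += 1
    print(f"best chi2 {best:.4f} reached in {hits} of {n_starts} starts")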
Abstract:
The speed of convergence while training is an important consideration in the use of neural nets. The authors outline a new training algorithm which reduces both the number of iterations and training time required for convergence of multilayer perceptrons, compared to standard back-propagation and conjugate gradient descent algorithms.
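The new algorithm itself is not described in this abstract; the sketch below shows only the conjugate-gradient-descent baseline it is compared against, training a toy multilayer perceptron by minimizing the loss over its flattened weights (architecture and data are illustrative assumptions):

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(1)
    X = rng.uniform(-1.0, 1.0, (200, 2))
    y = (X[:, 0] * X[:, 1] > 0).astype(float)      # toy XOR-like target

    shapes = [(2, 8), (8,), (8, 1), (1,)]          # one hidden layer
    sizes = [int(np.prod(s)) for s in shapes]

    def unpack(w):
        # Slice the flat weight vector back into per-layer arrays.
        out, i = [], 0
        for s, n in zip(shapes, sizes):
            out.append(w[i:i + n].reshape(s))
            i += n
        return out

    def loss(w):
        W1, b1, W2, b2 = unpack(w)
        h = np.tanh(X @ W1 + b1)                   # hidden activations
        p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))   # sigmoid output
        return float(np.mean((p.ravel() - y) ** 2))

    w0 = rng.normal(scale=0.5, size=sum(sizes))
    res = minimize(loss, w0, method="CG")          # conjugate gradient descent
    print(f"final training MSE {res.fun:.4f} after {res.nit} iterations")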
Abstract:
We derive energy-norm a posteriori error bounds, using gradient recovery (ZZ) estimators to control the spatial error, for fully discrete schemes for the linear heat equation. This appears to be the first completely rigorous derivation of ZZ estimators for fully discrete schemes for evolution problems, without any restrictive assumption on the timestep size. An essential tool for the analysis is the elliptic reconstruction technique. Our theoretical results are backed with extensive numerical experimentation aimed at (a) testing the practical sharpness and asymptotic behaviour of the error estimator against the error, and (b) deriving an adaptive method based on our estimators. A further novelty is an implementation of a coarsening error "preindicator", with a complete implementation guide in ALBERTA in the appendix.
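In one space dimension the gradient-recovery idea reduces to a few lines; the following is a hypothetical toy indicator for a piecewise-linear function on a mesh, not the fully discrete heat-equation estimator analysed in the paper:

    import numpy as np

    x = np.linspace(0.0, 1.0, 21)        # mesh nodes
    U = np.sin(np.pi * x)                # nodal values of u_h (toy data)
    h = np.diff(x)                       # element sizes
    g = np.diff(U) / h                   # elementwise (constant) gradient

    # ZZ recovery: length-weighted averaging of element gradients to nodes.
    G = np.empty_like(x)
    G[0], G[-1] = g[0], g[-1]
    G[1:-1] = (h[:-1] * g[:-1] + h[1:] * g[1:]) / (h[:-1] + h[1:])

    # Element indicator: L2 norm of (recovered - raw) gradient, with the
    # recovered gradient taken as the linear interpolant of G. For a
    # linear d(x) running from a to b over an element of size h, the
    # exact integral of d^2 is h * (a^2 + a*b + b^2) / 3.
    a, b = G[:-1] - g, G[1:] - g
    eta = np.sqrt(h * (a * a + a * b + b * b) / 3.0)
    print(f"estimated gradient error {np.sqrt(np.sum(eta**2)):.3e}")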
Abstract:
With the rapid development of proteomics, a number of different methods have appeared for the basic task of protein identification. We made a simple comparison between a common liquid chromatography-tandem mass spectrometry (LC-MS/MS) workflow using an ion trap mass spectrometer and a combined LC-MS and LC-MS/MS method using Fourier transform ion cyclotron resonance (FTICR) mass spectrometry and accurate peptide masses. To compare the two methods for protein identification, we grew and extracted proteins from E. coli using established protocols. Cystines were reduced and alkylated, and proteins digested by trypsin. The resulting peptide mixtures were separated by reversed-phase liquid chromatography using a 4 h gradient from 0 to 50% acetonitrile over a C18 reversed-phase column. The LC separation was coupled on-line to either a Bruker Esquire HCT ion trap or a Bruker 7 tesla APEX-Qe Qh-FTICR hybrid mass spectrometer. Data-dependent Qh-FTICR-MS/MS spectra were acquired using the quadrupole mass filter and collisionally induced dissociation in the external hexapole trap. In both schemes, proteins were identified by Mascot MS/MS ion searches, and the peptides identified from these proteins in the FTICR MS/MS data were used for automatic internal calibration of the FTICR-MS data, together with ambient polydimethylcyclosiloxane ions.
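The internal-calibration step is the one piece that reduces to a short calculation: fit a correction to the observed m/z values using peptides whose theoretical masses are known from the MS/MS identifications. Below is a hypothetical linear-correction sketch (all masses are made-up illustration values; real FTICR calibration typically uses an instrument-specific calibration law):

    import numpy as np

    # Made-up observed vs. theoretical m/z pairs for identified peptides.
    observed = np.array([500.2547, 722.3251, 1045.5512, 1296.6857])
    theoretical = np.array([500.2560, 722.3270, 1045.5530, 1296.6850])

    # Least-squares fit of a linear correction mz -> slope * mz + intercept.
    A = np.vstack([observed, np.ones_like(observed)]).T
    (slope, intercept), *_ = np.linalg.lstsq(A, theoretical, rcond=None)

    def calibrate(mz):
        # Apply the fitted correction to raw m/z values.
        return slope * mz + intercept

    ppm_before = (observed - theoretical) / theoretical * 1e6
    ppm_after = (calibrate(observed) - theoretical) / theoretical * 1e6
    print("mass error (ppm) before:", np.round(ppm_before, 2))
    print("mass error (ppm) after: ", np.round(ppm_after, 2))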
Abstract:
Increasingly, the microbiological scientific community is relying on molecular biology to define the complexity of the gut flora and to distinguish one organism from the next. This is particularly pertinent in the field of probiotics, and probiotic therapy, where identifying probiotics from the commensal flora is often warranted. Current techniques, including genetic fingerprinting, gene sequencing, oligonucleotide probes and specific primer selection, discriminate closely related bacteria with varying degrees of success. Additional molecular methods being employed to determine the constituents of complex microbiota in this area of research are community analysis, denaturing gradient gel electrophoresis (DGGE)/temperature gradient gel electrophoresis (TGGE), fluorescent in situ hybridisation (FISH) and probe grids. Certain approaches enable specific aetiological agents to be monitored, whereas others allow the effects of dietary intervention on bacterial populations to be studied. Other approaches demonstrate diversity, but may not always enable quantification of the population. At the heart of current molecular methods is sequence information gathered from culturable organisms. However, the diversity and novelty identified when applying these methods to the gut microflora demonstrates how little is known about this ecosystem. Of greater concern is the inherent bias associated with some molecular methods. As we understand more of the complexity and dynamics of this diverse microbiota we will be in a position to develop more robust molecular-based technologies to examine it. In addition to identification of the microbiota and discrimination of probiotic strains from commensal organisms, the future of molecular biology in the field of probiotics and the gut flora will, no doubt, stretch to investigations of functionality and activity of the microflora, and/or specific fractions. The quest will be to demonstrate the roles of probiotic strains in vivo and not simply their presence or absence.
Abstract:
This paper discusses how numerical gradient estimation methods may be used to reduce the computational demands on a class of multidimensional clustering algorithms. The study is motivated by the recognition that several current point-density-based cluster identification algorithms could benefit from a reduction of computational demand if approximate a priori estimates of the cluster centres present in a given data set could be supplied as starting conditions for these algorithms. In this presentation, the algorithm shown to benefit from the technique is the Mean-Tracking (M-T) cluster algorithm, but the results obtained from the gradient estimation approach may also be applied to other clustering algorithms and to related disciplines.
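As a sketch of the general idea (the Mean-Tracking algorithm itself is not shown; a standard kernel density-gradient step of mean-shift type stands in for the gradient estimation, and all data are synthetic):

    import numpy as np

    rng = np.random.default_rng(2)
    # Synthetic data: two Gaussian clusters in the plane.
    data = np.vstack([rng.normal(c, 0.3, (100, 2)) for c in ([0, 0], [3, 1])])

    def density_gradient_step(x, data, bandwidth=0.5):
        # One mean-shift step: move x along the estimated density gradient.
        w = np.exp(-np.sum((data - x) ** 2, axis=1) / (2 * bandwidth**2))
        return (w[:, None] * data).sum(axis=0) / w.sum()

    # A few gradient-ascent steps from random seeds; the converged points
    # approximate cluster centres and can seed the main clustering run.
    seeds = data[rng.choice(len(data), 10, replace=False)]
    for _ in range(30):
        seeds = np.array([density_gradient_step(s, data) for s in seeds])
    centres = np.unique(np.round(seeds, 1), axis=0)
    print("approximate cluster centres:\n", centres)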
Abstract:
Summary
1. In recent decades there have been population declines of many UK bird species, which have become the focus of intense research and debate. Recently, as the populations of potential predators have increased, there is concern that increased rates of predation may be contributing to the declines. In this review, we assess the methodologies behind the current published science on the impacts of predators on avian prey in the UK.
2. We identified suitable studies, classified them according to study design (experimental/observational) and assessed the quantity and quality of the data from which any variation in predation rates was inferred. We then explored whether the underlying study methodology had implications for study outcome.
3. We reviewed 32 published studies and found that observational studies typically monitored significantly fewer predator species comprehensively than experimental studies did. Data for a difference in predator abundance from targeted (i.e. bespoke) census techniques were available for fewer than half of the 32 predator species studied.
4. The probability of a study detecting an impact on prey abundance was strongly and positively related to the quality and quantity of the data from which the gradient in predation rates was inferred.
5. The findings suggest that a study based on good-quality abundance data for a range of predator species is more likely to detect an effect than one that relies on opportunistic data for a smaller number of predators.
6. We recommend that findings from studies which use opportunistic data for a limited number of predator species be treated with caution, and that future studies employ bespoke census techniques to monitor predator abundance for an appropriate suite of predators.
Abstract:
As part of an international intercomparison project, a set of single column models (SCMs) and cloud-resolving models (CRMs) are run under the weak temperature gradient (WTG) method and the damped gravity wave (DGW) method. For each model, the implementation of the WTG or DGW method involves a simulated column which is coupled to a reference state defined with profiles obtained from the same model in radiative-convective equilibrium. The simulated column has the same surface conditions as the reference state and is initialized with profiles from the reference state. We performed a systematic comparison of the behavior of different models under a consistent implementation of the WTG and DGW methods, and a systematic comparison of the two methods in models with different physics and numerics. CRMs and SCMs produce a variety of behaviors under both methods. Some of the models reproduce the reference state, while others sustain a large-scale circulation that results in precipitation substantially lower or higher than in the reference state. CRMs show a fairly linear relationship between precipitation and circulation strength. SCMs display a wider range of behaviors than CRMs. Some SCMs under the WTG method produce zero precipitation. Within an individual SCM, a DGW simulation and the corresponding WTG simulation can produce circulations of opposite sign. When initialized with a dry troposphere, DGW simulations always reach a precipitating equilibrium state. The greatest sensitivity to the initial moisture conditions appears as multiple stable equilibria in some WTG simulations: a dry equilibrium state when initialized dry, or a precipitating equilibrium state when initialized moist. Multiple equilibria are seen in more WTG simulations at higher SST. In some models, the existence of multiple equilibria is sensitive to parameters in the WTG calculation.
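In the forms commonly used in this intercomparison literature (notation assumed here, not taken from the abstract), the WTG method diagnoses a large-scale vertical velocity that relaxes free-tropospheric potential temperature $\theta$ toward the reference profile over a timescale $\tau$, while the DGW method obtains the pressure velocity $\omega'$ from a damped gravity-wave equation forced by virtual temperature anomalies:

$$w_{\mathrm{WTG}}\,\frac{\partial \bar{\theta}}{\partial z} = \frac{\theta - \theta_{\mathrm{ref}}}{\tau}, \qquad \epsilon\,\frac{\partial^2 \omega'}{\partial p^2} = \frac{k^2 R_d}{p}\,\bigl(T_v - T_{v,\mathrm{ref}}\bigr),$$

with $\bar{\theta}$ the horizontal-mean potential temperature, $\epsilon$ a damping rate, $k$ a horizontal wavenumber, and $R_d$ the gas constant for dry air.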
Abstract:
As part of an international intercomparison project, the weak temperature gradient (WTG) and damped gravity wave (DGW) methods are used to parameterize large-scale dynamics in a set of cloud-resolving models (CRMs) and single column models (SCMs). The WTG or DGW method is implemented using a configuration that couples a model to a reference state defined with profiles obtained from the same model in radiative-convective equilibrium. We investigated the sensitivity of each model to changes in SST, given a fixed reference state, performing a systematic comparison of the WTG and DGW methods across models and of the behavior of the different models under each method. The sensitivity to SST depends on both the large-scale parameterization method and the choice of cloud model. In general, SCMs display a wider range of behaviors than CRMs. All CRMs using either the WTG or DGW method show an increase of precipitation with SST, while SCMs show sensitivities which are not always monotonic. CRMs using either method show a similar relationship between mean precipitation rate and column-relative humidity, while SCMs exhibit a much wider range of behaviors. DGW simulations produce large-scale velocity profiles which are smoother and less top-heavy than those produced by the WTG simulations. These large-scale parameterization methods provide a useful tool for identifying the impact of parameterization differences on model behavior in the presence of two-way feedback between convection and the large-scale circulation.
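A hypothetical numerical sketch of the WTG diagnosis written out above, using toy profiles and assumed parameter values:

    import numpy as np

    z = np.linspace(1000.0, 15000.0, 60)       # free-troposphere heights (m)
    theta_ref = 300.0 + 4.0e-3 * z             # reference profile (toy values)
    # Simulated column: reference plus a smooth warm anomaly.
    theta = theta_ref + 0.5 * np.sin(np.pi * (z - z[0]) / (z[-1] - z[0]))
    tau = 3600.0                               # relaxation timescale (s)

    dtheta_dz = np.gradient(theta_ref, z)      # static stability
    w_wtg = (theta - theta_ref) / (tau * dtheta_dz)
    print(f"peak WTG ascent {w_wtg.max():.4f} m/s")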