232 results for Simulated annealing algorithm
Abstract:
The structure and size of the eyes generated in numerically simulated tropical cyclones and polar lows have been studied. A primitive-equation numerical model simulated systems in which the structures of the eyes formed were consistent with available observations. Whilst the tropical cyclone eyes generated were usually rapidly rotating, it appeared impossible for an eye formed in a system with a polar environment to develop this type of structure. The polar low eyes were found to be unable to warm through the subsidence of air with high values of potential temperature, as the environment was approximately statically neutral. Factors affecting the size of the eye were investigated through a series of controlled experiments. In mature tropical cyclone systems the size of the eye was insensitive to small changes in initial conditions, surface friction and latent and sensible heating from the ocean. In contrast, the eye size was strongly dependent on these parameters in the mature polar lows. Consistent with the findings, a mechanism is proposed in which the size of the eye in simulated polar lows is controlled by the strength of subsidence within it.
Observed and simulated precursors of stratospheric polar vortex anomalies in the Northern Hemisphere
Abstract:
The Northern Hemisphere stratospheric polar vortex is linked to surface weather. After Stratospheric Sudden Warmings in winter, the tropospheric circulation is often nudged towards the negative phase of the Northern Annular Mode (NAM) and the North Atlantic Oscillation (NAO). A strong stratospheric vortex is often associated with subsequent positive NAM/NAO conditions. For stratosphere–troposphere associations to be useful for forecasting purposes it is crucial that changes to the stratospheric vortex can be understood and predicted. Recent studies have proposed that there exist tropospheric precursors to anomalous vortex events in the stratosphere and that these precursors may be understood by considering the relationship between stationary wave patterns and regional variability. Another important factor is the extent to which the inherent variability of the stratosphere in an atmospheric model influences its ability to simulate stratosphere–troposphere links. Here we examine the lower stratosphere variability in 300-year pre-industrial control integrations from 13 coupled climate models. We show that robust precursors to stratospheric polar vortex anomalies are evident across the multi-model ensemble. The most significant tropospheric component of these precursors consists of a height anomaly dipole across northern Eurasia and large anomalies in upward stationary wave fluxes in the lower stratosphere over the continent. The strength of the stratospheric variability in the models was found to depend on the variability of the upward stationary wave fluxes and the amplitude of the stationary waves.
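The upward stationary wave flux mentioned above is commonly diagnosed from the zonal-mean poleward eddy heat flux. Below is a rough, minimal sketch of that diagnostic, not the paper's actual analysis; the grid shape, pressure level and random stand-in fields are assumptions:

```python
# Sketch: zonal-mean poleward eddy heat flux [v'T'], a standard proxy for
# the upward component of the stationary wave flux. Inputs are hypothetical
# (time, lat, lon) fields, e.g. at 100 hPa.
import numpy as np

def eddy_heat_flux(v: np.ndarray, T: np.ndarray) -> np.ndarray:
    """Return [v'T'] with shape (time, lat)."""
    v_eddy = v - v.mean(axis=-1, keepdims=True)  # v' = deviation from zonal mean
    T_eddy = T - T.mean(axis=-1, keepdims=True)  # T' = deviation from zonal mean
    return (v_eddy * T_eddy).mean(axis=-1)       # zonal mean of the product

# Random fields standing in for model output (120 months, 32 lats, 64 lons).
rng = np.random.default_rng(0)
v = rng.normal(size=(120, 32, 64))
T = 220.0 + rng.normal(size=(120, 32, 64))
flux = eddy_heat_flux(v, T)                      # shape (120, 32)
```

Averaged over mid-to-high latitudes in the lower stratosphere, this quantity is a usual scalar index of upward wave activity entering the vortex.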
Abstract:
The objective of this paper is to revisit the von Liebig hypothesis by reexamining five samples of experimental data and by applying to them recent advances in Bayesian techniques. The samples were published by Hexem and Heady, as described in a later section. Prior to outlining the estimation strategy, we discuss the intuition underlying our approach and, briefly, the literature on which it is based. We present an algorithm for the basic von Liebig formulation and demonstrate its application using simulated data (table 1). We then discuss the modifications to the basic model needed to estimate a von Liebig frontier and demonstrate the extended algorithm using simulated data (table 2). Next we explore, empirically, the relationships between limiting water and nitrogen in the Hexem and Heady corn samples and compare the results between the two formulations (table 3). Finally, some conclusions and suggestions for further research are offered.
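For context, the von Liebig formulation treats yield as capped by the most limiting input (the "law of the minimum"). A minimal sketch of simulating data from such a model, with entirely hypothetical coefficients (not the paper's, nor Hexem and Heady's):

```python
# Sketch: simulating yields from a von Liebig "law of the minimum" response,
# the basic formulation the paper's algorithm estimates.
import numpy as np

rng = np.random.default_rng(42)
n = 200
water = rng.uniform(0, 60, n)            # applied water (hypothetical units)
nitrogen = rng.uniform(0, 250, n)        # applied nitrogen (hypothetical units)

plateau = 120.0                          # maximum attainable yield (assumed)
water_response = 15.0 + 3.0 * water      # linear response to water
nitro_response = 25.0 + 0.5 * nitrogen   # linear response to nitrogen

# Yield is capped by the most limiting factor (von Liebig hypothesis).
yield_mean = np.minimum(plateau, np.minimum(water_response, nitro_response))
y = yield_mean + rng.normal(0.0, 5.0, n)  # add observation noise
```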
Abstract:
Capturing the pattern of structural change is a relevant task in applied demand analysis, as consumer preferences may vary significantly over time. Filtering and smoothing techniques have recently played an increasingly relevant role. A dynamic Almost Ideal Demand System with random walk parameters is estimated in order to detect modifications in consumer habits and preferences, as well as changes in the behavioural response to prices and income. Systemwise estimation, consistent with the underlying constraints from economic theory, is achieved through the EM algorithm. The proposed model is applied to UK aggregate consumption of alcohol and tobacco, using quarterly data from 1963 to 2003. Increased alcohol consumption is explained by a preference shift, addictive behaviour and a lower price elasticity. The dynamic and time-varying specification is consistent with the theoretical requirements imposed at each sample point.
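The random-walk-parameter specification is usually cast in state-space form and filtered recursively; EM then alternates between Kalman filtering/smoothing and variance estimation. A single-equation sketch of the filtering building block follows (the paper's systemwise, constraint-consistent estimation is not reproduced; all variances and the demo coefficients are assumptions):

```python
# Sketch: Kalman filter for one demand equation with random-walk
# coefficients (the time-varying building block; the full AIDS system,
# theory constraints, and EM variance estimation are omitted).
import numpy as np

def kalman_rw(y, X, sigma_eps=0.02, sigma_eta=0.01):
    """y: (T,) budget shares; X: (T, k) regressors such as a constant,
    log prices, and log real expenditure. Returns filtered paths (T, k)."""
    T, k = X.shape
    beta = np.zeros(k)                   # state: current coefficient vector
    P = np.eye(k) * 1e3                  # diffuse initial state covariance
    Q = np.eye(k) * sigma_eta**2         # random-walk innovation covariance
    paths = np.zeros((T, k))
    for t in range(T):
        P = P + Q                        # predict: beta_t = beta_{t-1} + eta_t
        x = X[t]
        S = x @ P @ x + sigma_eps**2     # one-step prediction-error variance
        K = P @ x / S                    # Kalman gain
        beta = beta + K * (y[t] - x @ beta)  # update on the prediction error
        P = P - np.outer(K, x) @ P
        paths[t] = beta
    return paths

# Demo on simulated data with fixed true coefficients (an assumption).
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(160), rng.normal(size=(160, 2))])
y = X @ np.array([0.2, 0.05, -0.1]) + rng.normal(0, 0.02, 160)
paths = kalman_rw(y, X)
```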
Abstract:
The rate at which a given site in a gene sequence alignment evolves over time may vary. This phenomenon, known as heterotachy, can bias or distort phylogenetic trees inferred from models of sequence evolution that assume rates of evolution are constant. Here, we describe a phylogenetic mixture model designed to accommodate heterotachy. The method sums the likelihood of the data at each site over more than one set of branch lengths on the same tree topology. A branch-length set that is best for one site may differ from the branch-length set that is best for some other site, thereby allowing different sites to have different rates of change throughout the tree. Because rate variation may not be present in all branches, we use a reversible-jump Markov chain Monte Carlo algorithm to identify those branches in which detectable amounts of heterotachy occur. We implement the method in combination with our 'pattern-heterogeneity' mixture model, applying it to simulated data and five published datasets. We find that complex evolutionary signals of heterotachy are routinely present over and above variation in the rate or pattern of evolution across sites, that the reversible-jump method requires far fewer parameters than conventional mixture models to describe them, and that it serves to identify the regions of the tree in which heterotachy is most pronounced. The reversible-jump procedure also removes the need for a posteriori tests of 'significance', such as the Akaike or Bayesian information criterion tests, or Bayes factors. Heterotachy has important consequences for the correct reconstruction of phylogenies as well as for tests of hypotheses that rely on accurate branch-length information. These include molecular clocks, analyses of tempo and mode of evolution, comparative studies and ancestral state reconstruction. The model is available from the authors' website, and can be used for the analysis of both nucleotide and morphological data.
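The core of such a mixture model is that each site's likelihood is a weighted sum over several branch-length sets on the same topology. A minimal sketch of that computation, with hypothetical weights and per-site likelihoods standing in for the output of a pruning-algorithm pass:

```python
# Sketch: mixture likelihood over branch-length sets. site_lik[k][s] is
# assumed precomputed (e.g. by Felsenstein pruning) under branch-length
# set k; the weights and numeric values here are hypothetical.
import numpy as np

def mixture_log_likelihood(site_lik: np.ndarray, weights: np.ndarray) -> float:
    """site_lik: (K, S) per-set, per-site likelihoods; weights: (K,)."""
    mixed = weights[:, None] * site_lik   # weight each branch-length set
    per_site = mixed.sum(axis=0)          # sum over sets at every site
    return float(np.log(per_site).sum())  # sites treated as independent

site_lik = np.array([[1e-3, 2e-4, 5e-5],  # set 1: likelihoods at 3 sites
                     [4e-4, 9e-4, 1e-5]]) # set 2: a different rate regime
weights = np.array([0.6, 0.4])
print(mixture_log_likelihood(site_lik, weights))
```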
Abstract:
We have developed a novel hill-climbing genetic algorithm (GA) for simulation of protein folding. The program (written in C) builds a set of Cartesian points to represent an unfolded polypeptide's backbone. The dihedral angles determining the chain's configuration are stored in an array of chromosome structures that is copied and then mutated. The fitness of the mutated chain's configuration is determined by its radius of gyration. A four-helix bundle was used to optimise simulation conditions, and the program was compared with other, larger genetic algorithms on a variety of structures. The program ran 50% faster than other GA programs. Overall, tests on 100 non-redundant structures gave results comparable to other genetic algorithms, with the hill-climbing program running between 20 and 50% faster. Examples including crambin, cytochrome c, cytochrome b and hemerythrin gave good secondary-structure fits, with overall alpha-carbon rms deviations of between 5 and 5.6 Angstrom using an optimised hydrophobic term in the fitness function.
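As an illustration of the hill-climbing scheme described (not the paper's C implementation), here is a toy sketch: the "chromosome" of dihedral angles is point-mutated and the mutant is kept only when it improves the radius-of-gyration fitness. The chain-building geometry is a crude 2D stand-in:

```python
# Sketch: hill-climbing GA step -- mutate one dihedral angle, rebuild the
# backbone, keep the mutant only if its radius of gyration decreases
# (compactness as fitness, per the abstract).
import numpy as np

rng = np.random.default_rng(1)

def build_backbone(dihedrals: np.ndarray) -> np.ndarray:
    """Toy 2D chain: unit-length segments turned by successive angles."""
    headings = np.cumsum(dihedrals)
    steps = np.stack([np.cos(headings), np.sin(headings)], axis=1)
    return np.cumsum(steps, axis=0)

def radius_of_gyration(coords: np.ndarray) -> float:
    centered = coords - coords.mean(axis=0)
    return float(np.sqrt((centered**2).sum(axis=1).mean()))

angles = rng.uniform(-np.pi, np.pi, 60)       # "chromosome" of dihedrals
best = radius_of_gyration(build_backbone(angles))
for _ in range(5000):
    trial = angles.copy()
    trial[rng.integers(len(trial))] += rng.normal(0, 0.3)  # point mutation
    fit = radius_of_gyration(build_backbone(trial))
    if fit < best:                            # hill climbing: accept only
        angles, best = trial, fit             # improving mutations
```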
Abstract:
Liquid chromatography-mass spectrometry (LC-MS) datasets can be compared or combined following chromatographic alignment. Here we describe a simple solution to the specific problem of aligning one LC-MS dataset and one LC-MS/MS dataset, acquired on separate instruments from an enzymatic digest of a protein mixture, using feature extraction and a genetic algorithm. First, the LC-MS dataset is searched within a few ppm of the calculated theoretical masses of peptides confidently identified by LC-MS/MS. A piecewise linear function is then fitted to these matched peptides using a genetic algorithm with a fitness function that is insensitive to incorrect matches but sufficiently flexible to adapt to the discrete shifts common when comparing LC datasets. We demonstrate the utility of this method by aligning ion trap LC-MS/MS data with accurate LC-MS data from an FTICR mass spectrometer and show how hybrid datasets can improve peptide and protein identification by combining the speed of the ion trap with the mass accuracy of the FTICR, similar to using a hybrid ion trap-FTICR instrument. We also show that the high resolving power of FTICR can improve precision and linear dynamic range in quantitative proteomics. The alignment software, msalign, is freely available as open source.
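A minimal sketch of the kind of match-insensitive fitness described: score a candidate piecewise-linear retention-time mapping by how many matched peptide pairs it brings within a tolerance, so incorrect matches contribute nothing rather than dragging the fit. The knot positions and tolerance below are hypothetical, and msalign's actual fitness function may differ:

```python
# Sketch: robust fitness for a GA-evolved piecewise-linear alignment.
import numpy as np

def fitness(breaks_x, breaks_y, rt_ms, rt_msms, tol=0.5):
    """breaks_x/breaks_y: knots of the piecewise-linear map (minutes);
    rt_ms, rt_msms: retention times of the matched peptide pairs."""
    predicted = np.interp(rt_ms, breaks_x, breaks_y)  # piecewise-linear map
    residuals = np.abs(predicted - rt_msms)
    return int((residuals < tol).sum())               # robust: count inliers

# Hypothetical demo data; a GA would evolve the knot heights (breaks_y)
# to maximize this score via selection, crossover, and mutation.
breaks_x = np.array([0.0, 30.0, 60.0])
breaks_y = np.array([0.0, 28.5, 61.0])
rt_ms = np.array([5.0, 22.0, 45.0, 58.0])
rt_msms = np.array([4.7, 21.0, 45.9, 59.2])
print(fitness(breaks_x, breaks_y, rt_ms, rt_msms))    # 3 of 4 within tol
```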
Abstract:
Pulsed Phase Thermography (PPT) has proven effective for depth retrieval of flat-bottomed holes in different materials such as plastics and aluminum. In PPT, amplitude and phase-delay signatures are available following data acquisition (carried out in the same way as in classical Pulsed Thermography) by applying a transformation algorithm such as the Fourier Transform (FT) to the thermal profiles. The authors have recently presented an extended review of PPT theory, including a new inversion technique for depth retrieval that correlates the depth with the blind frequency fb (the frequency at which a defect produces enough phase contrast to be detected). An automatic defect depth retrieval algorithm has also been proposed, evidencing PPT's capabilities as a practical inversion technique. In addition, the use of normalized parameters to account for defect size variation, as well as depth retrieval from complex-shape composites (GFRP and CFRP), are currently under investigation. In this paper, steel plates containing flat-bottomed holes at different depths (from 1 to 4.5 mm) are tested by quantitative PPT. Least-squares regression results show excellent agreement between depth and the inverse square root of the blind frequency, which can be used for depth inversion. Experimental results on steel plates with simulated corrosion are presented as well. It is worth noting that results are improved by performing PPT on reconstructed (synthetic) rather than on raw thermal data.
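The inversion relation implied by that regression follows from the thermal diffusion length mu = sqrt(alpha/(pi*f)): taking depth proportional to mu at the blind frequency gives z = C*sqrt(alpha/(pi*fb)), i.e. z proportional to the inverse square root of fb. A sketch with hypothetical calibration (the constant C and the diffusivity value are assumptions, not the paper's fit):

```python
# Sketch: depth inversion from the blind frequency via the thermal
# diffusion length. C is calibrated by regression in practice.
import math

def depth_from_blind_frequency(f_b_hz: float, alpha_m2_s: float,
                               C: float = 1.8) -> float:
    """Depth in metres from blind frequency (Hz) and diffusivity (m^2/s)."""
    mu = math.sqrt(alpha_m2_s / (math.pi * f_b_hz))  # thermal diffusion length
    return C * mu

alpha_steel = 1.2e-5   # m^2/s, typical order of magnitude for steel (assumed)
print(depth_from_blind_frequency(f_b_hz=0.5, alpha_m2_s=alpha_steel))  # ~5 mm
```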
Abstract:
Free phenolic acids were extracted from a laboratory-produced sample of green malt. Aliquots of the phenolic acid extract were heated from 25 to 110°C over 27 h, representative of a commercial kilning regime. Samples were taken at regular intervals throughout heating and were assessed for changes in antioxidant activity by both the 2,2′-azino-bis(3-ethylbenzothiazoline-6-sulfonic acid) radical-cation scavenging (ABTS•+) and the ferric-reducing antioxidant potential (FRAP) assays. Changes in the profile of the phenolic acids of the extracts were determined by HPLC. Overall, there was a decrease in both antioxidant activity level and the level of phenolic acids, but as the temperature increased from 80 to 100°C, there was an increase in both the antioxidant activity level and the level of detected phenolic acids.
Abstract:
The convergence speed of the standard Least Mean Square (LMS) adaptive array may be degraded in mobile communication environments. Various conventional variable step-size LMS algorithms have been proposed to enhance the convergence speed while maintaining a low steady-state error. In this paper, a new variable step-size LMS algorithm based on the accumulated instantaneous error is proposed. In the proposed algorithm, the accumulated instantaneous error is used to vary the step-size parameter of the standard LMS. Simulation results show that the proposed algorithm is simpler and yields better performance than conventional variable step-size LMS algorithms.
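A sketch of a variable step-size LMS of this flavour: the step size is driven by an accumulated (leaky) sum of squared instantaneous errors, so it stays large while the error is large and shrinks near convergence. The specific step-size update rule and constants are illustrative guesses, not the paper's formula:

```python
# Sketch: LMS with a step size driven by the accumulated instantaneous error.
import numpy as np

def vss_lms(x, d, n_taps=8, mu_min=1e-4, mu_max=0.1, alpha=0.97, gamma=1e-3):
    w = np.zeros(n_taps)
    acc = 0.0
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]          # tap-delay input vector
        e = d[n] - w @ u                           # instantaneous error
        acc = alpha * acc + e * e                  # accumulated squared error
        mu = np.clip(gamma * acc, mu_min, mu_max)  # error-driven step size
        w = w + mu * e * u                         # standard LMS weight update
    return w

# Demo: identify a short FIR channel (true taps are an assumption).
rng = np.random.default_rng(0)
x = rng.normal(size=4000)
h_true = np.array([1.0, 0.5, -0.3, 0.1, 0.0, 0.0, 0.0, 0.0])
d = np.convolve(x, h_true)[:len(x)] + rng.normal(0, 0.01, len(x))
w = vss_lms(x, d)                                  # w should approach h_true
```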
Abstract:
This paper represents the first step in ongoing work on designing an unsupervised method based on a genetic algorithm for intrusion detection. Its main role in a broader system is to flag unusual traffic and in that way provide the possibility of detecting unknown attacks. Most of the machine-learning techniques deployed for intrusion detection are supervised, as these techniques are generally more accurate, but this implies the need to label the data for training and testing, which is time-consuming and error-prone. Hence, our goal is to devise an anomaly detector that is unsupervised but at the same time robust and accurate. Genetic algorithms are robust and able to avoid getting stuck in local optima, unlike many other clustering techniques. The model is verified on the KDD99 benchmark dataset, generating a solution competitive with those of the state of the art, which demonstrates the promise of the proposed method.
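One common way to realise such a GA-based unsupervised detector (a sketch under assumed parameters, not the paper's design): individuals encode cluster centroids, fitness rewards compact clusters, and records far from every centroid are flagged as anomalous:

```python
# Sketch: GA-driven clustering for anomaly detection on feature vectors.
import numpy as np

rng = np.random.default_rng(7)

def fitness(centroids, X):
    d = np.linalg.norm(X[:, None, :] - centroids[None], axis=2)
    return -d.min(axis=1).sum()            # compact clusters score higher

X = rng.normal(size=(500, 4))              # stand-in for traffic features
pop = [rng.normal(size=(3, 4)) for _ in range(30)]   # 30 individuals, k=3
for _ in range(100):
    pop.sort(key=lambda c: fitness(c, X), reverse=True)
    parents = pop[:10]                     # elitist selection
    children = [p + rng.normal(0, 0.1, p.shape) for p in parents]  # mutation
    pop = parents + children + [rng.normal(size=(3, 4)) for _ in range(10)]

best = max(pop, key=lambda c: fitness(c, X))
dist = np.linalg.norm(X[:, None, :] - best[None], axis=2).min(axis=1)
anomalies = dist > np.percentile(dist, 99) # far from every cluster centroid
```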
Abstract:
This paper proposes a new iterative algorithm for OFDM joint data detection and phase noise (PHN) cancellation based on minimum mean square prediction error. We particularly highlight the problem of "overfitting", whereby the iterative approach may converge to a trivial solution. Although it is essential to this joint approach, the overfitting problem has received relatively little study in existing algorithms. In this paper, specifically, we apply a hard decision procedure at every iterative step to overcome the overfitting. Moreover, compared with existing algorithms, a more accurate Padé approximation is used to represent the phase noise, and a more robust and compact fast process based on Givens rotations is proposed to reduce the complexity to a practical level. Numerical simulations are given to verify the proposed algorithm.
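To illustrate the role of the per-iteration hard decision (a QPSK stand-in; the paper's Padé-based PHN model and Givens-rotation solver are omitted): the phase estimate is formed only from hard-decided symbols, so it cannot chase noise toward the trivial "overfitted" solution:

```python
# Sketch: iterative phase estimation with hard decisions at every step.
import numpy as np

def hard_qpsk(z):
    """Snap to the nearest unit-energy QPSK constellation point."""
    return (np.sign(z.real) + 1j * np.sign(z.imag)) / np.sqrt(2)

def iterate(r, n_iter=5):
    """r: received frequency-domain symbols with a common phase error."""
    s_hat = hard_qpsk(r)                              # initial hard decisions
    phase = 0.0
    for _ in range(n_iter):
        phase = np.angle(np.sum(r * np.conj(s_hat)))  # LS phase estimate
        s_hat = hard_qpsk(r * np.exp(-1j * phase))    # decide after de-rotation
    return s_hat, phase

# Demo with a hypothetical 0.3 rad common phase error and mild noise.
rng = np.random.default_rng(2)
s = hard_qpsk(rng.normal(size=64) + 1j * rng.normal(size=64))
r = s * np.exp(1j * 0.3) + 0.05 * (rng.normal(size=64) + 1j * rng.normal(size=64))
s_hat, phase = iterate(r)                             # phase should approach 0.3
```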
Abstract:
Using the classical Parzen window (PW) estimate as the target function, a sparse kernel density estimator is constructed in a forward constrained regression manner. The leave-one-out (LOO) test score is used for kernel selection. A jackknife parameter estimator, subject to a positivity-constraint check, is used to estimate a single parameter at each forward step. As such, the proposed approach is simple to implement and the associated computational cost is very low. An illustrative example is employed to demonstrate that the proposed approach is effective in constructing sparse kernel density estimators with accuracy comparable to that of the classical Parzen window estimate.
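A toy sketch of the overall setup: the full Parzen window estimate serves as the target, and kernels are added greedily one at a time with nonnegative weights. The LOO test score and jackknife estimator of the paper are replaced here by a plain squared-error greedy step, purely for illustration (kernel width and data are assumptions):

```python
# Sketch: forward selection of a few kernels to approximate the full
# Parzen window density estimate.
import numpy as np

rng = np.random.default_rng(3)
x = np.concatenate([rng.normal(-2, 0.5, 100), rng.normal(1, 1.0, 100)])
grid = np.linspace(-5, 4, 200)
h = 0.4                                        # kernel width (assumption)

def gauss(u, c):
    """Gaussian kernel on the grid, centred at c."""
    return np.exp(-0.5 * ((u - c) / h)**2) / (h * np.sqrt(2 * np.pi))

parzen = np.mean([gauss(grid, c) for c in x], axis=0)  # target density

centres, approx = [], np.zeros_like(grid)
for _ in range(6):                             # a few forward steps
    resid = parzen - approx
    best = max(x, key=lambda c: float(gauss(grid, c) @ resid))
    k = gauss(grid, best)
    w = max(0.0, float(k @ resid) / float(k @ k))  # positivity constraint
    centres.append(best)
    approx = approx + w * k                    # sparse estimate so far
```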