24 results for estimation of distribution algorithms
in Aston University Research Archive
Abstract:
Technology changes rapidly over the years, continuously providing more computer options and making life easier for economic, interpersonal and other transactions. However, the introduction of new technology "pushes" old Information and Communication Technology (ICT) products out of use. E-waste is defined as the quantity of ICT products no longer in use, and is a bivariate function of the quantities sold and the probability that a specific quantity of computers will be regarded as obsolete. In this paper, an e-waste generation model is presented and applied to the following regions: Western and Eastern Europe, Asia/Pacific, Japan/Australia/New Zealand, and North and South America. Furthermore, cumulative computer sales were retrieved for selected countries within these regions in order to compute obsolete computer quantities. To provide robust forecasts, a selection of forecasting models, namely (i) Bass, (ii) Gompertz, (iii) Logistic, (iv) Trend model, (v) Level model, (vi) AutoRegressive Moving Average (ARMA), and (vii) Exponential Smoothing, was applied, and for each country the model with the lowest in-sample error indices (Mean Absolute Error and Mean Square Error) was selected. Because new technology does not diffuse at the same speed in all regions of the world, owing to different socio-economic factors, the lifespan distribution, which gives the probability that a certain quantity of computers will be considered obsolete, is not adequately modelled in the literature. The time horizon for the forecasts is 2014-2030, and the results show a very sharp increase in the USA and the United Kingdom, due to decreasing computer lifespans and increasing sales.
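As a hedged illustration of the model-selection step described in this abstract, the sketch below fits just one of the seven candidate diffusion models, a logistic curve, to hypothetical cumulative sales and scores it with the same in-sample error indices (MAE and MSE); all figures are placeholders, not data from the study.

```python
# Minimal sketch (not the paper's implementation): fit a logistic diffusion
# curve to hypothetical cumulative computer sales and score the in-sample fit
# with MAE and MSE. All numbers below are illustrative placeholders.
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, m, p, q):
    """Logistic diffusion: m = market potential, p = inflection time, q = growth rate."""
    return m / (1.0 + np.exp(-q * (t - p)))

# Hypothetical yearly cumulative sales (millions of units), years 2000-2013.
years = np.arange(2000, 2014)
sales = np.array([5, 8, 12, 18, 26, 36, 47, 58, 68, 76, 82, 86, 89, 91], float)

t = years - years[0]
params, _ = curve_fit(logistic, t, sales, p0=[100.0, 7.0, 0.5], maxfev=10000)
fitted = logistic(t, *params)

mae = np.mean(np.abs(sales - fitted))   # Mean Absolute Error
mse = np.mean((sales - fitted) ** 2)    # Mean Square Error
print(f"logistic fit: MAE={mae:.2f}, MSE={mse:.2f}")
# Repeating this for Bass, Gompertz, trend, ARMA, etc. and keeping the model
# with the smallest in-sample errors mirrors the selection step described above.
```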
Abstract:
Motivation: In any macromolecular polyprotic system - for example protein, DNA or RNA - the isoelectric point, commonly referred to as the pI, can be defined as the point of singularity in a titration curve, corresponding to the solution pH value at which the net overall surface charge, and thus the electrophoretic mobility, of the ampholyte sums to zero. Many modern analytical biochemistry and proteomics methods depend on the isoelectric point as a principal feature for protein and peptide characterization. Protein separation by isoelectric point is a critical part of 2-D gel electrophoresis, a key precursor of proteomics, where discrete spots can be digested in-gel and proteins subsequently identified by analytical mass spectrometry. Peptide fractionation according to pI is also widely used in current proteomics sample preparation procedures prior to LC-MS/MS analysis. Therefore, accurate theoretical prediction of pI would expedite such analyses. While such pI calculations are widely used, they remain largely untested, motivating our efforts to benchmark pI prediction methods. Results: Using data from the database PIP-DB and one publicly available dataset as our reference gold standard, we have undertaken the benchmarking of pI calculation methods. We find that methods vary in their accuracy and are highly sensitive to the choice of basis set. The machine-learning algorithms, especially the SVM-based algorithm, showed superior performance when studying peptide mixtures. In general, learning-based pI prediction methods (such as Cofactor, SVM and Branca) require a large training dataset, and their resulting performance will strongly depend on the quality of that data. In contrast to iterative methods, machine-learning algorithms have the advantage of being able to add new features to improve the accuracy of prediction. Contact: yperez@ebi.ac.uk Availability and Implementation: The software and data are freely available at https://github.com/ypriverol/pIR. Supplementary information: Supplementary data are available at Bioinformatics online.
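For context, the sketch below shows the classical charge-balance pI calculation that such benchmarks evaluate: Henderson-Hasselbalch terms summed into a net charge, then bisected to the zero-crossing pH. The pKa values used here are one common choice (approximately the EMBOSS defaults) and are an assumption; swapping basis sets is exactly the sensitivity the paper measures.

```python
# Minimal sketch of a classical iterative pI calculation: compute the net
# charge of a peptide at a given pH and bisect on pH until the charge
# crosses zero. The pKa basis set below is one commonly used choice.
POS = {"K": 10.8, "R": 12.5, "H": 6.5, "Nterm": 8.6}            # protonated -> +1
NEG = {"D": 3.9, "E": 4.1, "C": 8.5, "Y": 10.1, "Cterm": 3.6}   # deprotonated -> -1

def net_charge(seq, ph):
    charge = 1.0 / (1.0 + 10 ** (ph - POS["Nterm"]))        # free N-terminus
    charge -= 1.0 / (1.0 + 10 ** (NEG["Cterm"] - ph))       # free C-terminus
    for res, pka in POS.items():
        if res != "Nterm":
            charge += seq.count(res) / (1.0 + 10 ** (ph - pka))
    for res, pka in NEG.items():
        if res != "Cterm":
            charge -= seq.count(res) / (1.0 + 10 ** (pka - ph))
    return charge

def isoelectric_point(seq, lo=0.0, hi=14.0, tol=1e-4):
    # Net charge decreases monotonically with pH, so bisection converges.
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if net_charge(seq, mid) > 0:
            lo = mid     # still positively charged: pI lies above mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(round(isoelectric_point("PEPTIDEK"), 2))
```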
Abstract:
A formalism for modelling the dynamics of Genetic Algorithms (GAs) using methods from statistical mechanics, originally due to Prugel-Bennett and Shapiro, is reviewed, generalized and improved upon. This formalism can be used to predict the averaged trajectory of macroscopic statistics describing the GA's population. These macroscopics are chosen to average well between runs, so that fluctuations from mean behaviour can often be neglected. Where necessary, non-trivial terms are determined by assuming maximum entropy with constraints on known macroscopics. Problems of realistic size are described in compact form and finite population effects are included, often proving to be of fundamental importance. The macroscopics used here are cumulants of an appropriate quantity within the population and the mean correlation (Hamming distance) within the population. Including the correlation as an explicit macroscopic provides a significant improvement over the original formulation. The formalism is applied to a number of simple optimization problems in order to determine its predictive power and to gain insight into GA dynamics. The problems most amenable to analysis come from the class where alleles within the genotype contribute additively to the phenotype. This class can be treated with some generality, including problems with inhomogeneous contributions from each site, non-linear or noisy fitness measures, simple diploid representations and temporally varying fitness. The results can also be applied to a simple learning problem, generalization in a binary perceptron, and a limit is identified for which the optimal training batch size can be determined for this problem. The theory is compared to averaged results from a real GA in each case, showing excellent agreement if the maximum entropy principle holds. Some situations where this approximation breaks down are identified. In order to fully test the formalism, an attempt is made on the strongly NP-hard problem of storing random patterns in a binary perceptron. Here, the relationship between the genotype and phenotype (training error) is strongly non-linear. Mutation is modelled under the assumption that perceptron configurations are typical of perceptrons with a given training error. Unfortunately, this assumption does not provide a good approximation in general. It is conjectured that perceptron configurations would have to be constrained by other statistics in order to accurately model mutation for this problem. Issues arising from this study are discussed in conclusion and some possible areas of further research are outlined.
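As a rough illustration (not code from the thesis), the snippet below computes the kind of macroscopic statistics the formalism tracks, namely cumulants of an additive (onemax-style) fitness and the mean pairwise Hamming distance, for a random binary population.

```python
# Macroscopic statistics of a binary-genotype GA population: the first four
# cumulants of a fitness-like quantity and the mean pairwise Hamming distance.
# The onemax fitness and random population here are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
P, L = 64, 100                      # population size, genotype length
pop = rng.integers(0, 2, size=(P, L))
fitness = pop.sum(axis=1)           # onemax: additive allele contributions

# Cumulants of the fitness distribution within the population.
mu = fitness.mean()
c = fitness - mu
k1, k2 = mu, np.mean(c**2)
k3 = np.mean(c**3)
k4 = np.mean(c**4) - 3 * k2**2

# Mean pairwise Hamming distance, computed per locus from allele counts.
ones = pop.sum(axis=0)                          # count of 1s at each locus
pairs = P * (P - 1) / 2
mean_hamming = np.sum(ones * (P - ones)) / pairs

print(k1, k2, k3, k4, mean_hamming)
```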
Abstract:
An increasing number of neuroimaging studies are concerned with the identification of interactions or statistical dependencies between brain areas. Dependencies between the activities of different brain regions can be quantified with functional connectivity measures such as the cross-correlation coefficient. An important factor limiting the accuracy of such measures is the amount of empirical data available. For event-related protocols, the amount of data also affects the temporal resolution of the analysis. We use analytical expressions to calculate the amount of empirical data needed to establish whether a certain level of dependency is significant when the time series are autocorrelated, as is the case for biological signals. These analytical results are then contrasted with estimates from simulations based on real data recorded with magnetoencephalography during a resting-state paradigm and during the presentation of visual stimuli. Results indicate that, for broadband signals, 50-100 s of data is required to detect a true underlying cross-correlation coefficient of 0.05. This corresponds to a resolution of a few hundred milliseconds for typical event-related recordings. The required time window increases for narrow-band signals as frequency decreases. For instance, approximately 3 times as much data is necessary for signals in the alpha band. Important implications can be derived for the design and interpretation of experiments to characterize weak interactions, which are potentially important for brain processing.
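The back-of-the-envelope sketch below is not the paper's analytical expressions, but it conveys the same logic: a Fisher z significance criterion for a weak correlation, inflated by Bartlett's variance correction for autocorrelated AR(1)-like signals. The lag-1 autocorrelation and sampling rate are illustrative assumptions.

```python
# Rough data-requirement estimate for detecting a weak cross-correlation
# between two autocorrelated signals (generic approximation, not the paper's).
import numpy as np

r_true = 0.05          # weak underlying cross-correlation to detect
alpha_z = 1.96         # two-sided 5% significance threshold (standard normal)
phi = 0.8              # assumed common lag-1 autocorrelation of both signals

# Independent-sample requirement from the Fisher z transform.
n_indep = (alpha_z / np.arctanh(r_true)) ** 2 + 3

# Bartlett correction: autocorrelation inflates the variance of the sample
# correlation by roughly (1 + phi1*phi2) / (1 - phi1*phi2).
inflation = (1 + phi * phi) / (1 - phi * phi)
n_raw = n_indep * inflation

fs = 250.0             # assumed sampling rate in Hz
print(f"~{n_raw:.0f} samples, i.e. ~{n_raw / fs:.0f} s of data")
```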
Abstract:
It is becoming clear that the detection and integration of synaptic input and its conversion into an output signal in cortical neurons are strongly influenced by background synaptic activity or "noise." The majority of this noise results from the spontaneous release of synaptic transmitters, interacting with ligand-gated ion channels in the postsynaptic neuron [Berretta N, Jones RSG (1996) A comparison of spontaneous synaptic EPSCs in layer V and layer II neurones in the rat entorhinal cortex in vitro. J Neurophysiol 76:1089-1110; Jones RSG, Woodhall GL (2005) Background synaptic activity in rat entorhinal cortical neurons: differential control of transmitter release by presynaptic receptors. J Physiol 562:107-120; LoTurco JJ, Mody I, Kriegstein AR (1990) Differential activation of glutamate receptors by spontaneously released transmitter in slices of neocortex. Neurosci Lett 114:265-271; Otis TS, Staley KJ, Mody I (1991) Perpetual inhibitory activity in mammalian brain slices generated by spontaneous GABA release. Brain Res 545:142-150; Ropert N, Miles R, Korn H (1990) Characteristics of miniature inhibitory postsynaptic currents in CA1 pyramidal neurones of rat hippocampus. J Physiol 428:707-722; Salin PA, Prince DA (1996) Spontaneous GABAA receptor-mediated inhibitory currents in adult rat somatosensory cortex. J Neurophysiol 75:1573-1588; Staley KJ (1999) Quantal GABA release: noise or not? Nat Neurosci 2:494-495; Woodhall GL, Bailey SJ, Thompson SE, Evans DIP, Stacey AE, Jones RSG (2005) Fundamental differences in spontaneous synaptic inhibition between deep and superficial layers of the rat entorhinal cortex. Hippocampus 15:232-245]. The function of synaptic noise has been the subject of debate for some years, but there is increasing evidence that it modifies or controls neuronal excitability and, thus, the integrative properties of cortical neurons. In the present study we have investigated a novel approach [Rudolph M, Piwkowska Z, Badoual M, Bal T, Destexhe A (2004) A method to estimate synaptic conductances from membrane potential fluctuations. J Neurophysiol 91:2884-2896] to simultaneously quantify inhibitory and excitatory synaptic noise, together with postsynaptic excitability, in rat entorhinal cortical neurons in vitro. The results suggest that this is a viable and useful approach to the study of the function of synaptic noise in cortical networks. © 2007 IBRO.
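As a hedged sketch of the forward model behind the cited estimation approach, the code below simulates Ornstein-Uhlenbeck excitatory and inhibitory "noise" conductances driving a passive membrane (a point-conductance picture). All parameter values are illustrative, and the actual method works in the reverse direction, from recorded Vm fluctuations back to conductance statistics.

```python
# Point-conductance sketch: OU excitatory and inhibitory conductances driving
# a passive membrane. Parameters are illustrative placeholders only.
import numpy as np

rng = np.random.default_rng(1)
dt, n = 0.05e-3, 200_000                  # 50 us steps, 10 s of simulated time
C, gL, EL = 0.25e-9, 12.5e-9, -70e-3      # capacitance (F), leak (S), leak reversal (V)
Ee, Ei = 0.0, -75e-3                      # synaptic reversal potentials (V)
ge0, gi0, sige, sigi = 6e-9, 12e-9, 2e-9, 4e-9   # mean and SD of conductances (S)
taue, taui = 2.7e-3, 10.5e-3              # OU correlation times (s)

ge, gi, v = ge0, gi0, EL
vm = np.empty(n)
for k in range(n):
    # Euler update of each Ornstein-Uhlenbeck conductance over one time step.
    ge += (ge0 - ge) * dt / taue + sige * np.sqrt(2 * dt / taue) * rng.standard_normal()
    gi += (gi0 - gi) * dt / taui + sigi * np.sqrt(2 * dt / taui) * rng.standard_normal()
    # Passive membrane equation with the two fluctuating conductances.
    v += (-gL * (v - EL) - ge * (v - Ee) - gi * (v - Ei)) * dt / C
    vm[k] = v

print(f"mean Vm = {vm.mean()*1e3:.1f} mV, SD = {vm.std()*1e3:.2f} mV")
```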
Abstract:
A number of papers and reports covering the techno-economic analysis of bio-oil production have been published. These have had different scopes, used different feedstocks and reflected national cost structures. This paper reviews and compares their cost estimates and the experimental results that underpin them. A comprehensive cost and performance model reflecting UK costs was produced, based on consensus data from the previous studies or on stated scenarios where data were not available. The model takes account of sales of bio-char, a co-product of pyrolysis, and of the electricity consumption of the pyrolysis and biomass pre-processing plants. It was concluded that it should be possible to produce bio-oil in the UK from energy crops at a cost similar to that of distillate fuel oil. It was also found that there was little difference in the processing cost between woodchips and baled miscanthus. © 2011 Elsevier Ltd.
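A minimal, purely illustrative cost sketch in the same spirit is shown below: a levelized bio-oil cost that annualizes capital, adds feedstock, operating and electricity costs, and credits bio-char sales. Every figure is a hypothetical placeholder rather than a value from the reviewed studies.

```python
# Illustrative levelized-cost sketch for a pyrolysis bio-oil plant.
# All input numbers are hypothetical placeholders.
def annualized_capex(capex, rate, years):
    """Capital recovery factor applied to the installed capital cost."""
    crf = rate * (1 + rate) ** years / ((1 + rate) ** years - 1)
    return capex * crf

capex = 12_000_000            # installed plant cost (GBP)
rate, life = 0.10, 20         # discount rate, plant life (years)
feedstock_cost = 1_400_000    # annual delivered biomass cost (GBP/y)
opex = 900_000                # annual labour and maintenance (GBP/y)
electricity = 350_000         # pyrolysis + pre-processing power (GBP/y)
char_revenue = 400_000        # co-product bio-char sales (GBP/y)
oil_output = 16_000_000       # bio-oil production (litres/y)

annual_cost = (annualized_capex(capex, rate, life)
               + feedstock_cost + opex + electricity - char_revenue)
print(f"levelized cost ~ {annual_cost / oil_output:.3f} GBP per litre")
```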
Abstract:
This thesis presents an analysis of the stability of complex distribution networks, beginning with a stability analysis against cascading failures. We propose a spin (binary) model based on concepts of statistical mechanics. We test macroscopic properties of distribution networks with respect to various topological structures and distributions of microparameters. The equilibrium properties of the systems are obtained in a statistical mechanics framework by application of the replica method. We demonstrate the validity of our approach by comparing it with Monte Carlo simulations. We analyse the network properties in terms of phase diagrams and find that they depend both qualitatively and quantitatively on the network structure and macroparameters. The structure of the phase diagrams points to the existence of phase transitions and the presence of stable and metastable states in the system. We also present an analysis of robustness against overloading in distribution networks. We propose a model that describes a distribution process in a network. The model incorporates the currents between any connected hubs in the network, local constraints in the form of Kirchhoff's law and a global optimization criterion. The flow of currents in the system is driven by the consumption. We study two principal types of model: infinite and finite link capacity. The key properties are the distributions of currents in the system. We again use a statistical mechanics framework to describe the currents in the system in terms of macroscopic parameters, and in order to obtain observable properties we apply the replica method. We are able to assess the criticality of the level of demand with respect to the available resources and the architecture of the network. Furthermore, the parts of the system where critical currents may emerge can be identified. This, in turn, provides us with a characteristic description of the spread of overloading in the system.
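As a toy illustration of the finite-network problem described above (not the thesis's replica calculation), the sketch below solves for link currents that satisfy Kirchhoff's current law at every node while minimizing a global quadratic (dissipation) cost, using the graph Laplacian pseudoinverse on a small made-up network.

```python
# Toy distribution network: currents must satisfy Kirchhoff's current law at
# every node while minimizing total dissipation. For unit link weights the
# solution follows from the graph Laplacian pseudoinverse. The 5-node network
# and demands are illustrative only.
import numpy as np

edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 4), (3, 4)]
n_nodes = 5
# Node-edge incidence matrix B: +1 at the tail, -1 at the head of each edge.
B = np.zeros((n_nodes, len(edges)))
for e, (i, j) in enumerate(edges):
    B[i, e], B[j, e] = 1.0, -1.0

# Net injection at each node (generation positive, consumption negative);
# Kirchhoff's law requires B @ I = d, with the injections summing to zero.
d = np.array([2.0, 0.0, 0.0, -0.5, -1.5])

L = B @ B.T                      # graph Laplacian (unit link conductances)
I = B.T @ np.linalg.pinv(L) @ d  # minimum-dissipation currents
print(np.round(I, 3), "residual:", np.linalg.norm(B @ I - d))

# In the finite-capacity version, links whose |current| approaches capacity
# are the candidate points from which overloading spreads.
```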
Abstract:
This article uses a semiparametric smooth coefficient model (SPSCM) to estimate TFP growth and its components (scale and technical change). The SPSCM is derived from a nonparametric specification of the production technology represented by an input distance function (IDF), using a growth formulation. The functional coefficients of the SPSCM come naturally from the model and are fully flexible in the sense that no functional form of the underlying production technology is used to derive them. Another advantage of the SPSCM is that it can estimate bias (input and scale) in technical change in a fully flexible manner. We also use a translog IDF framework to estimate the TFP growth components. A panel of U.S. electricity generating plants for the period 1986–1998 is used for this purpose. Comparing the estimated TFP growth results from both the parametric and semiparametric models against Divisia TFP growth, we conclude that the SPSCM performs best in tracking the temporal behavior of TFP growth.
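A minimal sketch of the estimator family referred to here, assuming a toy data-generating process: a smooth coefficient regression y_i = b0(z_i) + b1(z_i) x_i + e_i whose coefficient functions are recovered by kernel-weighted least squares at each value of the smoothing variable.

```python
# Semiparametric smooth coefficient regression, estimated by local
# (kernel-weighted) least squares. The data are synthetic and illustrative.
import numpy as np

rng = np.random.default_rng(2)
n = 500
z = rng.uniform(0, 1, n)                 # smoothing variable (e.g. time or size)
x = rng.normal(size=n)                   # regressor (e.g. input growth)
b0, b1 = np.sin(2 * np.pi * z), 1 + z    # true smooth coefficient functions
y = b0 + b1 * x + 0.2 * rng.normal(size=n)

def smooth_coefficients(z0, h=0.1):
    """Kernel-weighted least squares at evaluation point z0."""
    w = np.exp(-0.5 * ((z - z0) / h) ** 2)       # Gaussian kernel weights
    X = np.column_stack([np.ones(n), x])
    XtW = X.T * w
    return np.linalg.solve(XtW @ X, XtW @ y)     # [b0_hat(z0), b1_hat(z0)]

for z0 in (0.25, 0.5, 0.75):
    b0_hat, b1_hat = smooth_coefficients(z0)
    print(f"z={z0:.2f}: b0_hat={b0_hat:.2f} (true {np.sin(2*np.pi*z0):.2f}), "
          f"b1_hat={b1_hat:.2f} (true {1+z0:.2f})")
```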
Estimation of productivity in Korean electric power plants: a semiparametric smooth coefficient model
Abstract:
This paper analyzes the impact of load factor, facility type and generator type on the productivity of Korean electric power plants. In order to capture important differences in the effect of load policy on power output, we use a semiparametric smooth coefficient (SPSC) model that allows us to model heterogeneous performance across power plants and over time by allowing the underlying technologies to be heterogeneous. The SPSC model accommodates both continuous and discrete covariates. Various specification tests are conducted to compare the performance of the SPSC model. Using a unique generator-level panel dataset spanning the period 1995-2006, we find that the impact of load factor, generator type and facility type on power generation varies substantially in magnitude and significance across different plant characteristics. The results have strong implications for generation policy in Korea, as outlined in this study.
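Because the SPSC model here mixes continuous covariates (load factor) with discrete ones (generator and facility type), the sketch below shows a generalized product kernel of the kind used for such mixed data: a Gaussian kernel for the continuous part times an Aitchison-Aitken kernel for the categorical part. Bandwidths and the tiny dataset are illustrative assumptions.

```python
# Generalized product kernel for mixed continuous/discrete covariates:
# Gaussian kernel for load factor times an Aitchison-Aitken kernel for the
# generator type. All values below are illustrative placeholders.
import numpy as np

def product_kernel(load, load0, gtype, gtype0, h=0.1, lam=0.2, n_cat=3):
    """Weight of each observation relative to the evaluation point (load0, gtype0)."""
    k_cont = np.exp(-0.5 * ((load - load0) / h) ** 2)                 # Gaussian
    k_disc = np.where(gtype == gtype0, 1.0 - lam, lam / (n_cat - 1))  # Aitchison-Aitken
    return k_cont * k_disc

load = np.array([0.55, 0.60, 0.72, 0.80, 0.85])    # load factors
gtype = np.array([0, 1, 1, 2, 0])                  # generator type codes
w = product_kernel(load, 0.70, gtype, 1)
print(np.round(w / w.sum(), 3))                    # normalised local weights
```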
Abstract:
Due to copyright restrictions, this item is only available for consultation at Aston University Library and Information Services, with prior arrangement.