903 results for Evolutionary Polynomial Regression (EPR) for HydroSystems
Abstract:
In this paper, we present a polynomial-based noise variance estimator for multiple-input multiple-output single-carrier block transmission (MIMO-SCBT) systems. It is shown that the optimal pilots for noise variance estimation satisfy the same condition as that for channel estimation. Theoretical analysis indicates that the proposed estimator is statistically more efficient than the conventional sum of squared residuals (SSR) based estimator. Furthermore, we obtain an efficient implementation of the estimator by exploiting its special structure. Numerical results confirm our theoretical analysis.
Abstract:
The Code for Sustainable Homes (the Code) will require new homes in the United Kingdom to be ‘zero carbon’ from 2016. Drawing upon an evolutionary innovation perspective, this paper contributes to a gap in the literature by investigating which low and zero carbon technologies are actually being used by house builders, rather than the prevailing emphasis on the potentiality of these technologies. Using the results from a questionnaire, three empirical contributions are made. First, house builders are selecting a narrow range of technologies. Second, these choices are made to minimise the disruption to their standard design and production templates (SDPTs). Finally, the coalescence around a small group of technologies is expected to intensify, with solar-based technologies predicted to become more important. This paper challenges the dominant technical rationality in the literature that technical efficiency and cost benefits are the primary drivers for technology selection. These drivers play an important role, but one which is mediated by the logic of maintaining the SDPTs of the house builders. This emphasises the need for construction diffusion of innovation theory to be problematized and developed within the context of business and market regimes constrained and reproduced by resilient technological trajectories.
Abstract:
In the present paper we study the approximation of functions with bounded mixed derivatives by sparse tensor product polynomials in positive order tensor product Sobolev spaces. We introduce a new sparse polynomial approximation operator which exhibits optimal convergence properties in L2 and a tensorized Sobolev norm [formula not rendered] simultaneously on a standard k-dimensional cube. In the special case k=2 the suggested approximation operator is also optimal in L2 and tensorized H1 (without essential boundary conditions). This allows the construction of an optimal sparse p-version FEM with sparse piecewise continuous polynomial splines, reducing the number of unknowns from O(p^2), needed for the full tensor product computation, to [formula not rendered], required for the suggested sparse technique, while preserving the same optimal convergence rate in terms of p. We apply this result to an elliptic differential equation and an elliptic integral equation with random loading and compute the covariances of the solutions with [formula not rendered] unknowns. Several numerical examples support the theoretical estimates.
Abstract:
In this article, we illustrate experimentally an important consequence of the stochastic component in choice behaviour which has so far gone unacknowledged: its potential to produce ‘regression to the mean’ (RTM) effects. We employ a novel approach to individual choice under risk, based on repeated multiple-lottery choices (i.e. choices among many lotteries), to show how the high degree of stochastic variability present in individual decisions can crucially distort certain results through RTM effects. We demonstrate the point in the context of a social comparison experiment.
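The RTM mechanism the abstract describes can be illustrated with a small simulation (a hypothetical sketch of ours, not the authors' experimental design): each simulated subject has a stable true score, every measurement adds independent noise, and a group selected for extreme first-round measurements looks markedly less extreme on a second, equally noisy measurement.

```python
import random

random.seed(42)

# Hypothetical illustration: stable "true" attitude plus independent
# measurement noise on each elicitation.
N = 10_000
true_scores = [random.gauss(0.0, 1.0) for _ in range(N)]
measure_1 = [t + random.gauss(0.0, 1.0) for t in true_scores]
measure_2 = [t + random.gauss(0.0, 1.0) for t in true_scores]

# Select the top decile on the first (noisy) measurement.
cutoff = sorted(measure_1)[int(0.9 * N)]
top = [i for i in range(N) if measure_1[i] >= cutoff]

# The same group's second measurement regresses toward the mean,
# because part of its first-round extremity was pure noise.
m1 = sum(measure_1[i] for i in top) / len(top)
m2 = sum(measure_2[i] for i in top) / len(top)
print(f"first measurement of top decile : {m1:.2f}")
print(f"second measurement of same group: {m2:.2f}")
```

With equal true-score and noise variances, roughly half of the selected group's apparent extremity vanishes on remeasurement, even though nothing about the subjects changed.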
Abstract:
Mathematics in Defence 2011 Abstract. We review transreal arithmetic and present transcomplex arithmetic. These arithmetics have no exceptions. This leads to incremental improvements in computer hardware and software. For example, the range of real numbers, encoded by floating-point bits, is doubled when all of the Not-a-Number (NaN) states in IEEE 754 arithmetic are replaced with real numbers. The task of programming such systems is simplified and made safer by discarding the unordered relational operator, leaving only the operators less-than, equal-to, and greater-than. The advantages of using a transarithmetic in a computation, or transcomputation as we prefer to call it, may be had by making small changes to compilers and processor designs. However, radical change is possible by exploiting the reliability of transcomputations to make pipelined dataflow machines with a large number of cores. Our initial designs are for a machine with of the order of one million cores. Such a machine can complete the execution of multiple in-line programs each clock tick.
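The hazard of the unordered relational operator that the abstract alludes to is easy to demonstrate: under IEEE 754, every ordered comparison involving NaN is false, so trichotomy fails. The sketch below (our illustration, not the authors' transreal hardware) shows this, and counts how many binary64 bit patterns the standard spends on NaN encodings:

```python
nan = float("nan")

# Under IEEE 754, NaN is "unordered" with respect to every value,
# including itself, so every ordered comparison returns False.
print(nan < 1.0)   # False
print(nan == nan)  # False
print(nan > 1.0)   # False

# Trichotomy fails: none of <, ==, > holds between NaN and 1.0.
assert not (nan < 1.0 or nan == 1.0 or nan > 1.0)

# Bit patterns binary64 spends on NaN: exponent bits all ones,
# non-zero significand, with the sign bit free.
significand_bits = 52
nan_patterns = 2 * (2**significand_bits - 1)
print(f"{nan_patterns:,} distinct binary64 NaN encodings")
```

It is these millions of redundant NaN encodings that transreal proposals suggest reassigning to useful values.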
Abstract:
An efficient two-level model identification method, aimed at maximising a model's generalisation capability, is proposed for a large class of linear-in-the-parameters models built from observational data. A new elastic net orthogonal forward regression (ENOFR) algorithm is employed at the lower level to carry out simultaneous model selection and elastic net parameter estimation. The two regularisation parameters in the elastic net are optimised using a particle swarm optimisation (PSO) algorithm at the upper level by minimising the leave-one-out (LOO) mean square error (LOOMSE). There are two original contributions. Firstly, an elastic net cost function is defined and applied based on orthogonal decomposition, which facilitates automatic model structure selection with no need for a predetermined error tolerance to terminate the forward selection process. Secondly, it is shown that the LOOMSE for the resultant ENOFR models can be computed analytically without actually splitting the data set, and the associated computational cost is small owing to the ENOFR procedure. Consequently a fully automated procedure is achieved without resort to any other validation data set for iterative model evaluation. Illustrative examples are included to demonstrate the effectiveness of the new approaches.
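The claim that the LOOMSE can be computed without actually splitting the data rests on a standard identity for linear-in-the-parameters least squares: the leave-one-out residual of point i equals the ordinary residual divided by (1 - h_ii), where h_ii is the leverage. The sketch below verifies this on a toy straight-line fit (our own illustration; it is not the ENOFR algorithm itself):

```python
# Toy check of the analytic leave-one-out identity that makes LOOMSE
# cheap for linear-in-the-parameters models:
#     e_loo_i = e_i / (1 - h_ii)
# where e_i is the ordinary residual and h_ii the leverage of point i.

def fit_line(xs, ys):
    """Ordinary least squares fit of y = a + b*x."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    a = my - b * mx
    return a, b

xs = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
ys = [0.1, 1.3, 1.9, 3.2, 3.8, 5.4]

a, b = fit_line(xs, ys)
n = len(xs)
mx = sum(xs) / n
sxx = sum((x - mx) ** 2 for x in xs)

loo_analytic = []
loo_refit = []
for i in range(n):
    # Analytic LOO residual via the leverage of point i.
    e_i = ys[i] - (a + b * xs[i])
    h_ii = 1.0 / n + (xs[i] - mx) ** 2 / sxx
    loo_analytic.append(e_i / (1.0 - h_ii))
    # Brute-force LOO residual: actually refit without point i.
    a_i, b_i = fit_line(xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:])
    loo_refit.append(ys[i] - (a_i + b_i * xs[i]))

# The two agree exactly (up to rounding): no data splitting needed.
assert all(abs(p - q) < 1e-9 for p, q in zip(loo_analytic, loo_refit))
loomse = sum(e * e for e in loo_analytic) / n
print(f"LOOMSE computed without any refitting: {loomse:.4f}")
```

In the orthogonal-decomposition setting of ENOFR the leverages come almost for free from quantities already maintained by the forward regression, which is why the abstract can describe the extra cost as small.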
Abstract:
Through a close analysis of sociobiologist Sarah Blaffer Hrdy’s work on motherhood and ‘mirror neurons’, it is argued that Hrdy’s claims exemplify how research that ostensibly bases itself on neuroscience, including ‘literary Darwinism’ in literary studies, relies not on scientific but on political assumptions, namely on underlying, unquestioned claims about the autonomous, transparent, liberal agent of consumer capitalism. These underpinning assumptions, it is further argued, involve the suppression or overlooking of an alternative, prior tradition of feminist theory, including feminist science criticism.
Abstract:
This paper seeks to chronicle the roots of corporate governance from its narrow shareholder perspective to the current burgeoning stakeholder approach, while giving cognizance to institutional investors and their effective role in ESG in light of the King Report III of South Africa. It is aimed at a critical review of the extant literature from the shareholder Cadbury epoch to the present-day King Report novelty. We aim to: (i) offer an analytical state of corporate governance in the Anglo-Saxon world, the Middle East and North Africa (MENA), Far East Asia and Africa; and (ii) illuminate the lead role the King Report of South Africa is playing as the bellwether of the stakeholder approach to corporate governance, as well as guiding the role of institutional investors in ESG.
Abstract:
Polymer-drug conjugates have demonstrated clinical potential in the context of anticancer therapy. However, such promising results have, to date, failed to translate into a marketed product. Polymer-drug conjugates rely on two factors for activity: (i) the presence of a defective vasculature, for passive accumulation of this technology into the tumour tissue (the enhanced permeability and retention (EPR) effect) and (ii) the presence of a specific trigger at the tumour site, for selective drug release (e.g., the enzyme cathepsin B). Here, we retrospectively analyse literature data to investigate which tumour types have proved more responsive to polymer-drug conjugates and to determine correlations with the magnitude of the EPR effect and/or expression of cathepsin B. Lung, breast and ovarian cancers showed the highest response rates (30%, 47% and 41%, respectively, for cathepsin-activated conjugates, and 31%, 43% and 40% across all conjugates). An analysis of literature data on cathepsin content in various tumour types showed that these tumour types had high cathepsin content (up to 3835 ng/mg for lung cancer), although marked heterogeneity was observed across different studies. In addition, these tumour types were also reported as having a high EPR effect. Our results suggest that pre-screening of the patient population could bring a more marked clinical benefit.
Abstract:
There is accumulating evidence that macroevolutionary patterns of mammal evolution during the Cenozoic follow similar trajectories on different continents. This would suggest that such patterns are strongly determined by global abiotic factors, such as climate, or by basic eco-evolutionary processes such as the filling of niches by specialization. The similarity of pattern would be expected to extend to the history of individual clades. Here, we investigate the temporal distribution of the maximum size observed within individual orders globally and on separate continents. While the maximum sizes of individual orders of large land mammals differ and comprise several families, the times at which orders reach their maximum size show strong congruence, peaking in the Middle Eocene, the Oligocene and the Plio-Pleistocene. The Eocene peak occurs when global temperature and land mammal diversity are high and is best explained as a result of niche expansion rather than abiotic forcing. Since the Eocene, there is a significant correlation between maximum size frequency and a global temperature proxy. The Oligocene peak is not statistically significant and may in part be due to sampling issues. The peak in the Plio-Pleistocene occurs when global temperature and land mammal diversity are low; it is statistically the most robust one and is best explained by global cooling. We conclude that the macroevolutionary patterns observed are a result of the interplay between eco-evolutionary processes and abiotic forcing.
Abstract:
A central process in evolution is the recruitment of genes to regulatory networks. We engineered immotile strains of the bacterium Pseudomonas fluorescens that lack flagella due to deletion of the regulatory gene fleQ. Under strong selection for motility, these bacteria consistently regained flagella within 96 hours via a two-step evolutionary pathway. Step 1 mutations increase intracellular levels of phosphorylated NtrC, a distant homologue of FleQ, which begins to commandeer control of the fleQ regulon at the cost of disrupting nitrogen uptake and assimilation. Step 2 is a switch-of-function mutation that redirects NtrC away from nitrogen uptake and towards its novel function as a flagellar regulator. Our results demonstrate that natural selection can rapidly rewire regulatory networks in very few, repeatable mutational steps.
Abstract:
An efficient data-based modeling algorithm for nonlinear system identification is introduced for radial basis function (RBF) neural networks, with the aim of maximizing generalization capability based on the concept of leave-one-out (LOO) cross validation. Each of the RBF kernels has its own kernel width parameter, and the basic idea is to optimize the multiple pairs of regularization parameters and kernel widths, each of which is associated with a kernel, one at a time within the orthogonal forward regression (OFR) procedure. Thus, each OFR step consists of one model term selection based on the LOO mean square error (LOOMSE), followed by the optimization of the associated kernel width and regularization parameter, also based on the LOOMSE. Since, as in our previous state-of-the-art local regularization assisted orthogonal least squares (LROLS) algorithm, the same LOOMSE is adopted for model selection, the proposed new OFR algorithm is also capable of producing a very sparse RBF model with excellent generalization performance. Unlike our previous LROLS algorithm, which requires an additional iterative loop to optimize the regularization parameters as well as an additional procedure to optimize the kernel width, the proposed new OFR algorithm optimizes both the kernel widths and regularization parameters within the single OFR procedure, and consequently the required computational complexity is dramatically reduced. Nonlinear system identification examples are included to demonstrate the effectiveness of this new approach in comparison to the well-known support vector machine and least absolute shrinkage and selection operator approaches, as well as the LROLS algorithm.
Abstract:
A new class of parameter estimation algorithms is introduced for Gaussian process regression (GPR) models. It is shown that the integration of the GPR model with probability distance measures, namely (i) the integrated square error and (ii) the Kullback–Leibler (K–L) divergence, is analytically tractable. An efficient coordinate descent algorithm is proposed that iteratively estimates the kernel width using golden section search, with a fast gradient descent algorithm as an inner loop to estimate the noise variance. Numerical examples are included to demonstrate the effectiveness of the new identification approaches.
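Golden section search, the one-dimensional method the coordinate descent relies on for the kernel width, is a standard bracketing algorithm. A minimal sketch follows, applied to a toy unimodal cost of our choosing rather than the paper's distance-based objective:

```python
import math

def golden_section_min(f, lo, hi, tol=1e-8):
    """Minimise a unimodal f on [lo, hi] by golden-section search.

    Each step shrinks the bracket by the factor 1/phi ~ 0.618 while
    keeping one interior probe point reusable. (For clarity this
    version re-evaluates both probes every iteration.)
    """
    invphi = (math.sqrt(5.0) - 1.0) / 2.0
    a, b = lo, hi
    c = b - invphi * (b - a)  # left probe
    d = a + invphi * (b - a)  # right probe
    while b - a > tol:
        if f(c) < f(d):
            # Minimum lies in [a, d]; reuse c as the new right probe.
            b, d = d, c
            c = b - invphi * (b - a)
        else:
            # Minimum lies in [c, b]; reuse d as the new left probe.
            a, c = c, d
            d = a + invphi * (b - a)
    return 0.5 * (a + b)

# Toy objective standing in for the kernel-width cost: unimodal in w,
# minimised at w = 2 (our example, not the paper's K-L objective).
cost = lambda w: (w - 2.0) ** 2 + 1.0
w_star = golden_section_min(cost, 0.0, 10.0)
print(f"estimated kernel width: {w_star:.6f}")
```

Because golden section search needs only function evaluations, no gradients, it suits kernel-width costs whose derivatives are awkward, while the smoother noise-variance direction can use the faster gradient inner loop the abstract describes.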
Abstract:
Classical regression methods take vectors as covariates and estimate the corresponding vectors of regression parameters. When addressing regression problems on covariates of more complex form such as multi-dimensional arrays (i.e. tensors), traditional computational models can be severely compromised by ultrahigh dimensionality as well as complex structure. By exploiting the special structure of tensor covariates, the tensor regression model provides a promising solution to reduce the model’s dimensionality to a manageable level, thus leading to efficient estimation. Most of the existing tensor-based methods independently estimate each individual regression problem based on tensor decomposition, which allows the simultaneous projections of an input tensor to more than one direction along each mode. In practice, multi-dimensional data are often collected under the same or very similar conditions, so that the data share some common latent components but can also have their own independent parameters for each regression task. Therefore, it is beneficial to analyse the regression parameters of all the regressions in a linked way. In this paper, we propose a tensor regression model based on Tucker decomposition, which simultaneously identifies not only the common components of parameters across all the regression tasks, but also the independent factors contributing to each particular regression task. Under this paradigm, the number of independent parameters along each mode is constrained by a sparsity-preserving regulariser. Linked multiway parameter analysis and sparsity modeling further reduce the total number of parameters, with lower memory cost than their tensor-based counterparts. The effectiveness of the new method is demonstrated on real data sets.
Abstract:
Bacteria have evolved complex regulatory networks that enable integration of multiple intracellular and extracellular signals to coordinate responses to environmental changes. However, our knowledge of how regulatory systems function and evolve is still relatively limited. There is often extensive homology between components of different networks, due to past cycles of gene duplication, divergence, and horizontal gene transfer, raising the possibility of cross-talk or redundancy. Consequently, evolutionary resilience is built into gene networks: homology between regulators can potentially allow rapid rescue of lost regulatory function across distant regions of the genome. In our recent study [Taylor et al., Science (2015), 347(6225)] we find that mutations that facilitate cross-talk between pathways can contribute to gene network evolution, but that such mutations come with severe pleiotropic costs. Arising from this work are a number of questions surrounding how this phenomenon occurs.