977 results for Evolutionary algorithm (EA)
Abstract:
Studying the pathogenesis of an infectious disease like colibacillosis requires an understanding of the responses of target hosts to the organism both as a pathogen and as a commensal. The mucosal immune system constitutes the primary line of defence against luminal micro-organisms. The immunoglobulin-superfamily-based adaptive immune system evolved in the earliest jawed vertebrates, and the adaptive and innate immune system of humans, mice, pigs and ruminants co-evolved in common ancestors for approximately 300 million years. The divergence occurred only 100 mya and, as a consequence, most of the fundamental immunological mechanisms are very similar. However, since pressure on the immune system comes from rapidly evolving pathogens, immune systems must also evolve rapidly to maintain the ability of the host to survive and reproduce. As a consequence, there are a number of areas of detail where mammalian immune systems have diverged markedly from each other, such that results obtained in one species are not always immediately transferable to another. Thus, animal models of specific diseases need to be selected carefully, and the results interpreted with caution. Selection is made simpler where specific host species like cattle and pigs can be both target species and reservoirs for human disease, as in infections with Escherichia coli.
Abstract:
The Code for Sustainable Homes (the Code) will require new homes in the United Kingdom to be ‘zero carbon’ from 2016. Drawing upon an evolutionary innovation perspective, this paper contributes to a gap in the literature by investigating which low and zero carbon technologies are actually being used by house builders, rather than the prevailing emphasis on the potentiality of these technologies. Using the results from a questionnaire, three empirical contributions are made. First, house builders are selecting a narrow range of technologies. Second, these choices are made to minimise the disruption to their standard design and production templates (SDPTs). Finally, the coalescence around a small group of technologies is expected to intensify, with solar-based technologies predicted to become more important. This paper challenges the dominant technical rationality in the literature that technical efficiency and cost benefits are the primary drivers for technology selection. These drivers play an important role, but one which is mediated by the logic of maintaining the SDPTs of the house builders. This emphasises the need for construction diffusion of innovation theory to be problematized and developed within the context of business and market regimes constrained and reproduced by resilient technological trajectories.
Abstract:
For an increasing number of applications, mesoscale modelling systems now aim to better represent urban areas. The complexity of processes resolved by urban parametrization schemes varies with the application. The concept of fitness-for-purpose is therefore critical for both the choice of parametrizations and the way in which the scheme should be evaluated. A systematic and objective model response analysis procedure (Multiobjective Shuffled Complex Evolution Metropolis (MOSCEM) algorithm) is used to assess the fitness of the single-layer urban canopy parametrization implemented in the Weather Research and Forecasting (WRF) model. The scheme is evaluated regarding its ability to simulate observed surface energy fluxes and the sensitivity to input parameters. Recent amendments are described, focussing on features which improve its applicability to numerical weather prediction, such as a reduced and physically more meaningful list of input parameters. The study shows a high sensitivity of the scheme to parameters characterizing roof properties in contrast to a low response to road-related ones. Problems in partitioning of energy between turbulent sensible and latent heat fluxes are also emphasized. Some initial guidelines to prioritize efforts to obtain urban land-cover class characteristics in WRF are provided. Copyright © 2010 Royal Meteorological Society and Crown Copyright.
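The abstract's central finding (strong response to roof parameters, weak response to road parameters) can be illustrated with a much simpler one-at-a-time parameter sweep. The MOSCEM algorithm itself is a far more sophisticated multi-objective procedure; the toy model and the parameter names `roof_albedo` and `road_albedo` below are invented purely for illustration.

```python
def oat_sensitivity(model, base, delta=0.1):
    """One-at-a-time sensitivity sketch (a crude stand-in for MOSCEM):
    perturb each parameter by a +delta fraction and record the absolute
    change in the model output."""
    y0 = model(base)
    sens = {}
    for name, value in base.items():
        perturbed = dict(base)
        perturbed[name] = value * (1 + delta)
        sens[name] = abs(model(perturbed) - y0)
    return sens

# Hypothetical surrogate for a surface-flux scheme: output dominated
# by the roof parameter, mirroring the paper's qualitative finding.
flux = lambda p: 10.0 * p['roof_albedo'] + 0.1 * p['road_albedo']
ranking = oat_sensitivity(flux, {'roof_albedo': 0.2, 'road_albedo': 0.2})
```

With this surrogate, `ranking['roof_albedo']` dominates `ranking['road_albedo']`, which is the kind of signal used to prioritise which urban land-cover parameters are worth measuring accurately.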
Abstract:
In this paper a modified algorithm is suggested for developing polynomial neural network (PNN) models. Optimal partial description (PD) modeling is introduced at each layer of the PNN expansion, a task accomplished using the orthogonal least squares (OLS) method. Based on the initial PD models determined by the polynomial order and the number of PD inputs, OLS selects the most significant regressor terms, reducing the output error variance. The method produces PNN models exhibiting a high level of accuracy and superior generalization capabilities. Additionally, parsimonious models are obtained, comprising a considerably smaller number of parameters than the ones generated by the conventional PNN algorithm. Three benchmark examples are elaborated, including modeling of the gas furnace process as well as the iris and wine classification problems. Extensive simulation results and comparisons with other methods in the literature demonstrate the effectiveness of the suggested modeling approach.
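The OLS term-selection step can be sketched as a greedy forward search that keeps, at each pass, the candidate regressor giving the largest reduction in output error variance. This is a simplified matching-pursuit-style illustration of the idea, not the paper's exact orthogonalized PNN/OLS procedure:

```python
import numpy as np

def ols_forward_select(X, y, n_terms):
    """Greedy forward selection of regressor columns: at each step keep
    the column whose projection most reduces the residual sum of
    squared errors of y."""
    selected = []
    residual = y.astype(float).copy()
    for _ in range(n_terms):
        best_j, best_gain = None, 0.0
        for j in range(X.shape[1]):
            if j in selected:
                continue
            col = X[:, j]
            denom = col @ col
            if denom == 0:
                continue
            coef = (col @ residual) / denom
            gain = coef ** 2 * denom  # SSE reduction from this column
            if gain > best_gain:
                best_j, best_gain = j, gain
        if best_j is None:          # no candidate reduces the error
            break
        selected.append(best_j)
        col = X[:, best_j]
        residual = residual - ((col @ residual) / (col @ col)) * col
    return selected
```

When one column alone explains the output, the search stops early with a parsimonious model, which is the behaviour the abstract highlights.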
Abstract:
A new sparse kernel density estimator is introduced. Our main contribution is to develop a recursive algorithm that selects significant kernels one at a time using the minimum integrated square error (MISE) criterion. The proposed approach is simple to implement and the associated computational cost is very low. Numerical examples demonstrate that the proposed approach is effective in constructing sparse kernel density estimators with accuracy competitive with existing kernel density estimators.
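The one-at-a-time selection idea can be sketched as a greedy loop that repeatedly adds the kernel whose inclusion most reduces the squared error, measured on a grid, between the sparse mixture and the full Parzen estimate. This grid-based error is only a stand-in for the paper's analytic MISE criterion, and the bandwidth and data below are illustrative:

```python
import numpy as np

def gauss(x, c, h):
    # Gaussian kernel with centre c and bandwidth h
    return np.exp(-0.5 * ((x - c) / h) ** 2) / (h * np.sqrt(2 * np.pi))

def sparse_kde(data, n_kernels, h=0.5, grid=None):
    """Greedy sketch of one-at-a-time kernel selection against the
    full Parzen estimate (a stand-in for the MISE criterion)."""
    if grid is None:
        grid = np.linspace(data.min() - 3 * h, data.max() + 3 * h, 200)
    full = np.mean([gauss(grid, c, h) for c in data], axis=0)
    chosen = []
    for _ in range(n_kernels):
        best_j, best_err = None, np.inf
        for j in range(len(data)):
            if j in chosen:
                continue
            trial = chosen + [j]
            est = np.mean([gauss(grid, data[k], h) for k in trial], axis=0)
            err = np.mean((est - full) ** 2)
            if err < best_err:
                best_j, best_err = j, err
        chosen.append(best_j)
    return data[chosen]
```

On bimodal data the greedy loop places one kernel per mode, giving a sparse estimator that still tracks the full density.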
Abstract:
We have optimised the atmospheric radiation algorithm of the FAMOUS climate model on several hardware platforms. The optimisation involved translating the Fortran code to C and restructuring the algorithm around the computation of a single air column. Instead of the existing MPI-based domain decomposition, we used a task queue and a thread pool to schedule the computation of individual columns on the available processors. Finally, four air columns are packed together in a single data structure and computed simultaneously using Single Instruction Multiple Data operations. The modified algorithm runs more than 50 times faster on the CELL’s Synergistic Processing Elements than on its main PowerPC processing element. On Intel-compatible processors, the new radiation code runs 4 times faster. On the tested graphics processor, using OpenCL, we find a speed-up of more than 2.5 times as compared to the original code on the main CPU. Because the radiation code takes more than 60% of the total CPU time, FAMOUS executes more than twice as fast. Our version of the algorithm returns bit-wise identical results, which demonstrates the robustness of our approach. We estimate that this project required around two and a half man-years of work.
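The restructuring described above, packing several air columns into one data structure so they are computed with a single vectorised operation, and scheduling blocks through a task queue and thread pool, can be sketched as follows. The per-column computation is a placeholder; the real radiation code is far more involved:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def radiation_column(col):
    # Placeholder for the per-column radiative transfer computation
    return np.cumsum(col) * 0.5

def compute_packed(block):
    # Columns packed side by side are processed with one vectorised
    # (SIMD-like) operation instead of several scalar loops.
    return np.cumsum(block, axis=1) * 0.5

def run(columns, pack=4):
    """Pack columns in groups, then schedule each packed block on a
    thread pool (the task-queue pattern from the abstract)."""
    blocks = [np.stack(columns[i:i + pack])
              for i in range(0, len(columns), pack)]
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(compute_packed, blocks))
    return np.concatenate(results)
```

Because the packed computation applies the identical arithmetic to every column, its results match the scalar per-column version, analogous to the bit-wise reproducibility the authors report.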
Abstract:
Mathematics in Defence 2011 Abstract. We review transreal arithmetic and present transcomplex arithmetic. These arithmetics have no exceptions. This leads to incremental improvements in computer hardware and software. For example, the range of real numbers encoded by floating-point bits is doubled when all of the Not-a-Number (NaN) states in IEEE 754 arithmetic are replaced with real numbers. The task of programming such systems is simplified and made safer by discarding the unordered relational operator, leaving only the operators less-than, equal-to, and greater-than. The advantages of using a transarithmetic in a computation, or transcomputation as we prefer to call it, may be had by making small changes to compilers and processor designs. However, radical change is possible by exploiting the reliability of transcomputations to make pipelined dataflow machines with a large number of cores. Our initial designs are for a machine with of order one million cores. Such a machine can complete the execution of multiple in-line programs each clock tick.
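The claim that these arithmetics "have no exceptions" means every operation is total: division by zero returns a definite transreal value instead of raising an error. A toy sketch of transreal division, using the standard transreal values (positive and negative infinity, and nullity, written Φ):

```python
# Toy sketch of transreal division; the transreal rules are
# k/0 = +infinity for k > 0, k/0 = -infinity for k < 0, and
# 0/0 = nullity (Phi). Representing nullity as a string is an
# implementation choice for illustration only.
INF, NINF, NULLITY = float('inf'), float('-inf'), 'Phi'

def transreal_div(a, b):
    if a == NULLITY or b == NULLITY:
        return NULLITY            # nullity absorbs every operation
    if b == 0:
        if a > 0:
            return INF
        if a < 0:
            return NINF
        return NULLITY            # 0/0 = nullity, never an exception
    return a / b
```

No input makes the function raise, which is the property that lets transarithmetic hardware drop exception-handling paths.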
Abstract:
Enterprise Architecture (EA) has been recognised as an important tool in modern business management for closing the gap between strategy and its execution. The current literature implies that for EA to be successful, it should have clearly defined goals. However, the goals of different stakeholders are found to be different, even contradictory. In our explorative research, we seek answers to the questions: What kind of goals are set for the EA implementation? How do the goals evolve over time? Are the goals different among stakeholders? How do they affect the success of EA? We analysed an EA pilot conducted among eleven Finnish Higher Education Institutions (HEIs) in 2011. The goals of the pilot were gathered at three different stages: before, during and after the pilot, by means of a project plan, interviews during the pilot and a questionnaire after the pilot. The data was analysed using qualitative and quantitative methods. Eight distinct goals were recognised through coding: Adopt EA Method, Build Information Systems, Business Development, Improve Reporting, Process Improvement, Quality Assurance, Reduce Complexity, and Understand the Big Picture. The success of the pilot was analysed statistically on a scale of 1 to 5. Results revealed that goals set before the pilot were very different from those mentioned during or after the pilot. Goals before the pilot were mostly related to expected benefits from the pilot, whereas the most important result was to adopt the EA method. Results can be explained by the possibly different roles of respondents, which in turn were most likely caused by poor communication. Interestingly, goals mentioned by different stakeholders were not limited to their traditional areas of responsibility. For example, in some cases Chief Information Officers' goals were Quality Assurance and Process Improvement, whereas managers' goals were Build Information Systems and Adopt EA Method.
This could be a result of a good understanding of the meaning of EA, or of stakeholders not regarding EA as their concern at all. It is also interesting that, regardless of the different perceptions of goals among stakeholders, all HEIs felt the pilot to be successful. Thus the research does not provide support to confirm the link between clear goals and success.
Abstract:
Reinforcing the Low Voltage (LV) distribution network will become essential to ensure it remains within its operating constraints as demand on the network increases. The deployment of energy storage in the distribution network provides an alternative to conventional reinforcement. This paper presents a control methodology for energy storage to reduce peak demand in a distribution network based on day-ahead demand forecasts and historical demand data. The control methodology pre-processes the forecast data prior to a planning phase to build in resilience to the inevitable errors between the forecasted and actual demand. The algorithm uses no real-time adjustment, so it has an economic advantage over traditional storage control algorithms. Results show that peak demand on a single phase of a feeder can be reduced even when there are differences between the forecasted and the actual demand. In particular, results are presented that demonstrate that, when the algorithm is applied to a large number of single-phase demand aggregations, it is possible to identify which of these aggregations are the most suitable candidates for the control methodology.
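The planning phase can be sketched as a simple schedule built from the day-ahead forecast: discharge whenever forecast demand exceeds a target threshold, recharge in the off-peak headroom, within a fixed energy capacity. This is a simplified illustration, not the paper's forecast pre-processing method, and it ignores charge/discharge ordering and losses:

```python
def peak_shave_schedule(forecast, capacity, threshold):
    """Build a day-ahead net-demand schedule from a forecast
    (simplified sketch): shave everything above the threshold, scale
    back if the battery is too small, and spread recharging across
    the off-peak headroom."""
    discharge = [max(d - threshold, 0.0) for d in forecast]
    needed = sum(discharge)
    if needed > capacity:               # battery cannot cover the peak
        discharge = [x * capacity / needed for x in discharge]
    headroom = [max(threshold - d, 0.0) for d in forecast]
    total_head = sum(headroom)
    energy = sum(discharge)             # energy to recover off-peak
    charge = [h * energy / total_head if total_head else 0.0
              for h in headroom]
    return [d - x + c for d, x, c in zip(forecast, discharge, charge)]
```

Because the whole schedule is fixed in advance from the forecast, no real-time measurement or communication is required, which is the economic advantage the abstract claims.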
Abstract:
The current state of the art in the planning and coordination of autonomous vehicles is based upon the presence of speed lanes. In a traffic scenario where there is a large diversity between vehicles, the removal of speed lanes can generate a significantly higher traffic bandwidth. Vehicle navigation in such unorganized traffic is considered. An evolutionary trajectory planning technique has the advantages of making driving efficient and safe; however, it also has to surpass the hurdle of computational cost. In this paper, we propose a real-time genetic algorithm with Bezier curves for trajectory planning. The main contribution is the integration of vehicle following and overtaking behaviour for general traffic as heuristics for the coordination between vehicles. The resultant coordination strategy is fast and near-optimal. As the vehicles move, uncertainties may arise which are constantly adapted to, and may even lead to either the cancellation of an overtaking procedure or the initiation of one. Higher-level planning is performed by Dijkstra's algorithm, which indicates the route to be followed by the vehicle in a road network. Re-planning is carried out when a road blockage or obstacle is detected. Experimental results confirm the success of the algorithm subject to optimal high- and low-level planning, re-planning and overtaking.
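The trajectory representation is the standard cubic Bezier curve, whose four control points a genetic algorithm would tune; evaluating the curve is straightforward. The control points in the usage example are illustrative, not taken from the paper:

```python
def bezier(p0, p1, p2, p3, t):
    """Point on a cubic Bezier curve at parameter t in [0, 1].
    Control points are tuples of coordinates; a GA would search over
    the inner points p1 and p2 to shape an overtaking trajectory."""
    u = 1.0 - t
    return tuple(u**3 * a + 3 * u**2 * t * b + 3 * u * t**2 * c + t**3 * d
                 for a, b, c, d in zip(p0, p1, p2, p3))

# Illustrative lane-change curve from (0, 0) to (3, 1):
midpoint = bezier((0.0, 0.0), (1.0, 0.0), (2.0, 1.0), (3.0, 1.0), 0.5)
```

The curve interpolates its endpoints and stays inside the control polygon, which makes feasibility checks against road boundaries cheap during the evolutionary search.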
Abstract:
Through a close analysis of socio-biologist Sarah Blaffer Hrdy’s work on motherhood and ‘mirror neurons’, it is argued that Hrdy’s claims exemplify how research that ostensibly bases itself on neuroscience, including, in literary studies, ‘literary Darwinism’, relies after all not on scientific but on political assumptions, namely on underlying, unquestioned claims about the autonomous, transparent, liberal agent of consumer capitalism. These underpinning assumptions, it is further argued, involve the suppression or overlooking of an alternative, prior tradition of feminist theory, including feminist science criticism.
Abstract:
This paper seeks to chronicle the roots of corporate governance from its narrow shareholder perspective to the current burgeoning stakeholder approach, while giving cognizance to institutional investors and their effective role in ESG in light of the King Report III of South Africa. It is aimed at a critical review of the extant literature from the shareholder Cadbury epoch to the present-day King Report novelty. We aim to: (i) offer an analytical state of corporate governance in the Anglo-Saxon world, the Middle East and North Africa (MENA), Far East Asia and Africa; and (ii) illuminate the lead role the King Report of South Africa is playing as the bellwether of the stakeholder approach to corporate governance, as well as guiding the role of institutional investors in ESG.
Abstract:
There is accumulating evidence that macroevolutionary patterns of mammal evolution during the Cenozoic follow similar trajectories on different continents. This would suggest that such patterns are strongly determined by global abiotic factors, such as climate, or by basic eco-evolutionary processes such as filling of niches by specialization. The similarity of pattern would be expected to extend to the history of individual clades. Here, we investigate the temporal distribution of maximum size observed within individual orders globally and on separate continents. While the maximum sizes of individual orders of large land mammals show differences and comprise several families, the times at which orders reach their maximum size show strong congruence, peaking in the Middle Eocene, the Oligocene and the Plio-Pleistocene. The Eocene peak occurs when global temperature and land mammal diversity are high and is best explained as a result of niche expansion rather than abiotic forcing. Since the Eocene, there is a significant correlation between maximum size frequency and global temperature proxy. The Oligocene peak is not statistically significant and may in part be due to sampling issues. The peak in the Plio-Pleistocene occurs when global temperature and land mammal diversity are low; it is statistically the most robust one and is best explained by global cooling. We conclude that the macroevolutionary patterns observed are a result of the interplay between eco-evolutionary processes and abiotic forcing.