43 results for Large-Scale Optimization

in Aston University Research Archive


Relevance:

100.00%

Publisher:

Abstract:

When composing stock portfolios, managers frequently choose among hundreds of stocks. The stocks' risk properties are analyzed with statistical tools, and managers try to combine the stocks to meet the investors' risk profiles. A recently developed tool for performing such optimization is full-scale optimization (FSO). This methodology is highly flexible with respect to investor preferences, but computational limitations have until now made it infeasible when many stocks are considered. We apply the artificial intelligence technique of differential evolution to solve FSO-type stock selection problems with 97 assets. Differential evolution finds optimal solutions by iteratively learning from randomly drawn candidate solutions. We show that this search technique makes large-scale problems computationally feasible and that the solutions retrieved are stable. The study also lends further merit to the FSO technique, showing that its solutions suit investor risk profiles better than portfolios retrieved with traditional methods.
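
As a rough illustration of how a differential evolution search over portfolio weights can proceed, the Python sketch below evolves long-only weight vectors against an empirical return sample. The power utility objective, the DE parameters and the simplex projection are illustrative assumptions, not the authors' exact implementation.

import numpy as np

rng = np.random.default_rng(0)

def project_to_simplex(w):
    """Clip negatives and renormalise so weights are long-only and sum to 1."""
    w = np.clip(w, 0.0, None)
    s = w.sum()
    return w / s if s > 0 else np.full_like(w, 1.0 / len(w))

def expected_utility(w, returns, gamma=0.5):
    """Mean power utility over historical portfolio returns (illustrative objective)."""
    port = 1.0 + returns @ w              # gross return in each historical scenario
    return np.mean(port ** gamma / gamma)

def differential_evolution(returns, pop_size=60, gens=300, F=0.6, CR=0.9):
    n_assets = returns.shape[1]
    pop = np.array([project_to_simplex(rng.random(n_assets)) for _ in range(pop_size)])
    fitness = np.array([expected_utility(w, returns) for w in pop])
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = pop[rng.choice(pop_size, 3, replace=False)]
            mutant = project_to_simplex(a + F * (b - c))    # DE/rand/1 mutation
            cross = rng.random(n_assets) < CR               # binomial crossover
            trial = project_to_simplex(np.where(cross, mutant, pop[i]))
            f_trial = expected_utility(trial, returns)
            if f_trial > fitness[i]:                        # greedy selection
                pop[i], fitness[i] = trial, f_trial
    return pop[np.argmax(fitness)]

# Toy data: 500 scenarios of monthly returns for 97 assets.
returns = rng.normal(0.005, 0.05, size=(500, 97))
weights = differential_evolution(returns)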

Relevance:

100.00%

Publisher:

Abstract:

This article presents a potential method to assist developers of future bioenergy schemes when selecting from available suppliers of biomass materials. The method aims to allow tacit requirements placed on biomass suppliers to be considered at the design stage of new developments. It combines the Analytic Hierarchy Process and Quality Function Deployment methods (AHP-QFD). The output is a ranking and relative weighting of the available suppliers, which could be used to inform optimization techniques such as linear and goal programming. The paper is at a conceptual stage and no results have been obtained. The aim is to use the AHP-QFD method to bridge the gap between the treatment of explicit and tacit requirements of bioenergy schemes, allowing decision makers to identify the most successful supply strategy available.
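
For the AHP side of the proposed AHP-QFD method, the sketch below illustrates how criterion weights might be derived from a pairwise comparison matrix via its principal eigenvector; the criteria, comparison values and consistency check are hypothetical and are not taken from the paper.

import numpy as np

# Hypothetical pairwise comparisons of three supplier criteria
# (price, reliability, transport distance) on Saaty's 1-9 scale.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                  # principal eigenvalue
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                     # normalised criterion weights

# Consistency check: consistency index against the random index for a 3x3 matrix (RI = 0.58).
n = A.shape[0]
CI = (eigvals[k].real - n) / (n - 1)
CR = CI / 0.58
print("weights:", weights.round(3), "consistency ratio:", round(CR, 3))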

Relevance:

100.00%

Publisher:

Abstract:

The production of recombinant therapeutic proteins is an active area of research in drug development. These bio-therapeutic drugs target nearly 150 disease states and promise to bring better treatments to patients. However, if new bio-therapeutics are to be made more accessible and affordable, improvements in production performance and optimization of processes are necessary. A major challenge lies in controlling the effect of process conditions on the production of intact functional proteins. To achieve this, improved tools are needed for bio-processing; for example, process modeling and high-throughput technologies can be used to achieve quality by design, leading to improvements in productivity. Commercially, the most sought-after targets are secreted proteins because of their ease of handling in downstream procedures. This chapter outlines different approaches for the production and optimization of secreted proteins in the host Pichia pastoris. © 2012 Springer Science+Business Media, LLC.

Relevance:

100.00%

Publisher:

Abstract:

Data Envelopment Analysis (DEA) is one of the most widely used methods for measuring the efficiency and productivity of Decision Making Units (DMUs). DEA for a large dataset with many inputs and outputs requires substantial computer resources in terms of memory and CPU time. This paper proposes a neural-network back-propagation approach to Data Envelopment Analysis to address this problem for the very large datasets now emerging in practice. The neural network's requirements for computer memory and CPU time are far lower than those of conventional DEA methods, so the approach can be a useful tool for measuring the efficiency of large datasets. Finally, the back-propagation DEA algorithm is applied to five large datasets and compared with the results obtained by conventional DEA.
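
A minimal sketch of the underlying idea, training a small back-propagation network to approximate efficiency scores so that the expensive conventional DEA computation is only needed for a subset of DMUs, is given below. The network architecture, the toy data and the placeholder scores are assumptions, not the paper's configuration.

import numpy as np

rng = np.random.default_rng(1)

def train_mlp(X, y, hidden=16, lr=0.05, epochs=2000):
    """One-hidden-layer back-propagation regressor mapping DMU data to efficiency."""
    n, d = X.shape
    W1 = rng.normal(0, 0.5, (d, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.5, (hidden, 1)); b2 = np.zeros(1)
    for _ in range(epochs):
        h = np.tanh(X @ W1 + b1)                      # forward pass
        pred = (h @ W2 + b2).ravel()
        err = pred - y                                # squared-error gradient
        gW2 = h.T @ err[:, None] / n
        gb2 = err.mean(keepdims=True)
        dh = (err[:, None] @ W2.T) * (1 - h ** 2)     # back-propagate through tanh
        gW1 = X.T @ dh / n; gb1 = dh.mean(axis=0)
        W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1
    return lambda Xn: (np.tanh(Xn @ W1 + b1) @ W2 + b2).ravel()

# Toy setup: 200 DMUs described by 3 inputs and 2 outputs; assume efficiency
# scores for a small subset were already obtained with a conventional DEA solver.
data = rng.random((200, 5))
train_idx = rng.choice(200, 40, replace=False)
train_scores = rng.uniform(0.5, 1.0, 40)              # placeholder for real DEA scores
predict = train_mlp(data[train_idx], train_scores)
approx_scores = predict(data)                         # cheap approximate scores for all 200 DMUs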

Relevance:

100.00%

Publisher:

Abstract:

In this thesis, patterns of working hours in large-scale grocery retailing in Britain and France are compared. The research uses a cross-national comparative methodology, and the analysis is based on information derived from secondary sources and on empirical research in large-scale grocery retailing involving employers and trade unions at industry level and case studies at outlet level. The thesis begins by comparing national patterns of working hours in Britain and France over the post-war period. Subsequently, a detailed comparison of working hours in large-scale grocery retailing in the two countries is carried out through the analysis of secondary sources and empirical data. Emphasis is placed on analyzing part-time working hours, which are compared and contrasted at national level and explained in terms of supply and demand factors. The relationships between the structuring of, and satisfaction with, working hours and the factors determining women's integration in the workforce in Britain and France are investigated. Part-time hours in large-scale grocery retailing are then compared and contrasted within this broader analysis of working hours. The relationship between the structuring of working hours and satisfaction with them is examined in both countries through research with women part-timers in case-study outlets. The cross-national comparative methodology is used to examine whether dissimilar national contexts in Britain and France have led to different patterns of working hours in large-scale grocery retailing. The principal conclusion is that significant differences are found in the length, organization and flexibility of working hours, and that these differences can be attributed to dissimilar socio-economic, political and cultural contexts in the two countries.

Relevance:

100.00%

Publisher:

Abstract:

This thesis introduces and develops a novel real-time predictive maintenance system that estimates machine system parameters using the motion current signature. Recently, motion current signature analysis has been proposed as an alternative to the use of sensors for monitoring internal faults of a motor. A maintenance system based on the analysis of the motion current signature avoids the need to implement and maintain expensive motion-sensing technology. By developing nonlinear dynamical analysis for the motion current signature, the research described in this thesis implements a novel real-time predictive maintenance system for current and future manufacturing machine systems. A crucial concept underpinning this project is that the motion current signature contains information relating to the machine system parameters and that this information can be extracted using nonlinear mapping techniques, such as neural networks. Towards this end, a proof-of-concept procedure is performed which substantiates this concept. A simulation model, TuneLearn, is developed to simulate the large amount of training data required by the neural network approach. Statistical validation and verification of the model are performed to establish confidence in the simulated motion current signature. The validation experiments conclude that, although the simulation model generates a good macro-dynamical mapping of the motion current signature, it fails to accurately map the micro-dynamical structure, owing to a lack of knowledge regarding higher-order and nonlinear factors such as backlash and compliance. This failure to capture the micro-dynamical structure suggests the presence of nonlinearity in the motion current signature, which motivated surrogate data testing for nonlinearity. The results confirm the presence of nonlinearity in the motion current signature, thereby motivating the use of nonlinear techniques for further analysis. Finally, a linear reverse algorithm, BJEST, is developed and applied to the motion current signature to estimate the machine system parameters. The outcomes of the experiments show that nonlinear noise reduction combined with this linear reverse algorithm offers precise machine system parameter estimation from the motion current signature for the implementation of the real-time predictive maintenance system.
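
The surrogate data testing step can be illustrated with a generic phase-randomised surrogate test, sketched below; the toy signal, the nonlinearity statistic and the number of surrogates are illustrative and are not the thesis's actual data or test.

import numpy as np

rng = np.random.default_rng(2)

def phase_randomised_surrogate(x):
    """Surrogate with the same power spectrum as x but randomised Fourier phases."""
    X = np.fft.rfft(x)
    phases = rng.uniform(0, 2 * np.pi, len(X))
    phases[0] = 0.0                          # keep the mean (DC component) intact
    return np.fft.irfft(np.abs(X) * np.exp(1j * phases), n=len(x))

def time_reversal_asymmetry(x, lag=1):
    """Simple nonlinearity statistic; close to zero for linear Gaussian processes."""
    d = x[lag:] - x[:-lag]
    return np.mean(d ** 3) / np.mean(d ** 2) ** 1.5

# Toy stand-in for a motion current signature: a threshold autoregressive signal,
# i.e. a simple nonlinear process, observed over 4096 samples.
n = 4096
e = rng.normal(size=n)
x = np.zeros(n)
for t in range(1, n):
    a = 0.9 if x[t - 1] > 0 else 0.2
    x[t] = a * x[t - 1] + e[t]

stat = time_reversal_asymmetry(x)
surrogate_stats = [time_reversal_asymmetry(phase_randomised_surrogate(x)) for _ in range(99)]
# If the observed statistic lies outside the surrogate distribution, the
# linear-Gaussian null hypothesis is rejected, i.e. nonlinearity is indicated.
print(stat, np.percentile(surrogate_stats, [2.5, 97.5]))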

Relevance:

100.00%

Publisher:

Abstract:

This article investigates the performance of a recently presented model, Full-Scale Optimisation, which is used for financial investment advice. The investor's preferences regarding expected risk and return are entered into the model, and a recommended portfolio is produced. The model is theoretically more accurate than the mainstream investment advice model, Mean-Variance Optimisation, as it makes fewer assumptions. Compared with previous studies, our investigation covers a broader range of investor preferences and is more general with respect to investment type. It shows that Full-Scale Optimisation is more widely applicable than previously known.
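
The conceptual difference between the two models can be sketched as follows: a mean-variance objective scores a portfolio using only the first two moments of its returns, whereas a full-scale objective evaluates a chosen utility function on every scenario of the empirical return distribution. The loss-averse utility, toy returns and weights below are illustrative assumptions, not the article's specification.

import numpy as np

def mean_variance_score(w, returns, risk_aversion=4.0):
    """Mean-variance objective: only the first two moments of returns matter."""
    port = returns @ w
    return port.mean() - 0.5 * risk_aversion * port.var()

def full_scale_score(w, returns, loss_aversion=2.5):
    """Full-scale objective: expected value of an (assumed) loss-averse utility
    evaluated on every scenario of the empirical return distribution."""
    port = returns @ w
    utility = np.where(port >= 0, port, loss_aversion * port)
    return utility.mean()

rng = np.random.default_rng(3)
returns = rng.standard_t(df=4, size=(1000, 10)) * 0.03    # fat-tailed toy returns
w = np.full(10, 0.1)                                       # equally weighted portfolio
print(mean_variance_score(w, returns), full_scale_score(w, returns))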

Relevance:

100.00%

Publisher:

Abstract:

T-cell activation requires interaction of T-cell receptors (TCR) with peptide epitopes bound by major histocompatibility complex (MHC) proteins. This interaction occurs at a special cell-cell junction known as the immune or immunological synapse. Fluorescence microscopy has shown that the interplay among one agonist peptide-MHC (pMHC), one TCR and one CD4 provides the minimum complexity needed to trigger transient calcium signalling. We describe a computational approach to the study of the immune synapse. Using molecular dynamics simulation, we report here on a study of the smallest viable model, a TCR-pMHC-CD4 complex in a membrane environment. The computed structural and thermodynamic properties are in fair agreement with experiment. A number of biomolecules participate in the formation of the immunological synapse. Multi-scale molecular dynamics simulations may be the best opportunity we have to reach a full understanding of this remarkable supra-macromolecular event at a cell-cell junction.

Relevance:

100.00%

Publisher:

Abstract:

In this paper, we study the localization problem in large-scale Underwater Wireless Sensor Networks (UWSNs). Unlike in terrestrial positioning, the Global Positioning System (GPS) cannot work effectively underwater. The limited bandwidth, the severely impaired channel and the cost of underwater equipment all make the localization problem very challenging, and most current localization schemes are not well suited to deep underwater environments. We propose a hierarchical localization scheme to address these challenges. The scheme consists of four types of nodes: surface buoys, Detachable Elevator Transceivers (DETs), anchor nodes and ordinary nodes. Each surface buoy is assumed to be equipped with GPS on the water surface. A DET is attached to a surface buoy and can rise and descend to broadcast its position. Anchor nodes can compute their positions from the position information broadcast by the DETs and the measured distances to the DETs. The hierarchical localization scheme is scalable and can be used to balance cost against localization accuracy. Initial simulation results show the advantages of the proposed scheme. © 2009 IEEE.
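
The anchor-node positioning step can be illustrated with a standard linearised multilateration estimate from DET broadcast positions and range measurements; the sketch below uses hypothetical coordinates and noise levels, not the paper's simulation settings.

import numpy as np

def multilaterate(det_positions, distances):
    """Least-squares position estimate of an anchor node from DET reference
    positions and measured distances (linearised trilateration)."""
    p = np.asarray(det_positions, dtype=float)
    d = np.asarray(distances, dtype=float)
    # Subtract the first range equation to eliminate the quadratic unknown terms.
    A = 2 * (p[1:] - p[0])
    b = (d[0] ** 2 - d[1:] ** 2) + np.sum(p[1:] ** 2, axis=1) - np.sum(p[0] ** 2)
    est, *_ = np.linalg.lstsq(A, b, rcond=None)
    return est

# Hypothetical 3-D scenario: four DET broadcast positions (x, y, depth) and noisy
# range measurements from one anchor node located at roughly (40, 25, -80).
dets = [(0, 0, -50), (100, 0, -60), (0, 100, -55), (100, 100, -70)]
true_pos = np.array([40.0, 25.0, -80.0])
rng = np.random.default_rng(4)
ranges = [np.linalg.norm(true_pos - np.array(p)) + rng.normal(0, 0.5) for p in dets]
print(multilaterate(dets, ranges))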