880 results for Fuzzy Multi-Objective Linear Programming


Relevance:

100.00%

Publisher:

Abstract:

We study the two-machine flow shop problem with an uncapacitated interstage transporter. The jobs have to be split into batches, and upon completion on the first machine, each batch has to be shipped to the second machine by a transporter. The best known heuristic for the problem is a –approximation algorithm that outputs a two-shipment schedule. We design a –approximation algorithm that finds schedules with at most three shipments, and this ratio cannot be improved, unless schedules with more shipments are created. This improvement is achieved due to a thorough analysis of schedules with two and three shipments by means of linear programming. We formulate problems of finding an optimal schedule with two or three shipments as integer linear programs and develop strongly polynomial algorithms that find solutions to their continuous relaxations with a small number of fractional variables.
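
As a minimal illustration of the last step described above (solving a continuous relaxation and inspecting how many variables come out fractional), the sketch below relaxes a toy 0-1 program with SciPy. It is not the authors' formulation of the flow shop problem; the instance data are made up.

```python
# Minimal sketch (not the authors' formulation): solve the continuous (LP)
# relaxation of a toy 0-1 program and count the fractional variables.
import numpy as np
from scipy.optimize import linprog

# Toy instance: maximise 5x1 + 4x2 + 3x3 subject to two knapsack-style rows,
# 0 <= x <= 1.  linprog minimises, so the objective is negated.
c = np.array([-5.0, -4.0, -3.0])
A_ub = np.array([[2.0, 3.0, 1.0],
                 [4.0, 1.0, 2.0]])
b_ub = np.array([5.0, 11.0])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * 3, method="highs")

x = res.x
fractional = [i for i, v in enumerate(x) if 1e-6 < v < 1 - 1e-6]
print("relaxation solution:", np.round(x, 3))
print("fractional variables:", fractional)
```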

Relevance:

100.00%

Publisher:

Abstract:

A standard problem within universities is that of teaching space allocation, which can be thought of as the assignment of rooms and times to various teaching activities. The focus is usually on courses that are expected to fit into one room. However, it can also happen that a course needs to be broken up, or ‘split’, into multiple sections. A lecture might be too large to fit into any one room. Another common example is that of seminars or tutorials: although hundreds of students may be enrolled on a course, it is often subdivided into events of particular types and sizes depending on the pedagogic requirements of that particular course. Typically, decisions as to how to split courses need to be made in the context of limited space availability. Institutions do not have an unlimited number of teaching rooms, and need to use effectively those that they do have. The efficiency of space usage is usually measured by the overall ‘utilisation’, which is the fraction of the available seat-hours that are actually used. A multi-objective optimisation problem naturally arises, with a trade-off between satisfying splitting preferences, increasing utilisation, and satisfying other constraints such as those based on event location and timetabling conflicts. In this paper, we explore such trade-offs. The explorations are based on a local search method that attempts to optimise space utilisation by means of a ‘dynamic splitting’ strategy. The local moves are designed to improve utilisation and satisfy the other constraints, but are also allowed to split, and un-split, courses so as to simultaneously meet the splitting objectives.
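
The abstract defines utilisation as the fraction of available seat-hours actually used; the short sketch below computes exactly that quantity for an invented set of rooms and allocated events. The splitting and local-search machinery is not modelled here, and all data are illustrative.

```python
# Hedged sketch: compute space utilisation as defined in the abstract above --
# the fraction of available seat-hours that are actually used.
# Rooms and events are illustrative, not data from the paper.

rooms = {"A": {"seats": 100, "hours": 40},   # available seat-hours = seats * hours
         "B": {"seats": 30,  "hours": 40}}

# Each allocated event: (room, students, hours)
events = [("A", 80, 10), ("A", 60, 5), ("B", 25, 20)]

available = sum(r["seats"] * r["hours"] for r in rooms.values())
used = sum(students * hours for _room, students, hours in events)

print(f"utilisation = {used / available:.2%}")
```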

Relevance:

100.00%

Publisher:

Abstract:

Incidence calculus is a mechanism for probabilistic reasoning in which sets of possible worlds, called incidences, are associated with axioms, and probabilities are then associated with these sets. Inference rules are used to deduce bounds on the incidence of formulae which are not axioms, and bounds for the probability of such a formula can then be obtained. In practice, an assignment of probabilities directly to axioms may be given, and it is then necessary to find an assignment of incidences which will reproduce these probabilities. We show that this task of assigning incidences can be viewed as a tree searching problem, and two techniques for performing this search are discussed. One of these is a new proposal involving a depth-first search, while the other incorporates a random element. A Prolog implementation of these methods has been developed. The two approaches are compared for efficiency and the significance of their results is discussed. Finally, we discuss a new proposal for applying techniques from linear programming to incidence calculus.
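
To make the depth-first-search idea concrete, the toy sketch below searches for a set of possible worlds (an "incidence") whose probabilities sum to one axiom's target probability. It deliberately ignores the logical relationships between axioms that the full calculus must respect, and it is not the Prolog implementation mentioned in the abstract; the worlds and probabilities are invented.

```python
# Hedged toy sketch of the depth-first idea: choose, for one axiom, a set of
# possible worlds whose probabilities sum to the axiom's target probability.

def dfs_incidence(worlds, target, tol=1e-9, chosen=None, i=0, total=0.0):
    """worlds: list of (name, probability); returns a set of names or None."""
    chosen = [] if chosen is None else chosen
    if abs(total - target) <= tol:
        return set(chosen)
    if i == len(worlds) or total > target + tol:
        return None
    name, p = worlds[i]
    # Branch 1: include world i in the incidence.
    found = dfs_incidence(worlds, target, tol, chosen + [name], i + 1, total + p)
    if found is not None:
        return found
    # Branch 2: exclude world i.
    return dfs_incidence(worlds, target, tol, chosen, i + 1, total)

worlds = [("w1", 0.1), ("w2", 0.2), ("w3", 0.3), ("w4", 0.4)]
print(dfs_incidence(worlds, 0.6))   # prints a set whose probabilities sum to 0.6
```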

Relevance:

100.00%

Publisher:

Abstract:

Over the past ten years, a variety of microRNA target prediction methods have been developed, and many of the methods are constantly improved and adapted to recent insights into miRNA-mRNA interactions. In a typical scenario, different methods return different rankings of putative targets, even if the ranking is reduced to selected mRNAs that are related to a specific disease or cell type. For experimental validation, it is then difficult to decide in which order to process the predicted miRNA-mRNA bindings, since each validation is a laborious task and therefore only a limited number of mRNAs can be analysed. We propose a new ranking scheme that combines ranked predictions from several methods and, unlike standard thresholding methods, utilises the concept of Pareto fronts as defined in multi-objective optimisation. In the present study, we attempt a proof of concept by applying the new ranking scheme to hsa-miR-21, hsa-miR-125b, and hsa-miR-373, using prediction scores supplied by PITA and RNAhybrid. The scores are interpreted as a two-objective optimisation problem, and the elements of the Pareto front are ranked by the STarMir score with a subsequent re-calculation of the Pareto front after removal of the top-ranked mRNA from the basic set of prediction scores. The method is evaluated on validated targets of the three miRNAs, and the ranking is compared to scores from DIANA-microT and TargetScan. We observed that the new ranking method performs well and consistently, and the first validated targets are elements of Pareto fronts at a relatively early stage of the recurrent procedure, which encourages further research towards a higher-dimensional analysis of Pareto fronts. (C) 2010 Elsevier Ltd. All rights reserved.
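
The sketch below illustrates the recurrent procedure described above: extract the Pareto front of two prediction scores, order the front by a third score, emit the best element, remove it, and recompute. It assumes lower values are better for every score, which is an assumption for illustration only; the candidate mRNAs and scores are made up, not PITA, RNAhybrid or STarMir output.

```python
# Hedged sketch of the recurrent Pareto-front ranking idea.

def pareto_front(items, key1, key2):
    """Items not dominated in the (key1, key2) minimisation sense."""
    front = []
    for a in items:
        dominated = any(
            key1(b) <= key1(a) and key2(b) <= key2(a)
            and (key1(b) < key1(a) or key2(b) < key2(a))
            for b in items
        )
        if not dominated:
            front.append(a)
    return front

# (mRNA, score_method_1, score_method_2, tie_break_score) -- illustrative only
candidates = [("m1", 0.2, 0.9, 0.5), ("m2", 0.4, 0.3, 0.1),
              ("m3", 0.8, 0.2, 0.7), ("m4", 0.5, 0.5, 0.9)]

ranking = []
pool = list(candidates)
while pool:
    front = pareto_front(pool, key1=lambda t: t[1], key2=lambda t: t[2])
    best = min(front, key=lambda t: t[3])      # rank the front by the third score
    ranking.append(best[0])
    pool.remove(best)                          # remove and recompute the front

print(ranking)
```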

Relevance:

100.00%

Publisher:

Abstract:

Call control features (e.g., call-divert, voice-mail) are primitive options to which users can subscribe off-line to personalise their service. The configuration of a feature subscription involves choosing and sequencing features from a catalogue and is subject to constraints that prevent undesirable feature interactions at run-time. When the subscription requested by a user is inconsistent, one problem is to find an optimal relaxation, which is a generalisation of the feedback vertex set problem on directed graphs, and thus it is an NP-hard task. We present several constraint programming formulations of the problem. We also present formulations using partial weighted maximum Boolean satisfiability and mixed integer linear programming. We study all these formulations by experimentally comparing them on a variety of randomly generated instances of the feature subscription problem.
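
As a toy illustration of "optimal relaxation" in the sense above (keep a maximum-value set of features whose precedence constraints can still be sequenced, i.e. remain acyclic), the brute-force sketch below checks every subset. It is not one of the paper's CP, MaxSAT or MILP encodings, and the features, weights and constraints are invented.

```python
# Hedged toy version of optimal relaxation: maximum-weight acyclic subset.
from itertools import combinations

features = {"divert": 3, "voicemail": 2, "screen": 1}     # feature -> weight
precedences = [("divert", "voicemail"),                    # a must come before b
               ("voicemail", "screen"),
               ("screen", "divert")]                        # this edge creates a cycle

def acyclic(nodes, edges):
    """Kahn-style check that the induced subgraph has a topological order."""
    edges = [(a, b) for a, b in edges if a in nodes and b in nodes]
    remaining = set(nodes)
    while remaining:
        source = next((n for n in remaining
                       if all(b != n for a, b in edges if a in remaining)),
                      None)
        if source is None:          # no source left: a cycle remains
            return False
        remaining.remove(source)
    return True

best = max((subset for r in range(len(features) + 1)
            for subset in combinations(features, r)
            if acyclic(set(subset), precedences)),
           key=lambda s: sum(features[f] for f in s))
print("optimal relaxation:", best)
```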

Relevance:

100.00%

Publisher:

Abstract:

Mathematical modelling has become an essential tool in the design of modern catalytic systems. Emissions legislation is becoming increasingly stringent, and so mathematical models of aftertreatment systems must become more accurate in order to provide confidence that a catalyst will convert pollutants over the required range of conditions. 
Automotive catalytic converter models contain several sub-models that represent processes such as mass and heat transfer, and the rates at which the reactions proceed on the surface of the precious metal. Of these sub-models, the prediction of the surface reaction rates is by far the most challenging due to the complexity of the reaction system and the large number of gas species involved. The reaction rate sub-model uses global reaction kinetics to describe the surface reaction rate of the gas species and is based on the Langmuir-Hinshelwood equation further developed by Voltz et al. [1]. The reactions can be modelled using the pre-exponential factors and activation energies of the Arrhenius equations, together with the inhibition terms.
The reaction kinetic parameters of aftertreatment models are found from experimental data, where a measured light-off curve is compared against a predicted curve produced by a mathematical model. The kinetic parameters are usually tuned manually to minimize the error between the measured and predicted data. This process is typically long, laborious and prone to misinterpretation, owing to the large number of parameters and the risk of multiple sets of parameters giving acceptable fits. Moreover, the number of coefficients increases greatly with the number of reactions, so the task of manually tuning the coefficients is becoming increasingly challenging.
In the present work, the authors have developed and implemented a multi-objective genetic algorithm to automatically optimize reaction parameters in AxiSuite® [2], a commercial aftertreatment model. The genetic algorithm was developed and expanded from the code presented by Michalewicz et al. [3] and was linked to AxiSuite using the Simulink add-on for Matlab.
The default kinetic values stored within the AxiSuite model were used to generate a series of light-off curves under rich conditions for a number of gas species, including CO, NO, C3H8 and C3H6. These light-off curves were used to generate an objective function. 
This objective function provided a measure of fit for the kinetic parameters. The multi-objective genetic algorithm was then used to search within specified limits for kinetic parameters that best match the objective function. In total, the pre-exponential factors and activation energies of ten reactions were optimized simultaneously.
The results reported here demonstrate that, given accurate experimental data, the optimization algorithm is successful and robust in defining the correct kinetic parameters of a global kinetic model describing aftertreatment processes.
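
The sketch below is only a hedged illustration of the fitting objective described above: an Arrhenius rate constant k = A·exp(-Ea/(R·T)) feeds a deliberately crude conversion model, and the objective is the squared error between predicted and "measured" light-off curves. It is not the AxiSuite® model, not the Langmuir-Hinshelwood inhibition terms, and not the genetic algorithm itself; all numbers are illustrative.

```python
# Hedged sketch of an Arrhenius-based light-off fitting objective.
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def conversion(T, A, Ea, scale=1.0):
    """Toy light-off curve: conversion rises as the Arrhenius rate grows."""
    k = A * np.exp(-Ea / (R * T))
    return k / (k + scale)          # saturates between 0 and 1

def objective(params, T, measured):
    """Sum of squared errors between predicted and measured conversion."""
    A, Ea = params
    return float(np.sum((conversion(T, A, Ea) - measured) ** 2))

T = np.linspace(400.0, 700.0, 31)                    # temperature sweep, K
measured = conversion(T, A=1.0e6, Ea=80_000.0)       # synthetic "measurement"
print(objective((5.0e5, 75_000.0), T, measured))     # fit of a candidate set
```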

Relevance:

100.00%

Publisher:

Abstract:

In this paper, we propose a novel finite impulse response (FIR) filter design methodology that reduces the number of operations, with the aim of reducing power consumption and enhancing performance. The novelty of our approach lies in generating filter coefficients that conform to a given low-power architecture while meeting the given filter specifications. The proposed algorithm is formulated as a mixed integer linear programming problem that minimizes the Chebyshev error and synthesizes coefficients composed of pre-specified alphabets. The modified coefficients can be used for low-power VLSI implementation of vector scaling operations, such as FIR filtering, using a computation sharing multiplier (CSHM). Simulations in 0.25 µm technology show that the CSHM FIR filter architecture can yield 55% power and 34% speed improvements compared to carry save multiplier (CSAM) based filters.
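
The sketch below only illustrates the error criterion named above: the Chebyshev (minimax) deviation of an FIR filter's magnitude response from a desired response on a frequency grid. It is not the paper's MILP synthesis; the coefficients (chosen from a small alphabet in the spirit of the pre-specified-alphabet constraint) and the desired response are illustrative, and a transition band is ignored for brevity.

```python
# Hedged sketch: Chebyshev error of an FIR filter against an ideal low-pass.
import numpy as np

def magnitude_response(h, w):
    """|H(e^{jw})| = |sum_n h[n] e^{-jwn}| evaluated on the grid w."""
    n = np.arange(len(h))
    return np.abs(np.exp(-1j * np.outer(w, n)) @ h)

w = np.linspace(0.0, np.pi, 256)
desired = np.where(w <= 0.4 * np.pi, 1.0, 0.0)        # ideal low-pass response

# Coefficients drawn from a small alphabet such as {0, ±0.25, ±0.5}.
h = np.array([0.25, 0.5, 0.5, 0.25])

chebyshev_error = np.max(np.abs(magnitude_response(h, w) - desired))
print(f"Chebyshev error: {chebyshev_error:.3f}")
```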

Relevance:

100.00%

Publisher:

Abstract:

The development of appropriate Electric Vehicle (EV) charging strategies has been identified as an effective way to accommodate an increasing number of EVs on Low Voltage (LV) distribution networks. Most research studies to date assume that future charging facilities will be capable of regulating charge rates continuously, while very few papers consider the more realistic situation of EV chargers that support only on-off charging functionality. In this work, a distributed charging algorithm applicable to on-off based charging systems is presented. Then, a modified version of the algorithm is proposed to incorporate real power system constraints. Both algorithms are compared with uncontrolled and centralized charging strategies from the perspective of both utilities and customers. © 2013 IEEE.
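
The toy below is not the paper's distributed algorithm; it only illustrates what "on-off charging functionality" means in practice: chargers can only be switched fully on or off, so at each time step some subset of EVs charges subject to a network limit. The priority rule (largest remaining energy need first), the feeder limit and the charge rate are all invented for illustration.

```python
# Hedged toy: on-off charger scheduling under a feeder power limit.
FEEDER_LIMIT_KW = 10.0
CHARGE_RATE_KW = 3.3          # fixed rate when a charger is on
STEP_HOURS = 0.5

evs = {"ev1": 6.0, "ev2": 2.0, "ev3": 5.0, "ev4": 1.0}   # kWh still needed

for step in range(6):
    # Highest remaining need first; switch on as many chargers as the limit allows.
    queue = sorted((e for e in evs if evs[e] > 0), key=evs.get, reverse=True)
    budget, on = FEEDER_LIMIT_KW, []
    for ev in queue:
        if budget >= CHARGE_RATE_KW:
            on.append(ev)
            budget -= CHARGE_RATE_KW
    for ev in on:
        evs[ev] = max(0.0, evs[ev] - CHARGE_RATE_KW * STEP_HOURS)
    remaining = {e: round(v, 2) for e, v in evs.items()}
    print(f"t={step}: on={on}, remaining={remaining}")
```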

Relevance:

100.00%

Publisher:

Abstract:

This work presents novel algorithms for learning Bayesian networks of bounded treewidth. Both exact and approximate methods are developed. The exact method combines mixed integer linear programming formulations for structure learning and treewidth computation. The approximate method consists in sampling k-trees (maximal graphs of treewidth k), and subsequently selecting, exactly or approximately, the best structure whose moral graph is a subgraph of that k-tree. The approaches are empirically compared to each other and to state-of-the-art methods on a collection of public data sets with up to 100 variables.
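
To make the object being sampled concrete, the sketch below grows a random k-tree (a maximal graph of treewidth k) by starting from a (k+1)-clique and repeatedly attaching a new vertex to an existing k-clique. This is the standard constructive definition of a k-tree, not the paper's sampling scheme, and the simple random choice here is not a uniform sampler.

```python
# Hedged sketch: construct a random k-tree (bounded-treewidth structure).
import itertools
import random

def random_k_tree(n, k, seed=0):
    """Return the edge set of a k-tree on vertices 0..n-1 (requires n >= k + 1)."""
    rng = random.Random(seed)
    edges = set(itertools.combinations(range(k + 1), 2))   # initial (k+1)-clique
    k_cliques = [frozenset(c) for c in itertools.combinations(range(k + 1), k)]
    for v in range(k + 1, n):
        clique = rng.choice(k_cliques)                      # attach v to this k-clique
        edges.update((min(u, v), max(u, v)) for u in clique)
        # The k-subsets of clique ∪ {v} that contain v are new k-cliques.
        k_cliques.extend(frozenset(c)
                         for c in itertools.combinations(clique | {v}, k)
                         if v in c)
    return edges

print(sorted(random_k_tree(n=7, k=2)))   # a treewidth-2 graph on 7 vertices
```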

Relevance:

100.00%

Publisher:

Abstract:

Traditional internal combustion engine vehicles are a major contributor to global greenhouse gas emissions and other air pollutants, such as particulate matter and nitrogen oxides. If tailpipe point emissions could be managed centrally without reducing commercial and personal user functionality, then one of the most attractive solutions for achieving a significant reduction of emissions in the transport sector would be the mass deployment of electric vehicles. Though electric vehicle sales are still hindered by battery performance, cost and a few other technological bottlenecks, focused commercialisation and support from government policies are encouraging large-scale electric vehicle adoption. The mass proliferation of plug-in electric vehicles is likely to bring a significant additional electric load onto the grid, creating a highly complex operational problem for power system operators. Electric vehicle batteries also have the ability to act as energy storage points on the distribution system. This dual charging-and-storage impact of many small, individually uncontrollable loads, since consumers will want maximum flexibility, on a distribution system that was not originally designed for such operation has the potential to be detrimental to grid balancing. Intelligent scheduling methods, if established correctly, could smoothly integrate electric vehicles onto the grid; they can help to avoid cycling of large combustion plant, reduce reliance on expensive fossil-fuel peaking plant, match renewable generation to electric vehicle charging, and avoid overloading the distribution system and degrading power quality. In this paper, scheduling methods to integrate plug-in electric vehicles are reviewed, examined and categorised based on their computational techniques. Thus, existing approaches covering analytical scheduling, conventional optimisation methods (e.g. linear and non-linear mixed integer programming, and dynamic programming), game theory, and meta-heuristic algorithms, including genetic algorithms and particle swarm optimisation, are all comprehensively surveyed, offering a systematic reference for grid scheduling that considers intelligent electric vehicle integration.

Relevance:

100.00%

Publisher:

Abstract:

A methodology is presented that combines a multi-objective evolutionary algorithm and artificial neural networks to optimise single-storey steel commercial buildings for net-zero carbon impact. Both symmetric and asymmetric geometries are considered in conjunction with regulated, unregulated and embodied carbon. Offsetting is achieved through photovoltaic (PV) panels integrated into the roof. Asymmetric geometries can increase the south-facing surface area and consequently allow for improved PV energy production. An exemplar carbon and energy breakdown of a retail unit located in Belfast UK with a south-facing PV roof is considered. It was found that, in most cases, regulated energy offsetting can be achieved with symmetric geometries. However, asymmetric geometries were necessary to account for the unregulated and embodied carbon. For buildings whose volume is large due to high eaves, carbon offsetting became increasingly difficult, and in certain cases was not possible. The use of asymmetric geometries was found to allow for lower embodied energy structures with carbon performance similar to that of symmetrical structures.

Relevance:

100.00%

Publisher:

Abstract:

Background
It has been argued that though correlated with mental health, mental well-being is a distinct entity. Despite the wealth of literature on mental health, less is known about mental well-being. Mental health is something experienced by individuals, whereas mental well-being can be assessed at the population level. Accordingly it is important to differentiate the individual and population level factors (environmental and social) that could be associated with mental health and well-being, and as people living in deprived areas have a higher prevalence of poor mental health, these relationships should be compared across different levels of neighbourhood deprivation.

Methods
A cross-sectional representative random sample of 1,209 adults from 62 Super Output Areas (SOAs) in Belfast, Northern Ireland (Feb 2010 – Jan 2011) was recruited in the PARC Study. Interview-administered questionnaires recorded data on socio-demographic characteristics, health-related behaviours, individual social capital, self-rated health, mental health (SF-8) and mental well-being (WEMWBS). Multi-variable linear regression analyses, with inclusion of clustering by SOAs, were used to explore the associations between individual and perceived community characteristics and mental health and mental well-being, and to investigate how these associations differed by the level of neighbourhood deprivation.

Results
Thirty-eight and 30 % of the variability in the measures of mental well-being and mental health, respectively, could be explained by individual factors and the perceived community characteristics. In the total sample and when stratified by neighbourhood deprivation, age, marital status and self-rated health were associated with both mental health and well-being, with the ‘social connections’ and local area satisfaction elements of social capital also emerging as explanatory variables. An increase of +1 in EQ-5D-3L was associated with +1 SD of the population mean in both mental health and well-being. Similarly, a change from ‘very dissatisfied’ to ‘very satisfied’ for local area satisfaction would result in +8.75 for mental well-being, but only in the more affluent areas.

Conclusions
Self-rated health was associated with both mental health and mental well-being. Of the individual social capital explanatory variables, ‘social connections’ was more important for mental well-being. Although similarities in the explanatory variables of mental health and mental well-being exist, socio-ecological interventions designed to improve them may not have equivalent impacts in rich and poor neighbourhoods.
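
As a hedged sketch of the analysis pattern described in the Methods (multivariable linear regression with clustering by Super Output Area), the code below fits an ordinary least squares model with cluster-robust standard errors in statsmodels. The data frame is synthetic and the variable names are illustrative, not the PARC Study's actual data or coding.

```python
# Hedged sketch: linear regression with cluster-robust errors by SOA.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "soa": rng.integers(0, 62, n),                  # 62 Super Output Areas
    "age": rng.integers(18, 85, n),
    "self_rated_health": rng.integers(1, 6, n),     # 1 (poor) .. 5 (excellent)
    "social_connections": rng.normal(0, 1, n),
})
# Synthetic well-being outcome loosely driven by the covariates.
df["wemwbs"] = (40 + 2.0 * df["self_rated_health"]
                + 1.5 * df["social_connections"] + rng.normal(0, 5, n))

model = smf.ols("wemwbs ~ age + self_rated_health + social_connections", data=df)
result = model.fit(cov_type="cluster", cov_kwds={"groups": df["soa"]})
print(result.params)
```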

Relevance:

100.00%

Publisher:

Abstract:

Recently there has been increasing interest in the development of new methods that use Pareto optimality to deal with multi-objective criteria (for example, accuracy and architectural complexity). Once a model has been learned with the devised method, the problem is then how to compare it with the state of the art. In machine learning, algorithms are typically evaluated by comparing their performance on different data sets by means of statistical tests. Unfortunately, the standard tests used for this purpose are not able to consider multiple performance measures jointly. The aim of this paper is to resolve this issue by developing statistical procedures that are able to account for multiple competing measures at the same time. In particular, we develop two tests: a frequentist procedure based on the generalized likelihood-ratio test and a Bayesian procedure based on a multinomial-Dirichlet conjugate model. We further extend them by discovering conditional independences among measures to reduce the number of parameters of such models, as the number of studied cases in such comparisons is usually small. Real data from a comparison among general-purpose classifiers are used to show a practical application of our tests.
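
The sketch below is in the spirit of the Bayesian procedure named above, not the paper's exact model: joint outcomes over data sets are treated as a multinomial with a Dirichlet prior, and the posterior is sampled to estimate how probable it is that one classifier beats another on both measures at once. The categories and counts are made up.

```python
# Hedged sketch: multinomial-Dirichlet posterior over joint comparison outcomes.
import numpy as np

# Categories: A wins on both measures / A wins on exactly one / A wins on none.
counts = np.array([14, 6, 4])            # outcomes over 24 data sets (fictional)
prior = np.array([1.0, 1.0, 1.0])        # symmetric Dirichlet prior

rng = np.random.default_rng(0)
posterior = rng.dirichlet(prior + counts, size=100_000)

# Posterior probability that "A wins on both" is the most likely outcome.
p_dominates = np.mean(posterior.argmax(axis=1) == 0)
print(f"P('A wins on both' is the most probable category) = {p_dominates:.3f}")
```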

Relevance:

100.00%

Publisher:

Abstract:

The development of rural road networks, especially in mountainous areas, is the key intervention for improving accessibility to settlements and public services, covering the largest possible number of settlements and public facilities while optimising the scarce resources available in developing countries. This study explores different models for organising rural road networks, considering the construction of new links or the improvement of existing roads. A method based on rural road network coverage is used to identify the nodal points that form the base rural network in a given region, which will cover a set of public services and settlements. The model is based on a rural road network typical ("backbone" and "branch") of the mountainous regions of Nepal. The proposed models provide a set of possible links to be established or improved and offer solutions for different budget levels that optimise transport costs on the network, considering different pavement types (earthen, granular or asphalt). A separate multi-objective model was developed to solve link-improvement problems within the network, considering two objectives, minimising user operating costs and maximising the population covered by the road network, for paved and unpaved links (earthen, granular or asphalt) within a given budget limit. The model gives the decision maker (DM) different efficient alternatives from which to make a final decision. These models, developed for rural road networks, are also applicable to other rural infrastructure networks, such as water supply, electricity and telecommunications. The implementation of the models on the rural road networks of the Gorkha and Lamjung districts of Nepal confirmed their applicability. The proposed models prove to be practical and realistic for studying improvement and development solutions for rural road networks in mountainous regions of developing countries.
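
The toy sketch below illustrates the bi-objective decision-support idea described above, not the thesis's actual model: upgrade plans within a budget are enumerated, each is evaluated on user operating cost and population covered, and the efficient (non-dominated) alternatives are kept for the decision maker. The link costs, savings and populations are invented.

```python
# Hedged toy: budget-constrained link upgrades, two objectives, efficient set.
from itertools import combinations

# link: (upgrade cost, user-cost saving, population newly covered)
links = {"L1": (40, 12, 3000), "L2": (25, 7, 1800),
         "L3": (30, 9, 2500), "L4": (20, 5, 1200)}
BUDGET = 70
BASE_USER_COST = 100

plans = []
for r in range(len(links) + 1):
    for plan in combinations(links, r):
        cost = sum(links[l][0] for l in plan)
        if cost <= BUDGET:
            user_cost = BASE_USER_COST - sum(links[l][1] for l in plan)
            covered = sum(links[l][2] for l in plan)
            plans.append((plan, user_cost, covered))

# Keep plans not dominated in (minimise user cost, maximise coverage).
efficient = [p for p in plans
             if not any(q[1] <= p[1] and q[2] >= p[2]
                        and (q[1] < p[1] or q[2] > p[2]) for q in plans)]
for plan, user_cost, covered in sorted(efficient, key=lambda p: p[1]):
    print(plan, "user cost:", user_cost, "population covered:", covered)
```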