941 results for combinatorial optimisation
Abstract:
We study the preservation of the periodic orbits of an A-monotone tree map f:T→T in the class of all tree maps g:S→S having a cycle with the same pattern as A. We prove that there is a period-preserving injective map from the set of (almost all) periodic orbits of f into the set of periodic orbits of each map in the class. Moreover, the relative positions of the corresponding orbits in the trees T and S (which need not be homeomorphic) are essentially preserved.
Abstract:
Combinatorial optimization involves finding an optimal solution in a finite set of options; many everyday problems are of this kind. However, the number of options grows exponentially with the size of the problem, so that an exhaustive search for the best solution is practically infeasible beyond a certain problem size. When efficient algorithms are not available, a practical approach to obtaining an approximate solution to the problem at hand is to start with an educated guess and gradually refine it until we have a good-enough solution. Roughly speaking, this is how local search heuristics work. These stochastic algorithms navigate the problem search space by iteratively turning the current solution into new candidate solutions, guiding the search towards better solutions. Search performance therefore depends on structural aspects of the search space, which in turn depend on the move operator used to modify solutions. A common way to characterize the search space of a problem is through the study of its fitness landscape, a mathematical object comprising the space of all possible solutions, their value with respect to the optimization objective, and a neighborhood relationship defined by the move operator. The landscape metaphor is used to explain the search dynamics as a sort of potential function; the concept is indeed similar to that of potential energy surfaces in physical chemistry. Borrowing ideas from that field, we propose to extend to combinatorial landscapes the notion of the inherent network formed by energy minima in energy landscapes. In our case, the energy minima are the local optima of the combinatorial problem, and we explore several definitions for the network edges. At first, we perform an exhaustive sampling of the local optima's basins of attraction, and define weighted transitions between basins by accounting for all the possible ways of crossing the basin frontier via one random move.
Then, we reduce the computational burden by counting only the chances of escaping a given basin via random kick moves that start at the local optimum. Finally, we approximate network edges from the search trajectories of simple search heuristics, mining the frequency and inter-arrival time with which the heuristic visits local optima. Through these methodologies, we build a weighted directed graph that provides a synthetic view of the whole landscape and that we can characterize using the tools of complex network science. We argue that this network characterization can advance our understanding of the structural and dynamical properties of hard combinatorial landscapes. We apply our approach to prototypical problems such as the Quadratic Assignment Problem, the NK model of rugged landscapes, and the Permutation Flow-shop Scheduling Problem. We show that some network metrics can differentiate problem classes, correlate with problem non-linearity, and predict problem hardness as measured from the performance of trajectory-based local search heuristics.
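As a minimal illustration of the first (exhaustive) methodology, the sketch below builds a local optima network for a toy random landscape over bit strings: every solution is mapped to its basin by hill climbing, and directed edge weights count one-move crossings between basins. The landscape, the one-bit-flip move operator, and all sizes are invented for the example and are not the problem instances studied in the thesis.

```python
import itertools
import random

random.seed(0)
N = 8
# Toy landscape: a random fitness value for every length-N bit string
# (a stand-in for NK-style instances).
fitness = {s: random.random() for s in itertools.product((0, 1), repeat=N)}

def neighbours(s):
    # One-bit-flip move operator defines the neighborhood relationship
    for i in range(len(s)):
        yield s[:i] + (1 - s[i],) + s[i + 1:]

def hill_climb(s):
    # Best-improvement hill climbing; returns the local optimum
    # attracting s, i.e. the basin s belongs to
    while True:
        best = max(neighbours(s), key=fitness.get)
        if fitness[best] <= fitness[s]:
            return s
        s = best

# Exhaustive basin sampling: map every solution to its attracting optimum
basin = {s: hill_climb(s) for s in fitness}
optima = set(basin.values())

# Weighted, directed edges: count every one-move crossing between basins
edges = {}
for s, o in basin.items():
    for n in neighbours(s):
        if basin[n] != o:
            edges[(o, basin[n])] = edges.get((o, basin[n]), 0) + 1

print(len(optima), "local optima,", len(edges), "directed edges")
```

The resulting weighted graph is what the thesis characterizes with complex-network metrics.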
Abstract:
AIMS: More than two billion people worldwide are deficient in key micronutrients. Single micronutrients have been used at high doses to prevent and treat dietary insufficiencies. Yet the impact of combinations of micronutrients in small doses aiming to improve lipid disorders and the corresponding metabolic pathways remains incompletely understood. Thus, we investigated whether a combination of micronutrients would reduce fat accumulation and atherosclerosis in mice. METHODS AND RESULTS: Lipoprotein receptor-null mice fed with an original combination of micronutrients incorporated into the daily chow showed reduced weight gain, body fat, plasma triglycerides, and increased oxygen consumption. These effects were achieved through enhanced lipid utilization and reduced lipid accumulation in metabolic organs and were mediated, in part, by the nuclear receptor PPARα. Moreover, the micronutrients partially prevented atherogenesis when administered early in life to apolipoprotein E-null mice. When the micronutrient treatment was started before conception, the anti-atherosclerotic effect was stronger in the progeny. This finding correlated with decreased post-prandial triglyceridaemia and vascular inflammation, two major atherogenic factors. CONCLUSION: Our data indicate beneficial effects of a combination of micronutrients on body weight gain, hypertriglyceridaemia, liver steatosis, and atherosclerosis in mice, and thus our findings suggest a novel cost-effective combinatorial micronutrient-based strategy worthy of being tested in humans.
Abstract:
Nowadays the variety of fuels used in power boilers is widening, and new boiler constructions and operating models have to be developed. This research and development is done in small pilot plants, where a faster analysis of the boiler mass and heat balance is needed in order to make the right decisions already during the test run. The barrier to determining the boiler balance during test runs is the long process of chemically analysing the collected input and output matter samples. The present work concentrates on finding a way to determine the boiler balance without chemical analyses and on optimising the test rig to achieve the best possible accuracy for the heat and mass balance of the boiler. The purpose of this work was to create an automatic boiler balance calculation method for the 4 MW CFB/BFB pilot boiler of Kvaerner Pulping Oy, located in Messukylä in Tampere. The calculation was created in the data management computer of the pilot plant's automation system. It is made in the Microsoft Excel environment, which provides a good base and functions for handling large databases and calculations without any delicate programming. The automation system of the pilot plant was reconstructed and updated by Metso Automation Oy during 2001, and the new MetsoDNA system has good data management properties, which are necessary for large calculations such as the boiler balance calculation. Two possible methods for calculating the boiler balance during a test run were found: either the fuel flow is determined and used to calculate the boiler's mass balance, or the unburned carbon loss is estimated and the mass balance is calculated on the basis of the boiler's heat balance. Both methods have their own weaknesses, so they were implemented in parallel in the calculation and the choice of method was left to the user. The user also needs to define the fuels used and some solid mass flows that are not measured automatically by the automation system.
Sensitivity analysis showed that the most essential values for accurate boiler balance determination are the flue gas oxygen content, the boiler's measured heat output, and the lower heating value of the fuel. The theoretical part of this work concentrates on the error management of these measurements and analyses, and on measurement accuracy and boiler balance calculation in theory. The empirical part concentrates on the creation of the balance calculation for the boiler in question and on describing the work environment.
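The direction of calculation in the second (heat-balance) method can be sketched as back-calculating the fuel mass flow from the measured heat output and the fuel's lower heating value. The function, the lumped efficiency parameter, and the example LHV below are assumptions made for the illustration, not values from the thesis.

```python
def fuel_flow_from_heat_balance(heat_output_kw, lhv_kj_per_kg,
                                boiler_efficiency=0.9):
    """Back-calculate fuel mass flow (kg/s) from measured heat output.

    boiler_efficiency lumps together flue-gas, radiation, and
    unburned-carbon losses; it is an assumed parameter for this
    sketch, not a figure from the thesis.
    """
    return heat_output_kw / (lhv_kj_per_kg * boiler_efficiency)

# 4 MW pilot boiler with an illustrative wood-like LHV of ~19 MJ/kg
print(round(fuel_flow_from_heat_balance(4000.0, 19000.0), 4))
```

The sensitivity result in the abstract is visible here: the answer scales directly with heat output and inversely with LHV, so errors in either propagate one-to-one into the fuel flow.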
Abstract:
BACKGROUND: Enteral nutrition (EN) is recommended for patients in the intensive-care unit (ICU), but it does not consistently achieve nutritional goals. We assessed whether delivery of 100% of the energy target from days 4 to 8 in the ICU with EN plus supplemental parenteral nutrition (SPN) could optimise clinical outcome. METHODS: This randomised controlled trial was undertaken in two centres in Switzerland. We enrolled patients on day 3 of admission to the ICU who had received less than 60% of their energy target from EN, were expected to stay for longer than 5 days, and to survive for longer than 7 days. We calculated energy targets with indirect calorimetry on day 3, or if not possible, set targets as 25 and 30 kcal per kg of ideal bodyweight a day for women and men, respectively. Patients were randomly assigned (1:1) by a computer-generated randomisation sequence to receive EN or SPN. The primary outcome was occurrence of nosocomial infection after cessation of intervention (day 8), measured until end of follow-up (day 28), analysed by intention to treat. This trial is registered with ClinicalTrials.gov, number NCT00802503. FINDINGS: We randomly assigned 153 patients to SPN and 152 to EN. 30 patients discontinued before the study end. Mean energy delivery between day 4 and 8 was 28 kcal/kg per day (SD 5) for the SPN group (103% [SD 18%] of energy target), compared with 20 kcal/kg per day (7) for the EN group (77% [27%]). Between days 9 and 28, 41 (27%) of 153 patients in the SPN group had a nosocomial infection compared with 58 (38%) of 152 patients in the EN group (hazard ratio 0·65, 95% CI 0·43-0·97; p=0·0338), and the SPN group had a lower mean number of nosocomial infections per patient (-0·42 [-0·79 to -0·05]; p=0·0248). 
INTERPRETATION: Individually optimised energy supplementation with SPN starting 4 days after ICU admission could reduce nosocomial infections and should be considered as a strategy to improve clinical outcome in patients in the ICU for whom EN is insufficient. FUNDING: Foundation Nutrition 2000Plus, ICU Quality Funds, Baxter, and Fresenius Kabi.
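The trial's fallback energy-target rule (25 or 30 kcal per kg of ideal bodyweight per day for women and men, used when day-3 indirect calorimetry was not possible) is simple arithmetic; a sketch, with the function name and interface invented here:

```python
def energy_target_kcal(ideal_bodyweight_kg, sex):
    """Fallback daily energy target from the trial protocol:
    25 kcal/kg for women, 30 kcal/kg for men (illustrative helper,
    not code from the study)."""
    per_kg = 25 if sex == "female" else 30
    return per_kg * ideal_bodyweight_kg

print(energy_target_kcal(60, "female"))  # 1500
print(energy_target_kcal(70, "male"))    # 2100
```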
Abstract:
Over 70% of the total costs of an end product are consequences of decisions made during the design process. A search for optimal cross-sections will often have only a marginal effect on the amount of material used if the geometry of a structure is fixed and if the cross-sectional characteristics of its elements are properly designed by conventional methods. In recent years, optimal geometry has become a central area of research in the automated design of structures. It is generally accepted that no single optimisation algorithm is suitable for all engineering design problems; an appropriate algorithm must therefore be selected individually for each optimisation situation. Modelling is the most time-consuming phase in the optimisation of steel and metal structures. In this research, the goal was to develop a method and computer program which reduce the modelling and optimisation time in structural design. The program needed an optimisation algorithm suitable for various engineering design problems. Because finite element modelling is commonly used in the design of steel and metal structures, the interaction between a finite element tool and the optimisation tool needed a practical solution. The developed method and computer programs were tested with standard optimisation tests and practical design optimisation cases. Three generations of computer programs were developed; they combine an optimisation problem modelling tool and an FE modelling program using three alternative methods. The modelling and optimisation were demonstrated in the design of a new boom construction and of the steel structures of flat and ridge roofs. This thesis demonstrates that the most time-consuming modelling phase is significantly shortened, modelling errors are reduced, and the results are more reliable.
A new selection rule for the evolution algorithm, which eliminates the need for constraint weight factors, was tested with optimisation cases of steel structures involving hundreds of constraints. The tested algorithm can be used nearly as a black box, without parameter settings or penalty factors for the constraints.
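The abstract does not spell out the selection rule, but a well-known way to eliminate constraint weight factors is a feasibility-first tournament comparison (in the spirit of Deb's parameter-free rule). The sketch below illustrates that general idea, not the thesis' exact rule; candidates are (objective, constraint-values) tuples with constraints in g(x) <= 0 form.

```python
def total_violation(constraints):
    # Sum of positive violations for constraints written as g(x) <= 0
    return sum(max(0.0, g) for g in constraints)

def better(a, b):
    """Parameter-free tournament comparison (illustrative, after
    Deb's feasibility rule, not the thesis' exact selection rule):
      1. a feasible candidate beats an infeasible one;
      2. between feasible candidates, the lower objective wins;
      3. between infeasible candidates, the smaller violation wins.
    No penalty weights are needed anywhere."""
    fa, va = a[0], total_violation(a[1])
    fb, vb = b[0], total_violation(b[1])
    if va == 0 and vb == 0:
        return a if fa <= fb else b
    if va == 0:
        return a
    if vb == 0:
        return b
    return a if va <= vb else b

x = (10.0, [0.0, -1.0])   # feasible, objective 10
y = (5.0, [0.5, 0.0])     # infeasible (violation 0.5), objective 5
print(better(x, y)[0])    # feasibility wins despite the worse objective
```

With hundreds of constraints, only the scalar total violation is ever compared, which is why no per-constraint weights or penalty factors are required.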
Abstract:
This thesis studies methods, practices, and object-oriented design patterns that lead to a reduction in software size, and investigates concrete means of optimising software size on the Symbian platform. The work focuses on C++ software designed to run on mobile phones and other wireless devices. A real end-user wireless application is presented, analysed, and optimised; the optimisation methods used and the results obtained are presented and analysed. Based on the experience gained from implementing the example application, recommendations for wireless application development are given. A good technical architecture design was found to play a significant role. Surprisingly, C++ class inheritance turned out to be the largest single factor affecting binary size on the Symbian operating system. Producing small programs requires skill and discipline, and developers' attitudes are usually the biggest obstacle: many people simply do not care about the size of the programs they write.
Abstract:
Rapid progress in recent years has accelerated the drug development process. Combinatorial chemistry has made it possible to synthesise large collections of structurally diverse molecules, so-called combinatorial libraries, for biological screening, in which the structure-related activity of the molecules is examined in a variety of biological assays to find possible "hits", some of which may later be developed into new drugs. For the results of biological studies to be reliable, the synthesised compounds must be as pure as possible; high-throughput (HTP) purification is therefore needed to guarantee high-quality compounds and reliable biological data. Ever-increasing throughput requirements have led to the automation and parallelisation of these purification techniques. Preparative LC/MS is well suited to the fast and efficient purification of combinatorial libraries. Many factors, such as the properties of the separation column and the flow gradient, affect the efficiency of the preparative LC/MS purification process, and these parameters must be optimised for the best result. In this work, basic compounds were studied under different flow conditions. A method for determining the purity level of combinatorial libraries after LC/MS purification was optimised, and the purity of some compounds from different libraries was determined before purification.
Abstract:
Several possible methods of increasing the efficiency and power of hydro power plants by improving the flow passages are investigated in this study. The theoretical background of diffuser design and its application to the optimisation of hydraulic turbine draft tubes is presented in the first part of this study, and several draft tube modernisation projects that have been carried out recently are discussed. Also, a method of increasing the efficiency of the draft tube by injecting a high-velocity jet into the boundary layer is presented. Methods of increasing the head of a hydro power plant by using an ejector or a jet pump are discussed in the second part of this work. The theoretical principles of various ejector and jet pump types are presented, and four different methods of calculating them are examined in more detail. A self-made computer code is used to calculate the gain in head for two example power plants; suitable ejector installations for the example plants are also discussed. The efficiency of the ejector power was found to be in the range of 6-15% for conventional head increasers, and 30% for the jet pump at its optimum operating point. In practice, it is impossible to install an optimised jet pump with a 30% efficiency into the draft tube, as this would considerably reduce the efficiency of the draft tube at normal operating conditions. This demonstrates, however, the potential for improvement that lies in conventional head increaser technology. This study is based on previous publications and on published test results; no actual laboratory measurements were made for this study. Certain aspects of modelling the flow in the draft tube using computational fluid dynamics are discussed in the final part of this work. The draft tube inlet velocity field is a vital boundary condition for such a calculation, and several previously measured velocity fields that have successfully been utilised in such flow calculations are presented herein.
Accelerated Microstructure Imaging via Convex Optimisation for regions with multiple fibres (AMICOx)
Abstract:
This paper reviews and extends our previous work to enable fast axonal diameter mapping from diffusion MRI data in the presence of multiple fibre populations within a voxel. Most existing microstructure imaging techniques use non-linear algorithms to fit their data models and are consequently computationally expensive and usually slow. Moreover, most of them assume a single axon orientation, while numerous regions of the brain actually present more complex configurations, e.g. fibre crossings. We present a flexible framework, based on convex optimisation, that enables fast and accurate reconstruction of the microstructure organisation, not limited to areas where the white matter is coherently oriented. We show through numerical simulations the ability of our method to correctly estimate the microstructure features (mean axon diameter and intra-cellular volume fraction) in crossing regions.
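The core computational idea, replacing a slow non-linear fit with a convex one, can be sketched as a non-negative least-squares fit of the signal against a dictionary of precomputed responses. The dictionary below is random and all sizes are illustrative; in the real framework the atoms come from simulated diffusion responses. A simple projected-gradient solver stands in for the dedicated convex solvers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in: the measured signal is a non-negative combination of
# dictionary atoms. Here the atoms are random columns; dimensions and
# the "true" weights are invented for the example.
n_meas, n_atoms = 60, 20
A = rng.random((n_meas, n_atoms))            # dictionary
w_true = np.zeros(n_atoms)
w_true[[3, 7]] = [0.7, 0.3]                  # two active compartments
y = A @ w_true                               # noiseless measurements

# Projected gradient descent for min ||A w - y||^2 s.t. w >= 0 —
# a minimal non-negative least-squares solver. Because the problem is
# convex, any solver finds the same global optimum, only faster.
step = 1.0 / np.linalg.norm(A, 2) ** 2
w = np.zeros(n_atoms)
for _ in range(50000):
    w = np.maximum(0.0, w - step * (A.T @ (A @ w - y)))

print(round(float(np.linalg.norm(A @ w - y)), 6))
```

Unlike a non-linear fit, there are no local minima to escape, which is where the speed and robustness claims of the framework come from.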
Abstract:
In cancer patients, malignant cells are often recognised and destroyed by the patient's cytotoxic T cells. For several years, research has therefore aimed at producing vaccines that sensitise the cells of adaptive immunity in order to prevent certain cancers. Although vaccines targeting CD8+ (cytotoxic) T cells have high in-vitro efficacy, a vaccine able to target both CD8+ and CD4+ T cells would be more effective (1-3). Indeed, helper (CD4+) T cells promote the production and maintenance of long-lived memory CD8+ T cells. There are many CD4+ T cell subtypes, and their action against cancer cells differs: for example, Treg lymphocytes have a strong pro-tumoural activity (4), whereas Th1 lymphocytes have an anti-tumoural activity (5). However, the natural frequency of the different tumour antigen-specific CD4+ T cell subtypes is variable. Moreover, a certain plasticity of the CD4+ T cell subtypes has recently been demonstrated (6); it could be targeted by vaccination protocols in which tumour antigens are administered together with defined adjuvants. To this end, we must deepen our knowledge of the role of antigen-specific CD4+ T cells in anti-tumour immunity and know precisely the proportions of the CD4+ T cell subtypes activated before and after vaccination. The analysis of T cells by flow cytometry is very often limited by the very large number of cells needed for protein expression analysis. For tumour antigen-specific CD4+ T cells this technique is often not applicable, because these cells are present in very small numbers in the blood and in tumour tissues.
For this reason, an approach based on the analysis of individual T cells was established to study the gene expression profile of CD8+ and CD4+ T cells (7,8). Method: This new single-cell protocol was developed from a modification of the RT-PCR protocol, which allows the specific detection of complementary DNA (cDNA) after global transcription of the messenger RNA (mRNA) expressed by an individual T cell. In this work, we optimise this new analysis technique for CD4+ T cells by selecting the best primers. First, clones with known functional profiles are generated by flow cytometry from the CD4+ T cells of a healthy donor; for this primer-optimisation step, the antigen specificity of the CD4+ T cells is not taken into consideration, so these clones can be studied and sorted by flow cytometry. Then, using the single-cell protocol, we test by PCR the primers for the factors specific to each CD4+ T cell subtype on aliquots derived from single cells of the generated clones, and we select the primers with the best sensitivity, specificity, and positive and negative predictive values (9). Conclusion: In this work we generated cDNA from individual T cells and selected twelve primer pairs for identifying CD4+ T cell subtypes by single-cell PCR analysis: the Th2-specific factors IL-4, IL-5, IL-13, CRTh2, and GATA3; the Th1-specific factors TNFα and IL-2; the Treg-specific factors FOXP3 and IL-2RA; the Th17-specific factors RORC and CCR6; and one factor specific to naive cells, CCR7. These primers can be used in the future in combination with antigen-specific cells sorted by pMHCII multimer staining.
This method will make it possible to understand the role as well as the magnitude and functional diversity of the antigen-specific CD4+ T cell response in cancers and other diseases, in order to refine research in oncological immunotherapy (8).
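The four criteria used to rank candidate primer pairs are standard diagnostic-test metrics computed from true/false positive and negative counts. The helper and the counts below are invented for the illustration, not data from the study.

```python
def primer_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, PPV, and NPV — the four criteria the
    abstract says were used to select primer pairs. Illustrative
    helper; tp/fp/tn/fn are counts from testing one primer pair
    against a known reference."""
    return {
        "sensitivity": tp / (tp + fn),   # detected among true positives
        "specificity": tn / (tn + fp),   # rejected among true negatives
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }

# Invented counts for one primer pair tested on single-cell aliquots
m = primer_metrics(tp=18, fp=2, tn=27, fn=3)
print({k: round(v, 3) for k, v in m.items()})
```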
Abstract:
Data traffic caused by mobile advertising client software communicating with the network server can be a pain point for application developers considering advertising-funded application distribution, since the cost of the data transfer might scare users away from their applications. For this thesis project, a simulation environment was built to mimic the real client-server solution and to measure data transfer over varying connection types with different usage scenarios. To optimise the data transfer, a few general-purpose compressors and XML-specific compressors were tried on the XML data, and a few protocol optimisations were implemented. To optimise the cost, cache usage was improved and pre-loading was enhanced to use free connections for loading the data. The data traffic structure and the various optimisations were analysed, and it was found that cache usage and pre-loading should be enhanced and that the protocol should be changed, with report aggregation and compression using WBXML or gzip.
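The gain from the recommended compression step can be sketched with the standard library's gzip (WBXML, being a binary XML encoding rather than a general compressor, is not covered here). The report payload and its element names are invented for the example; repetitive XML like this compresses very well, which is the point of the recommendation.

```python
import gzip

# Invented ad-report payload: many near-identical elements, as an
# aggregated report would contain.
xml = ("<report>" +
       "".join(f'<ad id="{i}" shown="5" clicked="0"/>' for i in range(100)) +
       "</report>").encode("utf-8")

packed = gzip.compress(xml)
print(len(xml), "bytes raw,", len(packed), "bytes gzipped,",
      round(len(packed) / len(xml), 2), "ratio")
```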
Abstract:
Fine powders of minerals are commonly used in the paper and paint industries and for ceramics, so research into utilising different waste materials in these applications is environmentally important. In this work, the ultrafine grinding of two waste gypsum materials, namely FGD (Flue Gas Desulphurisation) gypsum and phosphogypsum from a phosphoric acid plant, with an attrition bead mill and with a jet mill has been studied. The objective of this research was to test the suitability of the attrition bead mill and of the jet mill for producing gypsum powders with a particle size of a few microns. The grinding conditions were optimised by studying the influence of different operational grinding parameters on the grinding rate and on the energy consumption of the process, in order to achieve a product fineness such as that required in the paper industry with as low an energy consumption as possible. Based on the experimental results, the most influential parameters in attrition grinding were found to be the bead size, the stirrer type, and the stirring speed. The best conditions for the attrition grinding process, based on product fineness and the specific energy consumption of grinding, are to grind the material with small grinding beads and a high rotational speed of the stirrer; by also using a suitable grinding additive, a finer product is achieved with lower energy consumption. In jet mill grinding the most influential parameters were the feed rate, the volumetric flow rate of the grinding air, and the height of the internal classification tube. The optimised condition for the jet mill is to grind with a small feed rate and a large volumetric flow rate of grinding air when the internal tube is low. A finer product at a higher production rate was achieved with the attrition bead mill than with the jet mill; thus attrition grinding is better suited to the ultrafine grinding of gypsum than jet grinding.
Finally, the suitability of the population balance model for the simulation of grinding processes was studied with different S, B, and C functions. A new S function for modelling an attrition mill and a new C function for modelling a jet mill were developed. The suitability of the selected models with the developed grinding functions was tested by curve-fitting the particle size distributions of the grinding products and then comparing the fitted size distributions to the measured particle sizes. According to the simulation results, the models are suitable for the estimation and simulation of the studied grinding processes.
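The batch-grinding form of the population balance model can be sketched with a standard discretised selection/breakage formulation: each size class i loses mass at rate S_i and receives the fragments of coarser classes according to a breakage distribution b. The S values and the uniform breakage split below are invented for the example and are not the thesis' fitted functions.

```python
import numpy as np

# Size classes 0..n-1 from coarse to fine. S[i] is the selection
# (breakage rate) function; b[i, j] is the mass fraction of material
# broken out of class j that lands in class i. Columns of b sum to 1,
# so total mass is conserved. All values are illustrative.
n = 5
S = np.array([0.8, 0.6, 0.4, 0.2, 0.0])   # the finest class cannot break
b = np.zeros((n, n))
for j in range(n - 1):
    b[j + 1:, j] = 1.0 / (n - 1 - j)       # uniform split into finer classes

m = np.array([1.0, 0.0, 0.0, 0.0, 0.0])   # all mass starts in the coarsest class
dt, steps = 0.01, 1000                      # explicit Euler over 10 time units
for _ in range(steps):
    # dm/dt = b @ (S * m) - S * m  (births from coarser classes - deaths)
    m = m + dt * (b @ (S * m) - S * m)

print(np.round(m, 3), "total mass:", round(float(m.sum()), 6))
```

Mass migrates monotonically toward the finest class while the total stays at 1.0, which is the sanity check applied to any S/B parameterisation before curve-fitting it to measured size distributions.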