982 results for optimisation methods
Abstract:
DUE TO COPYRIGHT RESTRICTIONS ONLY AVAILABLE FOR CONSULTATION AT ASTON UNIVERSITY LIBRARY AND INFORMATION SERVICES WITH PRIOR ARRANGEMENT
Abstract:
Originally from Asia, Dovyalis hebecarpa is a dark purple/red exotic berry now also produced in Brazil. However, no reports were found in the literature on phenolic extraction from, or characterisation of, this berry. In this study we evaluated the optimisation of anthocyanin and total phenolic extraction from D. hebecarpa berries, aiming at the development of a simple and mild analytical technique. Multivariate analysis was used to optimise the extraction variables (ethanol:water:acetone solvent proportions, times, and acid concentrations) at different levels. Acetone/water (20/80 v/v) gave the highest anthocyanin extraction yield, but pure water and different proportions of acetone/water or acetone/ethanol/water (with >50% water) were also effective. Neither acid concentration nor time had a significant effect on extraction efficiency, allowing the recommended parameters to be fixed at the lowest values tested (0.35% formic acid v/v and 17.6 min). Under optimised conditions, extraction efficiencies were increased by 31.5% and 11% for anthocyanins and total phenolics, respectively, compared with traditional methods that use more solvent and time. The optimised methodology thus increased yields while being less hazardous and less time-consuming than traditional methods. Finally, freeze-dried D. hebecarpa showed a high content of the target phytochemicals (319 mg/100 g and 1,421 mg/100 g of total anthocyanins and total phenolics, respectively).
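The mixture-design screening described above varies ethanol:water:acetone proportions over a simplex. As a minimal sketch of how such design points can be generated (the response values would, of course, come from the actual extractions; the function name and step size here are illustrative):

```python
from itertools import product

def mixture_grid(step=0.2):
    """Enumerate ethanol/water/acetone proportions that sum to 1.0 at the given step."""
    n = round(1 / step)
    grid = []
    for e, w in product(range(n + 1), repeat=2):
        a = n - e - w          # acetone takes whatever proportion remains
        if a >= 0:
            grid.append((e * step, w * step, a * step))
    return grid

# With a 20% step this yields 21 candidate solvent mixtures, one of which
# corresponds to the reported optimum acetone/water (20/80 v/v).
design_points = mixture_grid(0.2)
```

Each design point would then be run (in randomised order, with replicates) and the anthocyanin yield modelled as a function of the composition.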
Abstract:
Activated sludge comprises a complex microbial community. The structure (which microorganisms are present) and function (what the organisms can do, and at what rates) of this community are determined by external physico-chemical conditions and by the influent to the sewage treatment plant. We can manipulate the external conditions, but rarely the influent. Conventional control and operational strategies optimise activated sludge processes as a chemical system more than as a biological one. While optimising the process over a short period, these strategies may degrade its long-term performance through their potentially adverse impact on microbial properties. By briefly reviewing the evidence in the literature that plant design and operation affect both the structure and the function of the microbial community in activated sludge, we propose adding sludge population optimisation as a new dimension to the control of biological wastewater treatment systems. We stress that optimising the microbial community's structure and properties should be an explicit aim of the design and operation of a treatment plant. The major limitations to sludge population optimisation revolve around inadequate microbiological data, specifically community structure, function and kinetic data. However, molecular microbiological methods that strive to provide these data are developing rapidly. The combination of these methods with conventional approaches to kinetic study is briefly discussed, and the most pressing research questions pertaining to sludge population optimisation are outlined. (C) 2002 Elsevier Science Ltd. All rights reserved.
Abstract:
The problem of designing spatially cohesive nature reserve systems that meet biodiversity objectives is formulated as a nonlinear integer programming problem. The multiobjective function minimises a combination of boundary length, area, and failed representation of the biological attributes we are trying to conserve. The task is to reserve the subset of sites that best meets this objective. We use data on the distribution of habitats in the Northern Territory, Australia, to show how simulated annealing and a greedy heuristic algorithm can be used to generate good solutions to such large reserve design problems, and to compare the effectiveness of these methods.
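To make the simulated-annealing approach concrete, here is a deliberately simplified sketch on a toy version of the problem: it maximises represented biological value under an area budget, with the spatial boundary-length term of the real multiobjective function omitted for brevity. All names and the penalty weight are illustrative, not taken from the paper.

```python
import math
import random

def anneal_reserve(costs, benefits, budget, steps=5000, t0=1.0, seed=0):
    """Pick a subset of sites maximising benefit under an area budget, by simulated annealing.
    costs/benefits: per-site lists. The real objective also penalises boundary length."""
    rng = random.Random(seed)
    n = len(costs)
    sel = [False] * n

    def score(s):
        c = sum(costs[i] for i in range(n) if s[i])
        b = sum(benefits[i] for i in range(n) if s[i])
        return b - 10.0 * max(0.0, c - budget)       # penalise budget overrun

    best, best_score = sel[:], score(sel)
    cur_score = best_score
    for step in range(steps):
        t = t0 * (1 - step / steps) + 1e-9           # linear cooling schedule
        i = rng.randrange(n)
        sel[i] = not sel[i]                          # propose: flip one site in/out
        new_score = score(sel)
        if new_score >= cur_score or rng.random() < math.exp((new_score - cur_score) / t):
            cur_score = new_score                    # accept (uphill always, downhill sometimes)
            if cur_score > best_score:
                best, best_score = sel[:], cur_score
        else:
            sel[i] = not sel[i]                      # reject: undo the flip
    return best, best_score

sel, sc = anneal_reserve(costs=[1, 1, 1, 1], benefits=[5, 1, 4, 2], budget=2)
```

On this toy instance the algorithm selects the two sites with the highest benefit that fit the budget; a greedy heuristic would simply add sites in order of benefit per unit cost until the budget is exhausted.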
Abstract:
This review aims to identify strategies to optimise radiography practice using digital technologies for full-spine studies on paediatric patients, focusing particularly on methods used to diagnose and measure the severity of spinal curvature. The literature search was performed on several databases (PubMed, Google Scholar and ScienceDirect) and relevant websites (e.g., the American College of Radiology and the International Commission on Radiological Protection) to identify guidelines and recent studies focused on dose optimisation in paediatrics using digital technologies. Plain radiography was identified as the most accurate method. The American College of Radiology (ACR) and the European Commission (EC) provided the two guidelines identified as most relevant to the subject. The ACR guidelines were updated in 2014; however, they do not provide detailed guidance on technical exposure parameters. The EC guidelines are more complete but are dedicated to screen-film systems. Other studies reviewed the exposure parameters that should be included in optimisation, such as tube current, tube voltage and source-to-image distance; however, each explored only a few of these parameters rather than all of them together. One publication explored all parameters together, but for adults only. Given the lack of literature on exposure parameters for paediatric patients, more research is required to guide and harmonise practice.
Abstract:
Aim: To optimise a set of exposure factors delivering the lowest effective dose (E) that still allows delineation of spinal curvature with the modified Cobb method on a full-spine computed radiography (CR) image of a 5-year-old paediatric anthropomorphic phantom. Methods: Images were acquired while varying a set of parameters: position (antero-posterior (AP), postero-anterior (PA) and lateral), kilovoltage peak (kVp) (66-90), source-to-image distance (SID) (150-200 cm), broad focus, and the use of a grid (grid in/out), to analyse the impact on E and image quality (IQ). IQ was analysed with two approaches: objective (contrast-to-noise ratio (CNR)) and perceptual, using five observers. Monte Carlo modelling was used for dose estimation. Cohen's kappa coefficient was used to calculate inter-observer variability. The angle was measured with Cobb's method on lateral projections under different imaging conditions. Results: PA gave the lowest effective dose (0.013 mSv) compared with AP (0.048 mSv) and lateral (0.025 mSv). The exposure parameters that allowed the lowest dose were 200 cm SID, 90 kVp, broad focus and grid out, for paediatrics using an Agfa CR system. Thirty-seven images were assessed for IQ, and thirty-two were classified as adequate. Cobb angle measurements varied between 16°±2.9° and 19.9°±0.9°. Conclusion: Cobb angle measurements can be performed at the lowest dose with a low contrast-to-noise ratio. The measurement variation of ±2.9° is within the range of acceptable clinical error and does not affect clinical diagnosis. Further work is recommended with a larger sample size and a more robust perceptual IQ assessment protocol for observers.
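The objective IQ measure used above, the contrast-to-noise ratio, is computed from two regions of interest. Definitions of CNR vary (some divide by a pooled noise estimate); a common form, using the background standard deviation and purely illustrative pixel values, is:

```python
import statistics

def cnr(roi_signal, roi_background):
    """Contrast-to-noise ratio: (mean signal - mean background) / background noise (std)."""
    return ((statistics.mean(roi_signal) - statistics.mean(roi_background))
            / statistics.pstdev(roi_background))

# Illustrative pixel values sampled from a signal ROI and a background ROI
signal = [210, 205, 215, 208]
background = [100, 104, 96, 100]
value = cnr(signal, background)
```

A lower CNR means a noisier image; the study's point is that the Cobb angle remains measurable even at CNR values well below what general diagnostic imaging would require.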
Abstract:
Thermal systems exchanging heat and mass by conduction, convection and radiation (solar and thermal) occur in many engineering applications, such as energy storage by solar collectors, window glazing in buildings, cooling of plastic moulds, and air-handling units. Often these thermal systems are composed of various elements, for example a building with walls, windows, rooms, etc. It would be of particular interest to have a modular thermal system formed by connecting different modules for the elements, with the flexibility to use and change the models for individual elements and to add or remove elements without changing the entire code. A numerical approach to the heat transfer and fluid flow in such systems saves full-scale experiment time and cost, and also aids optimisation of the system's parameters. The subsequent sections present a short summary of the work done so far on the orientation of the thesis in the field of numerical methods for heat transfer and fluid flow applications, the work in progress, and the future work.
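The modularity argued for above can be sketched with a minimal example: each element (wall layer, glazing, etc.) is a module exposing a thermal resistance, and modules are composed in series without touching each other's code. This sketch considers steady-state conduction only; the class design, names and material values are illustrative assumptions, not the thesis's actual architecture.

```python
class Layer:
    """One module of a thermal path (wall layer, glazing, insulation), as a thermal resistance."""
    def __init__(self, name, thickness_m, conductivity_w_mk, area_m2):
        self.name = name
        self.resistance = thickness_m / (conductivity_w_mk * area_m2)  # K/W

def heat_flow(layers, t_inside, t_outside):
    """Steady-state conduction through modules in series: Q = dT / sum(R)."""
    r_total = sum(layer.resistance for layer in layers)
    return (t_inside - t_outside) / r_total

# Adding or removing an element is just editing this list, not the solver.
wall = [Layer("brick", 0.20, 0.7, 10.0), Layer("insulation", 0.10, 0.04, 10.0)]
q = heat_flow(wall, t_inside=20.0, t_outside=0.0)   # watts through 10 m2 of wall
```

A full modular system of the kind described would extend each module with convective and radiative exchange and couple the modules through shared node temperatures.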
Abstract:
Advances in antiretroviral therapy have transformed HIV infection from an inevitably fatal condition into a chronic disease. Despite this success, treatment failure and drug toxicity remain frequent. An inadequate response to treatment is clearly multifactorial, and individualising drug dosage on the basis of patients' demographic and genetic factors and of total, free and/or cellular drug levels could improve both the efficacy and the tolerability of therapy, the latter certainly being a major issue for a treatment taken for life. The overall objective of this thesis was to better understand the pharmacokinetic (PK) and pharmacogenetic (PG) factors influencing exposure to antiretroviral drugs (ARVs), giving us a rational basis for optimising antiviral treatment and adjusting drug dosage in HIV-positive patients. Antiretroviral therapy tailored to the patient is likely to increase the probability of efficacy and tolerability, enabling better long-term compliance and reducing the risk of emergence of resistance and of treatment failure. To this end, methods for quantifying total plasma, free and cellular concentrations of ARVs and of some of their metabolites were developed and validated using liquid chromatography coupled with tandem mass spectrometry. These methods were applied to monitor ARV levels in various populations of HIV-positive patients. A clinical study was initiated within the Mother and Child Swiss HIV Cohort Study to determine whether pregnancy influences ARV kinetics.
Total and free concentrations of lopinavir, atazanavir and nevirapine were determined in pregnant women followed during their pregnancy, and were found not to be influenced to a clinically significant extent by pregnancy; dosage adjustment of these ARVs is therefore not necessary in pregnant women. In a small study in treatment-experienced HIV-positive patients, the correlation between cellular and plasma exposure to newer ARVs, notably raltegravir, was determined. A good correlation was obtained between plasma and cellular raltegravir levels, suggesting that monitoring total levels is a satisfactory surrogate. However, substantial inter-patient variability was observed in the cellular accumulation ratios of raltegravir, which should encourage further investigation in patients failing on this treatment. The effectiveness of therapeutic drug monitoring (TDM) for adjusting efavirenz levels in patients with concentrations above the recommended therapeutic target was evaluated in a prospective study: TDM-based dose adjustment of efavirenz proved effective and safe, supporting the use of TDM in patients with concentrations outside the therapeutic target. The impact of genetic polymorphisms of cytochromes P450 (CYP) 2B6, 2A6 and 3A4/5 on the pharmacokinetics of efavirenz and its metabolites was studied: a population PK model integrating genetic and demographic covariates was built. Functional genetic variations in the main (CYP2B6) and accessory (CYP2A6 and 3A4/5) metabolic pathways of efavirenz affect its disposition and can lead to extreme drug exposures. TDM-guided dose adjustment is therefore recommended in these patients, in accordance with their genetic polymorphisms. Thus, we demonstrated that, using a comprehensive approach taking into account both the PK and PG factors influencing ARV exposure in infected patients, it is possible, where necessary, to individualise antiretroviral therapy in a variety of situations. Optimising antiretroviral treatment is likely to contribute to better long-term therapeutic efficacy while reducing the occurrence of adverse effects.
Lay summary - Optimisation of antiretroviral therapy: pharmacokinetic and pharmacogenetic approaches. Advances in the treatment of infection with the human immunodeficiency virus (HIV) have transformed a fatal condition into a chronic disease treatable with increasingly effective drugs. Despite this success, some patients do not respond optimally to their treatment and/or suffer adverse drug effects, leading to frequent changes of therapy. It has been shown that the efficacy of antiretroviral treatment is in most cases correlated with the drug concentrations measured in patients' blood. However, the virus replicates inside cells, and only the fraction of drug not bound to plasma proteins can enter the cell and exert its antiretroviral activity there. Moreover, blood drug concentrations vary considerably among patients taking the same dose. This variability may be due to demographic and/or genetic factors likely to influence the response to antiretroviral treatment. This thesis aimed to better understand the pharmacological and genetic factors influencing the efficacy and toxicity of antiretroviral drugs, with the goal of individualising antiviral therapy and improving the follow-up of HIV-positive patients. To this end, highly sensitive assay methods were developed to quantify antiretroviral drugs in blood and cells, and these analytical methods were applied in various clinical studies in patients. One clinical study investigated whether the physiological changes of pregnancy affect the concentrations of antiretroviral drugs. We were able to show that pregnancy does not influence the disposition of antiretroviral drugs in HIV-positive pregnant women to a clinically significant extent, so drug dosage should not need to be modified in this patient population. Other studies addressed the genetic variations that influence the enzymatic activity of the proteins involved in the metabolism of antiretroviral drugs, and we also studied the usefulness of monitoring drug concentrations in patients' blood (therapeutic drug monitoring) for individualising antiviral treatment. Significant relationships were found between exposure to antiretroviral drugs and certain genetic variations in patients, and our analyses also examined the relationships between concentrations in patients' blood and the levels measured in the cells where HIV replicates. Furthermore, measuring and interpreting blood levels of antiretroviral drugs made it possible to adjust drug dosage in patients effectively and safely. The complementarity of pharmacological, genetic and viral knowledge thus forms part of a comprehensive patient-management strategy aimed at individualising antiretroviral therapy according to each individual's own characteristics. This approach contributes to the optimisation of antiretroviral treatment with a view to long-term success while reducing the likelihood of adverse effects.
- The improvement in antiretroviral therapy has transformed HIV infection from an inevitably fatal condition to a chronic, manageable disease. However, treatment failure and drug toxicity are frequent. Inadequate response to treatment is clearly multifactorial and, therefore, dosage individualisation based on demographic factors, genetic markers and measurement of total, free and/or cellular drug levels may increase both drug efficacy and tolerability. Drug tolerability is certainly a major issue for a treatment that must be taken indefinitely. The global objective of this thesis was to increase our current understanding of the pharmacokinetic (PK) and pharmacogenetic (PG) factors influencing exposure to antiretroviral drugs (ARVs) in HIV-positive patients. In turn, this should provide a rational basis for antiviral treatment optimisation and drug dosage adjustment in HIV-positive patients.
A patient-tailored antiretroviral regimen is likely to enhance treatment effectiveness and tolerability, enabling better compliance over time and hence reducing the probability of emergence of viral resistance and treatment failure. To that end, analytical methods for the measurement of total plasma, free and cellular concentrations of ARVs and some of their metabolites were developed and validated using liquid chromatography coupled with tandem mass spectrometry. These assays were applied for the monitoring of ARV levels in various populations of HIV-positive patients. A clinical study was initiated within the frame of the Mother and Child Swiss HIV Cohort Study to determine whether pregnancy influences exposure to ARVs. Free and total plasma concentrations of lopinavir, atazanavir and nevirapine were determined in pregnant women followed during the course of pregnancy, and were found not to be influenced to a clinically significant extent by pregnancy. Dosage adjustment of these drugs is therefore not required in pregnant women. In a study in treatment-experienced HIV-positive patients, the correlation between cellular and total plasma exposure to new antiretroviral drugs, notably the HIV integrase inhibitor raltegravir, was determined. A good correlation was obtained between total and cellular levels of raltegravir, suggesting that monitoring of total levels is a satisfactory surrogate. However, significant inter-patient variability was observed in raltegravir cell accumulation, which should prompt further investigations in patients failing on an integrase inhibitor-based regimen. The effectiveness of therapeutic drug monitoring (TDM) to guide efavirenz dose reduction in patients with concentrations above the recommended therapeutic range was evaluated in a prospective study.
TDM-guided dosage adjustment of efavirenz was found feasible and safe, supporting the use of TDM in patients with efavirenz concentrations above the therapeutic target. The impact of genetic polymorphisms of cytochromes P450 (CYP) 2B6, 2A6 and 3A4/5 on the PK of efavirenz and its metabolites was studied: a population PK model was built integrating both genetic and demographic covariates. Functional genetic variations in the main (CYP2B6) and accessory (2A6, 3A4/5) metabolic pathways of efavirenz have an impact on efavirenz disposition and may lead to extreme drug exposures. Dosage adjustment guided by TDM is thus required in those patients, according to their pharmacogenetic polymorphisms. Thus, we have demonstrated, using a comprehensive approach taking into account both the PK and PG factors influencing ARV exposure in HIV-infected patients, the feasibility of individualising antiretroviral therapy in various situations. Antiviral treatment optimisation is likely to increase long-term treatment success while reducing the occurrence of adverse drug reactions.
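The first-order reasoning behind TDM-guided dose reduction can be illustrated by a proportional adjustment, which assumes linear (dose-proportional) pharmacokinetics. In practice efavirenz dosing would be guided by the population PK model and the available tablet strengths, so this is only a sketch with illustrative numbers:

```python
def tdm_adjusted_dose(current_dose_mg, measured_conc, target_conc):
    """Proportional dose adjustment, assuming linear (dose-proportional) pharmacokinetics:
    new dose = current dose * (target concentration / measured concentration)."""
    if measured_conc <= 0:
        raise ValueError("measured concentration must be positive")
    return current_dose_mg * target_conc / measured_conc

# Illustrative: 600 mg/day with a measured level of 8 mg/L, aiming at 4 mg/L,
# before rounding to a commercially available strength.
new_dose = tdm_adjusted_dose(600, measured_conc=8.0, target_conc=4.0)
```

For a drug with genotype-dependent, partly nonlinear elimination such as efavirenz, this linear rule is only a starting point; the thesis's population PK model refines it with the patient's genetic and demographic covariates.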
Abstract:
Single-stranded DNA (ssDNA) is a prerequisite for electrochemical sensor-based detection of parasite DNA and other diagnostic applications. To achieve this detection, an asymmetric polymerase chain reaction method was optimised that facilitates amplification of ssDNA from the human lymphatic filarial parasite Wuchereria bancrofti. The procedure produced ssDNA fragments of 188 bp in a single step when the primer pair (forward and reverse) was used at a 100:1 molar ratio in the presence of double-stranded template DNA. The ssDNA thus produced was suitable for immobilisation as a probe onto the surface of an indium tin oxide electrode and for hybridisation in a system for sequence-specific electrochemical detection of W. bancrofti. Hybridisation of the ssDNA probe with the target ssDNA led to considerable decreases in both the anodic and the cathodic currents of the system's redox couple compared with the unhybridised DNA, and could be detected by cyclic voltammetry. This method is reproducible and avoids many of the difficulties encountered by conventional methods of filarial parasite DNA detection; it therefore has potential in xenomonitoring.
Abstract:
PURPOSE: To optimize conditions for photodynamic detection (PDD) and photodynamic therapy (PDT) of bladder carcinoma, urothelial accumulation of protoporphyrin IX (PpIX) and the conditions leading to cell photodestruction were studied. MATERIALS AND METHODS: Porcine and human bladder mucosae were superfused with derivatives of 5-aminolevulinic acid (ALA). PpIX accumulation and distribution across the mucosa were studied by microspectrofluorometry. Cell viability and structural integrity were assessed using vital dyes and microscopy. RESULTS: ALA esters, especially hexyl-ALA, accelerated and regularized urothelial PpIX accumulation and allowed for necrosis upon illumination. CONCLUSIONS: Hexyl-ALA used at micromolar concentrations is the most efficient PpIX precursor for PDD and PDT.
Abstract:
This study investigates the harmonisation of analytical results as an alternative to the restrictive approach of harmonising analytical methods, which is currently recommended to enable the exchange of information and thus to support the fight against illicit drug trafficking. The main goal of the study is to demonstrate that a common database can be fed by a range of different analytical methods, whatever the differences in their analytical parameters. To this end, a methodology was developed that makes it possible to estimate, and even optimise, the similarity of results coming from different analytical methods. In particular, the possibility of introducing chemical profiles obtained with fast GC-FID into a GC-MS database is studied in this paper. Using this methodology, the similarity of results from different analytical methods can be objectively assessed, and the practical utility of sharing a database across methods can be evaluated, depending on the profiling purpose (evidential vs. operational tool). The methodology can be regarded as a relevant approach to feeding a database from different analytical methods, and it calls into question the need to analyse all illicit drug seizures in a single laboratory or to harmonise the analytical methods of every participating laboratory.
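The abstract does not spell out the similarity measure, so as an illustration only: a common choice in drug-profiling work is a correlation between the two methods' normalised peak areas for the target compounds, which is straightforward to compute. The data values below are invented for the example.

```python
import math

def pearson_similarity(profile_a, profile_b):
    """Pearson correlation between two chemical profiles (normalised peak areas
    of the same target compounds, measured by two different analytical methods)."""
    n = len(profile_a)
    ma = sum(profile_a) / n
    mb = sum(profile_b) / n
    cov = sum((a - ma) * (b - mb) for a, b in zip(profile_a, profile_b))
    sa = math.sqrt(sum((a - ma) ** 2 for a in profile_a))
    sb = math.sqrt(sum((b - mb) ** 2 for b in profile_b))
    return cov / (sa * sb)

# The same seizure profiled by fast GC-FID and by GC-MS (illustrative areas)
fast_gc = [0.42, 0.18, 0.25, 0.15]
gc_ms   = [0.40, 0.20, 0.24, 0.16]
sim = pearson_similarity(fast_gc, gc_ms)
```

A high inter-method similarity for the same specimen, relative to the similarity between unrelated specimens, is what would justify feeding both methods' profiles into one shared database.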
Abstract:
Abstract: This work is concerned with the development and application of novel unsupervised learning methods, having in mind two target applications: the analysis of forensic case data and the classification of remote sensing images. First, a method based on a symbolic optimisation of the inter-sample distance measure is proposed to improve the flexibility of spectral clustering algorithms, and applied to the problem of forensic case data. This distance is optimised using a loss function related to the preservation of neighbourhood structure between the input space and the space of principal components, and solutions are found using genetic programming. Results are compared to a variety of state-of-the-art clustering algorithms. Subsequently, a new large-scale clustering method based on a joint optimisation of feature extraction and classification is proposed and applied to various databases, including two hyperspectral remote sensing images. The algorithm makes use of a functional model (e.g., a neural network) for clustering, which is trained by stochastic gradient descent. Results indicate that such a technique can easily scale to huge databases, can avoid the so-called out-of-sample problem, and can compete with or even outperform existing clustering algorithms on both artificial data and real remote sensing images. This is verified on small databases as well as very large problems.
Summary: This research concerns the development and application of unsupervised learning methods. The applications targeted are the analysis of forensic case data and the classification of hyperspectral images in remote sensing. First, an unsupervised classification methodology based on the symbolic optimisation of an inter-sample distance measure is proposed. This measure is obtained by optimising a cost function related to the preservation of a point's neighbourhood structure between the space of the initial variables and the space of the principal components. The method is applied to the analysis of forensic case data and compared with a range of existing methods. Second, a method based on a joint optimisation of the variable-selection and classification tasks is implemented in a neural network and applied to various databases, including two hyperspectral images. The neural network is trained with a stochastic gradient algorithm, which makes the technique applicable to very high-resolution images. The results show that this technique can classify very large databases without difficulty and gives results that compare favourably with existing methods.
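The joint feature-extraction/clustering model above is a functional model trained by stochastic gradient descent. The simplest member of that family is online k-means (competitive learning), sketched below on toy 2-D data; the actual work uses a richer model (e.g., a neural network) on hyperspectral pixels, but the training loop has the same shape.

```python
import random

def online_kmeans(points, k, epochs=20, lr=0.1, seed=0):
    """Cluster by stochastic gradient descent on the k-means objective:
    each sample pulls its nearest centre one step toward itself (competitive learning)."""
    rng = random.Random(seed)
    centres = [list(p) for p in rng.sample(points, k)]   # initialise from the data
    for _ in range(epochs):
        for p in rng.sample(points, len(points)):        # shuffled pass over the data
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centres[c])))
            centres[j] = [c + lr * (a - c) for a, c in zip(p, centres[j])]
    return centres

# Two well-separated toy clusters around (0, 0) and (5, 5)
data = [(0.0, 0.1), (0.1, 0.0), (0.0, 0.0), (5.0, 5.1), (5.1, 5.0), (5.0, 5.0)]
centres = online_kmeans(data, k=2)
```

Because each update touches a single sample, the method streams over arbitrarily large databases and assigns new samples by nearest centre, which is how the out-of-sample problem is avoided.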
Abstract:
Nowadays the variety of fuels used in power boilers is widening, and new boiler constructions and operating models have to be developed. This research and development is done in small pilot plants, where a faster analysis of the boiler mass and heat balance is needed so that the right decisions can be identified and made already during the test run. The barrier to determining the boiler balance during test runs is the long process of chemically analysing the collected input and output matter samples. The present work concentrates on finding a way to determine the boiler balance without chemical analyses, and on optimising the test rig for the best possible accuracy of the boiler's heat and mass balance. The purpose of this work was to create an automatic boiler balance calculation method for the 4 MW CFB/BFB pilot boiler of Kvaerner Pulping Oy located in Messukylä, Tampere. The calculation was created in the data management computer of the pilot plant's automation system. It is implemented in the Microsoft Excel environment, which provides a good basis and functions for handling large databases and calculations without any delicate programming. The automation system of the pilot plant was reconstructed and updated by Metso Automation Oy during 2001, and the new MetsoDNA system has the good data management properties that are necessary for large calculations such as the boiler balance calculation. Two possible methods for calculating the boiler balance during a test run were found: either the fuel flow is determined and used to calculate the boiler's mass balance, or the unburned carbon loss is estimated and the mass balance is calculated on the basis of the boiler's heat balance. Both methods have their own weaknesses, so they were implemented in parallel in the calculation and the choice of method was left to the user. The user also needs to define the fuels used and some solid mass flows that are not measured automatically by the automation system.
Sensitivity analysis showed that the most essential values for accurate boiler balance determination are the flue gas oxygen content, the boiler's measured heat output, and the lower heating value of the fuel. The theoretical part of this work concentrates on the error management of these measurements and analyses, on measurement accuracy, and on boiler balance calculation in theory. The empirical part concentrates on the creation of the balance calculation for the boiler in question and on describing the working environment.
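Of the two parallel methods described, the heat-balance route can be reduced to a one-line estimate once a combustion efficiency is assumed. The sketch below shows only that simplest form; the thesis's actual calculation closes the balance using the flue-gas oxygen content and the estimated unburned carbon loss, and the efficiency and LHV figures here are illustrative assumptions.

```python
def fuel_flow_from_heat_balance(heat_output_kw, lhv_kj_per_kg, efficiency):
    """Estimate fuel mass flow (kg/s) from measured boiler heat output:
    Q_out = m_fuel * LHV * eta  =>  m_fuel = Q_out / (LHV * eta)."""
    return heat_output_kw / (lhv_kj_per_kg * efficiency)

# Illustrative: 4 MW heat output, a biomass fuel with LHV ~18 MJ/kg,
# and an assumed 88% boiler efficiency.
m_fuel = fuel_flow_from_heat_balance(4000.0, 18000.0, 0.88)   # kg/s
```

This also makes the sensitivity result above plausible: the estimate scales directly with the measured heat output and inversely with the fuel's lower heating value, so errors in either propagate one-to-one into the fuel flow and hence into the whole mass balance.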
Abstract:
BACKGROUND: Enteral nutrition (EN) is recommended for patients in the intensive-care unit (ICU), but it does not consistently achieve nutritional goals. We assessed whether delivery of 100% of the energy target from days 4 to 8 in the ICU with EN plus supplemental parenteral nutrition (SPN) could optimise clinical outcome. METHODS: This randomised controlled trial was undertaken in two centres in Switzerland. We enrolled patients on day 3 of admission to the ICU who had received less than 60% of their energy target from EN, were expected to stay for longer than 5 days, and to survive for longer than 7 days. We calculated energy targets with indirect calorimetry on day 3, or if not possible, set targets as 25 and 30 kcal per kg of ideal bodyweight a day for women and men, respectively. Patients were randomly assigned (1:1) by a computer-generated randomisation sequence to receive EN or SPN. The primary outcome was occurrence of nosocomial infection after cessation of intervention (day 8), measured until end of follow-up (day 28), analysed by intention to treat. This trial is registered with ClinicalTrials.gov, number NCT00802503. FINDINGS: We randomly assigned 153 patients to SPN and 152 to EN. 30 patients discontinued before the study end. Mean energy delivery between day 4 and 8 was 28 kcal/kg per day (SD 5) for the SPN group (103% [SD 18%] of energy target), compared with 20 kcal/kg per day (7) for the EN group (77% [27%]). Between days 9 and 28, 41 (27%) of 153 patients in the SPN group had a nosocomial infection compared with 58 (38%) of 152 patients in the EN group (hazard ratio 0·65, 95% CI 0·43-0·97; p=0·0338), and the SPN group had a lower mean number of nosocomial infections per patient (-0·42 [-0·79 to -0·05]; p=0·0248). 
INTERPRETATION: Individually optimised energy supplementation with SPN starting 4 days after ICU admission could reduce nosocomial infections and should be considered as a strategy to improve clinical outcome in patients in the ICU for whom EN is insufficient. FUNDING: Foundation Nutrition 2000Plus, ICU Quality Funds, Baxter, and Fresenius Kabi.
Abstract:
Over 70% of the total costs of an end product are consequences of decisions made during the design process. A search for optimal cross-sections will often have only a marginal effect on the amount of material used if the geometry of a structure is fixed and the cross-sectional characteristics of its elements are properly designed by conventional methods. In recent years, optimal geometry has become a central area of research in the automated design of structures. It is generally accepted that no single optimisation algorithm is suitable for all engineering design problems; an appropriate algorithm must therefore be selected individually for each optimisation situation. Modelling is the most time-consuming phase in the optimisation of steel and metal structures. In this research, the goal was to develop a method and computer program that reduce the modelling and optimisation time in structural design. The program needed an optimisation algorithm suitable for various engineering design problems. Because finite element modelling is commonly used in the design of steel and metal structures, the interaction between the finite element tool and the optimisation tool needed a practical solution. The developed method and computer programs were tested with standard optimisation tests and practical design optimisation cases. Three generations of computer programs were developed; they combine an optimisation problem modelling tool and an FE modelling program using three alternative methods. The modelling and optimisation were demonstrated in the design of a new boom construction and of the steel structures of flat and ridge roofs. This thesis demonstrates that the modelling time, previously the most time-consuming part, is significantly reduced; modelling errors are reduced and the results are more reliable.
A new selection rule for the evolution algorithm, which eliminates the need for constraint weight factors, was tested with optimisation cases of steel structures that include hundreds of constraints. The tested algorithm can be used nearly as a black box, without parameter settings or penalty factors for the constraints.
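The thesis's own selection rule is not reproduced here, so as an illustration of the same idea: a well-known penalty-free scheme (Deb's feasibility rules) compares candidates without any constraint weight factors, which is what makes near-black-box use with hundreds of constraints possible.

```python
def tournament_winner(a, b):
    """Penalty-free binary tournament between candidates, each given as
    (objective_value, total_constraint_violation), for a minimisation problem.
    Rules: feasible beats infeasible; two feasibles compare objectives;
    two infeasibles compare total violations. No weight factors are needed."""
    fa, va = a
    fb, vb = b
    if va == 0 and vb == 0:          # both feasible: better objective wins
        return a if fa <= fb else b
    if va == 0:                      # only a is feasible
        return a
    if vb == 0:                      # only b is feasible
        return b
    return a if va <= vb else b      # both infeasible: smaller violation wins

# A feasible design beats a lighter but infeasible one:
winner = tournament_winner((10.0, 0.0), (5.0, 2.0))
```

Since constraints enter only through the total violation, adding the hundreds of constraints of a real steel structure changes nothing in the selection logic and requires no penalty tuning.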