981 results for optimisation model
Abstract:
This paper reports on continuing research into the modelling of an order picking process within a Crossdocking distribution centre using Simulation Optimisation. The aim of this project is to optimise a discrete event simulation model and to understand the factors that affect finding its optimal performance. Our initial investigation revealed that the precision of the selected simulation output performance measure and the number of replications required for the evaluation of the optimisation objective function through simulation influence the ability of the optimisation technique. We experimented with Common Random Numbers in order to improve the precision of our simulation output performance measure, and intended to use the number of replications utilised for this purpose as the initial number of replications for the optimisation of our Crossdocking distribution centre simulation model. Our results demonstrate that we can improve the precision of our selected simulation output performance measure value using Common Random Numbers at various levels of replications. Furthermore, after optimising our Crossdocking distribution centre simulation model, we are able to achieve optimal performance using fewer simulation runs for the simulation model which uses Common Random Numbers as compared to the simulation model which does not.
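The variance-reduction effect of Common Random Numbers can be sketched in a few lines of Python. The toy "picking line" simulator and its rates below are hypothetical stand-ins for the paper's discrete event simulation model; the point is only that seeding both configurations identically shrinks the variance of their estimated performance difference:

```python
import random
import statistics

def simulate(service_rate, seed):
    # Toy stand-in for the order-picking DES model (assumption):
    # total time to pick 100 orders with exponential service times.
    rng = random.Random(seed)
    return sum(rng.expovariate(service_rate) for _ in range(100))

def diff_variance(crn, n=200):
    # Estimate Var[Y1 - Y2] across n replication pairs; with CRN both
    # configurations reuse the same random number stream (same seed).
    diffs = []
    for i in range(n):
        y1 = simulate(1.0, seed=i)
        y2 = simulate(1.2, seed=i if crn else 10_000 + i)
        diffs.append(y1 - y2)
    return statistics.variance(diffs)

var_crn = diff_variance(True)
var_ind = diff_variance(False)
```

Because both configurations consume identical uniform draws under CRN, the outputs are positively correlated and the variance of their difference collapses relative to independent seeding.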
Abstract:
Research into advanced technologies for energy generation contemplates a series of alternatives, introduced both in the investigation of new energy sources and in the improvement and/or development of new components and systems. Even though significant reductions are observed in the amount of emissions, the proposed alternatives require the use of exhaust gas cleaning systems. The results of environmental analyses based on two configurations proposed for urban waste incineration are presented in this paper; the annexation of integer (Boolean) variables to the environomic model makes it possible to define the best gas cleaning routes based on exergetic cost minimisation criteria. In this first part, the results for the steam cogeneration system analysis associated with the incineration of municipal solid waste (MSW) are presented. (c) 2007 Elsevier Ltd. All rights reserved.
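The role of the integer (Boolean) variables can be illustrated with a brute-force sketch: each candidate gas-cleaning unit is switched on or off, and the cheapest combination (by an exergetic-cost-like objective) that meets a removal target is kept. All unit names, costs and efficiencies below are invented for illustration and are not the paper's data:

```python
from itertools import product

# Hypothetical candidate units: (name, exergetic cost, removal efficiency).
UNITS = [("scrubber", 12.0, 0.60), ("baghouse", 8.0, 0.45),
         ("ESP", 10.0, 0.50), ("SCR", 15.0, 0.70)]
TARGET = 0.90  # required overall removal fraction (assumed)

def overall_removal(selection):
    # Units in series: each active unit lets (1 - eff) of the pollutant pass.
    passing = 1.0
    for on, (_, _, eff) in zip(selection, UNITS):
        if on:
            passing *= (1.0 - eff)
    return 1.0 - passing

# Enumerate every Boolean route and keep the cheapest feasible one.
best = None
for sel in product((0, 1), repeat=len(UNITS)):
    if overall_removal(sel) >= TARGET:
        cost = sum(c for on, (_, c, _) in zip(sel, UNITS) if on)
        if best is None or cost < best[0]:
            best = (cost, sel)
```

A real environomic model would hand the same binaries to a MILP solver; exhaustive enumeration is shown only because the toy route space is tiny (2^4 combinations).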
Abstract:
Model predictive control (MPC) is usually implemented as a control strategy where the system outputs are controlled within specified zones, instead of fixed set points. One strategy to implement the zone control is by means of the selection of different weights for the output error in the control cost function. A disadvantage of this approach is that closed-loop stability cannot be guaranteed, as a different linear controller may be activated at each time step. A way to implement a stable zone control is by means of the use of an infinite horizon cost in which the set point is an additional variable of the control problem. In this case, the set point is restricted to remain inside the output zone and an appropriate output slack variable is included in the optimisation problem to assure the recursive feasibility of the control optimisation problem. Following this approach, a robust MPC is developed for the case of multi-model uncertainty of open-loop stable systems. The controller is devoted to maintain the outputs within their corresponding feasible zone, while reaching the desired optimal input target. Simulation of a process of the oil refining industry illustrates the performance of the proposed strategy.
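The core of the zone-control idea, a set point treated as an optimisation variable confined to the output zone, plus a feasibility-restoring slack, reduces in the scalar quadratic case to a simple projection. A minimal sketch of that reduced case only (it is not the paper's multi-model robust MPC):

```python
def zone_setpoint(y_pred, z_lo, z_hi):
    # The set point is a decision variable constrained to [z_lo, z_hi];
    # for a quadratic output-error cost the optimum is the projection
    # of the predicted output onto the zone.
    return min(max(y_pred, z_lo), z_hi)

def output_slack(y_pred, z_lo, z_hi):
    # Slack that restores feasibility when the prediction leaves the zone,
    # keeping the optimisation problem recursively feasible.
    if y_pred > z_hi:
        return y_pred - z_hi
    if y_pred < z_lo:
        return z_lo - y_pred
    return 0.0
```

When the predicted output already lies inside the zone the slack is zero and the "set point" tracks the prediction, so no control effort is spent forcing the output to an arbitrary fixed point.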
Abstract:
Loss of magnetic medium solids from dense medium circuits is a substantial contributor to operating cost. Much of this loss is by way of wet drum magnetic separator effluent. A model of the separator would be useful for process design, optimisation and control. A review of the literature established that although various rules of thumb exist, largely based on empirical or anecdotal evidence, there is no model of magnetics recovery in a wet drum magnetic separator which includes as inputs all significant machine and operating variables. A series of trials, in both factorial experiments and in single variable experiments, was therefore carried out using a purpose-built rig which featured a small industrial scale (700 mm lip length, 900 mm diameter) wet drum magnetic separator. A substantial data set of 191 trials was generated in the work. The results of the factorial experiments were used to identify the variables having a significant effect on magnetics recovery. Observations carried out as an adjunct to this work, as well as magnetic theory, suggests that the capture of magnetic particles in the wet drum magnetic separator is by a flocculation process. Such a process should be defined by a flocculation rate and a flocculation time; the latter being defined by the volumetric flowrate and the volume within the separation zone. A model based on this concept and containing adjustable parameters was developed. This model was then fitted to a randomly chosen 80% of the data, and validated by application to the remaining 20%. The model is shown to provide a satisfactory fit to the data over three orders of magnitude of magnetics loss. (C) 2003 Elsevier Science B.V. All rights reserved.
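The fit/validate procedure, randomly holding out 20% of the 191 trials, can be sketched as follows; the seed and shuffling scheme are assumptions, since the paper does not specify how the random 80% was drawn:

```python
import random

def fit_validate_split(data, fit_frac=0.8, seed=42):
    # Randomly partition the trials into a fitting set (80%) used to
    # estimate the adjustable model parameters, and a held-out
    # validation set (20%) used only to test the fitted model.
    rng = random.Random(seed)          # fixed seed: illustrative assumption
    idx = list(range(len(data)))
    rng.shuffle(idx)
    cut = int(fit_frac * len(data))
    fit = [data[i] for i in idx[:cut]]
    val = [data[i] for i in idx[cut:]]
    return fit, val

trials = list(range(191))  # placeholders for the 191 experimental trials
fit, val = fit_validate_split(trials)
```

Holding out data before fitting is what lets the authors claim the model generalises over three orders of magnitude of magnetics loss rather than merely interpolating the fitting set.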
Abstract:
Dissertation presented as a partial requirement for obtaining the degree of Master in Geographic Information Science and Systems
Abstract:
Lay summary - Optimisation of antiretroviral therapy: pharmacokinetic and pharmacogenetic approaches. Progress in the treatment of infection with the human immunodeficiency virus (HIV) has transformed a fatal condition into a chronic disease treatable with increasingly effective drugs. Despite this success, some patients do not respond optimally to their treatment and/or suffer from adverse drug reactions, leading to frequent changes in their therapy. It has been shown that the efficacy of antiretroviral treatment is in most cases correlated with the drug concentrations measured in patients' blood. However, the virus replicates inside the cell, and only the fraction of drug not bound to plasma proteins can enter the cell and exert antiretroviral activity at the cellular level. There is, moreover, considerable variability in blood drug concentrations between patients taking the same dose, which may be due to demographic and/or genetic factors likely to influence the response to antiretroviral treatment. The objective of this thesis was to better understand the pharmacological and genetic factors influencing the efficacy and toxicity of antiretroviral drugs, with the aim of individualising antiviral therapy and improving the follow-up of HIV-positive patients. To this end, highly sensitive assay methods were developed to quantify antiretroviral drugs in blood and cells, and were applied in several clinical studies. One study investigated whether the physiological changes of pregnancy affect antiretroviral drug concentrations; we demonstrated that pregnancy does not influence the disposition of these drugs to a clinically significant extent in HIV-positive pregnant women, so drug dosage need not be modified in this population. Other studies addressed patients' genetic variations influencing the enzymatic activity of the proteins involved in antiretroviral drug metabolism, and we examined the usefulness of monitoring drug concentrations in patients' blood (therapeutic drug monitoring) for treatment individualisation. Significant relationships were found between exposure to antiretroviral drugs and certain genetic variants, and our analyses also explored the relationship between blood concentrations and the levels measured inside the cells where HIV replicates. Furthermore, measuring and interpreting blood levels of antiretroviral drugs allowed drug dosage to be adjusted effectively and safely. The complementarity of pharmacological, genetic and viral knowledge thus forms part of a global patient-management strategy aiming at the individualisation of antiretroviral therapy according to each individual's own characteristics, contributing to treatment optimisation with a view to long-term success while reducing the probability of adverse effects. - The improvement in antiretroviral therapy has transformed HIV infection from an inevitably fatal condition to a chronic, manageable disease. However, treatment failure and drug toxicity are frequent. Inadequate response to treatment is clearly multifactorial and, therefore, dosage individualisation based on demographic factors, genetic markers and measurement of total, free and/or cellular drug levels may increase both drug efficacy and tolerability. Drug tolerability is certainly a major issue for a treatment that must be taken indefinitely. The global objective of this thesis was to increase our current understanding of the pharmacokinetic (PK) and pharmacogenetic (PG) factors influencing exposure to antiretroviral drugs (ARVs) in HIV-positive patients. In turn, this should provide us with a rational basis for antiviral treatment optimisation and drug dosage adjustment in HIV-positive patients.
A patient-tailored antiretroviral regimen is likely to enhance treatment effectiveness and tolerability, enabling better compliance over time, and hence reducing the probability of emergence of viral resistance and treatment failure. To that end, analytical methods for the measurement of total plasma, free and cellular concentrations of ARVs and some of their metabolites have been developed and validated using liquid chromatography coupled with tandem mass spectrometry. These assays have been applied for the monitoring of ARV levels in various populations of HIV-positive patients. A clinical study has been initiated within the frame of the Mother and Child Swiss HIV Cohort Study to determine whether pregnancy influences exposure to ARVs. Free and total plasma concentrations of lopinavir, atazanavir and nevirapine have been determined in pregnant women followed during the course of pregnancy, and were found not to be influenced to a clinically significant extent by pregnancy. Dosage adjustment for these drugs is therefore not required in pregnant women. In a study in treatment-experienced HIV-positive patients, the correlation between cellular and total plasma exposure to new antiretroviral drugs, notably the HIV integrase inhibitor raltegravir, has been determined. A good correlation was obtained between total and cellular levels of raltegravir, suggesting that monitoring of total levels is a satisfactory surrogate. However, significant inter-patient variability was observed in raltegravir cell accumulation, which should prompt further investigations in patients failing under an integrase inhibitor-based regimen. The effectiveness of therapeutic drug monitoring (TDM) to guide efavirenz dose reduction in patients having concentrations above the recommended therapeutic range was evaluated in a prospective study.
TDM-guided dosage adjustment of efavirenz was found feasible and safe, supporting the use of TDM in patients with efavirenz concentrations above the therapeutic target. The impact of genetic polymorphisms of cytochromes P450 (CYP) 2B6, 2A6 and 3A4/5 on the PK of efavirenz and its metabolites was studied: a population PK model was built integrating both genetic and demographic covariates. Functional genetic variations in the main (CYP2B6) and accessory (CYP2A6, CYP3A4/5) metabolic pathways of efavirenz have an impact on efavirenz disposition and may lead to extreme drug exposures. Dosage adjustment guided by TDM is thus required in those patients, according to their pharmacogenetic polymorphisms. Thus, using a comprehensive approach taking into account both the PK and PG factors influencing ARV exposure in HIV-infected patients, we have demonstrated the feasibility of individualising antiretroviral therapy in various situations. Antiviral treatment optimisation is likely to increase long-term treatment success while reducing the occurrence of adverse drug reactions.
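The arithmetic behind TDM-guided dose reduction can be sketched under an assumed linear (dose-proportional) pharmacokinetic model. This is an illustration of the scaling idea only, not the study's clinical algorithm, and the numbers are invented:

```python
def tdm_adjusted_dose(current_dose_mg, measured_conc, target_conc):
    # Under linear PK (assumption), steady-state concentration scales
    # proportionally with dose, so the dose is rescaled toward the
    # therapeutic target concentration.
    return current_dose_mg * target_conc / measured_conc

# e.g. a 600 mg dose with a measured level at twice the target
# (hypothetical concentrations, arbitrary units)
new_dose = tdm_adjusted_dose(600.0, 8.0, 4.0)
```

In practice any such proposal would be rounded to an available dosage form and re-checked with a follow-up level, which is precisely the role the prospective TDM study evaluates.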
Abstract:
Uncertainty quantification of petroleum reservoir models is one of the present challenges, which is usually approached with a wide range of geostatistical tools linked with statistical optimisation and/or inference algorithms. Recent advances in machine learning offer a novel approach to modelling the spatial distribution of petrophysical properties in complex reservoirs, alternative to geostatistics. The approach is based on semi-supervised learning, which handles both 'labelled' observed data and 'unlabelled' data, which have no measured value but describe prior knowledge and other relevant data in the form of manifolds in the input space where the modelled property is continuous. The proposed semi-supervised Support Vector Regression (SVR) model has demonstrated its capability to represent realistic geological features and describe the stochastic variability and non-uniqueness of spatial properties. On the other hand, it is able to capture and preserve key spatial dependencies such as connectivity of high-permeability geo-bodies, which is often difficult in contemporary petroleum reservoir studies. Semi-supervised SVR, as a data-driven algorithm, is designed to integrate various kinds of conditioning information and learn dependencies from it. The semi-supervised SVR model is able to balance signal/noise levels and control the prior belief in available data. In this work, the stochastic semi-supervised SVR geomodel is integrated into a Bayesian framework to quantify the uncertainty of reservoir production with multiple models fitted to past dynamic observations (production history). Multiple history-matched models are obtained using stochastic sampling and/or MCMC-based inference algorithms, which evaluate the posterior probability distribution. Uncertainty of the model is described by the posterior probability of the model parameters that represent key geological properties: spatial correlation size, continuity strength, and smoothness/variability of the spatial property distribution.
The developed approach is illustrated with a fluvial reservoir case. The resulting probabilistic production forecasts are described by uncertainty envelopes. The paper compares the performance of the models with different combinations of unknown parameters and discusses sensitivity issues.
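A one-parameter Metropolis sampler conveys how MCMC evaluates the posterior over a geological parameter (say, a spatial correlation length) against production history. The quadratic misfit and its minimum at 2.0 are synthetic assumptions standing in for the reservoir simulator and observed production data:

```python
import math
import random

rng = random.Random(0)

def misfit(theta):
    # Hypothetical misfit between simulated and observed production
    # for model parameter theta; the "true" value 2.0 is invented.
    return (theta - 2.0) ** 2

def metropolis(n_steps=5000, step=0.5, sigma=0.5):
    # Random-walk Metropolis targeting exp(-misfit / (2 * sigma^2)).
    def log_post(t):
        return -misfit(t) / (2.0 * sigma ** 2)
    theta, lp = 0.0, log_post(0.0)
    samples = []
    for _ in range(n_steps):
        prop = theta + rng.gauss(0.0, step)
        lp_prop = log_post(prop)
        if rng.random() < math.exp(min(0.0, lp_prop - lp)):
            theta, lp = prop, lp_prop   # accept the move
        samples.append(theta)
    return samples

samples = metropolis()
```

The retained chain (after burn-in) is the set of history-matched models; pushing each sample through a forecast yields the uncertainty envelopes the paper describes.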
Abstract:
We propose a method to analyse the 2009 outbreak in the region of Botucatu in the state of São Paulo (SP), Brazil, when 28 yellow fever (YF) cases were confirmed, including 11 deaths. At the time of the outbreak, the Secretary of Health of the State of São Paulo vaccinated one million people, causing the death of five individuals, an unprecedented number of YF vaccine-induced fatalities. We apply a mathematical model described previously to optimise the proportion of people who should be vaccinated to minimise the total number of deaths. The model was used to calculate the optimum proportion that should be vaccinated in the remaining, vaccine-free regions of SP, considering the risk of vaccine-induced fatalities and the risk of YF outbreaks in these regions.
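The trade-off the model optimises, vaccine-induced deaths rising with coverage against outbreak deaths falling with it, can be sketched with a toy grid search. Every number below (risks, attack rate, the 80% herd-immunity cut-off) is an illustrative assumption, not the paper's calibrated value:

```python
# Illustrative parameters (assumed, not the paper's data)
N = 1_000_000                # population of a vaccine-free region
P_VAC_DEATH = 5e-6           # vaccine-induced fatality probability
P_OUTBREAK = 0.01            # probability of a YF outbreak
ATTACK_RATE = 0.01           # infected fraction among the unprotected
CASE_FATALITY = 0.4          # YF case-fatality ratio
HERD_THRESHOLD = 0.8         # coverage above which the outbreak dies out

def expected_deaths(p):
    # Expected deaths as a function of the vaccinated proportion p:
    # vaccine fatalities grow linearly with coverage, while outbreak
    # deaths shrink and vanish at the herd-immunity threshold.
    vacc_deaths = p * N * P_VAC_DEATH
    herd = max(0.0, 1.0 - p / HERD_THRESHOLD)
    outbreak_deaths = (P_OUTBREAK * (1.0 - p) * N
                       * ATTACK_RATE * CASE_FATALITY * herd)
    return vacc_deaths + outbreak_deaths

# Grid search for the proportion minimising total expected deaths.
best_p = min((i / 100 for i in range(101)), key=expected_deaths)
```

With these toy numbers the optimum sits exactly at the herd-immunity threshold: vaccinating beyond it only adds vaccine risk without further outbreak benefit.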
Abstract:
The paper presents a novel method for monitoring network optimisation, based on a recent machine learning technique known as support vector machine. It is problem-oriented in the sense that it directly answers the question of whether the advised spatial location is important for the classification model. The method can be used to increase the accuracy of classification models by taking a small number of additional measurements. Traditionally, network optimisation is performed by means of the analysis of the kriging variances. The comparison of the method with the traditional approach is presented on a real case study with climate data.
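The "problem-oriented" criterion can be sketched as ranking candidate sites by their distance to the trained classifier's decision boundary: the location the model is least certain about is the most valuable extra measurement. The linear weights and candidate coordinates below are invented placeholders for a trained SVM:

```python
# Toy linear decision function f(x) = w.x + b standing in for a trained SVM
W, B = (1.0, -1.0), 0.0

def margin(x):
    # |f(x)| is small where the classifier is least certain; those sites
    # are the ones most likely to change (and improve) the model.
    return abs(W[0] * x[0] + W[1] * x[1] + B)

# Candidate new monitoring sites (hypothetical coordinates)
candidates = [(0.1, 0.1), (2.0, -1.0), (0.5, 0.45), (-3.0, 2.0)]

# Rank sites by informativeness: smallest margin first
ranked = sorted(candidates, key=margin)
```

Kriging-variance designs instead pick the site with the largest prediction variance, regardless of whether it matters for the classification; the margin criterion targets the decision itself, which is the contrast the paper draws.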
Abstract:
The application of forced unsteady-state reactors to the selective catalytic reduction of nitrogen oxides (NOx) with ammonia (NH3) is sustained by the fact that favourable temperature and composition distributions, which cannot be achieved in any steady-state regime, can be obtained by means of unsteady-state operation. In the normal mode of operation, the low exothermicity of the selective catalytic reduction (SCR) reaction (usually carried out in the range of 280-350°C) is not enough to maintain the chemical reaction by itself, so a supply of supplementary heat is usually required, increasing the overall process operation cost. Through forced unsteady-state operation, the main advantage that can be obtained when exothermic reactions take place is the possibility of trapping, beside the ammonia, the moving heat wave inside the catalytic bed. The unsteady-state operation enables the exploitation of the thermal storage capacity of the catalytic bed: the bed acts as a regenerative heat exchanger, allowing auto-thermal behaviour even when the adiabatic temperature rise is low. Finding the optimum reactor configuration, employing the most suitable operation model and identifying the reactor behaviour are highly important steps in configuring a proper device for industrial applications. The Reverse Flow Reactor (RFR) - a forced unsteady-state reactor - corresponds to the above-mentioned characteristics and may be employed as an efficient device for the treatment of dilute pollutant mixtures. Beside its advantages, however, the RFR presents the 'wash out' phenomenon: emissions of unconverted reactants at every switch of the flow direction. As a consequence, our attention was focused on finding an alternative reactor configuration to the RFR that is not affected by these uncontrollable emissions of unconverted reactants. In this respect the Reactor Network (RN) was investigated.
Its configuration consists of several reactors connected in a closed sequence, simulating a moving bed by changing the reactant feeding position. In the RN the flow direction is always maintained, ensuring uniform catalyst exploitation, while at the same time the 'wash out' phenomenon is eliminated. The simulated moving bed (SMB) can operate in transient mode, giving practically constant exit concentration and high conversion levels. The main advantage of reactor network operation is the possibility of obtaining auto-thermal behavior with nearly uniform catalyst utilization. However, the reactor network presents only a small range of switching times which allow an ignited state to be reached and maintained. Even so, a proper study of the complex behavior of the RN may give the necessary information to overcome the difficulties that can appear in RN operation. The complexity of unsteady-state reactors arises from the fact that these reactor types are characterized by short contact times and complex interaction between heat and mass transport phenomena. Such complex interactions can give rise to remarkably complex dynamic behavior characterized by spatio-temporal patterns, chaotic changes in concentration and travelling waves of heat or chemical reactivity. The main efforts of current research concern the improvement of contact modalities between reactants, the possibility of thermal wave storage inside the reactor and the improvement of the kinetic activity of the catalyst used. Paying attention to these aspects is important when higher activity, even at low feeding temperatures, and low emissions of unconverted reactants are the main operational concerns.
Also, the prediction of the reactor pseudo- or steady-state performance (regarding conversion, selectivity and thermal behavior) and the dynamic reactor response during exploitation are important aspects in finding the optimal control strategy for forced unsteady-state catalytic tubular reactors. The design of an adapted reactor requires knowledge of the influence of its operating conditions on the overall process performance and a precise evaluation of the operating parameter range for which a sustained dynamic behavior is obtained. An a priori estimation of the system parameters results in a reduction of the computational effort: usually the convergence of unsteady-state reactor systems requires integration over hundreds of cycles, depending on the initial guess of the parameter values. The investigation of various operation models and thermal transfer strategies gives reliable means to obtain recuperative and regenerative devices capable of maintaining an auto-thermal behavior in the case of low-exothermic reactions. In the present research work a gradual analysis of the SCR of NOx with ammonia in forced unsteady-state reactors was realized. The investigation covers the presentation of the general problems related to the effect of noxious emissions on the environment, the analysis of suitable catalyst types for the process, the mathematical approach for modelling and finding the system solutions, and the experimental investigation of the device found to be most suitable for the present process. In order to gain information, in a fast and easy way, about forced unsteady-state reactor design, operation, important system parameters and their values, mathematical description, mathematical methods for solving systems of partial differential equations and other specific aspects, a case-based reasoning (CBR) approach has been used.
This approach, using the experience of past similar problems and their adapted solutions, may provide a method for gaining information and solutions for new problems related to forced unsteady-state reactor technology. As a consequence, a CBR system was implemented and a corresponding tool was developed. Further on, giving up the hypothesis of isothermal operation, the feasibility of the SCR of NOx with ammonia in the RFR and in the RN with variable feeding position was investigated by means of numerical simulation. The hypothesis of non-isothermal operation was taken into account because, in our opinion, if a commercial catalyst is considered it is not possible to modify its chemical activity and adsorptive capacity to improve the operation, but it is possible to change the operation regime. In order to identify the most suitable device for the unsteady-state reduction of NOx with ammonia, considering the perspective of recuperative and regenerative devices, a comparative analysis of the performance of the two devices mentioned above was realized. The assumption of isothermal conditions at the beginning of the forced unsteady-state investigation allowed a simplification of the analysis, enabling a focus on the impact of the conditions and mode of operation on the dynamic features caused by the trapping of one reactant in the reactor, without considering the impact of the thermal effect on overall reactor performance. The non-isothermal system was then investigated in order to point out the important influence of the thermal effect on overall reactor performance, studying the possibility of using the RFR and RN as recuperative and regenerative devices and the possibility of achieving a sustained auto-thermal behavior in the case of the low-exothermic SCR of NOx with ammonia and low-temperature gas feeding.
Besides the thermal effect, the influence of the principal operating parameters, such as switching time, inlet flow rate and initial catalyst temperature, has been stressed. This analysis is important not only because it allows a comparison between the two devices and optimisation of the operation, but also because the switching time is the main operating parameter: an appropriate choice of this parameter enables the fulfilment of the process constraints. The level of the conversions achieved, the more uniform temperature profiles, the uniformity of catalyst exploitation and the much simpler mode of operation establish the RN as a much more suitable device for the SCR of NOx with ammonia, both in usual operation and in the perspective of control strategy implementation. Simplified theoretical models have also been proposed in order to describe the performance of forced unsteady-state reactors and to estimate their internal temperature and concentration profiles. The general idea was to extend the study of catalytic reactor dynamics, taking into account perspectives that have not been analyzed yet. The experimental investigation of the RN revealed a good agreement between the data obtained by model simulation and those obtained experimentally.
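The reactor-network operating principle, rotating the feed position at a fixed switching time while keeping one flow direction, is simple to state in code. A minimal sketch (the reactor count and times below are arbitrary illustrations):

```python
def feed_position(t, n_reactors, switch_time):
    # In the reactor network the feed is moved to the next reactor in the
    # closed sequence every switch_time units, simulating a moving bed
    # while keeping one flow direction (avoiding the RFR 'wash out'
    # emissions at flow reversal).
    return int(t // switch_time) % n_reactors
```

Because the rotation is periodic with period n_reactors * switch_time, every reactor spends equal time at the feed end, which is what gives the near-uniform catalyst exploitation claimed for the RN.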
Abstract:
Computed tomography (CT) is an imaging technique in which interest has been growing quickly since it began to be used in the 1970s. Today, it has become an extensively used modality because of its ability to produce accurate diagnostic images. However, even if a direct benefit to patient healthcare is attributed to CT, the dramatic increase in the number of CT examinations performed has raised concerns about the potential negative effects of ionising radiation on the population. Among those negative effects, one of the major remaining risks is the development of cancers associated with exposure to diagnostic X-ray procedures. In order to ensure that the benefit-risk ratio remains in favour of the patient, it is necessary to make sure that the delivered dose leads to the proper diagnosis without producing unnecessarily high-quality images. This optimisation scheme is already an important concern for adult patients, but it must become an even greater priority when examinations are performed on children or young adults, in particular in follow-up studies which require several CT procedures over the patient's life. Indeed, children and young adults are more sensitive to radiation due to their faster metabolism. In addition, harmful consequences have a higher probability of occurring because of a younger patient's longer life expectancy. The recent introduction of iterative reconstruction algorithms, which were designed to substantially reduce dose, is certainly a major achievement in CT evolution, but it has also created difficulties in the quality assessment of the images produced using those algorithms. The goal of the present work was to propose a strategy to investigate the potential of iterative reconstructions to reduce dose without compromising the ability to answer the diagnostic questions. The major difficulty lies in having a clinically relevant way to estimate image quality. To ensure the choice of pertinent image quality criteria, this work was continuously performed in close collaboration with radiologists. The work began by tackling the way to characterise image quality in musculo-skeletal examinations. We focused, in particular, on image noise and spatial resolution behaviour when iterative image reconstruction was used. The analysis of these physical parameters allowed radiologists to adapt their image acquisition and reconstruction protocols while knowing what loss of image quality to expect. This work also dealt with the loss of low-contrast detectability associated with dose reduction, a major concern in abdominal investigations. Knowing that alternatives to classical Fourier-space metrics had to be used to assess image quality, we focused on the use of mathematical model observers. Our experimental parameters determined the type of model to use: ideal model observers were used to characterise image quality when purely physical parameters concerning signal detectability had to be estimated, whereas anthropomorphic model observers were used in clinical contexts where the results had to be compared with those of human observers, taking advantage of the properties of this type of model. This study confirmed that the use of model observers makes it possible to assess image quality with a task-based approach, establishing a link between medical physicists and radiologists. We also showed that iterative reconstructions have the potential to reduce dose without altering diagnostic quality. Among the various iterative reconstructions, model-based ones offer the greatest optimisation potential, since the images produced with this modality lead to an accurate diagnosis even in very low dose acquisitions. This work also clarified the role of the medical physicist in CT: standard metrics remain useful to assess a device's compliance with legal requirements, but the use of model observers is unavoidable for optimising imaging protocols.
Ideal model observers were applied to characterise image quality when purely objective results about the signal detectability were researched, whereas anthropomorphic model observers were used in a more clinical context, when the results had to be compared with the eye of a radiologist thus taking advantage of their incorporation of human visual system elements. This work confirmed that the use of model observers makes it possible to assess image quality using a task-based approach, which, in turn, establishes a bridge between medical physicists and radiologists. It also demonstrated that statistical iterative reconstructions have the potential to reduce the delivered dose without impairing the quality of the diagnosis. Among the different types of iterative reconstructions, model-based ones offer the greatest potential, since images produced using this modality can still lead to an accurate diagnosis even when acquired at very low dose. This work has clarified the role of medical physicists when dealing with CT imaging. The use of the standard metrics used in the field of CT imaging remains quite important when dealing with the assessment of unit compliance to legal requirements, but the use of a model observer is the way to go when dealing with the optimisation of the imaging protocols.
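The task-based assessment described above can be sketched with a minimal non-prewhitening matched-filter model observer. Everything below (image size, white-noise level, Gaussian signal shape, number of trials) is a hypothetical illustration, not the parameters or observer models used in this work:

```python
import math
import random

random.seed(0)
SIZE, N = 16, 200            # image side in pixels, trials per class (assumed)
SIGMA_N = 1.0                # white-noise standard deviation (assumed)
A, SIGMA_S = 0.5, 2.0        # Gaussian signal amplitude and width (assumed)

# Known signal: a low-contrast Gaussian blob centred in the image.
c = (SIZE - 1) / 2
signal = [A * math.exp(-((x - c) ** 2 + (y - c) ** 2) / (2 * SIGMA_S ** 2))
          for y in range(SIZE) for x in range(SIZE)]

def response(with_signal: bool) -> float:
    """Matched-filter response of the observer to one noisy image."""
    img = [(s if with_signal else 0.0) + random.gauss(0.0, SIGMA_N) for s in signal]
    return sum(t * p for t, p in zip(signal, img))

absent = [response(False) for _ in range(N)]
present = [response(True) for _ in range(N)]

def mean(v): return sum(v) / len(v)
def var(v):
    m = mean(v)
    return sum((x - m) ** 2 for x in v) / (len(v) - 1)

# Detectability index d': separation of the two response distributions.
d_prime = (mean(present) - mean(absent)) / math.sqrt(0.5 * (var(absent) + var(present)))
print(f"detectability d' = {d_prime:.2f}")
```

Repeating the estimate at several noise levels (a proxy for dose) yields the kind of dose-detectability curves that allow reconstruction algorithms to be compared on a task basis rather than through Fourier-space metrics alone.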
Resumo:
Fine mineral powders are commonly used in the paper and paint industries and in ceramics. Research into utilising different waste materials for these applications is environmentally important. In this work, the ultrafine grinding of two waste gypsum materials, namely FGD (Flue Gas Desulphurisation) gypsum and phosphogypsum from a phosphoric acid plant, was studied with an attrition bead mill and with a jet mill. The objective of this research was to test the suitability of the attrition bead mill and the jet mill for producing gypsum powders with a particle size of a few microns. The grinding conditions were optimised by studying the influence of the different operational grinding parameters on the grinding rate and on the energy consumption of the process, in order to achieve a product fineness such as that required in the paper industry with as low an energy consumption as possible. Based on the experimental results, the most influential parameters in attrition grinding were found to be the bead size, the stirrer type and the stirring speed. The best conditions for the attrition grinding process, in terms of product fineness and specific energy consumption, are to grind the material with small grinding beads and a high rotational speed of the stirrer; in addition, with a suitable grinding additive, a finer product is achieved at a lower energy consumption. In jet mill grinding, the most influential parameters were the feed rate, the volumetric flow rate of the grinding air and the height of the internal classification tube. The optimal conditions for the jet mill are a small feed rate and a high volumetric flow rate of grinding air with the internal classification tube in a low position. A finer product at a higher production rate was achieved with the attrition bead mill than with the jet mill; attrition grinding is therefore better suited to the ultrafine grinding of gypsum than jet grinding.
Finally, the suitability of the population balance model for simulating the grinding processes was studied with different S, B and C functions. A new S function for modelling the attrition mill and a new C function for modelling the jet mill were developed. The suitability of the selected models with the developed grinding functions was tested by curve-fitting the particle size distributions of the grinding products and comparing the fitted size distributions with the measured particle sizes. According to the simulation results, the models are suitable for the estimation and simulation of the studied grinding processes.
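The batch form of the population balance model mentioned above, dm_i/dt = -S_i m_i + sum over j&lt;i of b_ij S_j m_j, can be sketched as follows. The size grid, the power-law selection function and the power-law breakage distribution are generic textbook choices, not the S, B or C functions developed in this work:

```python
# Hypothetical batch-grinding population balance (sizes in descending order).
x = [1000.0, 500.0, 250.0, 125.0, 62.5, 31.25]   # interval top sizes, um (assumed grid)
n = len(x)
k, alpha, gamma = 0.5, 0.8, 1.2                  # assumed rate and distribution exponents

# Selection function: coarser particles break faster; the finest interval cannot break.
S = [k * (xi / x[0]) ** alpha for xi in x]
S[-1] = 0.0

# Breakage distribution b[j][i]: fraction of broken size-j material landing in finer
# interval i, from a power-law cumulative passing curve, normalised to conserve mass.
def breakage(j):
    P = lambda s: (s / x[j]) ** gamma
    raw = [P(x[i]) - (P(x[i + 1]) if i + 1 < n else 0.0) for i in range(j + 1, n)]
    total = sum(raw)
    return [r / total for r in raw]

b = [breakage(j) for j in range(n - 1)]

m = [1.0] + [0.0] * (n - 1)      # all mass starts in the coarsest interval
dt, steps = 0.01, 1000           # explicit Euler over 10 time units
for _ in range(steps):
    broken = [S[j] * m[j] * dt for j in range(n)]
    m = [mi - bk for mi, bk in zip(m, broken)]
    for j in range(n - 1):
        for offset, frac in enumerate(b[j]):
            m[j + 1 + offset] += frac * broken[j]

print("mass fractions:", [round(v, 3) for v in m])
```

Because each broken mass fraction is redistributed with weights summing to one, total mass is conserved exactly at every Euler step, which is a useful sanity check when fitting S and B functions to measured size distributions.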
Resumo:
The objective of this project was to introduce a new software product to the pulp industry, a new market for the case company. An optimisation-based scheduling tool was developed to allow pulp operations to better control their production processes and improve both production efficiency and stability. Both the present work and earlier research indicate a savings potential of around 1-5%. All the supporting data is already available from distributed control systems, data historians and other existing sources. The pulp mill model, together with the scheduler, allows what-if analyses of the impact and timely feasibility of various external actions, such as planned maintenance of any particular mill operation. The visibility gained from the model also proves to be a real benefit. The aim is to satisfy demand and gain extra profit while achieving the required customer service level. Research effort was put into understanding both the minimum features needed to satisfy the scheduling requirements of the industry and the overall existence of the market. A qualitative study was constructed to identify the competitive situation as well as the requirements and gaps in the market. It became clear that no such system exists on the marketplace today, and that there is room to improve the overall process efficiency of the target market through such a planning tool. This thesis also gives the case company a better overall understanding of the different processes in this particular industry.
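A what-if analysis of the kind described, checking when a planned maintenance outage is feasible, can be sketched by brute force over candidate start hours. The single digester-buffer-machine line, all rates, capacities and the demand profile below are invented for illustration and are far simpler than a real mill model or an optimisation-based scheduler:

```python
# Hypothetical what-if check: when can the digester be taken down for maintenance
# without starving the paper machine that draws from the buffer tank?
HORIZON, MAINT = 24, 3           # planning horizon and outage length, hours (assumed)
PROD = 120.0                     # digester output while running, t/h (assumed)
CAP, LEVEL0 = 1000.0, 200.0      # buffer tank capacity and initial level, t (assumed)
demand = [100.0] * 10 + [150.0] * 4 + [100.0] * 10   # machine draw per hour (assumed)

def feasible(start: int) -> bool:
    """Simulate the tank hour by hour; reject the slot if the tank ever empties."""
    level = LEVEL0
    for h in range(HORIZON):
        prod = 0.0 if start <= h < start + MAINT else PROD
        level = min(CAP, level + prod - demand[h])   # overflow simply spills
        if level < 0:
            return False
    return True

ok = [s for s in range(HORIZON - MAINT + 1) if feasible(s)]
print("feasible maintenance start hours:", ok)
```

Here the early slots fail because the midday demand peak would drain the buffer; only the late slots, after the buffer has been rebuilt, survive the check. A production scheduler replaces this brute-force scan with a proper optimisation over many units and objectives, but the feasibility logic per scenario is the same.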
Resumo:
Umbilical cord blood (UCB) is a good source of haematopoietic stem cells (HSC) for transplantation. However, the number of stem cells contained in this blood is often insufficient to engraft an adult. The mechanism involved in the homing of these cells to the bone marrow (BM) is still poorly understood. The interaction between the chemokine SDF-1 and the receptor CXCR4, present on UCB CD34+ cells, is known to drive the migration of these cells towards the BM. We reasoned that increasing the proportion of cells that successfully engraft could compensate for the limited cell number. The complement degradation products C3a and C3adesarg are known to enhance the response of CXCR4-expressing cells to SDF-1. We analysed the effect of C3adesarg, a non-anaphylatoxic molecule, on cell migration towards SDF-1 and on the engraftment of UCB CD34+ cells after transplantation into NOD/SCID γc- mice. Our experiments showed that both C3a and C3adesarg increased the response of CD34+ cells to SDF-1. However, we could not demonstrate that these molecules bind the CXCR4 receptor directly. Nevertheless, C3adesarg did promote the engraftment of UCB CD34+ cells and would therefore be a good candidate for further optimisation of its properties. We also observed that, following transplantation into mice, UCB CD34+ cells undergo a transient increase in CXCR4 expression around four days after the graft. This increase in expression coincides with the expansion of CD34+ cells in the BM. We further confirmed that CD34+ cells strongly expressing CXCR4 are in a proliferative state.
Our data suggest that direct interaction with stromal cells is responsible for this increase in CXCR4 expression.
Resumo:
This thesis concerns the application of population pharmacokinetics to optimise the use of certain drugs in immunosuppressed children undergoing transplantation. Among the drugs used in immunosuppressed children, busulfan, tacrolimus and voriconazole remain problematic, notably because of very large interindividual variability in their pharmacokinetics, which makes dose individualisation through therapeutic drug monitoring necessary. Moreover, these drugs have not been studied in children, and doses are adapted from adults. This practice ignores the pharmacological particularities that characterise the child throughout development and makes the extrapolation of adult data to children illusory. The work carried out in this thesis studied in turn the pharmacokinetics of busulfan, voriconazole and tacrolimus using a one-stage population approach (non-linear mixed-effects models). These models identified the main sources of interindividual variability in the pharmacokinetic parameters. The covariates identified were body surface area and weight, confirming the importance of accounting for growth in paediatrics. These covariates were included allometrically in the models. This approach separates the effect of the anthropometric measure from other covariates and allows paediatric pharmacokinetic parameters to be compared with those of adults. Accounting for these explanatory covariates should improve a priori patient management. The developed models were evaluated to confirm their stability, their simulation performance and their ability to meet the initial modelling objectives.
For busulfan, the validated model was used to propose, by simulation, a dosing regimen that would improve attainment of the target exposure and reduce therapeutic failure and the risk of toxicity. The model developed for voriconazole confirmed the large interindividual variability of its pharmacokinetics in immunosuppressed children; the limited number of patients did not allow covariates explaining this variability to be identified. Based on the population pharmacokinetic model of tacrolimus, a Bayesian estimator was developed, the first in this population of paediatric liver-transplant recipients. This estimator predicts the individual pharmacokinetic parameters and tacrolimus exposure from a limited number of samples. In conclusion, this thesis applied population pharmacokinetics in paediatrics to explore the characteristics specific to this population and to describe the pharmacokinetic variability of the drugs used in immunosuppressed children, with a view to individualising treatment. The pharmacokinetic tools developed are part of an effort to reduce the rate of therapeutic failure and the incidence of adverse or toxic effects in immunosuppressed children after transplantation.
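The allometric inclusion of weight mentioned above conventionally uses theory-based exponents of 0.75 for clearance and 1.0 for volume. A minimal sketch for a one-compartment IV-bolus drug follows; the adult reference parameters, dose and times are hypothetical, not the values estimated in the thesis:

```python
import math

# Hypothetical adult reference parameters for a one-compartment drug.
CL_ADULT, V_ADULT, WT_REF = 10.0, 50.0, 70.0   # L/h, L, kg (assumed)

def scaled_params(weight_kg: float):
    """Theory-based allometric scaling: CL ~ WT^0.75, V ~ WT^1.0."""
    cl = CL_ADULT * (weight_kg / WT_REF) ** 0.75
    v = V_ADULT * (weight_kg / WT_REF)
    return cl, v

def conc(dose_mg: float, weight_kg: float, t_h: float) -> float:
    """Concentration after an IV bolus: C(t) = (dose / V) * exp(-(CL / V) * t)."""
    cl, v = scaled_params(weight_kg)
    return dose_mg / v * math.exp(-cl / v * t_h)

# Because CL scales with a smaller exponent than V, a 20 kg child clears the
# drug faster per unit volume and so has a shorter half-life than a 70 kg adult.
for wt in (20.0, 70.0):
    cl, v = scaled_params(wt)
    t_half = math.log(2) * v / cl
    print(f"{wt:.0f} kg: CL={cl:.2f} L/h, t1/2={t_half:.2f} h, "
          f"C(6 h)={conc(100.0, wt, 6.0):.2f} mg/L")
```

Separating weight in this way lets any remaining covariates (for example, maturation or organ function) be estimated on top of the size effect, and makes the paediatric parameter estimates directly comparable with adult values, as described above.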