965 results for OPTIMIZATION PROCESS
Abstract:
Madin Darby Canine Kidney (MDCK) cell lines have been extensively evaluated for their potential as host cells for influenza vaccine production. Recent studies have allowed the cultivation of these cells in a fully defined medium and in suspension. However, reaching high cell densities in animal cell cultures still remains a challenge. To address this shortcoming, a combined methodology drawing on knowledge from systems biology was applied to study the impact of the cell environment on the flux distribution. An optimization of the medium composition was proposed for both a batch and a continuous system in order to reach higher cell densities. To obtain insight into the metabolic activity of these cells, a detailed metabolic model previously developed by Wahl et al. was used. The experimental data from four cultivations of MDCK suspension cells, grown under different conditions and used in this work, came from the Max Planck Institute, Magdeburg, Germany. Classical metabolic flux analysis (MFA) was used to estimate the intracellular flux distribution of each cultivation and then combined with the partial least squares (PLS) method to establish a link between the estimated metabolic state and the cell environment. The MFA model was validated and its consistency checked. The resulting PLS model explained almost 70% of the variance present in the flux distribution. The medium optimization for the continuous system and for the batch system resulted in higher biomass growth rates than those obtained experimentally, 0.034 h-1 and 0.030 h-1 respectively, thus reducing the doubling time by almost 10 hours. Additionally, the optimal medium obtained for the continuous system contained almost no pyruvate. Overall, the proposed methodology seems to be effective and both proposed medium optimizations seem promising for reaching high cell densities.
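The growth rates reported above translate into doubling times via t_d = ln 2 / μ; a quick illustrative check (the two rates come from the abstract, everything else is hypothetical):

```python
import math

def doubling_time(mu_per_hour):
    """Doubling time t_d = ln(2) / mu for exponential growth at specific rate mu."""
    return math.log(2) / mu_per_hour

# Optimized growth rates reported in the abstract (h-1)
td_continuous = doubling_time(0.034)  # about 20.4 h
td_batch = doubling_time(0.030)       # about 23.1 h
```

The "almost 10 hours" reduction in doubling time mentioned in the abstract is relative to the (unstated) experimental growth rates, so it cannot be reproduced from these two numbers alone.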
Abstract:
In order to address and resolve the wastewater contamination problem of the Sines refinery, with the main objective of optimizing the quality of this stream and reducing the costs charged to the refinery, a dynamic mass balance was developed and implemented for ammonia and polar oil and grease (O&G) contamination in the wastewater circuit. The inadequate routing of sour gas from the sour water stripping unit and from the kerosene caustic washing unit was identified as the major source of, respectively, the ammonia and the polar substances present in the industrial wastewater effluent. For the O&G content, a predictive model was developed for the kerosene caustic washing unit, following the Projection to Latent Structures (PLS) approach. Comparison between analytical data for ammonia and polar O&G concentrations in refinery wastewater originating from the Dissolved Air Flotation (DAF) effluent and the predictions of the dynamic mass balance calculations shows very good agreement and highlights the dominant impact of the identified streams on the wastewater contamination levels. The ammonia contamination problem was solved by rerouting the sour gas through an existing line that had become clogged with ammonia salts due to a non-insulated line section, while for the O&G the dynamic mass balance was implemented as an online tool, which allows possible contamination situations to be anticipated and the required preventive actions to be taken, and can also serve as a basis for establishing relationships between the O&G contamination in the refinery wastewater and the properties of the refined crude oils and the process operating conditions. The PLS model developed could be a great asset both in optimizing existing refinery wastewater treatment units or reuse schemes and in designing new ones.
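The abstract does not give the balance equations themselves; purely as an illustration, a dynamic mass balance over a single well-mixed vessel can be sketched as below (all names and numbers are hypothetical, not refinery data):

```python
def simulate_contaminant(c0, c_in, q, volume, dt, steps):
    """Euler integration of a well-mixed (CSTR) contaminant mass balance:
    V * dC/dt = Q * (C_in - C).  Returns the concentration trajectory."""
    c = c0
    trajectory = [c]
    for _ in range(steps):
        c += dt * q * (c_in - c) / volume
        trajectory.append(c)
    return trajectory

# Hypothetical values: 100 m3 vessel, 5 m3/h flow, inlet at 30 mg/L ammonia,
# simulated for 200 h (10 residence times), so the outlet approaches the inlet level
profile = simulate_contaminant(c0=0.0, c_in=30.0, q=5.0, volume=100.0, dt=0.1, steps=2000)
```

A real refinery circuit would chain several such balances over tanks and streams, but each node follows this same accumulation = in - out pattern.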
In order to find a possible treatment solution for the spent caustic problem, on-site pilot plant experiments for NaOH recovery from the refinery kerosene caustic washing unit effluent, using an alkaline-resistant nanofiltration (NF) polymeric membrane, were performed to evaluate its applicability for treating these highly alkaline and contaminated streams. For a constant operating pressure and temperature and adequate operating conditions, 99.9% rejection of oil and grease and 97.7% rejection of chemical oxygen demand (COD) were observed. No noticeable membrane fouling or flux decrease was registered up to a volume concentration factor of 3. These results allow NF permeate to be reused instead of fresh caustic and the wastewater contamination to be significantly reduced, which can result in savings of 1.5 M€ per year at current prices for the largest Portuguese oil refinery. The capital investment needed to implement the required NF membrane system is less than 10% of that associated with the traditional wet air oxidation solution to the spent caustic problem. The operating costs are very similar, but can be less than half if the NF concentrate is reused in refinery pH control applications. The payback period was estimated to be 1.1 years. Overall, the pilot plant experimental results and the process economic evaluation data indicate a very competitive solution through the proposed NF treatment process, which represents a highly promising alternative to conventional and existing spent caustic treatment units.
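The reported 1.1-year payback can be sanity-checked with a simple undiscounted payback calculation (the implied investment figure below is a back-calculation for illustration, not a number from the study):

```python
def simple_payback(capex_eur, annual_savings_eur):
    """Undiscounted payback period in years: investment / annual savings."""
    return capex_eur / annual_savings_eur

# With the reported 1.5 M EUR/year savings and 1.1-year payback, the implied
# investment is about 1.65 M EUR (hypothetical back-calculation)
implied_capex = 1.1 * 1.5e6
payback = simple_payback(implied_capex, 1.5e6)  # 1.1 years
```

Real project economics would discount future cash flows, but the undiscounted payback is the figure an abstract of this kind usually quotes.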
Abstract:
Polysaccharides are gaining increasing attention as potential environmentally friendly and sustainable building blocks in many fields of the (bio)chemical industry. The microbial production of polysaccharides is envisioned as a promising path, since higher biomass growth rates are possible and therefore higher productivities may be achieved compared to vegetable or animal polysaccharide sources. This Ph.D. thesis focuses on the modeling and optimization of the production of a particular microbial polysaccharide, namely the extracellular polysaccharides (EPS) produced by the bacterial strain Enterobacter A47. Enterobacter A47 was found to be a metabolically versatile organism in terms of its adaptability to complex media, notably capable of achieving high growth rates in media containing glycerol byproduct from the biodiesel industry. However, the industrial implementation of this production process is still hampered by a largely unoptimized process. Kinetic rates of the bioreactor operation depend heavily on operational parameters such as temperature, pH, stirring and aeration rate. The increase of culture broth viscosity is a common feature of this culture and has a major impact on the overall performance. This fact complicates the mathematical modeling of the process, limiting the possibility to understand, control and optimize productivity. In order to tackle this difficulty, data-driven mathematical methodologies such as Artificial Neural Networks can be employed to incorporate additional process data and complement the known mathematical description of the fermentation kinetics. In this Ph.D. thesis, we have adopted such a hybrid modeling framework, which enabled the incorporation of temperature, pH and viscosity effects on the fermentation kinetics in order to improve the dynamic modeling and optimization of the process.
A model-based optimization method was implemented that enabled the design of optimal bioreactor control strategies for EPS productivity maximization. It is also critical to understand EPS synthesis at the level of the bacterial metabolism, since the production of EPS is a tightly regulated process. Methods of pathway analysis provide a means to unravel the fundamental pathways and their controls in bioprocesses. In the present Ph.D. thesis, a novel methodology called Principal Elementary Mode Analysis (PEMA) was developed and implemented, enabling the identification of which cellular fluxes are activated under different conditions of temperature and pH. It is shown that differences in these two parameters affect the chemical composition of EPS, hence they are critical for the regulation of product synthesis. In future studies, the knowledge provided by PEMA could foster the development of metabolically meaningful control strategies that target the EPS sugar content and other product quality parameters.
Abstract:
The building sector has become an important target for reducing carbon emissions, energy consumption and resource depletion. Due to the low replacement rate of existing buildings, their low energy performance is a major concern. Most current regulations focus on new buildings and do not account for the several technical, functional and economic constraints that have to be faced in the renovation of existing buildings. Thus, a new methodology is proposed for the decision-making process in energy-related building renovation, allowing a cost-effective balance to be found between energy consumption, carbon emissions and overall added value.
Abstract:
Integrated master's dissertation in Mechanical Engineering
Abstract:
Doctoral thesis (Doctoral Programme in Biomedical Engineering)
Abstract:
Kinetic models have a great potential for metabolic engineering applications. They can be used for testing which genetic and regulatory modifications can increase the production of metabolites of interest, while simultaneously monitoring other key functions of the host organism. This work presents a methodology for increasing productivity in biotechnological processes by exploiting dynamic models. It uses multi-objective dynamic optimization to identify the combination of targets (enzymatic modifications) and the degree of up- or down-regulation that must be performed in order to optimize a set of pre-defined performance metrics subject to process constraints. The capabilities of the approach are demonstrated on a realistic and computationally challenging application: a large-scale metabolic model of Chinese Hamster Ovary (CHO) cells, which are used for antibody production in a fed-batch process. The proposed methodology provides sustained and robust growth in CHO cells, increasing productivity while simultaneously increasing biomass production and product titer, and keeping the concentrations of lactate and ammonia at low values. The approach presented here can be used for optimizing metabolic models by finding the best combination of targets and their optimal level of up/down-regulation. Furthermore, it can accommodate additional trade-offs and constraints with great flexibility.
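Multi-objective optimization of the kind described above trades off several performance metrics at once; a minimal sketch of the underlying non-dominance (Pareto) filtering idea, using made-up objective values rather than the CHO model's outputs:

```python
def pareto_front(points):
    """Return the non-dominated points, assuming every objective is maximized.
    A point p is dominated if some other point q is at least as good in all
    objectives (and is not p itself)."""
    front = []
    for p in points:
        dominated = any(
            all(qi >= pi for qi, pi in zip(q, p)) and q != p
            for q in points
        )
        if not dominated:
            front.append(p)
    return front

# Toy (growth, titer) objective pairs for hypothetical enzyme modulations
candidates = [(1.0, 2.0), (2.0, 1.0), (1.5, 1.5), (0.5, 0.5)]
front = pareto_front(candidates)  # (0.5, 0.5) is dominated and dropped
```

A dynamic optimizer layered on a kinetic model would generate such candidate points by simulation, then keep only the non-dominated trade-offs for the engineer to choose from.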
Abstract:
Doctoral thesis in Civil Engineering.
Abstract:
[Excerpt] Bioethanol from lignocellulosic materials (LCM), also called second generation bioethanol, is considered a promising alternative to first generation bioethanol. An efficient production process for lignocellulosic bioethanol involves an effective pretreatment of LCM to improve the accessibility of cellulose and thus enhance the enzymatic saccharification. One interesting approach is to use the whole slurry from the pretreatment, since it offers economic and industrial benefits: washing steps are avoided, water consumption is lower and the sugars from the liquid phase can be used, increasing the ethanol concentration [1]. However, during the pretreatment step some compounds (such as furans, phenolic compounds and weak acids) are produced. These compounds have an inhibitory effect on the microorganisms used for hydrolysate fermentation [2]. To overcome this, the use of a robust industrial strain together with agro-industrial by-products as nutritional supplementation was proposed to increase ethanol productivities and yields. (...)
Abstract:
Decision support models in intensive care units are developed to support the medical staff in their decision-making process. However, the optimization of these models is particularly difficult due to their dynamic, complex and multidisciplinary nature. Thus, there is constant research and development of new algorithms capable of extracting knowledge from large volumes of data, in order to obtain better predictive results than the current algorithms. To test the optimization techniques, a case study with real data provided by the INTCare project was explored. These data concern extubation cases. On this dataset, several models, such as Evolutionary Fuzzy Rule Learning, Lazy Learning, Decision Trees and many others, were analysed in order to detect early extubation. The hybrid Decision Trees Genetic Algorithm, Supervised Classifier System and KNNAdaptive models obtained the highest accuracy rates, 93.2%, 93.1% and 92.97% respectively, thus showing their feasibility for working in a real environment.
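The accuracy rates quoted above are fractions of correctly classified cases; a minimal sketch of the metric on hypothetical extubation predictions (not the INTCare data):

```python
def accuracy(predictions, labels):
    """Fraction of cases where the predicted class matches the true label."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

# Hypothetical binary outcomes for ten extubation cases (1 = extubate early)
preds  = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
labels = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]
acc = accuracy(preds, labels)  # 0.8, i.e. 80% accuracy
```

In a clinical setting, accuracy alone can be misleading for imbalanced outcomes, which is presumably why several model families were compared on the same dataset.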
Abstract:
BACKGROUND: Multiple interventions were made to optimize the medication process in our intensive care unit (ICU). (1) Transcriptions from the medical order form to the administration plan were eliminated by merging both into a single document; (2) the new form was built in a logical sequence and was highly structured to promote completeness and standardization of information; (3) frequently used drug names, approved units, and fixed routes were pre-printed; (4) physicians and nurses were trained with regard to the correct use of the new form. This study was aimed at evaluating the impact of these interventions on clinically significant types of medication errors. METHODS: Eight types of medication errors were measured by a prospective chart review before and after the interventions in the ICU of a public tertiary care hospital. We used an interrupted time-series design to control the secular trends. RESULTS: Over 85 days, 9298 lines of drug prescription and/or administration to 294 patients, corresponding to 754 patient-days, were collected and analysed for the three series before and three series following the intervention. The global error rate decreased from 4.95 to 2.14% (-56.8%, P < 0.001). CONCLUSIONS: The safety of the medication process in our ICU was improved by simple and inexpensive interventions. In addition to the optimization of the prescription writing process, the documentation of intravenous preparation, and the scheduling of administration, the elimination of the transcription in combination with the training of users contributed to reducing errors and carried an interesting potential to increase safety.
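The reported -56.8% change is the relative reduction of the global error rate; a one-line check using the figures from the abstract:

```python
def relative_reduction(before, after):
    """Percent reduction of a rate relative to its initial value."""
    return (before - after) / before * 100

# Error rates from the abstract: 4.95% before, 2.14% after the interventions
change = relative_reduction(4.95, 2.14)  # about 56.8%, matching the reported -56.8%
```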
Abstract:
Integrating a transgene into the cellular genome and expressing it stably remain major challenges for gene-based therapies and for bioproduction purposes. While transposon vectors mediate efficient transgene integration, expression may be limited by epigenetic silencing, and persistent transposase expression may mediate multiple transposition cycles. Here, we evaluated the delivery of the piggyBac transposase messenger RNA combined with genetically insulated transposons to isolate the transgene from neighboring regulatory elements and stabilize expression. A comparison of piggyBac transposase expression from messenger RNA and DNA vectors was carried out in terms of expression levels, transposition efficiency, transgene expression and genotoxic effects, in order to calibrate and secure the transposition-based delivery system. Messenger RNA reduced the persistence of the transposase to a narrow window, thus decreasing side effects such as superfluous genomic DNA cleavage. Both the CTF/NF1 and the D4Z4 insulators were found to mediate more efficient expression from a few transposition events. We conclude that the use of engineered piggyBac transposase mRNA and insulated transposons offers promising ways of improving the quality of the integration process and sustaining the expression of transposon vectors.
Abstract:
In the context of Systems Biology, computer simulations of gene regulatory networks provide a powerful tool to validate hypotheses and to explore possible system behaviors. Nevertheless, modeling a system poses some challenges of its own: in particular, the step of model calibration is often difficult due to insufficient data. For example, when considering developmental systems, mostly qualitative data describing the developmental trajectory are available, while common calibration techniques rely on high-resolution quantitative data. Focusing on the calibration of differential equation models for developmental systems, this study investigates different approaches to utilize the available data to overcome these difficulties. More specifically, the fact that developmental processes are hierarchically organized is exploited to increase convergence rates of the calibration process as well as to save computation time. Using a gene regulatory network model for stem cell homeostasis in Arabidopsis thaliana, the performance of the different investigated approaches is evaluated, documenting considerable gains provided by the proposed hierarchical approach.
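Calibration here means fitting model parameters so that simulations match the data; a minimal grid-search sketch on a synthetic logistic growth model (not the Arabidopsis network model) illustrates the idea:

```python
def simulate_logistic(r, k, x0, dt, steps):
    """Euler integration of the logistic ODE dx/dt = r * x * (1 - x/k)."""
    x = x0
    trajectory = [x]
    for _ in range(steps):
        x += dt * r * x * (1 - x / k)
        trajectory.append(x)
    return trajectory

def calibrate_r(data, k, x0, dt, candidates):
    """Pick the growth rate minimizing the squared error against the data."""
    def sse(r):
        sim = simulate_logistic(r, k, x0, dt, len(data) - 1)
        return sum((s - d) ** 2 for s, d in zip(sim, data))
    return min(candidates, key=sse)

# Synthetic "measurements" generated with r = 0.5, then recovered by the search
truth = simulate_logistic(r=0.5, k=10.0, x0=0.1, dt=0.1, steps=100)
best = calibrate_r(truth, k=10.0, x0=0.1, dt=0.1,
                   candidates=[0.1, 0.3, 0.5, 0.7, 0.9])
```

Hierarchical calibration, as investigated in the study, would fit parameters of earlier developmental stages first and reuse them when fitting later stages, rather than searching the whole parameter space at once.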
Abstract:
America’s roadways are in serious need of repair. According to the American Society of Civil Engineers (ASCE), one-third of the nation’s roads are in poor or mediocre condition (1). ASCE has estimated that under these circumstances American drivers will sacrifice $5.8 billion and as many as 13,800 fatalities a year from 1999 to 2001 (1). A large factor in the deterioration of these roads is how well the steel reinforcement transfers loads across the concrete slabs. Fabricating this reinforcement using a shape conducive to transferring these loads will help minimize roadway damage. Load transfer within a series of concrete slabs takes place across the joints. For a typical concrete paved road, these joints are approximately 1/8-inch gaps between two adjacent slabs. Dowel bars are located at these joints and used to transfer load from one slab to its adjacent slabs. As long as the dowel bar is completely surrounded by concrete, no problems will occur. However, when the hole starts to become oblong, a void space is created and difficulties can arise. This void space is formed due to a stress concentration where the dowel contacts the concrete. Over time, the repeated process of traffic traveling over the joint crushes the concrete surrounding the dowel bar and causes a void in the concrete. This void inhibits the dowel’s ability to effectively transfer load across the joint. Furthermore, this void gives water and other particles a place to collect that will eventually corrode and potentially bind or lock the joint so that no thermal expansion is allowed. Once there is no longer load transferred across the joint, the load is transferred to the foundation and differential settlement of the adjacent slabs will occur.