917 results for Optimization algorithm


Relevance: 20.00%

Publisher:

Abstract:

Kinetic models have great potential for metabolic engineering applications. They can be used to test which genetic and regulatory modifications increase the production of metabolites of interest, while simultaneously monitoring other key functions of the host organism. This work presents a methodology for increasing productivity in biotechnological processes by exploiting dynamic models. It uses multi-objective dynamic optimization to identify the combination of targets (enzymatic modifications) and the degree of up- or down-regulation that must be applied in order to optimize a set of pre-defined performance metrics subject to process constraints. The capabilities of the approach are demonstrated on a realistic and computationally challenging application: a large-scale metabolic model of Chinese Hamster Ovary (CHO) cells, which are used for antibody production in a fed-batch process. The proposed methodology provides sustained and robust growth in CHO cells, increasing productivity, biomass production, and product titer while keeping lactate and ammonia concentrations low. The approach presented here can be used to optimize metabolic models by finding the best combination of targets and their optimal level of up/down-regulation. Furthermore, it can accommodate additional trade-offs and constraints with great flexibility.
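The abstract above describes finding enzyme up/down-regulation targets by multi-objective dynamic optimization of a kinetic model. The following is a minimal sketch of that idea, assuming a toy two-reaction kinetic model rather than the CHO model from the work: candidate enzyme fold-changes are sampled, each candidate is simulated, and the non-dominated (Pareto) combinations for final biomass and product titer are reported. All rate laws, parameters, and bounds are illustrative assumptions.

```python
# Minimal sketch: multi-objective search over enzyme fold-changes for a toy
# kinetic model (illustrative only; not the CHO model from the abstract).
import numpy as np
from scipy.integrate import solve_ivp

def toy_kinetics(t, y, fold):
    # y = [substrate, biomass, product]; fold = per-enzyme up/down-regulation factors
    s, x, p = y
    v_growth = fold[0] * 0.5 * s / (0.2 + s) * x    # Monod-type growth reaction
    v_product = fold[1] * 0.3 * s / (0.5 + s) * x   # product synthesis reaction
    return [-(v_growth + v_product), 0.1 * v_growth, v_product]

def evaluate(fold, t_end=24.0):
    sol = solve_ivp(toy_kinetics, (0.0, t_end), [10.0, 0.1, 0.0], args=(fold,))
    return sol.y[1, -1], sol.y[2, -1]               # final biomass and product titer

def pareto_indices(objs):
    # Indices of non-dominated points (both objectives are maximized)
    keep = []
    for i, p in enumerate(objs):
        dominated = any(all(q >= p) and any(q > p) for j, q in enumerate(objs) if j != i)
        if not dominated:
            keep.append(i)
    return keep

rng = np.random.default_rng(0)
folds = rng.uniform(0.25, 4.0, size=(200, 2))       # candidate fold-changes per enzyme
objs = np.array([evaluate(f) for f in folds])
for i in pareto_indices(objs)[:5]:
    print(f"fold-changes {folds[i].round(2)} -> biomass {objs[i][0]:.2f}, titer {objs[i][1]:.2f}")
```

A full study would replace the random sampling with a multi-objective evolutionary or dynamic optimization solver and add process constraints (for example, bounds on lactate and ammonia) to the candidate evaluation.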

Relevance: 20.00%

Publisher:

Abstract:

Integrated master's dissertation in Civil Engineering

Relevance: 20.00%

Publisher:

Abstract:

Doctoral thesis in Civil Engineering.

Relevance: 20.00%

Publisher:

Abstract:

[Excerpt] Bioethanol from lignocellulosic materials (LCM), also called second-generation bioethanol, is considered a promising alternative to first-generation bioethanol. An efficient production process for lignocellulosic bioethanol requires an effective pretreatment of the LCM to improve the accessibility of cellulose and thus enhance enzymatic saccharification. One interesting approach is to use the whole slurry from the pretreatment, since it offers economic and industrial benefits: washing steps are avoided, water consumption is lower, and the sugars from the liquid phase can be used, increasing the ethanol concentration [1]. However, during the pretreatment step some compounds (such as furans, phenolic compounds, and weak acids) are produced. These compounds have an inhibitory effect on the microorganisms used for hydrolysate fermentation [2]. To overcome this, the use of a robust industrial strain together with agro-industrial by-products as nutritional supplementation was proposed to increase ethanol productivity and yield. (...)

Relevance: 20.00%

Publisher:

Abstract:

Fluorescence in situ hybridization (FISH) is a molecular technique widely used for the detection and characterization of microbial populations. FISH is affected by a wide variety of abiotic and biotic variables and by the way they interact with each other, which translates into the wide variability of FISH procedures found in the literature. The aim of this work is to systematically study the effects of pH, dextran sulfate concentration, and probe concentration on the FISH protocol, using a general peptide nucleic acid (PNA) probe for the Eubacteria domain. To this end, response surface methodology was used to optimize these three PNA-FISH parameters for Gram-negative (Escherichia coli and Pseudomonas fluorescens) and Gram-positive species (Listeria innocua, Staphylococcus epidermidis and Bacillus cereus). The results show that a probe concentration higher than 300 nM is favorable for both groups. Interestingly, a clear distinction between the two groups regarding the optimal pH and dextran sulfate concentration was found: a high pH (approx. 10) combined with a lower dextran sulfate concentration (approx. 2% [w/v]) for Gram-negative species, and a near-neutral pH (approx. 8) together with a higher dextran sulfate concentration (approx. 10% [w/v]) for Gram-positive species. This behavior seems to result from an interplay between pH and dextran sulfate and their ability to influence probe concentration and diffusion towards the rRNA target. This study shows that, for an optimal hybridization protocol, dextran sulfate and pH should be adjusted according to the target bacteria.
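The optimization above relies on response surface methodology (RSM). As a rough illustration, the sketch below fits a second-order (quadratic) response surface to hypothetical fluorescence readings over pH and dextran sulfate concentration and solves for the stationary point of the fitted model; the design points, responses, and ranges are invented for illustration and are not the study's data.

```python
# Minimal RSM sketch: quadratic surface fit over two factors, then the
# stationary point of the fitted model (all data below are hypothetical).
import numpy as np

# Hypothetical 3x3 factorial design: (pH, dextran sulfate % w/v) and responses
X = np.array([[7.0, 2.0], [7.0, 6.0], [7.0, 10.0],
              [8.5, 2.0], [8.5, 6.0], [8.5, 10.0],
              [10.0, 2.0], [10.0, 6.0], [10.0, 10.0]])
y = np.array([0.40, 0.50, 0.55, 0.58, 0.70, 0.60, 0.62, 0.57, 0.48])

def design_matrix(X):
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), x1, x2, x1 * x2, x1**2, x2**2])

# Least-squares fit of y = b0 + b1*x1 + b2*x2 + b3*x1*x2 + b4*x1^2 + b5*x2^2
b, *_ = np.linalg.lstsq(design_matrix(X), y, rcond=None)

# Stationary point: set the gradient of the fitted quadratic model to zero
H = np.array([[2.0 * b[4], b[3]], [b[3], 2.0 * b[5]]])
stationary = np.linalg.solve(H, -np.array([b[1], b[2]]))
print("coefficients:", b.round(4))
print("stationary point (pH, dextran sulfate %):", stationary.round(2))
# In practice one also checks that the point is a maximum and lies inside the design region.
```

A real analysis would use a proper experimental design (for example, a central composite design) with replicates and significance tests before trusting the fitted optimum.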

Relevance: 20.00%

Publisher:

Abstract:

It has been reported that growth hormone may benefit selected patients with congestive heart failure. A 63-year-old man with refractory congestive heart failure, awaiting heart transplantation, dependent on intravenous drugs (dobutamine) and presenting with progressive worsening of clinical status and cachexia despite standard treatment, received growth hormone replacement (8 units per day) for optimization of congestive heart failure management. Increases in both serum growth hormone (from 0.3 to 0.8 mg/l) and serum IGF-1 (from 130 to 300 ng/ml) were noted, in association with improvement in clinical status, better optimization of heart failure treatment, and discontinuation of the dobutamine infusion. Left ventricular ejection fraction (by MUGA) increased from 13% to 18%, and later to 28%, in association with a reduction in pulmonary pressures and an increase in exercise capacity (peak VO2 rose to 13.4 and later to 16.2 ml/kg/min). The patient was "de-listed" for heart transplantation. Growth hormone may benefit selected patients with refractory heart failure.

Relevance: 20.00%

Publisher:

Abstract:

This paper presents an automated optimization framework that provides network administrators with resilient routing configurations for link-state protocols such as OSPF or IS-IS. To deal with the formulated NP-hard optimization problems, the framework relies on computational intelligence optimization engines, such as Multi-objective Evolutionary Algorithms (MOEAs). To demonstrate the framework's capabilities, two illustrative Traffic Engineering methods are described, producing routing configurations that are robust to changes in traffic demands and that keep the network stable even in the presence of link failures. The illustrative results corroborate the usefulness of the proposed automated framework and of the devised optimization methods.
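As a rough, single-objective simplification of the evolutionary weight-setting idea described above, the sketch below evolves OSPF-style link weights on a toy topology so as to minimize the maximum link utilization under a fixed demand matrix. The topology, capacities, demands, and GA parameters are illustrative assumptions; the framework in the paper additionally targets robustness to demand changes and link failures using multi-objective engines.

```python
# Minimal sketch: evolutionary search over link weights (single objective).
import random
import networkx as nx

LINKS = [("A", "B", 10), ("B", "C", 10), ("A", "C", 10), ("C", "D", 10), ("B", "D", 10)]
DEMANDS = {("A", "D"): 6, ("A", "C"): 4, ("B", "D"): 5}   # traffic units per node pair

def max_utilization(weights):
    # Route every demand on its shortest path for the given weights and
    # return the worst link load/capacity ratio.
    G = nx.Graph()
    for (u, v, cap), w in zip(LINKS, weights):
        G.add_edge(u, v, weight=w, cap=cap, load=0.0)
    for (src, dst), vol in DEMANDS.items():
        path = nx.shortest_path(G, src, dst, weight="weight")
        for u, v in zip(path, path[1:]):
            G[u][v]["load"] += vol
    return max(d["load"] / d["cap"] for _, _, d in G.edges(data=True))

def evolve(pop_size=30, generations=50, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(1, 20) for _ in LINKS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=max_utilization)                            # elitist selection
        parents = pop[: pop_size // 2]
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = rng.sample(parents, 2)
            child = [rng.choice(pair) for pair in zip(a, b)]     # uniform crossover
            child[rng.randrange(len(child))] = rng.randint(1, 20)  # point mutation
            children.append(child)
        pop = parents + children
    best = min(pop, key=max_utilization)
    return best, max_utilization(best)

weights, util = evolve()
print("best link weights:", weights, "max utilization:", round(util, 2))
```

Equal-cost multipath splitting and the additional resilience objectives are ignored here to keep the sketch short.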

Relevance: 20.00%

Publisher:

Abstract:

OBJECTIVE: To report the hemodynamic and functional responses obtained with clinical optimization guided by hemodynamic parameters in patients with severe, refractory heart failure. METHODS: Invasive hemodynamic monitoring via right heart catheterization was used, aiming to reach low filling pressures and low peripheral resistance. Intravenous diuretics and vasodilators were adjusted frequently according to the hemodynamic measurements. RESULTS: We assessed 19 patients (age 48±12 years, ejection fraction 21±5%) with severe heart failure. Intravenous diuretics and vasodilators reduced pulmonary artery occlusion pressure by 12 mm Hg (a relative reduction of 43%, P<0.001), with a concomitant increase of 6 mL per beat in stroke volume (a relative increase of 24%, P<0.001). Pulmonary artery occlusion pressure was significantly associated with mean pulmonary artery pressure (r=0.76; P<0.001) and with central venous pressure (r=0.63; P<0.001). After clinical optimization, functional class improved (P<0.001), with a trend towards improvement in ejection fraction and no impairment of renal function. CONCLUSION: Optimization guided by hemodynamic parameters in patients with refractory heart failure provides a significant improvement in the hemodynamic profile, with concomitant improvement in functional class. This study emphasizes that adjustments in blood volume yield immediate benefits for patients with severe heart failure.

Relevance: 20.00%

Publisher:

Abstract:

In our previous project we approximated the computation of a definite integral with integrands exhibiting large functional variations. Our approach parallelizes the computation algorithm of an adaptive quadrature method based on Newton-Cotes rules. The first results were presented at several national and international conferences; they allowed us to start characterizing the existing quadrature rules and classifying some of the functions used as test functions. We have not yet completed these classification and characterization tasks, so we intend to continue them in order to report on whether or not our technique is advisable. To carry out this task, a set of test functions will be assembled and the range of quadrature rules to be used will be broadened. In addition, we propose to restructure the computation of some routines involved in calculating the minimum energy of a molecule. This program already exists in a sequential version and is modeled using the LCAO approximation. It achieves good accuracy compared with similar international publications, but requires a significantly long computation time. Our proposal is to parallelize this algorithm at (at least) two levels: 1) deciding whether it is better to distribute the computation of a single integral among several processors or to distribute different integrals among different processors, bearing in mind that, in parallel architectures based on networks (typically local area networks, LAN), the time spent exchanging messages between processors is very significant when measured in the number of arithmetic operations a processor can complete in the same time; 2) if necessary, parallelizing the computation of double and/or triple integrals. To develop this proposal, heuristics will be devised to verify and build models for the cases mentioned above, aiming to improve the known computation routines, and the algorithms will be tested on benchmark cases. The methodology is the usual one in numerical analysis. Each proposal requires: a) implementing a computation algorithm, aiming at versions that improve on the existing ones; b) comparing against the existing routines to confirm or rule out better numerical performance; c) carrying out theoretical error analyses related to the method and to the implementation. An interdisciplinary team was formed, with researchers from both Computer Science and Mathematics. Goals: to obtain a characterization of the quadrature rules according to their effectiveness on functions with oscillatory behavior and with exponential decay, and to develop suitable, optimized computational implementations based on parallel architectures.
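As a minimal sketch of the techniques mentioned above, assuming an illustrative integrand, intervals, and tolerance, the code below implements a recursive adaptive Simpson rule (a Newton-Cotes-based adaptive quadrature) and distributes independent integrals across worker processes, i.e., one of the options considered at the first parallelization level (different integrals on different processors).

```python
# Minimal sketch: adaptive Simpson quadrature with one integral per worker process.
import math
from concurrent.futures import ProcessPoolExecutor

def simpson(f, a, b):
    # Basic three-point Newton-Cotes (Simpson) rule on [a, b]
    return (b - a) / 6.0 * (f(a) + 4.0 * f((a + b) / 2.0) + f(b))

def adaptive_simpson(f, a, b, tol=1e-8, whole=None):
    # Subdivide until the two-half estimate agrees with the single-panel estimate
    if whole is None:
        whole = simpson(f, a, b)
    m = (a + b) / 2.0
    left, right = simpson(f, a, m), simpson(f, m, b)
    if abs(left + right - whole) <= 15.0 * tol:
        return left + right + (left + right - whole) / 15.0   # Richardson correction
    return (adaptive_simpson(f, a, m, tol / 2.0, left) +
            adaptive_simpson(f, m, b, tol / 2.0, right))

def oscillatory(x):
    return math.sin(50.0 * x) * math.exp(-x)   # integrand with large functional variation

def integrate(bounds):
    a, b = bounds
    return adaptive_simpson(oscillatory, a, b)

if __name__ == "__main__":
    intervals = [(0.0, 1.0), (1.0, 2.0), (2.0, 3.0)]
    with ProcessPoolExecutor() as pool:        # one (sub)integral per worker process
        parts = list(pool.map(integrate, intervals))
    print("partial integrals:", parts, "total:", sum(parts))
```

Distributing whole integrals keeps inter-process messages rare, which matters on LAN-based clusters where communication costs many arithmetic operations; splitting a single integral across processors only pays off when each integral is expensive enough to amortize that communication.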

Relevance: 20.00%

Publisher:

Abstract:

Today's advances in computing power come from the parallelization of processing, enabled by the features of the new hardware architectures. Using this hardware appropriately accelerates the algorithms being executed (programs). However, properly converting an algorithm into its parallel form is complex and, moreover, that parallel form is specific to each type of parallel hardware. The most common general-purpose processors today are multicore parallel processors, also called Symmetric Multi-Processors (SMP). It is now hard to find a desktop processor without some form of SMP parallelism, and the development trend is towards processors with an ever larger number of cores. Graphics Processing Units (GPU), in turn, have increased their computing power by integrating multiple processing units in their design, to the point that it is now common to find GPU boards capable of running 200 to 400 parallel processing threads. These processors are very fast and specialized for the task they were designed for, mainly video processing. However, since this kind of processing has much in common with scientific computing, these devices have been repurposed under the name General-Purpose Graphics Processing Unit (GPGPU). Unlike the SMP processors mentioned above, GPGPUs are not general purpose, and their use is complicated by the limited amount of memory available on each board and by the kind of parallel processing required to use them productively. Programmable logic devices (FPGA) can perform large numbers of operations in parallel, so they can be used to implement specific algorithms that exploit the parallelism they offer; their drawback is the complexity of programming and testing the algorithm instantiated on the device. Given this diversity of parallel processors, the goal of our work is to analyze the specific characteristics of each of them and their impact on the structure of algorithms, so that their use achieves processing performance commensurate with the resources employed, and to combine them so that they complement each other beneficially. Specifically, starting from the hardware characteristics, we aim to determine the properties a parallel algorithm must have in order to be accelerated; these properties will in turn determine which of these types of hardware is best suited for its implementation. In particular, we will take into account the degree of data dependence, the need for synchronization during parallel processing, the size of the data to be processed, and the complexity of parallel programming on each type of hardware.
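As a small illustration, assuming arbitrary workloads and sizes, the sketch below contrasts two of the algorithm properties highlighted above: an independent per-element workload that parallelizes cleanly across SMP cores, and a loop-carried dependence that forces sequential execution.

```python
# Minimal sketch: data-parallel work vs. a loop-carried dependence on an SMP machine.
import math
import time
from concurrent.futures import ProcessPoolExecutor

def heavy(x):
    # Independent per-element work: no dependence between iterations
    return sum(math.sin(x + i) for i in range(20_000))

def dependent_scan(values):
    # Loop-carried dependence: each step needs the previous result
    acc, out = 0.0, []
    for v in values:
        acc = math.sin(acc + v)
        out.append(acc)
    return out

if __name__ == "__main__":
    data = list(range(64))

    t0 = time.perf_counter()
    serial = [heavy(x) for x in data]
    t1 = time.perf_counter()
    with ProcessPoolExecutor() as pool:        # data-parallel map over the available cores
        parallel = list(pool.map(heavy, data))
    t2 = time.perf_counter()

    print(f"independent work: serial {t1 - t0:.2f}s, parallel {t2 - t1:.2f}s")
    print("dependent scan (inherently sequential):", [round(v, 3) for v in dependent_scan(data)[:3]])
```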

Relevance: 20.00%

Publisher:

Abstract:

Magdeburg, Univ., Faculty of Electrical Engineering and Information Technology, doctoral dissertation, 2012

Relevance: 20.00%

Publisher:

Abstract:

Coupled Electromechanical Analysis, MEMS Modeling, MEMS, RF MEMS Switches, Defected Ground Structures, Reconfigurable Resonator

Relevance: 20.00%

Publisher:

Abstract:

Cross-Flow, Radial Jets Mixing, Temperature Homogenization, Optimization, Combustion Chamber, CFD

Relevance: 20.00%

Publisher:

Abstract:

Passive trip system, reactor trip, runaway reaction, batch reactor