953 results for Reverse engineering processes
Abstract:
The economic design of a distillation column or distillation sequence is a challenging problem that has been addressed by superstructure approaches. However, these methods have not been widely used because they lead to mixed-integer nonlinear programs that are hard to solve and require complex initialization procedures. In this article, we propose to address this challenging problem by replacing the distillation columns with Kriging-based surrogate models generated from state-of-the-art distillation models. We study different columns of increasing difficulty and show that it is possible to obtain accurate Kriging-based surrogate models. The optimization strategy guarantees convergence to a local optimum for numerically noise-free models. For distillation columns (slightly noisy systems), Karush–Kuhn–Tucker optimality conditions cannot be tested directly on the actual model, but we can still guarantee a local minimum in a trust region of the surrogate model that contains the actual local minimum.
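The surrogate idea can be illustrated in a few lines of code. The sketch below is not the authors' implementation: it builds a simple Kriging-type interpolator (Gaussian correlation, fixed shape parameter, no trend or hyperparameter estimation) for a hypothetical one-dimensional response standing in for a rigorous column model:

```python
import numpy as np

THETA = 50.0  # hypothetical, fixed correlation parameter (normally estimated)

def fit_kriging(X, y):
    """Build a simple Kriging-type interpolator with a Gaussian correlation."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    R = np.exp(-THETA * d2) + 1e-10 * np.eye(len(X))  # small nugget for stability
    return X, np.linalg.solve(R, y)

def predict(model, Xnew):
    """Evaluate the surrogate at new points."""
    X, w = model
    d2 = ((Xnew[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-THETA * d2) @ w

# hypothetical 1-D response standing in for an expensive column simulation
X = np.linspace(0.0, 1.0, 8)[:, None]
y = np.sin(2 * np.pi * X[:, 0])
model = fit_kriging(X, y)
yhat = predict(model, X)   # the surrogate interpolates the sampled points
print(float(np.max(np.abs(yhat - y))))
```

Once fitted, the cheap surrogate replaces the rigorous model inside the optimization loop, which is what makes the superstructure problem tractable.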
Abstract:
One of the aspects rated lowest by students of the new bachelor's degree programs is the coordination among courses of the same year regarding the distribution of tests and other objective assessments throughout the semester, which affects the out-of-class workload at certain times. The syllabus of each course includes information on the assessments within an approximate week-by-week schedule, available before the course begins. However, that distribution may vary slightly once the course has started, for various reasons, and the information for all the courses of the semester is not available in a single document, which would make it easier to visualize. This work proposes the use of the Google Calendar tool in order to gain better control over this aspect and to detect and correct conflicts that may arise, applying it to the Grado en Ingeniería Química (Bachelor's Degree in Chemical Engineering).
Abstract:
The general purpose of the EQUIFASE Conference is to promote scientific and technological exchange between people from both academic and industrial environments in the field of phase equilibria and thermodynamic properties for the design of chemical processes. Topics: Measurement of Thermodynamic Properties. Phase Equilibria and Chemical Equilibria. Theory and Modelling. Alternative Solvents. Supercritical Fluids. Ionic Liquids. Energy. Gas and Oil. Petrochemicals. Environment and Sustainability. Biomolecules and Biotechnology. Product and Process Design. Databases and Software. Education.
Abstract:
A single, very easy to use graphical user interface (GUI) in MATLAB, based on the topological information contained in the Gibbs energy of mixing function, has been developed as a friendly tool to check the coherence of NRTL parameters obtained in a data correlation procedure. Thus, analysis of the GM/RT surface, the GM/RT curves for the binaries, and the GM/RT in planes containing the tie lines is necessary to validate the parameters obtained for the different models used to correlate phase equilibrium data.
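The kind of topological check the GUI automates can be sketched for a binary mixture. The snippet below is an illustration, not the MATLAB tool itself: it evaluates GM/RT from the standard binary NRTL equations with hypothetical parameters and looks for the change of curvature that signals a miscibility gap:

```python
import numpy as np

def gm_rt_binary(x1, tau12, tau21, alpha=0.3):
    """Gibbs energy of mixing over RT for a binary mixture (NRTL model)."""
    x2 = 1.0 - x1
    G12, G21 = np.exp(-alpha * tau12), np.exp(-alpha * tau21)
    ge = x1 * x2 * (tau21 * G21 / (x1 + x2 * G21)
                    + tau12 * G12 / (x2 + x1 * G12))
    return x1 * np.log(x1) + x2 * np.log(x2) + ge

x = np.linspace(1e-4, 1.0 - 1e-4, 501)
g = gm_rt_binary(x, tau12=2.5, tau21=2.5)   # hypothetical parameter set
curv = np.diff(g, 2)                        # discrete second derivative
# a convexity change in GM/RT means the parameters imply a miscibility gap
print("non-convex (implies LLE):", bool(np.any(curv < 0)))
```

A coherent parameter set must reproduce the expected topology: convex GM/RT for a fully miscible binary, a concave region between two inflection points when liquid-liquid splitting is correlated.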
Abstract:
This paper presents a new mathematical programming model for the retrofit of heat exchanger networks (HENs), wherein the pressure recovery of process streams is used to enhance heat integration. Particularly applicable to cryogenic processes, HEN retrofit with combined heat and work integration is mainly aimed at reducing the use of expensive cold utilities. The proposed multi-stage superstructure allows the expansion of the existing heat transfer area, as well as the use of new equipment for both heat exchange and pressure manipulation. The pressure recovery of streams is carried out simultaneously with the HEN design, such that the process conditions (stream pressures and temperatures) are optimization variables. The mathematical model is formulated using generalized disjunctive programming (GDP) and is optimized via mixed-integer nonlinear programming (MINLP) through the minimization of the retrofit total annualized cost, considering the coupling of the turbine and compressor with a helper motor. Three case studies are performed to assess the accuracy of the developed approach, including a real industrial example related to liquefied natural gas (LNG) production. The results show that the pressure recovery of streams is effective for energy savings and, consequently, for decreasing the total HEN retrofit cost, especially in sub-ambient processes.
Abstract:
A method for quantitative mineralogical analysis by ATR-FTIR has been developed. The method relies on the use of the main band of calcite as a reference for the normalization of the IR spectrum of a mineral sample. In this way, the molar absorptivity coefficient in the Lambert–Beer law and the components of a mixture in mole percentage can be calculated. The GAMS equation modeling environment and the NLP solver CONOPT (©ARKI Consulting and Development) were used to correlate the experimental data in the samples considered. Mixtures of different minerals and gypsum were used in order to measure the minimum band intensity that must be considered for calculations and the detection limit. Accordingly, bands of intensity lower than 0.01 were discarded. The detection limit for gypsum was about 7% (mol/total mole). Good agreement was obtained when this FTIR method was applied to ceramic tiles previously analyzed by X-ray diffraction (XRD) or mineral mixtures prepared in the lab.
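The quantification step rests on the Lambert–Beer law being linear in composition. The following is a minimal sketch, not the paper's GAMS/CONOPT formulation, with made-up Gaussian bands standing in for real ATR-FTIR spectra: it recovers mixture fractions from a measured spectrum by least squares.

```python
import numpy as np

wavenumbers = np.linspace(400.0, 4000.0, 200)

def band(center, width):
    """Toy Gaussian IR band (stand-in for a real component spectrum)."""
    return np.exp(-((wavenumbers - center) / width) ** 2)

# hypothetical normalized component spectra (columns of the absorptivity matrix)
E = np.column_stack([band(1430.0, 60.0),    # "calcite-like"
                     band(1140.0, 50.0),    # "gypsum-like"
                     band(1000.0, 80.0)])   # "quartz-like"

x_true = np.array([0.6, 0.3, 0.1])          # mole fractions of the mixture
A = E @ x_true                              # Lambert–Beer: additive absorbances

x_fit = np.linalg.lstsq(E, A, rcond=None)[0]
x_fit = np.clip(x_fit, 0.0, None)           # fractions cannot be negative
x_fit /= x_fit.sum()                        # renormalize to mole fractions
print(np.round(x_fit, 3))                   # recovers the true composition
```

With noise-free synthetic data the fit is exact; on real spectra the paper's approach of discarding bands below 0.01 intensity and solving the correlation with an NLP solver addresses the noise this sketch ignores.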
Abstract:
In this work, we propose the use of the neural gas (NG), a neural network that uses an unsupervised Competitive Hebbian Learning (CHL) rule, to develop a reverse engineering process. This is a simple and accurate method to reconstruct objects from point clouds obtained from multiple overlapping views using low-cost sensors. In contrast to other methods that may need several stages, including downsampling, noise filtering, and many other tasks, the NG automatically obtains the 3D model of the scanned objects. To demonstrate the validity of our proposal, we tested our method with several models and performed a study of the neural network parameterization, computing the quality of representation and comparing the results with other neural methods such as growing neural gas and Kohonen maps, as well as classical methods such as Voxel Grid. We also reconstructed models acquired with low-cost sensors that can be used in virtual and augmented reality environments for redesign or manipulation purposes. Since the NG algorithm has a high computational cost, we propose its acceleration: we have redesigned and implemented the NG learning algorithm to run on Graphics Processing Units using CUDA, obtaining a speed-up of 180× over the sequential CPU version.
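As an illustration of the underlying algorithm (not the authors' CUDA implementation), a minimal neural gas adaptation loop fits in a few lines; the schedules and parameter values below are arbitrary choices for the sketch:

```python
import numpy as np

def neural_gas(points, n_nodes=20, epochs=10,
               eps=(0.5, 0.01), lam=(10.0, 0.5), seed=0):
    """Minimal neural gas: nodes adapt toward samples with rank-based steps."""
    rng = np.random.default_rng(seed)
    W = points[rng.choice(len(points), n_nodes, replace=False)].astype(float)
    T, t = epochs * len(points), 0
    for _ in range(epochs):
        for x in points[rng.permutation(len(points))]:
            frac = t / T
            e = eps[0] * (eps[1] / eps[0]) ** frac   # decaying step size
            l = lam[0] * (lam[1] / lam[0]) ** frac   # decaying neighborhood range
            ranks = np.argsort(np.argsort(((W - x) ** 2).sum(1)))
            W += (e * np.exp(-ranks / l))[:, None] * (x - W)
            t += 1
    return W

# a noisy circle as a stand-in for a scanned point cloud
rng = np.random.default_rng(1)
ang = rng.uniform(0.0, 2.0 * np.pi, 500)
cloud = np.c_[np.cos(ang), np.sin(ang)] + rng.normal(0.0, 0.02, (500, 2))
nodes = neural_gas(cloud)
radii = np.linalg.norm(nodes, axis=1)
print(round(float(radii.min()), 2), round(float(radii.max()), 2))
```

Every node is updated on every sample (with a rank-weighted step), which is what makes the sequential algorithm costly and also what makes it map well onto a GPU.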
Abstract:
A sequential design method is presented for the design of thermally coupled distillation sequences. The algorithm starts by selecting a set of sequences in the space of basic configurations, in which the internal structure of condensers and reboilers is explicitly taken into account, extended with the possibility of including dividing wall columns (DWCs). This first stage is based on separation tasks (except for the DWCs) and therefore does not provide an actual sequence of columns. In the second stage, the best arrangement into N-1 actual columns is determined, taking into account operability and mechanical constraints. Finally, for a set of candidate sequences, the algorithm tries to reduce the total number of columns by considering Kaibel columns, the elimination of transfer blocks, or columns with vertical partitions. An example illustrates the different steps of the sequential algorithm.
Abstract:
In this work, we propose a new methodology for the large-scale optimization and process integration of complex chemical processes that have been simulated using modular chemical process simulators. Units with significant numerical noise or large CPU times are substituted by surrogate models based on Kriging interpolation. Using a degree of freedom analysis, some of those units can be aggregated into a single unit to reduce the complexity of the resulting model. As a result, we solve a hybrid simulation-optimization model formed by units in the original flowsheet, Kriging models, and explicit equations. We present a case study of the optimization of a sour water stripping plant in which we simultaneously consider economics, heat integration and environmental impact using the ReCiPe indicator, which incorporates the recent advances made in Life Cycle Assessment (LCA). The optimization strategy guarantees the convergence to a local optimum inside the tolerance of the numerical noise.
Abstract:
The Grado en Ingeniería Química (Bachelor's Degree in Chemical Engineering) at the Universidad de Alicante, like many other degree programs, is undergoing renewal of its national accreditation. Preparing for this has required great effort and coordination among the program's lecturers, and the present work was developed specifically to tackle the final stretch before submission of the documentation. To this end, the strengths of the program and the points with room for improvement have been identified according to ANECA's criteria, and improvement proposals for the latter have been made through coordination between lecturers and students. Among these proposals is the use of the Google Calendar tool, with the aim of gaining better control of the students' out-of-class workload, since this aspect is one of those rated lowest in the surveys conducted by the Technical Quality Unit.
Abstract:
Dynamic binary translation is the process of translating, modifying and rewriting executable (binary) code from one machine to another at run-time. This process of low-level re-engineering consists of a reverse engineering phase followed by a forward engineering phase. UQDBT, the University of Queensland Dynamic Binary Translator, is a machine-adaptable translator. Adaptability is provided through the specification of properties of machines and their instruction sets, allowing the support of different pairs of source and target machines. Most binary translators are closely bound to a pair of machines, making analyses and code hard to reuse. Like most virtual machines, UQDBT performs generic optimizations that apply to a variety of machines. Frequently executed code is translated to native code by the use of edge weight instrumentation, which makes UQDBT converge more quickly than systems based on instruction speculation. In this paper, we describe the architecture and run-time feedback optimizations performed by the UQDBT system, and provide results obtained on the x86 and SPARC® platforms.
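The edge-weight instrumentation idea can be illustrated with a toy example. The snippet below is not UQDBT code: it simulates counting transitions between basic blocks of a made-up control-flow graph and marks a block as "hot" (a candidate for native translation) once an incoming edge crosses a hypothetical threshold:

```python
from collections import Counter

THRESHOLD = 10   # hypothetical hotness threshold for triggering translation

def run(blocks, entry, max_steps=10_000):
    """Interpret a toy CFG, counting edge executions (edge-weight profiling)."""
    edges, translated, cur = Counter(), set(), entry
    for _ in range(max_steps):
        nxt = blocks[cur]()
        if nxt is None:
            break
        edges[(cur, nxt)] += 1
        if edges[(cur, nxt)] >= THRESHOLD:
            translated.add(nxt)   # a real DBT would emit native code here
        cur = nxt
    return edges, translated

state = {"i": 0}

def loop_head():
    return "loop_body" if state["i"] < 50 else "exit"

def loop_body():
    state["i"] += 1
    return "loop_head"

blocks = {"loop_head": loop_head, "loop_body": loop_body, "exit": lambda: None}
edges, hot = run(blocks, "loop_head")
print(sorted(hot))   # the loop blocks become hot; "exit" never does
```

Counting edges rather than individual instructions lets the translator identify hot paths after only a few loop iterations, which is the source of the fast convergence the paper claims.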
Abstract:
Effective comprehension of complex software systems requires understanding of both the individual documents that represent software and the complex relationships that exist within and between documents. Relationships of all kinds play a vital role in a software engineer's comprehension of, and navigation within and between, software documents. User-determined relationships have the additional role of enabling the engineer to create and maintain relational documentation that cannot be generated by tools or derived from other relationships. We argue that for a software development environment to effectively support the understanding of complex software systems, relational navigation must be supported at both the document-focused (intra-document) and relation-focused (inter-document) levels. The need for a relation-focused approach is highlighted by an evaluation of an existing document-focused relational interface. We conclude by defining the requirements for a relation-focused approach to relational navigation, focusing on the user's perspective when interacting with a collection of related documents, so that a software development environment can effectively support the understanding of the software documents and relationships that define a complex software system.
Abstract:
The general objective of this research is to survey, analyze, quantify, and classify by competence level the professionals recruited by Petrobras in the period following the discovery of the Brazilian pre-salt layer. The research is justified by the forecast growth in national oil and natural gas production estimated for the coming years, which could cause a mismatch between the supply of and demand for the labor needed for its development. The methodological approach was exploratory, descriptive, and documentary research, through longitudinal qualitative and quantitative analysis. As a result, the research revealed that Petrobras does not recruit professionals for managerial positions. The results also showed that 56.8% of the openings were intended for professionals with secondary-level education and that 76.4% of the openings were related to the manufacturing process, evidencing that Petrobras uses the hiring of secondary-level professionals with technical training as its entry point. When classifying and qualifying the recruitment openings, the research identified five groups of professionals distributed across three career tracks and four salary levels which, when categorized by competence level, accounted for 69% of all openings. The two most significant groups are related to the industrial operations career track, where the highest level (O6) and the lowest level (O1) accounted for 22% and 21%, respectively, of the total openings in the period. The third group in importance concerns the engineering, processes, and projects career track, where professionals categorized at the middle level (E3) on a scale of two to five accounted for 13% of the total openings.
The fourth and fifth groups are related to the business management career track, categorized at competence levels three (G3) and four (G4) on a scale of one to five, accounting for 7% and 6% of the total openings.
Abstract:
Красимир Манев, Нели Манева, Хараламби Хараламбиев - The business rules (BR) approach was introduced at the end of the last century to ease the specification of enterprise software and to let it better meet the needs of the corresponding business. Today most of the goals of the approach have been achieved. However, research and practical efforts toward a "formal basis for reverse extraction of BR from existing systems" continue. This article presents an approach for extracting BR from program code, based on static code analysis methods. Some advantages and disadvantages of such an approach are noted.
Abstract:
The etiology of central nervous system tumors (CNSTs) is mainly unknown. Aside from extremely rare genetic conditions, such as neurofibromatosis and tuberous sclerosis, the only unequivocally identified risk factor is exposure to ionizing radiation, and this explains only a very small fraction of cases. Using meta-analysis, gene networking and bioinformatics methods, this dissertation explored the hypothesis that environmental exposures produce genetic and epigenetic alterations that may be involved in the etiology of CNSTs. A meta-analysis of epidemiological studies of pesticides and pediatric brain tumors revealed a significantly increased risk of brain tumors among children whose mothers had farm-related exposures during pregnancy. A dose response was recognized when this risk estimate was compared to those for risk of brain tumors from maternal exposure to non-agricultural pesticides during pregnancy, and risk of brain tumors among children exposed to agricultural activities. Through meta-analysis of several microarray studies which compared normal tissue to astrocytomas, we were able to identify a list of 554 genes which were differentially expressed in the majority of astrocytomas. Many of these genes have in fact been implicated in development of astrocytoma, including EGFR, HIF-1α, c-Myc, WNT5A, and IDH3A. Reverse engineering of these 554 genes using Bayesian network analysis produced a gene network for each grade of astrocytoma (Grade I-IV), and 'key genes' within each grade were identified. Genes found to be most influential to development of the highest grade of astrocytoma, Glioblastoma multiforme (GBM), were: COL4A1, EGFR, BTF3, MPP2, RAB31, CDK4, CD99, ANXA2, TOP2A, and SERBP1. Lastly, bioinformatics analysis of environmental databases and curated published results on GBM identified numerous potential pathways and gene-environment interactions that may play key roles in astrocytoma development.
Findings from this research have strong potential to advance our understanding of the etiology and susceptibility to CNSTs. Validation of our ‘key genes’ and pathways could potentially lead to useful tools for early detection and novel therapeutic options for these tumors.