922 results for multi-column process


Relevance: 30.00%

Abstract:

Due to the growing volume of data being processed and the increasing demand for high-performance computing, significant changes are taking place in computer architecture design. The field has been migrating from the sequential to the parallel paradigm, with hundreds or thousands of processing cores on a single chip. In this context, power management becomes increasingly important, especially in embedded systems, which are usually battery-powered. According to Moore's Law, processor performance doubles every 18 months, yet battery capacity doubles only every 10 years. This creates an enormous gap, which can be mitigated by using heterogeneous multi-core architectures. A fundamental challenge that remains open for these architectures is the integration of embedded code development, scheduling, and power-management hardware. The overall goal of this doctoral work is to investigate techniques for optimizing the performance/energy trade-off of single-ISA heterogeneous multi-core architectures implemented on FPGAs. To that end, we sought solutions that achieve the best possible performance at an optimal energy cost. This was done by combining data mining for the analysis of thread-based software with traditional power-management techniques, such as dynamic way-shutdown, and a new heterogeneity-aware scheduling policy. The main contributions are the combination of power-management techniques at several levels (hardware, scheduling, and compilation), and a scheduling policy integrated with a multi-core architecture that is heterogeneous with respect to L1 cache size.
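As a purely illustrative sketch (not the thesis's actual policy), the fragment below shows one way a heterogeneity-aware scheduler might map the most cache-sensitive threads onto the cores with the largest L1 caches; all names, metrics, and core parameters are hypothetical:

```python
# Hypothetical sketch of a heterogeneity-aware scheduling policy for a
# single-ISA multi-core whose cores differ in L1 cache size. Thread
# profiles (miss rates) and core parameters are invented for illustration.

from dataclasses import dataclass

@dataclass
class Core:
    core_id: int
    l1_kb: int           # L1 data cache size in KB
    busy: bool = False

@dataclass
class Thread:
    name: str
    l1_miss_rate: float  # profiled misses per kilo-instruction

def schedule(threads: list[Thread], cores: list[Core]) -> dict[str, int]:
    """Greedily map cache-hungry threads onto large-L1 cores."""
    mapping = {}
    # Most cache-sensitive threads first; largest free cache first.
    for thread in sorted(threads, key=lambda t: -t.l1_miss_rate):
        free = [c for c in cores if not c.busy]
        if not free:
            break  # remaining threads wait for a free core
        best = max(free, key=lambda c: c.l1_kb)
        best.busy = True
        mapping[thread.name] = best.core_id
    return mapping

cores = [Core(0, 32), Core(1, 32), Core(2, 8), Core(3, 8)]
threads = [Thread("fft", 42.0), Thread("crc", 3.5), Thread("sort", 18.0)]
print(schedule(threads, cores))  # {'fft': 0, 'sort': 1, 'crc': 2}
```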

Relevance: 30.00%

Abstract:

In this work, we present a multi-camera surveillance system based on the use of self-organizing neural networks to represent events in video. The system processes several tasks in parallel using GPUs (graphics processing units). It addresses multiple vision tasks at various levels, such as segmentation, representation or characterization, and analysis and monitoring of movement. These features allow the construction of a robust representation of the environment and the interpretation of the behavior of mobile agents in the scene. It is also necessary to integrate the vision module into a global system that operates in a complex environment, receiving images from multiple acquisition devices at video frequency. To offer relevant information to higher-level systems and to monitor and make decisions in real time, it must meet a set of requirements: time constraints, high availability, robustness, high processing speed, and re-configurability. We have built a system able to represent and analyze the motion in video acquired by a multi-camera network and to process multi-source data in parallel on a multi-GPU architecture.
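Purely as an illustration of the parallel multi-source organization (not the paper's GPU implementation), the sketch below assigns one worker process per camera stream, with placeholder work standing in for segmentation and self-organizing-map updates:

```python
# Illustrative only: one worker per camera stream, mimicking parallel
# multi-source processing. Real GPU kernels are replaced by a placeholder
# loop, and the frame source is faked.

from concurrent.futures import ProcessPoolExecutor

def process_stream(camera_id: int, n_frames: int = 100) -> dict:
    """Process frames from one camera; the real work would run on a GPU."""
    events = 0
    for frame in range(n_frames):
        # ... segmentation + tracking + SOM update would go here ...
        if frame % 7 == 0:  # stand-in for a detected motion event
            events += 1
    return {"camera": camera_id, "events": events}

if __name__ == "__main__":
    with ProcessPoolExecutor(max_workers=4) as pool:
        for result in pool.map(process_stream, range(4)):
            print(result)
```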

Relevance: 30.00%

Abstract:

Compilation tuning is the process of adjusting the values of compiler options to improve some features of the final application. In this paper, a strategy based on the use of a genetic algorithm and a multi-objective scheme is proposed to deal with this task. Unlike previous works, we try to take advantage of domain knowledge to provide a problem-specific genetic operator that improves both the speed of convergence and the quality of the results. The strategy is evaluated by means of a case study aimed at improving the performance of the well-known web server Apache. Experimental results show that an overall improvement of 7.5% can be achieved. Furthermore, the adaptive approach markedly speeds up the convergence of the original strategy.
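A minimal, generic sketch of GA-based flag tuning follows, under simplifying assumptions (boolean flags, a single fake fitness score standing in for compiling and benchmarking Apache); the paper's problem-specific operator and multi-objective scheme are not reproduced:

```python
# Generic GA over boolean compiler flags; flag names and the fitness
# function are hypothetical stand-ins.

import random

FLAGS = ["-funroll-loops", "-ftree-vectorize", "-finline-functions",
         "-fomit-frame-pointer"]

def fitness(genome: list[bool]) -> float:
    """Stand-in for compiling with the chosen flags and benchmarking
    (e.g. measuring Apache requests per second)."""
    return sum(genome) + random.random()  # fake score for illustration

def evolve(pop_size=8, generations=20, p_mut=0.1):
    pop = [[random.random() < 0.5 for _ in FLAGS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]                  # truncation selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(FLAGS))       # one-point crossover
            child = a[:cut] + b[cut:]
            child = [g ^ (random.random() < p_mut) for g in child]  # mutation
            children.append(child)
        pop = parents + children
    best = max(pop, key=fitness)
    return [flag for flag, on in zip(FLAGS, best) if on]

print(evolve())
```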

Relevance: 30.00%

Abstract:

We present a derivative-free optimization algorithm coupled with a chemical process simulator for the optimal design of individual and complex distillation processes using a rigorous tray-by-tray model. The proposed approach serves as an alternative tool to the various models based on nonlinear programming (NLP) or mixed-integer nonlinear programming (MINLP). This is accomplished by combining the advantages of a commercial process simulator (Aspen Hysys), including numerical methods especially suited to the convergence of distillation columns, with the benefits of the particle swarm optimization (PSO) metaheuristic, which does not require gradient information and is able to escape from local optima. Our method inherits the superstructure developed in Yeomans, H.; Grossmann, I. E. Optimal design of complex distillation columns using rigorous tray-by-tray disjunctive programming models. Ind. Eng. Chem. Res. 2000, 39 (11), 4326–4335, in which nonexisting trays are treated as simple bypasses of the liquid and vapor flows. The implemented tool provides the optimal configuration of distillation column systems, involving both continuous and discrete variables, through the minimization of the total annual cost (TAC). The robustness and flexibility of the method are demonstrated through the successful design and synthesis of three distillation systems of increasing complexity.
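A hedged sketch of the PSO loop around a black-box evaluator is given below: `simulate_tac` stands in for a rigorous Aspen Hysys column simulation, the decision variables (tray count, reflux ratio) are assumed for illustration, and the tray count is relaxed to a continuous variable rather than handled through the bypass superstructure:

```python
# Standard PSO minimizing a toy surrogate of the total annual cost (TAC);
# all bounds, coefficients, and variables are hypothetical.

import random

def simulate_tac(x):
    """Placeholder for a tray-by-tray simulation returning the TAC."""
    trays, reflux = x
    return (trays - 30) ** 2 + 50 * (reflux - 1.2) ** 2  # toy surrogate

def pso(n_particles=15, iters=50, w=0.7, c1=1.5, c2=1.5):
    lo, hi = [10.0, 0.5], [60.0, 5.0]        # bounds on (trays, reflux)
    pos = [[random.uniform(l, h) for l, h in zip(lo, hi)]
           for _ in range(n_particles)]
    vel = [[0.0, 0.0] for _ in range(n_particles)]
    pbest = [p[:] for p in pos]              # personal bests
    gbest = min(pbest, key=simulate_tac)     # global best
    for _ in range(iters):
        for i, p in enumerate(pos):
            for d in range(2):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - p[d])
                             + c2 * random.random() * (gbest[d] - p[d]))
                p[d] = min(max(p[d] + vel[i][d], lo[d]), hi[d])
            if simulate_tac(p) < simulate_tac(pbest[i]):
                pbest[i] = p[:]
        gbest = min(pbest, key=simulate_tac)
    return gbest

print(pso())  # approaches trays ~ 30, reflux ~ 1.2 on the toy surrogate
```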

Relevance: 30.00%

Abstract:

The design of fault-tolerant systems is gaining importance in large domains of embedded applications where design constraints are as important as reliability. New software techniques, based on the selective application of redundancy, have shown remarkable fault coverage with reduced costs and overheads. However, the large number of different solutions provided by these techniques, and the costly process of assessing their reliability, make design space exploration a very difficult and time-consuming task. This paper proposes the integration of a multi-objective optimization tool with a software hardening environment to perform automatic design space exploration in search of the best trade-offs between reliability, cost, and performance. The first tool is driven by the NSGA-II multi-objective genetic algorithm, which can pursue many design goals simultaneously. The second is a compiler-based infrastructure that automatically produces selectively protected (hardened) versions of the software and generates accurate overhead reports and fault coverage estimations. The advantages of our proposal are illustrated by means of a complex and detailed case study involving a typical embedded application, the AES (Advanced Encryption Standard).
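As a small illustration of the machinery underlying such an exploration, the sketch below implements the Pareto-dominance test at the heart of NSGA-II on invented (code-size overhead, runtime overhead, 1 - fault coverage) triples; crowding distance, selection, and the hardening toolchain itself are omitted:

```python
# Pareto-dominance filtering over hypothetical hardened-version metrics;
# all numbers are invented for illustration.

def dominates(a, b):
    """True if a is no worse than b in every objective and strictly
    better in at least one (all objectives minimized)."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(solutions):
    """Return the non-dominated subset of a list of objective vectors."""
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o is not s)]

# (code-size overhead, runtime overhead, 1 - fault coverage)
candidates = [(1.8, 1.5, 0.02), (1.2, 1.1, 0.20),
              (2.5, 2.3, 0.01), (1.8, 1.6, 0.02)]
print(pareto_front(candidates))  # the last candidate is dominated
```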

Relevance: 30.00%

Abstract:

This paper introduces a new mathematical model for the simultaneous synthesis of heat exchanger networks (HENs), wherein the pressure manipulation of process streams is used to enhance heat integration. The proposed approach combines generalized disjunctive programming (GDP) and a mixed-integer nonlinear programming (MINLP) formulation in order to minimize the total annualized cost, composed of operating and capital expenses. A multi-stage superstructure is developed for the HEN synthesis, assuming constant heat capacity flow rates and isothermal mixing, and allowing for stream splits. In this model, the pressure and temperature of the streams must be treated as optimization variables, which further increases the complexity and difficulty of the problem. In addition, the model allows for the coupling of compressors and turbines to save energy. A case study is performed to verify the accuracy of the proposed model. In this example, the optimal integration of heat and work decreases the need for thermal utilities in the HEN design. As a result, the total annualized cost is also reduced, owing to the decrease in the operating expenses related to the heating and cooling of the streams.
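For orientation only, a generic form of the total annualized cost minimized in simultaneous HEN synthesis models of this kind is sketched below (a hedged reconstruction; the paper's exact formulation, disjunctions, and pressure-dependent terms are not reproduced):

```latex
\min \; \mathrm{TAC} \;=\;
\underbrace{\sum_{u \in \mathrm{UT}} c_u\, Q_u
  \;+\; \sum_{m} c_{\mathrm{el}}\, W_m}_{\text{operating expenses}}
\;+\;
\underbrace{f_{\mathrm{an}} \sum_{(i,j,k)} \bigl(\alpha\, y_{ijk}
  + \beta\, A_{ijk}^{\gamma}\bigr)}_{\text{annualized capital expenses}}
```

Here $Q_u$ are utility duties, $W_m$ compressor/turbine work terms, $y_{ijk}$ binary existence variables for matches, $A_{ijk}$ exchanger areas, and $f_{\mathrm{an}}$ an annualization factor; all symbols are assumptions for illustration.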

Relevance: 30.00%

Abstract:

This paper presents a new mathematical programming model for the retrofit of heat exchanger networks (HENs), wherein the pressure recovery of process streams is used to enhance heat integration. Particularly applicable to cryogenic processes, HEN retrofit with combined heat and work integration is mainly aimed at reducing the use of expensive cold utilities. The proposed multi-stage superstructure allows for increasing the existing heat transfer area, as well as for adding new equipment for both heat exchange and pressure manipulation. The pressure recovery of the streams is carried out simultaneously with the HEN design, such that the process conditions (stream pressures and temperatures) are optimization variables. The mathematical model is formulated using generalized disjunctive programming (GDP) and is optimized via mixed-integer nonlinear programming (MINLP) through the minimization of the retrofit total annualized cost, considering the coupling of turbines and compressors with a helper motor. Three case studies are performed to assess the accuracy of the developed approach, including a real industrial example related to liquefied natural gas (LNG) production. The results show that the pressure recovery of streams is effective for energy savings and, consequently, for decreasing the total HEN retrofit cost, especially in sub-ambient processes.
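The turbine-compressor coupling with a helper motor mentioned above implies, in hedged notation (symbols assumed for illustration), a shaft power balance in which the motor supplies whatever work the turbine cannot:

```latex
W_{\mathrm{comp}} \;=\; W_{\mathrm{turb}} + W_{\mathrm{helper}},
\qquad W_{\mathrm{helper}} \geq 0
```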

Relevance: 30.00%

Abstract:

Allegory is not obsolete, as Samuel Coleridge and Johann Wolfgang von Goethe claimed. It is alive and well, and has transformed from a restrictive concept into a flexible one that can be shaped to meet the needs of the author or reader. The most efficient way to show this is through a case study of a suitable work that allows us to perceive this plasticity. This essay uses J.R.R. Tolkien’s The Lord of the Rings as a multi-perspective case study of the concept of allegory; the size and complexity of the narrative make it a suitable choice. My aim is to illustrate the plasticity of allegory as a concept and to illuminate some of the possibilities and pitfalls of allegory and allegoresis. As to whether The Lord of the Rings can be treated as an allegory, it is examined from three different perspectives: as a purely writerly process, as a middle ground of writer and reader, and as a purely readerly process. The Lord of the Rings is then compared against a series of concepts from allegorical theory, such as Plato’s classical “The Ring of Gyges”, William Langland’s classic The Vision of William Concerning Piers the Plowman, and contemporary allegories of racism and homoeroticism, to demonstrate just how adaptable this concept is. The position of this essay is that the concept of allegory has changed since its conception and become more malleable. This poses certain dangers: allegory has become an all-purpose tool with few limitations, having lost its early rigid form in favour of an almost anything-goes approach.

Relevance: 30.00%

Abstract:

The study aimed to examine the factors influencing referral to rehabilitation following traumatic brain injury (TBI) by using social problems theory as a conceptual model to focus on practitioners and the process of decision-making in two Australian hospitals. The research design involved semi-structured interviews with 18 practitioners and observations of 10 team meetings, and was part of a larger study on factors influencing referral to rehabilitation in the same settings. Analysis revealed that referral decisions were influenced primarily by practitioners' selection and interpretation of clinical and non-clinical patient factors. Further, practitioners generally considered patient factors concurrently during an ongoing process of decision-making, with the combinations and interactions of these factors forming the basis for interpretations of problems and referral justifications. Key patient factors considered in referral decisions included functional and tracheostomy status, time since injury, age, family, place of residence and Indigenous status. However, rate and extent of progress, recovery potential, safety and burden of care, potential for independence, and capacity to cope were the five interpretative themes that emerged as the justifications for referral decisions. The subsequent negotiation of referral based on patient factors was in turn shaped by the involvement of practitioners. While multi-disciplinary processes of decision-making were the norm, allied health professionals occupied a central role in referral to rehabilitation, and the involvement of medical, nursing and allied health practitioners varied. Finally, organizational pressures and resource constraints, combined with practitioners' assimilation of the broader efficiency agenda, were central factors shaping referral. (C) 2004 Elsevier Ltd. All rights reserved.

Relevance: 30.00%

Abstract:

Column-based refolding of complex and highly disulfide-bonded proteins simplifies protein renaturation at both preparative and process scale by integrating and automating a number of operations commonly used in dilution refolding. Bovine serum albumin (BSA) was used as a model protein for refolding and oxido-shuffling on an ion-exchange column to give a refolding yield of 55% after 40 h of incubation. Successful on-column refolding was conducted at protein concentrations of up to 10 mg/ml and refolded protein, purified from misfolded forms, was eluted directly from the column at a concentration of 3 mg/ml. This technique integrates the dithiothreitol removal, refolding, concentration and purification steps, achieving a high level of process simplification and automation, and a significant saving in reagent costs when scaled. Importantly, the current result suggests that it is possible to controllably refold disulfide-bonded proteins using common and inexpensive matrices, and that it is not always necessary to control protein-surface interactions using affinity tags and expensive chromatographic matrices. Moreover, it is possible to strictly control the oxidative refolding environment once denatured protein is bound to the ion-exchange column, thus allowing precisely controlled oxido-shuffling. (c) 2005 Elsevier B.V. All rights reserved.

Relevance: 30.00%

Abstract:

Oil shale processing produces an aqueous wastewater stream known as retort water. The fate of the organic content of retort water from the Stuart oil shale project (Gladstone, Queensland) is examined in a proposed packed-bed treatment system consisting of a 1:1 mixture of residual shale from the retorting process and mining overburden. The retort water had a neutral pH and an average unfiltered TOC of 2,900 mg l⁻¹. The inorganic composition of the retort water was dominated by NH4+. Only 40% of the total organic carbon (TOC) in the retort water was identifiable, and this was dominated by carboxylic acids. In addition to monitoring influent and effluent TOC concentrations, CO2 evolution was monitored on-line by continuous measurement of headspace concentrations and air flow rates. The column was run for 64 days before it became blocked and was dismantled for analysis. Over 98% of the TOC was removed from the retort water. Respirometry measurements were confounded by CO2 production from inorganic sources. Based on predictions with the chemical equilibrium package PHREEQE, approximately 15% of the total CO2 production arose from the reaction of NH4+ with carbonates. The balance of the CO2 production accounted for at least 80% of the carbon removed from the retort water. Direct measurements of solid organic carbon showed that approximately 20% of the influent carbon was held up in the top 20 cm of the column. Less than 20% of this held-up carbon was present as either biomass or adsorbed species. Therefore, the column was ultimately blocked by either extracellular polymeric substances or a sludge that had precipitated out of the retort water.
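The carbon accounting above nearly closes; a back-of-the-envelope check using only the quoted percentages (with an arbitrary 100-unit influent load) is sketched below:

```python
# Back-of-the-envelope balance from the percentages quoted above; the
# absolute TOC load (100 units) is an arbitrary normalization.

influent_c = 100.0                  # influent organic carbon (arbitrary units)
removed_c = 0.98 * influent_c       # >98% of TOC removed from the retort water
mineralized_c = 0.80 * removed_c    # organically derived CO2: >=80% of removed C
held_up_c = 0.20 * influent_c       # ~20% of influent C retained in top 20 cm

# ~15% of the *total* CO2 came from NH4+ reacting with carbonates,
# so total CO2-carbon = organic CO2-carbon / (1 - 0.15).
total_co2_c = mineralized_c / (1 - 0.15)
inorganic_co2_c = total_co2_c - mineralized_c

gap = removed_c - mineralized_c - held_up_c
print(f"removed {removed_c:.1f}, mineralized {mineralized_c:.1f}, "
      f"held up {held_up_c:.1f}, residual gap {gap:.1f}")
# The lower bounds nearly close the balance (gap ~ -0.4 units),
# consistent with the abstract's carbon accounting.
```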

Relevance: 30.00%

Abstract:

We present a method for determining the globally optimal on-line learning rule for a soft committee machine under a statistical mechanics framework. This rule maximizes the total reduction in generalization error over the whole learning process. A simple example demonstrates that the locally optimal rule, which maximizes the rate of decrease in generalization error, may perform poorly in comparison.
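In hedged notation (symbols assumed; not the paper's own), with $\epsilon_g(t)$ the generalization error and $\eta(\cdot)$ the learning-rate/rule parameters, the contrast between the two criteria can be written as:

```latex
\eta_{\mathrm{local}}(t) \;=\; \arg\max_{\eta}\left(-\frac{d\epsilon_g}{dt}\right)
\qquad\text{vs.}\qquad
\eta_{\mathrm{global}}(\cdot) \;=\; \arg\max_{\eta(\cdot)}
\bigl[\epsilon_g(0) - \epsilon_g(T)\bigr]
```

That is, the greedy rule optimizes the instantaneous rate of decrease, while the globally optimal rule solves a variational problem over the whole trajectory; this is the distinction the abstract's example illustrates.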