7 results for multi-column process

at Universidad de Alicante


Relevance: 30.00%

Abstract:

In this work, we present a multi-camera surveillance system based on the use of self-organizing neural networks to represent events in video. The system processes several tasks in parallel on GPUs (graphics processing units). It addresses multiple vision tasks at various levels, such as segmentation, representation or characterization, and analysis and monitoring of movement. These features allow the system to build a robust representation of the environment and to interpret the behavior of mobile agents in the scene. The vision module must also be integrated into a global system that operates in a complex environment, receiving images from multiple acquisition devices at video frame rate. To provide relevant information to higher-level systems and to support monitoring and decision making in real time, it must satisfy a set of requirements: time constraints, high availability, robustness, high processing speed, and re-configurability. We have built a system able to represent and analyze the motion in video acquired by a multi-camera network and to process multi-source data in parallel on a multi-GPU architecture.
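
As an illustration of the representation idea (not the authors' implementation; the node count, learning rate, and the random detections below are assumptions), a minimal self-organizing-map update that adapts a small set of nodes to object positions detected in video frames could look as follows:

# Minimal sketch: a 1-D self-organizing map adapting its nodes to 2-D object
# positions observed in video frames (illustrative only, not the paper's code).
import numpy as np

rng = np.random.default_rng(0)
nodes = rng.random((20, 2))            # 20 map nodes in normalized image coordinates

def som_step(nodes, x, lr=0.1, sigma=2.0):
    """Move the winning node (and its map neighbours) toward observation x."""
    winner = np.argmin(np.linalg.norm(nodes - x, axis=1))
    idx = np.arange(len(nodes))
    h = np.exp(-((idx - winner) ** 2) / (2 * sigma ** 2))   # neighbourhood kernel
    return nodes + lr * h[:, None] * (x - nodes)

# Feed detected positions (e.g. centroids of segmented moving objects).
for detection in rng.random((500, 2)):
    nodes = som_step(nodes, detection)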

Relevance: 30.00%

Abstract:

Compilation tuning is the process of adjusting the values of compiler options to improve certain characteristics of the final application. In this paper, a strategy based on a genetic algorithm and a multi-objective scheme is proposed for this task. Unlike previous works, we take advantage of domain knowledge to provide a problem-specific genetic operator that improves both the speed of convergence and the quality of the results. The strategy is evaluated by means of a case study aimed at improving the performance of the well-known Apache web server. Experimental results show that an overall improvement of 7.5% can be achieved. Furthermore, the adaptive approach markedly speeds up the convergence of the original strategy.
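
As a rough illustration of the kind of search involved (the flag list, population sizes, and the fitness stub below are assumptions, not the paper's setup), a minimal genetic algorithm over compiler option settings might look like this:

# Minimal sketch of a genetic algorithm over compiler option settings
# (flag names and the fitness stub are illustrative, not the paper's environment).
import random

FLAGS = ["-O2", "-O3", "-funroll-loops", "-fomit-frame-pointer", "-flto"]

def random_individual():
    return [random.random() < 0.5 for _ in FLAGS]        # enable/disable each flag

def fitness(ind):
    # Stand-in for "compile the application with these flags and benchmark it";
    # a real setup would measure throughput of the built server.
    return sum(ind)

def mutate(ind, rate=0.1):
    return [(not g) if random.random() < rate else g for g in ind]

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

population = [random_individual() for _ in range(20)]
for _ in range(50):                                       # generations
    population.sort(key=fitness, reverse=True)
    parents = population[:10]
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(10)]
    population = parents + children

best = max(population, key=fitness)
print([flag for flag, on in zip(FLAGS, best) if on])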

Relevance: 30.00%

Abstract:

We present a derivative-free optimization algorithm coupled with a chemical process simulator for the optimal design of individual and complex distillation processes using a rigorous tray-by-tray model. The proposed approach serves as an alternative tool to the various models based on nonlinear programming (NLP) or mixed-integer nonlinear programming (MINLP). This is accomplished by combining the advantages of using a commercial process simulator (Aspen Hysys), including numerical methods especially suited to the convergence of distillation columns, with the benefits of the particle swarm optimization (PSO) metaheuristic, which does not require gradient information and has the ability to escape from local optima. Our method inherits the superstructure developed in Yeomans, H.; Grossmann, I. E. Optimal design of complex distillation columns using rigorous tray-by-tray disjunctive programming models. Ind. Eng. Chem. Res. 2000, 39 (11), 4326–4335, in which the nonexisting trays are treated as simple bypasses of the liquid and vapor flows. The implemented tool provides the optimal configuration of distillation column systems, involving both continuous and discrete variables, through the minimization of the total annual cost (TAC). The robustness and flexibility of the method are demonstrated through the successful design and synthesis of three distillation systems of increasing complexity.
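
A minimal sketch of the simulator-in-the-loop PSO idea follows (the variable bounds, the rounding of the tray count, and the surrogate objective are assumptions; a real run would set the values in the flowsheet, converge the column, and return the cost instead):

# Minimal particle swarm optimization sketch for a simulator-in-the-loop design problem.
import numpy as np

rng = np.random.default_rng(1)
LOW  = np.array([20.0, 1.0])     # assumed bounds: number of trays, reflux ratio
HIGH = np.array([80.0, 10.0])

def evaluate_tac(x):
    trays = round(x[0])                      # discrete variable handled by rounding
    reflux = x[1]
    # Dummy surrogate standing in for a process-simulator evaluation of the TAC.
    return (trays - 45) ** 2 + 50 * (reflux - 3.2) ** 2

n, dim = 30, 2
pos = rng.uniform(LOW, HIGH, (n, dim))
vel = np.zeros((n, dim))
pbest, pbest_f = pos.copy(), np.array([evaluate_tac(p) for p in pos])
gbest = pbest[np.argmin(pbest_f)]

for _ in range(100):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, LOW, HIGH)
    f = np.array([evaluate_tac(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[np.argmin(pbest_f)]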

Relevance: 30.00%

Abstract:

The design of fault-tolerant systems is gaining importance in large domains of embedded applications where design constraints are as important as reliability. New software techniques, based on the selective application of redundancy, have shown remarkable fault coverage with reduced costs and overheads. However, the large number of different solutions provided by these techniques, and the costly process of assessing their reliability, make design space exploration a very difficult and time-consuming task. This paper proposes the integration of a multi-objective optimization tool with a software hardening environment to perform an automatic design space exploration in search of the best trade-offs between reliability, cost, and performance. The first tool is driven by a genetic algorithm that can pursue many design goals simultaneously thanks to the NSGA-II multi-objective algorithm. The second is a compiler-based infrastructure that automatically produces selectively protected (hardened) versions of the software and generates accurate overhead reports and fault coverage estimations. The advantages of our proposal are illustrated by means of a complex and detailed case study involving a typical embedded application, the AES (Advanced Encryption Standard).
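
To illustrate the multi-objective trade-off at the core of the exploration (the candidate values below are invented, not measurements from the paper), a minimal Pareto-filtering step over hardening configurations could be written as:

# Minimal sketch: keep the non-dominated hardening configurations when minimizing
# execution-time overhead and maximizing fault coverage (values are illustrative).
def dominates(a, b):
    """a dominates b when it is no worse in every objective and better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

# (execution-time overhead, 1 - fault coverage): both to be minimized.
candidates = [(0.05, 0.40), (0.30, 0.08), (0.15, 0.15), (0.35, 0.20), (0.10, 0.35)]

pareto = [c for c in candidates
          if not any(dominates(other, c) for other in candidates if other != c)]
print(pareto)   # the trade-off front between cost/performance and reliability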

Relevance: 30.00%

Abstract:

This paper introduces a new mathematical model for the simultaneous synthesis of heat exchanger networks (HENs), in which the pressure manipulation of process streams is used to enhance heat integration. The proposed approach combines a generalized disjunctive programming (GDP) and mixed-integer nonlinear programming (MINLP) formulation in order to minimize the total annualized cost, composed of operating and capital expenses. A multi-stage superstructure is developed for the HEN synthesis, assuming constant heat capacity flow rates and isothermal mixing, and allowing for stream splits. In this model, the pressure and temperature of the streams must be treated as optimization variables, further increasing the complexity and difficulty of solving the problem. In addition, the model allows for the coupling of compressors and turbines to save energy. A case study is performed to verify the accuracy of the proposed model. In this example, the optimal integration between heat and work decreases the need for thermal utilities in the HEN design. As a result, the total annualized cost is also reduced, owing to the decrease in the operating expenses related to the heating and cooling of the streams.
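
As a rough illustration of the style of formulation described above (generic symbols, not the paper's exact model), the existence of a heat exchanger between hot stream i and cold stream j at stage k can be expressed as a disjunction, with the total annualized cost as the objective:

\min \; \mathrm{TAC} \;=\; \underbrace{\sum c_{\text{util}}\, Q_{\text{util}}}_{\text{operating}} \;+\; \underbrace{\sum_{i,j,k} \bigl( c_F\, y_{ijk} + c_A\, A_{ijk}^{\beta} \bigr)}_{\text{capital}}

\left[ \begin{array}{c} Y_{ijk} \\ Q_{ijk} = U_{ij}\, A_{ijk}\, \Delta T^{\mathrm{LM}}_{ijk} \end{array} \right] \;\vee\; \left[ \begin{array}{c} \neg Y_{ijk} \\ Q_{ijk} = 0, \; A_{ijk} = 0 \end{array} \right]

Here Y_{ijk} is the Boolean (y_{ijk} the binary) denoting the existence of the match, Q_{ijk} the heat exchanged, and A_{ijk} the exchanger area; the second disjunct removes both when the match is absent.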

Relevance: 30.00%

Abstract:

This paper presents a new mathematical programming model for the retrofit of heat exchanger networks (HENs), in which the pressure recovery of process streams is exploited to enhance heat integration. Particularly applicable to cryogenic processes, HEN retrofit with combined heat and work integration is mainly aimed at reducing the use of expensive cold utilities. The proposed multi-stage superstructure allows increasing the existing heat transfer area, as well as adding new equipment for both heat exchange and pressure manipulation. The pressure recovery of the streams is carried out simultaneously with the HEN design, such that the process conditions (stream pressures and temperatures) are optimization variables. The mathematical model is formulated using generalized disjunctive programming (GDP) and is optimized via mixed-integer nonlinear programming (MINLP) through the minimization of the retrofit total annualized cost, considering the coupling of turbines and compressors with a helper motor. Three case studies are performed to assess the accuracy of the developed approach, including a real industrial example related to liquefied natural gas (LNG) production. The results show that the pressure recovery of streams is effective for energy savings and, consequently, for decreasing the total HEN retrofit cost, especially in sub-ambient processes.
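
As a generic sketch of the coupling constraint mentioned above (notation assumed, not necessarily the paper's), a compressor driven by a turbine on a common shaft together with a helper motor must satisfy a shaft work balance, with the motor covering any deficit in the turbine output:

W^{\mathrm{comp}} \;=\; W^{\mathrm{turb}} \;+\; W^{\mathrm{helper}}, \qquad W^{\mathrm{helper}} \ge 0

Under this balance, the more expansion work the network recovers from high-pressure streams, the smaller the electricity demand of the helper motor.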

Relevance: 30.00%

Abstract:

The change in the carbonaceous skeleton of nanoporous carbons during their activation has received limited attention, unlike the counterpart process in an inert atmosphere. Here we adopt a multi-method approach to elucidate this change in a poly(furfuryl alcohol)-derived carbon activated by cyclic application of oxygen saturation at 250 °C followed by its removal (with carbon) at 800 °C in argon. The methods used include helium pycnometry, synchrotron-based X-ray diffraction (XRD) and associated radial distribution function (RDF) analysis, transmission electron microscopy (TEM) and, uniquely, electron energy-loss spectroscopy spectrum-imaging (EELS-SI), electron nanodiffraction, and fluctuation electron microscopy (FEM). Helium pycnometry indicates that the solid skeleton of the carbon densifies during activation from 78% to 93% of the density of graphite. RDF analysis, EELS-SI, and FEM all suggest this densification comes from in-plane growth of sp2 carbon out to the medium range without a commensurate increase in order normal to the plane. This process could be termed 'graphenization'. The exact way in which this process occurs is not clear, but TEM images of the carbon before and after activation suggest it may proceed through removal of the more reactive carbon, breaking constraining cross-links and creating space that allows the remaining carbon material to migrate in an annealing-like process.
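
For scale (the crystallographic density of graphite used below is an assumption, not a value given in the abstract), the reported percentages translate roughly into skeletal densities as follows:

# Rough scale check; the graphite density value is assumed, not stated in the abstract.
GRAPHITE_DENSITY = 2.26            # g/cm^3, approximate crystallographic density of graphite
before = 0.78 * GRAPHITE_DENSITY   # ~1.76 g/cm^3 skeletal density before activation
after  = 0.93 * GRAPHITE_DENSITY   # ~2.10 g/cm^3 skeletal density after activation
print(f"{before:.2f} -> {after:.2f} g/cm^3")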