Abstract:
LOPES, Jose Soares Batista et al. Application of multivariable control using artificial neural networks in a debutanizer distillation column. In: INTERNATIONAL CONGRESS OF MECHANICAL ENGINEERING - COBEM, 19., 5-9 Nov. 2007, Brasília. Anais... Brasília, 2007.
Abstract:
SOUZA, Rodrigo B.; MEDEIROS, Adelardo A. D.; NASCIMENTO, João Maria A.; GOMES, Heitor P.; MAITELLI, André L. A Proposal to the Supervision of Processes in an Industrial Environment with Heterogeneous Systems. In: INTERNATIONAL CONFERENCE OF THE IEEE INDUSTRIAL ELECTRONICS SOCIETY - IECON, 32., 2006, Paris. Anais... Paris: IECON, 2006.
Abstract:
Deployment of low-power basestations within cellular networks can potentially increase both capacity and coverage. However, such deployments require efficient resource allocation schemes for managing interference from the low-power and macro basestations that are located within each other's transmission range. In this dissertation, we propose novel and efficient dynamic resource allocation algorithms in the frequency, time and space domains. We show that the proposed algorithms perform better than the current state-of-the-art resource management algorithms. In the first part of the dissertation, we propose an interference management solution in the frequency domain. We introduce a distributed frequency allocation scheme that shares frequencies between macro and low-power pico basestations, and guarantees a minimum average throughput to users. The scheme seeks to minimize the total number of frequencies needed to honor the minimum throughput requirements. We evaluate our scheme using detailed simulations and show that it performs on par with the centralized optimum allocation. Moreover, our proposed scheme outperforms a static frequency reuse scheme and the centralized optimal partitioning between the macro and pico basestations. In the second part of the dissertation, we propose a time domain solution to the interference problem. We consider the problem of maximizing the alpha-fairness utility over heterogeneous wireless networks (HetNets) by jointly optimizing user association, wherein each user is associated to any one transmission point (TP) in the network, and the activation fractions of all TPs. The activation fraction of a TP is the fraction of the frame duration for which it is active, and together these fractions influence the interference seen in the network. To address this joint optimization problem, which we show is NP-hard, we propose an alternating optimization based approach wherein the activation fractions and the user association are optimized in an alternating manner. The subproblem of determining the optimal activation fractions is solved using a provably convergent auxiliary function method, while the subproblem of determining the user association is solved via a simple combinatorial algorithm. Meaningful performance guarantees are derived in either case. Simulation results over a practical HetNet topology reveal the superior performance of the proposed algorithms and underscore the significant benefits of the joint optimization. In the final part of the dissertation, we propose a space domain solution to the interference problem. We consider the problem of maximizing system utility by optimizing over the set of user and TP pairs in each subframe, where each user can be served by multiple TPs. To address this optimization problem, which is NP-hard, we propose a solution scheme based on a difference-of-submodular-functions optimization approach. We evaluate our scheme using detailed simulations and show that it performs on par with a much more computationally demanding difference-of-convex-functions optimization scheme. Moreover, the proposed scheme performs within a reasonable percentage of the optimal solution. We further demonstrate the advantage of the proposed scheme by studying its performance under variation of different network topology parameters.
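For reference, the alpha-fairness utility invoked above is the standard family from the fairness literature; the abstract does not restate it, so a sketch of the usual definition is given here:

```latex
U_\alpha(x) =
\begin{cases}
\dfrac{x^{1-\alpha}}{1-\alpha}, & \alpha \ge 0,\ \alpha \ne 1,\\[4pt]
\log x, & \alpha = 1,
\end{cases}
```

where x is a user's long-term throughput; alpha = 0 recovers sum-throughput maximization, alpha = 1 proportional fairness, and alpha tending to infinity approaches max-min fairness, so the single parameter trades efficiency against fairness in the joint association/activation problem.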
Abstract:
We build a system to support search and visualization over heterogeneous information networks. We first build our system on a specialized heterogeneous information network, DBLP. The system aims to give people, especially computer science researchers, a better understanding of and user experience with academic information networks. We then extend our system to the Web. Our results are much more intuitive and informative than the simple top-k blue links returned by traditional search engines, and bring more meaningful structural results with correlated entities. We also investigate the ranking algorithm, and show that personalized PageRank and the proposed Hetero-personalized PageRank outperform TF-IDF ranking and a mixture of TF-IDF and authority ranking. Our work opens several directions for future research.
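As background for the ranking comparison above, here is a minimal power-iteration sketch of personalized PageRank (illustrative only; the Hetero-personalized variant proposed in the work is not specified here, and the graph and restart vector are toy assumptions):

```python
# Minimal personalized PageRank by power iteration. The damping factor,
# graph and restart (personalization) vector are illustrative assumptions.
def personalized_pagerank(graph, restart, d=0.85, iters=100, tol=1e-10):
    """graph: {node: [out-neighbours]}; restart: {node: prob} summing to 1."""
    nodes = list(graph)
    rank = {v: restart.get(v, 0.0) for v in nodes}  # start at the restart vector
    for _ in range(iters):
        nxt = {v: (1.0 - d) * restart.get(v, 0.0) for v in nodes}
        for u in nodes:
            out = graph[u]
            if out:  # spread u's rank over its out-links
                share = d * rank[u] / len(out)
                for v in out:
                    nxt[v] += share
            else:  # dangling node: return its mass to the restart vector
                for v in nodes:
                    nxt[v] += d * rank[u] * restart.get(v, 0.0)
        delta = sum(abs(nxt[v] - rank[v]) for v in nodes)
        rank = nxt
        if delta < tol:
            break
    return rank

# Toy graph biased towards "a"; a Hetero-personalized variant would
# additionally weight edges by node/edge type in the heterogeneous network.
g = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["a"]}
print(personalized_pagerank(g, restart={"a": 1.0}))
```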
Abstract:
Heterogeneous computing systems have become common in modern processor architectures. These systems, such as those released by AMD, Intel, and Nvidia, include both CPU and GPU cores on a single die, with reduced communication overhead compared to their discrete predecessors. Currently, discrete CPU/GPU systems are limited to larger, regular, highly parallel workloads that can overcome the communication costs of the system. Without the traditional communication delay assumed between GPUs and CPUs, we believe non-traditional workloads could be targeted for GPU execution. Specifically, this thesis focuses on the execution model of nested parallel workloads on heterogeneous systems. We have designed a simulation flow which uses widely adopted CPU and GPU simulators to model heterogeneous computing architectures. We then applied this simulator to non-traditional GPU workloads using different execution models. We have also proposed a new execution model for nested parallelism, allowing users to exploit these heterogeneous systems to reduce execution time.
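As a rough illustration of the workload class discussed above (a generic sketch, not the thesis' simulator or execution model): a nested parallel workload has an outer level of coarse tasks, each containing an inner data-parallel loop, which on a single-die heterogeneous system could map to CPU and GPU cores respectively.

```python
# Generic nested-parallel workload: coarse outer tasks (CPU-like) each
# containing a uniform inner data-parallel loop (GPU-like). Pool sizes
# and the toy computation are illustrative assumptions.
from concurrent.futures import ThreadPoolExecutor

def inner_data_parallel(chunk):
    # Inner level: fine-grained, regular work over a chunk of data,
    # the part a heterogeneous system would offload to GPU cores.
    return [x * x for x in chunk]

def outer_task(task_id, data):
    # Outer level: coarse-grained, irregular work suited to CPU cores.
    chunks = [data[i:i + 4] for i in range(0, len(data), 4)]
    with ThreadPoolExecutor(max_workers=2) as inner_pool:
        partial = inner_pool.map(inner_data_parallel, chunks)
        return task_id, [y for chunk in partial for y in chunk]

if __name__ == "__main__":
    data = list(range(16))
    with ThreadPoolExecutor(max_workers=4) as outer_pool:
        futures = [outer_pool.submit(outer_task, t, data) for t in range(4)]
        for f in futures:
            print(f.result())
```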
Abstract:
Topography is often thought of as exclusively linked to mountain ranges formed by plate collision. It is now known, however, that apart from compression, uplift and denudation of rocks may be triggered by rifting, as happens at elevated passive margins, and away from plate boundaries both by intra-plate stress causing reactivation of older structures and by epeirogenic movements driven by mantle dynamics that initiate long-wavelength uplift. In the Cenozoic, central west Britain and other parts of the North Atlantic margins experienced multiple episodes of rock uplift and denudation that varied at both spatial and temporal scales. The origin of topography in central west Britain is enigmatic and, because of its location, may be related to any of the processes mentioned above. In this study, three low-temperature thermochronometers, the apatite fission track (AFT) and apatite and zircon (U-Th-Sm)/He (AHe and ZHe, respectively) methods, were used to establish the rock cooling history from 200 °C to 30 °C. The samples were collected from the intrusive rocks in the high-elevation, high-relief regions of the Lake District (NW England), southern Scotland and northern Wales. AFT ages from the region are youngest (55–70 Ma) in the Lake District and increase northwards into southern Scotland and southwards into north Wales (>200 Ma). AHe and ZHe ages show no systematic pattern; the former range from 50 to 80 Ma and the latter tend to record the post-emplacement cooling of the intrusions (200–400 Ma). Multi-thermochronometric inverse modelling suggests a ubiquitous, rapid Late Cretaceous/early Palaeogene cooling event that is particularly marked in the Lake District and Criffell. The timing and rate of cooling in southern Scotland and northern Wales are poorly resolved because the amount of cooling was less than 60 °C. The Lake District plutons were at >110 °C prior to the early Palaeogene, their temperature maintained by the combined effect of high heat flow from the heat-producing granite batholith and the blanketing effect of the overlying low-conductivity Late Mesozoic limestones and mudstones. Modelling of the heat transfer suggests that this combination produced an elevated geothermal gradient within the sedimentary rocks (50–70 °C/km), about twice the present-day value. Inverse modelling of the AFT and AHe data, taking the crustal structure into consideration, suggests that denudation was highest, 2.0–2.5 km, in the coastal areas of the Lake District and southern Scotland, gradually decreasing to less than 1 km in the northern Southern Uplands and northern Wales. Both rift-related uplift and intra-plate compression correlate poorly with the timing, location and spatial distribution of the early Palaeogene denudation. The pattern of early Palaeogene denudation correlates with the thickness of magmatic underplating if the changes of mean topography, Late Cretaceous water depth and eroded rock density are taken into consideration. However, uplift due to underplating alone cannot fully account for the total early Palaeogene denudation. The amount not explained by underplating is, however, roughly spatially constant across the study area and can be attributed to the transient thermal uplift induced by the arrival of the mantle plume. No other mechanisms are required to explain the observed pattern of denudation. The onset of denudation across the region is not uniform. Denudation started at 70–75 Ma in the central part of the Lake District, whereas in the coastal areas rapid erosion appears to have initiated later (65–60 Ma). This is ~10 Ma earlier than the first volcanic manifestation of the proto-Iceland plume and favours the hypothesis of a short period of plume incubation below the lithosphere before the volcanism. In most of the localities, the rocks had cooled to temperatures lower than 30 °C by the end of the Palaeogene, suggesting that the total Neogene denudation was, at a maximum, several hundreds of metres. Rapid cooling in the last 3 million years is resolved in some places in southern Scotland, where it can be explained by glacial erosion and post-glacial isostatic uplift.
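The blanketing argument above follows from one-dimensional steady-state heat conduction (a textbook relation, added here for clarity rather than taken from the thesis):

```latex
q = k\,\frac{dT}{dz}
\quad\Longrightarrow\quad
\frac{dT}{dz} = \frac{q}{k},
```

so for a fixed basal heat flow q, sediments with roughly half the thermal conductivity k sustain roughly twice the geothermal gradient, consistent with the inferred 50–70 °C/km being about double the present-day value.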
Abstract:
It is known that nowadays technology develops very fast. New architectures are created in order to provide new solutions for different technology limitations and problems. Sometimes this evolution is peaceful and requires no adaptation, but at other times it may demand change. Programming languages have always been the communication bridge between the programmer and the computer. New ones keep appearing and others keep improving in order to adapt to new concepts and paradigms. This requires an extra effort from the programmer, who always needs to be aware of these changes. Visual Programming may be a solution to this problem. Expressing functions as module boxes which receive a given input and return a given output may help programmers across the world by giving them the possibility to abstract from specific low-level hardware issues related to a particular architecture. This thesis not only shows how the capabilities of the Cell/B.E. (which has a heterogeneous multiprocessor architecture) can be combined with OpenDX (which has a visual programming environment), but also demonstrates that it can be done without losing much performance.
Abstract:
Entanglement distribution between distant parties is an essential component of most quantum communication protocols. Unfortunately, decoherence effects such as phase noise in optical fibres are known to demolish entanglement. Iterative (multistep) entanglement distillation protocols have long been proposed to overcome decoherence, but their probabilistic nature makes them inefficient, since the success probability decays exponentially with the number of steps. Quantum memories have been contemplated to make entanglement distillation practical, but suitable quantum memories have not been realised to date. Here, we present the theory for an efficient iterative entanglement distillation protocol without quantum memories and provide a proof-of-principle experimental demonstration. The scheme is applied to phase-diffused two-mode-squeezed states and proven to distil entanglement for up to three iteration steps. The data are indistinguishable from those that an efficient scheme using quantum memories would produce. Since our protocol includes the final measurement, it is particularly promising for enhancing continuous-variable quantum key distribution.
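The exponential inefficiency mentioned above can be made concrete with a standard counting argument (not spelled out in the abstract): if each heralded distillation step succeeds independently with probability at most p < 1, then

```latex
P_{\text{success}}(n) = \prod_{i=1}^{n} p_i \;\le\; p^{\,n},
```

which decays exponentially in the number of iteration steps n; quantum memories would avoid this by storing successful intermediate states for pairing instead of restarting the whole chain after every failure.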
Abstract:
The growing concern about the depletion of oil has spurred worldwide interest in finding alternative feedstocks for important petrochemical commodities and fuels. On the one hand, the enormous reserves found (208 trillion cubic feet proven [1]), environmental sustainability and lower overall costs point to natural gas as the primary source for energy and chemicals in the near future [2]. Nowadays the transformation of methane into useful chemicals and liquid fuels is only feasible via synthesis gas, a mixture of molecular hydrogen and carbon monoxide, which is further transformed to methanol or to hydrocarbons under moderate reaction conditions (150-350 °C and 10-100 bar) [3]. For a major cost reduction, and in order to valorize small natural gas sources, either more efficient "syngas to products" catalysts should be produced or the manner in which methane is initially activated should be changed, ideally by developing catalysts able to directly oxidize methane to interesting products such as methanol. On the other hand, from the point of view of CO2 emissions, the use of the remaining fossil resources will further contribute to global warming. In this scenario, the development of efficient routes for the transformation of CO2 into useful chemicals and fuels would represent a considerable step towards sustainability. Indeed, the environmental and economic incentives to develop processes for the conversion of CO2 into fuels and chemicals are great. However, for such conversions to become economically feasible, considerable research is necessary. In this lecture we will summarize our recent efforts in the design of new catalytic systems, based on MOFs and COFs, to address these challenges. Examples include the development of new Fe-based FTS catalysts, electrocatalysts for the selective conversion of CO2 into syngas, efficient catalysts for the utilization of formic acid as a hydrogen storage vector, and new enzyme-inspired systems for the direct transformation of methane to methanol under mild reaction conditions. References: (1) http://www.clearonmoney.com/dw/doku.php?id=public:natural_gas_reserves. (2) Derouane, E. G.; Parmon, V.; Lemos, F.; Ribeiro, F. R. Sustainable Strategies for the Upgrading of Natural Gas: Fundamentals, Challenges, and Opportunities; Springer, 2005. (3) Rofer-DePoorter, C. K. Chemical Reviews, 1981, pp. 447–474.
Abstract:
Hypothesis: The possibility of tailoring the final properties of environmentally friendly waterborne polyurethane and polyurethane-urea dispersions and the films they produce makes them attractive for a wide range of applications. Both the reagent content and the synthesis route contribute to the observed final properties. Experiments: A series of polyurethane-urea and polyurethane aqueous dispersions were synthesized using 1,2-ethanediamine and/or 1,4-butanediol as chain extenders. The diamine content was varied from 0 to 4.5 wt%. It was added either in the classical heterogeneous reaction medium (after the phase-inversion step) or in the alternative homogeneous medium (prior to dispersion formation). Both the dispersions and the films prepared from them were then extensively characterized. Findings: 1,2-Ethanediamine addition in the heterogeneous medium leads to dispersions with large particle sizes and broad size distributions, whereas in the homogeneous medium smaller particle sizes and narrower distributions were observed, leading to higher uniformity and cohesiveness among particles during film formation. Stress transfer is thereby favored when the diamine is added in a homogeneous medium, and the resulting films presented considerably higher stress and modulus values. Furthermore, the higher uniformity of the films tends to hinder the transport of water molecules through the film, resulting, in general, in a lower water absorption capacity.
Abstract:
This paper examines the physical construction of the University of Aveiro's Santiago Campus, relating the construction of its facilities directly to the master plan conceived at the Centre of Studies of the University of Oporto Faculty of Architecture by a team coordinated by Nuno Portas (CEFA: The Revision of the Masterplan of the University of Aveiro, 1987/89).
Abstract:
Hardware vendors invest considerable effort in creating low-power CPUs that keep battery life and durability at acceptable levels. In order to achieve this goal and provide a good performance-energy trade-off for a wide variety of applications, ARM designed the big.LITTLE architecture. This heterogeneous multi-core architecture features two different types of cores: big cores oriented towards performance, and little cores, which are slower and aimed at saving energy. As all the cores have access to the same memory, multi-threaded applications must resort to some mutual exclusion mechanism to coordinate access to shared data by concurrent threads. Transactional Memory (TM) represents an optimistic approach to shared-memory synchronization. To take full advantage of the features offered by software TM, while also benefiting from the characteristics of heterogeneous big.LITTLE architectures, our focus is to propose TM solutions that take into account both the power/performance requirements of the application and what the architecture offers. In order to understand the current state of the art and obtain useful information for future power-aware software TM solutions, we have analysed a popular TM library running on top of an ARM big.LITTLE processor. Experiments show, in general, better scalability on the LITTLE cores for most of the applications, except for one that requires the computing performance the big cores offer.
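As a minimal illustration of the pessimistic, lock-based mutual exclusion that TM replaces with optimistic, speculative execution (a generic sketch, not the benchmark code analysed in the paper):

```python
# Pessimistic baseline: a mutex serializes every access to shared state.
# A software TM would instead run the critical section speculatively and
# roll back on conflict. Thread count and iteration count are arbitrary.
import threading

counter = 0
lock = threading.Lock()

def deposit(times):
    global counter
    for _ in range(times):
        with lock:        # exactly one thread in the critical section
            counter += 1  # without the lock, updates could be lost

threads = [threading.Thread(target=deposit, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 400000: mutual exclusion prevented lost updates
```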
Abstract:
The diversity in the way cloud providers offer their services, define their SLAs, present their QoS, or support different technologies makes the portability and interoperability of cloud applications very difficult, and favours the well-known vendor lock-in problem. We propose a model to describe cloud applications and the resources they require in an agnostic, provider- and resource-independent way, in which individual application modules, and entire applications, may be re-deployed using different services without modification. To support this model, and following the proposal of a variety of cross-cloud application management tools by different authors, we propose going one step further in the unification of cloud services with a management approach in which IaaS and PaaS services are integrated into a unified interface. We provide support for deploying applications whose components are distributed over different cloud providers, using IaaS and PaaS services interchangeably.
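As a rough sketch of what a provider- and resource-agnostic application description could look like (purely illustrative; the class and field names below are hypothetical and not taken from the authors' model):

```python
# Hypothetical sketch of a provider-agnostic application description.
# Class names, fields and values are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class Module:
    """One application module, described independently of any provider."""
    name: str
    runtime: str                                      # abstract runtime, not a vendor SKU
    requirements: dict = field(default_factory=dict)  # e.g. {"cpu": 2, "ram_gb": 4}
    depends_on: list = field(default_factory=list)

@dataclass
class Application:
    """A whole application whose modules can later be bound to IaaS or
    PaaS services of different providers without changing this description."""
    name: str
    modules: list

app = Application(
    name="webshop",
    modules=[
        Module("frontend", "nodejs18", {"cpu": 1, "ram_gb": 2}, depends_on=["api"]),
        Module("api", "java11", {"cpu": 2, "ram_gb": 4}, depends_on=["db"]),
        Module("db", "postgres15", {"cpu": 2, "ram_gb": 8, "disk_gb": 50}),
    ],
)
# A deployment tool would map each module to a concrete service, e.g. a VM
# on one provider or a PaaS runtime on another, at deploy time.
```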