Stochastic Analysis of Saltwater Intrusion in Heterogeneous Aquifers using Local Average Subdivision
Abstract:
This study investigates the effects of ground heterogeneity, treating permeability as a random variable, on an intruding saltwater (SW) wedge using Monte Carlo simulations. Random permeability fields were generated using the method of Local Average Subdivision (LAS), based on a lognormal probability density function. The LAS method allows the creation of spatially correlated random fields, parameterised by a coefficient of variation (COV) and horizontal and vertical scales of fluctuation (SOF). The numerical modelling code SUTRA was employed to solve the coupled flow and transport problem. The well-defined 2D dispersive Henry problem was used as the test case for the method. The intruding SW wedge is defined by two key parameters, the toe penetration length (TL) and the width of the mixing zone (WMZ). These parameters were compared to the results of a homogeneous case simulated using effective permeability values. The simulation results revealed that: (1) an increase in COV resulted in a seaward movement of the TL; (2) the WMZ extended with increasing COV; (3) a general increase in the horizontal and vertical SOF produced a seaward movement of the TL, with the WMZ increasing slightly; (4) as the anisotropy ratio increased, the TL intruded further inland and the WMZ reduced in size. The results show that, for large values of COV, effective permeability parameters are inadequate for reproducing the effects of heterogeneity on SW intrusion.
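A minimal sketch of the kind of input this workflow needs, assuming a simple Cholesky-factorisation approach to generating a lognormally distributed, spatially correlated permeability field (a stand-in for the LAS algorithm itself, which works by recursive subdivision). The grid size, mean permeability, COV and SOF values are illustrative, not taken from the study.

```python
# Illustrative sketch: spatially correlated lognormal permeability field via
# Cholesky factorisation of an exponential covariance (stand-in for LAS).
import numpy as np

def lognormal_permeability_field(nx=40, nz=20, dx=1.0, dz=0.5,
                                 k_mean=1e-11, cov=0.5,
                                 sof_x=10.0, sof_z=2.0, seed=0):
    """Return an (nz, nx) array of permeability values [m^2]."""
    rng = np.random.default_rng(seed)
    # Cell-centre coordinates.
    x = (np.arange(nx) + 0.5) * dx
    z = (np.arange(nz) + 0.5) * dz
    X, Z = np.meshgrid(x, z)                      # shape (nz, nx)
    pts = np.column_stack([X.ravel(), Z.ravel()])
    # Log-permeability statistics implied by the mean and COV of permeability.
    sigma2_ln = np.log(1.0 + cov**2)
    mu_ln = np.log(k_mean) - 0.5 * sigma2_ln
    # Separable exponential correlation with anisotropic scales of fluctuation.
    dxm = np.abs(pts[:, None, 0] - pts[None, :, 0])
    dzm = np.abs(pts[:, None, 1] - pts[None, :, 1])
    C = sigma2_ln * np.exp(-2.0 * dxm / sof_x - 2.0 * dzm / sof_z)
    L = np.linalg.cholesky(C + 1e-10 * np.eye(C.shape[0]))
    g = mu_ln + L @ rng.standard_normal(C.shape[0])
    return np.exp(g).reshape(nz, nx)

field = lognormal_permeability_field()
print(field.mean(), field.std() / field.mean())   # roughly k_mean and COV
```

In a Monte Carlo study, each realisation of such a field would be written into the model input and the resulting TL and WMZ values collected into ensemble statistics.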
Abstract:
A 2D sandbox-style experiment was developed to compare the results of numerical modelling with physical testing of saltwater intrusion in homogeneous and heterogeneous aquifers. The sandbox consisted of a thin central viewing chamber filled with glass beads of varying diameters (780μm, 1090μm and 1325μm) under fully saturated conditions. Dyed saltwater (SW) was introduced at the side boundary and a head difference imposed across the porous medium. Images of the SW wedge were recorded at intervals in order to assess the suitability of the numerical model's predictions of transient SW intrusion. The experimental cases were then simulated numerically using SUTRA. Two main parameters were chosen to express the condition of the intruding SW wedge at each recorded time step: the toe penetration length (TL) and the width of the mixing zone (WMZ). The WMZ was larger under transient conditions in the heterogeneous case, while the TL was longer in the homogeneous case. The increased variability of the flow field in the heterogeneous case resulted in increased dispersion and, thus, a wider WMZ.
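A small illustration of how the two wedge metrics might be extracted from a simulated (or image-derived) relative-concentration field. The 50% isochlor for the toe and the 25%/75% isochlors bounding the mixing zone are common conventions and are assumptions here, not values quoted from the experiment.

```python
# Hypothetical post-processing of a relative salt-concentration field c(z, x)
# (1 = seawater, 0 = freshwater), with x measured inland from the SW boundary.
import numpy as np

def wedge_metrics(c, x, c_toe=0.5, c_lo=0.25, c_hi=0.75):
    """Return (TL, WMZ): toe penetration length along the aquifer base and
    an average width of the mixing zone across the rows of the field."""
    bottom = c[-1, :]                       # concentration along the base
    # TL: furthest inland position where the base is still at least 50% salt.
    saline = np.where(bottom >= c_toe)[0]
    TL = x[saline.max()] if saline.size else 0.0
    # WMZ: horizontal distance between the 75% and 25% isochlors, row by row.
    widths = []
    for row in c:
        if row.max() >= c_hi and row.min() <= c_lo:
            x_hi = x[np.where(row >= c_hi)[0].max()]
            x_lo = x[np.where(row >= c_lo)[0].max()]
            widths.append(abs(x_lo - x_hi))
    WMZ = float(np.mean(widths)) if widths else 0.0
    return TL, WMZ
```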
Abstract:
Three issues are usually associated with threat-prevention intelligent surveillance systems: first, the fusion and interpretation of large-scale, incomplete, heterogeneous information; second, the need to effectively predict suspects' intentions and rank the potential threat posed by each suspect; third, strategies for allocating limited security resources (e.g., the dispatch of a security team) to prevent a suspect's further actions towards critical assets. In the literature, however, these three issues are seldom considered together in a sensor-network-based intelligent surveillance framework. To address this problem, in this paper we propose a multi-level decision support framework for in-time reaction in intelligent surveillance. More specifically, based on a multi-criteria event modelling framework, we design a method to predict the most plausible intention of a suspect. Following this, a decision support model is proposed to rank each suspect based on their threat severity and to determine resource allocation strategies. Finally, formal properties are discussed to justify our framework.
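As a rough illustration of the ranking step only: the criteria names, weights, and simple weighted-sum aggregation below are assumptions made for the sketch, not the multi-criteria model used in the paper.

```python
# Hypothetical multi-criteria threat ranking: each suspect is scored on a few
# criteria and ranked by a weighted sum; criteria and weights are illustrative.
from dataclasses import dataclass

@dataclass
class Suspect:
    name: str
    intention_plausibility: float   # 0..1, from the intention-prediction step
    asset_criticality: float        # 0..1, value of the threatened asset
    proximity: float                # 0..1, closeness to the asset

WEIGHTS = {"intention_plausibility": 0.5,
           "asset_criticality": 0.3,
           "proximity": 0.2}

def threat_severity(s: Suspect) -> float:
    return (WEIGHTS["intention_plausibility"] * s.intention_plausibility
            + WEIGHTS["asset_criticality"] * s.asset_criticality
            + WEIGHTS["proximity"] * s.proximity)

def rank_suspects(suspects):
    """Most severe first; the top entries would receive security resources."""
    return sorted(suspects, key=threat_severity, reverse=True)

ranked = rank_suspects([Suspect("A", 0.9, 0.7, 0.4),
                        Suspect("B", 0.4, 0.9, 0.8)])
print([s.name for s in ranked])
```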
Abstract:
The Horiuti-Polanyi mechanism has for several decades been considered universal in explaining hydrogenation reactions in heterogeneous catalysis. In this work, we examine this mechanism for the hydrogenation of acrolein, the simplest α,β-unsaturated aldehyde, in gold-based systems as well as on some other metals, using extensive first-principles calculations. It is found that a non-Horiuti-Polanyi mechanism is favored in some cases. Furthermore, the physical origin and trend of this mechanism are revealed and discussed in terms of geometrical and electronic effects, which will have a significant influence on the current understanding of heterogeneous catalytic hydrogenation reactions and on future catalyst design for these reactions.
Abstract:
The crucial role of the coverage of surface free sites in determining catalytic activity trends is quantitatively addressed with the help of density functional theory and microkinetics. First, by analyzing the activity trends of NO oxidation catalyzed by Ru, Rh, Pd, Os, Ir, and Pt surfaces with full kinetic considerations, we identify that the activity trend is in general determined by the competition between the reaction barrier and the coverage of surface free sites. Second, since the dissociation of many important molecules, such as N2, O2, and CO, follows the same Brønsted–Evans–Polanyi relationship, the coverage of surface free sites is usually a decisive term affecting the overall activity. Third, an equation is derived for the coverage of surface free sites, and it is found that this coverage contains not only all the key thermodynamic parameters but also all the kinetic properties of the catalytic system. (C) 2009 American Institute of Physics. [DOI: 10.1063/1.3140202]
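For illustration, a textbook Langmuir-type site balance shows how the free-site coverage folds the adsorption thermodynamics of all surface species into the rate of a dissociation step; the generic species set and rate expression below are assumptions for the sketch, not the specific NO-oxidation kinetics derived in the paper.

```latex
% Generic site balance: free-site coverage under quasi-equilibrated adsorption
\theta_* = \frac{1}{1 + \sum_i K_i\, p_i},
\qquad K_i = \exp\!\left(-\frac{\Delta G_{\mathrm{ads},i}}{k_\mathrm{B} T}\right),
% so a dissociation step A2 + 2* -> 2A* proceeds at a rate
r = k\, p_{\mathrm{A}_2}\, \theta_*^{\,2},
\qquad k = \nu \exp\!\left(-\frac{E_a}{k_\mathrm{B} T}\right).
```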
Abstract:
Heterogeneous catalysis is of great importance both industrially and academically. Rational design of heterogeneous catalysts is highly desirable, and computational screening and design is one of the most promising approaches for achieving it. Herein, we review some attempts from our group towards rational catalyst design using density functional theory. Some general relationships and theories concerning activity and selectivity are covered, such as the Brønsted–Evans–Polanyi relation, volcano curves/surfaces, chemical potentials, the optimal adsorption energy window and the energy descriptor of selectivity. Furthermore, the connection of these relationships and theories to rational design is discussed, and some examples of the computational screening and design method are given.
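As a reminder of the form of the first of these relationships, the Brønsted–Evans–Polanyi relation states that, within a family of similar elementary steps, the activation energy scales linearly with the reaction energy; the coefficients are system-specific fitting parameters, not values taken from the review.

```latex
% Brønsted–Evans–Polanyi relation for a family of elementary steps
E_a = \alpha\, \Delta E_{\mathrm{rxn}} + \beta, \qquad 0 \le \alpha \le 1 .
```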
Abstract:
Wearable devices performing advanced bio-signal analysis algorithms are expected to foster a revolution in the healthcare provision for chronic cardiac diseases. In this context, energy efficiency is of paramount importance, as long-term monitoring must be ensured while relying on a tiny power source. Operating at a scaled supply voltage, just above the threshold voltage, effectively helps to save substantial energy, but it makes circuits, and especially memories, more prone to errors, threatening the correct execution of algorithms. The use of error detection and correction codes may help to protect the entire memory content; however, it incurs large area and energy overheads which may not be compatible with the tight energy budgets of wearable systems. To cope with this challenge, in this paper we propose to limit the overhead of traditional schemes by selectively detecting and correcting errors only in the data that most strongly impacts the end-to-end quality of service of ultra-low-power wearable electrocardiogram (ECG) devices. This partitioning protects either the significant words or the significant bits of each data element, according to the application characteristics (the statistical properties of the data in the application buffers) and their impact on the output. Applied to real ECG signals, the proposed heterogeneous error protection scheme allows substantial energy savings (11% in wearable devices) compared to state-of-the-art approaches, such as ECC, in which the whole memory is protected against errors. At the same time, it results in negligible output quality degradation in the evaluated power-spectrum analysis application of ECG signals.
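A toy sketch of the "significant bits" idea: protect only the high-order bits of each sample with a small check and accept possible corruption in the low-order bits. The 8-bit split and the single parity bit below are illustrative assumptions, not the protection code used in the paper.

```python
# Toy selective protection: guard only the top bits of each 16-bit ECG sample.
def protect(sample: int, top_bits: int = 8) -> tuple[int, int]:
    """Return (sample, parity) where parity covers only the high-order bits."""
    msb = (sample >> (16 - top_bits)) & ((1 << top_bits) - 1)
    parity = bin(msb).count("1") & 1
    return sample, parity

def check(sample: int, parity: int, top_bits: int = 8) -> bool:
    """True if the protected high-order bits still match their parity bit."""
    msb = (sample >> (16 - top_bits)) & ((1 << top_bits) - 1)
    return (bin(msb).count("1") & 1) == parity

stored, p = protect(0b1011001110001111)
corrupted_low = stored ^ 0b0000000000000100    # low-order bit flip: ignored
corrupted_high = stored ^ 0b0100000000000000   # high-order bit flip: detected
print(check(corrupted_low, p), check(corrupted_high, p))   # True False
```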
Abstract:
Emerging web applications like cloud computing, Big Data and social networks have created the need for powerful data centres hosting hundreds of thousands of servers. Currently, data centres are based on general-purpose processors that provide high flexibility but lack the energy efficiency of customized accelerators. VINEYARD aims to develop an integrated platform for energy-efficient data centres based on new servers with novel, coarse-grain and fine-grain, programmable hardware accelerators. It will also build a high-level programming framework that allows end-users to seamlessly utilize these accelerators in heterogeneous computing systems by employing typical data-centre programming frameworks (e.g. MapReduce, Storm, Spark, etc.). This programming framework will further allow the hardware accelerators to be swapped in and out of the heterogeneous infrastructure so as to offer high flexibility and energy efficiency. VINEYARD will foster the expansion of the soft-IP core industry, currently limited to embedded systems, into the data-centre market. VINEYARD plans to demonstrate the advantages of its approach in three real use-cases: (a) a bio-informatics application for high-accuracy brain modelling, (b) two critical financial applications, and (c) a big-data analysis application.
Abstract:
Exascale computation is the next target of high-performance computing. In the push to create exascale computing platforms, simply increasing the number of hardware devices is not an acceptable option, given the limitations of power consumption, heat dissipation, and programming models designed for current hardware platforms. Instead, new hardware technologies, coupled with improved programming abstractions and more autonomous runtime systems, are required to achieve this goal. This position paper presents the design of a new runtime for a new heterogeneous hardware platform being developed to explore energy-efficient, high-performance computing. By combining a number of different technologies, this framework will both simplify the programming of current and future HPC applications and automate the scheduling of data and computation across this new hardware platform. In particular, this work explores the use of FPGAs to achieve both the power and performance goals of exascale, as well as the use of the runtime to automatically effect dynamic configuration and reconfiguration of these platforms.
Abstract:
Power capping is a fundamental method for reducing the energy consumption of a wide range of modern computing environments, ranging from mobile embedded systems to datacentres. Unfortunately, maximising performance and system efficiency under static power caps remains challenging, while maximising performance under dynamic power caps has been largely unexplored. We present an adaptive power capping method that reduces the power consumption and maximises the performance of heterogeneous SoCs for mobile and server platforms. Our technique combines power capping with coordinated DVFS, data partitioning and core allocation on a heterogeneous SoC with ARM processors and FPGA resources. We design our framework as a run-time system based on OpenMP and OpenCL to utilise the heterogeneous resources. We evaluate it through five data-parallel benchmarks on the Xilinx SoC, which allows full voltage and frequency control. Our experiments show a significant performance boost of 30% under dynamic power caps with concurrent execution on ARM and FPGA, compared to a naive separate approach.
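A minimal sketch of the kind of run-time control loop such a framework might implement: measure power, compare it against the (possibly changing) cap, then adjust the DVFS operating point and the CPU/FPGA work split. All hooks, frequency tables, and the simple step-adjustment rule are assumptions for illustration, not the controller described in the paper.

```python
# Illustrative power-cap control loop for a CPU+FPGA SoC (all hooks hypothetical).
FREQS_MHZ = [300, 600, 900, 1200]        # assumed ARM DVFS operating points

def control_step(read_power_w, power_cap_w, set_cpu_freq, state):
    """One iteration: step DVFS down / shift work to the FPGA when over the cap,
    step back up when there is headroom. `state` holds (freq_idx, cpu_share)."""
    freq_idx, cpu_share = state
    power = read_power_w()
    if power > power_cap_w():                      # over budget
        if freq_idx > 0:
            freq_idx -= 1                          # lower the CPU frequency first
        else:
            cpu_share = max(0.0, cpu_share - 0.1)  # then offload more work to FPGA
    elif power < 0.9 * power_cap_w():              # comfortable headroom
        if freq_idx < len(FREQS_MHZ) - 1:
            freq_idx += 1
        else:
            cpu_share = min(1.0, cpu_share + 0.1)
    set_cpu_freq(FREQS_MHZ[freq_idx])
    return freq_idx, cpu_share                     # cpu_share feeds data partitioning
```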