174 results for Heterogeneous Regressions Algorithms
Abstract:
An attempt is made to immobilize the homogeneous metal chloride/EMIMCl catalyst for glucose dehydration to 5-hydroxymethylfurfural (HMF). To this end, ionic liquid fragments were grafted onto the surface of SBA-15 to generate a heterogenized mimic of the homogeneous reaction medium. Despite a decrease in surface area, the ordered mesoporous structure of SBA-15 was largely retained. Metal chlorides dispersed in such an ionic liquid film are able to convert glucose to HMF with much higher yields than is possible in the aqueous phase. The reactivity order CrCl3 > AlCl3 > CuCl2 > FeCl3 is similar to the order in the ionic liquid solvent, yet the selectivities are lower. The HMF yield of the most promising CrCl3-Im-SBA-15 can be improved by using a H2O:DMSO mixture as the reaction medium and a 2-butanol/MIBK extraction layer. Several attempts to decrease metal chloride leaching by varying the solvent are described. © 2013 American Institute of Chemical Engineers Environ Prog.
Abstract:
This paper describes the ParaPhrase project, a new 3-year targeted research project funded under EU Framework 7 Objective 3.4 (Computer Systems), starting in October 2011. ParaPhrase aims to follow a new approach to introducing parallelism using advanced refactoring techniques coupled with high-level parallel design patterns. The refactoring approach will use these design patterns to restructure programs defined as networks of software components into other forms that are better suited to parallel execution. The programmer will be aided by high-level cost information that will be integrated into the refactoring tools. The implementation of these patterns will then use a well-understood algorithmic skeleton approach to achieve good parallelism. A key ParaPhrase design goal is that parallel components should match heterogeneous architectures, defined in terms of CPU/GPU combinations, for example. In order to achieve this, the ParaPhrase approach will map components at link time to the available hardware, and will then re-map them during program execution, taking account of multiple applications, changes in hardware resource availability, the desire to reduce communication costs, etc. In this way, we aim to develop a new approach to programming that will be able to produce software that can adapt to dynamic changes in the system environment. Moreover, by using a strong component basis for parallelism, we can achieve potentially significant gains in terms of reducing sharing at a high level of abstraction, and so in reducing or even eliminating the costs that are usually associated with cache management, locking, and synchronisation. © 2013 Springer-Verlag Berlin Heidelberg.
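The pattern-based approach described above can be illustrated with a simple algorithmic skeleton. The sketch below is a minimal, hypothetical task-farm (parallel map) skeleton in Python; it is an illustration of the general idea only, not ParaPhrase's actual tooling, and the names `farm`, `worker_count` and `expensive_component` are assumptions made here for the example.

```python
# Minimal task-farm (parallel map) skeleton: apply one worker function to many
# independent inputs in parallel. Illustrative sketch, not ParaPhrase code.
from concurrent.futures import ProcessPoolExecutor

def farm(worker, inputs, worker_count=4):
    """Task-farm skeleton: run `worker` over `inputs` with a pool of processes."""
    with ProcessPoolExecutor(max_workers=worker_count) as pool:
        return list(pool.map(worker, inputs))

def expensive_component(x):
    # Stand-in for a sequential software component identified by refactoring.
    return sum(i * i for i in range(x))

if __name__ == "__main__":
    print(farm(expensive_component, range(10_000, 10_010)))
```

The point of the skeleton is that the coordination code (pool size, scheduling, result collection) lives in one reusable pattern, so a refactoring tool can swap the sequential loop for the parallel form without touching the component itself.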
Abstract:
Processor architectures have taken a turn towards many-core processors, which integrate multiple processing cores on a single chip to increase overall performance, and there are no signs that this trend will stop in the near future. Many-core processors are harder to program than multi-core and single-core processors because of the need to write parallel or concurrent programs with high degrees of parallelism. Moreover, many-cores have to operate in a mode of strong scaling because of memory bandwidth constraints. In strong scaling, increasingly fine-grained parallelism must be extracted in order to keep all processing cores busy.
Task dataflow programming models have a high potential to simplify parallel programming because they relieve the programmer of the need to identify precisely all inter-task dependences when writing programs. Instead, the task dataflow runtime system detects and enforces inter-task dependences during execution based on the description of the memory each task accesses. The runtime constructs a task dataflow graph that captures all tasks and their dependences. Tasks are scheduled to execute in parallel, taking into account the dependences specified in the task graph.
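As a rough illustration of how a runtime can infer dependences from per-task memory annotations, the sketch below tracks the last writer and the readers of each memory object and adds read-after-write and write-after-read/write edges. It is a deliberately simplified assumption of how such a runtime might work, not the specific system studied in this paper; the class and method names are invented for the example.

```python
# Toy task-dataflow graph builder: edges are inferred from in/out annotations.
from collections import defaultdict

class TaskGraph:
    def __init__(self):
        self.edges = defaultdict(set)          # task id -> successor task ids
        self.last_writer = {}                  # memory object -> last writing task
        self.readers_since_write = defaultdict(list)
        self.next_id = 0

    def submit(self, ins=(), outs=()):
        tid = self.next_id
        self.next_id += 1
        for obj in ins:                        # read-after-write dependence
            if obj in self.last_writer:
                self.edges[self.last_writer[obj]].add(tid)
            self.readers_since_write[obj].append(tid)
        for obj in outs:                       # write-after-read / write-after-write
            for reader in self.readers_since_write[obj]:
                self.edges[reader].add(tid)
            if obj in self.last_writer:
                self.edges[self.last_writer[obj]].add(tid)
            self.last_writer[obj] = tid
            self.readers_since_write[obj] = []
        return tid

g = TaskGraph()
t0 = g.submit(outs=["a"])             # produce a
t1 = g.submit(ins=["a"], outs=["b"])  # consume a, produce b
t2 = g.submit(ins=["a", "b"])         # depends on both t0 and t1
print(dict(g.edges))                  # {0: {1, 2}, 1: {2}}
```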
Several papers report significant overheads for task dataflow systems, which severely limit the scalability and usability of such systems. In this paper we study efficient schemes to manage task graphs and analyze their scalability. We assume a programming model that supports input, output and in/out annotations on task arguments, as well as commutative in/out and reductions. We analyze the structure of task graphs and identify versions and generations as key concepts for their efficient management. Then, we present three schemes to manage task graphs built on graph representations, hypergraphs and lists. We also consider a fourth, edge-less scheme that synchronizes tasks using integers. Analysis using micro-benchmarks shows that the graph representation is not always scalable and that the edge-less scheme introduces the least overhead in nearly all situations.
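One plausible realization of an edge-less, integer-based scheme is a ticket-style counter per memory object: a task records how many conflicting predecessors had been issued on an object at submission time and becomes ready once the object's completion counter reaches that value, so no explicit successor lists are stored. The sketch below is a simplified illustration under that assumption and is not necessarily the exact scheme evaluated in the paper.

```python
# Edge-less, ticket-style synchronization on per-object integer counters.
class ObjectCounters:
    def __init__(self):
        self.writers_issued = 0   # writer tasks submitted on this object so far
        self.writers_done = 0     # writer tasks on this object that have completed

def reader_ready(obj, ticket):
    # A reader submitted after `ticket` writers may run once all of them finished.
    return obj.writers_done >= ticket

a = ObjectCounters()
a.writers_issued += 1             # a writer task on `a` is submitted
ticket = a.writers_issued         # a reader submitted now must wait for 1 writer
print(reader_ready(a, ticket))    # False: the writer has not finished yet
a.writers_done += 1               # the writer completes
print(reader_ready(a, ticket))    # True: the reader may now execute
```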
Abstract:
Many modern networks are reconfigurable, in the sense that the topology of the network can be changed by the nodes in the network. For example, peer-to-peer, wireless and ad-hoc networks are reconfigurable. More generally, many social networks, such as a company's organizational chart; infrastructure networks, such as an airline's transportation network; and biological networks, such as the human brain, are also reconfigurable. Modern reconfigurable networks have a complexity unprecedented in the history of engineering, resembling a dynamic and evolving living animal more than a structure of steel designed from a blueprint. Unfortunately, our mathematical and algorithmic tools have not yet developed enough to handle this complexity and fully exploit the flexibility of these networks. We believe that it is no longer possible to build networks that are scalable and never have node failures. Instead, these networks should be able to admit small, and perhaps periodic, failures and still recover, much as skin heals from a cut. This process, in which the network recovers itself by maintaining key invariants in response to attack by a powerful adversary, is what we call self-healing. Here, we present several fast and provably good distributed algorithms for self-healing in reconfigurable dynamic networks. Each of these algorithms has different properties and a different set of guarantees and limitations. We also discuss future directions and theoretical questions we would like to answer.
Abstract:
The fundamental understanding of activity in heterogeneous catalysis has long been a major subject in chemistry. This paper describes the development of a two-step model to understand this activity. Using the theory of chemical potential kinetics together with Brønsted-Evans-Polanyi relations, a general adsorption-energy window is determined from volcano curves, within which the best catalysts can be searched for. Significant insights into the reasons for catalytic activity are obtained.
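To make the idea of an adsorption-energy window concrete, the sketch below shows how a volcano curve can arise from a generic two-step picture: a Brønsted-Evans-Polanyi relation Ea = α·ΔE + β makes the first step slow when the intermediate binds weakly, while removing a strongly bound intermediate makes the second step slow. All parameter values and the functional form are arbitrary illustrations, not the model or data of this paper.

```python
# Toy volcano curve from a two-step picture with a BEP relation (illustrative only).
import math

kB_T = 0.0592  # eV, roughly 700 K; an arbitrary illustrative temperature

def volcano_rate(dE, alpha=0.5, beta=1.0):
    """Activity limited by adsorption at weak binding and by product removal at
    strong binding; the BEP relation gives Ea_ads = alpha*dE + beta."""
    Ea_ads = max(0.0, alpha * dE + beta)   # activation energy of the adsorption step
    Ea_rem = max(0.0, -dE)                 # cost of removing a strongly bound species
    r_ads = math.exp(-Ea_ads / kB_T)
    r_rem = math.exp(-Ea_rem / kB_T)
    return min(r_ads, r_rem)               # the slowest step controls the activity

# Scan adsorption energies to locate the volcano maximum (the optimal binding window).
best_dE = max((dE / 10.0 for dE in range(-30, 11)), key=volcano_rate)
print("optimal adsorption energy ~ %.1f eV" % best_dE)
```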
Abstract:
Mechanochemical synthesis has the potential to provide more sustainable preparative routes to catalysts than the current multistep solvent-based routes. In this review, the mechanochemical synthesis of catalysts is discussed, with emphasis placed on catalysts for environmental, energy and chemical synthesis applications. This includes the formation of mixed-metal oxides as well as the process of dispersing metals onto solid supports. In most cases the process involves no solvent. Encouragingly, there are several examples where the process is advantageous compared with the more usual solvent-based methods. This can be because of process cost or simplicity or, notably, because it provides more active/selective catalysts than those made by conventional wet chemical methods. The need for greater, and more systematic, exploration of this currently unconventional approach to catalyst synthesis is highlighted.
Abstract:
In highly heterogeneous aquifer systems, the conceptualization of regional groundwater flow models frequently results in the generalization or neglect of aquifer heterogeneities, both of which may lead to erroneous model outputs. Calculating equivalent hydrogeological parameters for upscaling provides a means of carrying measurement-scale information up to the regional scale. In this study, the Permo-Triassic Lagan Valley strategic aquifer in Northern Ireland is observed to be heterogeneous, if not discontinuous, because of subvertically trending low-permeability Tertiary dolerite dykes. Interpretation of ground and aerial magnetic surveys produces a deterministic solution for the dyke locations. By measuring the relative permeabilities of both the dykes and the sedimentary host rock, equivalent directional permeabilities, and hence anisotropy, are calculated as a function of dyke density. This provides parameters for larger-scale equivalent blocks, which can be imported directly into numerical groundwater flow models. Different conceptual models with different degrees of upscaling are tested numerically and the results compared with regional flow observations. The simulations show that the permeabilities upscaled from geophysical data properly account for the observed spatial variations of groundwater flow, without requiring an artificial distribution of aquifer properties. It is also found that an intermediate degree of upscaling, between accounting for mapped field-scale dykes and using a single regional anisotropy value (maximum upscaling), provides results closest to the observations at the regional scale.
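As an illustration of the kind of equivalence calculation involved, the sketch below uses the classic layered-medium bounds: flow parallel to a set of planar dykes sees the arithmetic (volume-weighted) mean permeability, while flow perpendicular to them sees the harmonic mean, giving directional permeabilities and an anisotropy ratio as a function of dyke density. The assumption of planar, parallel dykes and the numerical values are illustrative only, not the study's data or method.

```python
# Layered-medium equivalent permeabilities for a block cut by parallel dykes.
def equivalent_permeabilities(k_host, k_dyke, dyke_fraction):
    """Arithmetic mean parallel to the dykes, harmonic mean perpendicular to them."""
    f = dyke_fraction
    k_parallel = (1 - f) * k_host + f * k_dyke
    k_perpendicular = 1.0 / ((1 - f) / k_host + f / k_dyke)
    return k_parallel, k_perpendicular

# Illustrative values only (m/day): permeable host rock vs. dolerite dyke,
# with 2% of the block volume occupied by dykes.
k_par, k_perp = equivalent_permeabilities(k_host=1.0, k_dyke=1e-4, dyke_fraction=0.02)
print("k_parallel = %.3f, k_perpendicular = %.4f, anisotropy = %.1f"
      % (k_par, k_perp, k_par / k_perp))
```

Even a small dyke fraction dominates the perpendicular (harmonic-mean) permeability, which is why the dykes control the regional anisotropy despite occupying little volume.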