62 results for Distributed parameter systems
Abstract:
Regional groundwater flow in high mountainous terrain is governed by a multitude of factors such as geology, topography, recharge conditions, structural elements such as fracturing and regional fault zones, as well as man-made underground structures. By means of a numerical groundwater flow model, we consider the impact of deep underground tunnels and of an idealized major fault zone on the groundwater flow systems within the fractured Rotondo granite. The position of the free groundwater table in response to the above subsurface structures and, in particular, with regard to the influence of spatially distributed groundwater recharge rates is addressed. The model results show significant unsaturated zones below the mountain ridges in the study area, with a thickness of up to several hundred metres. The subsurface galleries are shown to have a strong effect on the head distribution in the model domain, locally causing a reversal of natural head gradients. With respect to the position of the catchment areas relative to the tunnel and the corresponding type of recharge source for the tunnel inflows (i.e. glaciers or recent precipitation), as well as water table elevation, the influence of spatially distributed recharge rates is compared to that of uniform recharge rates. Water table elevations below the well-exposed high-relief mountain ridges are observed to be more sensitive to changes in groundwater recharge rates and permeability than those below ridges with less topographic relief. In the conceptual framework of the numerical simulations, the model fault zone has less influence on the groundwater table position but, more importantly, acts as a fast flow path for recharge from glaciated areas towards the subsurface galleries. This is in agreement with a previous study, in which the imprint of glacial recharge was observed in the environmental isotope composition of groundwater sampled in the subsurface galleries. Copyright © 2012 John Wiley & Sons, Ltd.
Abstract:
The growth of renewable power sources, distributed generation and the potential for alternative-fuelled modes of transport such as electric vehicles has led to concerns over the ability of existing grid systems to facilitate such diverse portfolio mixes in already congested power systems. Internationally, the growth in renewable energy sources is driven by government policy targets associated with the uncertainties of fossil fuel supplies, environmental issues and a move towards energy independence. Power grids were traditionally designed as vertically integrated, centrally managed entities with fully dispatchable generating plant. Renewable power sources, distributed generation and alternative-fuelled vehicles will place these power systems under additional stresses and strains owing to their different operational characteristics. Energy storage and smart grid technologies are widely proposed as the tools to integrate these future diverse portfolio mixes into more conventional power systems. The choice of these technologies is determined not only by their location on the grid system, but also by the diversification in the power portfolio mix, the electricity market and the operational demands. This paper presents a high-level technical and economic overview of the role and relevance of electrical energy storage and smart grid technologies in the next generation of renewable power systems.
Abstract:
Dynamic mechanical analysis (DMA) is an analytical technique in which an oscillating stress is applied to a sample and the resultant strain is measured as a function of both oscillatory frequency and temperature. From this, a comprehensive knowledge of the relationships between the various viscoelastic parameters, e.g. storage and loss moduli, the mechanical damping parameter (tan delta) and dynamic viscosity, and temperature may be obtained. An introduction to the theory of DMA and pharmaceutical and biomedical examples of the use of this technique are presented in this concise review. In particular, examples are described in which DMA has been employed to quantify the storage and loss moduli of polymers, polymer damping properties, glass transition temperature(s), the rate and extent of curing of polymer systems, polymer-polymer compatibility and the identification of sol-gel transitions. Furthermore, future applications of the technique for the optimisation of the formulation of pharmaceutical and biomedical systems are discussed. (C) 1999 Elsevier Science B.V. All rights reserved.
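The viscoelastic parameters named in this abstract all follow from the amplitude ratio and phase lag of the oscillatory stress-strain response. As a minimal illustrative sketch (not taken from the review; the function name and inputs are hypothetical), the storage modulus E', loss modulus E'' and tan delta can be derived as:

```python
import math

def viscoelastic_params(stress_amp, strain_amp, phase_lag_rad):
    """Derive basic DMA quantities from one oscillatory measurement.

    stress_amp, strain_amp: amplitudes of the applied stress and the
    measured strain; phase_lag_rad: phase lag (delta) between them.
    """
    complex_modulus = stress_amp / strain_amp            # |E*|
    storage = complex_modulus * math.cos(phase_lag_rad)  # E'  (elastic part)
    loss = complex_modulus * math.sin(phase_lag_rad)     # E'' (viscous part)
    tan_delta = loss / storage                           # mechanical damping
    return storage, loss, tan_delta

# At a 45-degree phase lag, E' = E'' and tan delta = 1 by definition.
e_store, e_loss, damping = viscoelastic_params(2.0, 1.0, math.pi / 4)
```

Sweeping such measurements over frequency and temperature is what yields the glass transition and curing behaviour discussed in the review.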
Abstract:
The scheduling problem in distributed data-intensive computing environments has become an active research topic due to the tremendous growth in grid and cloud computing environments. As an innovative distributed intelligent paradigm, swarm intelligence provides a novel approach to solving these potentially intractable problems. In this paper, we formulate the scheduling problem for work-flow applications with security constraints in distributed data-intensive computing environments and present a novel security constraint model. Several meta-heuristic adaptations to the particle swarm optimization algorithm are introduced to deal with the formulation of efficient schedules. A variable neighborhood particle swarm optimization algorithm is compared with a multi-start particle swarm optimization and a multi-start genetic algorithm. Experimental results illustrate that population-based meta-heuristic approaches usually provide a good balance between global exploration and local exploitation, and demonstrate their feasibility and effectiveness for scheduling work-flow applications. © 2010 Elsevier Inc. All rights reserved.
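The particle swarm optimization algorithm that these adaptations build on can be sketched in a few lines. This is a generic, minimal PSO on a continuous objective, not the paper's variable neighborhood or security-constrained variant; all names and parameter values are illustrative:

```python
import random

def pso(objective, dim, n_particles=20, iters=100,
        w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0):
    """Minimal particle swarm optimisation minimising `objective`."""
    pos = [[random.uniform(lo, hi) for _ in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # personal bests
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest_pos, gbest_val = pbest[g][:], pbest_val[g]  # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # inertia + cognitive (pbest) + social (gbest) pull
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest_pos[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest_pos, gbest_val = pos[i][:], val
    return gbest_pos, gbest_val

# Example: minimise the sphere function sum(x^2), optimum 0 at the origin.
best, best_val = pso(lambda x: sum(v * v for v in x), dim=3)
```

Scheduling variants replace the continuous position with an encoding of task-to-resource assignments and the objective with makespan plus security-constraint penalties.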
Abstract:
This paper elaborates on the ergodic capacity of fixed-gain amplify-and-forward (AF) dual-hop systems, which have recently attracted considerable research and industry interest. In particular, two novel capacity bounds that allow for fast and efficient computation and apply for nonidentically distributed hops are derived. More importantly, they are generic since they apply to a wide range of popular fading channel models. Specifically, the proposed upper bound applies to Nakagami-m, Weibull, and generalized-K fading channels, whereas the proposed lower bound is more general and applies to Rician fading channels. Moreover, it is explicitly demonstrated that the proposed lower and upper bounds become asymptotically exact in the high signal-to-noise ratio (SNR) regime. Based on our analytical expressions and numerical results, we gain valuable insights into the impact of model parameters on the capacity of fixed-gain AF dual-hop relaying systems. © 2011 IEEE.
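The quantity being bounded, the ergodic capacity E[log2(1 + gamma_end)], can always be cross-checked by Monte Carlo simulation. A hedged sketch follows, assuming Rayleigh fading on both hops (so the per-hop SNRs are exponentially distributed; this is the m = 1 special case of the Nakagami-m channels covered by the bounds) and the standard fixed-gain AF end-to-end SNR gamma_end = g1*g2 / (g2 + C), with C a constant set by the fixed relay gain; the function name is illustrative:

```python
import math
import random

def ergodic_capacity_mc(mean_snr1, mean_snr2, gain_const, n=50_000):
    """Monte Carlo estimate of E[log2(1 + gamma_end)] for a fixed-gain
    AF dual-hop link with gamma_end = g1 * g2 / (g2 + C).

    Rayleigh fading is assumed on both hops, so the per-hop SNRs
    g1, g2 are exponential with the given means (linear scale).
    """
    acc = 0.0
    for _ in range(n):
        g1 = random.expovariate(1.0 / mean_snr1)
        g2 = random.expovariate(1.0 / mean_snr2)
        acc += math.log2(1.0 + g1 * g2 / (g2 + gain_const))
    return acc / n
```

Because nonidentical means are allowed for the two hops, the same routine also illustrates the nonidentically distributed case treated by the derived bounds.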
Abstract:
System Dynamics enables modelling and simulation of highly non-linear feedback systems to predict future system behaviour. Parameter estimation and equation formulation are techniques in System Dynamics used to retrieve the values of parameters or the equations for flows and/or variables. These techniques are crucial for specifying the model and, thereafter, for its simulation. This paper critically examines existing and well-established approaches in parameter estimation and equation formulation along with their limitations, identifying performance gaps as well as providing directions for potential future research.
Abstract:
We study the effects of post-selection measurements on both the non-classicality of the state of a mechanical oscillator and the entanglement between two mechanical systems that are part of a distributed optomechanical network. We address the cases of both Gaussian and non-Gaussian measurements, identifying in which cases simple photon counting and Geiger-like measurements are effective in distilling a strongly non-classical mechanical state and enhancing the purely mechanical entanglement between two elements of the network.
Abstract:
We consider the distribution of entanglement from a multimode optical driving source to a network of remote and independent optomechanical systems. By focusing on the tripartite case, we analyse the effects that the features of the optical input states have on the degree and sharing structure of the distributed, fully mechanical, entanglement. This study, conducted on the mechanical steady state, highlights the structure of the entanglement distributed among the nodes and determines the relative efficiency of bipartite versus tripartite entanglement transfer. We discuss a few open points, some of which concern ways of bypassing the limitations identified here.
Abstract:
We consider the problem of self-healing in peer-to-peer networks that are under repeated attack by an omniscient adversary. We assume that the following process continues for up to n rounds, where n is the total number of nodes initially in the network: the adversary deletes an arbitrary node from the network, then the network responds by quickly adding a small number of new edges.
We present a distributed data structure that ensures two key properties. First, the diameter of the network is never more than O(log Delta) times its original diameter, where Delta is the maximum degree of the network initially. We note that for many peer-to-peer systems, Delta is polylogarithmic, so the diameter increase would be a O(log log n) multiplicative factor. Second, the degree of any node never increases by more than 3 over its original degree. Our data structure is fully distributed, has O(1) latency per round and requires each node to send and receive O(1) messages per round. The data structure requires an initial setup phase that has latency equal to the diameter of the original network, and requires, with high probability, each node v to send O(log n) messages along every edge incident to v. Our approach is orthogonal and complementary to traditional topology-based approaches to defending against attack.
Abstract:
The exponential growth in user and application data entails new means for providing fault tolerance and protection against data loss. High Performance Computing (HPC) storage systems, which are at the forefront of handling the data deluge, typically employ hardware RAID at the backend. However, such solutions are costly, do not ensure end-to-end data integrity, and can become a bottleneck during data reconstruction. In this paper, we design an innovative solution to achieve a flexible, fault-tolerant, and high-performance RAID-6 solution for a parallel file system (PFS). Our system utilizes low-cost, strategically placed GPUs — both on the client and server sides — to accelerate parity computation. In contrast to hardware-based approaches, we provide full control over the size, length and location of a RAID array on a per-file basis, end-to-end data integrity checking, and parallelization of RAID array reconstruction. We have deployed our system in conjunction with the widely-used Lustre PFS, and show that our approach is feasible and imposes acceptable overhead.
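The parity computation that this system offloads to GPUs is, in standard RAID-6, a P parity (plain XOR across the stripe) and a Q parity (a Reed-Solomon syndrome over GF(2^8)). A minimal CPU-side sketch of that arithmetic, not the paper's implementation and with illustrative names, assuming the common 0x11D reduction polynomial:

```python
def gf_mul(a, b):
    """Multiply two bytes in GF(2^8) modulo x^8 + x^4 + x^3 + x^2 + 1."""
    p = 0
    for _ in range(8):
        if b & 1:
            p ^= a
        b >>= 1
        carry = a & 0x80
        a = (a << 1) & 0xFF
        if carry:
            a ^= 0x1D  # reduce by the field polynomial
    return p

def raid6_parity(data_blocks):
    """Compute the P (XOR) and Q (Reed-Solomon) parity blocks for one
    stripe; data_blocks is a list of equal-length byte strings."""
    size = len(data_blocks[0])
    p = bytearray(size)
    q = bytearray(size)
    for i, block in enumerate(data_blocks):
        coeff = 1
        for _ in range(i):          # coeff = 2^i in GF(2^8)
            coeff = gf_mul(coeff, 2)
        for j, byte in enumerate(block):
            p[j] ^= byte                    # P parity: running XOR
            q[j] ^= gf_mul(coeff, byte)     # Q parity: weighted XOR
    return bytes(p), bytes(q)
```

Any two lost blocks in a stripe can be solved from P and Q; the per-byte independence of this arithmetic is what makes it amenable to GPU parallelisation.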