892 results for Distributed parameter systems


Relevance:

80.00%

Publisher:

Abstract:

This paper describes the authors' distributed parameter approach to deriving closed-form expressions for the four-pole parameters of perforated three-duct muffler components. In this method, three simultaneous second-order partial differential equations are first reduced to a set of six first-order ordinary differential equations. These equations are then uncoupled by means of a modal matrix. The resulting 6 × 6 matrix is reduced to the 2 × 2 transfer matrix using the relevant boundary conditions. This is combined with the transfer matrices of other elements (upstream and downstream of the perforated element) to predict muffler performance measures such as noise reduction, which is also measured experimentally. The correlation between the experimental and theoretical values of noise reduction is shown to be satisfactory.
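The four-pole (transfer) matrix algebra underlying this approach can be sketched briefly. Below is a minimal Python illustration of cascading 2 × 2 transfer matrices and evaluating a standard plane-wave performance metric (transmission loss); the duct dimensions are invented and the perforated-element matrix is a uniform-duct stand-in, not the closed-form expressions derived in the paper.

```python
import numpy as np

# A four-pole (transfer) matrix T relates acoustic pressure and volume
# velocity at an element's inlet to those at its outlet:
#   [p_in, v_in]^T = T @ [p_out, v_out]^T
# Cascaded elements combine by matrix multiplication. The element matrices
# below are hypothetical placeholders, not the paper's derived expressions.

rho, c = 1.2, 343.0        # air density (kg/m^3) and sound speed (m/s)
S = 0.002                  # duct cross-sectional area (m^2), assumed
Y = rho * c / S            # characteristic impedance of the duct

def straight_duct(length, k):
    """Four-pole matrix of a uniform duct of given length at wavenumber k."""
    return np.array([[np.cos(k * length), 1j * Y * np.sin(k * length)],
                     [1j * np.sin(k * length) / Y, np.cos(k * length)]])

k = 2 * np.pi * 500.0 / c  # wavenumber at 500 Hz
# Upstream duct x (stand-in) perforated element x downstream duct
T_perf = straight_duct(0.20, k)
T = straight_duct(0.10, k) @ T_perf @ straight_duct(0.15, k)

A, B, C, D = T[0, 0], T[0, 1], T[1, 0], T[1, 1]
TL = 20 * np.log10(abs(A + B / Y + C * Y + D) / 2)  # transmission loss (dB)
print(f"Transmission loss at 500 Hz: {TL:.2f} dB")
```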

Relevance:

80.00%

Publisher:

Abstract:

We develop a coupled nonlinear oscillator model involving magnetization and strain to explain several experimentally observed dynamical features exhibited by a forced magnetostrictive ribbon. We show that the model recovers the observed period-doubling route to chaos as a function of the dc field for a fixed ac field, and the quasiperiodic route to chaos as a function of the ac field with the dc field held constant. The model also predicts induced and suppressed chaos under the influence of an additional small-amplitude near-resonant ac field. Our analysis suggests rich dynamics in coupled order-parameter systems such as magnetomartensitic and magnetoelectric materials.
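The paper's magnetization-strain equations are not reproduced in the abstract, so the sketch below uses a forced Duffing oscillator as a generic stand-in to illustrate the diagnostic involved: stroboscopic sampling at the drive period while sweeping the dc field, with the orbit period read off from the number of distinct sampled values. All parameter values are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Generic stand-in dynamics (forced Duffing oscillator), not the paper's
# coupled magnetization-strain model. Stroboscopic sampling at the drive
# period reveals period-1, period-2, ... orbits as the dc bias is swept.

def rhs(t, y, h_dc, h_ac, omega):
    x, v = y
    return [v, -0.3 * v + x - x**3 + h_dc + h_ac * np.cos(omega * t)]

omega, h_ac = 1.2, 0.3
T_drive = 2 * np.pi / omega
for h_dc in (0.00, 0.05, 0.10):                 # coarse dc-field sweep
    strobe = np.arange(300, 400) * T_drive      # sample after transients
    sol = solve_ivp(rhs, (0, 400 * T_drive), [0.1, 0.0], t_eval=strobe,
                    args=(h_dc, h_ac, omega), rtol=1e-8, atol=1e-10)
    n = len(np.unique(np.round(sol.y[0], 3)))   # distinct strobe values
    print(f"h_dc = {h_dc:.2f}: ~period-{n} orbit")
```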

Relevance:

80.00%

Publisher:

Abstract:

The main objective of this study is to examine the accuracy of, and differences among, simulated streamflows driven by rainfall estimates from a network of 22 rain gauges spread over a 2,170 km² watershed, NEXRAD Stage III radar data, and Tropical Rainfall Measuring Mission (TRMM) 3B42 satellite data. The Gridded Surface Subsurface Hydrologic Analysis (GSSHA) model, a physically based, distributed parameter, grid-structured hydrologic model, was used to simulate the June 2002 flooding event in the Upper Guadalupe River watershed in south central Texas. There were significant differences between the rainfall fields estimated by the three measurement technologies, and these differences produced even larger differences in the simulated hydrologic response of the watershed. In general, simulations driven by radar rainfall yielded better results than those driven by satellite or rain-gauge estimates. This study also presents an overview of the effects of land cover changes on runoff and stream discharge. The results demonstrate that, for major rainfall events similar to the 2002 event, the urbanization of the watershed over the past two decades would not have had any significant effect on the hydrologic response; the effect of urbanization on the hydrologic response increases as the size of the rainfall event decreases.
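A common way to quantify "better results" in such comparisons is the Nash-Sutcliffe efficiency (NSE) of each simulated hydrograph against the observed one. The sketch below shows the computation with hypothetical streamflow values; the paper's actual hydrographs and scores are not reproduced here.

```python
import numpy as np

# Nash-Sutcliffe efficiency (NSE), a standard goodness-of-fit score for
# simulated hydrographs against observed streamflow; values closer to 1
# indicate better agreement. All arrays below are hypothetical.

def nse(observed, simulated):
    observed, simulated = np.asarray(observed), np.asarray(simulated)
    return 1 - np.sum((observed - simulated) ** 2) / np.sum(
        (observed - np.mean(observed)) ** 2)

observed = np.array([12.0, 45.0, 310.0, 520.0, 280.0, 90.0, 30.0])  # m^3/s
sims = {
    "rain gauges": np.array([10.0, 30.0, 240.0, 400.0, 250.0, 80.0, 28.0]),
    "NEXRAD radar": np.array([11.0, 42.0, 300.0, 505.0, 270.0, 88.0, 29.0]),
    "TRMM 3B42": np.array([14.0, 60.0, 380.0, 610.0, 330.0, 110.0, 35.0]),
}
for source, sim in sims.items():
    print(f"{source}: NSE = {nse(observed, sim):.3f}")
```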

Relevance:

80.00%

Publisher:

Abstract:

Clock synchronization is an extremely important requirement of wireless sensor networks (WSNs). In many application scenarios, such as weather monitoring and forecasting, external clock synchronization may be required because the WSN itself may consist of components that are not connected to each other. A usual approach to external clock synchronization in WSNs is to synchronize the clock of a reference node with an external source such as UTC, and to have the remaining nodes synchronize with the reference node using an internal clock synchronization protocol. To provide highly accurate time, both the offset and the drift rate of each clock with respect to the reference node are estimated from time to time, and these are used to obtain the correct time from a local clock reading. A problem with this approach is that it is difficult to estimate the offset of a clock with respect to the reference node when the drift rates of the clocks vary over time. In this paper, we first propose a novel internal clock synchronization protocol based on a weighted averaging technique, which periodically synchronizes all the clocks of a WSN to a reference node. We call this protocol the weighted average based internal clock synchronization (WICS) protocol. Based on this protocol, we then propose our weighted average based external clock synchronization (WECS) protocol. We have analyzed the proposed protocols and shown that the maximum synchronization error is always upper bounded. Extensive simulation studies of the proposed protocols have been carried out using the Castalia simulator. The simulation results validate our theoretical claim that the maximum synchronization error is always upper bounded and also show that the proposed protocols outperform existing protocols in terms of synchronization accuracy. A prototype implementation of the proposed internal clock synchronization protocol using a few TelosB motes further validates our claim.
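As a rough illustration of the weighted-averaging idea, the sketch below combines neighbors' offset and drift estimates into a drift-compensated local time. The message fields and weighting scheme are illustrative assumptions, not the WICS/WECS protocol as specified in the paper.

```python
# Schematic sketch of weighted-average clock synchronization with drift
# compensation. Fields and weights are illustrative, not the WICS spec.

class NodeClock:
    def __init__(self):
        self.offset = 0.0   # estimated offset to the reference clock (s)
        self.drift = 0.0    # estimated drift rate relative to reference

    def corrected_time(self, local_time, last_sync_time):
        """Map a raw local clock reading to estimated reference time."""
        elapsed = local_time - last_sync_time
        return local_time + self.offset + self.drift * elapsed

    def update(self, neighbor_estimates):
        """Combine neighbors' (offset, drift, weight) estimates.

        Giving nodes nearer the reference larger weights (e.g., by hop
        count) limits how errors accumulate across the network.
        """
        total = sum(w for _, _, w in neighbor_estimates)
        self.offset = sum(o * w for o, _, w in neighbor_estimates) / total
        self.drift = sum(d * w for _, d, w in neighbor_estimates) / total

clock = NodeClock()
# (offset, drift, weight) triples received from three neighbors
clock.update([(0.012, 2e-6, 3.0), (0.015, 3e-6, 1.0), (0.010, 1e-6, 2.0)])
print(clock.corrected_time(local_time=100.0, last_sync_time=90.0))
```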

Relevance:

80.00%

Publisher:

Abstract:

Many problems in control and signal processing can be formulated as sequential decision problems for general state space models. However, except for some simple models, one cannot obtain analytical solutions and has to resort to approximation. In this thesis, we investigate problems where Sequential Monte Carlo (SMC) methods can be combined with gradient-based search to provide solutions to online optimisation problems. The main contributions of the thesis are as follows. Chapter 4 focuses on solving the sensor scheduling problem when cast as a controlled Hidden Markov Model. We consider the case in which the state, observation, and action spaces are continuous. This general case is important, as it is the natural framework for many applications. In sensor scheduling, our aim is to minimise the variance of the estimation error of the hidden state with respect to the action sequence. We present a novel SMC method that uses a stochastic gradient algorithm to find optimal actions. This is in contrast to existing works in the literature, which only solve approximations to the original problem. In Chapter 5 we present how SMC can be used to solve a risk-sensitive control problem. We adopt the Feynman-Kac representation of a controlled Markov chain flow and exploit the properties of the logarithmic Lyapunov exponent, which leads to a policy gradient solution for the parameterised problem. The resulting SMC algorithm follows a structure similar to that of the Recursive Maximum Likelihood (RML) algorithm for online parameter estimation. In Chapters 6, 7, and 8, dynamic graphical models are combined with state space models for the purpose of online decentralised inference. We concentrate on the distributed parameter estimation problem, using two maximum likelihood techniques, namely Recursive Maximum Likelihood (RML) and Expectation Maximization (EM). The resulting algorithms can be interpreted as an extension of the Belief Propagation (BP) algorithm that computes likelihood gradients. In order to design an SMC algorithm, Chapter 8 uses a nonparametric approximation of Belief Propagation. The algorithms were successfully applied to solve the sensor localisation problem for sensor networks of small and medium size.
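For readers unfamiliar with SMC, the sketch below implements a minimal bootstrap particle filter for a generic scalar state space model; this is the basic machinery the thesis combines with gradient search, not any of the thesis's specific algorithms.

```python
import numpy as np

# Minimal bootstrap particle filter for a scalar linear-Gaussian state
# space model. The model and parameters are generic placeholders.

rng = np.random.default_rng(0)
N, T = 500, 50                       # particles, time steps
a, sigma_x, sigma_y = 0.9, 1.0, 0.5  # transition and noise scales

# Simulate data from x_t = a x_{t-1} + u_t, y_t = x_t + v_t
x_true = np.zeros(T)
y = np.zeros(T)
for t in range(1, T):
    x_true[t] = a * x_true[t - 1] + sigma_x * rng.standard_normal()
    y[t] = x_true[t] + sigma_y * rng.standard_normal()

particles = rng.standard_normal(N)
for t in range(1, T):
    # Propagate through the transition kernel (the "bootstrap" proposal)
    particles = a * particles + sigma_x * rng.standard_normal(N)
    # Weight by the observation likelihood, then normalize
    logw = -0.5 * ((y[t] - particles) / sigma_y) ** 2
    w = np.exp(logw - logw.max())
    w /= w.sum()
    # Multinomial resampling to combat weight degeneracy
    particles = particles[rng.choice(N, size=N, p=w)]

print(f"filtered mean at T: {particles.mean():+.3f} (truth {x_true[-1]:+.3f})")
```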

Relevance:

80.00%

Publisher:

Abstract:

Liquefaction is a devastating instability associated with saturated, loose, cohesionless soils. It poses a significant risk to distributed infrastructure systems that are vital for the security, economy, safety, health, and welfare of societies. In order to make our cities resilient to the effects of liquefaction, it is important to be able to identify the areas that are most susceptible. Prevalent methodologies for identifying susceptible areas include conventional slope stability analysis and the use of so-called liquefaction charts. However, these methodologies have limitations, which motivate our research objectives. In this dissertation, we investigate the mechanics of the origin of liquefaction in a laboratory test using grain-scale simulations, which helps (i) understand why certain soils liquefy under certain conditions, and (ii) identify a necessary precursor for the onset of flow liquefaction. Furthermore, we investigate the mechanics underlying liquefaction charts using a continuum plasticity model; this can help in modeling the surface hazards of liquefaction following an earthquake. Finally, we investigate the microscopic definition of soil shear wave velocity, a property used as an index to quantify the liquefaction resistance of soil. We show that anisotropy in fabric, or grain arrangement, can be correlated with anisotropy in shear wave velocity. This has the potential to quantify the effects of sample disturbance when a soil specimen is extracted from the field. In conclusion, by developing a more fundamental understanding of soil liquefaction, this dissertation takes the necessary steps toward a more physical assessment of liquefaction susceptibility at the field scale.

Relevance:

80.00%

Publisher:

Abstract:

This paper describes work on radio-over-fiber distributed antenna systems (DAS) for improving the quality of radio coverage for in-building applications. The DAS network has also been shown to provide improved detection of Gen 2 UHF RFID tags. Using pre-distortion to reduce the problem of the RFID second harmonic, a simple heterogeneous sensing and communications system is demonstrated. © 2011 Northumbria University.
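A toy numerical example can show why pre-distortion suppresses a second harmonic. The sketch below assumes a memoryless quadratic nonlinearity and pre-shapes the input with its approximate inverse; the RF hardware details of the demonstrated system are not modeled.

```python
import numpy as np

# Toy second-harmonic suppression by pre-distortion, assuming a memoryless
# quadratic nonlinearity y = x + a2*x^2. Feeding the pre-shaped signal
# x' = x - a2*x^2 cancels the second harmonic to first order in a2.

a2 = 0.1
t = np.linspace(0, 1, 4096, endpoint=False)
x = np.cos(2 * np.pi * 8 * t)                 # tone at FFT bin 8

def channel(s):
    return s + a2 * s**2                      # creates a second harmonic

def h2_level(s):
    spec = np.abs(np.fft.rfft(s)) / len(s)
    return 20 * np.log10(spec[16] / spec[8])  # 2nd harmonic vs fundamental

print(f"without pre-distortion: {h2_level(channel(x)):6.1f} dBc")
print(f"with pre-distortion:    {h2_level(channel(x - a2 * x**2)):6.1f} dBc")
```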

Relevance:

80.00%

Publisher:

Abstract:

This paper examines the impact of two simple precoding schemes on the capacity of 3 × 3 MIMO-enabled radio-over-fiber (RoF) distributed antenna systems (DAS) with excess transmit antennas. Specifically, phase-shift-only transmit beamforming and antenna selection are compared. It is found that for two typical indoor propagation scenarios, both strategies offer double the capacity gain that non-precoding MIMO DAS offers over traditional MIMO collocated antenna systems (CAS), with capacity improvements of 3.2-4.2 bit/s/Hz. Further, antenna selection shows similar performance to phase-only beamforming, differing by <0.5% and offering median capacities of 94 bit/s/Hz and 82 bit/s/Hz in the two propagation scenarios respectively. Because optical DASs enable precise, centralized control of remote antennas, they are well suited for implementing these beamforming schemes. Antenna selection, in particular, is a simple and effective means of increasing MIMO DAS capacity. © 2013 IEEE.
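As a rough sketch of the antenna-selection strategy, the code below picks the capacity-maximizing 3-of-5 transmit antenna subset for a random channel, using the standard MIMO capacity formula. An i.i.d. Rayleigh channel is assumed for simplicity; the paper's indoor DAS channel measurements and capacity figures are not reproduced.

```python
import numpy as np
from itertools import combinations

# Transmit antenna selection with excess antennas: choose the 3-of-5
# column subset of H maximizing C = log2 det(I + (SNR/Nt) H H^H).
# i.i.d. Rayleigh fading is assumed here as a stand-in channel model.

rng = np.random.default_rng(1)
n_rx, n_tx_total, n_tx_used, snr = 3, 5, 3, 10.0  # SNR in linear units

def capacity(H, snr_linear):
    n_t = H.shape[1]
    gram = H @ H.conj().T
    return np.log2(np.linalg.det(np.eye(H.shape[0]) +
                                 (snr_linear / n_t) * gram).real)

H = (rng.standard_normal((n_rx, n_tx_total)) +
     1j * rng.standard_normal((n_rx, n_tx_total))) / np.sqrt(2)

best = max(combinations(range(n_tx_total), n_tx_used),
           key=lambda idx: capacity(H[:, list(idx)], snr))
print(f"all {n_tx_total} antennas: {capacity(H, snr):.2f} bit/s/Hz")
print(f"best subset {best}: {capacity(H[:, list(best)], snr):.2f} bit/s/Hz")
```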

Relevance:

80.00%

Publisher:

Abstract:

Distributed component systems currently rely on crosscutting concerns provided by middleware to implement non-functional features. While this design approach simplifies the development of distributed component systems, it makes effective performance prediction at design time difficult. This paper studies an aspect-template-based performance prediction method for distributed component systems: crosscutting concerns are distilled into reusable aspect templates, and aspect-oriented modeling techniques are used to automatically construct a complete component performance model that incorporates both the middleware crosscutting concerns and middleware performance factors. The model's predictions can help designers discover component design defects early and screen candidate designs.

Relevance:

80.00%

Publisher:

Abstract:

Based on multi-agent system theory, this work investigates the principles, methods, and techniques for realizing a distributed multi-robot coordination system for complex tasks in unstructured, uncertain environments. The proposed hierarchical hybrid coordination architecture, network-based communication model, and finite-state-machine-based integration of planning and control take full account of the characteristics of complex tasks and real natural environments. By building a fully physical multi-mobile-robot experimental platform, the key technologies of planning, control, sensing, communication, coordination, and cooperation were developed and integrated, orienting research on distributed multi-robot coordination directly toward practical application. Demonstration experiments on formation control and material handling illustrate the broad application prospects of multi-robot coordination techniques.
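The abstract does not give the finite-state machine itself, so the sketch below shows a minimal, hypothetical FSM integrating planning and control for a single robot; the states, events, and transitions are invented for illustration.

```python
# A minimal, hypothetical finite-state machine integrating planning and
# control for one robot. States, events, and transitions are illustrative.

TRANSITIONS = {
    ("idle", "task_assigned"): "planning",
    ("planning", "plan_ready"): "executing",
    ("executing", "obstacle_detected"): "replanning",
    ("replanning", "plan_ready"): "executing",
    ("executing", "task_done"): "idle",
}

class RobotFSM:
    def __init__(self):
        self.state = "idle"

    def handle(self, event):
        # Unknown (state, event) pairs leave the state unchanged
        self.state = TRANSITIONS.get((self.state, event), self.state)
        return self.state

fsm = RobotFSM()
for event in ["task_assigned", "plan_ready", "obstacle_detected",
              "plan_ready", "task_done"]:
    print(f"{event:18s} -> {fsm.handle(event)}")
```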

Relevance:

80.00%

Publisher:

Abstract:

The exploding demand for services like the World Wide Web reflects the potential presented by globally distributed information systems. The number of WWW servers world-wide has doubled every 3 to 5 months since 1993, outstripping even the growth of the Internet. At each of these self-managed sites, the Common Gateway Interface (CGI) and Hypertext Transfer Protocol (HTTP) already constitute a rudimentary basis for contributing local resources to remote collaborations. However, the Web has serious deficiencies that make it unsuited for use as a true medium for metacomputing, the process of bringing hardware, software, and expertise from many geographically dispersed sources to bear on large-scale problems. These deficiencies are, paradoxically, the direct result of the very simple design principles that enabled its exponential growth. There are many symptoms of the problems exhibited by the Web: disk and network resources are consumed extravagantly; information search and discovery are difficult; protocols are aimed at data movement rather than task migration and ignore the potential for distributing computation. However, all of these can be seen as aspects of a single problem: as a distributed system for metacomputing, the Web offers unpredictable performance and unreliable results. The goal of our project is to use the Web as a medium (within either the global Internet or an enterprise intranet) for metacomputing in a reliable way and with performance guarantees. We attack this problem at four levels:

(1) Resource Management Services: Globally distributed computing allows novel approaches to the old problems of performance guarantees and reliability. Our first set of ideas involves setting up a family of real-time resource management models organized by the Web Computing Framework, with a standard Resource Management Interface (RMI), a Resource Registry, a Task Registry, and resource management protocols that allow resource needs and availability information to be collected and disseminated, so that a family of algorithms with varying computational precision and accuracy of representation can be chosen to meet real-time and reliability constraints.

(2) Middleware Services: Complementary to techniques for allocating and scheduling available resources to serve application needs under real-time and reliability constraints, the second set of ideas aims at reducing communication latency, traffic congestion, server workload, and so on. We develop customizable middleware services that exploit application characteristics in traffic analysis to drive new server/browser design strategies (e.g., exploiting the self-similarity of Web traffic), derive document access patterns via multiserver cooperation, and use them in speculative prefetching, document caching, and aggressive replication to reduce server load and bandwidth requirements.

(3) Communication Infrastructure: To achieve any guarantee of quality of service or performance, one must work at the network layer, which provides the basic guarantees of bandwidth, latency, and reliability. The third area is therefore a set of new techniques in network service and protocol design.

(4) Object-Oriented Web Computing Framework: A useful resource management system must deal with job priority, fault tolerance, quality of service, complex resources such as ATM channels, probabilistic models, and so on, and models must be tailored to represent the best tradeoff for a particular setting. This requires a family of models, organized within an object-oriented framework, because no one-size-fits-all approach is appropriate. This presents a software engineering challenge requiring the integration of solutions at all levels: algorithms, models, protocols, and profiling and monitoring tools. The framework captures the abstract class interfaces of the collection of cooperating components, but allows the concretization of each component to be driven by the requirements of a specific approach and environment.
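As a toy illustration of the Resource Registry / Task Registry idea from level (1), the sketch below registers resources with advertised capacities and greedily matches tasks to them; all names and fields are invented and do not reflect the project's actual Resource Management Interface.

```python
from dataclasses import dataclass

# Hypothetical Resource Registry / Task Registry sketch: resources
# advertise capacities, tasks declare requirements, a matcher pairs them.

@dataclass
class Resource:
    name: str
    cpu_mflops: float      # advertised compute capacity
    bandwidth_mbps: float  # advertised network bandwidth

@dataclass
class Task:
    name: str
    min_mflops: float
    min_bandwidth_mbps: float

def match(tasks, resources):
    """Greedy first-fit matching of tasks to registered resources."""
    free = list(resources)
    for task in tasks:
        fit = next((r for r in free
                    if r.cpu_mflops >= task.min_mflops
                    and r.bandwidth_mbps >= task.min_bandwidth_mbps), None)
        if fit:
            free.remove(fit)
            yield task.name, fit.name

registry = [Resource("hostA", 200, 10), Resource("hostB", 50, 100)]
tasks = [Task("render", 150, 5), Task("transfer", 10, 80)]
print(dict(match(tasks, registry)))  # {'render': 'hostA', 'transfer': 'hostB'}
```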

Relevance:

80.00%

Publisher:

Abstract:

As the World Wide Web (Web) is increasingly adopted as the infrastructure for large-scale distributed information systems, issues of performance modeling become ever more critical. In particular, locality of reference is an important property in the performance modeling of distributed information systems. In the case of the Web, understanding the nature of reference locality will help improve the design of middleware, such as caching, prefetching, and document dissemination systems. For example, good measurements of reference locality would allow us to generate synthetic reference streams with accurate performance characteristics, would allow us to compare empirically measured streams to explain differences, and would allow us to predict expected performance for system design and capacity planning. In this paper we propose models for both temporal and spatial locality of reference in streams of requests arriving at Web servers. We show that simple models based only on document popularity (likelihood of reference) are insufficient for capturing either temporal or spatial locality. Instead, we rely on an equivalent, but numerical, representation of a reference stream: a stack distance trace. We show that temporal locality can be characterized by the marginal distribution of the stack distance trace, and we propose models for typical distributions and compare their cache performance to our traces. We also show that spatial locality in a reference stream can be characterized using the notion of self-similarity. Self-similarity describes long-range correlations in the dataset, which is a property that previous researchers have found hard to incorporate into synthetic reference strings. We show that stack distance strings appear to be strongly self-similar, and we provide measurements of the degree of self-similarity in our traces. Finally, we discuss methods for generating synthetic Web traces that exhibit the properties of temporal and spatial locality that we measured in our data.
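The stack distance representation is concrete enough to sketch: each request's stack distance is the depth of the document in an LRU stack, that is, the number of distinct documents referenced since its last reference (infinite on first reference). A straightforward Python version:

```python
# Compute a stack distance trace from a reference stream. This is the
# numerical representation of a reference stream discussed above.

def stack_distance_trace(references):
    stack = []                      # LRU stack, most recent at index 0
    trace = []
    for doc in references:
        if doc in stack:
            depth = stack.index(doc) + 1
            stack.remove(doc)
        else:
            depth = float("inf")    # first reference: infinite distance
        stack.insert(0, doc)        # move document to the top of the stack
        trace.append(depth)
    return trace

stream = ["a", "b", "a", "c", "b", "b", "a"]
print(stack_distance_trace(stream))  # [inf, inf, 2, inf, 3, 1, 3]
```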

Relevance:

80.00%

Publisher:

Abstract:

With web caching and cache-related services like CDNs and edge services playing an ever larger role in the modern Internet, the weak consistency and coherence provisions in current web protocols are becoming increasingly significant and are drawing the attention of the standards community [LCD01]. Toward this end, we present definitions of consistency and coherence for web-like environments, that is, distributed client-server information systems where the semantics of interactions with resources are more general than the read/write operations found in memory hierarchies and distributed file systems. We then present a brief review of proposed mechanisms for strengthening the consistency of caches in the web, focusing upon their conceptual contributions and their weaknesses in real-world practice. These insights motivate a new mechanism, which we call "Basis Token Consistency" (BTC); when implemented at the server, this mechanism allows any client (independent of the presence and conformity of any intermediaries) to maintain a self-consistent view of the server's state. This is accomplished by annotating responses with additional per-resource application information, which allows client caches to recognize the obsolescence of currently cached entities and to identify responses from other caches that are already stale in light of what has already been seen. The mechanism requires no deviation from the existing client-server communication model and does not require servers to maintain any additional per-client state. We discuss how our mechanism could be integrated into a fragment-assembling Content Management System (CMS), and present a simulation-driven performance comparison between the BTC algorithm and the use of the Time-To-Live (TTL) heuristic.
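A schematic sketch of the basis-token idea as described, with invented field names rather than the paper's wire format: responses carry version tokens for the resources they depend on, and a client cache invalidates entries whose basis resources have visibly advanced.

```python
# Schematic basis-token sketch: the server annotates each response with
# version tokens for the underlying resources it depends on; a client
# cache that later observes a newer token for any basis resource can
# recognize its cached entity as obsolete, with no per-client server
# state. Structure and names are illustrative, not the BTC wire format.

class ClientCache:
    def __init__(self):
        self.entries = {}      # url -> (body, {resource: version})
        self.latest = {}       # highest version seen per basis resource

    def store(self, url, body, tokens):
        self.entries[url] = (body, dict(tokens))
        for resource, version in tokens.items():
            self.latest[resource] = max(self.latest.get(resource, 0), version)

    def lookup(self, url):
        entry = self.entries.get(url)
        if entry is None:
            return None
        body, tokens = entry
        # Stale if any basis resource has advanced past the cached version
        if any(self.latest.get(r, 0) > v for r, v in tokens.items()):
            del self.entries[url]
            return None
        return body

cache = ClientCache()
cache.store("/page", "<old>", {"article:42": 7})
cache.store("/other", "<x>", {"article:42": 8})  # newer token observed
print(cache.lookup("/page"))                     # None: recognized as stale
```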

Relevance:

80.00%

Publisher:

Abstract:

Parallel computing is now widely used in numerical simulation, particularly for application codes based on finite difference and finite element methods. A popular and successful technique for parallelizing such codes on large distributed memory systems is to partition the mesh into sub-domains that are then allocated to processors. The code then executes in parallel, using the SPMD methodology with message passing for inter-processor interactions. In order to improve the parallel efficiency of an imbalanced structured mesh CFD code, a new dynamic load balancing (DLB) strategy has been developed in which the processor partition range limits of just one of the partitioned dimensions use non-coincidental limits, as opposed to coincidental limits. The ‘local’ partition limit change allows greater flexibility in obtaining a balanced load distribution, as the workload increase, or decrease, on a processor is no longer restricted by the ‘global’ (coincidental) limit change. The automatic implementation of this generic DLB strategy within an existing parallel code is presented in this chapter, along with some preliminary results.
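As a simplified one-dimensional illustration of rebalancing partition range limits, the sketch below chooses cut points along a mesh dimension so each processor receives roughly an equal share of a per-cell workload profile; the chapter's actual algorithm, with non-coincidental limits chosen per processor row, is not reproduced.

```python
import numpy as np

# Simplified 1-D step of dynamic load balancing: given a per-index
# workload profile along one mesh dimension, choose cut points so each of
# P processors receives roughly 1/P of the total work. Allowing each
# processor row its own ("non-coincidental") limits refines this further;
# only the single-dimension step is sketched here.

def balanced_limits(workload, n_parts):
    cumulative = np.cumsum(workload)
    targets = cumulative[-1] * np.arange(1, n_parts) / n_parts
    return np.searchsorted(cumulative, targets) + 1  # cut indices

workload = np.array([1, 1, 1, 8, 8, 8, 1, 1, 1])     # imbalanced profile
print(balanced_limits(workload, 3))  # cuts drawn toward the heavy cells
```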

Relevance:

80.00%

Publisher:

Abstract:

Code parallelization using OpenMP for shared memory systems is considerably easier than using message passing for distributed memory systems. Despite this, it remains a challenge to use OpenMP to parallelize application codes in a way that yields effective, scalable performance on a shared memory parallel system. We describe an environment that assists the programmer in the various tasks of code parallelization, achieving this in a greatly reduced time frame and with a lower level of skill required. The parallelization environment includes a number of tools that address the main tasks of parallelism detection, OpenMP source code generation, debugging, and optimization. These tools include a high-quality, fully interprocedural dependence analysis with user interaction capabilities to facilitate the generation of efficient parallel code, an automatic relative debugging tool to identify erroneous user decisions in that interaction, and performance profiling to identify bottlenecks. Finally, experiences of parallelizing some NASA application codes are presented to illustrate some of the benefits of using the evolving environment.