51 results for Distributed lag model


Relevance:

30.00%

Publisher:

Abstract:

Given the lack of proper constraints on earthquake mechanisms in cratonic interiors and the general absence of a good-quality database, we reassess the seismic hazard in the province of Kerala, part of the 'stable continental interior', based on an improved historical and instrumental database. The temporal pattern of the current seismicity suggests that more than 60% of the microtremors in Kerala occur with a time lag after the peak rainfall, indicating that hydroseismicity may be a plausible model to explain the low-level seismicity in this region. Further, the increase in overall seismicity rate in recent years is attributed to increased anthropogenic activity, including changes in hydrological pathways brought about by rapid landscape change. Our analysis of the historical database eliminates a few events previously ascribed to this region and identifies a few events not previously noted. The improved historical database suggests that the central midland region is more prone to seismic activity than other parts of Kerala. This region appears to have generated a larger number of significant earthquakes, the most prominent being the multiple events (doublets) of 1856 and 1953, whose magnitudes are comparable to those of the 2000/2001 (central Kerala) events. The occurrence of these historical events and the recent earthquakes, together with local geology indicative of pervasive faulting shown by widely distributed pseudotachylite veins, suggests that the NNW-SSE trending faults in central midland Kerala may host discrete, potentially active sources capable of generating light to moderate earthquakes. The frequency of earthquakes in central Kerala evident from the historical database requires that the seismic codes stipulated for this region be made mandatory.

Relevance:

30.00%

Publisher:

Abstract:

The simulation technique has gained much importance in performance studies of Concurrency Control (CC) algorithms for distributed database systems. However, details regarding the simulation methodology and implementation are seldom mentioned in the literature. One objective of this paper is to elaborate on the simulation methodology, using SIMULA. Detailed studies have been carried out on a centralised CC algorithm and a modified version of it. The results compare well with a previously reported study of these algorithms. Here, additional results concerning the update intensiveness of transactions and the degree of conflict are obtained. The degree of conflict is measured quantitatively and is seen to be a useful performance index. Regression analysis has been carried out on the results, and an optimisation study using the regression model has been performed to minimise the response time. Such a study may prove useful in the design of distributed database systems.
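
The regression-and-optimisation step can be sketched generically; the quadratic response-surface form, the factor names (update fraction, degree of conflict) and the data points below are placeholders for illustration, not results from the paper.

    # Hypothetical illustration: fit a quadratic regression model of mean
    # response time to simulation outputs, then minimise it numerically.
    import numpy as np
    from scipy.optimize import minimize

    # Assumed simulation results: (update_fraction, degree_of_conflict) -> response time (s)
    X = np.array([[0.1, 0.05], [0.1, 0.20], [0.3, 0.05],
                  [0.3, 0.20], [0.5, 0.05], [0.5, 0.20]])
    y = np.array([0.42, 0.55, 0.51, 0.70, 0.66, 0.93])

    # Design vector for a full quadratic response surface.
    def features(u, c):
        return np.array([1.0, u, c, u * u, c * c, u * c])

    A = np.array([features(u, c) for u, c in X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)   # regression coefficients

    # Minimise the fitted response time over the operating region.
    res = minimize(lambda p: features(*p) @ beta, x0=[0.3, 0.1],
                   bounds=[(0.1, 0.5), (0.05, 0.2)])
    print("predicted optimum:", res.x, "response time:", res.fun)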

Relevance:

30.00%

Publisher:

Abstract:

It is shown how the single-site coherent potential approximation and the averaged T-matrix approximation become exact in the calculation of the averaged single-particle Green function of the electron in the Anderson model when the site energies are distributed randomly with a Lorentzian distribution. Using these approximations, Lloyd's exact result is reproduced.
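
For reference, the exact result being recovered is the standard Lloyd-model identity (notation ours, not taken from the paper):

    P(\epsilon) = \frac{1}{\pi}\,\frac{\gamma}{\epsilon^{2} + \gamma^{2}}
    \;\;\Longrightarrow\;\;
    \big\langle G(E) \big\rangle = G_{0}(E + i\gamma),

i.e. averaging the single-particle Green function over independent Lorentzian site energies of width \gamma amounts to evaluating the Green function G_0 of the ordered lattice at the complex energy E + i\gamma; equivalently, the exact self-energy is the energy-independent constant \Sigma = -i\gamma, which is why both the single-site CPA and the averaged T-matrix approximation reproduce it.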

Relevance:

30.00%

Publisher:

Abstract:

We extend the modeling heuristic of (Harsha et al. 2006, IEEE IWQoS '06, pp. 178-187) to evaluate the performance of an IEEE 802.11e infrastructure network carrying packet telephone calls, streaming video sessions and TCP-controlled file downloads, using Enhanced Distributed Channel Access (EDCA). We identify the time boundaries of activities on the channel (called channel slot boundaries) and derive a Markov renewal process of the contending nodes at these epochs. This is achieved by using, as the attempt probabilities of the contending nodes, those obtained from the saturation fixed-point analysis of (Ramaiyan et al. 2005, Proceedings of ACM SIGMETRICS '05; journal version accepted for publication in IEEE TON). Regenerative analysis of this MRP yields the desired steady-state performance measures. We then use the MRP model to develop an effective bandwidth approach for bounding the size of the buffer required at the video queue of the AP, such that the streaming video packet loss probability is kept below 1%. The results obtained match well with simulations using the network simulator ns-2. We find that, with the default IEEE 802.11e EDCA parameters for access categories AC 1, AC 2 and AC 3, the voice call capacity decreases if even one streaming video session and one TCP file download are initiated by some wireless station. Reducing the number of voice calls then increases the video downlink stream throughput by 0.38 Mbps and the file download capacity by 0.14 Mbps per voice call removed (for the 11 Mbps PHY). We find that a buffer size of 75 KB is sufficient to ensure that the video packet loss probability at the QAP is within 1%.
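
The effective bandwidth buffer-sizing step can be sketched as follows; the two-state Markov-modulated video source, its rates and the service rate are illustrative assumptions, not the traffic or channel model of the paper (only the 1% loss target is taken from the text).

    # Hypothetical sketch of effective-bandwidth buffer dimensioning:
    # given a two-state Markov-modulated arrival process and a service rate,
    # find the decay rate theta* and a buffer B with overflow probability <= eps.
    import numpy as np

    P = np.array([[0.95, 0.05],      # assumed state transition matrix (per channel slot)
                  [0.10, 0.90]])
    rate = np.array([2.0, 12.0])     # assumed arrivals (kbits) per slot in each state
    c = 8.0                          # assumed service (kbits) per slot at the video queue
    eps = 0.01                       # target packet loss probability (1%)

    def eff_bw(theta):
        """Effective bandwidth of the Markov-modulated source at parameter theta."""
        M = P @ np.diag(np.exp(theta * rate))
        return np.log(max(abs(np.linalg.eigvals(M)))) / theta

    # Largest theta with eff_bw(theta) <= c, found by bisection.
    lo, hi = 1e-6, 10.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if eff_bw(mid) <= c else (lo, mid)
    theta_star = lo

    # Buffer so that P(Q > B) ~ exp(-theta* B) <= eps.
    B = np.log(1.0 / eps) / theta_star
    print(f"theta* = {theta_star:.3f}, buffer = {B:.1f} kbits")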

Relevance:

30.00%

Publisher:

Abstract:

We address the problem of distributed space-time coding with reduced decoding complexity for wireless relay networks. The transmission protocol follows a two-hop model in which the source transmits a vector in the first hop, and in the second hop the relays transmit a vector that is a transformation of the received vector by a relay-specific unitary transformation. Design criteria are derived for this system model, and codes achieving full diversity are proposed. For a fixed number of relay nodes, the general system model considered in this paper admits code constructions with lower decoding complexity than codes based on some earlier system models.
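
A hedged sketch of the two-hop protocol described above, written in the spirit of standard amplify-and-forward distributed space-time coding models (the symbols, scaling factors and power split are our assumptions, not the paper's):

    \mathbf{r}_j = \sqrt{\pi_1 P}\, h_j \mathbf{s} + \mathbf{n}_j
    \qquad \text{(first hop, relay } j\text{)},

    \mathbf{t}_j = \sqrt{\tfrac{\pi_2 P}{\pi_1 P + 1}}\; U_j \mathbf{r}_j,
    \qquad U_j U_j^{H} = I
    \qquad \text{(relay-specific unitary transformation)},

    \mathbf{y} = \sum_{j=1}^{R} g_j \mathbf{t}_j + \mathbf{w}
    = \big[\, U_1\mathbf{s} \;\; U_2\mathbf{s} \;\cdots\; U_R\mathbf{s} \,\big]\, \tilde{\mathbf{h}} + \tilde{\mathbf{w}},

where \tilde{h}_j \propto h_j g_j and \tilde{\mathbf{w}} collects the forwarded and destination noise. The destination thus sees an effective space-time code matrix whose j-th column is U_j \mathbf{s}, and the design criteria constrain the choice of the unitary matrices U_j.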

Relevance:

30.00%

Publisher:

Abstract:

In the context of the IEEE 802.11e standard for WLANs, we provide an analytical model for obtaining the maximum number of VoIP calls that can be supported on HCCA, such that the delay QoS constraint of the accepted calls is met, when TCP downloads coexist on EDCA. In this scenario, we derive the TCP download throughput using an analytical model for the case where only TCP sessions are present in the WLAN. We show that the analytical model for combined voice and TCP transfers provides accurate results in comparison with simulations (using ns-2).

Relevance:

30.00%

Publisher:

Abstract:

In this paper we develop and numerically explore the modeling heuristic of using saturation attempt probabilities as state-dependent attempt probabilities in an IEEE 802.11e infrastructure network carrying packet telephone calls and TCP-controlled file downloads, using Enhanced Distributed Channel Access (EDCA). We build upon the fixed-point analysis and performance insights in [1]. When a certain number of nodes of each class are contending for the channel (i.e., have nonempty queues), their attempt probabilities are taken to be those obtained from the saturation analysis for that number of nodes. We then model the queue dynamics at the network nodes. With the proposed heuristic, the system evolution at channel slot boundaries becomes a Markov renewal process, and regenerative analysis yields the desired performance measures. The results obtained from this approach match well with ns-2 simulations. We find that, with the default IEEE 802.11e EDCA parameters for AC 1 and AC 3, the voice call capacity decreases if even one file download is initiated by some station. Reducing the number of voice calls then increases the file download capacity almost linearly (by 1/3 Mbps per voice call for the 11 Mbps PHY).
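
The regenerative (Markov renewal-reward) step can be sketched generically; the slot types, transition matrix, slot lengths and per-slot rewards below are placeholder numbers, not the values produced by the fixed-point analysis of [1].

    # Generic Markov renewal-reward computation (placeholder numbers):
    # states are channel-slot types (e.g. idle, success, collision), with
    # transition matrix P of the embedded chain, mean slot durations tau,
    # and mean reward per slot r (e.g. payload bits carried).
    import numpy as np

    P = np.array([[0.6, 0.3, 0.1],          # assumed embedded-chain transition matrix
                  [0.5, 0.4, 0.1],
                  [0.5, 0.3, 0.2]])
    tau = np.array([20e-6, 900e-6, 500e-6])  # assumed mean slot lengths (s)
    r   = np.array([0.0, 8000.0, 0.0])       # assumed bits delivered in each slot type

    # Stationary distribution pi of the embedded chain: pi = pi P, sum(pi) = 1.
    A = np.vstack([P.T - np.eye(3), np.ones(3)])
    b = np.array([0.0, 0.0, 0.0, 1.0])
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)

    # Time-average throughput = E[reward per cycle] / E[cycle length].
    throughput = (pi @ r) / (pi @ tau)
    print(f"long-run throughput = {throughput / 1e6:.2f} Mbps")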

Relevance:

30.00%

Publisher:

Abstract:

The author presents adaptive control techniques for controlling the flow of real-time jobs from the peripheral processors (PPs) to the central processor (CP) of a distributed system with a star topology. He considers two classes of flow control mechanisms: (1) proportional control, where a certain proportion of the load offered to each PP is sent to the CP, and (2) threshold control, where there is a maximum rate at which each PP can send jobs to the CP. The problem is to obtain good algorithms for dynamically adjusting the control level at each PP in order to prevent overload of the CP, when the load offered by the PPs is unknown and varying. The author formulates the problem approximately as a standard system control problem in which the system has unknown parameters that are subject to change. Using well-known techniques (e.g., naive-feedback-controller and stochastic approximation techniques), he derives adaptive controls for the system control problem. He demonstrates the efficacy of these controls for the original problem by using the control algorithms in simulations of a queuing model of the CP operating under the load controls.
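
One way to realise the threshold-adjustment loop is a Robbins-Monro style stochastic-approximation update; the target CP utilisation, the assumed load curve and the step-size schedule below are illustrative assumptions, not the controls derived in the paper.

    # Hypothetical sketch of a stochastic-approximation update of the per-PP
    # threshold (maximum admitted rate), driven by noisy observations of CP load.
    import random

    target_util = 0.8          # assumed desired CP utilisation
    theta = 5.0                # initial threshold (jobs/s) at this PP
    a0 = 1.0                   # step-size constant

    def observe_cp_util(threshold):
        """Placeholder measurement: noisy CP utilisation as a function of the threshold."""
        true_util = min(1.0, 0.12 * threshold)          # assumed load curve
        return true_util + random.gauss(0.0, 0.03)      # measurement noise

    for n in range(1, 201):
        u = observe_cp_util(theta)
        step = a0 / n                                   # Robbins-Monro step sizes
        theta = max(0.0, theta - step * (u - target_util))  # push utilisation toward target

    print(f"adapted threshold = {theta:.2f} jobs/s")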

Relevance:

30.00%

Publisher:

Abstract:

An important issue in the design of a distributed computing system (DCS) is the development of a suitable protocol. This paper presents an effort to systematize the protocol design procedure for a DCS. Protocol design and development can be divided into six phases: specification of the DCS, specification of protocol requirements, protocol design, specification and validation of the designed protocol, performance evaluation, and hardware/software implementation. This paper describes techniques for the second and third phases, while the first phase has been considered by the authors in their earlier work. Matrix-based and set-theoretic approaches are used for the specification of the DCS and of the protocol requirements. These two formal specification techniques form the basis of a simple and straightforward procedure for designing the protocol. The applicability of the design procedure is illustrated with the example of a computing system carried on board a spacecraft. A Petri-net based approach is adopted to model the protocol. The methodology developed in this paper can be used in other DCS applications.
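
Since the protocol is modelled with a Petri net, a minimal marking/firing sketch may help fix ideas; the places, transitions and the toy request/acknowledge exchange below are hypothetical and are not the spacecraft protocol of the paper.

    # Minimal Petri-net firing-rule sketch (toy request/acknowledge protocol).
    # A transition is enabled when every input place holds enough tokens;
    # firing consumes the input tokens and deposits tokens in the output places.

    pre = {                       # input places (place -> tokens required) per transition
        "send_req": {"sender_ready": 1},
        "recv_req": {"req_in_channel": 1, "receiver_ready": 1},
        "send_ack": {"req_received": 1},
        "recv_ack": {"ack_in_channel": 1},
    }
    post = {                      # output places per transition
        "send_req": {"req_in_channel": 1},
        "recv_req": {"req_received": 1},
        "send_ack": {"ack_in_channel": 1, "receiver_ready": 1},
        "recv_ack": {"sender_ready": 1},
    }
    marking = {"sender_ready": 1, "receiver_ready": 1}   # initial marking

    def enabled(t):
        return all(marking.get(p, 0) >= k for p, k in pre[t].items())

    def fire(t):
        assert enabled(t), f"{t} is not enabled"
        for p, k in pre[t].items():
            marking[p] = marking[p] - k
        for p, k in post[t].items():
            marking[p] = marking.get(p, 0) + k

    for t in ["send_req", "recv_req", "send_ack", "recv_ack"]:
        fire(t)
    print(marking)   # token count returns to the initial marking: the toy cycle is live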

Relevance:

30.00%

Publisher:

Abstract:

There are a number of large networks that occur in many problems dealing with the flow of power, communication signals, water, gas, transportable goods, etc. Both the design and the planning of these networks involve optimization problems. The first part of this paper introduces the common characteristics of a nonlinear network (the network may be linear, the objective function may be nonlinear, or both may be nonlinear). The second part develops a mathematical model that brings together some important constraints, based on an abstraction of a general network. The third part deals with solution procedures: it converts the network to a matrix-based system of equations, gives the characteristics of the matrix, and suggests two solution procedures, one of them new. The fourth part handles spatially distributed networks and develops a number of decomposition techniques so that the problem can be solved with the help of a distributed computer system. Algorithms for parallel processors and spatially distributed systems are described.

There are a number of common features that pertain to networks. A network consists of a set of nodes and arcs. In addition, at every node there is the possibility of an input (such as power, water, a message, or goods), an output, or neither. Normally, the network equations describe the flows among nodes through the arcs. These network equations couple variables associated with nodes. Invariably, variables pertaining to arcs are constants; the required result is the flows through the arcs. To solve the normal base problem, we are given input flows at nodes, output flows at nodes and certain physical constraints on other variables at nodes, and we must find the flows through the network (variables at nodes will be referred to as across variables).

The optimization problem involves selecting the inputs at nodes so as to optimise an objective function; the objective may be a cost function based on the inputs to be minimised, a loss function, or an efficiency function. This mathematical model can be solved using the Lagrange multiplier technique, since the equalities are strong compared with the inequalities. The Lagrange multiplier technique divides the solution procedure into two stages per iteration: stage one calculates the problem variables and stage two the multipliers lambda. It is shown that the Jacobian matrix used in stage one (for solving a nonlinear system of necessary conditions) also occurs in stage two.

A second solution procedure, called the total residue approach, has been embedded into the first one. It modifies the equality constraints so as to obtain faster convergence of the iterations. Both solution procedures are found to converge in 3 to 7 iterations for a sample network.

The availability of distributed computer systems, both LAN and WAN, suggests the need for algorithms that solve these optimization problems on them. Two types of algorithms are proposed: one based on the physics of the network and the other on the properties of the Jacobian matrix. Three algorithms have been devised, one of them for the local-area case: the regional distributed algorithm, the hierarchical regional distributed algorithm (both exploiting the physical properties of the network), and the locally distributed algorithm (a multiprocessor-based approach with a local area network configuration). The approach was to define algorithms that are fast and use minimal communication. These algorithms are found to converge at the same rate as the non-distributed (unitary) case.
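
As a worked sketch of the two-stage scheme (the symbols f, g, J, lambda and the dual-ascent update below are our notation and one standard realisation, not necessarily the paper's exact stage-two update), the constrained network problem and its necessary conditions can be written as

    \min_{x} f(x) \quad \text{subject to} \quad g(x) = 0,
    \qquad L(x,\lambda) = f(x) + \lambda^{\top} g(x),

    \nabla_x L = \nabla f(x) + J(x)^{\top}\lambda = 0,
    \qquad g(x) = 0, \qquad J = \partial g / \partial x,

with a per-iteration two-stage (dual-ascent) realisation

    x^{(k+1)} = \arg\min_{x} L\big(x, \lambda^{(k)}\big),
    \qquad
    \lambda^{(k+1)} = \lambda^{(k)} + \alpha\, g\big(x^{(k+1)}\big).

The abstract's observation that the Jacobian of stage one also occurs in stage two corresponds, in such a scheme, to reusing the constraint Jacobian J assembled for the stage-one Newton solve when updating the multipliers.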

Relevance:

30.00%

Publisher:

Abstract:

In earlier work, nonisomorphic graphs have been converted into networks to realize multistage interconnection networks that are topologically nonequivalent to the Baseline network. The drawback of this technique is that these nonequivalent networks are not guaranteed to be self-routing, because each node in the graph model can be replaced by a (2 × 2) switch in any one of four different configurations. Hence, the problem of routing in these networks remains unsolved. Moreover, the nonisomorphic graphs were obtained by interconnecting bipartite loops in a heuristic manner; the heuristic nature of this procedure makes it difficult to guarantee full connectivity in large networks. We solve these problems through a direct approach, in which a matrix model for self-routing networks is developed. An example is given to show that this model encompasses nonequivalent self-routing networks. The approach has the additional advantage that the matrix model itself ensures full connectivity.
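
For context on the self-routing property the matrix model is meant to preserve, a generic destination-tag routing sketch is given below; it illustrates bit-controlled (2 × 2) switch settings in a log2(N)-stage network and is not the matrix model developed in the paper.

    # Generic destination-tag self-routing through log2(N) stages of 2x2 switches
    # (an illustration of the self-routing property, not the paper's matrix model).

    def self_route(dest, n_stages):
        """Return the switch setting used at each stage for destination `dest`:
        at stage i the 2x2 switch inspects bit i (MSB first) of the destination
        tag and forwards the packet on its upper (0) or lower (1) output."""
        settings = []
        for i in range(n_stages):
            bit = (dest >> (n_stages - 1 - i)) & 1
            settings.append("lower" if bit else "upper")
        return settings

    # Example: an 8x8 network (3 stages), packet destined for output 5 = 101b.
    print(self_route(5, 3))   # ['lower', 'upper', 'lower']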

Relevance:

30.00%

Publisher:

Abstract:

Prediction of lag damping is difficult owing to the delicate balance of drag, induced drag and Coriolis forces in the in-plane direction. Moreover, induced drag is sensitive to the dynamic wake, both its shed and trailing components, and thus its prediction requires an adequate unsteady-wake representation. Accordingly, rigid-blade flap-lag equations are coupled with a three-dimensional finite-state wake model; three isolated rotor configurations with three, four and five blades are treated over a range of thrust levels, Lock numbers, lag frequencies and advance ratios. The investigation includes convergence characteristics of damping with respect to the number of radial shape functions and harmonics of the wake model, for multiblade modes of low frequency (< 1/rev) to high frequency (> 1/rev). Predicted flap and lag damping levels are then compared with similar predictions using 1) a rigid wake (no unsteady induced flow), 2) the Loewy lift deficiency and 3) dynamic inflow. The coverage also includes correlations with the measured lag regressive-mode damping in hover and forward flight, and comparisons with similar correlations using dynamic inflow. Lag-damping predictions with the dynamic wake model are consistently higher than those with the dynamic inflow model; even for the low-frequency lag regressive mode, the number of wake harmonics should be at least twice the number of blades.

Relevance:

30.00%

Publisher:

Abstract:

Formal specification is vital to the development of distributed real-time systems, as these systems are inherently complex and safety-critical. It is widely acknowledged that formal specification and automatic analysis of specifications can significantly increase system reliability. Although a number of specification techniques for real-time systems have been reported in the literature, most of these formalisms do not adequately address the constraints that the aspects of 'distribution' and 'real-time' impose on specifications. Further, an automatic verification tool is necessary to reduce human errors in the reasoning process. In this regard, this paper is an attempt towards the development of a novel executable specification language, DL, for distributed real-time systems. First, we give a precise characterization of the syntax and semantics of DL. Subsequently, we discuss the problems of model checking, automatic verification of satisfiability of DL specifications, and testing conformance of event traces with DL specifications. Effective solutions to these problems are presented as extensions of the classical first-order tableau algorithm. The use of the proposed framework is illustrated by specifying a sample problem.

Relevance:

30.00%

Publisher:

Abstract:

In the area of testing communication systems, the interfaces between the systems to be tested and their testers have a great impact on test generation and fault detectability. Several types of such interfaces have been standardized by the International Organization for Standardization (ISO). A general distributed test architecture, containing distributed interfaces, has been presented in the literature for testing distributed systems based on the Open Distributed Processing (ODP) Basic Reference Model (BRM); it is a generalized version of the ISO distributed test architecture. We study in this paper the issue of test selection with respect to such a test architecture. In particular, we consider communication systems that can be modeled by finite state machines with several distributed interfaces, called ports. A test generation method, based on the idea of synchronizable test sequences, is developed for generating test sequences for such finite state machines. Starting from the initial effort by Sarikaya, a certain amount of work has been done on generating test sequences for finite state machines with respect to the ISO distributed test architecture, all based on the idea of modifying existing test generation methods to produce synchronizable test sequences. However, none of these studies the fault coverage provided by its method. We investigate the issue of fault coverage and point out that the methods given in the literature for the distributed test architecture cannot ensure the same fault coverage as the corresponding original testing methods. We also study the limitation of fault detectability in the distributed test architecture.

Relevance:

30.00%

Publisher:

Abstract:

A new class of nets, called S-nets, is introduced for the performance analysis of scheduling algorithms used in real-time systems. Deterministic timed Petri nets do not adequately model the scheduling of resources encountered in real-time systems, and need to be augmented with resource places, signal places, and a scheduler block to facilitate the modeling of scheduling algorithms. The tokens are colored, and the transition firing rules are suitably modified. Further, the concept of transition folding is used to obtain intuitively simple models of multiframe real-time systems. Two generic performance measures, called "load index" and "balance index", which characterize resource utilization and the uniformity of workload distribution, respectively, are defined. The utility of S-nets for evaluating heuristic-based scheduling schemes is illustrated by considering three heuristics for real-time scheduling. S-nets are useful in tuning the hardware configuration and the underlying scheduling policy, so that system utilization is maximized and the workload distribution among the computing resources is balanced.
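
As a hedged illustration only (the paper's exact formulas are not reproduced in the abstract), the two measures could be instantiated as mean utilisation for the load index and Jain's fairness index for the balance index:

    # Illustrative stand-ins for the two S-net performance measures
    # (the paper's exact definitions are not given in the abstract).

    def load_index(utilisations):
        """Mean utilisation across the computing resources."""
        return sum(utilisations) / len(utilisations)

    def balance_index(utilisations):
        """Jain's fairness index: 1.0 means a perfectly uniform workload distribution."""
        s, sq = sum(utilisations), sum(u * u for u in utilisations)
        return s * s / (len(utilisations) * sq)

    util = [0.9, 0.7, 0.4, 0.8]   # assumed per-resource utilisations from an S-net run
    print(load_index(util), balance_index(util))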