819 results for Distributed Denial of Service
Abstract:
In this paper, we design a new dynamic packet scheduling scheme suitable for differentiated services (DiffServ) networks. The proposed dynamic benefit weighted scheduling (DBWS) uses a dynamic weight computation scheme loosely based on the weighted round robin (WRR) policy. It predicts the weight required by the expedited forwarding (EF) service for the current time slot (t) based on two criteria: (i) the weight previously allocated to it at time (t-1), and (ii) the average increase in the queue length of the EF buffer. This prediction provides smooth bandwidth allocation to all services by avoiding overbooking of resources for the EF service while still guaranteeing its service. The performance is analyzed for various scenarios under high, medium, and low traffic conditions. The results show that packet loss and end-to-end delay are minimized and jitter is reduced, thereby meeting the quality of service (QoS) requirements of the network.
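A minimal sketch of the weight-prediction idea described above, assuming the EF weight for slot t is formed from the weight at t-1 plus a term proportional to the average growth of the EF queue; the function name, smoothing factor, and clamping bounds are illustrative assumptions, not the paper's exact DBWS rule.

    # Illustrative DBWS-style weight prediction (assumed form, not the paper's exact rule).
    def predict_ef_weight(prev_weight, ef_queue_samples, alpha=0.5, w_min=0.1, w_max=0.8):
        """Predict the EF weight for slot t from the weight at t-1 and the
        average increase of the EF queue length observed during slot t-1."""
        growth = [b - a for a, b in zip(ef_queue_samples, ef_queue_samples[1:])]
        avg_increase = sum(growth) / max(len(growth), 1)
        # Raise the weight when the EF queue is building up, lower it when it drains,
        # and clamp so the other DiffServ classes always keep some share of the bandwidth.
        weight = prev_weight + alpha * avg_increase / (max(ef_queue_samples) + 1)
        return min(max(weight, w_min), w_max)

    # Example: the EF queue grew from 10 to 22 packets during the previous slot.
    print(predict_ef_weight(0.4, [10, 14, 18, 22]))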
Abstract:
The study introduces two new alternatives for global response sensitivity analysis based on the application of the L2-norm and Hellinger's metric for measuring the distance between two probabilistic models. Both procedures are shown to be capable of treating dependent non-Gaussian random variable models for the input variables. The sensitivity indices obtained from the L2-norm involve second-order moments of the response and, when applied to the case of an independent and identically distributed sequence of input random variables, are shown to be related to the classical Sobol response sensitivity indices. The analysis based on Hellinger's metric addresses variability across the entire range, or segments, of the response probability density function. This measure is shown to be a conceptually more satisfying alternative to the Kullback-Leibler divergence-based analysis reported in the existing literature. Other issues addressed in the study include Monte Carlo simulation-based methods for computing the sensitivity indices and sensitivity analysis with respect to grouped variables. Illustrative examples consist of studies on global sensitivity analysis of the natural frequencies of a random multi-degree-of-freedom system, the response of a nonlinear frame, and the safety margin associated with a nonlinear performance function.
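Since the L2-norm indices reduce to the classical Sobol indices for independent inputs, a small Monte Carlo sketch of a first-order Sobol index may help fix ideas; the pick-freeze estimator below is a standard textbook construction and is not the paper's L2-norm procedure.

    import numpy as np

    def first_order_sobol(model, dim, i, n=100_000, rng=np.random.default_rng(0)):
        """Pick-freeze Monte Carlo estimate of the first-order Sobol index S_i
        for a model with independent U(0, 1) inputs (standard estimator)."""
        A = rng.random((n, dim))
        B = rng.random((n, dim))
        AB = B.copy()
        AB[:, i] = A[:, i]                      # "freeze" coordinate i from sample A
        yA, yAB = model(A), model(AB)
        return np.cov(yA, yAB)[0, 1] / np.var(yA)

    # Example: additive model where the first input carries most of the variance.
    model = lambda x: 4.0 * x[:, 0] + 0.5 * x[:, 1]
    print(first_order_sobol(model, dim=2, i=0))  # close to 16 / 16.25 ~= 0.985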
Abstract:
This paper proposes a probabilistic prediction based approach for providing quality of service (QoS) to delay-sensitive traffic in the Internet of Things (IoT). A joint packet scheduling and dynamic bandwidth allocation scheme is proposed to provide service differentiation and preferential treatment to delay-sensitive traffic. The scheduler focuses on reducing the waiting time of high-priority delay-sensitive services in the queue while keeping the waiting time of the other services within tolerable limits. The scheme uses the difference in the probability of the average queue length of high-priority packets between the previous and current cycles to determine the probability of the average weight required in the current cycle. This optimizes bandwidth allocation across all services by avoiding the allocation of excess resources to high-priority services while still guaranteeing their service. The performance of the algorithm is investigated using MPEG-4 traffic traces under different system loads. The results show improved waiting times for scheduling high-priority packets while keeping the waiting time and packet loss of the other services within tolerable limits.
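A compact sketch of the cycle-to-cycle weight adjustment described above (parameter names and the gain are hypothetical): the bandwidth share of the delay-sensitive class is nudged by the change in its expected queue length between cycles and clamped so that other services are never starved.

    # Illustrative weight update for the delay-sensitive class (assumed form).
    def next_weight(prev_weight, exp_qlen_prev, exp_qlen_curr, gain=0.1, w_min=0.1, w_max=0.7):
        """exp_qlen_prev / exp_qlen_curr: probability-weighted (expected) queue length of the
        high-priority class in the previous and current scheduling cycles."""
        delta = exp_qlen_curr - exp_qlen_prev    # positive: high-priority backlog is growing
        weight = prev_weight + gain * delta      # grant (or reclaim) bandwidth accordingly
        return min(max(weight, w_min), w_max)    # keep a share for the other classes

    print(next_weight(0.35, exp_qlen_prev=6.0, exp_qlen_curr=8.5))  # -> 0.6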
Abstract:
In the context of wireless sensor networks, we are motivated by the design of a tree network spanning a set of source nodes that generate packets, a set of additional relay nodes that only forward packets from the sources, and a data sink. We assume that the paths from the sources to the sink have bounded hop count, that the nodes use the IEEE 802.15.4 CSMA/CA for medium access control, and that there are no hidden terminals. In this setting, starting with a set of simple fixed point equations, we derive explicit conditions on the packet generation rates at the sources, so that the tree network approximately provides certain quality of service (QoS) targets, such as end-to-end delivery probability and mean delay. The structure of our conditions provides insight into the dependence of the network performance on the arrival rate vector and the topological properties of the tree network. Our numerical experiments suggest that our approximations are able to capture a significant part of the QoS-aware throughput region (of a tree network), which is adequate for many sensor network applications. Furthermore, for the special case of equal arrival rates, default backoff parameters, and a range of values of the target QoS, we show that among all path-length-bounded trees (spanning a given set of sources and the data sink) that meet the conditions derived in the paper, a shortest path tree achieves the maximum throughput.
Abstract:
We develop an approximate analytical technique for evaluating the performance of multi-hop networks based on beaconless IEEE 802.15.4 (the "ZigBee" PHY and MAC), a popular standard for wireless sensor networks. The network comprises sensor nodes, which generate measurement packets, relay nodes, which only forward packets, and a data sink (base station). We consider a detailed stochastic process at each node and analyse this process taking into account the interaction with neighbouring nodes via certain time-averaged unknown variables (e.g., channel sensing rates, collision probabilities, etc.). By coupling the analyses at the various nodes, we obtain fixed point equations that can be solved numerically to obtain the unknown variables, thereby yielding approximations of time-average performance measures, such as packet discard probabilities and average queueing delays. The model incorporates packet generation at the sensor nodes and queues at the sensor and relay nodes. We demonstrate the accuracy of our model by an extensive comparison with simulations. As an additional assessment of its accuracy, we use the model in an algorithm for sensor network design with quality-of-service (QoS) objectives, and show that designs obtained using our model actually satisfy the QoS constraints (as validated by simulating the networks), with predictions accurate to well within 10% of the simulation results in a regime where the packet discard probability is low.
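The coupled node-level analyses reduce to fixed point equations that are solved numerically; a generic damped fixed-point iteration such as the sketch below (with a toy mapping, not the paper's actual equations) is one common way to do this.

    import numpy as np

    def solve_fixed_point(F, x0, damping=0.5, tol=1e-8, max_iter=10_000):
        """Solve x = F(x) by damped successive substitution. Here x could collect the
        per-node unknowns (channel sensing rates, collision probabilities, ...)."""
        x = np.asarray(x0, dtype=float)
        for _ in range(max_iter):
            x_new = (1 - damping) * x + damping * np.asarray(F(x))
            if np.max(np.abs(x_new - x)) < tol:
                return x_new
            x = x_new
        raise RuntimeError("fixed-point iteration did not converge")

    # Toy example: two coupled unknowns with a unique fixed point.
    F = lambda x: np.array([0.5 * np.exp(-x[1]), 0.3 + 0.4 * x[0]])
    print(solve_fixed_point(F, x0=[0.5, 0.5]))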
Abstract:
A new in situ method based on one-step laser cladding was developed to produce a Ni-base alloy composite coating reinforced by in situ reacted, gradiently distributed TiCp particles. The submicron TiCp particles were formed and uniformly distributed because of the in situ reaction and the trapping effect under the rapid solidification condition. The TiCp particles showed a gradient distribution on the macro scale, and their volume fraction increased from 1.86% at the layer/substrate interface to a maximum of 38.4% at the surface of the layer. Furthermore, the in situ generated TiCp/gamma-Ni interfaces were free from deleterious surface reactions. Additionally, the clad coating exhibited a high microhardness, varying gradually with layer depth, and superior abrasive wear resistance.
Abstract:
We investigate the steady state natural ventilation of an enclosed space in which vent A, located at height hA above the floor, is connected to a vertical stack with a termination at height H, while the second vent, B, at height hB above the floor, connects directly to the exterior. We first examine the flow regimes which develop with a distributed source of heating at the base of the space. If hB < hA, there is a unique flow regime, whereas if hB > hA, two different flow regimes may develop. Either (i) there is inflow through vent B and outflow through vent A, or (ii) the flow reverses, with inflow down the stack into vent A and outflow through vent B. With inflow through vent A, the internal temperature and ventilation rate depend on the relative height of the two vents, A and B, while with inflow through vent B, they depend on the height of vent B relative to the height of the termination of the stack, H. With a point source of heating, a similar transition occurs, with a unique flow regime when vent B is lower than vent A, and two possible regimes with vent B higher than vent A. In general, with a point source of buoyancy, each steady state is characterised by a two-layer density stratification. Depending on the relative heights of the two vents, in the case of outflow through vent A connected to the stack, the interface between these layers may lie above, at the same level as, or below vent A, leading to discharge of either pure upper layer fluid, a mixture of upper and lower layer fluid, or pure lower layer fluid. In the case of inflow through vent A connected to the stack, the interface always lies below the outflow vent B. Also, in this case, if the inflow vent A lies above the interface, then the lower layer becomes of intermediate density between the upper layer and the external fluid, whereas if the interface lies above the inflow vent A, then the lower layer is composed purely of external fluid. We develop expressions to predict the transitions between these flow regimes, in terms of the heights and areas of the two vents and the stack, and we successfully test these with new laboratory experiments. We conclude with a discussion of the implications of our results for real buildings.
Abstract:
We show that the sensor localization problem can be cast as a static parameter estimation problem for Hidden Markov Models and we develop fully decentralized versions of the Recursive Maximum Likelihood and the Expectation-Maximization algorithms to localize the network. For linear Gaussian models, our algorithms can be implemented exactly using a distributed version of the Kalman filter and a message passing algorithm to propagate the derivatives of the likelihood. In the non-linear case, a solution based on local linearization in the spirit of the Extended Kalman Filter is proposed. In numerical examples we show that the developed algorithms are able to learn the localization parameters well.
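For the linear Gaussian case mentioned above, the exact filtering building block is the standard Kalman recursion; the sketch below shows a single (centralised) predict/update step, leaving out the distributed implementation and the propagation of likelihood derivatives used by Recursive Maximum Likelihood.

    import numpy as np

    def kalman_step(m, P, y, A, C, Q, R):
        """One Kalman filter predict/update step for x_t = A x_{t-1} + w_t, w_t ~ N(0, Q),
        y_t = C x_t + v_t, v_t ~ N(0, R); returns the updated mean and covariance."""
        m_pred = A @ m
        P_pred = A @ P @ A.T + Q
        S = C @ P_pred @ C.T + R                  # innovation covariance
        K = P_pred @ C.T @ np.linalg.inv(S)       # Kalman gain
        m_new = m_pred + K @ (y - C @ m_pred)
        P_new = P_pred - K @ S @ K.T
        return m_new, P_new

    # Tiny 1-D example: a random walk observed in noise.
    m, P = np.array([0.0]), np.array([[1.0]])
    m, P = kalman_step(m, P, y=np.array([0.8]), A=np.eye(1), C=np.eye(1),
                       Q=0.01 * np.eye(1), R=0.1 * np.eye(1))
    print(m, P)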
Abstract:
Scalable video coding allows efficient provision of video services at different quality levels with different energy demands. Depending on the specific type of service and network scenario, end users and/or operators may choose among different energy-versus-quality combinations. In order to deal with the resulting trade-off, in this paper we analyze the number of video layers that are worth receiving, taking the energy constraints into account. A single-objective optimization is proposed, based on dynamically selecting the number of layers, which minimizes the energy consumption subject to the constraint that a minimum quality threshold is reached. However, this approach cannot reflect the fact that the same increment in energy consumption may result in different increments in visual quality. Thus, a multiobjective optimization is proposed and a utility function is defined in order to weight the energy consumption and visual quality criteria. Finally, since solving the optimization is computationally too expensive for mobile devices, a heuristic algorithm is proposed. In this way, a significant reduction in energy consumption is achieved while keeping reasonable quality levels.
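A minimal sketch of the multiobjective idea: pick the number of layers maximising a utility that weights visual quality against energy consumption; the utility form, the quality/energy figures, and the weight lam are illustrative assumptions, not the paper's model.

    # Illustrative layer selection (assumed utility form and example numbers).
    def best_layer_count(quality, energy, lam=0.5, q_min=0.6):
        """quality[k], energy[k]: normalised quality and energy when receiving k+1 layers.
        Choose the layer count maximising U = quality - lam * energy, subject to a
        minimum quality threshold q_min."""
        best_k, best_u = None, float("-inf")
        for k, (q, e) in enumerate(zip(quality, energy)):
            if q < q_min:
                continue                              # below the acceptable quality floor
            u = q - lam * e
            if u > best_u:
                best_k, best_u = k + 1, u
        return best_k

    # Example: diminishing quality gains but steadily growing energy cost per extra layer.
    print(best_layer_count(quality=[0.55, 0.70, 0.80, 0.85],
                           energy=[0.20, 0.35, 0.55, 0.80]))  # -> 2 with these numbers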
Abstract:
The present research work, based on some of the components of the Common Assessment Framework, sets out to analyse the influence of leadership on specific factors that constitute the organisational climate, and also the impact that these factors have on the quality of municipal public services. For the purposes of this study, we draw on Likert's exploitative-autocratic and participative leadership styles to explain their genesis, structure and workflow. As far as the organisational climate is concerned, the variables used are motivation, satisfaction, empowerment, conflict and stress. The main conclusion was that a participative leader confers higher relevance on the quality of service, through motivation, satisfaction, empowerment and positive human resources results, than an exploitative-autocratic leader. The contributions are based on the empirical research presented here, and new research directions are proposed. The research methodology used was qualitative, based on the case study.
Abstract:
Enhancing the handover process in broadband wireless communication deployments has traditionally motivated many research initiatives. In a high-speed railway domain, the challenge is even greater. Owing to the long distances covered, the mobile node gets involved in a compulsory sequence of handover processes. Consequently, poor performance during the execution of these handover processes significantly degrades the global end-to-end performance. This article proposes a new handover strategy for the railway domain: the RMPA handover, a Reliable Mobility Pattern Aware IEEE 802.16 handover strategy "customized" for a high-speed mobility scenario. The stringent high-mobility requirement is balanced by three other positive features in a high-speed context: mobility pattern awareness, different sources for location discovery techniques, and a previously known traffic data profile. To the best of the authors' knowledge, there is no IEEE 802.16 handover scheme that simultaneously covers the optimization of the handover process itself and the efficient timing of the handover process. Our strategy covers both areas of research while providing a cost-effective and standards-based solution. To schedule the handover process efficiently, the RMPA strategy makes use of a context-aware handover policy; that is, a handover policy based on the mobile node's mobility pattern, the time required to perform the handover, the neighboring network conditions, the data traffic profile, the received signal power, and the current location and speed of the train. Our proposal merges all these variables in a cross-layer interaction in the handover policy engine. It also enhances the handover process itself by establishing the values for the set of handover configuration parameters and mechanisms of the handover process. RMPA is a cost-effective strategy because compatibility with standards-based equipment is guaranteed. The major contributions of the RMPA handover are in areas that have been left open to the handover designer's discretion. Our simulation analysis validates the RMPA handover decision rules and design choices. Our results, supporting a high-demand video application in the uplink stream, show a significant improvement in the end-to-end quality of service parameters, including end-to-end delay (22%) and jitter (80%), when compared with a policy based on signal-to-noise-ratio information.
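Purely as an illustration of a context-aware trigger (the names, thresholds, and the two-term rule are hypothetical, and the RMPA policy engine fuses far more information), a handover might be started early enough to complete before the train reaches the cell edge, or immediately if the signal degrades:

    # Hypothetical, simplified context-aware handover trigger (not the RMPA implementation).
    def should_start_handover(distance_to_cell_edge_m, speed_mps, rssi_dbm,
                              handover_duration_s=1.5, rssi_floor_dbm=-85.0, margin_s=0.5):
        """Trigger the handover early enough for it to finish before the train leaves the
        serving cell, or right away if the received signal is already poor."""
        time_to_edge_s = distance_to_cell_edge_m / max(speed_mps, 0.1)
        return time_to_edge_s <= handover_duration_s + margin_s or rssi_dbm <= rssi_floor_dbm

    print(should_start_handover(distance_to_cell_edge_m=150, speed_mps=83, rssi_dbm=-78))  # True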
Biophysical and network mechanisms of high frequency extracellular potentials in the rat hippocampus
Abstract:
A fundamental question in neuroscience is how distributed networks of neurons communicate and coordinate dynamically and specifically. Several models propose that oscillating local networks can transiently couple to each other through phase-locked firing. Coherent local field potentials (LFPs) between synaptically connected regions are often presented as evidence for such coupling. However, the physiological correlates of LFP signals depend on many anatomical and physiological factors, and how the underlying neural processes collectively generate features at different spatiotemporal scales is poorly understood. High frequency oscillations in the hippocampus, including gamma rhythms (30-100 Hz) that are organized by the theta oscillations (5-10 Hz) during active exploration and REM sleep, as well as sharp wave-ripples (SWRs, 140-200 Hz) during immobility or slow wave sleep, have each been associated with various aspects of learning and memory. Deciphering their physiology and functional consequences is crucial to understanding the operation of the hippocampal network.
We investigated the origins and coordination of high frequency LFPs in the hippocampo-entorhinal network using both biophysical models and analyses of large-scale recordings in behaving and sleeping rats. We found that the synchronization of pyramidal cell spikes substantially shapes, or even dominates, the electrical signature of SWRs in area CA1 of the hippocampus. The precise mechanisms coordinating this synchrony are still unresolved, but they appear to also affect CA1 activity during theta oscillations. The input to CA1, which often arrives in the form of gamma-frequency waves of activity from area CA3 and layer 3 of entorhinal cortex (EC3), did not strongly influence the timing of CA1 pyramidal cells. Rather, our data are more consistent with local network interactions governing pyramidal cells' spike timing during the integration of their inputs. Furthermore, the relative timing of input from EC3 and CA3 during the theta cycle matched that found in previous work to engage mechanisms for synapse modification and active dendritic processes. Our work demonstrates how local networks interact with upstream inputs to generate a coordinated hippocampal output during behavior and sleep, in the form of theta-gamma coupling and SWRs.
Abstract:
The current power grid is on the cusp of modernization due to the emergence of distributed generation and controllable loads, as well as renewable energy. On one hand, distributed and renewable generation is volatile and difficult to dispatch. On the other hand, controllable loads provide significant potential for compensating for the uncertainties. In a future grid where there are thousands or millions of controllable loads and a large portion of the generation comes from volatile sources like wind and solar, distributed control that shifts or reduces the power consumption of electric loads in a reliable and economic way would be highly valuable.
Load control needs to be conducted with network awareness. Otherwise, voltage violations and overloading of circuit devices are likely. To model these effects, network power flows and voltages have to be considered explicitly. However, the physical laws that determine power flows and voltages are nonlinear. Furthermore, while distributed generation and controllable loads are mostly located in distribution networks that are multiphase and radial, most of the power flow studies focus on single-phase networks.
This thesis focuses on distributed load control in multiphase radial distribution networks. In particular, we first study distributed load control without considering network constraints, and then consider network-aware distributed load control.
Distributed implementation of load control is the main challenge if network constraints can be ignored. In this case, we first ignore the uncertainties in renewable generation and load arrivals, and propose a distributed load control algorithm, Algorithm 1, that optimally schedules the deferrable loads to shape the net electricity demand. Deferrable loads refer to loads whose total energy consumption is fixed but whose energy usage can be shifted over time in response to network conditions. Algorithm 1 is a distributed gradient descent algorithm, and empirically converges to optimal deferrable load schedules within 15 iterations.
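A toy, centralised sketch of the valley-filling objective behind Algorithm 1 (the thesis algorithm is distributed and its exact update differs; the step size, renormalisation, and variable names below are illustrative): each deferrable load repeatedly steps down the gradient of the squared aggregate demand and then restores its fixed energy budget.

    import numpy as np

    def schedule_deferrable_loads(base_load, energy, iters=15, step=0.05):
        """Spread each deferrable load's fixed energy budget energy[l] over the horizon
        so as to flatten base_load + sum_l p[l] (illustrative gradient sketch only)."""
        T = len(base_load)
        p = np.array([np.full(T, e / T) for e in energy])     # feasible start: spread evenly
        for _ in range(iters):
            total = base_load + p.sum(axis=0)
            grad = 2.0 * total                                 # gradient of sum_t total(t)^2
            p = np.clip(p - step * grad, 0.0, None)            # step, keep profiles nonnegative
            p *= (np.array(energy) / np.maximum(p.sum(axis=1), 1e-9))[:, None]  # restore budgets
        return p

    base = np.array([5.0, 3.0, 1.0, 4.0])                      # non-deferrable demand over 4 slots
    p = schedule_deferrable_loads(base, energy=[2.0, 3.0])
    print(np.round(base + p.sum(axis=0), 2))                   # aggregate demand, flatter than base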
We then extend Algorithm 1 to a real-time setup where deferrable loads arrive over time, and only imprecise predictions about future renewable generation and load are available at the time of decision making. The real-time algorithm, Algorithm 2, is based on model predictive control: Algorithm 2 uses updated predictions of renewable generation as the true values, and computes a pseudo load to simulate future deferrable load. The pseudo load consumes 0 power at the current time step, and its total energy consumption equals the expected total energy request of future deferrable loads.
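A small sketch of the pseudo-load construction just described (the even spread over the remaining horizon is one simple way to realise it and is an assumption, not necessarily the thesis's choice):

    # Illustrative pseudo load for the model-predictive step.
    def make_pseudo_load(expected_future_energy, horizon):
        """Return a profile that consumes 0 at the current step and carries the expected
        total energy of deferrable loads that have not yet arrived in the remaining steps."""
        profile = [0.0] * horizon
        for t in range(1, horizon):
            profile[t] = expected_future_energy / (horizon - 1)
        return profile

    print(make_pseudo_load(expected_future_energy=6.0, horizon=4))  # [0.0, 2.0, 2.0, 2.0]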
Network constraints, e.g., transformer loading constraints and voltage regulation constraints, bring significant challenges to the load control problem since power flows and voltages are governed by nonlinear physical laws. Moreover, distribution networks are usually multiphase and radial. Two approaches are explored to overcome this challenge: one based on convex relaxation, and another that seeks a locally optimal load schedule.
To explore the convex relaxation approach, a novel but equivalent power flow model, the branch flow model, is developed, and a semidefinite programming relaxation, called BFM-SDP, is obtained using the branch flow model. BFM-SDP is mathematically equivalent to a standard convex relaxation proposed in the literature, but is numerically much more stable. Empirical studies show that BFM-SDP is numerically exact for the IEEE 13-, 34-, 37-, and 123-bus networks and a real-world 2065-bus network, while the standard convex relaxation is numerically exact for only two of these networks.
Theoretical guarantees on the exactness of convex relaxations are provided for two types of networks: single-phase radial alternative-current (AC) networks, and single-phase mesh direct-current (DC) networks. In particular, for single-phase radial AC networks, we prove that a second-order cone program (SOCP) relaxation is exact if voltage upper bounds are not binding; we also modify the optimal load control problem so that its SOCP relaxation is always exact. For single-phase mesh DC networks, we prove that an SOCP relaxation is exact if 1) voltage upper bounds are not binding, or 2) voltage upper bounds are uniform and power injection lower bounds are strictly negative; we also modify the optimal load control problem so that its SOCP relaxation is always exact.
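For the single-phase radial case discussed above, the branch flow (DistFlow) equations and their SOCP relaxation take the standard form sketched below; the notation follows the literature and may differ from the thesis.

    % Branch flow (DistFlow) model for line i -> j with impedance r_{ij} + i x_{ij},
    % sending-end flow (P_{ij}, Q_{ij}), squared current l_{ij}, squared voltage v_i;
    % p_j, q_j denote the net load (consumption) at bus j.
    \begin{align}
      P_{ij} - r_{ij}\ell_{ij} &= p_j + \sum_{k:\, j \to k} P_{jk}, &
      Q_{ij} - x_{ij}\ell_{ij} &= q_j + \sum_{k:\, j \to k} Q_{jk}, \\
      v_j &= v_i - 2\left(r_{ij}P_{ij} + x_{ij}Q_{ij}\right) + \left(r_{ij}^2 + x_{ij}^2\right)\ell_{ij}, \\
      \ell_{ij} &= \frac{P_{ij}^2 + Q_{ij}^2}{v_i}
      \quad\longrightarrow\quad
      \ell_{ij} \ge \frac{P_{ij}^2 + Q_{ij}^2}{v_i}
      \quad \text{(SOCP relaxation of the last equality).}
    \end{align}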
To seek a locally optimal load schedule, a distributed gradient-descent algorithm, Algorithm 9, is proposed. The suboptimality gap of the algorithm is rigorously characterized and is close to 0 for practical networks. Furthermore, unlike the convex relaxation approach, Algorithm 9 ensures a feasible solution. The gradients used in Algorithm 9 are estimated based on a linear approximation of the power flow, which is derived under the following assumptions: 1) line losses are negligible; and 2) voltages are reasonably balanced. Both assumptions are satisfied in practical distribution networks. Empirical results show that Algorithm 9 obtains a speedup of more than 70x over the convex relaxation approach, at the cost of a suboptimality within numerical precision.
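The single-phase analogue of such a linearisation is the well-known LinDistFlow approximation, obtained from the branch flow equations by neglecting the line-loss terms; whether this matches the thesis's multiphase linearisation exactly is not claimed here.

    % LinDistFlow: branch flow equations with the loss terms (l_{ij}) neglected.
    \begin{align}
      P_{ij} &\approx p_j + \sum_{k:\, j \to k} P_{jk}, &
      Q_{ij} &\approx q_j + \sum_{k:\, j \to k} Q_{jk}, \\
      v_j &\approx v_i - 2\left(r_{ij}P_{ij} + x_{ij}Q_{ij}\right).
    \end{align}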
Abstract:
The feeding behaviour and functional morphology of Lathonura rectirostris O.F. Muller, one of the widely distributed macrothricid species, are studied. The current work is an attempt at a morpho-functional analysis of the apparatus of the trunk appendages of Lathonura rectirostris O.F. Muller. This is a highly specialized species whose method of feeding essentially amounts to the mechanical scraping-off and collection of epiphytic single-celled algae and particles deposited on the surface of aquatic plants.