977 results for Worst-case dimensioning
Abstract:
Time-sensitive Wireless Sensor Network (WSN) applications require finite delay bounds in critical situations. This paper provides a methodology for the modeling and worst-case dimensioning of cluster-tree WSNs. We provide a fine-grained model of the worst-case cluster-tree topology characterized by its depth, the maximum number of child routers and the maximum number of child nodes for each parent router. Using Network Calculus, we derive “plug-and-play” expressions for the end-to-end delay bounds, buffering and bandwidth requirements as a function of the WSN cluster-tree characteristics and traffic specifications. The cluster-tree topology has been adopted by many cluster-based solutions for WSNs. We demonstrate how to apply our general results to the dimensioning of IEEE 802.15.4/ZigBee cluster-tree WSNs. We believe that this paper establishes the fundamental performance limits of cluster-tree wireless sensor networks by providing a simple and effective methodology for the design of such WSNs.
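The “plug-and-play” delay and buffer bounds described here follow the standard Network Calculus pattern. As an illustrative sketch (not the paper's exact expressions; all numbers invented), assuming a leaky-bucket arrival curve α(t) = b + r·t and a rate-latency service curve β(t) = R·max(t − T, 0), the classical per-node bounds are D = T + b/R and B = b + r·T:

```python
def delay_bound(b, r, R, T):
    """Worst-case delay of a (b, r) leaky-bucket flow served by a
    rate-latency node beta(t) = R * max(t - T, 0)."""
    assert r <= R, "flow rate must not exceed the service rate (stability)"
    return T + b / R

def backlog_bound(b, r, R, T):
    """Worst-case buffer occupancy at the same node."""
    assert r <= R
    return b + r * T

# Example: 2 kbit burst, 10 kbit/s sustained rate, node serving
# 50 kbit/s after a 0.1 s latency.
print(round(delay_bound(2000, 10000, 50000, 0.1), 4))    # 0.14
print(round(backlog_bound(2000, 10000, 50000, 0.1), 4))  # 3000.0
```

Composing such bounds hop by hop along the cluster tree, with each router modeled as a rate-latency server, yields end-to-end figures of the kind the abstract refers to.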
Abstract:
Modeling the fundamental performance limits of Wireless Sensor Networks (WSNs) is of paramount importance to understand their behavior under the worst-case conditions and to make the appropriate design choices. This is particularly relevant for time-sensitive WSN applications, where the timing behavior of the network protocols (message transmissions must respect deadlines) impacts the correct operation of these applications. In that direction, this paper contributes a methodology based on Network Calculus, which enables quick and efficient worst-case dimensioning of static or even dynamically changing cluster-tree WSNs where the data sink can be either static or mobile. We propose closed-form recurrent expressions for computing the worst-case end-to-end delays, buffering and bandwidth requirements across any source-destination path in a cluster-tree WSN. We show how to apply our methodology to the case of IEEE 802.15.4/ZigBee cluster-tree WSNs. Finally, we demonstrate the validity and analyze the accuracy of our methodology through a comprehensive experimental study using commercially available technology, namely TelosB motes running TinyOS.
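The recurrent end-to-end computation described here can be illustrated with a standard Network Calculus identity (a generic sketch, not the paper's exact expressions; all numbers invented): a tandem of rate-latency servers β_i(t) = R_i·max(t − T_i, 0) concatenates to a single rate-latency server with rate min_i R_i and latency Σ_i T_i, so a leaky-bucket flow with burst b "pays its burst only once":

```python
def e2e_delay_bound(b, r, hops):
    """Worst-case end-to-end delay of a (b, r) leaky-bucket flow crossing
    a path of rate-latency servers; hops is a list of (R_i, T_i) pairs.
    The tandem concatenates to rate min(R_i), latency sum(T_i)."""
    R = min(Ri for Ri, _ in hops)
    T = sum(Ti for _, Ti in hops)
    assert r <= R, "flow must be sustainable by the slowest hop"
    return T + b / R

# Two hops: 50 kbit/s after 0.1 s, then 40 kbit/s after 0.05 s.
print(round(e2e_delay_bound(2000, 10000, [(50000, 0.1), (40000, 0.05)]), 4))  # 0.2
```

Analyzing each hop in isolation and summing the per-hop delays would inflate the burst at every hop; the tandem form avoids this, which is what keeps such bounds tight across deep cluster trees.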
Abstract:
Fieldbus networks aim at the interconnection of field devices such as sensors, actuators and small controllers. Therefore, they are an effective technology upon which Distributed Computer Controlled Systems (DCCS) can be built. DCCS impose strict timeliness requirements on the communication network. In essence, by timeliness requirements we mean that traffic must be sent and received within a bounded interval, otherwise a timing fault is said to occur. P-NET is a multi-master fieldbus standard based on a virtual token passing scheme. In P-NET each master is allowed to transmit only one message per token visit, which means that, in the worst case, the communication response time could be derived by assuming that the token is fully utilised by all stations. However, such an analysis can be shown to be quite pessimistic. In this paper we propose a more sophisticated P-NET timing analysis model, which considers the actual token utilisation by the different masters. The major contribution of this model is to provide a less pessimistic, and thus more accurate, analysis for the evaluation of the worst-case communication response time in P-NET fieldbus networks.
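The two analyses contrasted here can be sketched in a heavily simplified form (these are not P-NET's actual frame timings, and all parameter values are invented): the classical bound charges every master one full message per token visit, while a refined bound charges message time only to the masters that actually have traffic pending:

```python
def naive_bound_us(n_masters, t_msg_us, t_pass_us):
    """Pessimistic worst-case wait: one full token rotation in which
    every master passes the token and transmits one message."""
    return n_masters * (t_pass_us + t_msg_us)

def refined_bound_us(n_active, n_masters, t_msg_us, t_pass_us):
    """Less pessimistic: idle masters only pass the token; only the
    n_active masters with queued traffic transmit."""
    return n_masters * t_pass_us + n_active * t_msg_us

# 8 masters, 2000 us per message, 200 us per token pass, 3 masters active.
print(naive_bound_us(8, 2000, 200))       # 17600
print(refined_bound_us(3, 8, 2000, 200))  # 7600
```

The point mirrors the abstract: when only a few masters actually use the token, accounting for the real utilisation shrinks the worst-case response-time estimate considerably.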
Abstract:
"Many-core” systems based on the Network-on- Chip (NoC) architecture have brought into the fore-front various opportunities and challenges for the deployment of real-time systems. Such real-time systems need timing guarantees to be fulfilled. Therefore, calculating upper-bounds on the end-to-end communication delay between system components is of primary interest. In this work, we identify the limitations of an existing approach proposed by [1] and propose different techniques to overcome these limitations.
Abstract:
We investigate on-line prediction of individual sequences. Given a class of predictors, the goal is to predict as well as the best predictor in the class, where the loss is measured by the self-information (logarithmic) loss function. The excess loss (regret) is closely related to the redundancy of the associated lossless universal code. Using Shtarkov's theorem and tools from empirical process theory, we prove a general upper bound on the best possible (minimax) regret. The bound depends on certain metric properties of the class of predictors. We apply the bound to both parametric and nonparametric classes of predictors. Finally, we point out a suboptimal behavior of the popular Bayesian weighted-average algorithm.
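For a k-parameter class, the minimax log-loss regret scales as (k/2)·log n. A small self-contained check of that rate, using the Krichevsky–Trofimov add-1/2 mixture over constant Bernoulli predictors (a standard textbook example, not anything from this particular paper):

```python
import math

def kt_log_loss(seq):
    """Cumulative self-information loss (in nats) of the
    Krichevsky-Trofimov add-1/2 predictor on a binary sequence."""
    loss, ones = 0.0, 0
    for t, x in enumerate(seq):
        p1 = (ones + 0.5) / (t + 1.0)   # predicted probability of a 1
        loss -= math.log(p1 if x == 1 else 1.0 - p1)
        ones += x
    return loss

def best_expert_loss(seq):
    """Loss of the best constant Bernoulli predictor chosen in hindsight."""
    n, k = len(seq), sum(seq)
    if k in (0, n):
        return 0.0
    return -(k * math.log(k / n) + (n - k) * math.log((n - k) / n))

seq = [1, 0, 1, 1, 0, 1, 1, 1] * 32   # n = 256
regret = kt_log_loss(seq) - best_expert_loss(seq)
# The regret stays below the parametric rate (1/2)*ln(n) + O(1).
print(regret, 0.5 * math.log(len(seq)))
```

The Bernoulli class has k = 1 parameter, so the mixture's regret tracks (1/2)·log n; richer parametric classes multiply this rate by their dimension.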
Abstract:
The heat waves of 2003 in Western Europe and 2010 in Russia, commonly labelled as rare climatic anomalies outside of previous experience, are often taken as harbingers of more frequent extremes in a global-warming-influenced future. However, a recent reconstruction of spring–summer temperatures for Western Europe resulted in the likelihood of significantly higher temperatures in 1540. In order to check the plausibility of this result we investigated the severity of the 1540 drought, drawing on the known soil desiccation–temperature feedback. Based on more than 300 first-hand documentary weather report sources originating from an area of 2 to 3 million km2, we show that Europe was affected by an unprecedented 11-month-long Megadrought. The estimated number of precipitation days and the precipitation amount for Central and Western Europe in 1540 are significantly lower than the 100-year minima of the instrumental measurement period for spring, summer and autumn. This result is supported by independent documentary evidence about extremely low river flows and Europe-wide wild-, forest- and settlement fires. We found that an event of this severity cannot be simulated by state-of-the-art climate models.
Abstract:
In the standard Vehicle Routing Problem (VRP), we route a fleet of vehicles to deliver the demands of all customers such that the total distance traveled by the fleet is minimized. In this dissertation, we study variants of the VRP that minimize the completion time, i.e., we minimize the distance of the longest route. We call it the min-max objective function. In applications such as disaster relief efforts and military operations, the objective is often to finish the delivery or the task as soon as possible, not to plan routes with the minimum total distance. Even in commercial package delivery nowadays, companies are investing in new technologies to speed up delivery instead of focusing merely on the min-sum objective. In this dissertation, we compare the min-max and the standard (min-sum) objective functions in a worst-case analysis to show that the optimal solution with respect to one objective function can be very poor with respect to the other. The results motivate the design of algorithms specifically for the min-max objective. We study variants of min-max VRPs including one problem from the literature (the min-max Multi-Depot VRP) and two new problems (the min-max Split Delivery Multi-Depot VRP with Minimum Service Requirement and the min-max Close-Enough VRP). We develop heuristics to solve these three problems. We compare the results produced by our heuristics to the best-known solutions in the literature and find that our algorithms are effective. In the case where benchmark instances are not available, we generate instances whose near-optimal solutions can be estimated based on geometry. We formulate the Vehicle Routing Problem with Drones and carry out a theoretical analysis to show the maximum benefit from using drones in addition to trucks to reduce delivery time. The speed-up ratio depends on the number of drones loaded onto one truck and the speed of the drone relative to the speed of the truck.
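The worst-case gap between the two objectives is easy to reproduce on a toy instance (coordinates invented; brute force, so only viable for tiny inputs): with a depot at the origin and customers on two perpendicular "arms", the min-sum optimum uses one long route whose length is also its makespan, while the min-max optimum splits the arms between the two vehicles:

```python
import itertools
import math

def route_len(depot, pts, order):
    """Length of the closed route depot -> pts[order...] -> depot."""
    tour = [depot] + [pts[i] for i in order] + [depot]
    return sum(math.dist(a, b) for a, b in zip(tour, tour[1:]))

def best_partition(depot, pts, k, objective):
    """Brute-force k-vehicle solution minimising the given objective
    (sum or max of route lengths) over all assignments and orders."""
    best = None
    for assign in itertools.product(range(k), repeat=len(pts)):
        lengths = []
        for v in range(k):
            idx = [i for i, a in enumerate(assign) if a == v]
            lengths.append(min(route_len(depot, pts, p)
                               for p in itertools.permutations(idx)))
        val = objective(lengths)
        if best is None or val < best[0]:
            best = (val, lengths)
    return best

depot = (0.0, 0.0)
pts = [(1, 0), (2, 0), (0, 1), (0, 2)]
min_sum_val, min_sum_routes = best_partition(depot, pts, 2, sum)
min_max_val, _ = best_partition(depot, pts, 2, max)
# min-sum optimum is a single route, so its makespan equals its total;
# min-max optimum serves one arm per vehicle.
print(round(min_sum_val, 3), round(max(min_sum_routes), 3), round(min_max_val, 3))
```

Here the min-sum optimum (total 4 + 2√2 ≈ 6.83) leaves one vehicle idle, so its completion time is ≈ 6.83, while the min-max optimum finishes at 4.0; this is the kind of divergence that motivates algorithms designed specifically for the min-max objective.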
Abstract:
Modeling the fundamental performance limits of Wireless Sensor Networks (WSNs) is of paramount importance to understand their behavior under worst-case conditions and to make the appropriate design choices. In that direction, this paper contributes an analytical methodology for modeling cluster-tree WSNs where the data sink can be either static or mobile. We assess the validity and pessimism of the analytical model by comparing the worst-case results with the values measured through an experimental test-bed based on Commercial-Off-The-Shelf (COTS) technologies, namely TelosB motes running TinyOS.
Abstract:
Embedded real-time applications increasingly present high computation requirements that must be completed within specific deadlines, yet exhibit highly variable load patterns, depending on the set of data available at a given instant. The current trend of providing parallel processing in the embedded domain delivers higher processing power; however, it does not address the variability in the processing pattern. Dimensioning each device for its worst-case scenario implies lower average utilization and increased available, but unusable, processing capacity in the overall system. A solution to this problem is to extend the parallel execution of the applications, allowing networked nodes to distribute the workload, in peak situations, to neighbour nodes. In this context, this report proposes a framework to develop parallel and distributed real-time embedded applications, transparently using OpenMP and the Message Passing Interface (MPI) within a programming model based on OpenMP. The technical report also devises an integrated timing model, which enables structured reasoning about the timing behaviour of these hybrid architectures.
Abstract:
Most research work on WSNs has focused on protocols or on specific applications. There is a clear lack of easy/ready-to-use WSN technologies and tools for planning, implementing, testing and commissioning WSN systems in an integrated fashion. While there exists a plethora of papers about network planning and deployment methodologies, to the best of our knowledge none of them helps the designer to match coverage requirements with network performance evaluation. In this paper we aim at filling this gap by presenting a unified toolset, i.e., a framework able to provide a global picture of the system, from network deployment planning to system test and validation. This toolset has been designed to back up the EMMON WSN system architecture for large-scale, dense, real-time embedded monitoring. It includes network deployment planning, worst-case analysis and dimensioning, protocol simulation, and automatic remote programming and hardware testing tools. This toolset was paramount in validating the system architecture through DEMMON1, the first EMMON demonstrator, i.e., a 300+ node test-bed, which is, to the best of our knowledge, the largest single-site WSN test-bed in Europe to date.
Abstract:
Industrial cooling systems are needed to control the temperature and pressure of processes. Water is the most widely used heat transfer medium thanks to its good availability, low price and high heat transfer capacity. Cooling systems are divided into three main types: once-through cooling, and closed and open recirculating cooling. Each system type has its typical subtypes. The open recirculating system has the most subtypes, of which the most common is the cooling tower. There are three types of cooling towers: wet, dry and hybrid towers. Each system type has characteristic features regarding applications, environmental impacts, controllability, and investment and operating costs, which are presented in this work. In addition to presenting industrial cooling systems, this work investigates the suitability of a vacuum degasser for gas removal in a closed recirculating cooling system. Air remains in a closed recirculating cooling system during filling and is carried along in dissolved form with the cooling water. The resulting supersaturated mixture generates air bubbles in the water, which cause corrosion both chemically and through erosion. In addition, gas bubbles take up volume from the liquid. This reduces the cooling capacity of the system significantly, because the heat transfer capacity of gas is small compared with that of water. The work also presents other possible gas sources in a closed system and the problems they cause. The gas separation efficiency of the vacuum degasser was measured by the clarification rate of cooling water samples and by the improvement in heat exchanger capacity. Over a two-week observation period, clarification times improved by 36–60% at the different measurement points, and heat exchanger capacities improved by 6–29%. However, a significant amount of gas remained in the system even though the use of the device was continued after the observation period, so the targets were not reached. The studied vacuum degassing device was found unsuitable for the factory environment due to its lack of durability, awkward operation and inefficiency.
The results nevertheless show that gas separation has a significant effect on the functioning of a closed cooling system and on the achievable cooling capacity.
Abstract:
Centrifugal pumps are widely used in industrial and municipal applications, and they are an important end-use application of electric energy. However, in many cases centrifugal pumps operate with a significantly lower energy efficiency than they actually could, which typically has an increasing effect on the pump energy consumption and the resulting energy costs. Typical reasons for this are the incorrect dimensioning of the pumping system components and the inefficiency of the applied pump control method. Besides the increase in energy costs, inefficient operation may increase the risk of a pump failure and thereby the maintenance costs. In the worst case, a pump failure may lead to a process shutdown, incurring additional costs. Nowadays, centrifugal pumps are often controlled by adjusting their rotational speed, which affects the resulting flow rate and output pressure of the pumped fluid. Typically, the speed control is realised with a frequency converter that allows the control of the rotational speed of an induction motor. Since a frequency converter can estimate the motor rotational speed and shaft torque without external measurement sensors on the motor shaft, it also allows the development and use of sensorless methods for the estimation of the pump operation. Still today, the monitoring of pump operation is based on additional measurements and visual check-ups, which may not be applicable to determine the energy efficiency of the pump operation. This doctoral thesis concentrates on methods that allow the use of a frequency converter as a monitoring and analysis device for a centrifugal pump. Firstly, the determination of energy-efficiency- and reliability-based limits for the recommendable operating region of a variable-speed-driven centrifugal pump is discussed with a case study for the laboratory pumping system. Then, three model-based estimation methods for the pump operating location are studied, and their accuracy is determined by laboratory tests.
In addition, a novel method to detect the occurrence of cavitation or flow recirculation in a centrifugal pump with a frequency converter is introduced. Its sensitivity is evaluated against known cavitation detection methods, and its applicability is verified by laboratory measurements on three different pumps using two different frequency converters. The main focus of this thesis is on radial-flow end-suction centrifugal pumps, but the studied methods may also be applicable to mixed- and axial-flow centrifugal pumps, if allowed by their characteristics.
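One common sensorless operating-point technique of the kind the thesis studies is the QP-curve method; the sketch below is a generic illustration under stated assumptions, not necessarily the thesis's exact formulation, and the curve points and operating values are invented. The converter's speed and torque estimates give shaft power P = T·ω, the published power-vs-flow curve at nominal speed is rescaled by the affinity laws (Q ∝ n, P ∝ n³), and the flow rate is read off by interpolation:

```python
import math

def flow_from_power(n_rpm, torque_nm, n0_rpm, qp_curve):
    """Estimate flow rate from converter estimates of speed and torque.
    qp_curve: (flow, power) points at nominal speed n0_rpm, power increasing."""
    power = torque_nm * (2 * math.pi * n_rpm / 60.0)   # shaft power [W]
    power_n0 = power * (n0_rpm / n_rpm) ** 3           # refer to nominal speed
    for (q1, p1), (q2, p2) in zip(qp_curve, qp_curve[1:]):
        if p1 <= power_n0 <= p2:
            # linear interpolation on the nominal-speed curve
            q_n0 = q1 + (q2 - q1) * (power_n0 - p1) / (p2 - p1)
            return q_n0 * (n_rpm / n0_rpm)             # refer back to actual speed
    raise ValueError("operating point outside the stored curve")

# Invented curve at 1450 rpm: (m3/h, W) pairs.
curve = [(0.0, 3000.0), (20.0, 4500.0), (40.0, 5500.0)]
# Torque chosen so that shaft power at 1450 rpm is 3750 W, i.e. mid-curve.
torque = 3750.0 * 60.0 / (2 * math.pi * 1450.0)
print(round(flow_from_power(1450.0, torque, 1450.0, curve), 3))  # 10.0
```

A method like this is only as accurate as the stored curve; the QP curve is flat near the best-efficiency point for many pumps, which is one reason several estimation methods are compared in the thesis.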
Abstract:
Service provider selection has been identified as a critical factor in the formation of supply chains. Through successful selection, companies can attain competitive advantage, cost savings and more flexible operations. Service provider management is the next crucial step in the outsourcing process after the selection has been made. Without proper management, companies cannot be sure about the level of service they have bought and may suffer from the service provider's opportunistic behavior. In the worst-case scenario, the buyer company may end up in a locked-in situation in which it is totally dependent on the service provider. This thesis studies how the case company conducts its carrier selection process along with the criteria related to it. A model for the final selection is also provided. In addition, the case company's carrier management procedures are reflected against recommendations from previous research. The research was conducted as a qualitative case study on the principal company, Neste Oil Retail. A literature review was made on outsourcing, service provider selection and service provider management. On the basis of the literature review, this thesis recommends the Analytic Hierarchy Process (AHP) as the preferred model for carrier selection. Furthermore, Agency theory was seen to be a functional framework for carrier management in this study. The empirical part of this thesis was conducted in the case company by interviewing the key persons in the selection process, making observations and going through documentation related to the subject. According to the results of the study, both the carrier selection process and carrier management were closely in line with suggestions from the literature review. The AHP results revealed that the case company considers service quality the most important criterion, with financial situation and price of service following behind with almost identical weights.
Equipment and personnel were seen as the least important selection criterion. Regarding carrier management, the study concluded that the company should consider engaging more in carrier development and working towards beneficial and effective relationships. Otherwise, no major changes were recommended for the case company's processes.
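The Analytic Hierarchy Process step can be sketched generically. The pairwise judgments below are invented for illustration, chosen only to mirror the ranking reported in the abstract; the row geometric-mean method used here is a standard approximation to AHP's principal-eigenvector weights:

```python
import math

def ahp_weights(M):
    """Approximate AHP priority weights via the row geometric-mean method
    for a reciprocal pairwise-comparison matrix M (M[j][i] == 1/M[i][j])."""
    gm = [math.prod(row) ** (1.0 / len(row)) for row in M]
    total = sum(gm)
    return [g / total for g in gm]

# Criteria: service quality, financial situation, price, equipment & personnel.
M = [
    [1,     2,     2,     5],
    [1/2,   1,     1,     3],
    [1/2,   1,     1,     3],
    [1/5,   1/3,   1/3,   1],
]
w = ahp_weights(M)
print([round(x, 3) for x in w])  # quality highest, finance ~ price, equipment lowest
```

With these judgments the weights come out roughly as quality ≈ 0.45, financial situation and price ≈ 0.23 each, and equipment & personnel ≈ 0.08, reproducing the pattern described in the abstract; in practice AHP also checks the consistency ratio of the judgment matrix before accepting the weights.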