977 results for Worst-case dimensioning


Relevance:

80.00%

Publisher:

Abstract:

Modelling the fundamental performance limits of wireless sensor networks (WSNs) is of paramount importance for understanding the behaviour of WSNs under worst-case conditions and for making appropriate design choices. In that direction, this paper contributes a methodology for modelling cluster-tree WSNs with a mobile sink. We propose closed-form recurrent expressions for computing the worst-case end-to-end delays, buffering requirements, and bandwidth requirements across any source-destination path in the cluster tree, assuming error-free channels. We show how to apply our theoretical results to the specific case of IEEE 802.15.4/ZigBee WSNs. Finally, we demonstrate the validity and analyze the accuracy of our methodology through a comprehensive experimental study.
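
The paper's closed-form recurrent expressions are not reproduced here; purely as an illustration of how per-hop worst-case bounds can be composed along a source-destination path, below is a minimal network-calculus-style sketch. The (burst, rate) arrival parameters and the (link_rate, latency) service parameters are assumptions made for the example, not the paper's actual model.

```python
# Illustrative sketch: accumulate worst-case per-hop delay bounds along a
# cluster-tree path, in the spirit of network calculus. The (burst, rate)
# arrival curve and (link_rate, latency) rate-latency service curve per hop
# are assumed parameters, not the paper's closed-form model.

def hop_delay(burst, link_rate, latency):
    """Worst-case delay of a (burst, rate) flow through a rate-latency
    server: D = latency + burst / link_rate (assuming rate <= link_rate)."""
    return latency + burst / link_rate

def path_delay(burst, rate, hops):
    """Sum the per-hop bounds along the path; the burst grows at each hop
    with the jitter accumulated upstream (b' = b + r*D is a safe
    over-approximation of the output burstiness)."""
    total = 0.0
    for link_rate, latency in hops:
        d = hop_delay(burst, link_rate, latency)
        total += d
        burst += rate * d  # burstiness increase propagated downstream
    return total

# Example: a 3-hop path from a leaf cluster towards the (mobile) sink.
print(path_delay(burst=400.0, rate=50.0,
                 hops=[(250e3, 0.015), (250e3, 0.015), (250e3, 0.015)]))
```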

Relevance:

80.00%

Publisher:

Abstract:

Real-time scheduling usually assumes worst-case values for the parameters of task (or message-stream) sets in order to provide safe schedulability tests for hard real-time systems. However, worst-case conditions introduce a level of pessimism that is often inadequate for a certain class of (soft) real-time systems. In this paper we provide an approach for computing the stochastic response time of tasks whose inter-arrival times are described by discrete probability distributions rather than by minimum inter-arrival time (MIT) values.
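
A basic building block of such stochastic analysis is the convolution of discrete distributions; the minimal sketch below convolves two discrete probability mass functions (the PMFs are made up for illustration, and the paper's actual analysis is more involved).

```python
# Minimal sketch: convolution of two discrete probability mass functions,
# the basic operation behind stochastic response-time analysis.

def convolve(pmf_a, pmf_b):
    """Distribution of X + Y for independent discrete X, Y,
    each given as {value: probability}."""
    out = {}
    for va, pa in pmf_a.items():
        for vb, pb in pmf_b.items():
            out[va + vb] = out.get(va + vb, 0.0) + pa * pb
    return out

# Illustrative execution-time PMFs of two jobs interfering in a window.
c1 = {2: 0.7, 4: 0.3}
c2 = {1: 0.5, 3: 0.5}
backlog = convolve(c1, c2)
print(backlog)                  # {3: 0.35, 5: 0.5, 7: 0.15}
print(sum(backlog.values()))    # sanity check: probabilities sum to 1
```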

Relevance:

80.00%

Publisher:

Abstract:

“Many-core” systems based on a Network-on-Chip (NoC) architecture offer various opportunities in terms of performance and computing capabilities, but at the same time they pose many challenges for the deployment of real-time systems, which must fulfil specific timing requirements at runtime. It is therefore essential to identify, at design time, the parameters that have an impact on the execution time of the tasks deployed on these systems, and the upper bounds on the other key parameters. The focus of this work is to determine an upper bound on the traversal time of a packet when it is transmitted over the NoC infrastructure. Towards this aim, we first identify and explore some limitations in the existing recursive-calculus-based approaches to computing the Worst-Case Traversal Time (WCTT) of a packet. Then, we extend the existing model by integrating the characteristics of the tasks that generate the packets. For this extended model, we propose an algorithm called “Branch and Prune” (BP). Our proposed method provides tighter, yet still safe, estimates than the existing recursive-calculus-based approaches. Finally, we introduce a more general approach, namely “Branch, Prune and Collapse” (BPC), which offers a configurable parameter providing a flexible trade-off between the computational complexity and the tightness of the computed estimate. The recursive-calculus methods and BP are two special cases of BPC, obtained when the trade-off parameter is set to 1 or ∞, respectively. Through simulations, we analyze this trade-off, reason about the implications of certain choices, and provide case studies to observe the impact of task parameters on the WCTT estimates.
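
The BP algorithm itself is specified in the paper; purely as an illustration of the branch-and-prune idea behind it, a generic maximisation skeleton might look as follows. The node structure, the value function, and the optimistic bound are all placeholders, not the paper's definitions.

```python
# Generic branch-and-prune skeleton: explore a tree of partial contention
# scenarios, keep the maximum value (traversal time) of any complete
# scenario, and prune subtrees whose optimistic upper bound cannot exceed
# the best value found so far. Safe as long as upper_bound over-approximates.

def branch_and_prune(root, children, value, upper_bound):
    best = float("-inf")
    stack = [root]
    while stack:
        node = stack.pop()
        kids = children(node)
        if not kids:                       # complete scenario: evaluate it
            best = max(best, value(node))
            continue
        for kid in kids:
            if upper_bound(kid) > best:    # prune hopeless subtrees
                stack.append(kid)
    return best

# Toy usage: scenarios are tuples of per-hop blocking choices; the bound
# optimistically assumes maximal blocking on all remaining hops.
HOPS, MAX_BLOCK = 3, 5
children = lambda n: [] if len(n) == HOPS else [n + (b,) for b in (1, 3, 5)]
value = lambda n: sum(n)
upper_bound = lambda n: sum(n) + (HOPS - len(n)) * MAX_BLOCK
print(branch_and_prune((), children, value, upper_bound))   # -> 15
```

In this reading, one can interpret BPC's configurable parameter as controlling how aggressively sibling scenarios are collapsed into a single over-approximated node before branching, trading pruning precision for a smaller search tree.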

Relevance:

80.00%

Publisher:

Abstract:

In this paper, we present a deterministic approach to tsunami hazard assessment for the city and harbour of Sines, Portugal, one of the test sites of project ASTARTE (Assessment, STrategy And Risk Reduction for Tsunamis in Europe). Sines hosts one of Portugal's most important deep-water ports, with oil, petrochemical, liquid-bulk, coal, and container terminals. The port and its industrial infrastructure face the ocean to the southwest, towards the main seismogenic sources. This work considers two different seismic zones: the Southwest Iberian Margin and the Gloria Fault. Within these two regions, we selected a total of six scenarios to assess the tsunami impact at the test site. The tsunami simulations are computed using NSWING, a Non-linear Shallow Water model wIth Nested Grids. In this study, the static effect of tides is analysed for three different tidal stages: MLLW (mean lower low water), MSL (mean sea level), and MHHW (mean higher high water). For each scenario, the tsunami hazard is described by the maximum values of wave height, flow depth, drawback, inundation area, and run-up. Synthetic waveforms are computed at virtual tide gauges at specific locations outside and inside the harbour. The final results describe the impact at the Sines test site for the single scenarios at mean sea level, for the aggregate scenario, and for the influence of the tide on the aggregate scenario. The results confirm the composite source of the Horseshoe and Marques de Pombal faults (HSMPF) as the worst-case scenario, with wave heights of over 10 m reaching the coast approximately 22 min after the rupture. Considering maximum wave height and maximum flow depth, it dominates the aggregate scenario in about 60% of the impact area at the test site, and it inundates a total area of 3.5 km2.
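
One way to read the "aggregate scenario" above is as a pointwise worst case over the individual scenario results; a toy sketch follows. The 2x2 grids and all scenario names other than HSMPF are made up for illustration.

```python
# Illustrative sketch of an "aggregate scenario": at each grid point, keep
# the worst (maximum) value across the individual scenario results. The toy
# 2x2 grids stand in for the model's nested-grid output; names are made up.
import numpy as np

scenario_wave_height = {
    "scenario_A": np.array([[8.2, 9.1], [7.5, 6.0]]),
    "scenario_B": np.array([[7.9, 8.4], [9.3, 5.1]]),
    "HSMPF":      np.array([[10.4, 11.2], [9.8, 7.7]]),
}

aggregate = np.maximum.reduce(list(scenario_wave_height.values()))
print(aggregate)   # per-point worst case across all scenarios
```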

Relevance:

80.00%

Publisher:

Abstract:

Master's dissertation in Integrated Management of Quality, Environment and Safety

Relevance:

80.00%

Publisher:

Abstract:

Many-core platforms are an emerging technology in the real-time embedded domain. These devices offer various options for power savings and cost reductions and contribute to overall system flexibility; however, issues such as unpredictability, scalability, and analysis pessimism pose serious challenges to their adoption in this domain. The focus of this work is on many-core platforms using a limited migrative model (LMM). LMM is an approach based on the fundamental concepts of the multi-kernel paradigm, which is a promising step towards scalable and predictable many-cores. In this work, we formulate the problem of real-time application mapping on a many-core platform using LMM, and propose a three-stage method to solve it. An extended version of an existing analysis is used to ensure that the derived mappings (i) guarantee the fulfilment of the timing constraints posed on the worst-case communication delays of the individual applications, and (ii) provide an environment for load balancing, e.g. for energy/thermal management, fault tolerance, and/or performance reasons.
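
The three-stage method is the paper's; the sketch below only illustrates the kind of feasibility check implied by requirement (i): a candidate mapping is accepted only if every application's worst-case communication-delay bound meets its constraint. All names (`wc_delay`, `first_feasible`, etc.) are hypothetical.

```python
# Illustrative feasibility check over candidate mappings. 'wc_delay' stands
# in for a worst-case communication-delay analysis, which is assumed here.

def mapping_is_feasible(mapping, apps, wc_delay):
    """mapping: {app: kernel/core assignment},
    apps: {app: bound on worst-case communication delay},
    wc_delay(app, mapping) -> analysed worst-case delay."""
    return all(wc_delay(app, mapping) <= bound
               for app, bound in apps.items())

def first_feasible(candidates, apps, wc_delay):
    """Scan candidate mappings (e.g., as produced by earlier stages) and
    return the first one that satisfies all delay constraints."""
    for m in candidates:
        if mapping_is_feasible(m, apps, wc_delay):
            return m
    return None
```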

Relevance:

80.00%

Publisher:

Abstract:

The last decade has witnessed a major shift towards the deployment of embedded applications on multi-core platforms. However, real-time applications have not been able to fully benefit from this transition, as the computational gains offered by multi-cores are often offset by performance degradation due to shared resources, such as main memory. To efficiently use multi-core platforms for real-time systems, it is hence essential to tightly bound the interference when accessing shared resources. Although there has been much recent work in this area, a remaining key problem is to address the diversity of memory arbiters in the analysis, to make it applicable to a wide range of systems. This work handles diverse arbiters by proposing a general framework to compute the maximum interference caused by the shared memory bus and its impact on the execution time of the tasks running on the cores, considering different bus arbiters. Our novel approach clearly demarcates the arbiter-dependent and arbiter-independent stages in the analysis of these upper bounds. The arbiter-dependent phase takes the arbiter and the task memory-traffic pattern as inputs and produces a model of the availability of the bus to a given task. Then, based on the availability of the bus, the arbiter-independent phase determines the worst-case request-release scenario that maximizes the interference experienced by the tasks due to contention for the bus. We show that the framework addresses the diversity problem by applying it to a memory bus shared by a fixed-priority arbiter, a time-division multiplexing (TDM) arbiter, and an unspecified work-conserving arbiter, using applications from the MediaBench test suite. We also experimentally evaluate the quality of the analysis by comparing it with a state-of-the-art TDM analysis approach, consistently showing a considerable reduction in maximum interference.
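
To make the two-phase structure concrete, here is a deliberately coarse sketch instantiated for a TDM bus: the arbiter-dependent phase yields a per-request worst-case wait, and the arbiter-independent phase assumes a request-release scenario in which every request pays that wait. This is a simplification for illustration, not the paper's analysis; all numbers are made up.

```python
# Sketch of the two-phase analysis structure, instantiated for TDM.

def tdm_worst_wait(num_cores, slot_len):
    """Arbiter-dependent phase: under TDM, a request issued just after the
    core's own slot closed waits for all other cores' slots."""
    return (num_cores - 1) * slot_len

def interference_bound(num_requests, per_request_wait):
    """Arbiter-independent phase: the worst-case request-release scenario
    makes every request pay the maximum wait."""
    return num_requests * per_request_wait

wait = tdm_worst_wait(num_cores=4, slot_len=40)    # cycles per slot
print(interference_bound(num_requests=1000, per_request_wait=wait))
# -> 120000 cycles of worst-case bus interference for this toy task
```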

Relevance:

80.00%

Publisher:

Abstract:

Accepted at the 13th IEEE Symposium on Embedded Systems for Real-Time Multimedia (ESTIMedia 2015), Amsterdam, Netherlands.

Relevance:

80.00%

Publisher:

Abstract:

6th Real-Time Scheduling Open Problems Seminar (RTSOPS 2015), Lund, Sweden.

Relevance:

80.00%

Publisher:

Abstract:

27th Euromicro Conference on Real-Time Systems (ECRTS 2015), Lund, Sweden.

Relevance:

80.00%

Publisher:

Abstract:

Presented at the 23rd International Conference on Real-Time Networks and Systems (RTNS 2015), Main Track, 4-6 November 2015, Lille, France.

Relevance:

80.00%

Publisher:

Abstract:

Because of the scientific evidence showing that arsenic (As), cadmium (Cd), and nickel (Ni) are human genotoxic carcinogens, the European Union (EU) recently set target values for the concentrations of these metals in ambient air (As: 6 ng/m3; Cd: 5 ng/m3; Ni: 20 ng/m3). The aim of our study was to determine the concentration levels of these trace elements in the Porto Metropolitan Area (PMA) in order to assess compliance with these new EU air-quality standards. Fine (PM2.5) and inhalable (PM10) air particles were collected from October 2011 to July 2012 at two different (urban and suburban) locations in the PMA. Samples were analyzed for trace-element content by inductively coupled plasma mass spectrometry (ICP-MS). The study focused on determining differences in trace-element concentrations between the two sites, and between PM2.5 and PM10, in order to gather information on emission sources. Except for chromium (Cr), the concentrations of all trace elements were higher at the urban site. Even so, the worst-case results for As, Cd, Ni, and lead (Pb) were well below the EU limit/target values (As: 1.49 ± 0.71 ng/m3; Cd: 1.67 ± 0.92 ng/m3; Ni: 3.43 ± 3.23 ng/m3; Pb: 17.1 ± 10.1 ng/m3). Arsenic, Cd, Ni, Pb, antimony (Sb), selenium (Se), vanadium (V), and zinc (Zn) were predominantly associated with PM2.5, indicating that anthropogenic sources such as industry and road traffic are the main sources of these elements. High enrichment factors (EF > 100) were obtained for As, Cd, Pb, Sb, Se, and Zn, further confirming their anthropogenic origin.
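
The enrichment factors mentioned above are conventionally computed by double normalisation against a crustal reference element; a worked sketch follows, using the study's Pb average but with an assumed Al reference concentration and rough upper-crust abundances (both assumptions, not values from the study).

```python
# Enrichment factor (EF) as conventionally defined: the element/reference
# concentration ratio in the aerosol divided by the same ratio in the crust.
# The reference element (Al) and the crustal abundances are illustrative.

def enrichment_factor(c_x_air, c_ref_air, c_x_crust, c_ref_crust):
    return (c_x_air / c_ref_air) / (c_x_crust / c_ref_crust)

ef_pb = enrichment_factor(c_x_air=17.1,       # ng/m3, Pb (from the study)
                          c_ref_air=50.0,     # ng/m3, Al (assumed)
                          c_x_crust=17.0,     # microg/g, Pb, rough crustal value
                          c_ref_crust=80400)  # microg/g, Al, rough crustal value
print(ef_pb)   # well above 100: a predominantly anthropogenic origin
```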

Relevance:

80.00%

Publisher:

Abstract:

Dissertation for obtaining the Master's degree in Computational Logic

Relevance:

80.00%

Publisher:

Abstract:

Potential risks of secondary formation of polychlorinated dibenzodioxins/furans (PCDD/Fs) were assessed for two cordierite-based, wall-flow diesel particulate filters (DPFs) in which soot combustion was catalyzed with either an iron-based or a copper-based fuel additive. A heavy-duty diesel engine was used as the test platform, applying the eight-stage ISO 8178/4 C1 cycle. The DPF applications neither affected engine performance nor increased NO, NO2, CO, or CO2 emissions, the latter being a metric for fuel consumption. THC emissions decreased by about 40% when the DPFs were deployed. PCDD/F emissions, with a focus on the tetra- to octachlorinated congeners, were compared under standard and worst-case (enhanced chlorine uptake) conditions. The iron-catalyzed DPF neither increased PCDD/F emissions nor changed the congener pattern, even when traces of chlorine became available. In the case of copper, PCDD/F emissions increased by up to 3 orders of magnitude, from 22 to 200 and 12 700 pg I-TEQ/L for fuels containing < 2, 14, and 110 microg/g chlorine, respectively. Mainly lower-chlorinated DD/Fs were formed. Based on these substantial effects on PCDD/F emissions, the copper-catalyzed DPF system was not approved for workplace applications, whereas the iron system fulfilled all specifications of the Swiss procedure for DPF approval (VERT).
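
The I-TEQ unit used above is a toxicity-weighted sum over congeners, TEQ = Σ (concentration_i × TEF_i); a minimal sketch follows, using the standard international I-TEF weights for four congeners and made-up concentrations (not data from the study).

```python
# I-TEQ: toxicity-equivalent sum of congener concentrations. The TEF values
# are the standard international I-TEFs; the concentrations are illustrative.

I_TEF = {
    "2,3,7,8-TCDD":    1.0,
    "1,2,3,7,8-PeCDD": 0.5,
    "OCDD":            0.001,
    "2,3,7,8-TCDF":    0.1,
}

conc_pg_per_L = {   # hypothetical raw congener concentrations, pg/L
    "2,3,7,8-TCDD":    5.0,
    "1,2,3,7,8-PeCDD": 12.0,
    "OCDD":            900.0,
    "2,3,7,8-TCDF":    95.0,
}

i_teq = sum(conc_pg_per_L[c] * I_TEF[c] for c in conc_pg_per_L)
print(f"{i_teq:.1f} pg I-TEQ/L")   # 5 + 6 + 0.9 + 9.5 = 21.4
```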

Relevance:

80.00%

Publisher:

Abstract:

Critical real-time embedded (CRTE) systems require safe and tight worst-case execution time (WCET) estimations to provide the required safety levels while keeping costs low. However, CRTE systems require increasing performance to satisfy the needs of existing and new features. Such performance can only be achieved by means of more aggressive hardware architectures, which are much harder to analyse from a WCET perspective; the main features considered include cache memories and multi-core processors. Thus, although such features provide higher performance, current WCET analysis methods are unable to provide tight WCET estimations for them. In fact, WCET estimations become worse than for simpler and less powerful hardware. The main reason is that hardware behaviour is deterministic but unknown and, therefore, the worst-case behaviour must be assumed most of the time, leading to large WCET estimations. The purpose of this project is to develop new hardware designs, together with WCET analysis tools, able to provide tight and safe WCET estimations. To that end, those pieces of hardware whose behaviour is not easily analysable, due to the lack of accurate information during WCET analysis, will be enhanced to produce probabilistically analysable behaviour. Thus, even if the worst-case behaviour cannot be removed, its probability can be bounded, and hence a safe and tight WCET can be provided for a particular safety level, in line with the safety levels of the remaining components of the system. During the first year of the project we developed most of the evaluation infrastructure as well as the hardware techniques to analyse cache memories. During the second year those techniques were evaluated, and new purely-software techniques were developed.
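
As an illustration of the probabilistic-WCET idea described above (not the project's actual tooling): given a per-run execution-time exceedance curve, one picks the smallest bound whose exceedance probability is below the target safety level. The curve below is made up.

```python
# Sketch of the probabilistic WCET (pWCET) idea: instead of a single hard
# bound, pick the smallest execution-time bound whose per-run exceedance
# probability is at or below a target safety level.

def pwcet(ccdf, target):
    """ccdf: list of (time, P(exec_time > time)) pairs, sorted by time.
    Return the smallest time whose exceedance probability <= target."""
    for time, p_exceed in ccdf:
        if p_exceed <= target:
            return time
    return None  # the curve does not reach the required safety level

exec_time_ccdf = [(100, 1e-3), (120, 1e-6), (140, 1e-9),
                  (160, 1e-12), (180, 1e-15)]
print(pwcet(exec_time_ccdf, target=1e-12))   # -> 160 cycles
```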