951 results for Worst-case execution-time


Relevance: 100.00%

Abstract:

WorldFIP is standardised as European Norm EN 50170 - General Purpose Field Communication System. Field communication systems (fieldbuses) started to be widely used as the communication support for distributed computer-controlled systems (DCCS), and are being used in all sorts of process control and manufacturing applications within different types of industries. There are several advantages in using fieldbuses as a replacement for the traditional point-to-point links between sensors/actuators and computer-based control systems. Some of these advantages are economic (cable savings), but, more importantly, fieldbuses allow increased decentralisation and distribution of processing power over the field. Typically, DCCS have real-time requirements that must be fulfilled: process data must be transferred between network computing nodes within a maximum admissible time span. WorldFIP has very interesting mechanisms to schedule data transfers: it explicitly distinguishes two types of traffic, periodic and aperiodic. In this paper we describe how WorldFIP handles these two types of traffic and, more importantly, we provide a comprehensive analysis for guaranteeing the real-time requirements of both. A major contribution is the analysis of the worst-case response time of aperiodic transfer requests.
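
Purely to illustrate the flavour of such an analysis (a simplification, not the paper's actual result), the Python sketch below bounds the response time of an aperiodic transfer request when the bus arbitrator serves aperiodic traffic only in a fixed spare window at the end of each elementary cycle; all parameters (cycle length, window size, transfer time, queue length) are hypothetical.

    import math

    def aperiodic_response_bound(cycle_len, spare_window, xfer_time, queued_before):
        # Worst case: the request arrives just after the current cycle's spare
        # window has closed and finds `queued_before` earlier requests queued
        # ahead of it (FCFS service).
        per_window = spare_window // xfer_time        # transfers servable per window
        if per_window == 0:
            raise ValueError("spare window too small for a single transfer")
        windows_needed = math.ceil((queued_before + 1) / per_window)
        pos_in_last = queued_before % per_window + 1  # its FCFS slot in the last window
        # wait for the next window to open, cross the intermediate cycles, then
        # complete pos_in_last transfers into the last window
        return ((cycle_len - spare_window) + (windows_needed - 1) * cycle_len
                + pos_in_last * xfer_time)

    print(aperiodic_response_bound(cycle_len=10_000, spare_window=2_000,
                                   xfer_time=500, queued_before=6))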

Relevance: 100.00%

Abstract:

In this paper we address the ability of WorldFIP to cope with the real-time requirements of distributed computer-controlled systems (DCCS). In typical DCCS, process variables must be transferred between network devices on both a periodic and a sporadic (aperiodic) basis, and the WorldFIP protocol is designed to support both types of traffic. WorldFIP can easily guarantee the timing requirements of the periodic traffic; for the aperiodic traffic, however, a more elaborate analysis is needed to guarantee its timing requirements. This paper describes ongoing work that extends previous relevant work by including the actual schedule of the periodic traffic in the worst-case response time analysis of sporadic traffic in WorldFIP networks.
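
To illustrate, only as a hypothetical simplification, what including the actual periodic schedule can mean: the sketch below walks a per-cycle slack profile that varies across the macrocycle instead of assuming a uniform spare window. The slack values and parameters are made up, and a full analysis would also maximise over all possible arrival instants of the sporadic request.

    def response_bound_with_schedule(slack_per_cycle, cycle_len, xfer_time, backlog):
        # Upper bound on the response time of a sporadic request that arrives at
        # the start of the macrocycle behind `backlog` earlier requests (FCFS).
        # `slack_per_cycle` lists the idle time left by the periodic schedule in
        # each elementary cycle, repeating every macrocycle.
        assert any(s >= xfer_time for s in slack_per_cycle), "no usable slack"
        remaining = backlog + 1                   # transfers still to be served
        elapsed, i = 0, 0
        while True:
            slack = slack_per_cycle[i % len(slack_per_cycle)]
            servable = slack // xfer_time
            if servable >= remaining:
                # the slack is assumed to lie at the end of the cycle
                return elapsed + (cycle_len - slack) + remaining * xfer_time
            remaining -= servable
            elapsed += cycle_len
            i += 1

    print(response_bound_with_schedule([1_500, 0, 3_000, 500], cycle_len=5_000,
                                       xfer_time=400, backlog=5))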

Relevance: 100.00%

Abstract:

The current industry trend is towards using commercially available off-the-shelf (COTS) multicores for developing real-time embedded systems, as opposed to custom-made hardware. In typical implementations of such COTS-based multicores, multiple cores access the main memory via a shared bus. This often leads to contention on this shared channel, which in turn increases the response times of the tasks. Analyzing this increased response time, considering the contention on the shared bus, is challenging on COTS-based systems, mainly because bus arbitration protocols are often undocumented and the exact instants at which the shared bus is accessed by tasks are not explicitly controlled by the operating system scheduler; they are instead a result of cache misses. This paper makes three contributions towards analyzing tasks scheduled on COTS-based multicores. Firstly, we describe a method to model the memory access patterns of a task. Secondly, we apply this model to analyze the worst-case response time of a set of tasks. Thirdly, although the parameters required to obtain the request profile can be derived by static analysis, we provide an alternative method to obtain them experimentally using performance monitoring counters (PMCs). We also compare our work against an existing approach and show that ours outperforms it by providing a tighter upper bound on the number of bus requests generated by a task.
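
As a rough sketch of how a request profile might enter a response-time computation (a hypothetical simplification, not the paper's method), assume each task's bus requests in any window are bounded by a burst plus a rate (e.g. obtained from PMC measurements), each request takes at most L cycles to serve, and a request can wait behind at most one request per other core:

    import math

    L = 40  # assumed worst-case cycles to serve one bus request

    def requests_in_window(task, window):
        # linear request profile: an initial burst plus a steady rate
        return task["burst"] + math.ceil(window * task["rate"])

    def response_time(task, co_runners, n_cores, max_iter=1_000):
        # classic fixed-point iteration: execution demand plus bus interference
        r = task["wcet"]
        for _ in range(max_iter):
            own = requests_in_window(task, r)
            interference = L * min(own * (n_cores - 1),
                                   sum(requests_in_window(c, r) for c in co_runners))
            new_r = task["wcet"] + interference
            if new_r == r:
                return r
            r = new_r
        return None  # did not converge within max_iter iterations

    task = {"wcet": 10_000, "burst": 50, "rate": 0.01}
    co_runners = [{"burst": 80, "rate": 0.02}, {"burst": 30, "rate": 0.005}]
    print(response_time(task, co_runners, n_cores=3))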

Relevance: 100.00%

Abstract:

Contention on the memory bus in COTS-based multicore systems is becoming a major factor determining the execution time of a task. Analyzing this extra execution time is non-trivial because (i) bus arbitration protocols in such systems are often undocumented and (ii) the instants at which the memory bus is requested are not explicitly controlled by the operating system scheduler; they are instead a result of cache misses. We present a method for finding an upper bound on the extra execution time of a task due to contention on the memory bus in COTS-based multicore systems. This method makes no assumptions about the bus arbitration protocol (other than that it is work-conserving).
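
A minimal sketch of a bound that relies only on work-conservation (an illustration with hypothetical numbers, not the paper's exact method): while the analysed task is stalled on a bus request, a work-conserving arbiter is always serving someone, so the total stall time in an interval cannot exceed the time needed to serve every request the other cores can issue in that interval.

    import math

    def extra_time_bound(own_wcet, co_runner_request_bounds, service_time, horizon):
        # smallest t <= horizon with own_wcet + stall(t) <= t, where stall(t) is
        # the service time of all requests other cores can issue in a window t
        t = own_wcet
        while t <= horizon:
            stall = service_time * sum(bound(t) for bound in co_runner_request_bounds)
            if own_wcet + stall <= t:
                return t - own_wcet          # extra execution time due to the bus
            t = own_wcet + stall
        return None                          # no bound found below the horizon

    # hypothetical co-runners, each bounded by a burst-plus-rate request curve
    co = [lambda t: 20 + math.ceil(0.004 * t),
          lambda t: 10 + math.ceil(0.002 * t)]
    print(extra_time_bound(own_wcet=5_000, co_runner_request_bounds=co,
                           service_time=50, horizon=1_000_000))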

Relevance: 100.00%

Abstract:

“Many-core” systems based on a Network-on-Chip (NoC) architecture offer various opportunities in terms of performance and computing capabilities, but at the same time they pose many challenges for the deployment of real-time systems, which must fulfill specific timing requirements at runtime. It is therefore essential to identify, at design time, the parameters that have an impact on the execution time of the tasks deployed on these systems, and to derive upper bounds on the other key parameters. The focus of this work is to determine an upper bound on the traversal time of a packet when it is transmitted over the NoC infrastructure. Towards this aim, we first identify and explore some limitations of the existing recursive-calculus-based approaches for computing the Worst-Case Traversal Time (WCTT) of a packet. We then extend the existing model by integrating the characteristics of the tasks that generate the packets. For this extended model, we propose an algorithm called “Branch and Prune” (BP), which provides tighter, yet still safe, estimates than the existing recursive-calculus-based approaches. Finally, we introduce a more general approach, “Branch, Prune and Collapse” (BPC), which offers a configurable parameter providing a flexible trade-off between the computational complexity and the tightness of the computed estimate. The recursive-calculus methods and BP correspond to two special cases of BPC, obtained when the trade-off parameter is set to 1 or ∞, respectively. Through simulations, we analyze this trade-off, reason about the implications of certain choices, and provide case studies to observe the impact of task parameters on the WCTT estimates.
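
The following toy sketch (hypothetical data, far simpler than a real wormhole NoC model) only illustrates the branch-and-prune flavour: explore which combinations of interfering packets can occur together given task-level constraints, keep the worst feasible combination, and prune partial combinations that can no longer exceed the worst case found so far.

    def wctt_branch_and_prune(base_latency, candidates, feasible):
        # candidates: list of (packet_id, delay it can add); feasible(set) says
        # whether that set of interferers can occur together (e.g. given periods)
        best = [0]

        def explore(i, chosen, delay, optimistic_rest):
            if delay + optimistic_rest <= best[0]:
                return                    # prune: cannot beat the current worst case
            if i == len(candidates):
                best[0] = max(best[0], delay)
                return
            pid, d = candidates[i]
            if feasible(chosen | {pid}):
                explore(i + 1, chosen | {pid}, delay + d, optimistic_rest - d)
            explore(i + 1, chosen, delay, optimistic_rest - d)

        explore(0, frozenset(), 0, sum(d for _, d in candidates))
        return base_latency + best[0]

    # hypothetical: packets "B" and "C" come from the same periodic task, so at
    # most one of them can interfere during a single traversal
    cands = [("A", 120), ("B", 80), ("C", 60), ("D", 40)]
    print(wctt_branch_and_prune(base_latency=300, candidates=cands,
                                feasible=lambda s: not ({"B", "C"} <= s)))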

Relevance: 100.00%

Abstract:

27th Euromicro Conference on Real-Time Systems (ECRTS 2015), Lund, Sweden.

Relevance: 100.00%

Abstract:

We investigate on-line prediction of individual sequences. Given a class of predictors, the goal is to predict as well as the best predictor in the class, where the loss is measured by the self-information (logarithmic) loss function. The excess loss (regret) is closely related to the redundancy of the associated lossless universal code. Using Shtarkov's theorem and tools from empirical process theory, we prove a general upper bound on the best possible (minimax) regret. The bound depends on certain metric properties of the class of predictors. We apply the bound to both parametric and nonparametric classes of predictors. Finally, we point out a suboptimal behavior of the popular Bayesian weighted-average algorithm.
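
For context, the standard finite-alphabet statement of Shtarkov's theorem (not the new bound proved in the paper): under the logarithmic loss, the minimax regret of a joint predictive distribution q over a class \mathcal{F} equals the log normaliser of the maximum-likelihood values, and is attained by the normalized maximum likelihood (NML) distribution q^{*}:

    \[
    \min_{q}\,\max_{x^n}\ \log\frac{\sup_{f\in\mathcal{F}} f(x^n)}{q(x^n)}
      \;=\; \log \sum_{x^n} \sup_{f\in\mathcal{F}} f(x^n),
    \qquad
    q^{*}(x^n) \;=\; \frac{\sup_{f\in\mathcal{F}} f(x^n)}{\sum_{y^n} \sup_{f\in\mathcal{F}} f(y^n)}.
    \]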

Relevance: 100.00%

Abstract:

The heat waves of 2003 in Western Europe and 2010 in Russia, commonly labelled as rare climatic anomalies outside previous experience, are often taken as harbingers of more frequent extremes in a global-warming-influenced future. However, a recent reconstruction of spring–summer temperatures for Western Europe suggests that significantly higher temperatures were likely in 1540. To check the plausibility of this result, we investigated the severity of the 1540 drought, drawing on the known feedback between soil desiccation and temperature. Based on more than 300 first-hand documentary weather reports originating from an area of 2 to 3 million km², we show that Europe was affected by an unprecedented 11-month-long megadrought. The estimated number of precipitation days and the precipitation amount for Central and Western Europe in 1540 are significantly lower than the 100-year minima of the instrumental measurement period for spring, summer and autumn. This result is supported by independent documentary evidence of extremely low river flows and Europe-wide wild, forest and settlement fires. We found that an event of this severity cannot be simulated by state-of-the-art climate models.

Relevance: 100.00%

Abstract:

Abstract machines provide a certain separation between platform-dependent and platform-independent concerns in compilation. Many of the differences between architectures are encapsulated in the specific abstract machine implementation, and the bytecode is left largely architecture-independent. Taking advantage of this fact, we present a framework for estimating upper and lower bounds on the execution times of logic programs running on a bytecode-based abstract machine. Our approach includes a one-time, program-independent profiling stage which calculates constants or functions bounding the execution time of each abstract machine instruction. Then, a compile-time cost estimation phase, using the instruction timing information, infers expressions giving platform-dependent upper and lower bounds on actual execution time as functions of input data sizes for each program. Working at the abstract machine level makes it possible to take into account low-level issues in new architectures and platforms by just re-executing the calibration stage, instead of having to tailor the analysis for each architecture and platform. Applications of such predicted execution times include debugging/verification of time properties, certification of time properties in mobile code, granularity control in parallel/distributed computing, and resource-oriented specialization.
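
A minimal sketch (hypothetical numbers, not the framework's actual machinery) of how the two stages combine: per-instruction cost bounds come from the one-time, program-independent calibration, per-program instruction-count bounds come from the compile-time cost analysis, and multiplying and summing them yields platform-dependent time bounds as functions of the input size.

    # calibration stage: per-instruction cost bounds measured once per platform
    # (e.g. nanoseconds per bytecode instruction; values are made up)
    instr_cost_ub = {"call": 12.0, "unify": 7.5, "get_constant": 3.2}
    instr_cost_lb = {"call": 9.0,  "unify": 5.0, "get_constant": 2.1}

    # analysis stage: bounds on how often each instruction is executed, as
    # functions of the input data size n (hypothetical list-processing predicate)
    def count_ub(n): return {"call": 2 * n + 1, "unify": 3 * n, "get_constant": n}
    def count_lb(n): return {"call": n + 1,     "unify": 2 * n, "get_constant": n}

    def time_bounds(n):
        ub = sum(count_ub(n)[i] * instr_cost_ub[i] for i in instr_cost_ub)
        lb = sum(count_lb(n)[i] * instr_cost_lb[i] for i in instr_cost_lb)
        return lb, ub

    print(time_bounds(1_000))   # (lower, upper) execution-time estimate for n = 1000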

Relevance: 100.00%

Abstract:

Predicting statically the running time of programs has many applications, ranging from task scheduling in parallel execution to proving the ability of a program to meet strict time constraints. A starting point for attacking this problem is to infer the computational complexity of such programs (or fragments thereof). This is one of the reasons why the development of static analysis techniques for inferring cost-related properties of programs (usually upper and/or lower bounds on actual costs) has received considerable attention.

Relevance: 100.00%

Abstract:

Effective static analyses have been proposed which infer bounds on the number of resolutions or reductions. These have the advantage of being independent of the platform on which the programs are executed, and they have been shown to be useful in a number of applications, such as granularity control in parallel execution. On the other hand, in distributed computation scenarios where platforms with different capabilities come into play, it is necessary to express costs in metrics that include the characteristics of the platform. In particular, it is especially interesting to be able to infer upper and lower bounds on actual execution times. With this objective in mind, we propose an approach which combines compile-time analysis for cost bounds with a one-time profiling of the platform in order to determine the values of certain parameters for a given platform. These parameters calibrate a cost model which, from then on, is able to compute statically time-bound functions for procedures and to predict with a significant degree of accuracy the execution times of such procedures on the given platform. The approach has been implemented and integrated in the CiaoPP system.
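
One way such a one-time profiling could determine the platform parameters (shown here only as a hypothetical illustration with made-up data) is to time calibration programs whose cost in platform-independent units, e.g. resolutions, is known, and fit a per-unit time that then converts statically inferred resolution bounds into time bounds:

    # (resolutions executed, measured wall-clock seconds) for calibration runs
    measured = [(1_000, 0.0021), (5_000, 0.0098), (20_000, 0.0405)]

    # least-squares fit through the origin: seconds per resolution on this platform
    secs_per_resolution = (sum(r * t for r, t in measured)
                           / sum(r * r for r, _ in measured))

    def time_bound(resolution_bound):
        # converts a platform-independent cost bound into a time bound
        return secs_per_resolution * resolution_bound

    print(time_bound(120_000))   # predicted seconds for a 120 000-resolution bound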