951 results for Upper bound estimate


Relevance:

100.00%

Publisher:

Abstract:

In embedded systems, the timing behaviour of the control mechanisms is sometimes of critical importance for operational safety. These high-criticality systems require strict compliance with the offline predicted task execution times. The execution time of a task subject to preemption may differ significantly from its non-preemptive execution time. Hence, when preemptive scheduling is required to operate the workload, preemption delay estimation is of paramount importance. In this paper a preemption delay estimation method for floating non-preemptive scheduling policies is presented. This work builds on [1], extending the model and optimising it considerably. The preemption delay function is substantially tightened in the context of WCET analysis. Moreover, additional information is provided in the form of an extrinsic cache-miss function, which enables the method to produce a solution in situations where the non-preemptive regions are small. Finally, experimental results from the implementation of the proposed solutions in Heptane are provided for real benchmarks, validating the significance of this work.
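
The abstract does not reproduce the delay function itself. Purely for orientation, the sketch below illustrates the classical flavour of such bounds, charging one cache-block reload for every "useful" cache block a preemption may evict; the function and parameter names (crpd_upper_bound, useful_cache_blocks, block_reload_time, max_preemptions) are hypothetical and not taken from the paper.

```python
# Illustrative only: a classical CRPD-style bound, not the paper's method.
def crpd_upper_bound(useful_cache_blocks: int,
                     block_reload_time: float,
                     max_preemptions: int) -> float:
    """Coarse upper bound on cache-related preemption delay (CRPD).

    useful_cache_blocks: maximum number of cache blocks that are still useful
                         (i.e. will be reused) at any program point.
    block_reload_time:   worst-case time to reload one evicted cache block.
    max_preemptions:     bound on the number of preemptions the task suffers.
    """
    per_preemption_delay = useful_cache_blocks * block_reload_time
    return max_preemptions * per_preemption_delay

# Example: 12 useful blocks, 100 ns per reload, at most 3 preemptions.
print(crpd_upper_bound(12, 100e-9, 3))  # -> 3.6e-06 s of extra execution time
```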

Relevance:

100.00%

Publisher:

Abstract:

Contention on the memory bus in COTS-based multicore systems is becoming a major determining factor of a task's execution time. Analyzing this extra execution time is non-trivial because (i) bus arbitration protocols in such systems are often undocumented and (ii) the times at which the memory bus is requested are not explicitly controlled by the operating system scheduler; they are instead a result of cache misses. We present a method for finding an upper bound on the extra execution time of a task due to contention on the memory bus in COTS-based multicore systems. This method makes no assumptions about the bus arbitration protocol (other than that it is work-conserving).
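
The abstract states only the work-conserving assumption, so as a rough illustration the sketch below uses the corresponding busy-time argument: while one of the task's requests is pending, a work-conserving bus is never idle, so the task's total stall time cannot exceed the bus time consumed by the other cores; an optional round-robin flag shows how a fairer arbiter tightens the bound. This is not the paper's analysis, and the names (bus_contention_bound, own_misses, other_core_misses, bus_access_time) are hypothetical.

```python
# Illustrative only: a coarse bus-contention bound, not the paper's method.
def bus_contention_bound(own_misses, other_core_misses, bus_access_time,
                         round_robin=False):
    """Upper bound on extra execution time due to memory-bus contention.

    own_misses:        bus requests (cache misses) of the task under analysis.
    other_core_misses: per-core bounds on bus requests issued by the other
                       cores during the task's execution window.
    bus_access_time:   worst-case duration of a single bus transaction.
    """
    # Work-conserving only: the bus never idles while we wait, so our stall
    # time is at most the bus time consumed by the other cores' requests.
    total_other = sum(other_core_misses) * bus_access_time
    if not round_robin:
        return total_other
    # Extra assumption (round-robin arbiter): each of our requests waits
    # behind at most one request per other core.
    per_request_wait = len(other_core_misses) * bus_access_time
    return min(own_misses * per_request_wait, total_other)

# Example: 1000 own misses, three other cores, 40 ns per bus transaction.
print(bus_contention_bound(1000, [500, 800, 200], 40e-9))
```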

Relevance:

100.00%

Publisher:

Abstract:

Graphics processing units (GPUs) can today be used for computations that go beyond graphics, and such use can attain performance that is orders of magnitude greater than that of a conventional processor. The software executing on a graphics processor is composed of a set of (often thousands of) threads which operate on different parts of the data and thereby jointly compute a result that is delivered to another thread executing on the main processor. Hence the response time of a thread executing on the main processor depends on the finishing time of the threads executing on the GPU. We therefore present a simple method for calculating an upper bound on the finishing time of threads executing on a GPU, in particular the NVIDIA Fermi. Developing such a method is non-trivial because threads executing on a GPU share hardware resources at very fine granularity.
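
The abstract does not give the bound, so purely for orientation the sketch below applies a Graham-style greedy-scheduling argument: if a kernel launches a number of thread blocks onto a fixed number of concurrent execution slots and each block finishes within some worst-case time, the makespan is bounded by the total work divided by the number of slots plus one block time. This is not the Fermi-specific analysis of the paper, and all names (gpu_finish_time_bound, num_blocks, num_sms, blocks_per_sm, wcet_per_block) are hypothetical; wcet_per_block is assumed to already account for intra-SM resource sharing.

```python
# Illustrative only: a Graham-style greedy-scheduling bound, not the paper's
# Fermi-specific analysis.
def gpu_finish_time_bound(num_blocks: int,
                          num_sms: int,
                          blocks_per_sm: int,
                          wcet_per_block: float) -> float:
    """Coarse upper bound on the finishing time of a kernel's thread blocks.

    num_blocks:     thread blocks launched by the kernel.
    num_sms:        streaming multiprocessors on the GPU.
    blocks_per_sm:  thread blocks that can be resident on one SM at a time.
    wcet_per_block: worst-case execution time of a single block (assumed to
                    include the effect of sharing the SM with resident peers).
    """
    slots = num_sms * blocks_per_sm  # blocks that can execute concurrently
    # Greedy dispatch: total work spread over all slots, plus one block that
    # may still be running after the last slot becomes free.
    return (num_blocks / slots + 1) * wcet_per_block

# Example: 4096 blocks on 14 SMs, 8 resident blocks per SM, 50 us per block.
print(gpu_finish_time_bound(4096, 14, 8, 50e-6))
```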

Relevance:

100.00%

Publisher:

Abstract:

5th Brazilian Symposium on Computing Systems Engineering (SBESC 2015), 3–6 November 2015, Foz do Iguaçu, Brazil.

Relevance:

100.00%

Publisher:

Abstract:

See the abstract at the beginning of the document in the attached file.

Relevance:

100.00%

Publisher:

Abstract:

"Vegeu el resum a l'inici del document del fitxer adjunt."

Relevance:

100.00%

Publisher:

Abstract:

"Vegeu el resum a l'inici del document del fitxer adjunt."

Relevance:

100.00%

Publisher:

Abstract:

How much would output increase if underdeveloped economies were to increase their levels of schooling? We contribute to the development accounting literature by describing a non-parametric upper bound on the increase in output that can be generated by more schooling. The advantage of our approach is that the upper bound is valid for any number of schooling levels with arbitrary patterns of substitution/complementarity. Another advantage is that the upper bound is robust to certain forms of endogenous technology response to changes in schooling. We also quantify the upper bound for all economies with the necessary data, compare our results with the standard development accounting approach, and provide an update on the results using the standard approach for a large sample of countries.
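
The abstract does not spell out how the bound is built. One textbook way to obtain such an upper bound, sketched below purely as an illustration and not necessarily the authors' construction, is to value the counterfactual schooling composition at current wages: if wages equal marginal products and aggregate output is concave in the labour inputs, this overstates the true output gain. The function and variable names are hypothetical.

```python
# Illustrative only: a concavity-based upper bound on the output gain from
# more schooling, not necessarily the construction used in the paper.
def output_gain_upper_bound(current_labour, counterfactual_labour, wages):
    """Upper bound on the output increase from a change in schooling.

    current_labour / counterfactual_labour: workers by schooling level.
    wages: current wage per worker at each schooling level (assumed to equal
           marginal products).
    """
    return sum(wages[s] * (counterfactual_labour[s] - current_labour[s])
               for s in wages)

# Example with hypothetical numbers: move 10 workers from primary to tertiary.
L_now = {"primary": 60.0, "secondary": 30.0, "tertiary": 10.0}
L_new = {"primary": 50.0, "secondary": 30.0, "tertiary": 20.0}
w     = {"primary": 1.0,  "secondary": 1.5,  "tertiary": 2.5}
print(output_gain_upper_bound(L_now, L_new, w))  # -> 15.0 units of output
```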

Relevance:

100.00%

Publisher:

Abstract:

The Tokai to Kamioka (T2K) long-baseline neutrino experiment consists of a muon neutrino beam produced at the J-PARC accelerator, a near detector complex and a large far detector 295 km away. The present work utilizes the T2K event timing measurements at the near and far detectors to study the neutrino time of flight as a function of derived neutrino energy. Under the assumption of a relativistic relation between energy and time of flight, constraints on the neutrino rest mass can be derived. The sub-GeV neutrino beam, in conjunction with timing precision of order tens of ns, provides sensitivity to neutrino masses in the few-MeV/c^2 range. We study the distribution of relative arrival times of muon and electron neutrino candidate events at the T2K far detector as a function of neutrino energy. The 90% C.L. upper limit on the mixture of neutrino mass eigenstates represented in the data sample is found to be m^2 < 5.6 MeV^2/c^4.
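
The quoted sensitivity follows from standard relativistic kinematics: a neutrino of mass m and energy E >> mc^2 trails a light-speed signal over a baseline L by approximately Delta_t ~ (L / 2c) (mc^2 / E)^2. The snippet below is only a numerical sanity check of that textbook relation, not T2K analysis code; the constant and function names are hypothetical.

```python
# Numerical check of Delta_t ~ (L / 2c) * (m c^2 / E)^2, the relativistic
# time-of-flight delay of a massive neutrino relative to light.
C = 299_792_458.0    # speed of light in m/s
BASELINE_M = 295e3   # T2K near-to-far baseline, about 295 km

def tof_delay_ns(mass_mev: float, energy_mev: float) -> float:
    """Delay in nanoseconds of a massive neutrino over the baseline."""
    return (BASELINE_M / (2 * C)) * (mass_mev / energy_mev) ** 2 * 1e9

# A few-MeV/c^2 mass with a ~600 MeV beam gives a delay of a few tens of ns,
# which is why ~10 ns timing resolution probes masses in this range.
print(tof_delay_ns(mass_mev=5.0, energy_mev=600.0))  # roughly 34 ns
```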

Relevance:

100.00%

Publisher:

Abstract:

Let D be a link diagram with n crossings, let sA and sB be its extreme states, and let |sAD| (respectively, |sBD|) be the number of simple closed curves that appear when smoothing D according to sA (respectively, sB). We give a general formula for the sum |sAD| + |sBD| for a k-almost alternating diagram D, for any k, characterizing this sum as the number of faces in an appropriate triangulation of an appropriate surface with boundary. When D is dealternator connected, the triangulation is especially simple, yielding |sAD| + |sBD| = n + 2 - 2k. This gives a simple geometric proof of the upper bound on the span of the Jones polynomial for dealternator connected diagrams, a result first obtained by Zhu [On Kauffman brackets, J. Knot Theory Ramifications 6(1) (1997) 125–148]. Another upper bound on the span of the Jones polynomial, for dealternator connected and dealternator reduced diagrams, discovered historically first by Adams et al. [Almost alternating links, Topology Appl. 46(2) (1992) 151–165], is obtained as a corollary. As a new application, we prove that the Turaev genus is equal to the number k of dealternator crossings for any dealternator connected diagram.
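
For orientation, the way such a state-sum count feeds a span bound can be sketched with the classical Kauffman-bracket estimate (the inequality below is the standard one, not a formula quoted from the paper): since $\operatorname{span}\langle D\rangle \le 2n + 2(|s_A D| + |s_B D|) - 4$ and $\operatorname{span} V_L = \tfrac14 \operatorname{span}\langle D\rangle$, substituting $|s_A D| + |s_B D| = n + 2 - 2k$ gives

\[
\operatorname{span} V_L \;\le\; \tfrac14\bigl(2n + 2(n + 2 - 2k) - 4\bigr) \;=\; n - k .
\]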

Relevance:

100.00%

Publisher:

Abstract:

The synthesis of nano-sized ZIF-11 with an average size of 36 ± 6 nm is reported. This material has been named nano-zeolitic imidazolate framework-11 (nZIF-11). It has the same chemical composition and thermal stability as, and analogous H2 and CO2 adsorption properties to, conventional microcrystalline ZIF-11 (1.9 ± 0.9 μm). nZIF-11 has been obtained by a centrifugation route, typically used for solid separation, as a fast new technique (pioneering for MOFs) for obtaining nanomaterials in which the temperature, time and rotation speed can easily be controlled. Compared to the traditional synthesis, consisting of stirring followed by separation, the reaction time was lowered from several hours to a few minutes with this centrifugation technique. For the same reaction time (2, 5 or 10 min), micro-sized ZIF-11 was obtained with the traditional synthesis, whereas nano-scale ZIF-11 was achieved only with centrifugation synthesis. The small particle size of nZIF-11 allowed the wet MOF sample to be used as a colloidal suspension stable in chloroform. This helped to prepare mixed matrix membranes (MMMs) by direct addition of the membrane polymer (polyimide Matrimid®) to the colloidal suspension, avoiding the particle agglomeration that results from drying. The MMMs were tested for H2/CO2 separation, improving on the pure polymer membrane performance, with a H2 permeability of 95.9 Barrer and a H2/CO2 separation selectivity of 4.4 at 35 °C. When measured at 200 °C, these values increased to 535 Barrer and 9.1, respectively.

Relevance:

100.00%

Publisher:

Abstract:

"August 9, 1954"

Relevance:

100.00%

Publisher:

Abstract:

In the Bayesian framework, predictions for a regression problem are expressed in terms of a distribution of output values. The mode of this distribution corresponds to the most probable output, while the uncertainty associated with the predictions can conveniently be expressed in terms of error bars. In this paper we consider the evaluation of error bars in the context of the class of generalized linear regression models. We provide insights into the dependence of the error bars on the location of the data points and we derive an upper bound on the true error bars in terms of the contributions from individual data points which are themselves easily evaluated.
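
For context, in a generalized linear regression model with a fixed basis, a Gaussian prior over the weights and Gaussian output noise, the error bar at an input is the predictive standard deviation. The sketch below computes that standard quantity with NumPy; it is not the paper's per-data-point upper bound, and the names (error_bar, basis, alpha, beta) are hypothetical.

```python
# Minimal sketch of standard Bayesian error bars for a generalized linear
# regression model; not the paper's per-data-point bound.
import numpy as np

def error_bar(x_star, X, basis, alpha=1.0, beta=25.0):
    """Predictive standard deviation at x_star.

    basis: function mapping an input to its feature vector phi(x).
    alpha: precision of the Gaussian prior over the weights.
    beta:  precision (inverse variance) of the output noise.
    """
    Phi = np.array([basis(x) for x in X])                   # design matrix
    A = alpha * np.eye(Phi.shape[1]) + beta * Phi.T @ Phi   # posterior precision
    phi_star = basis(x_star)
    # Predictive variance = noise variance + weight-uncertainty contribution.
    var = 1.0 / beta + phi_star @ np.linalg.solve(A, phi_star)
    return np.sqrt(var)

# Example: quadratic basis on ten training inputs in [0, 1].
basis = lambda x: np.array([1.0, x, x * x])
X_train = np.linspace(0.0, 1.0, 10)
print(error_bar(0.5, X_train, basis))   # small: x = 0.5 is well covered
print(error_bar(2.0, X_train, basis))   # larger: x = 2.0 is far from the data
```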

Relevance:

100.00%

Publisher:

Abstract:

2000 Mathematics Subject Classification: 05C55.

Relevance:

100.00%

Publisher:

Abstract:

By an exponential sum of the Fourier coefficients of a holomorphic cusp form we mean the sum formed by first taking the Fourier series of the said form, then cutting away the beginning and the tail and considering the remaining sum on the real axis. For simplicity's sake, the coefficients are typically normalized. This is not essential, however, as the normalization can be introduced and removed simply by partial summation. We improve the approximate functional equation for the exponential sums of the Fourier coefficients of holomorphic cusp forms by giving an explicit upper bound for the error term appearing in the equation. The approximate functional equation is originally due to Jutila [9] and is a crucial tool for transforming sums into shorter sums. This transformation changes the point of the real axis at which the sum is considered. We also improve the known upper bounds for the size of the exponential sums. For very short sums we do not obtain anything better than the trivial estimate obtained by multiplying the upper bound for a single Fourier coefficient (the coefficients are bounded by the divisor function, as Deligne [2] showed) by the number of coefficients. This estimate is extremely rough, as no possible cancellation is taken into account; for short sums, however, it is unclear whether any significant amount of cancellation occurs.
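
To make the "very easy estimate" explicit (a sketch in my own notation, writing the normalized coefficients as $a(n)$, $e(x) = e^{2\pi i x}$, and a sum of length $\Delta$ with $1 \le \Delta \le M$): Deligne's bound gives $|a(n)| \le d(n) \ll_\varepsilon n^{\varepsilon}$, so with no cancellation taken into account

\[
\Bigl|\sum_{M \le n \le M+\Delta} a(n)\, e(n\alpha)\Bigr| \;\le\; \sum_{M \le n \le M+\Delta} d(n) \;\ll_\varepsilon\; \Delta\, M^{\varepsilon}.
\]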