104 results for Response time (computer systems)

in Consorci de Serveis Universitaris de Catalunya (CSUC), Spain


Relevance:

100.00%

Publisher:

Abstract:

Critical real-time embedded (CRTE) systems require safe and tight worst-case execution time (WCET) estimations to provide the required safety levels and keep costs low. However, CRTE systems require increasing performance to satisfy the performance needs of existing and new features. Such performance can only be achieved by means of more aggressive hardware architectures, which are much harder to analyze from a WCET perspective. The main features considered include cache memories and multi-core processors. Thus, although such features provide higher performance, current WCET analysis methods are unable to provide tight WCET estimations. In fact, WCET estimations become worse than for simpler and less powerful hardware. The main reason is the fact that hardware behavior is deterministic but unknown and, therefore, the worst-case behavior must be assumed most of the time, leading to large WCET estimations. The purpose of this project is to develop new hardware designs together with WCET analysis tools able to provide tight and safe WCET estimations. In order to do so, those pieces of hardware whose behavior is not easily analyzable due to lack of accurate information during WCET analysis will be enhanced to produce a probabilistically analyzable behavior. Thus, even if the worst-case behavior cannot be removed, its probability can be bounded and, hence, a safe and tight WCET can be provided for a particular safety level in line with the safety levels of the remaining components of the system. During the first year of the project we developed most of the evaluation infrastructure as well as the hardware techniques to analyze cache memories. During the second year those techniques were evaluated, and new purely-software techniques were developed.
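
The probabilistic framing above can be made concrete: instead of a single hard bound, one reports an execution-time threshold whose probability of being exceeded stays below a target. The sketch below is not the project's analysis method (which relies on probabilistically analyzable hardware); it is a minimal, measurement-based illustration, with assumed sample data and exceedance target, of how an empirical pWCET-style bound can be read off a set of observed execution times.

```python
import numpy as np

def empirical_pwcet(samples, exceedance_prob=1e-3):
    """Return an empirical quantile of the observed execution times such that
    the fraction of samples above it is at most exceedance_prob.

    Illustrative only: industrial probabilistic timing analysis relies on
    randomized hardware and extreme-value statistics, not raw sample quantiles.
    """
    times = np.sort(np.asarray(samples, dtype=float))
    n = len(times)
    k = int(np.ceil((1.0 - exceedance_prob) * n)) - 1
    return times[min(max(k, 0), n - 1)]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical measured execution times (cycles) of one task release.
    observed = 10_000 + rng.gamma(shape=2.0, scale=300.0, size=50_000)
    print("pWCET estimate at 1e-3 exceedance:", empirical_pwcet(observed, 1e-3))
```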

Relevance:

100.00%

Publisher:

Abstract:

The European Space Agency's Gaia mission will create the largest and most precise three-dimensional chart of our galaxy (the Milky Way), by providing unprecedented position, parallax, proper motion, and radial velocity measurements for about one billion stars. The resulting catalogue will be made available to the scientific community and will be analyzed in many different ways, including the production of a variety of statistics. The latter will often entail the generation of multidimensional histograms and hypercubes as part of the precomputed statistics for each data release, or for scientific analysis involving either the final data products or the raw data coming from the satellite instruments. In this paper we present and analyze a generic framework that allows the hypercube generation to be easily done within a MapReduce infrastructure, providing all the advantages of the new Big Data analysis paradigm but without dealing with any specific interface to the lower level distributed system implementation (Hadoop). Furthermore, we show how executing the framework for different data storage model configurations (i.e. row- or column-oriented) and compression techniques can considerably improve the response time of this type of workload for the currently available simulated data of the mission. In addition, we put forward the advantages and shortcomings of the deployment of the framework on a public cloud provider, benchmark it against other popular solutions available (that are not always the best for such ad-hoc applications), and describe some user experiences with the framework, which was employed for a number of dedicated workshops on astronomical data analysis techniques.
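
The core of such a framework reduces to a pair of map and reduce functions: the mapper converts each catalogue row into the key of the hypercube cell it falls into, and the reducer aggregates counts per cell. The sketch below is a hedged, self-contained illustration in plain Python; the bin edges, column names and in-memory driver are assumptions, and the real framework runs the equivalent logic on Hadoop rather than this toy loop.

```python
from collections import defaultdict

# Assumed hypercube definition: bin edges per dimension (illustrative only).
BINS = {
    "g_mag":    [6, 10, 14, 18, 22],        # magnitude bins
    "parallax": [0.0, 0.5, 1.0, 5.0, 50.0], # milliarcseconds
}

def find_bin(edges, value):
    """Index of the half-open bin [edges[i], edges[i+1]) containing value, or None."""
    for i in range(len(edges) - 1):
        if edges[i] <= value < edges[i + 1]:
            return i
    return None

def mapper(row):
    """Emit ((bin indices...), 1) for one catalogue row, or nothing if out of range."""
    key = tuple(find_bin(BINS[dim], row[dim]) for dim in BINS)
    if None not in key:
        yield key, 1

def reducer(key, counts):
    """Sum the partial counts of one hypercube cell."""
    return key, sum(counts)

def run(rows):
    """Tiny in-memory driver standing in for the MapReduce runtime."""
    partial = defaultdict(list)
    for row in rows:
        for key, value in mapper(row):
            partial[key].append(value)
    return dict(reducer(k, v) for k, v in partial.items())

if __name__ == "__main__":
    sample = [{"g_mag": 12.3, "parallax": 0.8},
              {"g_mag": 15.1, "parallax": 0.2},
              {"g_mag": 12.9, "parallax": 0.7}]
    print(run(sample))  # e.g. {(1, 1): 2, (2, 0): 1}
```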

Relevance:

100.00%

Publisher:

Abstract:

The main goal of this project is to evaluate GPU technology to determine whether it can be useful in the database field. Specifically, the problem of analytical queries is used with the aim of obtaining a faster response time. To this end, the standard TPC-H benchmark is run in order to compare three CPU database management systems with another one implemented on the GPU.
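
A response-time comparison of this kind comes down to running the same analytical query on each engine and recording wall-clock times. The snippet below is a minimal, engine-agnostic timing harness; the SQLite database, the query text and the repetition count are assumptions used purely for illustration, whereas the project itself ran the TPC-H query set against real CPU and GPU engines.

```python
import sqlite3
import statistics
import time

def time_query(connection, sql, repetitions=5):
    """Run one analytical query several times and report response-time statistics."""
    samples = []
    for _ in range(repetitions):
        start = time.perf_counter()
        connection.execute(sql).fetchall()   # materialize the full result set
        samples.append(time.perf_counter() - start)
    return min(samples), statistics.median(samples), max(samples)

if __name__ == "__main__":
    # Illustrative stand-in for a TPC-H-style aggregation query.
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE lineitem (l_quantity REAL, l_extendedprice REAL)")
    con.executemany("INSERT INTO lineitem VALUES (?, ?)",
                    [(i % 50, float(i)) for i in range(100_000)])
    sql = "SELECT l_quantity, SUM(l_extendedprice) FROM lineitem GROUP BY l_quantity"
    print("min/median/max response time (s):", time_query(con, sql))
```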

Relevance:

100.00%

Publisher:

Abstract:

This project deals with the usability of accounting applications, focusing on the definition of a user interface that makes the use of this kind of application as intuitive as possible and allows the user to enter a large number of accounting entries in a limited time.

Relevance:

100.00%

Publisher:

Abstract:

This project seeks to reflect the need to apply engineering and usability to improve the real-time control systems that are used today to control critical processes.

Relevance:

100.00%

Publisher:

Abstract:

Evaluation of the ability of the mainline Linux kernel to meet real-time requirements.
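
A common way to probe such a capability is to measure wake-up latency: a periodic task asks to sleep for a fixed interval and the difference between the requested and the actual wake-up time is recorded (tools such as cyclictest do this far more rigorously). The sketch below is only a coarse, user-space approximation of that idea; the SCHED_FIFO priority, the period and the sample count are assumptions, and the scheduling call only succeeds when run with root privileges on Linux.

```python
import os
import time

PERIOD_NS = 1_000_000   # 1 ms nominal period (assumed)
SAMPLES = 1_000

def measure_wakeup_jitter():
    """Sleep for a fixed period repeatedly and record how late each wake-up is (ns)."""
    try:
        # Real-time FIFO scheduling; requires root privileges on Linux.
        os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(80))
    except (PermissionError, AttributeError):
        print("warning: running without SCHED_FIFO; figures will be pessimistic")
    latencies = []
    for _ in range(SAMPLES):
        target = time.monotonic_ns() + PERIOD_NS
        time.sleep(PERIOD_NS / 1e9)
        latencies.append(max(0, time.monotonic_ns() - target))
    return latencies

if __name__ == "__main__":
    lat = measure_wakeup_jitter()
    print(f"max wake-up latency: {max(lat) / 1e3:.1f} us, "
          f"mean: {sum(lat) / len(lat) / 1e3:.1f} us")
```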

Relevance:

100.00%

Publisher:

Abstract:

Nowadays, electronic data processing systems are increasingly significant within the industrial sector. Many needs arise in the world of authentication systems, avionics, data storage equipment, telecommunications, and so on. These technological needs demand to be controlled by a reliable, robust system that is fully responsive to external events and that correctly meets the imposed timing constraints, so that it fulfils its purpose efficiently. This is where real-time embedded systems come into play, offering high reliability and availability, a fast response to external events, strong operational guarantees and a wide range of applications. This project is intended as an introduction to the world of embedded systems, and also explains the operation of the FreeRTOS real-time operating system, whose programming model is based on tasks that are independent of each other. We give an overview of its operating characteristics, of how it organizes tasks by means of a scheduler, and some examples for designing applications with it.

Relevance:

100.00%

Publisher:

Abstract:

In a recent paper, [J. M. Porrà, J. Masoliver, and K. Lindenberg, Phys. Rev. E 48, 951 (1993)], we derived the equations for the mean first-passage time for systems driven by the coin-toss square wave, a particular type of dichotomous noisy signal, to reach either one of two boundaries. The coin-toss square wave, which we here call periodic-persistent dichotomous noise, is a random signal that can only change its value at specified time points, where it changes its value with probability q or retains its previous value with probability p=1-q. These time points occur periodically at time intervals t. Here we consider the stationary version of this signal, that is, equilibrium periodic-persistent noise. We show that the mean first-passage time for systems driven by this stationary noise does not show either the discontinuities or the oscillations found in the case of nonstationary noise. We also discuss the existence of discontinuities in the mean first-passage time for random one-dimensional stochastic maps.
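
A Monte Carlo estimate of the mean first-passage time makes the setup concrete: the driving signal takes values ±1, it may flip only every τ time units (with probability q), and the trajectory is integrated until it leaves the interval. The sketch below is an illustrative simulation, not the analytical treatment of the paper; the drift-free dynamics dx/dt = ξ(t), the interval (-L, L) and all numerical parameters are assumptions.

```python
import random

def mfpt_periodic_persistent(q=0.3, tau=0.1, L=1.0, dt=0.001, trials=2_000):
    """Monte Carlo mean first-passage time out of (-L, L) for dx/dt = xi(t),
    where xi(t) is a +/-1 signal that may flip only every tau time units
    (flip probability q, persistence probability 1 - q)."""
    total = 0.0
    for _ in range(trials):
        x, t = 0.0, 0.0
        xi = random.choice((-1.0, 1.0))
        # Random phase of the switching grid: equilibrium (stationary) noise.
        next_switch = random.uniform(0.0, tau)
        while -L < x < L:
            if t >= next_switch:
                if random.random() < q:
                    xi = -xi
                next_switch += tau
            x += xi * dt
            t += dt
        total += t
    return total / trials

if __name__ == "__main__":
    print("estimated MFPT:", mfpt_periodic_persistent())
```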

Relevance:

100.00%

Publisher:

Abstract:

The silicon photomultiplier (SiPM) is a novel detector technology that has undergone rapid development in the last few years, owing to its single-photon resolution and ultra-fast response time. However, the typically high dark count rates of the sensor may prevent the detection of low-intensity radiation fluxes. In this article, time-gated operation with short active periods in the nanosecond range is proposed as a solution to reduce the number of cells fired due to noise and thus increase the dynamic range. The technique is aimed at application fields that operate under a trigger command, such as gated fluorescence lifetime imaging microscopy.
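
The benefit of gating can be illustrated with a simple Poisson model: dark counts arrive at a roughly constant rate, so shortening the active window from a long integration time to a few nanoseconds reduces the expected number of noise-fired cells proportionally, leaving more of the dynamic range for signal photons. The sketch below is a back-of-the-envelope simulation under assumed parameters (dark count rate and gate widths), not a model of any specific device.

```python
import numpy as np

def noise_cells(dark_count_rate_hz, gate_ns, trials=100_000, seed=1):
    """Simulated number of cells fired by dark counts within one gate window."""
    rng = np.random.default_rng(seed)
    expected = dark_count_rate_hz * gate_ns * 1e-9   # mean dark counts per gate
    return rng.poisson(expected, size=trials)

if __name__ == "__main__":
    DCR = 5e6                      # assumed dark count rate: 5 MHz for the whole device
    for gate in (1_000.0, 10.0):   # long 1 us window vs short 10 ns gate
        n = noise_cells(DCR, gate)
        print(f"gate = {gate:7.1f} ns -> mean dark cells per gate = {n.mean():.3f}")
```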

Relevance:

100.00%

Publisher:

Abstract:

In this work, we present an integral scheduling system for non-dedicated clusters, termed CISNE-P, which ensures the performance required by the local applications while simultaneously allocating cluster resources to parallel jobs. Our approach solves the problem efficiently by using a social contract technique. This kind of technique is based on reserving computational resources, preserving a predetermined response time for local users. CISNE-P is a middleware which includes both a previously developed space-sharing job scheduler and a dynamic coscheduling system, a time-sharing scheduling component. The experimentation performed on a Linux cluster shows that these two scheduler components are complementary and that good coordination improves global performance significantly. We also compare two different CISNE-P implementations: one developed inside the kernel, and the other implemented entirely in user space.
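
The "social contract" idea can be reduced to a simple admission rule: each node permanently keeps enough capacity free to preserve the agreed response time for its local user, and a parallel job is started only if the leftover capacity on every node it needs can host its share. The sketch below is a toy rendering of that rule; the node model and reservation figures are assumptions, not the CISNE-P implementation.

```python
from dataclasses import dataclass

@dataclass
class Node:
    cpu: float            # total CPU capacity (cores)
    mem: float            # total memory (GB)
    local_cpu: float      # CPU currently used by the workstation owner
    reserved_cpu: float   # share kept free for local users (the "social contract")

    def spare_cpu(self) -> float:
        return self.cpu - self.reserved_cpu - self.local_cpu

def admit(job_cpu: float, job_mem_per_node: float, nodes: list[Node]) -> bool:
    """Admit a parallel job only if every node can host its share without
    touching the capacity reserved for local, interactive users."""
    per_node_cpu = job_cpu / len(nodes)
    return all(n.spare_cpu() >= per_node_cpu and n.mem >= job_mem_per_node
               for n in nodes)

if __name__ == "__main__":
    cluster = [Node(cpu=4, mem=8, local_cpu=0.5, reserved_cpu=1.0) for _ in range(4)]
    print(admit(job_cpu=8.0, job_mem_per_node=2.0, nodes=cluster))   # True: 2.0 <= 2.5 spare
    cluster[0].local_cpu = 2.8
    print(admit(job_cpu=8.0, job_mem_per_node=2.0, nodes=cluster))   # False on the busy node
```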

Relevance:

100.00%

Publisher:

Abstract:

Everyday tasks seldom involve isolated actions but rather sequences of them. We can see whether previous actions influence the current one by exploring the response time to controlled sequences of stimuli. Specifically, depending on the response-stimulus temporal interval (RSI), different mechanisms have been proposed to explain sequential effects in two-choice serial response tasks. Whereas an automatic facilitation mechanism is thought to produce a benefit for response repetitions at short RSIs, subjective expectancies are considered to replace the automatic facilitation at longer RSIs, producing a cost-benefit pattern: repetitions are faster after other repetitions but slower after alternations. However, there is no direct evidence showing the impact of subjective expectancies on sequential effects. By using a fixed sequence, the results of the reported experiment showed that the repetition effect was enhanced in participants who acquired complete knowledge of the order. Nevertheless, a similar cost-benefit pattern was observed in all participants and in all learning blocks. Therefore, the results of the experiment suggest that sequential effects, including the cost-benefit pattern, are the consequence of automatic mechanisms which operate independently of (and simultaneously with) explicit knowledge of the sequence or other subjective expectancies.

Relevance:

100.00%

Publisher:

Abstract:

The advent of the Internet had a great impact on distance education, and e-learning has rapidly become a killer application. Education institutions worldwide are taking advantage of the available technology in order to facilitate education for a growing audience. Every day, more and more people use e-learning systems, environments and contents for both training and learning. E-learning promotes education among people who, for different reasons, could not have access to education: people who could not travel, people with very little free time, or with disabilities, etc. As e-learning systems grow and more people access them, it is necessary, when designing virtual environments, to consider the diverse needs and characteristics of different users. This allows building systems that people can use easily, efficiently and effectively, where the learning process leads to a good user experience and becomes a good learning experience.

Relevance:

100.00%

Publisher:

Abstract:

Network virtualisation is considerably gaining attention as a solution to the ossification of the Internet. However, the success of network virtualisation will depend in part on how efficiently the virtual networks utilise substrate network resources. In this paper, we propose a machine learning-based approach to virtual network resource management. We propose to model the substrate network as a decentralised system and introduce a learning algorithm in each substrate node and substrate link, providing self-organization capabilities. We propose a multi-agent learning algorithm that carries out the substrate network resource management in a coordinated and decentralised way. The task of these agents is to use evaluative feedback to learn an optimal policy so as to dynamically allocate network resources to virtual nodes and links. The agents ensure that while the virtual networks have the resources they need at any given time, only the required resources are reserved for this purpose. Simulations show that our dynamic approach significantly improves the virtual network acceptance ratio and the maximum number of accepted virtual network requests at any time, while ensuring that virtual network quality of service requirements such as packet drop rate and virtual link delay are not affected.
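
The learning loop each agent runs can be illustrated with a minimal Q-learning sketch: the state is a coarse measure of how the reserved share of a substrate link compares with observed demand, the action raises or lowers that share, and the reward favours reserving just enough capacity to avoid drops without over-reserving. Everything below (state discretisation, reward shape, parameters) is an assumed toy setup, not the algorithm evaluated in the paper.

```python
import random
from collections import defaultdict

ACTIONS = (-0.1, 0.0, +0.1)          # decrease / keep / increase the reserved share
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

def state(reserved, demand):
    """Coarse state: how the reserved share compares with the observed demand."""
    margin = reserved - demand
    return "short" if margin < 0 else ("tight" if margin < 0.1 else "loose")

def reward(reserved, demand):
    """Penalise drops (demand above reservation) and over-reservation."""
    return -10.0 * max(0.0, demand - reserved) - 1.0 * max(0.0, reserved - demand)

def run_agent(episodes=5_000, seed=0):
    random.seed(seed)
    q = defaultdict(float)            # Q[(state, action)], zero-initialised
    reserved = 0.5
    for _ in range(episodes):
        demand = min(1.0, max(0.0, random.gauss(0.4, 0.1)))   # observed virtual-link load
        s = state(reserved, demand)
        a = (random.choice(ACTIONS) if random.random() < EPSILON
             else max(ACTIONS, key=lambda act: q[(s, act)]))
        reserved = min(1.0, max(0.0, reserved + a))
        r = reward(reserved, demand)
        s2 = state(reserved, demand)
        best_next = max(q[(s2, act)] for act in ACTIONS)
        q[(s, a)] += ALPHA * (r + GAMMA * best_next - q[(s, a)])
    return reserved, dict(q)

if __name__ == "__main__":
    final_reserved, _ = run_agent()
    print(f"reserved share after learning: {final_reserved:.2f}")
```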