932 results for efficient algorithm


Relevance: 20.00%

Abstract:

Many real-time operating systems discretize time using a system time unit. To take this behavior into account, real-time scheduling algorithms must adopt a discrete-time model in which both the timing requirements of tasks and their time allocations are integer multiples of the system time unit. That is, tasks cannot be executed for less than one time unit, which implies that they always have to complete a minimum amount of work before they can be preempted. Assuming such a discrete-time model, the authors of Zhu et al. (Proceedings of the 24th IEEE international real-time systems symposium (RTSS 2003), 2003, J Parallel Distrib Comput 71(10):1411–1425, 2011) proposed an efficient "boundary fair" algorithm (named BF) and proved its optimality for the scheduling of periodic tasks while achieving full system utilization. However, BF cannot handle sporadic tasks due to their inherently irregular and unpredictable job release patterns. In this paper, we propose an optimal boundary-fair scheduling algorithm for sporadic tasks (named BF²), which follows the same principle as BF by making scheduling decisions only at job arrival times and (expected) task deadlines. We implemented this new algorithm in Linux, and experiments conducted on a multicore machine show that BF² outperforms the state-of-the-art discrete-time optimal scheduler (PD²), owing to its much lower scheduling overhead. Furthermore, these experimental results show that BF² is barely dependent on the length of the system time unit, while PD² (the only other existing solution for the scheduling of sporadic tasks in discrete-time systems) sees its numbers of preemptions and migrations, and the time spent making scheduling decisions, grow linearly as the time resolution of the system improves.
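
As a point of reference, here is a minimal sketch of the boundary-fair idea only, not the BF² algorithm itself (whose assignment of mandatory and optional time units is considerably more involved): between consecutive boundaries, each periodic task receives an integer number of time units that tracks its fluid share. Task names and parameters are hypothetical.

```python
# Sketch of the boundary-fair idea: between consecutive boundaries
# (job releases/deadlines of periodic tasks), give each task an integer
# number of time units close to its fluid share u_i * interval_length.
# Illustrative only -- this is NOT the BF^2 algorithm of the paper.
from math import floor

tasks = [("t1", 2, 5), ("t2", 3, 8)]  # (name, wcet, period), hypothetical

def boundaries(tasks, horizon):
    """Release times and deadlines of all jobs up to the horizon."""
    pts = {horizon}
    for _, _, period in tasks:
        pts.update(range(0, horizon + 1, period))
    return sorted(pts)

def allocate(tasks, horizon):
    lag = {name: 0.0 for name, _, _ in tasks}  # deviation from fluid schedule
    bs = boundaries(tasks, horizon)
    for b0, b1 in zip(bs, bs[1:]):
        for name, wcet, period in tasks:
            fluid = (wcet / period) * (b1 - b0) + lag[name]
            units = floor(fluid)           # mandatory integer allocation
            lag[name] = fluid - units      # remainder carried forward
            print(f"[{b0:2d},{b1:2d}) {name}: {units} unit(s)")

allocate(tasks, horizon=40)
```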

Relevance: 20.00%

Abstract:

Task scheduling is one of the key mechanisms to ensure timeliness in embedded real-time systems. Such systems often need to execute not only application tasks but also some urgent routines (e.g. error-detection actions, consistency checkers, interrupt handlers) with minimum latency. Although fixed-priority schedulers such as Rate-Monotonic (RM) are in line with this need, they usually make only a low processor utilization available to the system, and this availability usually decreases with the number of tasks considered. If dynamic-priority schedulers such as Earliest Deadline First (EDF) are applied instead, high system utilization can be guaranteed, but the minimum latency for executing urgent routines may not be ensured. In this paper we describe a scheduling model according to which urgent routines are executed at the highest priority level and all other system tasks are scheduled by EDF. We show that the guaranteed processor utilization for the assumed scheduling model is at least as high as the one provided by RM for two tasks, namely 2(√2 − 1). Seven polynomial-time tests for checking the system timeliness are derived and proved correct. The proposed tests are compared against each other and against an exact test with exponential running time.
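
For reference, 2(√2 − 1) ≈ 0.828 is the classical Liu and Layland least upper bound on RM utilization, evaluated at n = 2 tasks:

```latex
U_{\mathrm{RM}}(n) = n\left(2^{1/n} - 1\right), \qquad
U_{\mathrm{RM}}(2) = 2\left(\sqrt{2} - 1\right) \approx 0.828
```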

Relevance: 20.00%

Abstract:

“Many-core” systems based on a Network-on-Chip (NoC) architecture offer various opportunities in terms of performance and computing capabilities, but at the same time they pose many challenges for the deployment of real-time systems, which must fulfill specific timing requirements at runtime. It is therefore essential to identify, at design time, the parameters that have an impact on the execution time of the tasks deployed on these systems, and the upper bounds on the other key parameters. The focus of this work is to determine an upper bound on the traversal time of a packet when it is transmitted over the NoC infrastructure. Toward this aim, we first identify and explore some limitations in the existing recursive-calculus-based approaches for computing the Worst-Case Traversal Time (WCTT) of a packet. Then, we extend the existing model by integrating the characteristics of the tasks that generate the packets. For this extended model, we propose an algorithm called “Branch and Prune” (BP). Our proposed method provides tighter, yet still safe, estimates than the existing recursive-calculus-based approaches. Finally, we introduce a more general approach, namely “Branch, Prune and Collapse” (BPC), which offers a configurable parameter providing a flexible trade-off between the computational complexity and the tightness of the computed estimate. The recursive-calculus methods and BP are special cases of BPC, obtained when the trade-off parameter is set to 1 or ∞, respectively. Through simulations, we analyze this trade-off, reason about the implications of certain choices, and provide some case studies to observe the impact of task parameters on the WCTT estimates.
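
The branch-and-prune search pattern named here follows a generic skeleton; the sketch below shows only that skeleton (maximize over branches, discarding those whose optimistic bound cannot beat the best found so far) and does not model the paper's NoC interference scenarios.

```python
# Generic branch-and-prune skeleton (illustration of the search pattern
# only; the paper's BP branches over NoC interference scenarios).
def branch_and_prune(partial, candidates, value, bound):
    """Maximize value(...) over ordered subsets of `candidates`,
    skipping branches whose optimistic bound cannot beat the best."""
    best = value(partial)
    for i, c in enumerate(candidates):
        extended, rest = partial + [c], candidates[i + 1:]
        if bound(extended, rest) <= best:
            continue  # prune: this branch cannot improve on `best`
        best = max(best, branch_and_prune(extended, rest, value, bound))
    return best

# Toy usage: worst-case total delay over any subset of interferers.
print(branch_and_prune([], [3, 1, 4], value=sum,
                       bound=lambda p, rest: sum(p) + sum(rest)))  # 8
```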

Relevance: 20.00%

Abstract:

In this work, we present the explicit series solution of a specific mathematical model from the literature, the Deng bursting model, which mimics the glucose-induced electrical activity of pancreatic beta-cells (Deng, 1993). To this end, we use a technique developed to find analytic approximate solutions for strongly nonlinear problems. This analytical algorithm involves an auxiliary parameter which provides an efficient way to ensure rapid and accurate convergence to the exact solution of the bursting model. Using the homotopy solution, we investigate the dynamical effect of a biologically meaningful bifurcation parameter ρ, which increases with the glucose concentration. Our analytical results are found to be in excellent agreement with the numerical ones. This work illustrates how our understanding of biophysically motivated models can be directly enhanced by the application of a new analytic method.
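
The auxiliary parameter mentioned above is the hallmark of the homotopy analysis method; in its standard general form (not specific to the Deng model), the zeroth-order deformation equation reads

```latex
(1 - q)\,\mathcal{L}\!\left[\phi(t;q) - u_0(t)\right]
  = q\,\hbar\,\mathcal{N}\!\left[\phi(t;q)\right], \qquad q \in [0,1],
```

where L is an auxiliary linear operator, N the nonlinear operator of the model, u₀ an initial guess, and ħ the auxiliary (convergence-control) parameter; tuning ħ controls the convergence region of the resulting series.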

Relevance: 20.00%

Abstract:

This paper presents a new parallel implementation of a previously developed hyperspectral coded aperture (HYCA) algorithm for compressive sensing on graphics processing units (GPUs). The HYCA method combines the ideas of spectral unmixing and compressive sensing, exploiting the high spatial correlation that can be observed in the data and the generally low number of endmembers needed to explain the data. The proposed implementation exploits the GPU architecture at a low level, thus taking full advantage of the computational power of GPUs by using shared memory and coalesced accesses to memory. The proposed algorithm is evaluated not only in terms of reconstruction error but also in terms of computational performance, using two different GPU architectures by NVIDIA: GeForce GTX 590 and GeForce GTX TITAN. Experimental results using real data reveal significant speedups with regard to the serial implementation.
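
In outline, and with hypothetical notation since the abstract fixes no symbols, combining compressive sensing with unmixing amounts to coupling a per-pixel measurement model with the linear mixing model:

```latex
\mathbf{y}_i = \mathbf{H}\,\mathbf{x}_i, \qquad
\mathbf{x}_i \approx \mathbf{E}\,\boldsymbol{\alpha}_i,
```

where y_i holds the few compressive measurements at pixel i, H is the measurement matrix, x_i the full spectrum, E the endmember matrix, and α_i the abundance vector; because the number of endmembers is far smaller than the number of spectral bands, comparatively few measurements suffice for reconstruction.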

Relevance: 20.00%

Abstract:

This paper presents a step count algorithm designed to work in real time using low computational power. This proposal is our first step toward the development of an indoor navigation system based on Pedestrian Dead Reckoning (PDR). We present two approaches to solve this problem and compare them based on their step-counting error, as well as their suitability for use in a real-time system.
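
As an illustration of how lightweight such an algorithm can be, and not necessarily either of the paper's two approaches, here is a minimal threshold-and-refractory step counter over the accelerometer magnitude; the threshold values are hypothetical and would need calibration.

```python
# Minimal step counter on accelerometer magnitude: count a step when
# |a| exceeds a threshold (gravity plus impact), at most once per
# min_gap seconds. Illustrative sketch; thresholds are hypothetical.
import math

def count_steps(samples, fs=50.0, threshold=11.0, min_gap=0.3):
    """samples: iterable of (ax, ay, az) in m/s^2; fs: sample rate in Hz."""
    steps, last_t = 0, -min_gap
    for n, (ax, ay, az) in enumerate(samples):
        mag = math.sqrt(ax * ax + ay * ay + az * az)
        t = n / fs
        if mag > threshold and (t - last_t) >= min_gap:
            steps += 1
            last_t = t
    return steps
```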

Relevance: 20.00%

Abstract:

This paper presents an ankle-mounted Inertial Navigation System (INS) used to estimate the distance traveled by a pedestrian. The distance is estimated from the number of steps taken by the user. The proposed method uses force sensors to enhance the results obtained from the INS. Experimental results have shown that, depending on the step frequency, the traveled-distance error varies between 2.7% and 5.6%.
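
For context, and not necessarily the authors' formulation, step counts are commonly converted to distance by summing per-step lengths from an empirical model such as Weinberg's, where K is a per-user calibration constant and a_max, a_min are the extreme vertical accelerations within each step:

```latex
d = \sum_{k=1}^{N_{\text{steps}}} s_k, \qquad
s_k \approx K\left(a_{\max,k} - a_{\min,k}\right)^{1/4}
```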

Relevance: 20.00%

Abstract:

Master's degree in Mechanical Engineering – Industrial Management

Relevance: 20.00%

Abstract:

The application of compressive sensing (CS) to hyperspectral images has been an active area of research over the past few years, both in terms of the hardware and of the signal processing algorithms. However, CS algorithms can be computationally very expensive due to the extremely large volumes of data collected by imaging spectrometers, a fact that compromises their use in applications under real-time constraints. This paper proposes four efficient implementations of hyperspectral coded aperture (HYCA) for CS on commodity graphics processing units (GPUs): two of them termed P-HYCA and P-HYCA-FAST, and two additional implementations of its constrained version (CHYCA), termed P-CHYCA and P-CHYCA-FAST. The HYCA algorithm exploits the high correlation existing among the spectral bands of hyperspectral data sets and the generally low number of endmembers needed to explain the data, which largely reduces the number of measurements necessary to correctly reconstruct the original data. The proposed P-HYCA and P-CHYCA implementations have been developed using the compute unified device architecture (CUDA) and the cuFFT library. Moreover, this library has been replaced by a fast iterative method in the P-HYCA-FAST and P-CHYCA-FAST implementations, which leads to very significant speedup factors that meet real-time requirements. The proposed algorithms are evaluated not only in terms of reconstruction error for different compression ratios but also in terms of computational performance, using two different GPU architectures by NVIDIA: 1) GeForce GTX 590; and 2) GeForce GTX TITAN. Experiments conducted using both simulated and real data reveal considerable acceleration factors and good results in the task of compressing remotely sensed hyperspectral data sets.

Relevance: 20.00%

Abstract:

The reaction between 2-aminobenzenesulfonic acid and 2-hydroxy-3-methoxybenzaldehyde produces the acyclic Schiff base 2-[(2-hydroxy-3-methoxyphenyl)methylideneamino]benzenesulfonic acid (H₂L·3H₂O) (1). In situ reactions of this compound with Cu(II) salts, optionally in the presence of pyridine (py) or 2,2′-bipyridine (2,2′-bipy), lead to the formation of the mononuclear complexes [CuL(H₂O)₂] (2) and [CuL(2,2′-bipy)]·DMF·H₂O (3) and the diphenoxo-bridged dicopper compounds [CuL(py)]₂ (4) and [CuL(EtOH)]₂·2H₂O (5). In 2–5 the L²⁻ ligand acts as a tridentate chelating species by means of one of the sulfonate O atoms, the phenoxo O atom and the N atom. The remaining coordination sites are then occupied by H₂O (in 2), 2,2′-bipyridine (in 3), pyridine (in 4) or EtOH (in 5). Hydrogen-bond interactions resulted in R₂²(14) and R₄⁴(12) graph sets, leading to dimeric species (in 2 and 3, respectively), 1D chain associations (in 2 and 5) or a 2D network (1). Complexes 2–5 are applied as selective catalysts for the homogeneous peroxidative (with tert-butylhydroperoxide, TBHP) oxidation of primary and secondary alcohols, under solvent- and additive-free conditions and under low-power microwave (MW) irradiation. A quantitative yield of acetophenone was obtained by oxidation of 1-phenylethanol with compound 4 (TOFs up to 7.6 × 10³ h⁻¹) after 20 min of MW irradiation, whereas the oxidation of benzyl alcohol to benzaldehyde is less effective (TOF 992 h⁻¹). The selectivity of 4 to oxidize the alcohol relative to the ene function is demonstrated when using cinnamyl alcohol as substrate.

Relevance: 20.00%

Abstract:

Arguably, the most difficult task in text classification is to choose an appropriate set of features that allows machine learning algorithms to provide accurate classification. Most state-of-the-art techniques for this task involve careful feature engineering and a pre-processing stage, which may be too expensive in the emerging context of massive collections of electronic texts. In this paper, we propose efficient methods for text classification based on information-theoretic dissimilarity measures, which are used to define dissimilarity-based representations. These methods dispense with any feature design or engineering by mapping texts into a feature space using universal dissimilarity measures; in this space, classical classifiers (e.g. nearest neighbor or support vector machines) can then be used. The reported experimental evaluation of the proposed methods, on sentiment polarity analysis and authorship attribution problems, reveals that they approximate, and sometimes even outperform, previous state-of-the-art techniques, despite being much simpler in the sense that they require no text pre-processing or feature engineering.
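
A standard instance of such a universal, parameter-free dissimilarity is the Normalized Compression Distance (NCD); the sketch below (which may differ from the paper's exact measures) pairs it with a 1-nearest-neighbor classifier, mirroring the dissimilarity-based setup described above.

```python
# Compression-based dissimilarity (NCD) plus 1-NN classification:
# no feature engineering, only a general-purpose compressor (zlib).
# Sketch of the general idea; the paper's measures may differ.
import zlib

def c(s: bytes) -> int:
    """Compressed length, a computable proxy for complexity."""
    return len(zlib.compress(s, 9))

def ncd(x: bytes, y: bytes) -> float:
    cx, cy, cxy = c(x), c(y), c(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

def classify(text: bytes, train):
    """train: list of (text_bytes, label); return the label of the
    nearest training text under NCD."""
    return min(train, key=lambda tl: ncd(text, tl[0]))[1]
```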

Relevance: 20.00%

Abstract:

This paper introduces a new method to blindly unmix hyperspectral data, termed dependent component analysis (DECA). This method decomposes a hyperspectral image into a collection of reflectance (or radiance) spectra of the materials present in the scene (endmember signatures) and the corresponding abundance fractions at each pixel. DECA assumes that each pixel is a linear mixture of the endmember signatures weighted by the corresponding abundance fractions. These abundances are modeled as mixtures of Dirichlet densities, thus enforcing the constraints on abundance fractions imposed by the acquisition process, namely non-negativity and constant sum. The mixing matrix is inferred by a generalized expectation-maximization (GEM) type algorithm. This method overcomes the limitations of unmixing methods based on Independent Component Analysis (ICA) and on geometry-based approaches. The effectiveness of the proposed method is illustrated using simulated data based on U.S.G.S. laboratory spectra and real hyperspectral data collected by the AVIRIS sensor over Cuprite, Nevada.
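
In symbols (notation chosen here for illustration), the observation model just described is

```latex
\mathbf{x}_i = \mathbf{M}\,\boldsymbol{\alpha}_i + \mathbf{n}_i, \qquad
\alpha_{ij} \ge 0, \quad \sum_{j=1}^{p} \alpha_{ij} = 1, \qquad
\boldsymbol{\alpha}_i \sim \sum_{k=1}^{K} w_k\,
  \mathrm{Dir}\!\left(\boldsymbol{\alpha} \mid \boldsymbol{\theta}_k\right),
```

where x_i is the observed spectrum at pixel i, M collects the p endmember signatures, α_i the abundance fractions, and n_i noise; the Dirichlet-mixture prior enforces non-negativity and the constant-sum constraint, and GEM estimates M together with the mixture parameters.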

Relevance: 20.00%

Abstract:

Optimization in current decision support systems has a strongly interdisciplinary character, related to the need to integrate different techniques and paradigms when solving complex real-world problems, for many of which computing optimal solutions is intractable. Heuristic search methods are known to produce good results within an acceptable time frame, but they often require their parameterization to be tuned in order to do so. In this sense, learning strategies can improve a system's performance by endowing it with the ability to learn, for example, which optimization technique is most suitable for solving a particular class of problems, or which parameterization of a given algorithm is most suitable in a given scenario. Some of the optimization methods most widely used for solving real-world problems resulted from adapting ideas from several research areas, mainly inspired by nature: meta-heuristics. The process of selecting a meta-heuristic to solve a given problem is itself an optimization problem. Hyper-heuristics arise in this context as efficient methodologies for selecting or generating heuristics (or meta-heuristics) for solving NP-hard optimization problems. This dissertation aims to contribute to the problem of selecting meta-heuristics and their parameterization. To this end, it describes the specification of a hyper-heuristic for selecting nature-inspired techniques to solve the task scheduling problem in manufacturing systems, based on previous experience. The developed hyper-heuristic module uses a reinforcement learning algorithm (Q-Learning), which endows the system with the ability to automatically select the meta-heuristic to use in the optimization process, as well as its parameterization. Finally, computational tests are carried out to evaluate the influence of the hyper-heuristic on the performance of the AutoDynAgents scheduling system. As a general conclusion, the results show a significant advantage in system performance when the Q-Learning-based hyper-heuristic is introduced.
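
The Q-Learning component described above follows the standard tabular update rule; the sketch below frames it for meta-heuristic selection, with the state encoding, action set, and reward all hypothetical placeholders.

```python
# Tabular Q-learning for meta-heuristic selection (sketch): states
# would encode instance features, actions the candidate meta-heuristics
# (or parameterizations), rewards the observed scheduling quality.
import random
from collections import defaultdict

Q = defaultdict(float)                 # Q[(state, action)] -> value
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.2      # learning rate, discount, exploration
ACTIONS = ["tabu_search", "simulated_annealing", "genetic_algorithm"]

def choose(state):
    if random.random() < EPS:          # explore occasionally
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])  # otherwise exploit

def update(state, action, reward, next_state):
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next
                                   - Q[(state, action)])
```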

Relevance: 20.00%

Abstract:

The energy sustainability of the planet is a current concern and, accordingly, energy efficiency is essential to reducing consumption across all sectors of activity. In the residential sector, inappropriate user behavior combined with ignorance of the consumption of individual appliances hinders the reduction of energy consumption. An important tool in this regard is consumption monitoring, in particular non-intrusive monitoring, which has economic advantages over intrusive monitoring, although it raises some challenges in load disaggregation. This document therefore addresses non-intrusive monitoring, developing a tool for disaggregating residential loads, especially appliances with high consumption. To this end, the aggregate electricity, water, and gas consumption of six households in the municipality of Vila Nova de Gaia was monitored. By incorporating the water and gas vectors in addition to electricity, it was shown that the performance of the appliance disaggregation algorithm can increase for appliances that simultaneously use electricity and water, or electricity and gas. Energy efficiency is also part of this work: energy efficiency measures were implemented for one of the households under study, in order to determine which ones exhibited the greatest savings potential as well as short payback periods. In general, the proposed objectives were achieved, and it is expected that in the near future non-intrusive consumption monitoring will become a reference solution for the energy sustainability of the residential sector.
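
One elementary building block of such load disaggregation is detecting on/off events as abrupt edges in the aggregate power signal; the sketch below shows only that step (the dissertation's approach additionally fuses water and gas measurements), and the threshold value is a hypothetical example.

```python
# Edge-based event detection on an aggregate power signal (watts),
# a basic NILM building block. Threshold is a hypothetical example.
def detect_events(power_w, min_step=150.0):
    """Return (sample_index, delta_W) for abrupt power changes;
    positive deltas suggest turn-on events, negative ones turn-off."""
    events = []
    for i in range(1, len(power_w)):
        delta = power_w[i] - power_w[i - 1]
        if abs(delta) >= min_step:
            events.append((i, delta))
    return events
```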

Relevance: 20.00%

Abstract:

Dissertation presented at the Faculdade de Ciências e Tecnologia of the Universidade Nova de Lisboa to obtain the degree of Master in Electrical and Computer Engineering.