19 results for Worst Case Execution Time (WCET)
at Universidade Federal do Rio Grande do Norte (UFRN)
Abstract:
Large efforts have been made by the scientific community on tasks involving the locomotion of mobile robots. To execute this kind of task, we must give the robot the ability to navigate the environment safely, that is, without colliding with objects. To accomplish this, it is necessary to implement strategies that make it possible to detect obstacles. In this work, we address this problem by proposing a system that is able to collect sensory information and to estimate the possibility of obstacles occurring in the mobile robot's path. Stereo cameras positioned parallel to each other in a structure coupled to the robot are employed as the main sensory device, making it possible to generate a disparity map. Code optimizations and a strategy for data reduction and abstraction are applied to the images, resulting in a substantial gain in execution time. This allows the high-level decision processes to perform obstacle avoidance in real time. The system can be employed in situations where the robot is remotely operated, as well as in situations where it depends only on itself to generate trajectories (the autonomous case).
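A minimal sketch of such a pipeline, assuming OpenCV and NumPy are available; the block-matching parameters, file names and the column-wise reduction are illustrative choices, not the system's actual implementation:

```python
import cv2
import numpy as np

# Rectified frames from the parallel stereo pair (file names are placeholders).
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Block matching yields a disparity map: larger disparity means a closer object.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0  # fixed-point to pixels

# Data reduction/abstraction: keep only the maximum disparity per column,
# giving a compact 1-D proximity profile for the high-level decision processes.
profile = disparity.max(axis=0)

# Flag columns whose nearest point exceeds an (illustrative) disparity threshold.
OBSTACLE_DISPARITY = 32.0
obstacle_columns = np.where(profile > OBSTACLE_DISPARITY)[0]
print(f"{len(obstacle_columns)} image columns contain a nearby obstacle")
```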
Abstract:
This work deals with an online control strategy based on the Robust Model Predictive Control (RMPC) technique applied to a real coupled-tank system. The process consists of two coupled tanks and a pump that feeds liquid to the system. The control objective (a regulation problem) is to keep the tank levels at the chosen operating point even in the presence of disturbances. RMPC is a technique that allows explicit incorporation of plant uncertainty in the problem formulation. The goal is to design, at each time step, a state-feedback control law that minimizes a 'worst-case' infinite-horizon objective function, subject to constraints on the control input. The existence of a feedback control law satisfying the input constraints is reduced to a convex optimization problem over linear matrix inequalities (LMIs). It is shown in this work that, for plant uncertainty described by a polytope, the feasible receding-horizon state-feedback control design is robustly stabilizing. The software implementation of the RMPC is made in Scilab, and its communication with the coupled tanks system is done through the OLE for Process Control (OPC) industrial protocol.
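For reference, a generic statement of the worst-case problem solved at each step k by this family of RMPC controllers (not necessarily the exact formulation used in the work): with state feedback u(k+i|k) = F x(k+i|k), one solves

```latex
\min_{F}\;\max_{[A\,|\,B]\in\Omega}\; J_\infty(k),\qquad
J_\infty(k)=\sum_{i=0}^{\infty}\Big[x(k+i|k)^{T}Q\,x(k+i|k)+u(k+i|k)^{T}R\,u(k+i|k)\Big],
```

where an upper bound γ on J∞(k) is obtained from a quadratic function V(x) = xᵀPx, P ≻ 0, required to decrease by at least the stage cost at every vertex of the polytope Ω; this decrease condition, together with the input constraints, is what gets rewritten as LMIs.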
Abstract:
In this work, we study and compare two percolation algorithms, one developed by Elias and the other by Newman and Ziff, using theoretical tools of algorithm complexity analysis and an experimental comparison. The work is divided into three chapters. The first covers the definitions and theorems necessary for a more formal mathematical study of percolation. The second presents the techniques used to estimate the complexity of the algorithms: worst case, best case and average case. We use worst-case analysis to estimate the complexity of both algorithms so that they can be compared. The last chapter presents several characteristics of each algorithm and, through the theoretical complexity estimates and a comparison of the execution times of the most important part of each one, compares these two important percolation simulation algorithms.
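To illustrate why the Newman–Ziff approach scales well, here is a minimal, generic sketch of its core idea — union–find cluster merging as bonds are occupied one at a time — rather than the actual code analyzed in this work:

```python
import random

def newman_ziff_bond(L, seed=0):
    """Occupy the bonds of an L x L square lattice one by one, tracking the
    largest cluster with union-find (the heart of the Newman-Ziff method)."""
    n = L * L
    parent = list(range(n))            # union-find forest
    size = [1] * n
    largest = 1

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    # All bonds of the lattice (right and down neighbours, open boundaries).
    bonds = [(x + y * L, x + 1 + y * L) for y in range(L) for x in range(L - 1)]
    bonds += [(x + y * L, x + (y + 1) * L) for y in range(L - 1) for x in range(L)]
    random.Random(seed).shuffle(bonds)

    largest_after = []                 # largest-cluster size after each added bond
    for a, b in bonds:
        ra, rb = find(a), find(b)
        if ra != rb:                   # merge the two clusters (near O(1) amortized)
            if size[ra] < size[rb]:
                ra, rb = rb, ra
            parent[rb] = ra
            size[ra] += size[rb]
            largest = max(largest, size[ra])
        largest_after.append(largest)
    return largest_after

print(newman_ziff_bond(4)[-1])   # with every bond occupied, the cluster spans all 16 sites
```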
Abstract:
The Quadratic Minimum Spanning Tree Problem (QMST) is a version of the Minimum Spanning Tree Problem in which, besides the traditional linear costs, there is a quadratic cost structure. This quadratic structure models interaction effects between pairs of edges. Linear and quadratic costs are added up to constitute the total cost of the spanning tree, which must be minimized. When these interactions are restricted to adjacent edges, the problem is named the Adjacent Only Quadratic Minimum Spanning Tree Problem (AQMST). AQMST and QMST are NP-hard problems that model several problems of transport and distribution network design. In general, AQMST is a more suitable model for real problems. Although, in the literature, linear and quadratic costs are added, in real applications they may be conflicting, in which case it may be interesting to consider these costs separately. In this sense, Multiobjective Optimization provides a more realistic model for QMST and AQMST. A review of the state of the art found no papers addressing these problems from a biobjective point of view. Thus, the objective of this thesis is the development of exact and heuristic algorithms for the Biobjective Adjacent Only Quadratic Spanning Tree Problem (bi-AQST). As a theoretical foundation, other NP-hard problems directly related to bi-AQST are discussed: the QMST and AQMST problems. Backtracking and branch-and-bound exact algorithms are proposed for the target problem of this investigation. The heuristic algorithms developed are: Pareto Local Search, Tabu Search with ejection chains, a Transgenetic Algorithm, NSGA-II and a hybridization of the last two called NSTA. The proposed algorithms are compared to each other through performance analysis based on computational experiments with instances adapted from the QMST literature. For the exact algorithms, the analysis considers, in particular, the execution time. For the heuristic algorithms, besides execution time, the quality of the generated approximation sets is evaluated using quality indicators. Appropriate statistical tools are used to measure the performance of the exact and heuristic algorithms. Considering the set of instances adopted and the criteria of execution time and quality of the generated approximation sets, the experiments showed that the Tabu Search with ejection chains obtained the best results and the Transgenetic Algorithm ranked second. The PLS algorithm obtained good-quality solutions, but at a very high computational time compared to the other (meta)heuristics, ranking third. The NSTA and NSGA-II algorithms ranked last.
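As a concrete illustration of the cost structure being minimized, the sketch below evaluates a spanning tree under AQMST-style costs (hypothetical data structures: linear costs per edge plus quadratic costs only for pairs of adjacent edges, i.e. edges sharing a vertex):

```python
from itertools import combinations

def aqmst_cost(tree_edges, linear_cost, quad_cost):
    """Total cost of a spanning tree under the adjacent-only quadratic model.

    tree_edges  : iterable of (u, v) edges forming the spanning tree
    linear_cost : dict {(u, v): c} with the linear cost of each edge
    quad_cost   : dict {((u, v), (x, y)): q} with interaction costs of edge pairs
    """
    total = sum(linear_cost[e] for e in tree_edges)
    for e, f in combinations(tree_edges, 2):
        if set(e) & set(f):                         # adjacent: the edges share a vertex
            total += quad_cost.get((e, f), 0) + quad_cost.get((f, e), 0)
    return total

# Tiny example: a path 0-1-2 whose two edges interact.
edges = [(0, 1), (1, 2)]
lin = {(0, 1): 3, (1, 2): 4}
quad = {((0, 1), (1, 2)): 5}
print(aqmst_cost(edges, lin, quad))   # 3 + 4 + 5 = 12
```

In the biobjective (bi-AQST) setting, the linear term and the quadratic term would be kept as two separate objectives instead of being summed.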
Abstract:
The work proposed by Cleverton Hentz (2010) presented an approach to defining tests from the formal description of a program's input. Since some programs, such as compilers, may have their inputs formalized through grammars, it is common to use context-free grammars to specify the set of their valid inputs. In the original work, the author developed a tool that automatically generates tests for compilers. In the present work, we identify types of problems in various areas that are described using grammars, for example the specification of software configurations, which are potential scenarios for using LGen. In addition, we conducted case studies with grammars from different domains, and from these studies it was possible to evaluate the behavior and performance of LGen during the generation of sentences, evaluating aspects such as execution time, number of generated sentences and satisfaction of the coverage criteria available in LGen.
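A minimal sketch of what grammar-based sentence generation looks like — a naive bounded-depth expander over a toy context-free grammar — just to make the idea concrete; LGen's actual algorithms, coverage criteria and interface are not shown here:

```python
import itertools

# Toy context-free grammar: nonterminals map to lists of alternatives;
# any symbol not in the dictionary is a terminal.
GRAMMAR = {
    "expr": [["term"], ["term", "+", "expr"]],
    "term": [["num"], ["(", "expr", ")"]],
    "num":  [["0"], ["1"]],
}

def sentences(symbol, depth):
    """Yield every sentence derivable from `symbol` using at most `depth` expansions."""
    if symbol not in GRAMMAR:                    # terminal symbol
        yield [symbol]
        return
    if depth == 0:
        return
    for alternative in GRAMMAR[symbol]:
        # Expand each symbol of the alternative, then combine the possibilities.
        parts = [list(sentences(s, depth - 1)) for s in alternative]
        for combo in itertools.product(*parts):
            yield [token for part in combo for token in part]

for s in sentences("expr", 4):
    print(" ".join(s))       # 0, 1, 0 + 0, 0 + 1, 1 + 0, 1 + 1
```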
Abstract:
Amyotrophic Lateral Sclerosis (ALS) is a neurodegenerative disease characterized by progressive muscle weakness that leads the patient to death, usually due to respiratory complications. Thus, as the disease progresses, the patient will require noninvasive ventilation (NIV) and constant monitoring. This paper presents a distributed architecture for homecare monitoring of nocturnal NIV in patients with ALS. The implementation of this architecture used single-board computers and mobile devices placed in patients' homes, to display alert messages for caregivers, and a web server for remote monitoring by the healthcare staff. The architecture used software based on fuzzy logic and computer vision to capture data from a mechanical ventilator screen and generate alert messages with instructions for caregivers. The monitoring was performed on 29 patients for 7 continuous hours daily during 5 days, generating a total of 126,000 samples for each monitored variable at a sampling rate of one sample per second. The system was evaluated regarding the hit rate for character recognition and its correction through an error detection and correction algorithm. Furthermore, a healthcare team evaluated the time intervals at which the alert messages were generated and the correctness of such messages. The system showed an average hit rate of 98.72%, and 98.39% in the worst case. As for the generated messages, the system agreed 100% with the overall assessment, with disagreement in only 2 cases with one of the physician evaluators.
Abstract:
The evolution of wireless communication systems leads to Dynamic Spectrum Allocation for Cognitive Radio, which requires reliable spectrum sensing techniques. Among the spectrum sensing methods proposed in the literature, those that exploit cyclostationary characteristics of radio signals are particularly suitable for communication environments with low signal-to-noise ratios, or with non-stationary noise. However, such methods have high computational complexity that directly raises the power consumption of devices which often have very stringent low-power requirements. We propose a strategy for cyclostationary spectrum sensing with reduced energy consumption. This strategy is based on the principle that p processors working at slower frequencies consume less power than a single processor for the same execution time. We devise a strict relation between the energy savings and common parallel system metrics. The results of simulations show that our strategy promises very significant savings in actual devices.
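As a worked illustration of that principle, assume the commonly used model in which dynamic power grows roughly with the cube of the clock frequency (supply voltage scaled together with frequency); the exponent actually assumed in this work may differ:

```latex
P_{1} = C f^{3},\qquad
P_{p} = p\cdot C\left(\frac{f}{p}\right)^{3} = \frac{C f^{3}}{p^{2}} = \frac{P_{1}}{p^{2}} ,
```

so, for the same execution time T, the energy E_p = P_p·T of p cores running at f/p is ideally p² times smaller than that of a single core at f, provided the sensing workload parallelizes with negligible overhead.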
Abstract:
The Brazilian Northeast is the region most vulnerable to climatic variability risks. For the Brazilian semi-arid region, a reduction in overall precipitation rates and an increase in the number of dry days are expected. These changes predicted by the IPCC (2007) will intensify rainfall and drought periods, which could promote the dominance of cyanobacteria, thus affecting the water quality of reservoirs, which are mostly used for water supply in the semi-arid region. The aim of this study was to evaluate the effects of increasing temperature combined with nutrient enrichment on the functional structure of the phytoplankton community of a mesotrophic reservoir in the semi-arid region, under the worst-case climate change scenario predicted by the IPCC (2007). Two experiments were performed, one in the rainy season and another in the dry season. Nutrients (nitrate and orthophosphate) were added to the sampled water in different concentrations. The microcosms were subjected to two different temperatures: the five-year average air temperature at the reservoir (control) and 4°C above the control temperature (warming). The results of this study showed that warming and nutrient enrichment mainly benefited the functional groups of cyanobacteria. During the rainy season, an increase was observed in the biomass of small functional groups of unicellular and opportunistic algae such as F (colonial green algae with mucilage) and X1 (nanoplanktonic algae of eutrophic lake systems). An increase in total biomass and in the richness and diversity of the community was also observed. In the dry-season experiment there was a greater contribution of filamentous algae to the relative biomass, with a replacement of group S1 (non-filamentous cyanobacteria with heterocytes) by H1 (filamentous cyanobacteria with heterocytes) in nutrient-enriched treatments. Moreover, there was also a loss in total biomass, species richness and diversity of the community. The effects of temperature and nutrient manipulation on the phytoplankton community of the Ministro João Alves reservoir caused changes in species richness, community diversity and functional composition, with the dry period showing the highest susceptibility to an increase in the contribution of potentially toxic cyanobacteria with heterocytes.
Abstract:
This study aimed to evaluate the accuracy of the maze test in screening for cognitive deficits in elderly people with or without neuropsychological pathology. The sample included 40 healthy young subjects (18-25 years old; mean 21 ± 1.6), 40 healthy elderly subjects (60-77 years old; mean 67 ± 5.1) and 18 patients with a probable diagnosis of initial-stage Alzheimer's disease (52-90 years old; mean 78 ± 9.2). Data analysis was performed using ANOVA with Tukey's post hoc test, multiple linear regression analysis and ROC curve analysis. According to Tukey's test, Alzheimer patients took more time (46843 ± 37926 ms) to execute the test than the healthy young (5482 ± 2873 ms; p = 0.0001) and elderly (17978 ± 13700 ms; p = 0.0001) subjects; the healthy young executed the test in less time (p = 0.035). In the regression analysis of age, education level and cognitive performance of the three groups, cognitive performance was the predictor of execution time. When analyzing only the young and elderly groups, age was the sole predictor, and cognitive performance was the only factor to influence the test when comparing healthy elderly subjects and patients. The ROC curve analysis indicated 72% accuracy for young versus elderly subjects and 36% for healthy elderly subjects versus patients. The maze execution time that represented the best balance between sensitivity (75%) and specificity (61%) was near 13575 ms, indicating that subjects who execute the maze in a time above this value may show a cognitive deficit related to executive function. According to the results, the maze test used in this study shows good accuracy in screening for cognitive deficits and may discriminate age-related changes.
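A minimal sketch of how such a cut-off can be obtained from execution times and diagnostic labels via a ROC curve (scikit-learn is assumed; the arrays below are placeholders, not the study's data):

```python
import numpy as np
from sklearn.metrics import roc_curve

def best_cutoff(times_ms, has_deficit):
    """Return the execution-time threshold that maximizes sensitivity + specificity
    (Youden's J), treating longer times as indicative of cognitive deficit."""
    fpr, tpr, thresholds = roc_curve(has_deficit, times_ms)  # score = execution time
    j = tpr - fpr                          # Youden's J at every candidate threshold
    best = int(np.argmax(j))
    return thresholds[best], tpr[best], 1 - fpr[best]

# Placeholder usage: times in ms and 0/1 labels (1 = cognitive deficit).
times = np.array([5200, 6100, 17800, 16900, 46800, 39000, 15000, 52000])
labels = np.array([0, 0, 0, 0, 1, 1, 0, 1])
thr, sens, spec = best_cutoff(times, labels)
print(f"threshold = {thr:.0f} ms, sensitivity = {sens:.2f}, specificity = {spec:.2f}")
```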
Abstract:
Artificial neural networks are usually applied to solve complex problems. In more complex problems, by increasing the number of layers and neurons, it is possible to achieve greater functional efficiency. Nevertheless, this leads to greater computational effort. Response time is an important factor in the decision to use neural networks in some systems. Many argue that the computational cost is higher in the training period; however, this phase is executed only once. Once the network is trained, it is necessary to use the existing computational resources efficiently. In the multicore era, the problem boils down to the efficient use of all available processing cores. However, it is necessary to consider the overhead of parallel computing. In this sense, this paper proposes a modular structure that proved to be more suitable for parallel implementations. It is proposed to parallelize the feedforward process of an MLP-type artificial neural network, implemented with OpenMP on a shared-memory computer architecture. The research consists of testing and analyzing execution times. Speedup, efficiency and parallel scalability are analyzed. In the proposed approach, by reducing the number of connections between remote neurons, the response time of the network decreases and, consequently, so does the total execution time. The time required for communication and synchronization is directly linked to the number of remote neurons in the network, so it is necessary to investigate the best distribution of remote connections.
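The thesis implements this with OpenMP on shared memory; as a simplified, hedged illustration of the underlying idea — partitioning a layer's neurons across workers — here is an analogous sketch in Python with threads (NumPy releases the GIL inside matrix products, so the blocks can overlap; names and sizes are illustrative):

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def feedforward_layer_parallel(x, W, b, n_workers=4):
    """Compute sigmoid(W @ x + b) by splitting the layer's neurons (rows of W)
    across worker threads, mimicking OpenMP-style data partitioning."""
    chunks = np.array_split(np.arange(W.shape[0]), n_workers)

    def work(rows):
        z = W[rows] @ x + b[rows]          # each worker handles one block of neurons
        return rows, 1.0 / (1.0 + np.exp(-z))

    out = np.empty(W.shape[0])
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        for rows, values in pool.map(work, chunks):
            out[rows] = values
    return out

# Illustrative sizes: 1024 inputs feeding a layer of 4096 neurons.
rng = np.random.default_rng(0)
x = rng.standard_normal(1024)
W = rng.standard_normal((4096, 1024))
b = rng.standard_normal(4096)
print(feedforward_layer_parallel(x, W, b).shape)   # (4096,)
```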
Abstract:
This paper analyzes the performance of a parallel implementation of Coupled Simulated Annealing (CSA) for the unconstrained optimization of continuous-variable problems. Parallel processing is an efficient form of information processing that emphasizes the exploitation of simultaneous events in the execution of software. It arises primarily due to high computational performance demands and the difficulty of increasing the speed of a single processing core. Although multicore processors are easily found nowadays, several algorithms are not yet suitable for running on parallel architectures. The algorithm is characterized by a group of Simulated Annealing (SA) optimizers working together on refining the solution, each SA optimizer running on a single thread executed by a different processor. In the analysis of parallel performance and scalability, the following metrics were investigated: the execution time; the speedup of the algorithm with respect to the increasing number of processors; and the efficient use of processing elements with respect to the increasing size of the treated problem. Furthermore, the quality of the final solution was verified. For this study, the paper proposes a parallel version of CSA and its equivalent serial version. Both algorithms were analysed on 14 benchmark functions. For each of these functions, the CSA is evaluated using 2 to 24 optimizers. The results obtained are presented and discussed in light of these metrics. The conclusions of the paper characterize the CSA as a good parallel algorithm, both in the quality of the solutions and in its parallel scalability and efficiency.
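For reference, the speedup and efficiency metrics mentioned above reduce to the usual definitions (a generic sketch; the timings below are placeholders, not measured results):

```python
def parallel_metrics(t_serial, t_parallel, p):
    """Speedup and efficiency of a parallel run using p optimizers/processors."""
    speedup = t_serial / t_parallel
    efficiency = speedup / p
    return speedup, efficiency

# Placeholder timings (seconds) for a serial run and a parallel run with 8 optimizers.
s, e = parallel_metrics(t_serial=120.0, t_parallel=18.0, p=8)
print(f"speedup = {s:.2f}x, efficiency = {e:.2%}")
```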
Abstract:
The increased capacity to integrate transistors has made it possible to develop complete systems, with several components, on a single chip, called SoCs (Systems-on-Chip). However, the interconnection subsystem can limit the scalability of SoCs, as with buses, or can be an ad hoc solution, as with bus hierarchies. Thus, the ideal interconnection subsystem for SoCs is the Network-on-Chip (NoC). NoCs allow simultaneous point-to-point channels between components and can be reused in other designs. However, NoCs can increase design complexity, chip area and power dissipation. Thus, it is necessary either to modify the way they are used or to change the development paradigm. Therefore, a NoC-based system is proposed in which applications are described as packets and executed in each router between source and destination, without traditional processors. To execute applications regardless of the number of instructions and of the NoC dimensions, the spiral complement algorithm was developed, which finds a new destination until all instructions have been executed. The objective is thus to study the feasibility of developing this system, called the IPNoSys system. In this study, a cycle-accurate tool was developed in SystemC to simulate the system executing applications, which were implemented in a packet description language also developed for this study. Through the simulation tool, several results were obtained that could be used to evaluate the system's performance. The methodology used to describe an application consists of transforming the high-level application into a data-flow graph that becomes one or more packets. This methodology was used in three applications: a counter, DCT-2D and float add. The counter was used to evaluate a deadlock solution and to execute a parallel application. The DCT was used for comparison with the STORM platform. Finally, the float add aimed to evaluate the efficiency of the software routine that executes an unimplemented hardware instruction. The simulation results confirm the feasibility of developing the IPNoSys system. They showed that it is possible to execute applications described as packets, sequentially or in parallel, without interruptions caused by deadlock, and also that the execution time of IPNoSys is better than that of the STORM platform.
Abstract:
Reconfigurable computing is an intermediate solution for the resolution of complex problems, making it possible to combine the speed of hardware with the flexibility of software. Reconfigurable architectures have several goals, among them performance improvement. The use of reconfigurable architectures to increase system performance is a well-known technique, especially because of the possibility of implementing directly in hardware certain algorithms that are slow on current processors. Among the various segments that use reconfigurable architectures, reconfigurable processors deserve special mention. These processors combine the functions of a microprocessor with reconfigurable logic and can be adapted after the development process. Reconfigurable Instruction Set Processors (RISP) are a subgroup of reconfigurable processors whose goal is the reconfiguration of the processor's instruction set, involving issues such as instruction formats, operands and operations. The main objective of this work is the development of a RISP processor, combining the configuration of the processor's instruction set during development with its reconfiguration at execution time. The design and VHDL implementation of this RISP processor aim to prove the applicability and efficiency of two concepts: using more than one fixed instruction set, with only one set active at a given time, and the possibility of creating and combining new instructions in such a way that the processor recognizes and uses them in real time as if they belonged to the fixed instruction set. The creation and combination of instructions is performed through a reconfiguration unit incorporated into the processor. This unit allows the user to send custom instructions to the processor, so that they can later be used as if they were fixed instructions. This work also presents simulations of applications involving fixed and custom instructions, as well as comparisons between these applications regarding power consumption and execution time, which confirm that the goals for which the processor was developed were achieved.
Abstract:
The maintenance and evolution of software systems has become a very critical task over recent years due to the diversity and high demand of functionalities, devices and users. Understanding and analyzing how new changes impact the quality attributes of the architecture of such systems is an essential prerequisite to avoid the deterioration of their quality during their evolution. This thesis proposes an automated approach for analyzing the variation of the performance quality attribute in terms of execution time (response time). It is implemented by a framework that adopts dynamic analysis and software repository mining techniques to provide an automated way of revealing potential sources – commits and issues – of performance variation in scenarios during the evolution of software systems. The approach defines four phases: (i) preparation – choosing the scenarios and preparing the target releases; (ii) dynamic analysis – determining the performance of scenarios and methods by computing their execution times; (iii) variation analysis – processing and comparing the results of the dynamic analysis for different releases; and (iv) repository mining – identifying issues and commits associated with the detected performance variation. Empirical studies were performed to evaluate the approach from different perspectives. An exploratory study analyzed the feasibility of applying the approach to systems from different domains in order to automatically identify source code elements with performance variation and the changes that affected such elements during an evolution. This study analyzed three systems: (i) SIGAA – a web system for academic management; (ii) ArgoUML – a UML modeling tool; and (iii) Netty – a framework for network applications. Another study performed an evolutionary analysis by applying the approach to multiple releases of Netty and of the web frameworks Wicket and Jetty. In this study, 21 releases were analyzed (seven from each system), totaling 57 scenarios. In summary, 14 scenarios with significant performance variation were found for Netty, 13 for Wicket and 9 for Jetty. Additionally, feedback was obtained from eight developers of these systems through an online form. Finally, in the last study, a regression model for performance was developed to indicate which properties of commits are more likely to cause performance degradation. Overall, 997 commits were mined, of which 103 were retrieved from degraded source code elements and 19 from optimized ones, while 875 had no impact on execution time. The number of days before the release and the day of the week proved to be the most relevant variables of commits that degrade performance in our model. The receiver operating characteristic (ROC) area of the regression model is 60%, which means that using the model to decide whether a commit will cause degradation or not is 10% better than a random decision.
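A minimal sketch of the variation-analysis phase (iii): comparing a scenario's execution times measured in two releases and flagging a significant variation; the threshold and data structures are illustrative, not the framework's actual implementation:

```python
from statistics import mean

def performance_variation(times_old, times_new, threshold=0.10):
    """Relative variation of a scenario's mean execution time between two releases.

    Returns the variation (positive = degradation) and whether its magnitude
    exceeds the illustrative significance threshold.
    """
    old_mean, new_mean = mean(times_old), mean(times_new)
    variation = (new_mean - old_mean) / old_mean
    return variation, abs(variation) > threshold

# Placeholder measurements (seconds) of one scenario in two consecutive releases.
release_a = [1.20, 1.18, 1.25, 1.22]
release_b = [1.45, 1.40, 1.48, 1.44]
var, significant = performance_variation(release_a, release_b)
print(f"variation = {var:+.1%}, significant = {significant}")
```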