951 results for Worst-case execution-time


Relevance: 100.00%

Abstract:

In this article, we present an analytical direct method, based on a Numerov three-point scheme, that is sixth-order accurate and whose execution time is linear in the grid dimension, to solve the discrete one-dimensional Poisson equation with Dirichlet boundary conditions. Our results should improve numerical codes used mainly in self-consistent calculations in solid state physics.
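The linear-time claim comes from the tridiagonal structure of the discretized system, which the Thomas algorithm solves in O(n). A minimal sketch of the idea, using the standard Numerov discretization of u'' = f with Dirichlet data (the basic scheme shown here is fourth-order; the article's analytical variant is what reaches sixth order):

```python
import numpy as np

def poisson_numerov(f, a, b, ua, ub, n):
    """Solve u'' = f on [a, b] with u(a)=ua, u(b)=ub using the
    Numerov three-point scheme and an O(n) tridiagonal (Thomas) solve."""
    x = np.linspace(a, b, n + 2)              # grid including the boundaries
    h = (b - a) / (n + 1)
    fv = f(x)
    # Numerov right-hand side: (h^2/12) * (f_{i-1} + 10 f_i + f_{i+1})
    rhs = h**2 / 12.0 * (fv[:-2] + 10.0 * fv[1:-1] + fv[2:])
    rhs[0] -= ua                              # fold boundary values into the RHS
    rhs[-1] -= ub
    # Tridiagonal system u_{i-1} - 2 u_i + u_{i+1} = rhs_i: forward sweep...
    c = np.empty(n); d = np.empty(n)
    c[0] = 1.0 / -2.0
    d[0] = rhs[0] / -2.0
    for i in range(1, n):
        denom = -2.0 - c[i - 1]
        c[i] = 1.0 / denom
        d[i] = (rhs[i] - d[i - 1]) / denom
    # ...then back substitution, both linear in n.
    u = np.empty(n)
    u[-1] = d[-1]
    for i in range(n - 2, -1, -1):
        u[i] = d[i] - c[i] * u[i + 1]
    return x, np.concatenate(([ua], u, [ub]))

# Example: u'' = -pi^2 sin(pi x), exact solution u = sin(pi x)
x, u = poisson_numerov(lambda x: -np.pi**2 * np.sin(np.pi * x),
                       0.0, 1.0, 0.0, 0.0, 999)
print(np.max(np.abs(u - np.sin(np.pi * x))))  # small discretization error
```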

Relevance: 100.00%

Abstract:

This paper presents the use of a multiprocessor architecture to improve the performance of tomographic image reconstruction. Image reconstruction in computed tomography (CT) is an intensive task for single-processor systems. We investigate the suitability of filtered image reconstruction on DSPs organized for parallel processing and compare it with an implementation based on the Message Passing Interface (MPI) library. The experimental results show that the speedups observed on both platforms increased with the image resolution. In addition, the execution-time to communication-time ratios (Rt/Rc) as a function of sample size showed narrower variation for the DSP platform than for the MPI platform, indicating its better performance for parallel image reconstruction.
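As a rough, language-agnostic illustration of the data decomposition typically used for parallel filtered back-projection (not the paper's DSP or MPI code), the sketch below splits the projection angles across worker processes and sums the partial images; the sinogram here is a random stand-in:

```python
import numpy as np
from multiprocessing import Pool

def backproject(args):
    """Filter and back-project one block of projection angles (partial image)."""
    sino_block, thetas, n = args
    ramp = np.abs(np.fft.fftfreq(sino_block.shape[1]))        # basic ramp filter
    filtered = np.fft.ifft(np.fft.fft(sino_block, axis=1) * ramp, axis=1).real
    xs = np.arange(n) - n / 2.0
    X, Y = np.meshgrid(xs, xs)
    img = np.zeros((n, n))
    for proj, th in zip(filtered, thetas):
        t = X * np.cos(th) + Y * np.sin(th) + n / 2.0         # detector coordinate
        img += np.interp(t, np.arange(n), proj, left=0.0, right=0.0)
    return img

if __name__ == "__main__":
    n, n_angles, workers = 128, 180, 4
    sino = np.random.rand(n_angles, n)                        # stand-in sinogram
    thetas = np.linspace(0.0, np.pi, n_angles, endpoint=False)
    blocks = [(sino[i::workers], thetas[i::workers], n) for i in range(workers)]
    with Pool(workers) as pool:
        image = sum(pool.map(backproject, blocks))            # sum partial images
```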

Relevance: 100.00%

Abstract:

Specific choices about how to represent complex networks can have a substantial impact on the execution time required to construct and analyze those structures. In this work we report a comparison of the effects of representing complex networks statically by adjacency matrices or dynamically by adjacency lists. Three theoretical models of complex networks are considered: two types of Erdős-Rényi model as well as the Barabási-Albert model. We investigated the effect of the different representations on the construction and measurement of several topological properties (i.e., degree, clustering coefficient, shortest path length, and betweenness centrality). We found that the form of representation generally has a substantial effect on the execution time, with the sparse representation frequently resulting in remarkably superior performance.
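A minimal sketch of the trade-off being measured, assuming a G(n, p) construction and degree as the example property (the O(n²) matrix scan versus the O(n + m) list scan):

```python
import random
import time

def er_graph(n, p):
    """Erdős-Rényi G(n, p), built in both representations simultaneously."""
    matrix = [[0] * n for _ in range(n)]
    lists = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if random.random() < p:
                matrix[i][j] = matrix[j][i] = 1
                lists[i].append(j)
                lists[j].append(i)
    return matrix, lists

def degrees_matrix(matrix):
    return [sum(row) for row in matrix]       # O(n^2) no matter how sparse

def degrees_list(lists):
    return [len(neigh) for neigh in lists]    # O(n): wins on sparse graphs

matrix, lists = er_graph(2000, 0.01)
for fn, arg in ((degrees_matrix, matrix), (degrees_list, lists)):
    t0 = time.perf_counter()
    fn(arg)
    print(fn.__name__, time.perf_counter() - t0)
```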

Relevance: 100.00%

Abstract:

The problem of scheduling a parallel program, represented by a weighted directed acyclic graph (DAG), onto a set of homogeneous processors so as to minimize the completion time of the program has been extensively studied as an academic optimization problem, which arises when optimizing the execution time of a parallel algorithm on a parallel computer. In this paper, we propose an application of Ant Colony Optimization (ACO) to a multiprocessor scheduling problem (MPSP). In the MPSP, no preemption is allowed and each operation demands a setup time on the machines. The problem seeks to compose a schedule that minimizes the total completion time. Since exact solution methods are not feasible for most problems of this kind, we rely on heuristics to find solutions. In this novel heuristic approach to multiprocessor scheduling, based on the ACO algorithm, a collection of agents cooperates to effectively explore the search space. A computational experiment is conducted on a suite of benchmark applications. Comparing the results obtained by our algorithm with those of previous heuristic algorithms shows that the ACO algorithm exhibits competitive performance with a small error ratio.
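A toy sketch of the general ACO-for-scheduling idea (not the paper's exact algorithm, which additionally models setup times): ants build precedence-feasible task orders biased by pheromone on consecutive task pairs, each order is evaluated by list scheduling, and the best order found reinforces its trail:

```python
import random

def makespan(order, preds, dur, m):
    """List-schedule tasks in the given order on m processors; return completion time."""
    finish, proc_free = {}, [0.0] * m
    for t in order:
        ready = max((finish[p] for p in preds[t]), default=0.0)
        k = min(range(m), key=lambda i: max(proc_free[i], ready))
        start = max(proc_free[k], ready)
        finish[t] = proc_free[k] = start + dur[t]
    return max(finish.values())

def aco_schedule(preds, dur, m, ants=20, iters=50, rho=0.1):
    tasks = list(preds)
    tau = {(a, b): 1.0 for a in tasks + [None] for b in tasks}  # pheromone on pairs
    best_order, best_len = None, float("inf")
    for _ in range(iters):
        for _ in range(ants):
            done, order, prev = set(), [], None
            while len(order) < len(tasks):
                ready = [t for t in tasks
                         if t not in done and all(p in done for p in preds[t])]
                t = random.choices(ready, [tau[(prev, t)] for t in ready])[0]
                order.append(t); done.add(t); prev = t
            length = makespan(order, preds, dur, m)
            if length < best_len:
                best_order, best_len = order, length
        for key in tau:                        # evaporate all trails...
            tau[key] *= 1.0 - rho
        for edge in zip([None] + best_order, best_order):
            tau[edge] += 1.0 / best_len        # ...then reinforce the best order
    return best_order, best_len

preds = {"a": [], "b": ["a"], "c": ["a"], "d": ["b", "c"]}
dur = {"a": 2, "b": 3, "c": 1, "d": 2}
print(aco_schedule(preds, dur, m=2))           # e.g. (['a', 'b', 'c', 'd'], 7)
```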

Relevance: 100.00%

Abstract:

The multiprocessor task graph scheduling problem has been extensively studied as an academic optimization problem, which arises when optimizing the execution time of a parallel algorithm on a parallel computer. The problem is known to be NP-hard. Many good approaches use optimizing algorithms to find the optimum solution for this problem with less computational time; one of them is the branch and bound algorithm. In this paper, we propose a branch and bound algorithm for the multiprocessor scheduling problem. We investigate the algorithm by comparing two different lower bounds with respect to their computational costs and the size of the pruned tree. Several experiments are made with a small set of problems, and the results are compared in different sections.
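The paper's problem includes precedence constraints; as a compact illustration of the branch-and-bound mechanics and of how a lower bound prunes the tree, the sketch below solves the simpler independent-task version (minimum makespan on m identical machines) with a bound combining average load and largest remaining task:

```python
def bnb(tasks, m):
    """Minimum makespan for independent tasks on m identical machines."""
    tasks = sorted(tasks, reverse=True)       # branching on big tasks first prunes more
    best = [sum(tasks)]                       # incumbent: trivial one-machine schedule

    def search(i, loads):
        if i == len(tasks):
            best[0] = min(best[0], max(loads))
            return
        rem = tasks[i:]
        # Lower bound: current longest machine, perfectly balanced rest, biggest task.
        lb = max(max(loads), (sum(loads) + sum(rem)) / m, max(rem))
        if lb >= best[0]:
            return                            # prune: this subtree cannot improve
        for k in range(m):
            loads[k] += tasks[i]
            search(i + 1, loads)
            loads[k] -= tasks[i]

    search(0, [0] * m)
    return best[0]

print(bnb([4, 7, 2, 5, 3, 6], m=3))           # -> 9
```

A tighter bound prunes more nodes but costs more per node; that trade-off is exactly what the paper's comparison of two lower bounds measures.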

Relevance: 100.00%

Abstract:

In order to achieve high performance, we need an efficient scheduling of a parallel program onto the processors of a multiprocessor system that minimizes the entire execution time. This multiprocessor scheduling problem can be stated as finding a schedule for a general task graph to be executed on a multiprocessor system so that the schedule length is minimized [10]. This scheduling problem is known to be NP-hard. In multiprocessor task scheduling, we have a number of CPUs on which a number of tasks are to be scheduled so that the program's execution time is minimized. According to [10], the task scheduling problem is a key factor for a parallel multiprocessor system to gain better performance. A task can be partitioned into a group of subtasks and represented as a DAG (directed acyclic graph), so the problem can be stated as finding a schedule for a DAG to be executed on a parallel multiprocessor system so that the schedule length is minimized. This helps to reduce processing time and increase processor utilization. The aim of this thesis work is to check and compare the results obtained by the Bee Colony algorithm with the best known results already published in the multiprocessor task scheduling domain.
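Results in this domain are commonly judged against the critical-path lower bound of the DAG: no schedule, on any number of processors, can be shorter than the longest cost-weighted path. A minimal sketch on a toy task graph (the graph and costs are illustrative, not from the thesis):

```python
# Task graph as successor lists plus computation costs (toy example).
succ = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
cost = {"a": 2, "b": 3, "c": 1, "d": 2}

def b_level(t):
    """Longest path from t to an exit task, including t's own cost.
    Its maximum over all tasks is the critical path: a lower bound
    on the schedule length of any processor assignment."""
    return cost[t] + max((b_level(s) for s in succ[t]), default=0)

print(max(b_level(t) for t in succ))   # 7: the path a -> b -> d
```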

Relevance: 100.00%

Abstract:

This thesis presents a color segmentation approach for traffic sign recognition based on LVQ neural networks. The RGB images were converted into the HSV color space and segmented using LVQ according to the hue and saturation values of each pixel. The LVQ neural network was used to segment the red, blue, and yellow colors of road and traffic signs in order to detect and recognize them. LVQ was effectively applied to 536 sampled images taken in different countries under different conditions, with 89% accuracy, and the per-image execution time, measured on a subset of 31 images, ranged from 0.726 s to 0.844 s. The method was tested under different environmental conditions, and LVQ showed its capacity to segment color reasonably well despite remarkable illumination differences. The results showed high robustness.
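A minimal sketch of the LVQ1 update rule on (hue, saturation) features, with illustrative toy values rather than the thesis's trained network:

```python
import numpy as np

def lvq1_train(X, y, prototypes, labels, lr=0.05, epochs=30):
    """LVQ1 update: move the winning prototype toward a sample of its own
    class, and away from a sample of a different class."""
    P = prototypes.astype(float).copy()
    for _ in range(epochs):
        for x, c in zip(X, y):
            w = np.argmin(np.linalg.norm(P - x, axis=1))    # nearest prototype wins
            P[w] += lr * (x - P[w]) if labels[w] == c else -lr * (x - P[w])
    return P

# Toy (hue, saturation) pixels: class 0 ~ red signs, class 1 ~ blue signs.
# Values are illustrative only (and real hue is circular, which this ignores).
X = np.array([[0.02, 0.9], [0.05, 0.8], [0.60, 0.9], [0.63, 0.7]])
y = np.array([0, 0, 1, 1])
protos = lvq1_train(X, y, np.array([[0.4, 0.5], [0.55, 0.5]]), labels=[0, 1])
pixel = np.array([0.61, 0.85])
print(np.argmin(np.linalg.norm(protos - pixel, axis=1)))    # -> 1 (blue)
```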

Relevance: 100.00%

Abstract:

Very large scale computations are now routinely used as a methodology to undertake scientific research. In this context, 'provenance systems' are regarded as the equivalent of the scientist's logbook for in silico experimentation: provenance captures the documentation of the process that led to some result. Using a protein compressibility analysis application, we derive a set of generic use cases for a provenance system. In order to support these, we address the following fundamental questions: What is provenance? How do we record it? What is the performance impact for grid execution? What is the performance of reasoning? In doing so, we define a technology-independent notion of provenance that captures interactions between components, internal component information, and grouping of interactions, so as to allow us to analyse and reason about the execution of scientific processes. In order to support persistent provenance in heterogeneous applications, we introduce a separate provenance store, in which provenance documentation can be stored, archived, and queried independently of the technology used to run the application. Through a series of practical tests, we evaluate the performance impact of such a provenance system. In summary, we demonstrate that the provenance recording overhead of our prototype system remains under 10% of execution time, and we show that the recorded information successfully supports our use cases in a performant manner.
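The paper defines its own provenance protocol and store interface; as a stand-in illustrating the general idea of recording process documentation separately from the application, here is a hypothetical decorator that logs each invocation to an in-memory store:

```python
import json
import time
import uuid

PROVENANCE_STORE = []   # stand-in for a separate provenance store/service

def recorded(fn):
    """Append an interaction record (inputs, output, timing) for each call,
    independently of the technology running the application itself."""
    def wrapper(*args, **kwargs):
        t0 = time.time()
        result = fn(*args, **kwargs)
        PROVENANCE_STORE.append({
            "id": str(uuid.uuid4()),
            "activity": fn.__name__,
            "inputs": {"args": repr(args), "kwargs": repr(kwargs)},
            "output": repr(result),
            "started": t0,
            "duration": time.time() - t0,
        })
        return result
    return wrapper

@recorded
def compressibility(sequence):          # hypothetical analysis step
    return len(set(sequence)) / len(sequence)

compressibility("MKVLAAGICLAV")
print(json.dumps(PROVENANCE_STORE, indent=2))   # queryable record of the run
```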

Relevance: 100.00%

Abstract:

Dynamic composition of services provides the ability to build complex distributed applications at run time by combining existing services, thus coping with a large variety of complex requirements that cannot be met by individual services alone. However, with the increasing number of available services that differ in granularity (amount of functionality provided) and qualities, selecting the best combination of services becomes very complex. In response, this paper addresses the challenges of service selection and makes a twofold contribution. First, a rich representation of compositional planning knowledge is provided, allowing the expression of multiple decompositions of tasks at arbitrary levels of granularity. Second, two distinct search-space reduction techniques are introduced; applying them prior to service selection results in a significant improvement in selection performance in terms of execution time, as demonstrated by experimental results.
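The abstract does not spell out the two reduction techniques, so the sketch below shows a generic stand-in: Pareto-dominance pruning of each task's candidate services before enumerating combinations, which shrinks the selection search space without discarding any optimum for monotonic objectives:

```python
from itertools import product

# Candidate services per task as (cost, latency) qualities; lower is better (toy data).
candidates = {
    "encode": [(5, 30), (7, 25), (9, 40)],
    "store":  [(2, 50), (3, 20), (4, 60)],
}

def dominated(a, b):
    """b dominates a if b is no worse in every quality dimension and differs."""
    return all(y <= x for x, y in zip(a, b)) and b != a

def prune(services):
    return [s for s in services if not any(dominated(s, t) for t in services)]

pruned = {task: prune(svcs) for task, svcs in candidates.items()}
# Selection now enumerates a smaller space of combinations.
best = min(product(*pruned.values()), key=lambda combo: sum(c + l for c, l in combo))
print(pruned, best)
```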

Relevance: 100.00%

Abstract:

Audio coding is used to compress digital audio signals, thereby reducing the number of bits needed to transmit or store an audio signal. This is useful when network bandwidth or storage capacity is very limited. Audio compression algorithms are based on an encoding and a decoding process. In the encoding step, the uncompressed audio signal is transformed into a coded representation, thereby compressing the audio signal. Thereafter, the coded audio signal eventually needs to be restored (e.g., for playback) through decoding: the decoder receives the bitstream and reconverts it into an uncompressed signal. ISO-MPEG is a standard for high-quality, low-bit-rate video and audio coding. The audio part of the standard is composed of algorithms for high-quality, low-bit-rate audio coding, i.e., algorithms that reduce the original bit rate while guaranteeing high quality of the audio signal. The audio coding algorithms comprise MPEG-1 (with three different layers), MPEG-2, MPEG-2 AAC, and MPEG-4. This work presents a study of the MPEG-4 AAC audio coding algorithm. In addition, it presents implementations of the AAC algorithm on different platforms and comparisons among them: in C, in Intel Pentium assembly, in C on a DSP processor, and in HDL. Since each implementation has its own application niche, each one is valid as a final solution. Moreover, another purpose of this work is the comparison among these implementations, considering estimated costs, execution time, and the advantages and disadvantages of each one.
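AAC's analysis filterbank is built on the MDCT; below is a direct-form O(N²) reference sketch (real encoders use fast FFT-based forms) together with the windowed overlap-add round trip that makes the lapped transform invertible:

```python
import numpy as np

def mdct(x):
    """Direct-form MDCT: 2N samples in, N coefficients out (O(N^2) reference)."""
    N = len(x) // 2
    n, k = np.arange(2 * N), np.arange(N)
    basis = np.cos(np.pi / N * (n[None, :] + 0.5 + N / 2) * (k[:, None] + 0.5))
    return basis @ x

def imdct(X):
    """Inverse MDCT: N coefficients in, 2N aliased samples out; 50% overlap-add
    with a Princen-Bradley window cancels the aliasing (TDAC)."""
    N = len(X)
    n, k = np.arange(2 * N), np.arange(N)
    basis = np.cos(np.pi / N * (n[:, None] + 0.5 + N / 2) * (k[None, :] + 0.5))
    return (2.0 / N) * (basis @ X)

# Round-trip check: sine window, hops of N samples.
N = 128
win = np.sin(np.pi / (2 * N) * (np.arange(2 * N) + 0.5))
x = np.random.randn(4 * N)
y = np.zeros_like(x)
for start in (0, N, 2 * N):
    frame = x[start:start + 2 * N] * win
    y[start:start + 2 * N] += imdct(mdct(frame)) * win
print(np.allclose(x[N:3 * N], y[N:3 * N]))   # True in the fully overlapped region
```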

Relevance: 100.00%

Abstract:

In this work we investigate network formation with cautious agents. The model consists of two regions with n/2 banks each, interconnected through interbank deposits. Each bank is subject to a bank run, either due to a negative shock from impatient agents or due to contagion from a run on a bank in its banking infrastructure. Banks can try to eliminate the possibility of contagion by making a large number of interconnections, which requires coordination among all banks. If a bank does not protect itself against contagion, it imposes on all the others the possibility of contagion in the worst-case scenario. There are two well-defined regions of symmetric Nash equilibrium with a stable network: one in which all banks protect themselves against worst-case contagion, and another in which no bank does. Due to the coordination problem, the equilibrium with worst-case contagion can occur even though it is Pareto-dominated by the equilibrium without contagion. Under certain conditions, the equilibrium with contagion occurs with a Pareto-efficient network; in this case the efficient network differs from the network most resilient to contagion.

Relevance: 100.00%

Abstract:

The evolution of wireless communication systems leads to Dynamic Spectrum Allocation for Cognitive Radio, which requires reliable spectrum sensing techniques. Among the spectrum sensing methods proposed in the literature, those that exploit cyclostationary characteristics of radio signals are particularly suitable for communication environments with low signal-to-noise ratios, or with non-stationary noise. However, such methods have high computational complexity that directly raises the power consumption of devices which often have very stringent low-power requirements. We propose a strategy for cyclostationary spectrum sensing with reduced energy consumption. This strategy is based on the principle that p processors working at slower frequencies consume less power than a single processor for the same execution time. We devise a strict relation between the energy savings and common parallel system metrics. The results of simulations show that our strategy promises very significant savings in actual devices.
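The power argument can be made concrete under the common CMOS assumption that dynamic power scales roughly cubically with frequency (P ∝ CV²f with V tracking f): p cores at f/p finish the same work in the same time while drawing p·(f/p)³ ∝ 1/p² of the power. A back-of-the-envelope sketch (the exponent and core counts are assumptions, not the paper's measured figures):

```python
def parallel_energy_ratio(p, alpha=3.0):
    """Energy of p cores at frequency f/p vs one core at f, same deadline.
    Assumes dynamic power ~ f**alpha (alpha ~ 3 when voltage tracks frequency)."""
    return p * (1.0 / p) ** alpha   # same execution time, so power ratio = energy ratio

for p in (1, 2, 4, 8):
    print(p, parallel_energy_ratio(p))   # 1.0, 0.25, 0.0625, 0.015625
```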

Relevance: 100.00%

Abstract:

The Brazilian Northeast is the region most vulnerable to climate variability risks. For the Brazilian semi-arid region, a reduction in overall precipitation rates and an increase in the number of dry days are expected. These changes, predicted by the IPCC (2007), will intensify rainfall and drought periods, which could promote the dominance of cyanobacteria, thus affecting the water quality of the reservoirs most used for water supply in the semi-arid region. The aim of this study was to evaluate the effects of increasing temperature combined with nutrient enrichment on the functional structure of the phytoplankton community of a mesotrophic reservoir in the semi-arid region, under the worst-case scenario of climate change predicted by the IPCC (2007). Two experiments were performed, one in the rainy season and another in the dry season. Nutrients (nitrate and orthophosphate) were added to the sampled water in different concentrations. The microcosms were subjected to two different temperatures: the five-year average air temperature at the reservoir (control) and 4 °C above the control temperature (warming). The results of this study showed that warming and nutrient enrichment benefited mainly the functional groups of cyanobacteria. During the rainy season we observed increasing biomass of small functional groups of unicellular, opportunistic algae, such as F (colonial green algae with mucilage) and X1 (nanoplanktonic algae of eutrophic lake systems), as well as increases in total biomass and in the richness and diversity of the community. In the dry-season experiment, filamentous algae contributed more to the relative biomass, with the group S1 (filamentous cyanobacteria without heterocytes) being replaced by H1 (filamentous cyanobacteria with heterocytes) in nutrient-enriched treatments; moreover, there were losses in total biomass, species richness, and community diversity. The effects of the temperature and nutrient manipulation on the phytoplankton community of the Ministro João Alves reservoir provoked changes in species richness, community diversity, and functional composition, with the dry period showing the highest susceptibility to an increased contribution of potentially toxic cyanobacteria with heterocytes.

Relevance: 100.00%

Abstract:

This study intended to evaluate the accuracy of the maze test in screening for cognitive deficit in elderly people with or without neuropsychological pathology. The sample included 40 healthy young adults (18-25 years old; mean 21 ± 1.6), 40 healthy elderly (60-77 years old; mean 67 ± 5.1), and 18 patients with a probable diagnosis of initial-stage Alzheimer's disease (52-90 years old; mean 78 ± 9.2). Data analysis used ANOVA with Tukey's post hoc test, multiple linear regression analysis, and ROC curve analysis. According to Tukey's test, Alzheimer patients spent more time (46843 ± 37926 ms) executing the test than the healthy young (5482 ± 2873 ms; p = 0.0001) and the healthy elderly (17978 ± 13700 ms; p = 0.0001); the healthy young executed the test in less time than the healthy elderly (p = 0.035). In the regression analysis of age, education level, and cognitive performance across the three groups, cognitive performance was the predictor of execution time. When analyzing only the young and the elderly, age was the sole predictor, and cognitive performance was the only factor to influence the test results of the healthy elderly and patients. The ROC curve analysis indicated 72% accuracy for young versus elderly and 36% for healthy elderly versus patients. The maze execution time that represented the best balance between sensitivity (75%) and specificity (61%) was near 13575 ms, indicating that subjects who execute the maze in a time above this value may show cognitive deficit related to executive function. According to the results, it is suggested that the maze test used in this study shows good accuracy in cognitive deficit screening and may discriminate age-related changes.
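A cut-off such as the 13575 ms value is typically chosen by scanning the ROC curve for the point balancing sensitivity and specificity; a sketch with illustrative numbers (not the study's data), using Youden's J:

```python
import numpy as np

def best_threshold(times_patient, times_control):
    """Scan candidate cut-offs on execution time; return the one maximizing
    Youden's J = sensitivity + specificity - 1."""
    best = (None, -1.0, 0.0, 0.0)
    for cut in np.unique(np.concatenate([times_patient, times_control])):
        sens = np.mean(times_patient >= cut)     # patients flagged as impaired
        spec = np.mean(times_control < cut)      # controls correctly cleared
        if sens + spec - 1 > best[1]:
            best = (cut, sens + spec - 1, sens, spec)
    return best

# Illustrative execution times in ms, not the study's measurements.
patients = np.array([46843, 22000, 15500, 60000, 14000], dtype=float)
controls = np.array([5482, 17978, 9000, 12500, 13000], dtype=float)
print(best_threshold(patients, controls))        # cut-off, J, sensitivity, specificity
```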

Relevance: 100.00%

Abstract:

Artificial neural networks are usually applied to solve complex problems. In more complex problems, increasing the number of layers and neurons makes it possible to achieve greater functional efficiency; nevertheless, this demands greater computational effort. Response time is an important factor in the decision to use neural networks in some systems. Many argue that the computational cost is higher in the training period, but this phase is performed only once; once the network is trained, it is necessary to use the existing computational resources efficiently. In the multicore era, the problem boils down to the efficient use of all available processing cores, while accounting for the overhead of parallel computing. In this sense, this paper proposes a modular structure that proved to be more suitable for parallel implementations. We propose to parallelize the feedforward process of an MLP-type artificial neural network, implemented with OpenMP on a shared-memory computer architecture. The research consists of testing and analyzing execution times; speedup, efficiency, and parallel scalability are analyzed. In the proposed approach, reducing the number of connections between remote neurons decreases the response time of the network and, consequently, the total execution time. The time required for communication and synchronization is directly linked to the number of remote neurons in the network, so it is necessary to investigate the best distribution of remote connections.
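A rough Python analogue of the decomposition described (the work itself uses OpenMP in C on shared memory): the rows of a layer's weight matrix, i.e. its neurons, are partitioned across threads, and each thread computes the activations of its own slab:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def feedforward_partitioned(W, x, b, workers=4):
    """Feedforward of one layer with the neuron rows of W partitioned across
    threads -- a rough analogue of an OpenMP shared-memory decomposition."""
    slices = np.array_split(np.arange(W.shape[0]), workers)

    def slab(idx):                      # each worker computes its block of neurons
        return np.tanh(W[idx] @ x + b[idx])

    with ThreadPoolExecutor(workers) as pool:
        return np.concatenate(list(pool.map(slab, slices)))

rng = np.random.default_rng(0)
W, b = rng.normal(size=(1024, 256)), rng.normal(size=1024)
x = rng.normal(size=256)
parallel = feedforward_partitioned(W, x, b)
serial = np.tanh(W @ x + b)
print(np.allclose(parallel, serial))    # same activations, computed in parallel slabs
```

Connections that cross slab boundaries play the role of the "remote neurons" whose communication and synchronization cost the proposed modular structure tries to minimize.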