853 results for parallel execution
Abstract:
This work presents a study of standards and approaches for parallel programming in distributed systems, using the MPI standard and the PETSc toolkit, and analyzes their performance on mathematical operations involving matrices. These concepts are applied to develop applications that solve Principal Component Analysis (PCA) problems, executed on a Beowulf cluster. The results are compared with those of an analogous sequential application to determine whether the parallel application achieved a performance gain.
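As a rough illustration of the kind of computation this entry describes, here is a minimal sketch (not the paper's application) of a row-distributed covariance step of PCA using MPI; mpi4py and NumPy are assumed, and all sizes are illustrative.

```python
# A minimal sketch (not the paper's application) of a row-distributed
# PCA covariance computation, assuming mpi4py and NumPy; run with e.g.
# `mpiexec -n 4 python pca_mpi.py`.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

rows_per_proc, n_features = 2500, 8        # illustrative sizes
n_samples = rows_per_proc * size
local = np.random.default_rng(rank).normal(size=(rows_per_proc, n_features))

# Global mean from per-process partial sums.
total_sum = np.zeros(n_features)
comm.Reduce(local.sum(axis=0), total_sum, op=MPI.SUM, root=0)
mean = comm.bcast(total_sum / n_samples if rank == 0 else None, root=0)

# Global covariance from per-process scatter matrices.
centered = local - mean
total_cov = np.zeros((n_features, n_features))
comm.Reduce(centered.T @ centered, total_cov, op=MPI.SUM, root=0)

if rank == 0:
    eigvals, eigvecs = np.linalg.eigh(total_cov / (n_samples - 1))
    print("leading principal component:", eigvecs[:, -1])
```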
Abstract:
The increasing number of sequences stored in genomic databases has made sequential analysis infeasible. Parallel computing has therefore been applied to bioinformatics through parallel algorithms for aligning and analyzing sequences, improving mainly the running time of these algorithms. In many situations, the parallel strategy also helps reduce the computational complexity of large problems. This work presents results obtained with a parallel score-estimating technique for the score matrix calculation stage, the first stage of a progressive multiple sequence alignment. The performance and quality of the parallel score estimation are compared with those of a dynamic programming approach, also implemented in parallel; the comparison shows a significant reduction in running time. Moreover, the quality of the final alignment produced with the new strategy is analyzed and compared with that of the dynamic programming approach.
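A minimal sketch of the kind of parallel score-matrix stage described, with a process pool distributing the sequence pairs; the identity-fraction scoring function is a hypothetical stand-in for the paper's estimating technique.

```python
# A minimal sketch (not the paper's method) of parallelizing the
# pairwise score-matrix stage of a progressive multiple alignment with
# a process pool; the identity-fraction score is a hypothetical
# stand-in for the paper's score-estimating technique.
from itertools import combinations
from multiprocessing import Pool

import numpy as np

def pair_score(args):
    i, j, a, b = args
    # Fraction of identical positions over the shorter sequence.
    score = sum(x == y for x, y in zip(a, b)) / min(len(a), len(b))
    return i, j, score

def score_matrix(seqs, workers=4):
    n = len(seqs)
    mat = np.zeros((n, n))
    jobs = [(i, j, seqs[i], seqs[j]) for i, j in combinations(range(n), 2)]
    with Pool(workers) as pool:
        for i, j, s in pool.imap_unordered(pair_score, jobs):
            mat[i, j] = mat[j, i] = s
    return mat

if __name__ == "__main__":
    seqs = ["ACGTACGT", "ACGTTCGT", "TTGTACGA", "ACGAACGT"]
    print(score_matrix(seqs))
```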
Abstract:
This paper presents the design of a high-speed coprocessor for Elliptic Curve Cryptography over binary Galois fields (ECC-GF(2^m)). The purpose of our coprocessor is to accelerate the scalar multiplication performed over elliptic curve points represented in affine coordinates in polynomial basis. Our method consists of using elliptic curve parameters over GF(2^163), in accordance with international security requirements, to implement a bit-parallel coprocessor on a field-programmable gate array (FPGA). Our coprocessor performs modular inversion by a process based on Stein's algorithm. Results are presented and compared to those of other related works. We conclude that our coprocessor stands comparison with other ECC hardware proposals, since its speed is comparable to that of projective-coordinate designs.
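For illustration, a software sketch of the binary (Stein-style) inversion the abstract mentions, over GF(2^m) in polynomial basis; this is the operation the coprocessor accelerates in hardware. The element `a` below is arbitrary, and the reduction polynomial is the NIST one for the 163-bit binary field.

```python
# A software sketch of binary (Stein-style) inversion over GF(2^m) in
# polynomial basis, with elements stored as Python ints.

def deg(p):
    """Degree of a GF(2) polynomial stored as an int."""
    return p.bit_length() - 1

def gf2m_inverse(a, f):
    """Invert nonzero a modulo the irreducible polynomial f."""
    u, v, g1, g2 = a, f, 1, 0
    while u != 1 and v != 1:
        while u & 1 == 0:                        # u divisible by z
            u >>= 1
            g1 = g1 >> 1 if g1 & 1 == 0 else (g1 ^ f) >> 1
        while v & 1 == 0:                        # v divisible by z
            v >>= 1
            g2 = g2 >> 1 if g2 & 1 == 0 else (g2 ^ f) >> 1
        if deg(u) > deg(v):
            u ^= v; g1 ^= g2
        else:
            v ^= u; g2 ^= g1
    return g1 if u == 1 else g2

def gf2m_mul(a, b, f, m):
    """Shift-and-add multiplication modulo f (deg f = m), to verify."""
    r = 0
    for _ in range(m):
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << m):                         # reduce when deg(a) hits m
            a ^= f
    return r

if __name__ == "__main__":
    m = 163
    f = (1 << 163) | (1 << 7) | (1 << 6) | (1 << 3) | 1  # z^163+z^7+z^6+z^3+1
    a = (1 << 120) | (1 << 17) | 1               # arbitrary nonzero element
    inv = gf2m_inverse(a, f)
    assert gf2m_mul(a, inv, f, m) == 1
    print(hex(inv))
```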
Abstract:
Prototypes of resistive-type superconducting fault current limiters (RSFCL) using YBCO-coated conductors have shown current limitation for medium-voltage-class applications for acting times of up to 80 ms. By connecting an air-core reactor in parallel with the RSFCL, forming a hybrid current limiter, the acting time can be extended to up to 1 s. In this work, we report the performance of a hybrid current limiter subjected to an AC peak fault current of 2 kA for 1 s, during which the SFCL limits the current together with the air-core reactor for the first 80 ms, and the air-core reactor alone limits the current for the remaining 920 ms. To evaluate the actual conditions for subsequent reconnection of the RSFCL to the power grid, the hybrid fault current limiter was tested with recovery intervals varying between 900 ms and 1.2 s, followed again by concurrent operation of the hybrid limiter for 1 s (the SFCL for 80 ms). From this evaluation test, the recovery time can be measured and compared using the voltage peak generated in the superconducting module during the first and second fault tests. The recovery time was also determined through the pulsed current method (PCM) on a short-length sample. The results showed that the fault current was limited from 1.9 kA down to 514 A after one cycle at 60 Hz, with a recovery time lower than 1.2 s for two subsequent fault current tests.
Abstract:
Research on the microstructural characterization of metal-matrix composites uses X-ray computed tomography to collect information about the interior features of samples, in order to elucidate their exhibited properties. The raw tomographic data requires several computational processing steps to eliminate noise and interference. Our experience with a program (Tritom) that handles these issues has shown that in some cases the processing steps take a very long time, and that it is not easy for a materials science specialist to interact with Tritom to define the most adequate parameter values and the proper sequence of the available processing steps. To ease the use of Tritom, a system was built that addresses these aspects and is based on the OpenDX visualization system. OpenDX's visualization facilities are a great benefit to Tritom, and its visual programming environment allows a sequence of processing steps to be defined easily, fulfilling the requirement of easy use by non-specialists in computer science. The possibility of incorporating external modules in a visual OpenDX program also allows researchers to reduce the long execution time of some processing steps: the longest processing steps of Tritom have been parallelized for two different types of hardware architecture (message-passing and shared-memory), and the corresponding parallel programs can be easily incorporated in a sequence of processing steps defined in an OpenDX program. The benefits of our system are illustrated through an example in which the tool is applied to study the sensitivity to crushing, and the implications thereof, of the reinforcements used in a functionally graded syntactic metallic foam.
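As a rough sketch of the shared-memory style of parallelization described (not Tritom's code), here is a per-slice filtering step distributed over worker processes; SciPy's `median_filter` stands in for one of Tritom's long-running processing steps.

```python
# A rough sketch (not Tritom's code) of parallelizing a slow per-slice
# filtering step over a tomographic volume with worker processes.
from multiprocessing import Pool

import numpy as np
from scipy.ndimage import median_filter

def denoise_slice(slc):
    # Placeholder for a noise/interference-removal step.
    return median_filter(slc, size=3)

def denoise_volume(volume, workers=4):
    with Pool(workers) as pool:
        return np.stack(pool.map(denoise_slice, list(volume)))

if __name__ == "__main__":
    vol = np.random.rand(64, 256, 256)   # synthetic 64-slice volume
    print(denoise_volume(vol).shape)     # (64, 256, 256)
```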
Abstract:
This long-term extension of an 8-week randomized, naturalistic study in patients with panic disorder with or without agoraphobia compared the efficacy and safety of clonazepam (n = 47) and paroxetine (n = 37) over a 3-year total treatment duration. Target doses for all patients were 2 mg/d clonazepam and 40 mg/d paroxetine (both taken at bedtime). This study reports data from the long-term period (34 months) following the initial 8-week treatment phase; thus, total treatment duration was 36 months. Patients with a good primary outcome during acute treatment continued monotherapy with clonazepam or paroxetine, while patients with partial primary treatment success were switched to combination therapy. At initiation of the long-term study, the mean doses of clonazepam and paroxetine were 1.9 (SD, 0.30) and 38.4 (SD, 3.74) mg/d, respectively. These doses were maintained until month 36 (clonazepam 1.9 [SD, 0.29] mg/d and paroxetine 38.2 [SD, 3.87] mg/d). Long-term treatment with clonazepam led to small but significantly better Clinical Global Impression (CGI) ratings than treatment with paroxetine (mean difference: CGI-Severity scale -3.48 vs -3.24, respectively, P = 0.02; CGI-Improvement scale 1.06 vs 1.11, respectively, P = 0.04). Both treatments similarly reduced the number of panic attacks and the severity of anxiety. Patients treated with clonazepam had significantly fewer adverse events than those treated with paroxetine (28.9% vs 70.6%, P < 0.001). The efficacy of clonazepam and paroxetine for the treatment of panic disorder was maintained over the long-term course, and there was a significant advantage of clonazepam over paroxetine with respect to the frequency and nature of adverse events.
Abstract:
In this article, we introduce two new variants of the Assembly Line Worker Assignment and Balancing Problem (ALWABP) that allow parallelization of, and collaboration between, heterogeneous workers. These new approaches introduce an additional level of complexity in the line design and assignment process, but also provide higher flexibility, which may be particularly useful in practical situations where the aim is to progressively integrate slow or limited workers into conventional assembly lines. We present linear models and heuristic procedures for these two new problems. Computational results show the efficiency of the proposed approaches and the efficacy of the studied layouts in different situations.
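As a hedged illustration only (not the paper's formulation), here is a small MILP in the spirit of an assembly line worker assignment and balancing model: tasks are assigned to stations, one heterogeneous worker per station, minimizing the cycle time. All data and the choice of the `pulp` solver are assumptions.

```python
# A hypothetical ALWABP-style MILP sketch (not the paper's model),
# assuming the pulp package.
import pulp

tasks, stations, workers = range(4), range(2), range(2)
ptime = {0: [4, 3, 5, 2],   # illustrative task times for worker 0
         1: [6, 2, 3, 3]}   # illustrative task times for worker 1
prec = [(0, 1), (1, 3)]     # task 0 before task 1, task 1 before task 3

prob = pulp.LpProblem("alwabp_sketch", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", (tasks, stations), cat="Binary")    # task -> station
y = pulp.LpVariable.dicts("y", (workers, stations), cat="Binary")  # worker -> station
C = pulp.LpVariable("cycle_time", lowBound=0)
prob += C                                        # objective: minimize cycle time

for t in tasks:                                  # every task at exactly one station
    prob += pulp.lpSum(x[t][s] for s in stations) == 1
for s in stations:                               # exactly one worker per station
    prob += pulp.lpSum(y[w][s] for w in workers) == 1
for w in workers:                                # every worker at exactly one station
    prob += pulp.lpSum(y[w][s] for s in stations) == 1
for a, b in prec:                                # precedence via station indices
    prob += (pulp.lpSum(s * x[a][s] for s in stations)
             <= pulp.lpSum(s * x[b][s] for s in stations))

M = sum(max(ptime[w][t] for w in workers) for t in tasks)  # big-M load bound
for s in stations:
    for w in workers:                            # station load binds only if w works at s
        prob += (pulp.lpSum(ptime[w][t] * x[t][s] for t in tasks)
                 <= C + M * (1 - y[w][s]))

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("cycle time:", pulp.value(C))
```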
Abstract:
Data visualization techniques are powerful tools for handling and analyzing multivariate systems. One such technique, parallel coordinates, was used to support the diagnosis of an event, detected by a neural-network-based monitoring system, in a boiler at a Brazilian Kraft pulp mill. Its attraction is the possibility of visualizing several variables simultaneously. The diagnostic procedure was carried out step by step, going through exploratory, explanatory, confirmatory, and communicative goals. The tool allowed the boiler dynamics to be visualized more easily than with the commonly used univariate trend plots, and it facilitated the analysis of other aspects, namely relationships among process variables, distinct modes of operation, and discrepant data. The analysis revealed, first, that the period involving the detected event was associated with a transition between two distinct normal modes of operation and, second, the presence of unusual changes in process variables at that time.
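A minimal sketch of a parallel-coordinates view of multivariate process data, assuming pandas and Matplotlib; the variable names and operating-mode labels are illustrative, not the mill's actual tags.

```python
# A minimal parallel-coordinates sketch for multivariate process data.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from pandas.plotting import parallel_coordinates

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "steam_flow":    rng.normal(50, 5, n),
    "drum_pressure": rng.normal(80, 4, n),
    "O2":            rng.normal(3.0, 0.5, n),
    "feedwater_T":   rng.normal(120, 6, n),
})
df["mode"] = np.where(df["steam_flow"] > 52, "mode A", "mode B")

# Min-max normalize each variable so all axes share one scale,
# then draw one polyline per observation, colored by operating mode.
cols = df.columns[:-1]
df[cols] = (df[cols] - df[cols].min()) / (df[cols].max() - df[cols].min())
parallel_coordinates(df, "mode", alpha=0.3)
plt.ylabel("normalized value")
plt.show()
```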
Abstract:
Consider the NP-hard problem of finding, in a given simple graph G, a series-parallel subgraph with the maximum number of edges. The algorithm that, given a connected graph G, outputs a spanning tree of G is a 1/2-approximation: if n is the number of vertices in G, any spanning tree of G has n-1 edges, and any series-parallel graph on n vertices has at most 2n-3 edges. We present a 7/12-approximation for this problem and results showing the limits of our approach.
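Made explicit, the spanning-tree bound quoted in the abstract is the ratio

```latex
% Spanning-tree baseline: ALG = n-1 edges, OPT <= 2n-3 edges, so
\[
  \frac{\mathrm{ALG}}{\mathrm{OPT}} \;\ge\; \frac{n-1}{2n-3} \;>\; \frac{1}{2}
  \quad\text{for all } n \ge 2,
\]
% since 2(n-1) = 2n-2 > 2n-3.
```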