61 results for parallel applications
in the Biblioteca Digital da Produção Intelectual da Universidade de São Paulo (BDPI/USP)
Abstract:
The InteGrade middleware is designed to exploit the idle time of computing resources in computer laboratories. In this work we investigate the performance of running parallel applications with communication among processors on the InteGrade grid. As costly communication on a grid can be prohibitive, we explore the so-called systolic or wavefront paradigm to design the parallel algorithms, in which no global communication is used. To evaluate the InteGrade middleware we considered three parallel algorithms that solve the matrix chain product problem, the 0-1 Knapsack Problem, and the local sequence alignment problem, respectively. We show that these three applications running under the InteGrade middleware and MPI take slightly more time than the same applications running on a cluster with only LAM-MPI support. The results can be considered promising, and the time difference between the two is not substantial. The overhead of the InteGrade middleware is acceptable in view of the benefits it provides in facilitating the use of grid computing; these benefits include job submission, checkpointing, security, job migration, etc. Copyright (C) 2009 John Wiley & Sons, Ltd.
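For context, the wavefront paradigm can be illustrated on one of the three applications, local sequence alignment: cells on the same anti-diagonal of the dynamic-programming table are mutually independent, so each diagonal could be distributed across grid nodes. The following is a minimal sequential sketch of that iteration order (not the authors' MPI implementation; the scoring parameters are illustrative assumptions):

```python
def smith_waterman_wavefront(a, b, match=2, mismatch=-1, gap=-1):
    """Local-alignment DP table filled in anti-diagonal (wavefront) order.

    Cells on the same anti-diagonal depend only on earlier diagonals,
    so each diagonal's cells could be computed in parallel.
    """
    n, m = len(a), len(b)
    H = [[0] * (m + 1) for _ in range(n + 1)]
    best = 0
    for d in range(2, n + m + 1):                 # anti-diagonal index i + j
        for i in range(max(1, d - m), min(n, d - 1) + 1):
            j = d - i
            s = match if a[i - 1] == b[j - 1] else mismatch
            H[i][j] = max(0,
                          H[i - 1][j - 1] + s,    # match/mismatch
                          H[i - 1][j] + gap,      # gap in b
                          H[i][j - 1] + gap)      # gap in a
            best = max(best, H[i][j])
    return best
```

A parallel version would assign blocks of each anti-diagonal to different processors, exchanging only boundary cells between neighbors.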
Abstract:
In 2006 the Route load-balancing algorithm was proposed and compared to other techniques aimed at optimizing process allocation in grid environments. This algorithm schedules tasks of parallel applications considering computer neighborhoods (where distance is defined by network latency). Route presents good results for large environments, although there are cases where the neighbors have neither enough computational capacity nor a communication system capable of serving the application. In those situations Route migrates tasks until they stabilize in a grid area with enough resources. This migration may take a long time, which reduces overall performance. To shorten this stabilization time, this paper proposes RouteGA (Route with Genetic Algorithm support), which considers historical information on parallel application behavior as well as computer capacities and loads to optimize the scheduling. This information is extracted by monitors and summarized in a knowledge base used to quantify the resource occupation of tasks. Afterwards, this information is used to parameterize a genetic algorithm responsible for optimizing the task allocation. Results confirm that RouteGA outperforms the load balancing carried out by the original Route, which had previously outperformed other scheduling algorithms from the literature.
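As a generic illustration of the approach (not RouteGA's actual encoding or parameterization), a genetic algorithm for task allocation can evolve task-to-machine assignments and score them by makespan. The chromosome layout, operators, and rates below are assumptions for the sketch:

```python
import random

def makespan(assign, task_cost, machine_speed):
    """Completion time of the most loaded machine for a given
    task -> machine assignment (assign[t] = machine index)."""
    load = [0.0] * len(machine_speed)
    for t, m in enumerate(assign):
        load[m] += task_cost[t] / machine_speed[m]
    return max(load)

def ga_schedule(task_cost, machine_speed, pop=30, gens=200, seed=0):
    """Evolve assignments: keep the best half each generation and refill
    with one-point crossover plus occasional mutation."""
    rng = random.Random(seed)
    n, k = len(task_cost), len(machine_speed)
    population = [[rng.randrange(k) for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda a: makespan(a, task_cost, machine_speed))
        survivors = population[:pop // 2]
        children = []
        while len(survivors) + len(children) < pop:
            p1, p2 = rng.sample(survivors, 2)
            cut = rng.randrange(1, n)
            child = p1[:cut] + p2[cut:]        # one-point crossover
            if rng.random() < 0.2:             # mutation: move one task
                child[rng.randrange(n)] = rng.randrange(k)
            children.append(child)
        population = survivors + children
    return min(population, key=lambda a: makespan(a, task_cost, machine_speed))
```

In a RouteGA-like setting, the fitness function would additionally draw on the knowledge base of historical application behavior rather than static task costs.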
Abstract:
The InteGrade project is a multi-university effort to build a novel grid computing middleware based on the opportunistic use of resources belonging to user workstations. The InteGrade middleware currently enables the execution of sequential, bag-of-tasks, and parallel applications that follow the BSP or the MPI programming models. This article presents the lessons learned over the last five years of the InteGrade development and describes the solutions achieved concerning the support for robust application execution. The contributions cover the related fields of application scheduling, execution management, and fault tolerance. We present our solutions, describing their implementation principles and evaluation through the analysis of several experimental results. (C) 2010 Elsevier Inc. All rights reserved.
Abstract:
Background: Feature selection is a pattern recognition approach to choosing important variables according to some criterion in order to distinguish or explain certain phenomena (i.e., for dimensionality reduction). Many genomic and proteomic applications rely on feature selection to answer questions such as selecting signature genes that are informative about some biological state, e.g., normal tissues and several types of cancer; or inferring a prediction network among elements such as genes, proteins and external stimuli. In these applications, a recurrent problem is the lack of samples needed to adequately estimate the joint probabilities between element states. A myriad of feature selection algorithms and criterion functions have been proposed, although it is difficult to point to the best solution for each application. Results: The intent of this work is to provide an open-source, multiplatform graphical environment for bioinformatics problems, which supports many feature selection algorithms, criterion functions and graphical visualization tools such as scatterplots, parallel coordinates and graphs. A feature selection approach for growing genetic networks from seed genes (targets or predictors) is also implemented in the system. Conclusion: The proposed feature selection environment allows data analysis using several algorithms, criterion functions and graphical visualization tools. Our experiments have shown the software's effectiveness in two distinct types of biological problems. Moreover, the environment can be used in different pattern recognition applications, although its main focus is bioinformatics tasks.
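One classical algorithm of the kind such an environment supports is greedy sequential forward selection (SFS), driven by a user-supplied criterion function. A minimal sketch (the toy additive criterion is an assumption standing in for, e.g., a mean-conditional-entropy criterion over real data):

```python
def forward_selection(features, criterion, k):
    """Greedy sequential forward selection (SFS): repeatedly add the
    feature whose inclusion maximizes the criterion function, until
    k features have been chosen."""
    selected, remaining = [], list(features)
    while remaining and len(selected) < k:
        best = max(remaining, key=lambda f: criterion(selected + [f]))
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy criterion: each feature contributes a fixed score (illustrative).
score = {0: 1.0, 1: 5.0, 2: 3.0, 3: 2.0}
chosen = forward_selection([0, 1, 2, 3],
                           lambda s: sum(score[f] for f in s), 2)
```

Because the criterion is passed as a function, the same search procedure can be reused with different criterion functions, which is the plug-in design the abstract describes.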
Abstract:
In this paper, artificial neural networks are employed in a novel approach to identify harmonic components of single-phase nonlinear load currents, whose amplitude and phase angle are subject to unpredictable changes, even in steady state. The first six harmonic current components are identified through the variation analysis of waveform characteristics. The effectiveness of this method is tested by applying it to the model of a single-phase active power filter dedicated to the selective compensation of the harmonic current drawn by an AC controller. Simulation and experimental results are presented to validate the proposed approach. (C) 2010 Elsevier B.V. All rights reserved.
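The paper's identifier is a neural network; as a baseline illustration of the task itself (recovering amplitude and phase of the first six harmonics), the classical alternative is a DFT projection over one fundamental period. This swapped-in classical technique, sketched below, is what the neural approach is meant to track under changing load conditions:

```python
import cmath
import math

def harmonics(samples, n_harmonics=6):
    """Amplitude and phase of the first n harmonics of a signal sampled
    uniformly over exactly one fundamental period (DFT projection)."""
    N = len(samples)
    out = []
    for k in range(1, n_harmonics + 1):
        c = sum(samples[n] * cmath.exp(-2j * math.pi * k * n / N)
                for n in range(N)) / N
        out.append((2 * abs(c), cmath.phase(c)))  # (amplitude, phase)
    return out

# Example: amplitude-3 fundamental plus a unit 3rd harmonic.
N = 64
wave = [3 * math.sin(2 * math.pi * n / N)
        + math.sin(2 * math.pi * 3 * n / N) for n in range(N)]
amps = [a for a, _ in harmonics(wave)]
```

Unlike this block-based projection, a trained network can update its harmonic estimates sample by sample, which matters when amplitudes change unpredictably.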
Abstract:
A novel cryptography method based on the Lorenz attractor chaotic system is presented. The proposed algorithm is secure and fast, making it practical for general use. We introduce the chaotic operation mode, which provides an interaction among the password, the message and a chaotic system. It ensures that the algorithm yields a secure codification even if the nature of the chaotic system is known. The algorithm has been implemented in two versions: one sequential and slow, the other parallel and fast. Our algorithm assures the integrity of the ciphertext (we know if it has been altered, which is not assured by traditional algorithms) and consequently its authenticity. Numerical experiments are presented and discussed, showing the behavior of the method in terms of security and performance. The fast version of the algorithm has performance comparable to AES, a cipher in widespread commercial use nowadays, but it is more secure, which makes it immediately suitable for general-purpose cryptography applications. An Internet page has been set up, which enables readers to test the algorithm and also to try to break the cipher.
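The property a Lorenz-based cipher relies on is sensitive dependence on initial conditions: a tiny change in the key-derived initial state yields a completely different trajectory. A minimal sketch of that behavior, using the standard textbook parameters and a simple forward-Euler step (not the paper's implementation):

```python
def lorenz_step(state, dt=0.001, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz system (textbook parameters)."""
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return (x + dt * dx, y + dt * dy, z + dt * dz)

def max_divergence(s1, s2, steps):
    """Largest coordinate-wise separation reached by two trajectories."""
    peak = 0.0
    for _ in range(steps):
        s1, s2 = lorenz_step(s1), lorenz_step(s2)
        peak = max(peak, max(abs(u - v) for u, v in zip(s1, s2)))
    return peak

# A 1e-9 perturbation (e.g., a one-bit change in the key-derived state)
# grows to macroscopic size over the integration window.
separation = max_divergence((1.0, 1.0, 1.0), (1.0, 1.0, 1.0 + 1e-9), 50000)
```

A real cipher would additionally mix the message into the evolving state (the "chaotic operation mode" the abstract names); this sketch only shows why the underlying system amplifies key differences.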
Abstract:
We present parallel algorithms on the BSP/CGM model, with p processors, to count and generate all the maximal cliques of a circle graph with n vertices and m edges. To count the number of all the maximal cliques, without actually generating them, our algorithm requires O(log p) communication rounds with O(nm/p) local computation time. We also present an algorithm to generate the first maximal clique in O(log p) communication rounds with O(nm/p) local computation, and to generate each one of the subsequent maximal cliques this algorithm requires O(log p) communication rounds with O(m/p) local computation. The maximal cliques generation algorithm is based on generating all maximal paths in a directed acyclic graph, and we present an algorithm for this problem that uses O(log p) communication rounds with O(m/p) local computation for each maximal path. We also show that the presented algorithms can be extended to the CREW PRAM model.
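The reduction named in the abstract, generating all maximal paths in a directed acyclic graph, can be illustrated sequentially. The sketch below enumerates every non-extendable (source-to-sink) path; the adjacency-dict representation is an assumption, and the BSP/CGM algorithm parallelizes this enumeration across p processors:

```python
def maximal_paths(dag):
    """Enumerate all maximal (non-extendable) paths in a DAG given as an
    adjacency dict {node: [successors]}. A maximal path starts at a node
    with no predecessors and ends at a node with no successors."""
    nodes = set(dag) | {v for succs in dag.values() for v in succs}
    has_pred = {v for succs in dag.values() for v in succs}
    sources = [v for v in nodes if v not in has_pred]
    paths = []

    def extend(path):
        succs = dag.get(path[-1], [])
        if not succs:                 # sink reached: path is maximal
            paths.append(path)
        else:
            for v in succs:
                extend(path + [v])

    for s in sources:
        extend([s])
    return paths
```

Note the number of maximal paths can be exponential in the graph size, which is why the abstract's algorithm charges O(m/p) local computation per generated path rather than in total.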
Abstract:
Approximate Lie symmetries of the Navier-Stokes equations are used for applications to scaling phenomena arising in turbulence. In particular, we show that the Lie symmetries of the Euler equations are inherited by the Navier-Stokes equations in the form of approximate symmetries, which allows the Reynolds number dependence to be incorporated into scaling laws. Moreover, the optimal systems of all finite-dimensional Lie subalgebras of the approximate symmetry transformations of the Navier-Stokes equations are constructed. We show how the scaling groups obtained can be used to introduce the Reynolds number dependence explicitly into scaling laws for stationary parallel turbulent shear flows. This is demonstrated in the framework of a new approach to deriving scaling laws based on symmetry analysis [11]-[13].
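As background (standard material, not taken from the paper itself), the classical scaling symmetries in question can be written down directly: the Euler equations admit a two-parameter scaling group, while holding the viscosity fixed breaks it to a one-parameter group for Navier-Stokes, which is the symmetry loss the approximate-symmetry framework compensates for.

```latex
% Euler equations: two-parameter scaling group (a, b independent)
x \to e^{a} x, \qquad t \to e^{b} t, \qquad
u \to e^{a-b} u, \qquad p \to e^{2(a-b)} p .

% Navier-Stokes, u_t + (u \cdot \nabla)u = -\nabla p + \nu \Delta u,
% with \nu fixed: invariance forces b = 2a, leaving one parameter
x \to e^{a} x, \qquad t \to e^{2a} t, \qquad
u \to e^{-a} u, \qquad p \to e^{-2a} p .
```

Treating the viscous term as a small perturbation restores an approximate two-parameter group, whose parameters can then carry explicit Reynolds number dependence into the derived scaling laws.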
Abstract:
Technical evaluation of analytical data is extremely relevant, considering it can be used for comparisons with environmental quality standards and for decision-making related to the management of dredged sediment disposal and the evaluation of salt and brackish water quality in accordance with the CONAMA 357/05 Resolution. It is therefore essential that the project manager discuss the environmental agency's technical requirements with the contracted laboratory, both for the follow-up of the analyses underway and with a view to possible re-analysis when anomalous data are identified. The main technical requirements are: (1) method quantitation limits (QLs) should fall below environmental standards; (2) analyses should be carried out in laboratories whose analytical scope is accredited by the National Institute of Metrology (INMETRO) or qualified or accepted by a licensing agency; (3) a chain of custody should be provided in order to ensure sample traceability; (4) control charts should be provided to prove method performance; (5) certified reference material analysis or, if that is not available, matrix spike analysis should be undertaken; and (6) chromatograms should be included in the analytical report. Within this context, and with a view to helping environmental managers evaluate analytical reports, the objectives of this work are to discuss the limitations of applying SW-846 US EPA methods to marine samples and the consequences of reporting data based on method detection limits (MDLs) rather than sample quantitation limits (SQLs), and to present possible modifications of the principal methods applied by laboratories in order to comply with environmental quality standards.
Abstract:
Colloidal particles have been used to template the electrosynthesis of several materials, such as semiconductors, metals and alloys. The method allows good control over the thickness of the resulting material by choosing the appropriate charge applied to the system, and it is able to produce high-density deposited materials without shrinkage. These materials are a true replica of the template structure and, due to the high surface areas obtained, are very promising for electrochemical applications. In the present work, the assembly of monodisperse polystyrene templates was conducted over gold, platinum and glassy carbon substrates in order to demonstrate the electrodeposition of an oxide, a conducting polymer and a hybrid inorganic-organic material with applications in the supercapacitor and sensor fields. The performance of the resulting nanostructured films has been compared with that of the analogous bulk materials, and the results achieved are presented in this paper.
Abstract:
We describe the concept, the fabrication, and the most relevant properties of a piezoelectric-polymer system: two fluoroethylenepropylene (FEP) films with good electret properties are laminated around a specifically designed and prepared polytetrafluoroethylene (PTFE) template at 300 degrees C. After removing the PTFE template, a two-layer FEP film with open tubular channels is obtained. For electric charging, the two-layer FEP system is subjected to a high electric field. The resulting dielectric barrier discharges inside the tubular channels yield a ferroelectret with high piezoelectricity. d33 coefficients of up to 160 pC/N have already been achieved on the ferroelectret films. After charging at suitably elevated temperatures, the piezoelectricity is stable at temperatures of at least 130 degrees C. Advantages of the transducer films include ease of fabrication at laboratory or industrial scales, a wide range of possible geometrical and processing parameters, straightforward control of the uniformity of the polymer system, the flexibility and versatility of the soft ferroelectrets, and a large potential for device applications, e.g., in the areas of biomedicine, communications, production engineering, sensor systems, and environmental monitoring.
Abstract:
The effects of chromium or nickel oxide additions on the composition of Portland clinker were investigated by X-ray powder diffraction associated with pattern analysis by the Rietveld method. The co-processing of industrial waste in Portland cement plants is an alternative solution to the problem of final disposal of hazardous waste. Industrial waste containing chromium or nickel is hazardous and is difficult to dispose of. It was observed that in concentrations up to 1% in mass, the chromium or nickel oxide additions do not cause significant alterations in Portland clinker composition. (C) 2008 International Centre for Diffraction Data.
Abstract:
Prince Maximilian zu Wied's great exploration of coastal Brazil in 1815-1817 resulted in important collections of reptiles, amphibians, birds, and mammals, many of which were new species later described by Wied himself. The bulk of his collection was purchased for the American Museum of Natural History in 1869, although many "type specimens" had disappeared earlier. Wied carefully identified his localities but did not designate type specimens or type localities, which are taxonomic concepts that were not yet established. Information and manuscript names on a fraction (17 species) of his Brazilian reptiles and amphibians were transmitted by Wied to Prof. Heinrich Rudolf Schinz at the University of Zurich. Schinz included these species (credited to their discoverer "Princ. Max.") in the second volume of Das Thierreich ... (1822). Most are junior objective synonyms of names published by Wied. However, six of the 17 names used by Schinz predate Wied's own publications. Three were manuscript names never published by Wied because he determined the species to be previously known. (1) Lacerta vittata Schinz, 1822 (a nomen oblitum) = Lacerta striata sensu Wied (a misidentification, non Linnaeus nec sensu Merrem) = Kentropyx calcarata Spix, 1825, herein qualified as a nomen protectum. (2) Polychrus virescens Schinz, 1822 = Lacerta marmorata Linnaeus, 1758 (now Polychrus marmoratus). (3) Scincus cyanurus Schinz, 1822 (a nomen oblitum) = Gymnophthalmus quadrilineatus sensu Wied (a misidentification, non Linnaeus nec sensu Merrem) = Micrablepharus maximiliani (Reinhardt and Lutken, "1861" [1862]), herein qualified as a nomen protectum. Qualifying Scincus cyanurus Schinz, 1822, as a nomen oblitum also removes the problem of homonymy with the later-named Pacific skink Scincus cyanurus Lesson (= Emoia cyanura). The remaining three names used by Schinz are senior objective synonyms that take priority over Wied's names.
(4) Bufo cinctus Schinz, 1822, is senior to Bufo cinctus Wied, 1823; both, however, are junior synonyms of Bufo crucifer Wied, 1821 = Chaunus crucifer (Wied). (5) Agama picta Schinz, 1822, is senior to Agama picta Wied, 1823, requiring a change of authorship for this poorly known species, to be known as Enyalius pictus (Schinz). (6) Lacerta cyanomelas Schinz, 1822, predates Teius cyanomelas Wied, 1824 (1822-1831); both are nomina oblita. Wied's illustration and description show cyanomelas as apparently conspecific with the recently described but already well-known Cnemidophorus nativo Rocha et al., 1997, which is the valid name because of its qualification herein as a nomen protectum. The preceding specific name cyanomelas (as corrected in an errata section) is misspelled in several ways in different copies of Schinz's original description ("cyanom las," "cyanomlas," and "cyanom"). Loosening, separation, and final loss of the last three letters of movable type in the printing chase probably account for the variant misspellings.
Abstract:
Background Data and Objective: There is anecdotal evidence that low-level laser therapy (LLLT) may affect the development of muscular fatigue, minor muscle damage, and recovery after heavy exercise. Although manufacturers claim that cluster probes (LEDT) may be more effective than single-diode lasers in clinical settings, there is a lack of head-to-head comparisons in controlled trials. This study was designed to compare the effect of single-diode LLLT and cluster LEDT before heavy exercise. Materials and Methods: This was a randomized, placebo-controlled, double-blind crossover study. Young male volleyball players (n = 8) were enrolled and asked to perform three Wingate cycle tests after 4 x 30 sec LLLT or LEDT pretreatment of the rectus femoris muscle with either (1) an active LEDT cluster probe (660/850 nm, 10/30 mW), (2) a placebo cluster probe with no output, or (3) a single-diode 810-nm, 200-mW laser. Results: The active LEDT group had significantly decreased post-exercise creatine kinase (CK) levels (-18.88 +/- 41.48 U/L) compared to the placebo cluster group (26.88 +/- 15.18 U/L) (p < 0.05) and the active single-diode laser group (43.38 +/- 32.90 U/L) (p < 0.01). None of the pre-exercise LLLT or LEDT protocols enhanced performance on the Wingate tests or reduced post-exercise blood lactate levels. However, a non-significant tendency toward lower post-exercise blood lactate levels in the treated groups should be explored further. Conclusion: In this experimental setup, only the active LEDT probe decreased post-exercise CK levels after the Wingate cycle test. Neither performance nor blood lactate levels were significantly affected by this protocol of pre-exercise LEDT or LLLT.
Abstract:
An (n, d)-expander is a graph G = (V, E) such that for every X ⊆ V with |X| ≤ 2n - 2 we have |Γ_G(X)| ≥ (d + 1)|X|. A tree T is small if it has at most n vertices and maximum degree at most d. Friedman and Pippenger (1987) proved that any (n, d)-expander contains every small tree. However, their elegant proof does not seem to yield an efficient algorithm for obtaining the tree. In this paper, we give an alternative result that does admit a polynomial-time algorithm for finding the immersion of any small tree in subgraphs G of (N, D, λ)-graphs Λ, as long as G contains a positive fraction of the edges of Λ and λ/D is small enough. In several applications of the Friedman-Pippenger theorem, including the ones in the original paper of those authors, the (n, d)-expander G is a subgraph of an (N, D, λ)-graph as above. Therefore, our result suffices to provide efficient algorithms for such previously non-constructive applications. As an example, we discuss a recent result of Alon, Krivelevich, and Sudakov (2007) concerning the embedding of nearly spanning bounded-degree trees, the proof of which makes use of the Friedman-Pippenger theorem. We also show a construction, inspired by Wigderson-Zuckerman expander graphs, for which any sufficiently dense subgraph contains all trees of sizes and maximum degrees achieving essentially optimal parameters. Our algorithmic approach is based on a reduction of the tree embedding problem to a certain on-line matching problem for bipartite graphs, solved by Aggarwal et al. (1996).