30 results for Computational power
Abstract:
In this article we explore the computational power of NVIDIA graphics processing units (GPUs) for cryptography using CUDA (Compute Unified Device Architecture) technology. CUDA eases general-purpose computing by exposing the parallel processing available in GPUs. To this end, the NVIDIA GPU architectures and CUDA are presented, along with cryptography concepts. Furthermore, we compare the CPU versions of the cryptographic algorithms Advanced Encryption Standard (AES) and Message-Digest Algorithm 5 (MD5) with their parallel versions written in CUDA. © 2011 AISTI.
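Since the abstract compares serial CPU cryptography against data-parallel versions, a minimal timing sketch of that comparison may help; it hashes a batch of messages serially and then with process-level parallelism as a stand-in for the paper's CUDA kernels (workload size and all names here are hypothetical, not from the paper):

    import hashlib, os, time
    from concurrent.futures import ProcessPoolExecutor

    def md5_digest(msg: bytes) -> bytes:
        # One MD5 digest; a CUDA version would run roughly one thread per message.
        return hashlib.md5(msg).digest()

    if __name__ == "__main__":
        messages = [os.urandom(64) for _ in range(100_000)]  # synthetic workload

        t0 = time.perf_counter()
        serial = [md5_digest(m) for m in messages]           # serial CPU baseline
        t1 = time.perf_counter()

        with ProcessPoolExecutor() as pool:                  # data-parallel stand-in
            parallel = list(pool.map(md5_digest, messages, chunksize=4096))
        t2 = time.perf_counter()

        assert serial == parallel                            # same digests either way
        print(f"serial {t1 - t0:.3f}s  parallel {t2 - t1:.3f}s")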
Abstract:
This paper presents numerical modeling of a turbulent natural gas flow through a non-premixed industrial burner of a slab reheating furnace. The furnace is equipped with diffusion side swirl burners capable of utilizing natural gas or coke oven gas alternatively through the same nozzles. The study is focused on one of the burners of the preheating zone. Computational Fluid Dynamics simulation has been used to predict the burner orifice turbulent flow. Flow rate and pressure upstream of the burner were validated against experimental measurements. The outcomes of the numerical modeling are analyzed for the different turbulence models in terms of pressure drop, velocity profiles, and orifice discharge coefficient. The standard, RNG, and Realizable k-epsilon models and the Reynolds Stress Model (RSM) have been used. The main purpose of the numerical investigation is to determine the turbulence model that most consistently reproduces the experimental results of the flow through an industrial non-premixed burner orifice. The comparisons between simulations indicate that all the models tested satisfactorily represent the experimental conditions. However, the Realizable k-epsilon model seems to be the most appropriate turbulence model, since it provides results quite similar to those of the RSM and RNG k-epsilon models while requiring only slightly more computational power than the standard k-epsilon model. (C) 2014 Elsevier Ltd. All rights reserved.
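The orifice discharge coefficient the study reports is conventionally the ratio of measured to ideal mass flow; a minimal sketch under the usual incompressible-flow assumption (the numbers below are placeholders, not measurements from the paper):

    import math

    def discharge_coefficient(m_dot, area, rho, dp):
        """Cd = actual mass flow / ideal mass flow through an orifice.

        Ideal (frictionless, incompressible) flow: m_ideal = A * sqrt(2 * rho * dp).
        """
        return m_dot / (area * math.sqrt(2.0 * rho * dp))

    # Placeholder values (kg/s, m^2, kg/m^3, Pa), not data from the paper:
    print(discharge_coefficient(m_dot=0.08, area=1.9e-3, rho=0.72, dp=2.4e3))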
Abstract:
The present paper introduces a new model of fuzzy neuron, one which increases the computational power of the artificial neuron, turning it also into a symbolic processing device. This model proposes that synapses be defined both symbolically and numerically, by means of the assignment of tokens to the presynaptic and postsynaptic neurons. The matching or concatenation compatibility between these tokens is used to decide on the possible connections among neurons of a given net. The strength of a compatible synapse is made dependent on the amount of available presynaptic and postsynaptic tokens. The symbolic and numeric processing capacity of the new fuzzy neuron is used here to build a neural net (JARGON) to disclose the knowledge existing in natural-language databases such as medical files, sets of interviews, and reports about engineering operations.
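The abstract gives no equations, so the following is only one plausible reading of the token-matching rule it describes: neurons connect when they hold compatible tokens, and the synaptic strength is limited by the scarcer token pool (all names and values are hypothetical):

    from dataclasses import dataclass

    @dataclass
    class Neuron:
        tokens: dict  # token label -> available amount

    def synapse_strength(pre: Neuron, post: Neuron) -> float:
        """Connect only on matching tokens; strength grows with available amounts."""
        shared = pre.tokens.keys() & post.tokens.keys()
        if not shared:
            return 0.0  # symbolically incompatible: no connection at all
        # One plausible reading: each shared token contributes up to the
        # scarcer of the presynaptic and postsynaptic amounts.
        return sum(min(pre.tokens[t], post.tokens[t]) for t in shared)

    a = Neuron(tokens={"fever": 3.0, "cough": 1.0})
    b = Neuron(tokens={"fever": 2.0, "fatigue": 4.0})
    print(synapse_strength(a, b))  # 2.0: only the shared "fever" token contributes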
Abstract:
Huge image collections have become available lately. In this scenario, the use of Content-Based Image Retrieval (CBIR) systems has emerged as a promising approach to support image searches. The objective of CBIR systems is to retrieve the most similar images in a collection, given a query image, by taking into account image visual properties such as texture, color, and shape. In these systems, the effectiveness of the retrieval process depends heavily on the accuracy of ranking approaches. Recently, re-ranking approaches have been proposed to improve the effectiveness of CBIR systems by taking into account the relationships among images. Re-ranking approaches consider the relationships among all images in a given dataset. These approaches typically demand a huge amount of computational power, which hampers their use in practical situations. On the other hand, these methods can be massively parallelized. In this paper, we propose to speed up the computation of the RL-Sim algorithm, a recently proposed image re-ranking approach, by using the computational power of Graphics Processing Units (GPUs). GPUs are emerging as relatively inexpensive parallel processors that are becoming available on a wide range of computer systems. We address the image re-ranking performance challenges by proposing a parallel solution designed to fit the computational model of GPUs. We conducted an experimental evaluation considering different implementations and devices. Experimental results demonstrate that significant performance gains can be obtained. Our approach achieves speedups of 7x over a serial implementation for the overall algorithm and of up to 36x on its core steps.
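RL-Sim's core step compares ranked lists; a serial sketch of one plausible pairwise measure (top-k intersection; the paper's exact measure may differ) makes clear why each (i, j) entry is independent and therefore GPU-friendly:

    import numpy as np

    def ranked_list_similarity(ranks: np.ndarray, k: int) -> np.ndarray:
        """Pairwise similarity between top-k ranked lists (|intersection| / k).

        ranks[i] holds image i's neighbors ordered by similarity. Every
        (i, j) entry depends only on two rows, so the double loop can be
        mapped one-thread-per-pair on a GPU.
        """
        n = ranks.shape[0]
        topk = [set(ranks[i, :k]) for i in range(n)]
        sim = np.zeros((n, n))
        for i in range(n):
            for j in range(n):
                sim[i, j] = len(topk[i] & topk[j]) / k
        return sim

    # Toy ranked lists for 4 images (each row: neighbor ids, best first):
    ranks = np.array([[1, 2, 3], [0, 2, 3], [3, 0, 1], [2, 0, 1]])
    print(ranked_list_similarity(ranks, k=2))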
Abstract:
Image-composition techniques, in which objects are extracted to compose a final scene, are widely used in applications ranging from photo montages to cinematographic productions. These techniques are called digital matting. They make it possible to decrease production costs, because the actor does not need to be filmed at the location where the final scene takes place. This feature also favors their use in programs made for digital television, which demands high image quality. Many digital matting algorithms use markings made on the images to delimit the foreground, the background, and the uncertainty areas. This marking is called a trimap, a triple map containing these three pieces of information. The trimap is typically produced from manual markings. In this project, methods were created for use in digital matting algorithms under time constraints and without human interaction, that is, an algorithm was created that generates the trimap automatically. The trimap can be generated either from the difference between an arbitrary background color and the foreground, or by using a depth map. A matting method was also created, based on Geodesic Matting (BAI; SAPIRO, 2009), with a processing time lower than that of the original. Aiming to improve the performance of the applications that generate the trimap and of the algorithms that generate the alphamap (the map that assigns a transparency value to each pixel of the image), allowing their use in applications with time constraints, the CUDA architecture was used, thereby taking advantage of the computational power and features of the GPGPU, which is massively parallel.
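A minimal sketch of the color-difference route to an automatic trimap described above (thresholds and names are hypothetical; the depth-map variant would substitute depth values for color distance):

    import numpy as np

    def trimap_from_background(frame: np.ndarray, bg_color: np.ndarray,
                               t_low: float = 20.0, t_high: float = 60.0) -> np.ndarray:
        """Trimap from per-pixel color distance to a known background color.

        0 = background, 255 = foreground, 128 = uncertainty band in between.
        Thresholds are placeholders; per-pixel independence is what makes
        this step a natural fit for CUDA.
        """
        dist = np.linalg.norm(frame.astype(float) - bg_color, axis=-1)
        trimap = np.full(dist.shape, 128, dtype=np.uint8)
        trimap[dist < t_low] = 0
        trimap[dist > t_high] = 255
        return trimap

    frame = np.random.randint(0, 256, (4, 4, 3))
    print(trimap_from_background(frame, bg_color=np.array([0, 255, 0])))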
Abstract:
Communities are present in physical, chemical, and biological systems, and their identification is fundamental for comprehending the behavior of these systems. Recently, the available data related to complex networks have grown exponentially, demanding more computational power. The Graphics Processing Unit (GPU) is a cost-effective alternative suitable for this purpose. We investigate its suitability for network science by proposing a GPU-based implementation of Newman's community detection algorithm. We show that the processing time of matrix multiplications grows more slowly with matrix size on GPUs than on CPUs. It was thus shown that GPU processing power is a viable solution for community identification simulations that demand high computational power. Our implementation was tested on an integrated biological network for the bacterium Escherichia coli.
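Newman's spectral method reduces community detection to dense linear algebra, which is the part that maps well to GPUs; a serial numpy sketch of the modularity matrix and its leading-eigenvector split (toy graph, not the E. coli network):

    import numpy as np

    def modularity_matrix(adj: np.ndarray) -> np.ndarray:
        """Newman's modularity matrix B = A - k k^T / (2m).

        The signs of the leading eigenvector of B split the network into
        two communities; the heavy step is dense matrix algebra.
        """
        k = adj.sum(axis=1)          # node degrees
        two_m = k.sum()              # twice the edge count
        return adj - np.outer(k, k) / two_m

    # Toy graph: two triangles joined by a single bridge edge.
    adj = np.array([[0, 1, 1, 0, 0, 0], [1, 0, 1, 0, 0, 0], [1, 1, 0, 1, 0, 0],
                    [0, 0, 1, 0, 1, 1], [0, 0, 0, 1, 0, 1], [0, 0, 0, 1, 1, 0]], float)
    eigvals, eigvecs = np.linalg.eigh(modularity_matrix(adj))
    print(np.sign(eigvecs[:, -1]))   # community labels from the leading eigenvector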
Abstract:
Technologies are developing rapidly, but some of those present in computers, such as their processing capacity, are reaching their physical limits. It falls to quantum computation to offer solutions to these limitations and to the issues that may arise. In the field of information security, encryption is of paramount importance, which motivates the development of quantum methods in place of classical ones, given the computational power offered by quantum computing. In the quantum world, physical states are interrelated, giving rise to the phenomenon called entanglement. This study presents both a theoretical essay on quantum mechanics, computing, information, cryptography, and quantum entropy, and some simulations, implemented in the C language, of the effects of the entropy of entanglement of photons in a data transmission, using the Von Neumann entropy and the Tsallis entropy.
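The two entropies named above have standard closed forms over the eigenvalues of a density matrix; a small sketch (in Python rather than the paper's C) evaluating both for the reduced state of a maximally entangled photon pair:

    import numpy as np

    def von_neumann_entropy(rho: np.ndarray) -> float:
        """S(rho) = -Tr(rho log rho), via the eigenvalues of the density matrix."""
        p = np.linalg.eigvalsh(rho)
        p = p[p > 1e-12]                      # drop numerically-zero eigenvalues
        return float(-np.sum(p * np.log(p)))

    def tsallis_entropy(rho: np.ndarray, q: float) -> float:
        """S_q(rho) = (1 - Tr(rho^q)) / (q - 1); recovers Von Neumann as q -> 1."""
        p = np.linalg.eigvalsh(rho)
        p = p[p > 1e-12]
        return float((1.0 - np.sum(p ** q)) / (q - 1.0))

    # Reduced state of one photon from a maximally entangled pair: I/2.
    rho = np.eye(2) / 2.0
    print(von_neumann_entropy(rho))   # log 2, about 0.693
    print(tsallis_entropy(rho, q=2))  # 1 - 1/2 = 0.5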
Abstract:
Modal analysis is widely approached in the classic theory of power systems modelling. This technique is also applied to model multiconductor transmission lines and their self and mutual electrical parameters. However, this methodology has some particularities and inaccuracies for specific applications, which are not clearly described in the technical literature. This study provides a brief review of modal decoupling applied to transmission line digital models; thereafter, a novel and simplified computational routine is proposed to overcome possible errors introduced by modal decoupling into the simulation/modelling computational algorithm. © The Institution of Engineering and Technology 2013.
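As a reference point for the review, here is a minimal sketch of classic modal decoupling, diagonalizing the ZY product; the convention Ti = inv(Tv).T is one common choice, and the line parameters below are placeholders rather than values from the study:

    import numpy as np

    def modal_decoupling(Z, Y):
        """Classic modal decoupling of a multiconductor line.

        Tv diagonalizes the ZY product (voltage modes). With the current
        transformation Ti = inv(Tv).T, the modal impedance and admittance
        come out diagonal for distinct eigenvalues (near-diagonal for
        strongly untransposed lines).
        """
        _, Tv = np.linalg.eig(Z @ Y)
        Ti = np.linalg.inv(Tv).T
        Zm = np.linalg.inv(Tv) @ Z @ Ti
        Ym = np.linalg.inv(Ti) @ Y @ Tv
        return Zm, Ym, Tv

    # Placeholder 2-conductor parameters (ohm/km, S/km), not from the paper:
    Z = np.array([[0.3 + 1.0j, 0.1 + 0.4j], [0.1 + 0.4j, 0.3 + 1.0j]])
    Y = np.array([[3e-6j, -5e-7j], [-5e-7j, 3e-6j]])
    Zm, Ym, Tv = modal_decoupling(Z, Y)
    print(np.round(Zm, 6))  # off-diagonal terms vanish (to rounding)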
Abstract:
A neural approach to solving the economic load dispatch problem in power systems is presented in this paper. Systems based on artificial neural networks have high computational rates due to the use of a massive number of simple processing elements and the high degree of connectivity between these elements. The ability of neural networks to realize some complex nonlinear functions makes them attractive for system optimization. The neural networks applied to economic load dispatch reported in the literature sometimes fail to converge towards feasible equilibrium points. The internal parameters of the modified Hopfield network developed here are computed using the valid-subspace technique. These parameters guarantee the convergence of the network to feasible equilibrium points; a solution to the economic load dispatch problem corresponds to an equilibrium point of the network. Simulation results and a comparative analysis with respect to other neural approaches are presented to illustrate the efficiency of the proposed approach.
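The valid-subspace technique confines the network dynamics to the feasible set; the following sketch mimics that role with projected gradient descent on quadratic generation costs (it is not the paper's Hopfield network itself, and the cost coefficients are placeholders):

    import numpy as np

    def dispatch_by_projection(a, b, demand, steps=300, lr=20.0):
        """Gradient descent on quadratic costs sum(a*P^2 + b*P), with each
        step re-projected onto the 'valid subspace' sum(P) = demand, so the
        iterate stays feasible throughout, as in the valid-subspace idea.
        """
        n = len(a)
        P = np.full(n, demand / n)                # feasible starting point
        for _ in range(steps):
            P = P - lr * (2.0 * a * P + b)        # cost-gradient step
            P = P + (demand - P.sum()) / n        # project back onto sum(P) = demand
        return P

    # Placeholder cost coefficients for three generators:
    a = np.array([0.008, 0.010, 0.012])
    b = np.array([7.0, 6.5, 7.2])
    P = dispatch_by_projection(a, b, demand=500.0)
    print(np.round(P, 2), round(P.sum(), 1))      # equal-incremental-cost split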
Detection and Identification of Abnormalities in Customer Consumptions in Power Distribution Systems
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
This paper proposes the application of computational intelligence techniques to assist in complex problems concerning lightning in transformers. A neural tool is presented to estimate the currents related to lightning in a transformer. ATP generated the training vectors. The input variables used in the Artificial Neural Networks (ANNs) were the wave front time, the wave tail time, and the voltage variation rate; the output variable is the maximum current in the secondary of the transformer. These parameters can define the behavior and severity of lightning. Based on these concepts and on the results obtained, it can be verified that the overvoltages at the secondary of the transformer are also affected by the discharge waveform, in a way similar to the primary side. By using the tool developed, the high-voltage process in distribution transformers can be mapped and estimated with more precision, aiding the transformer design process, minimizing empiricism and evaluation errors, and contributing to minimizing the failure rate of transformers. (C) 2011 Elsevier Ltd. All rights reserved.
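The abstract does not give the network topology or training algorithm, so the following is only a generic sketch of the mapping it describes: a small MLP regressing the peak secondary current from the three input variables, trained on a synthetic stand-in for the ATP vectors:

    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic training set standing in for the ATP-generated vectors:
    # columns = wave front time, wave tail time, voltage variation rate.
    X = rng.uniform(0.0, 1.0, (200, 3))
    y = (2.0 * X[:, 0] - 1.0 * X[:, 1] + 0.5 * X[:, 2]).reshape(-1, 1)

    # One-hidden-layer MLP trained by plain gradient descent (illustrative only).
    W1 = rng.normal(0, 0.5, (3, 8)); b1 = np.zeros(8)
    W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
    lr = 0.1
    for _ in range(2000):
        h = np.tanh(X @ W1 + b1)              # hidden activations
        pred = h @ W2 + b2                    # estimated peak secondary current
        err = pred - y
        dW2 = h.T @ err / len(X); db2 = err.mean(0)
        dh = (err @ W2.T) * (1 - h ** 2)      # backprop through tanh
        dW1 = X.T @ dh / len(X); db1 = dh.mean(0)
        W1 -= lr * dW1; b1 -= lr * db1; W2 -= lr * dW2; b2 -= lr * db2
    print(float(np.mean((pred - y) ** 2)))    # training MSE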
Abstract:
This work presents a methodology to analyze the first-swing transient stability of electric power systems using a neural network based on the adaptive resonance theory (ART) architecture, called the Euclidean ARTMAP neural network. ART architectures present plasticity and stability characteristics, which are very important for training and for executing the analysis quickly. The Euclidean ARTMAP version provides more accurate and faster solutions when compared to the fuzzy ARTMAP configuration. Three steps are necessary for the network to work: training, analysis, and continuous training. The training step requires much effort (processing), while the analysis is executed almost without computational effort. The proposed network allows several topologies of the electric system to be handled at the same time; therefore, it is an alternative for real-time transient stability analysis of electric power systems. To illustrate the proposed neural network, an application is presented for a multi-machine electric power system composed of 10 synchronous machines, 45 buses, and 73 transmission lines. (C) 2010 Elsevier B.V. All rights reserved.
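One illustrative reading of the ART mechanics mentioned above, not the paper's actual network: categories are stored prototypes, and a Euclidean vigilance test decides between resonance (the cheap analysis step) and creating a new category (the continuous-training step). All values below are hypothetical:

    import numpy as np

    def art_classify(x, prototypes, labels, rho):
        """Euclidean ART-style category choice (illustrative reading only).

        Picks the nearest stored prototype and accepts it when the distance
        passes a vigilance test; otherwise reports a new category, which is
        how ART stays plastic while keeping old knowledge stable.
        """
        d = np.linalg.norm(prototypes - x, axis=1)
        best = int(np.argmin(d))
        if d[best] <= rho:                # vigilance: close enough to resonate
            return labels[best]
        return "new-category"             # would be learned in continuous training

    protos = np.array([[0.9, 0.1], [0.2, 0.8]])   # placeholder stability signatures
    labels = ["stable", "unstable"]
    print(art_classify(np.array([0.85, 0.15]), protos, labels, rho=0.3))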
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
This paper presents for the first time how to easily incorporate FACTS devices in an optimal active power flow model such that an efficient interior-point method may be applied. The optimal active power flow model is based on a network flow approach instead of the traditional nodal formulation, which allows the use of an efficient predictor-corrector interior-point method sped up by sparsity exploitation. The mathematical equivalence between the network flow and nodal models is addressed, as well as the computational advantages of the former considering the solution by interior-point methods. The adequacy of the network flow model for representing FACTS devices is presented and illustrated on a small 5-bus system. The model was implemented using Matlab, and its performance was evaluated with the 3,397-bus and 4,075-branch Brazilian power system, which shows the robustness and efficiency of the proposed formulation. The numerical results also indicate an efficient tool for optimal active power flow that is suitable for incorporating FACTS devices.
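In a network flow model, dispatch is stated as flow conservation per bus plus branch limits; a toy instance solved with an off-the-shelf LP solver gives the flavor (the paper applies a predictor-corrector interior-point method with sparsity exploitation; buses, limits, and costs here are made up):

    from scipy.optimize import linprog

    # Tiny network-flow dispatch sketch (3 buses, 3 branches), linear costs.
    # Variables: [g1, g2, f12, f13, f23]; loads at buses 1..3: 0, 50, 100 MW.
    # Conservation at each bus: generation + inflow - outflow = load.
    c = [10.0, 15.0, 0.0, 0.0, 0.0]        # $/MWh for g1, g2; flows cost nothing
    A_eq = [
        [1, 0, -1, -1,  0],                # bus 1: g1 - f12 - f13 = 0
        [0, 1,  1,  0, -1],                # bus 2: g2 + f12 - f23 = 50
        [0, 0,  0,  1,  1],                # bus 3: f13 + f23 = 100
    ]
    b_eq = [0.0, 50.0, 100.0]
    bounds = [(0, 120), (0, 120),          # generator limits
              (-80, 80), (-80, 80), (-80, 80)]  # branch flow limits
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
    print(res.x)                           # dispatch plus branch flows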
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)