833 results for Parallel computers
Abstract:
This paper proposes the use of differential evolution (DE), a global search technique inspired by evolutionary theory, to find the parameters required to achieve optimum dynamic response for the parallel operation of inverters with no interconnection among the controllers. To reach this goal, the system is modeled so that the slopes of the P-omega and Q-V curves are the parameters to be tuned. When properly tuned, these parameters place the system's eigenvalues in positions that assure stability and an oscillation-free dynamic response with minimum settling time. The paper describes the modeling approach and provides an overview of the motivation for the optimization and a description of the DE technique. Simulation and experimental results are also presented, showing the viability of the proposed method.
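As a hedged sketch of the idea above: the DE/rand/1/bin loop below is a generic textbook variant, not the authors' implementation, and `settling_cost` uses an invented two-state linear system as a stand-in for the paper's inverter droop model (the variables `m` and `n` play the role of the P-omega and Q-V slopes).

```python
import numpy as np

rng = np.random.default_rng(0)

def de_minimize(cost, bounds, pop_size=20, F=0.8, CR=0.9, iters=200):
    """Generic DE/rand/1/bin minimizer over box constraints."""
    lo, hi = bounds[:, 0], bounds[:, 1]
    dim = len(bounds)
    pop = lo + rng.random((pop_size, dim)) * (hi - lo)
    fit = np.array([cost(x) for x in pop])
    for _ in range(iters):
        for i in range(pop_size):
            others = [j for j in range(pop_size) if j != i]
            a, b, c = pop[rng.choice(others, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)   # mutation
            cross = rng.random(dim) < CR                # binomial crossover
            cross[rng.integers(dim)] = True             # keep at least one mutant gene
            trial = np.where(cross, mutant, pop[i])
            f = cost(trial)
            if f <= fit[i]:                             # greedy selection
                pop[i], fit[i] = trial, f
    i_best = int(np.argmin(fit))
    return pop[i_best], fit[i_best]

def settling_cost(x):
    """Invented stand-in: slowest decay rate of a toy 2x2 'droop' model.

    Minimizing the largest real part of the eigenvalues pushes the poles
    to the left, i.e. towards a shorter settling time.
    """
    m, n = x
    A = np.array([[-m, 1.0], [-1.0, -n]])
    return np.linalg.eigvals(A).real.max()

bounds = np.array([[0.01, 2.0], [0.01, 2.0]])   # illustrative slope ranges
slopes, worst_decay = de_minimize(settling_cost, bounds)
```

For this toy model the poles' real part is -(m+n)/2, so DE should drive both slopes toward their upper bound of 2, giving a decay rate near -2.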
Abstract:
Digital image processing is a field that demands great processing capacity. It is therefore relevant to implement software that distributes the processing across several nodes hosted on computers belonging to the same network. This work specifically discusses distributed algorithms for the compression and expansion of images using the discrete cosine transform. The results show that the savings in processing time obtained by the parallel algorithms, in comparison to their sequential equivalents, depend on the resolution of the image and the complexity of the calculations involved; that is, efficiency is greater the longer the processing time is relative to the time spent on communication between the network nodes.
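A hedged sketch of the compression/expansion round trip described above: each 8x8 block is transformed with an orthonormal 2-D DCT, high-frequency coefficients are discarded, and blocks are processed independently. A thread pool stands in for the network nodes, and the block size and `keep` parameter are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

N = 8  # block size (assumption)

def dct_matrix(n=N):
    """Orthonormal DCT-II matrix: C @ x computes the 1-D DCT of x."""
    k = np.arange(n)
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] /= np.sqrt(2.0)
    return C

C = dct_matrix()

def compress_block(block, keep=4):
    """2-D DCT of one block, keeping only the keep x keep low-frequency coefficients."""
    coeff = C @ block @ C.T
    mask = np.zeros_like(coeff)
    mask[:keep, :keep] = 1.0
    return coeff * mask

def expand_block(coeff):
    """Inverse 2-D DCT (transpose inverts the orthonormal transform)."""
    return C.T @ coeff @ C

def roundtrip(image, keep=4, workers=4):
    """Compress and expand every block; blocks are independent, so they
    could just as well be farmed out to networked machines."""
    h, w = image.shape
    blocks = [image[r:r + N, c:c + N] for r in range(0, h, N) for c in range(0, w, N)]
    with ThreadPoolExecutor(max_workers=workers) as ex:  # stand-in for network nodes
        out = list(ex.map(lambda b: expand_block(compress_block(b, keep)), blocks))
    rec = np.zeros_like(image)
    i = 0
    for r in range(0, h, N):
        for c in range(0, w, N):
            rec[r:r + N, c:c + N] = out[i]
            i += 1
    return rec
```

Keeping all 8x8 coefficients reconstructs the image exactly; smaller `keep` values trade accuracy for compression.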
Abstract:
This paper presents a consistent and concise analysis of the free and forced vibration of a mass supported by a parallel combination of a spring and an elastically supported damper (a Zener model). The results are presented in a compact form and the physical behaviour of the system is emphasised. This system is very similar to the conventional single-degree-of-freedom (SDOF) system (Voigt model), but the dynamics can be quite different depending on the system parameters. The usefulness of the additional spring in series with the damper is investigated, and optimum damping values for the system subject to different types of excitation are determined and compared. There are three roots to the characteristic equation for the Zener model; two are complex conjugates and the third is purely real. It is shown that it is not possible to achieve critical damping of the complex roots unless the additional stiffness is at least eight times that of the main spring. For a harmonically excited system, there are some possible advantages in using the additional spring when the force transmitted to the base is of interest, but when the displacement response of the system is of interest the benefits are marginal. It is shown that the additional spring affords no advantages when the system is excited by white noise. (c) 2007 Elsevier Ltd. All rights reserved.
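The three roots mentioned above come from a cubic characteristic equation. Assuming the standard Zener arrangement (mass m on main spring k1, in parallel with a series branch of spring k2 and damper c; this is my reading of the text, not the paper's notation), the cubic can be derived and its root structure checked numerically:

```python
import numpy as np

def zener_roots(m, k1, k2, c):
    """Roots of the assumed Zener-model characteristic equation.

    Equations of motion (y is the massless node between k2 and c):
        m*x'' = -k1*x - k2*(x - y)
        c*y'  =  k2*(x - y)
    Substituting exp(s*t) and eliminating y gives the cubic
        m*c*s**3 + m*k2*s**2 + c*(k1 + k2)*s + k1*k2 = 0
    """
    return np.roots([m * c, m * k2, c * (k1 + k2), k1 * k2])

r = zener_roots(m=1.0, k1=1.0, k2=1.0, c=0.5)
real_roots = [s for s in r if abs(s.imag) < 1e-9]
```

For these illustrative parameters the roots split, as the abstract states, into one purely real root and a complex-conjugate pair, all with negative real parts (the system is stable).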
Abstract:
The next generation of computers is expected to consist of architectures with multiple processors and/or multicore processors. This raises challenges related to interconnection features, operating frequency, on-chip area, power dissipation, performance and programmability. Networks-on-chip are considered the ideal interconnection and communication mechanism for this type of architecture, owing to their scalability, reusability and intrinsic parallelism. Network-on-chip communication is accomplished by transmitting packets that carry data and instructions representing requests and responses between the processing elements interconnected by the network. Packets are transmitted as in a pipeline between the routers of the network, from the source to the destination of the communication, even allowing simultaneous communications between different source-destination pairs. Building on this fact, it is proposed to transform the entire communication infrastructure of the network-on-chip (its routing, arbitration and storage mechanisms) into a high-performance parallel processing system. In this proposal, the packets are formed by the instructions and data that represent the applications, and these instructions are executed by the routers as the packets are transmitted, exploiting the pipeline and the parallelism of the communications. Traditional processors are not used; there are only simple cores that control access to memory. An implementation of this idea, called IPNoSys (Integrated Processing NoC System), has its own programming model and a routing algorithm that guarantees the execution of all instructions in the packets while preventing deadlock, livelock and starvation. The architecture provides mechanisms for input and output, interrupts and operating system support.
As a proof of concept, a programming environment and a simulator for this architecture were developed in SystemC, allowing various parameters to be configured and several results to be obtained for its evaluation.
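As a purely illustrative toy (the instruction set, packet format and names below are invented, not the actual IPNoSys programming model), the idea of routers executing a packet's instructions while forwarding it hop by hop can be sketched as:

```python
def route_and_execute(packet, hops):
    """Each router on the path pops and executes one instruction.

    packet: {"stack": [operands], "program": [(op, arg), ...]}
    hops:   number of routers between source and destination
    """
    OPS = {
        "push": lambda st, a: st.append(a),
        "add":  lambda st, a: st.append(st.pop() + st.pop()),
        "mul":  lambda st, a: st.append(st.pop() * st.pop()),
    }
    program = list(packet["program"])
    stack = list(packet["stack"])
    for _ in range(hops):
        if not program:
            break                  # remaining hops just forward the result
        op, arg = program.pop(0)   # one instruction per router, pipeline-style
        OPS[op](stack, arg)
    return {"stack": stack, "program": program}

# (2 + 3) * 4 evaluated along a 5-hop path
pkt = {"stack": [],
       "program": [("push", 2), ("push", 3), ("add", None),
                   ("push", 4), ("mul", None)]}
result = route_and_execute(pkt, hops=5)
```

After enough hops the program is fully consumed and only the result travels on to the destination.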
Abstract:
The visualization of three-dimensional (3D) images is increasingly being used in the area of medicine, helping physicians diagnose disease. The advances achieved in the scanners used for the acquisition of these 3D exams, such as computerized tomography (CT) and magnetic resonance imaging (MRI), enable the generation of images with higher resolutions, thus generating much larger files. Currently, the rendering of these images is computationally expensive, demanding a high-end computer for the task. Direct remote access to these images through the internet is also inefficient, since all the images have to be transferred to the user's equipment before the 3D visualization process can start. With these problems in mind, this work proposes and analyses a solution for the remote rendering of 3D medical images, called Remote Rendering (RR3D). In RR3D, the whole rendering process is performed on a server, or a cluster of servers, with high computational power, and only the resulting image is transferred to the client, while still allowing the client to perform operations such as rotation, zoom, etc. The solution was developed using web services written in Java and an architecture based on the scientific visualization package ParaView, the ParaViewWeb framework and the PACS server DCM4CHEE. The solution was tested in two scenarios, in which the rendering process was performed by a server with graphics hardware (GPU) and by a server without GPUs. In the scenario without GPUs, the solution was executed in parallel with varying numbers of cores (processing units) dedicated to it. In order to compare our solution to other medical visualization applications, a third scenario was used, in which the rendering process was done locally. In all three scenarios, the solution was tested at different network speeds.
The solution satisfactorily solved the problem of the delay in the transfer of DICOM files, while allowing the use of low-end computers, and even tablets and smartphones, as clients for visualizing the exams.
Abstract:
The development of computers and algorithms capable of increasingly accurate and rapid calculations, together with the theoretical foundation provided by quantum mechanics, has turned computer simulation into a valuable research tool. The importance of such a tool is due to its success in describing the physical and chemical properties of materials. One way of modifying the electronic properties of a given material is by applying an electric field. These effects are interesting in nanocones because their stability and geometric structure make them promising candidates for electron emission devices. In our study we performed first-principles calculations based on density functional theory as implemented in the SIESTA code. We investigated aluminum nitride (AlN), boron nitride (BN) and carbon (C) nanocones subjected to external electric fields parallel and perpendicular to their main axis. We discuss stability in terms of formation energy, using the chemical potential approach. We also analyze the electronic properties of these nanocones and show that in some cases the perpendicular electric field provokes a greater gap reduction than the parallel field.
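The chemical-potential bookkeeping behind the formation-energy discussion is simple arithmetic; the sketch below uses made-up total energies and chemical potentials purely to show the formula, not values from the SIESTA calculations.

```python
def formation_energy(e_total, composition, potentials):
    """E_f = E_total - sum_i n_i * mu_i (all energies in eV)."""
    return e_total - sum(n * potentials[species]
                         for species, n in composition.items())

# Hypothetical AlN nanocone with 30 Al and 30 N atoms (illustrative numbers)
mu = {"Al": -57.0, "N": -270.0}   # invented chemical potentials, eV/atom
e_f = formation_energy(-9825.0, {"Al": 30, "N": 30}, mu)
```

A lower (more negative) formation energy indicates a more stable cone, which is how structures of different stoichiometries can be compared on an equal footing.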
Abstract:
The analysis of alcoholic beverages for the important carcinogenic contaminant ethyl carbamate is very time-consuming and expensive. Due to possible matrix interferences, sample cleanup using a diatomaceous earth (Extrelut) column is required prior to gas chromatographic and mass spectrometric measurement. A limiting step in this process is the rotary evaporation of the eluate containing the analyte in organic solvents, which is currently conducted manually and requires approximately 20-30 min per sample. This paper introduces the use of a parallel evaporation device for ethyl carbamate analysis, which allows the simultaneous evaporation of 12 samples to a specified residual volume without manual intervention. A more efficient and less expensive analysis is therefore possible. The method validation showed no differences between the fully automated parallel evaporation and the manual operation. The applicability was proven by analyzing authentic spirit samples from Germany, Canada and Brazil. It is interesting to note that Brazilian cachaças had a relatively high incidence of ethyl carbamate contamination (55% of all samples were above 0.15 mg/l), which may be of public health relevance and requires further evaluation.
Abstract:
A thorough study of the thermal performance of multipass parallel cross-flow and counter-cross-flow heat exchangers has been carried out by applying a new numerical procedure. According to this procedure, the heat exchanger is discretized into small elements following the tube-side fluid circuits. Each element is itself a one-pass mixed-unmixed cross-flow heat exchanger. Simulated results have been validated through comparison with results from analytical solutions for one- to four-pass parallel cross-flow and counter-cross-flow arrangements. Very accurate results have been obtained over wide ranges of NTU (number of transfer units) and C* (heat capacity rate ratio) values. New effectiveness data for the aforementioned configurations and a higher number of tube passes are presented, along with data for a complex flow configuration proposed elsewhere. The proposed procedure constitutes a useful research tool for both theoretical and experimental studies of the thermal performance of cross-flow heat exchangers.
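Each discretization element above is a one-pass mixed-unmixed cross-flow exchanger, whose effectiveness has standard closed forms in the effectiveness-NTU method. The sketch below quotes those textbook relations (they are common heat-transfer reference material, not reproduced from the paper):

```python
import math

def eps_crossflow_cmax_mixed(ntu, cr):
    """Effectiveness of a one-pass cross-flow element, Cmax mixed / Cmin unmixed."""
    if cr == 0.0:
        return 1.0 - math.exp(-ntu)
    return (1.0 / cr) * (1.0 - math.exp(-cr * (1.0 - math.exp(-ntu))))

def eps_crossflow_cmin_mixed(ntu, cr):
    """Effectiveness of a one-pass cross-flow element, Cmin mixed / Cmax unmixed."""
    if cr == 0.0:
        return 1.0 - math.exp(-ntu)
    return 1.0 - math.exp(-(1.0 - math.exp(-cr * ntu)) / cr)
```

Both expressions reduce to eps = 1 - exp(-NTU) in the C* -> 0 limit, which is a quick sanity check for any implementation that chains such elements along the tube circuits.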
Abstract:
In this paper, we consider the extension of the Brandt theory of elasticity of the Abrikosov flux-line lattice in a uniaxial superconductor to the case of parallel flux lines. The results show that the effect of the anisotropy is to rescale the components of the wave vector k, the magnetic field, and the order-parameter wave-vector cutoff by a geometrical parameter previously introduced by Kogan.
Abstract:
An approach for solving reactive power planning problems is presented, based on binary search techniques and on a special heuristic to obtain a discrete solution. Two versions were developed: one to run on conventional (sequential) computers and the other to run on a distributed-memory (hypercube) machine. The latter, parallel processing version employs an asynchronous programming model. Once the set of candidate buses has been defined, the program gives the location and size of the reactive sources needed (if any), in keeping with operating and security constraints.
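The binary-search idea can be sketched as below. The discrete bank sizes are illustrative, and `feasible` is a stand-in for the full power-flow and constraint evaluation; binary search presumes feasibility is monotone in the source size.

```python
def smallest_feasible(sizes, feasible):
    """Binary search for the smallest feasible size in an ascending list.

    Returns None when even the largest size fails the check.
    """
    lo, hi, best = 0, len(sizes) - 1, None
    while lo <= hi:
        mid = (lo + hi) // 2
        if feasible(sizes[mid]):
            best = sizes[mid]
            hi = mid - 1        # feasible: try a smaller (cheaper) source
        else:
            lo = mid + 1        # infeasible: need a larger source
    return best

banks = [0, 5, 10, 15, 20, 30, 50]   # candidate bank sizes in Mvar (illustrative)
needed = 12.0                        # toy voltage-support requirement
chosen = smallest_feasible(banks, lambda q: q >= needed)
```

In the real planner the check would run a power flow and test the operating and security constraints at each candidate bus instead of the simple threshold used here.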
Abstract:
The simulated annealing optimization technique has been successfully applied to a number of electrical engineering problems, including transmission system expansion planning. The method is general in the sense that it does not assume any particular property of the problem being solved, such as linearity or convexity. Moreover, it has the ability to provide solutions arbitrarily close to an optimum (i.e. it is asymptotically convergent) as the cooling process slows down. The drawback of the approach is the computational burden: finding optimal solutions may be extremely expensive in some cases. This paper presents a Parallel Simulated Annealing (PSA) algorithm for solving the long-term transmission network expansion planning problem. A strategy that does not affect the basic convergence properties of the sequential simulated annealing algorithm has been implemented and tested. The paper investigates the conditions under which the parallel algorithm is most efficient. The parallel implementations have been tested on three example networks: a small 6-bus network and two complex real-life networks. Excellent results are reported in the test section of the paper: in addition to reductions in computing times, the proposed Parallel Simulated Annealing algorithm has shown significant improvements in solution quality for the largest of the test networks.
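A hedged sketch of the sequential core that a PSA parallelizes: generic Metropolis acceptance with geometric cooling, applied to a made-up four-candidate "expansion planning" toy. The costs, capacities and penalty weight are invented, and the paper's network models are far richer than this.

```python
import math
import random

def simulated_annealing(cost, neighbor, x0, t0=10.0, alpha=0.999, steps=5000, seed=7):
    """Plain sequential SA; a parallel version would evaluate candidate moves
    on several processors while preserving this acceptance rule."""
    rng = random.Random(seed)
    x, fx = x0, cost(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(steps):
        y = neighbor(x, rng)
        fy = cost(y)
        # Accept downhill moves always, uphill moves with Metropolis probability.
        if fy <= fx or rng.random() < math.exp((fx - fy) / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= alpha   # geometric cooling
    return best, fbest

costs = [10, 7, 7, 15]   # construction cost of each candidate line (invented)
caps  = [ 5, 4, 4,  9]   # capacity each line adds (invented)
demand = 8

def plan_cost(x):
    """Build cost plus a heavy penalty for unmet demand."""
    build = sum(c for c, b in zip(costs, x) if b)
    supplied = sum(c for c, b in zip(caps, x) if b)
    return build + 100 * max(0, demand - supplied)

def flip_one(x, rng):
    """Neighborhood: toggle one candidate line."""
    i = rng.randrange(len(x))
    return x[:i] + (1 - x[i],) + x[i + 1:]

plan, total = simulated_annealing(plan_cost, flip_one, (0, 0, 0, 0))
```

By enumeration, the optimum of this toy is to build lines 2 and 3 (total cost 14), and with this schedule the run typically finds it; the key point is that the slower the cooling, the closer the result gets to the optimum, exactly the trade-off the abstract attacks with parallelism.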