40 results for "Simulação paralela" (parallel simulation)
Abstract:
VALENTIM, R. A. M.; NOGUEIRA, I. A.; ROCHA NETO, A. F. Utilizando a porta paralela para controle remoto de um dispositivo. Revista da FARN, Natal, v. 2, p. 103-114, 2002.
Abstract:
Metaheuristic techniques are known for solving optimization problems classified as NP-complete and are successful in obtaining good-quality solutions. They use non-deterministic approaches to generate solutions close to the optimum, without guaranteeing that the global optimum will be found. Motivated by the difficulty of solving these problems, this work proposes the development of parallel hybrid methods combining reinforcement learning with the metaheuristics GRASP and Genetic Algorithms. With these techniques, we aim to contribute to greater efficiency in obtaining good solutions. Instead of using the Q-learning reinforcement learning algorithm merely as a technique for generating the initial solutions of the metaheuristics, we use it in cooperative and competitive approaches with the Genetic Algorithm and GRASP, in a parallel implementation. The implementations developed in this study showed satisfactory results under both strategies, that is, cooperation and competition between the methods and between groups of them. For some instances the global optimum was found; for others, the implementations came close to it. A performance analysis of the proposed approach was also carried out, showing good results on the metrics that attest to the efficiency and speedup (the speed gain from parallel processing) of the implementations.
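Since the abstract does not include its algorithms, the following is a minimal, self-contained sketch of the tabular Q-learning ingredient that such hybrids combine with GRASP and Genetic Algorithms. The toy chain environment, its rewards, and all parameters are invented for illustration; they are not the work's actual problem.

```python
import random

def q_learning(n_states=5, alpha=0.5, gamma=0.9, episodes=200, seed=0):
    """Minimal tabular Q-learning on a toy chain: action 1 moves right
    (reward 1 on reaching the last state), action 0 moves left.
    Environment and parameters are invented for illustration."""
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        while s < n_states - 1:
            # epsilon-greedy action selection (20% exploration)
            if rng.random() < 0.2:
                a = rng.randrange(2)
            else:
                a = max((0, 1), key=lambda x: q[s][x])
            s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
            r = 1.0 if s2 == n_states - 1 else 0.0
            # standard Q-learning temporal-difference update
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

q = q_learning()
```

The greedy policy read off the learned table is the kind of information a hybrid could use to seed or bias the constructive phase of GRASP or the initial population of a Genetic Algorithm.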
Abstract:
The seismic method is extremely important in geophysics. Mainly associated with oil exploration, this line of research concentrates most of the investment in the area. Acquisition, processing, and interpretation of seismic data are the stages that make up a seismic study. Seismic processing in particular focuses on imaging the geological structures of the subsurface. It has evolved significantly in recent decades, driven by the demands of the oil industry and by hardware advances that delivered greater storage and processing capacity, enabling more sophisticated processing algorithms such as those that exploit parallel architectures. One of the most important steps in seismic processing is imaging. Migration of seismic data is one of the techniques used for imaging, with the goal of obtaining a seismic section that represents the geological structures as accurately and faithfully as possible. The result of migration is a 2D or 3D image in which it is possible to identify faults, salt domes, and other structures of interest, such as potential hydrocarbon reservoirs. However, a migration performed with quality and accuracy can be very time-consuming, because of the mathematics of the algorithm and the large volume of input and output data involved; it may take days, weeks, or even months of uninterrupted execution on supercomputers, representing large computational and financial costs that could make these methods impractical. Aiming at better performance, this work parallelized the core of a Reverse Time Migration (RTM) algorithm using the Open Multi-Processing (OpenMP) parallel programming model, given the large computational effort this migration technique requires.
Furthermore, analyses such as speedup and efficiency were performed, and finally the degree of algorithmic scalability was identified with respect to the technological advances expected of future processors.
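The speedup and efficiency metrics mentioned above have standard definitions, sketched below; the timing figures in the comment are made up for illustration.

```python
def speedup(t_serial, t_parallel):
    """Speedup: how many times faster the parallel run is than the serial one."""
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, n_cores):
    """Efficiency: speedup normalized by core count (1.0 = ideal linear scaling)."""
    return speedup(t_serial, t_parallel) / n_cores

# Hypothetical example: a kernel taking 120 s serially and 20 s on 8 cores
# gives speedup 6.0 and efficiency 0.75.
```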
Abstract:
This paper analyzes the performance of a parallel implementation of Coupled Simulated Annealing (CSA) for the unconstrained optimization of continuous-variable problems. Parallel processing is an efficient form of information processing that emphasizes the exploitation of simultaneous events in software execution. It arises primarily from the demand for high computational performance and from the difficulty of increasing the speed of a single processing core. Although multicore processors are easily found nowadays, several algorithms are not yet suited to running on parallel architectures. The algorithm is characterized by a group of Simulated Annealing (SA) optimizers working together to refine the solution, each running in its own thread executed on a different processor. In the analysis of parallel performance and scalability, the following metrics were investigated: execution time; the speedup of the algorithm as the number of processors increases; and the efficiency of use of the processing elements as the size of the treated problem increases. Furthermore, the quality of the final solution was verified. For this study, the paper proposes a parallel version of CSA and an equivalent serial version. Both algorithms were analyzed on 14 benchmark functions; for each function, CSA was evaluated with 2 to 24 optimizers. The results are presented and discussed in light of these metrics. The conclusions characterize CSA as a good parallel algorithm, both in the quality of its solutions and in its parallel scalability and efficiency.
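As a rough illustration of the coupling idea (not the authors' code), the sketch below runs several SA optimizers whose acceptance of uphill moves is coupled through a shared term computed from the current energies of all optimizers; in the parallel version each optimizer would run in its own thread. The parameters, cooling schedule, and test function are assumptions.

```python
import math
import random

def csa(objective, n_opt=4, dim=2, t_gen=1.0, t_ac=1.0, iters=500, seed=1):
    """Simplified Coupled Simulated Annealing sketch: n_opt SA optimizers
    share a coupling term gamma; parameters are illustrative only."""
    rng = random.Random(seed)
    sols = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_opt)]
    energies = [objective(s) for s in sols]
    best = min(energies)
    for _ in range(iters):
        # coupling term from the energies of ALL optimizers at iteration start
        e_max = max(energies)
        gamma = sum(math.exp((e - e_max) / t_ac) for e in energies)
        for i in range(n_opt):
            cand = [x + rng.gauss(0, t_gen) for x in sols[i]]
            e_cand = objective(cand)
            if e_cand < energies[i]:
                accept = True
            else:
                # coupled acceptance probability for uphill moves
                a = math.exp((energies[i] - e_max) / t_ac) / gamma
                accept = rng.random() < a
            if accept:
                sols[i], energies[i] = cand, e_cand
                best = min(best, e_cand)
        t_gen *= 0.99  # simple generation-temperature schedule
    return best

best = csa(lambda x: sum(v * v for v in x))  # sphere test function
```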
Abstract:
This work presents a scalable and efficient parallel implementation of the standard Simplex algorithm on multicore architectures for solving large-scale linear programming problems. We present a general scheme explaining how each step of the standard Simplex algorithm was parallelized, indicating important points of the parallel implementation. Performance analyses were conducted by comparing against the sequential time of the tableau Simplex and of IBM's CPLEX® Simplex. The experiments were executed on a shared-memory machine with 24 cores. The scalability analysis was performed with problems of different dimensions, finding evidence that our parallel standard Simplex algorithm has better parallel efficiency for problems with more variables than constraints. In comparison with CPLEX®, the proposed parallel algorithm achieved an efficiency up to 16 times better.
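The row-update step that dominates the tableau Simplex can be sketched as below (a simplified illustration, not the authors' implementation). The loop over non-pivot rows is the natural target for multicore parallelization, since each row update is independent of the others.

```python
def pivot(tableau, row, col):
    """One pivot of the standard Simplex tableau (dense, list-of-lists).

    After normalizing the pivot row, every other row is updated
    independently, so in a multicore version each thread can update
    a block of rows. Illustrative sketch only."""
    m = len(tableau)
    n = len(tableau[0])
    p = tableau[row][col]
    tableau[row] = [v / p for v in tableau[row]]  # normalize pivot row
    for r in range(m):  # parallelizable: rows are independent
        if r != row and tableau[r][col] != 0:
            f = tableau[r][col]
            tableau[r] = [tableau[r][c] - f * tableau[row][c] for c in range(n)]
    return tableau

t = [[2.0, 1.0, 4.0],
     [1.0, 3.0, 6.0]]
pivot(t, 0, 0)  # pivot column becomes a unit vector
```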
Abstract:
This study proposes a solution for scheduling data processing with variable demand in cloud environments. The system checks variables specific to the business context of a company incubated at the Digital Metropole Institute of UFRN. It generates a strategy for identifying the machine configurations available in a cloud environment, focusing on processing performance and using data load-balancing strategies and parallelism in the software execution flow. The goal is to meet seasonal demand within a standard time limit set by the company, while controlling operating costs by using cloud services at the IaaS layer.
Abstract:
Set in the context of the educational actions of Casa Renascer, a non-governmental organization located in the city of Natal whose primary purpose was the care of children and adolescent girls in situations of vulnerability, this research describes and analyzes the role of theme in the creative process developed by the Asmarias Theatre Company from 1993 to 2003, a process that culminated in the staging of the dramatic text Mateus e Mateusa, by Qorpo-Santo. The research focuses on the path the company followed: from its early history (1993), with the practice of dramatic reading and writing in the preparation of the didactic material called the Primer of Inventions, through its work with street theater and forum theater (1997 to 2000), to the 2001 reunion of the seven teenagers who formed the group's final configuration around the staging of Qorpo-Santo's text (2002-2003). In following this trajectory, the study examines the evolution of a creative process grounded in an institutional theme, asking whether the traditional form of theater, with reference to the issues inherent in dramatic texts considered classics, can provide moments of educational experience. The discussion develops through research and analysis of these descriptions, finding, in the interval between the company's past and present, indications that lead to its conclusions. The methodology is based on theatrical archeology (PAVIS, 2005), on the evidential paradigm (GINZBURG, 1989), and on an approach to narrated experience following Benjamin (1985). Documents in written, photographic, and filmed formats were selected, and in these files were identified marks and traces that led to an understanding of theme in the creative process of the Asmarias Theatre Company during the rehearsals of the dramatic text Mateus e Mateusa, by Qorpo-Santo.
In this theatrical practice, situated in the field of theater pedagogy, it appears that working through themes in the theater at Casa Renascer allowed the formation of a critical-aesthetic perspective and of the personal and social dimensions of the subjects involved. The theme gained significant weight in the theatrical activity as a guiding point of the company's creative process, taking shape in the theatrical art form. In this sense, the creative process with classic dramatic texts gained an educational dimension by addressing the theme within the movement of the drama, with individual creation as a focus added to the collective universe of the interactive game.
Abstract:
This research concerns the use of coconut endocarp (Cocos nucifera Linn.) and waste from wood and furniture production as raw material for technological use. The lignocellulosic waste is used to manufacture homogeneous wood sheet agglomerate (LHWS) and as a lignocellulosic filler in a polymeric composite with E-glass fiber (GFRP-WC). In the manufacture of the homogeneous wood sheet agglomerate (LHWS), castor (mamona) resin was used as the binding element for the waste. The plates were produced in a heated hydraulic press with temperature control, for different percentages of wood waste and coconut endocarp. Physical tests were conducted to determine water absorption, density, moisture content (at two and twenty-four hours), and swelling in thickness (at two and twenty-four hours), and mechanical tests were performed to evaluate the parallel tensile strength (internal bond) and the static bending strength. The physical test results indicate that the LHWS can be classified as a high-density bonded wood plate with high water resistance. The mechanical tests established that LHWS presents different characteristics under uniaxial tension and static bending, since the strength and the modulus of elasticity varied with the amount of dry endocarp used to manufacture each LHWS mix. The GFRP-WC was industrially manufactured by a hand-lay-up process in which E-glass fiber was used as reinforcement and the lignocellulosic waste as filler. The matrix was made with orthophthalic unsaturated polyester resin. Physical and mechanical tests were performed in both saturated-humidity and dry conditions. The results indicated good performance of the GFRP-WC, both in tension and in three-point bending. The presence of water influenced the moduli obtained in bending and tension, but there were no significant alterations in the properties analyzed.
As for the fracture, the analysis showed that the effects are more harmful in the presence of moisture under the loadings tested; even so, the fracture was well defined, starting at the external parts and spreading to the internal regions upon reaching the hybrid filler.
Abstract:
One of the main activities in petroleum engineering is estimating the oil production of existing reserves. The calculation of these reserves is crucial to determining the economic feasibility of their exploitation. Currently, the petroleum industry faces problems in analyzing production due to the exponentially increasing amount of data provided by production facilities. Conventional reservoir modeling techniques, such as numerical reservoir simulation and visualization, are well developed and available. This work proposes intelligent methods, such as artificial neural networks, to predict oil production, and compares the results with those obtained by numerical simulation, a method widely used in practice to predict oil production behavior. Artificial neural networks are used because of their learning, adaptation, and interpolation capabilities.
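As a hedged sketch of the regression idea (not the work's actual network or data), a tiny one-hidden-layer neural network can be trained by gradient descent to learn a synthetic production-decline curve; all data and hyperparameters below are invented for illustration.

```python
import math
import random

def train_mlp(xs, ys, hidden=8, lr=0.05, epochs=2000, seed=0):
    """Tiny one-hidden-layer tanh network trained by stochastic gradient
    descent on squared error. Illustrative sketch only; returns the loss
    before and after training."""
    rng = random.Random(seed)
    w1 = [rng.uniform(-1, 1) for _ in range(hidden)]
    b1 = [0.0] * hidden
    w2 = [rng.uniform(-1, 1) for _ in range(hidden)]
    b2 = 0.0

    def forward(x):
        h = [math.tanh(w1[j] * x + b1[j]) for j in range(hidden)]
        return h, sum(w2[j] * h[j] for j in range(hidden)) + b2

    def loss():
        return sum((forward(x)[1] - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

    first = loss()
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            h, out = forward(x)
            err = out - y
            for j in range(hidden):
                grad_h = err * w2[j] * (1 - h[j] ** 2)  # backprop through tanh
                w2[j] -= lr * err * h[j]
                w1[j] -= lr * grad_h * x
                b1[j] -= lr * grad_h
            b2 -= lr * err
    return first, loss()

# Synthetic "decline curve": production falling over normalized time
xs = [i / 10 for i in range(10)]
ys = [1.0 - 0.7 * x for x in xs]
before, after = train_mlp(xs, ys)
```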
Abstract:
With the growth of energy consumption worldwide, conventional reservoirs, the so-called "easy exploration and production" reservoirs, no longer meet global energy demand. This has led many researchers to develop projects addressing these needs, and companies in the oil sector have invested in techniques that help locate and drill wells. One of the techniques employed in the oil exploration process is Reverse Time Migration (RTM), a seismic imaging method that produces excellent images of the subsurface. The algorithm is based on the computation of the wave equation. RTM is considered one of the most advanced seismic imaging techniques. The economic value of the oil reserves whose localization requires RTM is very high, which makes the development of these algorithms a competitive differentiator for seismic processing companies. However, RTM requires great computational power, which still somewhat limits its practical success. The objective of this work is to explore the implementation of this algorithm on unconventional architectures, specifically GPUs using CUDA, analyzing both the difficulties of the development and the performance of the algorithm in its sequential and parallel versions.
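The computational core of RTM is a finite-difference stencil applied to the wave equation at every time step. The 1D sketch below (an illustration, not the thesis's code; velocity and grid parameters are assumed) shows why the kernel maps well to GPUs: each interior grid point is updated independently, so in CUDA each point can be handled by its own thread.

```python
def wave_step(prev, cur, c=1.0, dt=0.001, dx=0.01):
    """One explicit finite-difference time step of the 1D acoustic wave
    equation u_tt = c^2 u_xx, the kind of stencil kernel at the heart of
    RTM. Interior points are data-parallel; boundaries stay fixed at zero
    in this simplified sketch."""
    nxt = [0.0] * len(cur)
    r2 = (c * dt / dx) ** 2  # squared Courant number
    for i in range(1, len(cur) - 1):  # each point is independent
        nxt[i] = (2 * cur[i] - prev[i]
                  + r2 * (cur[i + 1] - 2 * cur[i] + cur[i - 1]))
    return nxt

# A point pulse spreads to its neighbors after one step:
nxt = wave_step([0.0] * 5, [0.0, 0.0, 1.0, 0.0, 0.0])
```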
Abstract:
Particle Swarm Optimization is a metaheuristic that arose to simulate the behavior of a flock of birds in flight, whose movement is random locally but determined globally. The technique has been widely used to address non-linear continuous problems, yet it remains little explored for discrete problems. This paper presents the operation of this metaheuristic and proposes strategies for applying it to discrete optimization problems, in both parallel and sequential forms of execution. Computational experiments were performed on TSP instances selected from the TSPLIB library with up to 3038 nodes, showing the improvement in performance of the parallel methods over their sequential versions, in both execution time and quality of results.
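For reference, a minimal continuous PSO can be sketched as below; this is an illustration under assumed parameters, not the paper's discrete TSP variant, which replaces the velocity and position updates with swap-based operators on tours.

```python
import random

def pso(objective, dim=2, n_particles=20, iters=200,
        w=0.7, c1=1.5, c2=1.5, seed=3):
    """Minimal continuous Particle Swarm Optimization sketch.
    Each particle is pulled toward its personal best and the global best:
    movement is locally random (r1, r2) but globally determined."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest_val

best = pso(lambda x: sum(v * v for v in x))  # sphere test function
```

The per-particle updates within one iteration are independent given the current global best, which is what the parallel execution strategies in the paper exploit.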
Algoritmo evolutivo paralelo para o problema de atribuição de localidades a anéis em redes SONET/SDH
Abstract:
Telecommunications play a fundamental role in contemporary society; one of their main functions is to give people the possibility of connecting and integrating into the society in which they live and, thereby, to accelerate development through knowledge. As new technologies are introduced to the market, the demand for new products and services that depend on the offered infrastructure grows, making telecommunication network planning problems increasingly large and complex. Many of these problems, however, can be formulated as combinatorial optimization models, and heuristic algorithms can help solve them in the planning phase. This paper proposes the development of a Parallel Evolutionary Algorithm applied to the telecommunications problem known in the literature as the SONET Ring Assignment Problem (SRAP). This NP-hard problem arises during the physical planning of a telecommunication network and consists of determining the connections between locations (customers) while satisfying a series of constraints at the lowest possible cost. Experimental results illustrate the effectiveness of the parallel Evolutionary Algorithm, compared with other methods, in obtaining solutions that are either optimal or very close to optimal.
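As an illustration of the evolutionary ingredient (not the thesis's algorithm), the sketch below evolves ring assignments for a toy cost that merely counts demand pairs split across rings. Real SRAP additionally enforces ring capacity constraints, and the parallel version would evolve subpopulations (islands) concurrently with migration; the demand list and parameters here are invented.

```python
import random

def ga_ring_assignment(cost, n_loc, n_rings, pop_size=30, gens=100, seed=7):
    """Toy evolutionary algorithm for a SRAP-like assignment problem:
    each gene assigns a locality to a ring; 'cost' scores a full
    assignment. Returns (initial best cost, final best cost)."""
    rng = random.Random(seed)
    pop = [[rng.randrange(n_rings) for _ in range(n_loc)]
           for _ in range(pop_size)]

    def best(p):
        return min(p, key=cost)

    initial = cost(best(pop))
    for _ in range(gens):
        nxt = [best(pop)[:]]  # elitism: keep the best individual
        while len(nxt) < pop_size:
            a, b = rng.sample(pop, 2)
            parent = a if cost(a) < cost(b) else b  # tournament selection
            child = parent[:]
            child[rng.randrange(n_loc)] = rng.randrange(n_rings)  # mutation
            nxt.append(child)
        pop = nxt
    return initial, cost(best(pop))

# Invented demand pairs between localities; cost = pairs split across rings
demands = [(0, 1), (1, 2), (3, 4), (4, 5), (2, 3)]
def split_cost(assign):
    return sum(1 for u, v in demands if assign[u] != assign[v])

initial, final = ga_ring_assignment(split_cost, n_loc=6, n_rings=2)
```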
Abstract:
This work describes the study and implementation of vector speed control for a three-phase bearingless induction machine with divided winding, 4 poles, and 1.1 kW, using neural rotor flux estimation. The vector speed control operates together with the radial positioning controllers and with the current controllers of the stator phase windings. For radial positioning, the forces controlled by the machine's internal magnetic fields are used. To optimize the radial forces, a special rotor winding with independent circuits, which allows a low influence on the rotational torque, was used. The neural flux estimation applied to the vector speed control aims to compensate for the parameter dependence of conventional estimators with respect to machine parameter variations caused by temperature increases or rotor magnetic saturation. The implemented control system allows a direct comparison between the responses of the speed and radial positioning controllers when the machine is oriented by the neural rotor flux estimator and when it is oriented by the conventional flux estimator. All of the control system software is a program developed in ANSI C. The DSP resources used by the system are the analog/digital converter channels, the PWM outputs, and the parallel and RS-232 serial interfaces, the latter two responsible, respectively, for DSP programming and for data capture through the supervisory system.
Abstract:
This work describes the study and implementation of speed control for a three-phase induction motor of 1.1 kW and 4 poles using neural rotor flux estimation. The vector speed control operates together with the current controllers of the stator phase windings. The neural flux estimation applied to the vector speed control aims to compensate for the parameter dependence of conventional estimators with respect to machine parameter variations caused by temperature increases or rotor magnetic saturation. The implemented control system allows a direct comparison between the responses of the speed control when the machine is oriented by the neural rotor flux estimator and when it is oriented by the conventional flux estimator. All of the control system software is a program developed in ANSI C. The main DSP resources used by the system are the analog/digital converter channels, the PWM outputs, and the parallel and RS-232 serial interfaces, the latter two responsible, respectively, for DSP programming and for data capture through the supervisory system.