934 results for interlocked architectures


Relevance:

10.00%

Publisher:

Abstract:

The seismic method is of central importance in geophysics. Mainly associated with oil exploration, this line of research attracts most of the investment in the area. A seismic study comprises the acquisition, processing and interpretation of seismic data. Seismic processing in particular is concerned with producing images that represent the geological structures of the subsurface. It has evolved significantly in recent decades, driven by the demands of the oil industry and by advances in hardware whose greater storage and processing capacity enabled more sophisticated algorithms, such as those that exploit parallel architectures. One of the most important steps in seismic processing is imaging. Migration of seismic data is one of the techniques used for imaging, with the goal of obtaining a seismic section that represents the geological structures as accurately and faithfully as possible. The result of migration is a 2D or 3D image in which it is possible to identify faults, salt domes and other structures of interest, such as potential hydrocarbon reservoirs. However, an accurate, high-quality migration can be very time-consuming, owing to the heuristics of the mathematical algorithms and the large volumes of input and output data involved: a run may take days, weeks or even months of uninterrupted execution on supercomputers, at computational and financial costs that could make these methods impractical. Aiming at better performance, this work parallelized the core of a Reverse Time Migration (RTM) algorithm with the Open Multi-Processing (OpenMP) programming model, given the large computational effort this migration technique requires. Speedup and efficiency analyses were then performed, culminating in an assessment of the algorithm's degree of scalability with respect to the advances expected of future processors.
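The abstract does not reproduce the RTM kernel itself, so the following is only a minimal sketch of the kind of loop such a parallelization targets: one time step of a second-order finite-difference acoustic wave propagation, with the spatial loops shared among OpenMP threads. The grid sizes, names and stencil order are assumptions for illustration, not details taken from the work.

```c
/* One time step of 2-D acoustic wave propagation, the hot loop of an RTM
 * kernel. Compile with: cc -fopenmp -O2 rtm_step.c
 * Illustrative sketch only: grid size, stencil and names are assumed. */
#include <stdlib.h>

#define NX 1024
#define NZ 1024
#define IDX(i, j) ((i) * NZ + (j))

/* p_next = 2*p - p_prev + (v*dt/dx)^2 * laplacian(p) */
void rtm_step(const float *p, const float *p_prev, float *p_next,
              const float *vel, float dt, float dx)
{
    float c = (dt * dt) / (dx * dx);
    #pragma omp parallel for collapse(2) schedule(static)
    for (int i = 1; i < NX - 1; i++) {
        for (int j = 1; j < NZ - 1; j++) {
            float lap = p[IDX(i - 1, j)] + p[IDX(i + 1, j)]
                      + p[IDX(i, j - 1)] + p[IDX(i, j + 1)]
                      - 4.0f * p[IDX(i, j)];
            p_next[IDX(i, j)] = 2.0f * p[IDX(i, j)] - p_prev[IDX(i, j)]
                              + vel[IDX(i, j)] * vel[IDX(i, j)] * c * lap;
        }
    }
}

int main(void)
{
    size_t n = (size_t)NX * NZ;
    float *p  = calloc(n, sizeof *p);
    float *pp = calloc(n, sizeof *pp);
    float *pn = calloc(n, sizeof *pn);
    float *v  = malloc(n * sizeof *v);
    for (size_t k = 0; k < n; k++) v[k] = 2000.0f;  /* constant velocity model */
    p[IDX(NX / 2, NZ / 2)] = 1.0f;                  /* point source */
    for (int t = 0; t < 100; t++) {
        rtm_step(p, pp, pn, v, 0.0005f, 10.0f);
        float *tmp = pp; pp = p; p = pn; pn = tmp;  /* rotate time levels */
    }
    free(p); free(pp); free(pn); free(v);
    return 0;
}
```

Because each output point depends only on the previous time levels, the two spatial loops carry no dependencies and parallelize cleanly, which is what makes RTM a natural OpenMP candidate.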

Relevance:

10.00%

Publisher:

Abstract:

This paper analyzes the performance of a parallel implementation of Coupled Simulated Annealing (CSA) for the unconstrained optimization of continuous-variable problems. Parallel processing is an efficient form of information processing that emphasizes the exploitation of simultaneous events during the execution of software. It arises primarily from the demand for high computational performance and from the difficulty of increasing the speed of a single processing core. Although multicore processors are now easy to find, many algorithms are still not suited to parallel architectures. CSA is characterized by a group of Simulated Annealing (SA) optimizers working together to refine the solution, each SA optimizer running in its own thread on a different processor. The analysis of parallel performance and scalability investigated three metrics: execution time; the speedup of the algorithm as the number of processors increases; and the efficiency of the processing elements as the size of the treated problem increases. The quality of the final solution was also verified. For this study, the paper proposes a parallel version of CSA and an equivalent serial version; both algorithms were analyzed on 14 benchmark functions, and for each function CSA was evaluated with 2 to 24 optimizers. The results are presented and discussed in terms of these metrics, and the conclusions characterize CSA as a good parallel algorithm, both in the quality of its solutions and in its parallel scalability and efficiency.
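As a point of reference, the speedup and efficiency metrics here are the usual S(p) = T(1)/T(p) and E(p) = S(p)/p. The sketch below is a simplified, hypothetical rendering of the CSA idea, not the paper's code: several SA optimizers run in OpenMP threads on the Rastrigin benchmark, and acceptance of a worse move is coupled through the energies of all current solutions, following the common form in the CSA literature. All constants, schedules and names are illustrative.

```c
/* Simplified Coupled Simulated Annealing sketch: M optimizers in OpenMP
 * threads minimize the Rastrigin function; acceptance of worse moves is
 * coupled through all current energies. Compile: cc -fopenmp -O2 csa.c -lm */
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

#define M 8                     /* number of coupled optimizers */
#define D 10                    /* problem dimension */
#define STEPS 20000
#define PI 3.14159265358979323846

static double rastrigin(const double *x)
{
    double s = 10.0 * D;
    for (int k = 0; k < D; k++)
        s += x[k] * x[k] - 10.0 * cos(2.0 * PI * x[k]);
    return s;
}

int main(void)
{
    double x[M][D], E[M];
    unsigned int seed[M];
    for (int i = 0; i < M; i++) {
        seed[i] = 1234u + i;
        for (int k = 0; k < D; k++)
            x[i][k] = 5.12 * (2.0 * rand_r(&seed[i]) / RAND_MAX - 1.0);
        E[i] = rastrigin(x[i]);
    }
    double T_gen = 5.0, T_ac = 100.0;  /* generation / acceptance temperatures */
    for (int t = 0; t < STEPS; t++) {
        double Emax = E[0], gamma = 0.0;
        for (int i = 1; i < M; i++) if (E[i] > Emax) Emax = E[i];
        for (int i = 0; i < M; i++) gamma += exp((E[i] - Emax) / T_ac);

        #pragma omp parallel for
        for (int i = 0; i < M; i++) {
            double y[D];
            for (int k = 0; k < D; k++)
                y[k] = x[i][k] + T_gen * (2.0 * rand_r(&seed[i]) / RAND_MAX - 1.0);
            double Ey = rastrigin(y);
            /* coupled acceptance: depends on every optimizer's energy */
            double A = exp((E[i] - Emax) / T_ac) / gamma;
            double r = (double)rand_r(&seed[i]) / RAND_MAX;
            if (Ey < E[i] || A > r) {
                for (int k = 0; k < D; k++) x[i][k] = y[k];
                E[i] = Ey;
            }
        }
        T_gen *= 0.9995;               /* simple geometric cooling */
        T_ac  *= 0.9995;
    }
    double best = E[0];
    for (int i = 1; i < M; i++) if (E[i] < best) best = E[i];
    printf("best energy after %d steps: %f\n", STEPS, best);
    return 0;
}
```

The coupling term gamma is the only shared read in each step, so the optimizers remain almost independent, which is what lets CSA scale well as the thread count grows.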

Relevance:

10.00%

Publisher:

Abstract:

The increasing demand for high-performance wireless communication systems has exposed the inefficiency of the current model of fixed radio spectrum allocation. In this context, cognitive radio appears as a more efficient alternative, providing opportunistic spectrum access with the maximum possible bandwidth. To meet these requirements, the transmitter must identify transmission opportunities and the receiver must recognize the parameters defined for the communication signal. Techniques based on cyclostationary analysis can be applied to both spectrum sensing and modulation classification, even in low signal-to-noise ratio (SNR) environments. Despite this robustness, one of the main disadvantages of cyclostationarity is the high computational cost of calculating its functions. This work proposes efficient architectures for obtaining cyclostationary features for use in both spectrum sensing and automatic modulation classification (AMC). In the context of spectrum sensing, a parallelized algorithm for extracting cyclostationary features of communication signals is presented, and the parallelization of this feature extractor is evaluated by speedup and parallel efficiency metrics. The spectrum sensing architecture is analyzed for several configurations of false alarm probability, SNR levels and observation times, for BPSK and QPSK modulations. In the context of AMC, the reduced alpha-profile is proposed as a cyclostationary signature computed over a reduced set of cyclic frequencies. This signature is validated by a modulation classification architecture based on pattern matching. The AMC architecture is investigated for correct classification rates of AM, BPSK, QPSK, MSK and FSK modulations under several scenarios of observation length and SNR levels. The numerical results obtained in this work show the efficiency of the proposed architectures.
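The basic quantity behind such feature extractors is the cyclic autocorrelation R_x^alpha(tau), which is nonzero at cyclic frequencies alpha tied to a signal's hidden periodicities (symbol rate, carrier). The sketch below is a straightforward estimator applied to a toy rectangular-pulse BPSK-like signal; the windowing, normalization conventions and candidate alpha set are assumptions that vary across implementations and are not taken from the work.

```c
/* Estimate the cyclic autocorrelation
 *   R_x^alpha(tau) ~ (1/N) * sum_n x[n+tau] * conj(x[n]) * exp(-j*2*pi*alpha*n)
 * and scan a few candidate cyclic frequencies. Compile: cc -O2 caf.c -lm */
#include <complex.h>
#include <math.h>
#include <stdio.h>

#define N 4096
#define PI 3.14159265358979323846

double complex cyclic_autocorr(const double complex *x, int n_samples,
                               double alpha, int tau)
{
    double complex acc = 0.0;
    for (int n = 0; n + tau < n_samples; n++)
        acc += x[n + tau] * conj(x[n]) * cexp(-I * 2.0 * PI * alpha * n);
    return acc / n_samples;
}

int main(void)
{
    /* Toy baseband BPSK-like signal: +/-1 symbols, 8 samples per symbol. */
    double complex x[N];
    unsigned int s = 42u;
    for (int n = 0; n < N; n++) {
        if (n % 8 == 0) s = s * 1103515245u + 12345u;   /* new random symbol */
        x[n] = (s & 0x40000000u) ? 1.0 : -1.0;
    }
    /* With tau at half a symbol, peaks appear at multiples of the symbol
     * rate (alpha = k/8) on top of the alpha = 0 component. */
    int tau = 4;
    for (double alpha = 0.0; alpha <= 0.26; alpha += 1.0 / 64.0)
        printf("alpha = %.5f   |R| = %.4f\n", alpha,
               cabs(cyclic_autocorr(x, N, alpha, tau)));
    return 0;
}
```

Each candidate alpha is an independent sum over the samples, which is exactly the structure that makes the extractor straightforward to parallelize.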

Relevance:

10.00%

Publisher:

Abstract:

The aim of this study is to create an artificial neural network (ANN) capable of modeling the transverse elasticity modulus (E2) of unidirectional composites. To that end, we used a dataset divided into two parts, one for training and the other for testing the ANN. Three network architectures were developed: one with only two inputs, one with three inputs, and a third with a mixed architecture combining an ANN with the Halpin-Tsai model. After training, the results show that the use of ANNs is quite promising: when compared with the Halpin-Tsai mathematical model, the networks exhibited higher correlation coefficients and lower root mean square error values.
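For reference, the Halpin-Tsai model against which the networks were compared is commonly written as below; this standard form is quoted from the composites literature, not from the abstract itself, and the curve-fitting parameter xi depends on fiber geometry and packing.

```latex
% Halpin-Tsai estimate of the transverse modulus E_2 of a unidirectional
% composite: E_f, E_m are the fiber and matrix moduli, V_f the fiber
% volume fraction, and \xi an empirical reinforcement geometry parameter.
\frac{E_2}{E_m} = \frac{1 + \xi \eta V_f}{1 - \eta V_f},
\qquad
\eta = \frac{E_f/E_m - 1}{E_f/E_m + \xi}
```

A mixed architecture in the abstract's sense can, for instance, feed this estimate to the network as an extra input alongside the constituent properties.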

Relevance:

10.00%

Publisher:

Abstract:

One of the current major concerns in engineering is the development of aircraft with low fuel consumption and high performance. Airfoils with a high lift coefficient and a low drag coefficient, that is, high-efficiency airfoils, are therefore studied and designed: as efficiency increases, the aircraft's fuel consumption decreases, improving its performance. This work aims to develop a tool for designing airfoils from desired characteristics, such as the lift and drag coefficients and the maximum efficiency, using an algorithm based on an Artificial Neural Network (ANN). For this, a database of aerodynamic characteristics covering a total of 300 airfoils was first collected with the software XFoil. Then, using MATLAB, several network architectures, both modular and hierarchical, were trained with the back-propagation algorithm and the momentum rule. For data analysis, cross-validation was used, selecting the network with the lowest Root Mean Square (RMS) error. The best result was obtained for a hierarchical architecture with two modules and one layer of hidden neurons. The airfoils produced by that network, in the regions of lowest RMS error, were compared with the same airfoils imported back into XFoil.
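The training setup named in the abstract, back-propagation with the momentum rule, follows the classic weight update dw(t) = -eta * dE/dw + alpha * dw(t-1). The sketch below shows that update on a deliberately tiny one-hidden-layer network and a toy XOR task; the network sizes, constants and task are illustrative assumptions, not the paper's MATLAB setup.

```c
/* Back-propagation with the momentum rule on a tiny one-hidden-layer MLP:
 *     dw(t) = -eta * dE/dw + alpha * dw(t-1)
 * Toy XOR task; all sizes and constants are illustrative.
 * Compile: cc -O2 mlp_momentum.c -lm */
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

#define NIN 2
#define NHID 8

static double sigmoid(double z) { return 1.0 / (1.0 + exp(-z)); }

int main(void)
{
    double w1[NHID][NIN + 1], w2[NHID + 1];            /* last index: bias */
    double dw1[NHID][NIN + 1] = {{0}}, dw2[NHID + 1] = {0};
    double eta = 0.1, alpha = 0.9;                     /* rate and momentum */
    srand(7);
    for (int h = 0; h < NHID; h++) {
        w2[h] = (double)rand() / RAND_MAX - 0.5;
        for (int i = 0; i <= NIN; i++)
            w1[h][i] = (double)rand() / RAND_MAX - 0.5;
    }
    w2[NHID] = 0.0;

    for (int epoch = 0; epoch < 5000; epoch++) {
        for (int p = 0; p < 4; p++) {
            double in[NIN] = { p & 1, (p >> 1) & 1 };
            double target = (p & 1) ^ ((p >> 1) & 1);
            /* forward pass: sigmoid hidden layer, linear output */
            double hid[NHID], out = w2[NHID];
            for (int h = 0; h < NHID; h++) {
                double z = w1[h][NIN];
                for (int i = 0; i < NIN; i++) z += w1[h][i] * in[i];
                hid[h] = sigmoid(z);
                out += w2[h] * hid[h];
            }
            double err = out - target;                 /* dE/dout, E = err^2/2 */
            /* backward pass with momentum updates */
            for (int h = 0; h < NHID; h++) {
                double gh = err * w2[h] * hid[h] * (1.0 - hid[h]);
                dw2[h] = -eta * err * hid[h] + alpha * dw2[h];
                w2[h] += dw2[h];
                for (int i = 0; i < NIN; i++) {
                    dw1[h][i] = -eta * gh * in[i] + alpha * dw1[h][i];
                    w1[h][i] += dw1[h][i];
                }
                dw1[h][NIN] = -eta * gh + alpha * dw1[h][NIN];
                w1[h][NIN] += dw1[h][NIN];
            }
            dw2[NHID] = -eta * err + alpha * dw2[NHID];
            w2[NHID] += dw2[NHID];
        }
    }
    for (int p = 0; p < 4; p++) {                      /* report the fit */
        double in[NIN] = { p & 1, (p >> 1) & 1 }, out = w2[NHID];
        for (int h = 0; h < NHID; h++) {
            double z = w1[h][NIN];
            for (int i = 0; i < NIN; i++) z += w1[h][i] * in[i];
            out += w2[h] * sigmoid(z);
        }
        printf("%d xor %d -> %.3f\n", p & 1, (p >> 1) & 1, out);
    }
    return 0;
}
```

Cross-validation in the abstract's sense then repeats such training over data folds and keeps the architecture whose held-out RMS error is lowest.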

Relevance:

10.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

10.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

10.00%

Publisher:

Abstract:

The control of molecular architectures has been a key factor in the use of Langmuir-Blodgett (LB) films in biosensors, especially because biomolecules can be immobilized with preserved activity. In this paper we investigated the incorporation of tyrosinase (Tyr) into mixed Langmuir films of arachidic acid (AA) and a lutetium bisphthalocyanine (LuPc2), which is confirmed by a large expansion in the surface pressure isotherm. These mixed AA-LuPc2 + Tyr films could be transferred onto ITO and Pt electrodes, as indicated by FTIR and electrochemical measurements, and there was no need to crosslink the enzyme molecules to preserve their activity. Significantly, the activity of the immobilized Tyr was considerably higher than in previous work in the literature, which allowed Tyr-containing LB films to be used as highly sensitive voltammetric sensors to detect pyrogallol. Linear responses were found up to 400 µM, with a detection limit of 4.87 × 10⁻² µM (n = 4) and a sensitivity of 1.54 µA µM⁻¹ cm⁻². In addition, the Hill coefficient (h = 1.27) indicates cooperation with LuPc2, which also acts as a catalyst. The enhanced performance of the LB-based biosensor therefore resulted from the preserved activity of Tyr combined with the catalytic activity of LuPc2, a strategy that can be extended to other enzymes and analytes by varying the LB film architecture.

Relevance:

10.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

10.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

10.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

10.00%

Publisher:

Abstract:

This work presents a methodology for analyzing the first-swing transient stability of electric power systems using a neural network based on the adaptive resonance theory (ART) architecture, called the Euclidean ARTMAP neural network. ART architectures offer plasticity and stability, characteristics that are very important for training and for executing the analysis quickly. The Euclidean ARTMAP version provides more accurate and faster solutions than the fuzzy ARTMAP configuration. The network operates in three steps: training, analysis and continuous training. The training step requires considerable processing effort, while the analysis is carried out with almost no computational effort. The proposed network can handle several topologies of the electric system at the same time, making it an alternative for real-time transient stability analysis of electric power systems. To illustrate the proposed neural network, an application is presented for a multi-machine electric power system composed of 10 synchronous machines, 45 buses and 73 transmission lines.
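The abstract does not detail the network's equations, so the following is only a schematic sketch of the core ART operations its name implies: choosing the nearest stored prototype by Euclidean distance, testing it against a vigilance radius, and either adapting it or committing a new category. The full Euclidean ARTMAP couples two such modules through a map field for supervised learning, which is omitted here; all names and constants are assumptions.

```c
/* Schematic ART-style categorization with a Euclidean distance: pick the
 * nearest prototype, test it against a vigilance radius, then adapt it or
 * commit a new category. The map field of a full ARTMAP is omitted. */
#include <math.h>
#include <stdio.h>

#define DIM 4
#define MAX_CAT 64

static double proto[MAX_CAT][DIM];
static int n_cat = 0;

static double dist2(const double *a, const double *b)
{
    double s = 0.0;
    for (int k = 0; k < DIM; k++) s += (a[k] - b[k]) * (a[k] - b[k]);
    return s;
}

/* vigilance: acceptance radius; beta: learning rate for prototype updates */
int art_classify(const double *x, double vigilance, double beta)
{
    int best = -1;
    double best_d = 1e300;
    for (int j = 0; j < n_cat; j++) {
        double d = dist2(x, proto[j]);
        if (d < best_d) { best_d = d; best = j; }
    }
    if (best >= 0 && sqrt(best_d) <= vigilance) {
        for (int k = 0; k < DIM; k++)          /* nudge prototype toward x */
            proto[best][k] += beta * (x[k] - proto[best][k]);
        return best;
    }
    if (n_cat < MAX_CAT) {                     /* vigilance failed: new category */
        for (int k = 0; k < DIM; k++) proto[n_cat][k] = x[k];
        return n_cat++;
    }
    return best;                               /* capacity reached: best effort */
}

int main(void)
{
    double samples[4][DIM] = {
        {0.10, 0.20, 0.10, 0.00}, {0.12, 0.18, 0.11, 0.02},
        {0.90, 0.80, 0.95, 1.00}, {0.88, 0.82, 0.90, 0.97}
    };
    for (int i = 0; i < 4; i++)
        printf("sample %d -> category %d\n", i,
               art_classify(samples[i], 0.3, 0.5));
    return 0;
}
```

Plasticity comes from committing new categories for novel inputs, while stability comes from only nudging the prototype that already matches; this is also why analysis after training costs almost nothing computationally.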

Relevance:

10.00%

Publisher:

Abstract:

The increasing complexity of applications has demanded hardware that is ever more flexible and able to achieve higher performance. Traditional hardware solutions have not been successful in meeting these constraints. General-purpose processors are inherently flexible, since they perform several tasks; however, they cannot reach high performance when compared to application-specific devices. Conversely, since application-specific devices perform only a few tasks, they achieve high performance but offer less flexibility. Reconfigurable architectures emerged as an alternative to these traditional approaches and have become an area of rising interest over the last decades. The purpose of this paradigm is to modify the device's behavior according to the application, making it possible to balance flexibility and performance and to meet the applications' constraints. This work presents the design and implementation of a coarse-grained hybrid reconfigurable architecture for stream-based applications. The architecture, named RoSA, consists of reconfigurable logic attached to a processor. Its goal is to exploit the instruction-level parallelism of data-flow-intensive applications to accelerate their execution on the reconfigurable logic. Since the instruction-level parallelism is extracted at compile time, this work also presents an optimization phase for the RoSA architecture to be included in the GCC compiler. To design the architecture, the work further presents a methodology based on hardware reuse of datapaths, named RoSE, which views the reconfigurable units through reusability levels, providing area savings and datapath simplification. The architecture was implemented in a hardware description language (VHDL) and validated through simulation and prototyping. Benchmarks used to characterize performance demonstrated a speedup of 11x on the execution of some applications.

Relevance:

10.00%

Publisher:

Abstract:

It is increasingly common for a single computing system to be used through different devices (personal computers, cellular telephones and others) and software platforms (systems with graphical user interfaces, Web systems and others). Depending on the technologies involved, different software architectures may be employed: Web systems, for example, use a client-server architecture, usually extended to three layers, while systems with graphical interfaces commonly adopt the MVC style. The use of architectures with different styles hinders the interoperability of systems across multiple platforms. Another aggravating factor is that the user interface often has a different structure, appearance and behavior on each device, which leads to low usability. Finally, building user interfaces specific to each of the devices involved, with their distinct features and technologies, is work that must be done individually and does not scale. This study sought to address some of these problems by presenting a platform-independent reference architecture that allows the user interface to be built from an abstract specification described in a user interface specification language, MML. This solution is designed to offer greater interoperability between different platforms, greater consistency between the user interfaces, and greater flexibility and scalability for the incorporation of new devices.