936 results for Architectures profondes


Relevance:

10.00%

Publisher:

Abstract:

This study presents the implementation and embedding of an Artificial Neural Network (ANN) in hardware, specifically in a programmable device such as a field-programmable gate array (FPGA). The work explores different implementations, described in VHDL, of multilayer perceptron ANNs. Because of the parallelism inherent to ANNs, software implementations are at a disadvantage due to the sequential nature of Von Neumann architectures. A hardware implementation is an alternative that exploits all the parallelism implicit in this model. FPGAs are increasingly used as a platform for implementing neural networks in hardware, taking advantage of their high processing power, low cost, ease of programming and circuit reconfigurability, which allows the network to adapt to different applications. In this context, the aim is to develop neural network arrays in hardware with a flexible architecture, in which it is possible to add or remove neurons and, above all, to modify the network topology, so as to obtain a modular fixed-point network on an FPGA. Five VHDL descriptions were synthesized: two for neurons with one or two inputs, and three for different ANN architectures. The descriptions are highly modular, easily allowing the number of neurons to be increased or decreased. As a result, several complete neural networks were implemented on an FPGA, in fixed-point arithmetic, with high-capacity parallel processing.
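
As a rough illustration of the fixed-point arithmetic such a hardware neuron performs, the Python sketch below quantizes inputs and weights to a Q4.12 format, accumulates in integer arithmetic and applies a clipped activation; the word lengths and activation choice are assumptions, not the formats used in the VHDL descriptions.

```python
import numpy as np

# Illustrative Q4.12 fixed-point format (16-bit words); the actual word lengths
# used in the VHDL descriptions are assumed, not taken from the work.
FRAC_BITS = 12

def to_fixed(x):
    """Quantize a float to a Q4.12 integer."""
    return np.round(np.asarray(x) * (1 << FRAC_BITS)).astype(np.int32)

def fixed_neuron(inputs_fx, weights_fx, bias_fx):
    """Multiply-accumulate in integers, rescale, saturate, then apply the activation."""
    acc = np.sum(inputs_fx.astype(np.int64) * weights_fx.astype(np.int64)) \
          + (bias_fx << FRAC_BITS)
    acc >>= FRAC_BITS                                  # back to Q4.12
    acc = np.clip(acc, -(1 << 15), (1 << 15) - 1)      # 16-bit saturation
    # Clipped-linear activation as a simple stand-in for a LUT-based sigmoid
    return np.clip(acc, 0, 1 << FRAC_BITS)

x = to_fixed([0.5, -0.25])
w = to_fixed([0.8, 0.3])
b = to_fixed(0.1)
print(fixed_neuron(x, w, b) / (1 << FRAC_BITS))
```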

Relevance:

10.00%

Publisher:

Abstract:

This work presents a theoretical analysis, together with numerical and experimental results, of the transmission characteristics of microstrip bandpass filters with different geometries. These filters are built on isotropic dielectric substrates. The numerical analysis is performed with dedicated commercial software, such as Ansoft Designer and Agilent Advanced Design System (ADS). In addition to these tools, a Matlab script was written to analyze the filters with the Finite-Difference Time-Domain (FDTD) method. The filter design focused on the first filtering stage of the ITASAT transponder receiver and its integration with the other systems. Several microstrip filter architectures were studied, assessing the feasibility of implementation and their suitability for the purposes of the ITASAT Project, given their low space occupation at the lower UHF frequencies. ITASAT is an experimental university project that will build a satellite to join the Brazilian Data Collection System's satellite constellation, with the effort of several Brazilian institutions, such as AEB (Brazilian Space Agency), ITA (Technological Institute of Aeronautics), INPE/CRN (National Institute for Space Research/Northeastern Regional Center) and UFRN (Federal University of Rio Grande do Norte). Numerical and experimental results were compared for all filters, showing good agreement and meeting most of the objectives. Improvements for future work were also suggested.
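
A minimal sketch of the kind of leap-frog update loop an FDTD script contains, here in 1D free space with Python/NumPy instead of Matlab; the grid size, time step and Gaussian source are illustrative assumptions, not the parameters used for the microstrip filters.

```python
import numpy as np

# Minimal 1D FDTD in free space illustrating the leap-frog E/H update of the
# method; the actual script modeled microstrip geometries, not this toy grid.
nz, nt = 200, 500
c0, dz = 3e8, 1e-3
dt = dz / (2 * c0)                     # satisfies the Courant stability limit
ez = np.zeros(nz)
hy = np.zeros(nz - 1)

for n in range(nt):
    hy += (ez[1:] - ez[:-1]) * dt / (dz * 4e-7 * np.pi)      # update H from curl E
    ez[1:-1] += (hy[1:] - hy[:-1]) * dt / (dz * 8.854e-12)   # update E from curl H
    ez[nz // 2] += np.exp(-((n - 30) / 10.0) ** 2)           # soft Gaussian source

print("peak |Ez| after", nt, "steps:", np.abs(ez).max())
```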

Relevance:

10.00%

Publisher:

Abstract:

Industrial automation networks are in the spotlight and are gradually replacing the older system architectures used in the automation world. Among existing automation networks, the most prominent standard is Foundation Fieldbus (FF). This standard was chosen for this work thanks to its complete application-layer specification and its user layer, organized as function blocks, which allows interoperability among devices from different vendors. One of the most sought-after solutions in industrial automation today is indirect measurement, which consists in inferring a value from the measurements of other sensors. This can be done by implementing so-called software sensors, and artificial neural networks are among the tools most used for this purpose. The absence of a standard way to implement neural networks in the FF environment prevents the development of field indirect-measurement projects, as well as other projects involving neural networks, unless a closed proprietary solution is used, which does not guarantee interoperability among network devices, especially when they come from different vendors. To preserve interoperability, the goal of this work is to develop a solution that implements artificial neural networks in the Foundation Fieldbus industrial network environment using standard function blocks. Results of the implementation of this solution are also presented.
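
A conceptual Python sketch of the underlying idea of composing a neuron from standard-block-like units, with a weighted-sum block feeding a piecewise-linear characterizer that plays the role of the activation function; the block names and interfaces are illustrative assumptions and do not represent actual Foundation Fieldbus block types or the specific blocks chosen in this work.

```python
import numpy as np

class WeightedSumBlock:
    """Stands in for an arithmetic-style block computing a weighted sum plus bias."""
    def __init__(self, weights, bias):
        self.weights, self.bias = np.asarray(weights), bias
    def execute(self, inputs):
        return float(self.weights @ np.asarray(inputs) + self.bias)

class CharacterizerBlock:
    """Stands in for a signal-characterizer block: piecewise-linear transfer curve."""
    def __init__(self, xs, ys):
        self.xs, self.ys = xs, ys
    def execute(self, x):
        return float(np.interp(x, self.xs, self.ys))

# Piecewise-linear approximation of the logistic sigmoid as the neuron activation
xs = np.linspace(-6, 6, 13)
activation = CharacterizerBlock(xs, 1.0 / (1.0 + np.exp(-xs)))
neuron = WeightedSumBlock([0.7, -0.4], bias=0.2)

# Chaining the two blocks yields one neuron output from two sensor inputs
print(activation.execute(neuron.execute([1.0, 0.5])))
```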

Relevance:

10.00%

Publisher:

Abstract:

Recent years have seen increasing acceptance and adoption of parallel processing, both for high-performance scientific computing and for general-purpose applications. This acceptance has been driven mainly by the development of massively parallel processing (MPP) environments and of distributed computing. A point shared by distributed systems and MPP architectures is the notion of message passing, which allows communication between processes. A message-passing environment consists basically of a communication library that acts as an extension of the programming languages used to write parallel applications, such as C, C++ and Fortran. A fundamental aspect of developing parallel applications is analyzing their performance. Several metrics can be used in this analysis: execution time, efficiency in the use of processing elements, and scalability of the application with respect to an increasing number of processors or an increasing problem size. Establishing models or mechanisms for this analysis can be quite complicated, given the parameters and degrees of freedom involved in implementing a parallel application. A common alternative is the use of tools for collecting and visualizing performance data, which allow the user to identify bottlenecks and sources of inefficiency in an application. Efficient visualization requires identifying and collecting data about the execution of the application, a stage called instrumentation. This work first presents a study of the main techniques used to collect performance data, followed by a detailed analysis of the main available tools for parallel architectures of the Beowulf cluster type, running Linux on the x86 platform and using communication libraries based on MPI (Message Passing Interface), such as LAM and MPICH. The analysis is validated on parallel applications that train perceptron-type neural networks using backpropagation. The conclusions show the potential and ease of use of the analyzed tools.
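
A minimal sketch of the manual instrumentation stage described above, using mpi4py as a stand-in for the C/Fortran MPI applications these tools target; the timed computation and the gathered report are illustrative.

```python
# Manual instrumentation of an MPI code region: time compute and communication
# separately on each rank, then gather the data on rank 0 to spot load imbalance.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

t0 = MPI.Wtime()
local = np.random.rand(1_000_000).sum()       # local work to be timed
t_compute = MPI.Wtime() - t0

t0 = MPI.Wtime()
total = comm.allreduce(local, op=MPI.SUM)     # communication to be timed
t_comm = MPI.Wtime() - t0

timings = comm.gather((rank, t_compute, t_comm), root=0)
if rank == 0:
    for r, tc, tm in timings:
        print(f"rank {r}: compute {tc:.4f}s  communication {tm:.4f}s")
```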

Relevance:

10.00%

Publisher:

Abstract:

The seismic method is of extreme importance in geophysics. Mainly associated with oil exploration, this line of research attracts most of the investment in the area. The acquisition, processing and interpretation of seismic data are the stages that make up a seismic study. Seismic processing in particular focuses on producing an image that represents the geological structures in the subsurface. It has evolved significantly in recent decades, driven by the demands of the oil industry and by technological advances in hardware that brought higher storage and digital processing capabilities, enabling more sophisticated processing algorithms, such as those that make use of parallel architectures. One of the most important steps in seismic processing is imaging. Migration of seismic data is one of the techniques used for imaging, with the goal of obtaining a seismic section that represents the geological structures as accurately and faithfully as possible. The result of migration is a 2D or 3D image in which it is possible to identify faults, salt domes and other structures of interest, such as potential hydrocarbon reservoirs. However, a migration performed with quality and accuracy can be very time consuming, because of the heuristics of the mathematical algorithms and the large volume of input and output data involved; it may take days, weeks or even months of uninterrupted execution on supercomputers, representing large computational and financial costs that could make the use of these methods unfeasible. Aiming at performance improvement, this work parallelized the core of a Reverse Time Migration (RTM) algorithm using the Open Multi-Processing (OpenMP) parallel programming model, given the large computational effort required by this migration technique. Speedup and efficiency analyses were then carried out and, finally, the degree of scalability of the algorithm was assessed with respect to the technological advances expected in future processors.
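
A sketch of the explicit finite-difference time step at the core of the wave propagation performed by RTM, with the outer loop parallelized through Numba's prange as a stand-in for the OpenMP pragmas actually used in the work; the second-order stencil, constant velocity model and step count are illustrative assumptions.

```python
import numpy as np
from numba import njit, prange

@njit(parallel=True)
def acoustic_step(p_prev, p_cur, vel2_dt2_dx2):
    """One explicit finite-difference time step of the 2D acoustic wave equation."""
    nz, nx = p_cur.shape
    p_next = np.zeros_like(p_cur)
    for i in prange(1, nz - 1):            # parallel outer loop (OpenMP-like)
        for j in range(1, nx - 1):
            lap = (p_cur[i + 1, j] + p_cur[i - 1, j] +
                   p_cur[i, j + 1] + p_cur[i, j - 1] - 4.0 * p_cur[i, j])
            p_next[i, j] = 2.0 * p_cur[i, j] - p_prev[i, j] + vel2_dt2_dx2[i, j] * lap
    return p_next

# Tiny driver: constant velocity model with an initial point disturbance
nz = nx = 200
vel2_dt2_dx2 = np.full((nz, nx), 0.2)      # (v*dt/dx)^2, kept below 0.5 for stability
p_prev, p_cur = np.zeros((nz, nx)), np.zeros((nz, nx))
p_cur[nz // 2, nx // 2] = 1.0
for _ in range(100):
    p_prev, p_cur = p_cur, acoustic_step(p_prev, p_cur, vel2_dt2_dx2)
print("wavefield energy:", float(np.sum(p_cur ** 2)))
```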

Relevance:

10.00%

Publisher:

Abstract:

This paper analyzes the performance of a parallel implementation of Coupled Simulated Annealing (CSA) for the unconstrained optimization of continuous-variable problems. Parallel processing is an efficient form of information processing, with emphasis on the exploitation of simultaneous events in the execution of software. It arises primarily from the demand for high computational performance and the difficulty of increasing the speed of a single processing core. Although multicore processors are easily found nowadays, many algorithms are not yet suited to running on parallel architectures. The algorithm is characterized by a group of Simulated Annealing (SA) optimizers working together to refine the solution. Each SA optimizer runs on a single thread executed by a different processor. In the analysis of parallel performance and scalability, the following metrics were investigated: execution time; the speedup of the algorithm as the number of processors increases; and the efficiency in the use of processing elements as the size of the treated problem grows. The quality of the final solution was also verified. For the study, this paper proposes a parallel version of CSA and an equivalent serial version. Both algorithms were analyzed on 14 benchmark functions. For each function, CSA was evaluated with 2 to 24 optimizers. The results are presented and discussed in light of these metrics. The conclusions characterize CSA as a good parallel algorithm, both in the quality of its solutions and in its parallel scalability and efficiency.
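
A simplified serial sketch of the CSA idea, in which several SA optimizers share a coupling term built from the Boltzmann factors of all current solutions; in the paper each optimizer runs in its own thread, and the test function, cooling schedules and parameters below are illustrative assumptions, not those of the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def rastrigin(x):
    return 10 * x.size + np.sum(x * x - 10 * np.cos(2 * np.pi * x))

def csa(f, dim=5, m=6, iters=2000, t_gen=1.0, t_acc=10.0):
    xs = rng.uniform(-5.12, 5.12, size=(m, dim))       # one state per SA optimizer
    energies = np.array([f(x) for x in xs])
    for _ in range(iters):
        # coupling term shared by all optimizers (shifted for numerical stability)
        shifted = energies - energies.max()
        gamma = np.sum(np.exp(shifted / t_acc))
        for i in range(m):
            probe = np.clip(xs[i] + rng.standard_cauchy(dim) * t_gen, -5.12, 5.12)
            e_probe = f(probe)
            accept = np.exp(shifted[i] / t_acc) / gamma  # coupled acceptance probability
            if e_probe < energies[i] or rng.random() < accept:
                xs[i], energies[i] = probe, e_probe
        t_gen *= 0.999                                   # generation-temperature cooling
        t_acc *= 0.999                                   # acceptance-temperature cooling
    return energies.min()

print(csa(rastrigin))
```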

Relevance:

10.00%

Publisher:

Abstract:

The increasing demand for high-performance wireless communication systems has exposed the inefficiency of the current model of fixed allocation of the radio spectrum. In this context, cognitive radio appears as a more efficient alternative, providing opportunistic spectrum access with the maximum possible bandwidth. To meet these requirements, the transmitter must identify transmission opportunities and the receiver must recognize the parameters defined for the communication signal. Techniques based on cyclostationary analysis can be applied to both spectrum sensing and modulation classification, even in low signal-to-noise ratio (SNR) environments. Despite its robustness, however, one of the main disadvantages of cyclostationarity is the high computational cost of calculating its functions. This work proposes efficient architectures for obtaining cyclostationary features to be employed in both spectrum sensing and automatic modulation classification (AMC). In the context of spectrum sensing, a parallelized algorithm for extracting cyclostationary features of communication signals is presented. The performance of this feature-extractor parallelization is evaluated with speedup and parallel efficiency metrics. The spectrum sensing architecture is analyzed for several configurations of false alarm probability, SNR level and observation time, for BPSK and QPSK modulations. In the context of AMC, the reduced alpha-profile is proposed as a cyclostationary signature computed over a reduced set of cyclic frequencies. This signature is validated with a modulation classification architecture based on pattern matching. The AMC architecture is investigated in terms of correct classification rates for AM, BPSK, QPSK, MSK and FSK modulations, considering several scenarios of observation length and SNR level. The numerical performance results obtained in this work show the efficiency of the proposed architectures.
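
A minimal NumPy sketch of estimating the cyclic autocorrelation R_x^alpha(tau), the kind of cyclostationary feature these architectures extract; the BPSK-like test signal, lag and cyclic-frequency grid are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
sps, nsym = 8, 512                                  # samples per symbol, number of symbols
bits = rng.integers(0, 2, nsym) * 2 - 1
x = np.repeat(bits, sps).astype(float)              # rectangular-pulse BPSK baseband
x += 0.5 * rng.standard_normal(x.size)              # additive noise

def cyclic_autocorr(x, alpha, tau):
    """Estimate R_x^alpha(tau) = mean over n of x(n+tau) x*(n) exp(-j 2 pi alpha n)."""
    n = np.arange(x.size - tau)
    return np.mean(x[n + tau] * np.conj(x[n]) * np.exp(-2j * np.pi * alpha * n))

alphas = np.arange(0, 0.5, 1.0 / (4 * sps))         # cyclic-frequency grid (cycles/sample)
caf = np.abs([cyclic_autocorr(x, a, tau=sps // 2) for a in alphas])
print("strongest cyclic frequencies:", alphas[np.argsort(caf)[-3:]])
```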

Relevance:

10.00%

Publisher:

Abstract:

The aim of this study is to create an artificial neural network (ANN) capable of modeling the transverse elasticity modulus (E2) of unidirectional composites. To that end, we used a dataset divided into two parts, one for training and the other for testing the ANN. Three types of network architecture were developed: one with only two inputs, one with three inputs, and a third, mixed architecture combining an ANN with the Halpin-Tsai model. After training, the results demonstrate that the use of ANNs is quite promising: when compared with the Halpin-Tsai mathematical model, they yielded higher correlation coefficients and lower root mean square error values.
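
For reference, the Halpin-Tsai estimate of E2 mentioned above can be written as a short function; xi = 2 is a common choice for the transverse modulus, and the fiber/matrix properties in the example are illustrative (the abstract does not specify how the model is combined with the ANN).

```python
# Halpin-Tsai micromechanics estimate of the transverse modulus E2:
#   E2 = Em * (1 + xi*eta*Vf) / (1 - eta*Vf),  eta = (Ef/Em - 1) / (Ef/Em + xi)
def halpin_tsai_e2(e_fiber, e_matrix, v_fiber, xi=2.0):
    eta = (e_fiber / e_matrix - 1.0) / (e_fiber / e_matrix + xi)
    return e_matrix * (1.0 + xi * eta * v_fiber) / (1.0 - eta * v_fiber)

# Example: glass fiber (~74 GPa) in epoxy (~3.5 GPa) at 60% fiber volume fraction
print(halpin_tsai_e2(74.0, 3.5, 0.6), "GPa")
```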

Relevance:

10.00%

Publisher:

Abstract:

One of the current major concerns in engineering is the development of aircraft with low fuel consumption and high performance. Airfoils with a high lift coefficient and a low drag coefficient, i.e. high-efficiency airfoils, are therefore studied and designed. As efficiency increases, the aircraft's fuel consumption decreases, improving its performance. This work aims to develop a tool for designing airfoils from desired characteristics, such as the lift and drag coefficients and the maximum efficiency, using an algorithm based on an artificial neural network (ANN). A database of aerodynamic characteristics was first collected for a total of 300 airfoils using the software XFoil. Then, using MATLAB, several network architectures, both modular and hierarchical, were trained with the back-propagation algorithm and the momentum rule. Cross-validation was used for data analysis, selecting the network with the lowest root mean square (RMS) error. The best result was obtained for a hierarchical architecture with two modules and one hidden layer. The airfoils produced by that network, in the regions of lowest RMS error, were compared with the same airfoils imported into XFoil.
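
A minimal sketch of the gradient-descent-with-momentum update applied when training with back-propagation and the momentum rule; the learning rate, momentum value and layer shape are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
lr, momentum = 0.01, 0.9

w = rng.standard_normal((3, 8)) * 0.1      # e.g. 3 inputs (Cl, Cd, efficiency) -> 8 hidden units
velocity = np.zeros_like(w)

def momentum_update(w, velocity, grad):
    """One momentum step: blend the previous update direction with the new gradient."""
    velocity = momentum * velocity - lr * grad
    return w + velocity, velocity

grad = rng.standard_normal(w.shape)        # stand-in for a back-propagated gradient
w, velocity = momentum_update(w, velocity, grad)
print("update norm:", np.linalg.norm(velocity))
```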

Relevance:

10.00%

Publisher:

Abstract:

This article seeks to identify the effects of institutionalization on the production of subjectivity in a total educational institution. The research is based on the novel "O Ateneu" by Raul Pompeia, analyzed in light of Goffman's (1961-1987) theory of total institutions. It describes the moral career that the character Sergio undergoes upon entering the boarding school, recalling the vicissitudes he experiences in this institutional context: a period of adaptation, developmental crises, sexual initiations, problems of rivalry, etc. Total institutions seem to organize themselves in a characteristic way and to function autonomously. The social problems and the effects on subjectivity produced by total institutions can be understood by studying the power relations underlying these kinds of establishments. The time an individual spends as an inmate can leave deep marks on his subjectivity and takes shape as an appropriated personal theme.

Relevance:

10.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

10.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

10.00%

Publisher:

Abstract:

The control of molecular architectures has been a key factor for the use of Langmuir-Blodgett (LB) films in biosensors, especially because biomolecules can be immobilized with preserved activity. In this paper we investigated the incorporation of tyrosinase (Tyr) in mixed Langmuir films of arachidic acid (AA) and a lutetium bisphthalocyanine (LuPc2), which is confirmed by a large expansion in the surface pressure isotherm. These mixed films of AA-LuPc2 + Tyr could be transferred onto ITO and Pt electrodes, as indicated by FTIR and electrochemical measurements, and there was no need to crosslink the enzyme molecules to preserve their activity. Significantly, the activity of the immobilised Tyr was considerably higher than in previous work in the literature, which allowed Tyr-containing LB films to be used as highly sensitive voltammetric sensors to detect pyrogallol. Linear responses were found up to 400 μM, with a detection limit of 4.87 × 10⁻² μM (n = 4) and a sensitivity of 1.54 μA μM⁻¹ cm⁻². In addition, the Hill coefficient (h = 1.27) indicates cooperation with LuPc2, which also acts as a catalyst. The enhanced performance of the LB-based biosensor therefore resulted from the preserved activity of Tyr combined with the catalytic activity of LuPc2, in a strategy that can be extended to other enzymes and analytes by varying the LB film architecture.
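
As an illustration of how a Hill coefficient such as h = 1.27 is typically obtained, the sketch below fits the Hill equation to a biosensor calibration curve with scipy; the concentrations and currents are synthetic placeholders generated in the script itself, not the paper's data.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(c, i_max, k, h):
    """Current response vs. substrate concentration c (Hill equation)."""
    return i_max * c**h / (k**h + c**h)

# Synthetic placeholder calibration data (pyrogallol concentrations in uM)
c = np.array([5.0, 10, 25, 50, 100, 200, 400])
i = hill(c, i_max=60.0, k=150.0, h=1.3)
i *= 1 + 0.02 * np.random.default_rng(3).standard_normal(c.size)   # small measurement noise

popt, _ = curve_fit(hill, c, i, p0=[50.0, 100.0, 1.0])
print("fitted Imax, K, Hill coefficient h:", popt)
```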

Relevance:

10.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)