974 results for Software-reconfigurable array processing architectures


Relevance:

30.00%

Publisher:

Abstract:

Blind Source Separation (BSS) refers to the problem of estimating the original signals from observed linear mixtures with no knowledge of the sources or the mixing process. Independent Component Analysis (ICA) is a technique mainly applied to the BSS problem, and among the algorithms that implement it, FastICA is a high-performance iterative algorithm of low computational cost that uses non-Gaussianity measures based on higher-order statistics to estimate the original sources. The great number of applications where ICA has been found useful reflects the need for hardware implementations of this technique, and the natural parallelism of FastICA favors its implementation on digital hardware. This work proposes an implementation of FastICA on a reconfigurable hardware platform to assess the viability of its use in blind source separation problems, more specifically in a hardware prototype embedded in a Field Programmable Gate Array (FPGA) board for the monitoring of beds in hospital environments. The implementations are carried out as Simulink models and synthesized with the DSP Builder software from Altera Corporation.
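As a point of reference for the algorithm described above, here is a minimal one-unit FastICA sketch in NumPy, assuming pre-whitened mixtures and the tanh non-linearity as the non-Gaussianity contrast. It is an illustrative software model only, not the dissertation's Simulink/FPGA design.

```python
import numpy as np

def fastica_unit(X, max_iter=200, tol=1e-6):
    """Estimate one independent component from pre-whitened mixtures X
    of shape (n_channels, n_samples) via the FastICA fixed-point rule."""
    n, _ = X.shape
    w = np.random.randn(n)
    w /= np.linalg.norm(w)
    for _ in range(max_iter):
        wx = w @ X                                    # project data onto w
        g, g_prime = np.tanh(wx), 1.0 - np.tanh(wx) ** 2
        w_new = (X * g).mean(axis=1) - g_prime.mean() * w  # fixed-point step
        w_new /= np.linalg.norm(w_new)
        if abs(abs(w_new @ w) - 1.0) < tol:           # converged up to sign
            return w_new
        w = w_new
    return w
```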

Relevance:

30.00%

Publisher:

Abstract:

This master's dissertation presents a study of the aspects that govern the application of adaptive arrays in DS-CDMA cellular systems. Basic concepts of cellular systems and their evolution over time are reviewed, mainly the CDMA technique, with emphasis on spreading codes and operating principles. The mobile radio environment, with its particular characteristics, is then addressed, along with the basic concepts of adaptive arrays as powerful spatial filters. Adaptive algorithms, which form part of the signal processing chain and are responsible for the weight updates that directly shape the radiation pattern of the array, are also introduced. The study is based on a numerical analysis of the behavior of adaptive array systems with respect to the antenna and array geometry types used. All simulations were performed with the Mathematica 4.0 software. The analysis rests on results for weight convergence, mean square error, gain, array pattern, and suppression capability, using the RLS (supervised) and LSDRMTA (blind) algorithms.
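For illustration, a hedged NumPy sketch of the supervised RLS weight update mentioned above, assuming complex array snapshots and a known reference signal; the variable names, initialization, and forgetting factor are illustrative, not the dissertation's Mathematica code.

```python
import numpy as np

def rls_beamformer(snapshots, reference, lam=0.99, delta=1e-2):
    """RLS update of beamformer weights w so that w^H x tracks reference d.
    snapshots: (n_elements, n_snapshots) complex array outputs."""
    n = snapshots.shape[0]
    w = np.zeros(n, dtype=complex)
    P = np.eye(n, dtype=complex) / delta         # inverse correlation estimate
    for x, d in zip(snapshots.T, reference):
        Px = P @ x
        k = Px / (lam + np.conj(x) @ Px)         # gain vector
        e = d - np.conj(w) @ x                   # a priori error
        w = w + k * np.conj(e)                   # weight update
        P = (P - np.outer(k, np.conj(x) @ P)) / lam
    return w
```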

Relevance:

30.00%

Publisher:

Abstract:

This paper analyzes the performance of a parallel implementation of Coupled Simulated Annealing (CSA) for the unconstrained optimization of continuous-variable problems. Parallel processing is an efficient form of information processing that emphasizes the exploitation of simultaneous events in the execution of software. It arises primarily from the demand for high computational performance and the difficulty of increasing the speed of a single processing core. Although multicore processors are easily found nowadays, several algorithms are not yet suitable for running on parallel architectures. The algorithm is characterized by a group of Simulated Annealing (SA) optimizers working together on refining the solution, each running on a single thread executed by a different processor. In the analysis of parallel performance and scalability, the following metrics were investigated: the execution time; the speedup of the algorithm with respect to the number of processors; and the efficiency of use of the processing elements with respect to the size of the treated problem. Furthermore, the quality of the final solution was verified. For this study, the paper proposes a parallel version of CSA and its equivalent serial version. Both algorithms were analyzed on 14 benchmark functions, and for each of these functions the CSA was evaluated using 2 to 24 optimizers. The results are presented and discussed in light of these metrics, and the conclusions characterize CSA as a good parallel algorithm, in the quality of its solutions as well as in its parallel scalability and parallel efficiency.
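A simplified serial sketch of the coupled-acceptance idea follows, assuming the CSA variant in which each SA optimizer accepts uphill moves with a probability normalized over the whole ensemble. The cooling schedule, Cauchy move generator, and parameter names are illustrative; in the paper each optimizer runs on its own thread, whereas here the sweep is sequential.

```python
import numpy as np

def csa(cost, dim, n_opt=8, iters=5000, t_acc=1.0):
    rng = np.random.default_rng(0)
    x = rng.uniform(-5.0, 5.0, size=(n_opt, dim))  # one state per optimizer
    e = np.array([cost(xi) for xi in x])
    for k in range(iters):
        t_gen = 1.0 / (k + 1)                      # illustrative cooling
        terms = np.exp((e - e.max()) / t_acc)      # coupling, once per sweep
        gamma = terms.sum()
        for i in range(n_opt):                     # one thread each, in the paper
            probe = x[i] + t_gen * rng.standard_cauchy(dim)
            e_new = cost(probe)
            if e_new < e[i] or rng.random() < terms[i] / gamma:
                x[i], e[i] = probe, e_new
    best = int(e.argmin())
    return x[best], e[best]

# Usage: minimize a 5-dimensional sphere function.
x_best, e_best = csa(lambda v: float((v ** 2).sum()), dim=5)
```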

Relevance:

30.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

30.00%

Publisher:

Abstract:

Motion estimation is the step mainly responsible for data reduction in digital video encoding; it is also the most computationally demanding one. H.264 is the newest standard for video compression and was planned to double the compression ratio achieved by previous standards. It was developed by the ITU-T Video Coding Experts Group (VCEG) together with the ISO/IEC Moving Picture Experts Group (MPEG) as the product of a partnership effort known as the Joint Video Team (JVT). H.264 introduces novelties that improve motion estimation efficiency, such as the adoption of variable block sizes, quarter-pixel precision, and multiple reference frames. This work defines a hardware/software architecture for motion estimation using a full-search algorithm, variable block sizes, and mode decision. It considers the use of reconfigurable devices, soft processors, and development tools for embedded systems such as Quartus II, SOPC Builder, Nios II, and ModelSim.
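A minimal software model of the full-search algorithm named above: for one block of the current frame, scan every candidate displacement in a search window of the reference frame and keep the motion vector with the lowest sum of absolute differences (SAD). The block size and search radius are illustrative; the work itself targets a hardware design.

```python
import numpy as np

def full_search(cur, ref, by, bx, block=16, radius=8):
    """Full-search block matching: return the best (dy, dx) and its SAD."""
    target = cur[by:by + block, bx:bx + block].astype(np.int32)
    best_sad, best_mv = np.inf, (0, 0)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + block > ref.shape[0] or x + block > ref.shape[1]:
                continue                           # candidate leaves the frame
            cand = ref[y:y + block, x:x + block].astype(np.int32)
            sad = np.abs(target - cand).sum()      # sum of absolute differences
            if sad < best_sad:
                best_sad, best_mv = sad, (dy, dx)
    return best_mv, best_sad
```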

Relevance:

30.00%

Publisher:

Abstract:

The increasing demand for processing power in recent years has pushed the integrated circuit industry to look for ways of providing ever more processing power with less heat dissipation, power consumption, and chip area. This goal was long pursued by increasing the circuit clock, but since that approach faces physical limits, a new solution has emerged: the multiprocessor system-on-chip (MPSoC). This approach demands new tools and basic software infrastructure to take advantage of the inherent parallelism of these architectures. One of the first activities of the oil exploration industry is the decision on developing oil fields; these decisions are aided by reservoir simulations demanding high processing power, for which the MPSoC may offer greater performance if its parallelism can be well used. This work presents a proposal for a micro-kernel operating system and auxiliary libraries aimed at the STORM MPSoC platform, analyzing their influence on the reservoir simulation problem.

Relevance:

30.00%

Publisher:

Abstract:

Purpose: The purpose of this study was to compare the artificial tooth positional changes following the flasking and polymerization of complete dentures by a combination of two flasking methods and two polymerization techniques, using computer graphic measurements.Materials and Methods: Four groups of waxed complete dentures (n = 10) were invested and polymerized using the following techniques: (1) adding a second investment layer of gypsum and conventional water bath polymerization (Control), (2) adding a second investment layer of gypsum and polymerization with microwave energy (Gyp-micro), (3) adding a second investment layer of silicone (Zetalabor) and conventional polymerization (Silwater), and (4) adding a second investment layer of silicone and polymerization with microwave energy (Silmicro). For each specimen, six segments of interdental distances (A to F) were measured to determine the artificial tooth positions in the waxed and polymerized stages using the AutoCAD R14 software. The mean values of the changes were statistically compared by univariate ANOVA with Tukey post hoc test at 5% significance.Results: There were no significant differences among the four groups, except for segment D of the Silmicro group (-0.004 ± 0.032 cm) in relation to the Gypwater (Control) group (0.044 ± 0.031 cm) (p < 0.05), which presented, respectively, expansion and shrinkage after polymerization.Conclusions: Within the limitations of this study, it was concluded that although the differences were not statistically significant, the use of a silicone investment layer when flasking complete dentures resulted in the least positional changes of the artificial teeth, regardless of the polymerization technique.

Relevance:

30.00%

Publisher:

Abstract:

Objective: The purpose of this study was to compare the dental movement that occurs during the processing of maxillary complete dentures with 3 different base thicknesses, using 2 investment methods and microwave polymerization.Methods: A sample of 42 denture models was randomly divided into 6 groups (n = 7), with base thicknesses of 1.25, 2.50, and 3.75 mm and gypsum or silicone flask investment. Points were demarcated on the distal surface of the second molars and on the back of the gypsum cast at the alveolar ridge level to allow linear and angular measurement using the AutoCAD software. The data were subjected to two-factor analysis of variance with Tukey and Fisher post hoc tests.Results: Angular analysis of the varying methods and their interactions revealed a statistical difference (P = 0.023) when the magnitudes of molar inclination were compared. Tooth movement was greater for thin-based prostheses, 1.25 mm (-0.234), than for thick-based ones, 3.75 mm (0.2395), with antagonistic behavior. Prosthesis investment with silicone (0.053) showed greater vertical change compared with the gypsum investment (0.032). There was a difference between the points of analysis, demonstrating that the changes were not symmetric.Conclusions: All groups evaluated showed change in the position of artificial teeth after processing. The complete denture with a thin base (1.25 mm) and silicone investment showed the worst results, whereas the intermediate thickness (2.50 mm) was demonstrated to be ideal for the denture base.

Relevance:

30.00%

Publisher:

Abstract:

This paper describes an electronic device conceived to convert common web texts into sequences of corresponding Braille signals, which are immediately reproduced on an array (keyboard) of electromechanical actuators. These actuators are reconfigurable in real time, displaying the Braille characters as matrices of points composed of small stems that can be lowered or raised according to the Braille code. The device, together with its conversion software package, can provide direct access to web texts on any personal computer, thus avoiding the use of complicated Braille printers.
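A hedged sketch of the text-to-Braille conversion step: map each character to a 3x2 dot matrix (1 = raised stem, 0 = lowered) that an actuator driver could reproduce. The dot patterns below follow standard Braille for a few letters only; the device's full character set and its driver interface are assumptions.

```python
# Standard Braille cells for a few letters, as 3 rows x 2 columns of dots.
BRAILLE = {
    "a": ((1, 0), (0, 0), (0, 0)),
    "b": ((1, 0), (1, 0), (0, 0)),
    "c": ((1, 1), (0, 0), (0, 0)),
    " ": ((0, 0), (0, 0), (0, 0)),
}

def text_to_cells(text):
    """Convert text into a sequence of 3x2 Braille dot matrices."""
    return [BRAILLE[ch] for ch in text.lower() if ch in BRAILLE]

# Print each cell, 'o' for a raised stem and '.' for a lowered one.
for cell in text_to_cells("abc"):
    for row in cell:
        print("".join("o" if dot else "." for dot in row))
    print()
```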

Relevance:

30.00%

Publisher:

Abstract:

The Compact Muon Solenoid (CMS) detector is described. The detector operates at the Large Hadron Collider (LHC) at CERN. It was conceived to study proton-proton (and lead-lead) collisions at a centre-of-mass energy of 14 TeV (5.5 TeV nucleon-nucleon) and at luminosities up to 10^34 cm^-2 s^-1 (10^27 cm^-2 s^-1). At the core of the CMS detector sits a high-magnetic-field and large-bore superconducting solenoid surrounding an all-silicon pixel and strip tracker, a lead-tungstate scintillating-crystals electromagnetic calorimeter, and a brass-scintillator sampling hadron calorimeter. The iron yoke of the flux-return is instrumented with four stations of muon detectors covering most of the 4π solid angle. Forward sampling calorimeters extend the pseudorapidity coverage to high values (|η| ≤ 5), assuring very good hermeticity. The overall dimensions of the CMS detector are a length of 21.6 m, a diameter of 14.6 m and a total weight of 12500 t.

Relevance:

30.00%

Publisher:

Abstract:

This work presents the design of a fuzzy controller with a simplified architecture that uses an artificial neural network as the aggregation operator for the several active fuzzy rules. The simplified architecture is used to minimize the processing time in closed-loop operation: the basic fuzzification procedures are simplified as much as possible, while all the inference procedures are computed in advance. As a consequence, this simplified architecture allows fast and easy configuration of the controller. The structure of the fuzzy rules that define the control actions is previously computed using an artificial neural network based on the CMAC (Cerebellar Model Articulation Controller). The operational limits are standardized, and all the control actions are previously calculated and stored in memory. Several configurations of this fuzzy controller are considered in the applications, results, and conclusions.
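To make the "precomputed actions stored in memory" idea concrete, here is a minimal generic CMAC sketch for a 1-D input, quantized into overlapping receptive fields whose table weights are summed to produce an output. The field count, resolution, and training rule are illustrative assumptions, not the paper's controller.

```python
import numpy as np

class CMAC:
    def __init__(self, n_fields=4, n_cells=64, lo=0.0, hi=1.0):
        self.n_fields, self.n_cells = n_fields, n_cells
        self.lo, self.hi = lo, hi
        self.w = np.zeros((n_fields, n_cells))   # one weight table per layer

    def _active(self, x):
        """Quantize x into one active cell per overlapping layer."""
        q = (x - self.lo) / (self.hi - self.lo) * (self.n_cells - self.n_fields)
        return [(int(q) + k) % self.n_cells for k in range(self.n_fields)]

    def predict(self, x):
        # Output is the sum of the weights of the active cells.
        return sum(self.w[k, c] for k, c in enumerate(self._active(x)))

    def train(self, x, target, lr=0.5):
        # Distribute the output error equally among the active cells.
        err = target - self.predict(x)
        for k, c in enumerate(self._active(x)):
            self.w[k, c] += lr * err / self.n_fields
```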

Relevance:

30.00%

Publisher:

Abstract:

This work presents a software package developed to process solar radiation data. The software can be used in meteorological and climatic stations, and also as a support for solar radiation measurements in research on solar energy availability, allowing data quality control, statistical calculations and validation of models, as well as easy interchange of data. (C) 1999 Elsevier B.V. Ltd. All rights reserved.

Relevance:

30.00%

Publisher:

Abstract:

This paper addresses the problem of processing biological signals such as cardiac beats and signals in the audio and ultrasonic ranges, calculating wavelet coefficients in real time with the processor clock running at the frequencies of present-day ASICs and FPGAs. The Parallel Filter Architecture for the DWT has been improved, calculating wavelet coefficients in real time with the hardware reduced to 60%. The new architecture, which also processes the IDWT, is implemented with the Radix-2 or the Booth-Wallace constant multipliers. Including serial memory register banks, an integrated-circuit Signal Analyzer for the ultrasonic range is presented.
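As a plain software counterpart to the architecture above, a one-level DWT filter-bank sketch in NumPy using the Daubechies db2 filter pair: filter with the low-pass and high-pass filters, then downsample by two. This models only the coefficient computation, not the paper's parallel hardware or its multiplier structures; boundary handling and sample-alignment conventions vary between tools.

```python
import numpy as np

# Daubechies db2 (D4) analysis filter pair
LO = np.array([0.48296, 0.83652, 0.22414, -0.12941])    # low-pass (scaling)
HI = np.array([-0.12941, -0.22414, 0.83652, -0.48296])  # high-pass (wavelet)

def dwt_level(x):
    """One DWT level: filter, then keep every other sample."""
    approx = np.convolve(x, LO)[1::2]   # coarse (approximation) coefficients
    detail = np.convolve(x, HI)[1::2]   # detail coefficients
    return approx, detail
```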

Relevance:

30.00%

Publisher:

Abstract:

Most architectures proposed for developing Distributed Virtual Environments (DVEs) support only a limited number of users. To enable the development of applications over the internet infrastructure, with hundreds or perhaps thousands of users logged in simultaneously on a DVE, several techniques for managing resources, such as bandwidth and processing capacity, must be implemented. The strategy presented in this paper combines methods to attain the required scalability, in particular an application-level multicast protocol.
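A hedged sketch of the application-level multicast idea: each DVE node relays state updates to its children in an overlay tree, so the source sends each update only once rather than once per user. The overlay topology management and the actual network transport are assumptions, not the paper's protocol.

```python
class OverlayNode:
    def __init__(self, name):
        self.name = name
        self.children = []

    def attach(self, child):
        self.children.append(child)

    def deliver(self, update):
        print(f"{self.name} received {update!r}")  # apply to local world state
        for child in self.children:                # forward down the overlay tree
            child.deliver(update)

# A tiny overlay: the server sends one message; nodes relay it.
root = OverlayNode("server")
a, b, c = OverlayNode("a"), OverlayNode("b"), OverlayNode("c")
root.attach(a); a.attach(b); a.attach(c)
root.deliver("avatar-moved")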

Relevance:

30.00%

Publisher:

Abstract:

With the fast pace of innovation in hardware and software technologies based on rapid prototyping devices, with applications in robotics and automation, it becomes increasingly necessary to develop applications using methodologies that facilitate future modifications, updates, and enhancements of the originally designed system. This paper presents a mobile robot design based on rapid prototyping, distributing the several control actions across increasing levels of complexity and using reconfigurable computing resources oriented toward embedded system implementation. Software and hardware are structured as independent blocks connected through a common bus. New control structures that maintain good performance under parameter variations are studied and applied. This kind of controller can be tested on different platforms representing wheeled mobile robots using reprogrammable logic components (FPGAs). © 2006 IEEE.