954 results for "Image processing techniques"
Abstract:
This paper proposes a new high-performance architecture for the computation of all the DCT operations adopted in the H.264/AVC and HEVC standards. In contrast to other dedicated transform cores, the presented multi-standard transform architecture is based on a fully configurable, scalable, and unified structure that is able to compute not only the forward and inverse 8×8 and 4×4 integer DCTs and the 4×4 and 2×2 Hadamard transforms defined in the H.264/AVC standard, but also the 4×4, 8×8, 16×16, and 32×32 integer transforms adopted in HEVC. Experimental results obtained with a Xilinx Virtex-7 FPGA demonstrate the superior performance and hardware efficiency of the proposed structure, which outperforms its most prominent related designs by at least 1.8 times. When integrated in a multi-core embedded system, this architecture allows the real-time computation of all the transforms mentioned above for resolutions as high as 8K Ultra High Definition Television (UHDTV, 7680×4320 @ 30 fps).
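The 4×4 integer "core" transform and the 4×4 Hadamard transform mentioned in this abstract can be sketched in software as a reference model (this is not the paper's hardware architecture; the scaling that H.264/AVC folds into quantisation is omitted here):

```python
import numpy as np

# 4x4 forward integer "core" transform matrix (Cf) from H.264/AVC, and the
# 4x4 Hadamard matrix used for the luma DC coefficients.
CF = np.array([[1,  1,  1,  1],
               [2,  1, -1, -2],
               [1, -1, -1,  1],
               [1, -2,  2, -1]], dtype=np.int64)

H4 = np.array([[1,  1,  1,  1],
               [1,  1, -1, -1],
               [1, -1, -1,  1],
               [1, -1,  1, -1]], dtype=np.int64)

def forward_core_4x4(block):
    """Forward 4x4 integer transform: W = Cf @ X @ Cf^T.
    In the real codec the post-scaling is merged into the quantiser."""
    x = np.asarray(block, dtype=np.int64)
    return CF @ x @ CF.T

def hadamard_4x4(block):
    """4x4 Hadamard transform applied to a block of DC coefficients."""
    x = np.asarray(block, dtype=np.int64)
    return H4 @ x @ H4.T

def inverse_core_4x4_float(coeffs):
    """Floating-point inverse of the core transform, for checking only;
    the standard's integer inverse uses a different, scaled matrix."""
    ci = np.linalg.inv(CF.astype(float))
    return ci @ coeffs @ ci.T
```

Since the rows of Cf are mutually orthogonal, the float inverse recovers the original block exactly, which makes the model easy to sanity-check.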
Abstract:
The ECG signal has been shown to contain relevant information for human identification. Even though results validate the potential of these signals, the data acquisition methods and apparatus explored so far compromise user acceptability, as they require acquiring the ECG at the chest. In this paper, we propose a finger-based ECG biometric system that uses signals collected at the fingers through a minimally intrusive 1-lead ECG setup, using gel-free Ag/AgCl electrodes as the interface with the skin. The collected signal is considerably noisier than ECG acquired at the chest, motivating the application of feature extraction and signal processing techniques to the problem. Time-domain ECG signal processing is performed, comprising the usual steps of filtering, peak detection, heartbeat waveform segmentation, and amplitude normalization, plus an additional step of time normalization. Using a simple minimum-distance criterion between the test patterns and the enrollment database, results reveal this to be a promising technique for biometric applications.
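The processing chain described here (peak detection, heartbeat segmentation, amplitude and time normalization, minimum-distance matching) can be sketched as follows. This is a minimal illustrative pipeline, not the paper's implementation; all thresholds, window widths, and function names are assumptions:

```python
import numpy as np

def detect_r_peaks(sig, fs, thresh_ratio=0.6):
    """Naive R-peak detector: local maxima above a fraction of the global
    maximum, with a 0.3 s refractory period (illustrative values)."""
    thresh = thresh_ratio * np.max(sig)
    refractory = int(0.3 * fs)
    peaks, last = [], -refractory
    for i in range(1, len(sig) - 1):
        if sig[i] >= thresh and sig[i] >= sig[i - 1] and sig[i] > sig[i + 1]:
            if i - last >= refractory:
                peaks.append(i)
                last = i
    return np.array(peaks)

def segment_and_normalize(sig, peaks, fs, width=0.6, n_samples=100):
    """Cut a window around each R peak, then normalise amplitude (unit
    maximum) and time (resample every beat to a fixed length)."""
    half = int(width * fs / 2)
    beats = []
    for p in peaks:
        if p - half < 0 or p + half > len(sig):
            continue                                   # skip truncated beats
        b = sig[p - half:p + half].astype(float)
        b = b / np.max(np.abs(b))                      # amplitude normalisation
        t_old = np.linspace(0, 1, len(b))
        t_new = np.linspace(0, 1, n_samples)
        beats.append(np.interp(t_new, t_old, b))       # time normalisation
    return np.array(beats)

def identify(test_beat, enrollment):
    """Minimum-distance rule: return the enrolled subject whose template
    is closest (Euclidean distance) to the test heartbeat."""
    return min(enrollment, key=lambda s: np.linalg.norm(enrollment[s] - test_beat))
```

In practice the filtering step would precede peak detection; it is omitted here for brevity.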
Abstract:
Dissertation submitted for the degree of Master in Electrical Engineering, branch of Automation and Industrial Electronics.
Abstract:
Computational vision stands as the most comprehensive way of perceiving the surrounding environment. Accordingly, this study presents a method to obtain, from a common webcam, environment information to guide a differential-drive mobile robot along a path similar to a roadway.
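One hypothetical sketch of such a vision-guided steering step: threshold a grayscale frame, take the centroid of "road" pixels in the lower part of the image, and map the lateral offset to differential wheel speeds. All function names, thresholds, and gains below are illustrative assumptions, not the study's method:

```python
import numpy as np

def road_centroid(gray, band=0.25, thresh=128):
    """Return the normalised horizontal offset (-1..1) of the road pixels
    (intensity >= thresh) in the bottom `band` fraction of the frame,
    or None when no road is visible."""
    h, w = gray.shape
    roi = gray[int(h * (1 - band)):, :]          # look near the robot only
    cols = np.where(roi >= thresh)[1]
    if cols.size == 0:
        return None
    centroid = cols.mean()
    return (centroid - (w - 1) / 2) / ((w - 1) / 2)

def wheel_speeds(offset, base=0.4, gain=0.3):
    """Differential-drive steering: a positive offset (road to the right)
    speeds up the left wheel and slows the right one."""
    return base + gain * offset, base - gain * offset
```

In a real system the frame would come from the webcam (e.g. via OpenCV) and the loop would run per frame; here a plain numpy array stands in for the image.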
Abstract:
Final Master's project, carried out at the Laboratório Nacional de Engenharia Civil (LNEC) for the degree of Master in Civil Engineering from the Instituto Superior de Engenharia de Lisboa, under the cooperation protocol between ISEL and LNEC.
Abstract:
Fiber-reinforced plastics are growing in importance as one of the most interesting groups of materials on account of their low weight, high strength, and stiffness. To obtain good-quality holes, it is important to identify the type of material, ply stacking sequence, and fiber orientation. In this article, the drilling of quasi-isotropic hybrid carbon+glass/epoxy plates is analyzed. Two commercial drills and a special step drill are compared with respect to thrust force and delamination extension. Results suggest that the proposed step drill can be a suitable option for laminate drilling.
Abstract:
In this work, a comparative study of different drill point geometries and feed rates for drilling composite laminates is presented. To this end, thrust force monitoring during drilling, hole wall roughness measurement, and delamination extension assessment after drilling are carried out. Delamination is evaluated using enhanced radiography combined with a dedicated computational platform that integrates image processing and analysis algorithms. An experimental procedure was planned and its outcomes evaluated. Results show that a careful combination of the factors involved, such as drill tip geometry and feed rate, can reduce delamination damage.
Abstract:
Supported by grants PTDC/EME–TME/66207/2006 and POSC/EEA-SRI/55386/2004.
Abstract:
Drilling of carbon fibre/epoxy laminates is usually carried out using standard drills. However, it is necessary to adapt the processes and/or tooling, as the risk of delamination, or of other damage, is high. These problems can affect the mechanical properties of the produced parts and therefore lower their reliability. In this paper, four different drills, three commercial ones and a special step (prototype) drill, are compared in terms of thrust force during drilling and delamination. In order to evaluate damage, enhanced radiography is applied. The resulting images were then computationally processed using a previously developed image processing and analysis platform. Results show that the prototype drill had encouraging results in terms of maximum thrust force and delamination reduction. Furthermore, it is possible to state that a correct choice of drill geometry, particularly the use of a pilot hole, a conservative cutting speed of 53 m/min, and a low feed rate of 0.025 mm/rev, can help to prevent delamination.
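A common way to quantify delamination from a radiographic image, in the drilling literature generally, is the delamination factor Fd = Dmax / D0, the ratio between the diameter of the damaged zone and the nominal hole diameter. The sketch below computes it from a binary damage mask centred on the hole axis; it is an illustration of the metric, not the platform developed in these papers, whose exact criteria may differ:

```python
import numpy as np

def delamination_factor(mask, hole_diameter_px):
    """Delamination factor Fd = Dmax / D0. Dmax is the diameter of the
    smallest circle, centred on the hole axis, containing every damaged
    pixel; D0 is the nominal hole diameter in pixels. `mask` is a boolean
    damage image whose centre coincides with the hole axis."""
    h, w = mask.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return 1.0                       # no damage beyond the hole itself
    r_max = np.sqrt((ys - cy) ** 2 + (xs - cx) ** 2).max()
    return max(1.0, 2.0 * r_max / hole_diameter_px)
```

An undamaged hole gives Fd = 1; larger values indicate delamination extending beyond the hole edge.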
Abstract:
The characteristics of carbon fibre reinforced laminates have widened their use from aerospace to domestic appliances, and new possibilities for their usage emerge almost daily. In many of the possible applications, the laminates need to be drilled for assembly purposes. It is known that a drilling process that reduces the drill thrust force can decrease the risk of delamination. In this work, damage assessment methods based on data extracted from radiographic images are compared and correlated with mechanical test results (bearing test and delamination onset test) and with analytical models. The results demonstrate the importance of an adequate selection of drilling tools and machining parameters to extend the life cycle of these laminates as a consequence of enhanced reliability.
Abstract:
Master's degree in Computer Engineering and Medical Instrumentation.
Abstract:
This paper proposes an FPGA-based architecture for onboard hyperspectral unmixing. The implemented method, based on Vertex Component Analysis (VCA), has several advantages: it is unsupervised, fully automatic, and works without a dimensionality reduction (DR) pre-processing step. The architecture has been designed for a low-cost Xilinx Zynq board with a Zynq-7020 SoC FPGA based on the Artix-7 FPGA programmable logic, and tested using real hyperspectral datasets. Experimental results indicate that the proposed implementation can achieve real-time processing while maintaining the method's accuracy, which indicates the potential of the proposed platform for implementing high-performance, low-cost embedded systems.
Abstract:
Hyperspectral imaging has become one of the main topics in remote sensing. Hyperspectral images comprise hundreds of spectral bands at different (almost contiguous) wavelength channels over the same area, generating large data volumes of several GB per flight. This high spectral resolution can be used for object detection and to discriminate between different objects based on their spectral characteristics. One of the main problems in hyperspectral analysis is the presence of mixed pixels, which arise when the spatial resolution of the sensor is not able to separate spectrally distinct materials. Spectral unmixing is one of the most important tasks for hyperspectral data exploitation. However, unmixing algorithms can be computationally very expensive and power-hungry, which compromises their use in applications under onboard constraints. In recent years, graphics processing units (GPUs) have evolved into highly parallel and programmable systems. Several hyperspectral imaging algorithms have been shown to benefit from this hardware, taking advantage of the extremely high floating-point performance, compact size, huge memory bandwidth, and relatively low cost of these units, which make them appealing for onboard data processing. In this paper, we propose a parallel implementation of an augmented Lagrangian-based method for unsupervised hyperspectral linear unmixing on GPUs using CUDA. The method, called simplex identification via split augmented Lagrangian (SISAL), aims to identify the endmembers of a scene, i.e., it is able to unmix hyperspectral data sets in which the pure-pixel assumption is violated. The efficient implementation of the SISAL method presented in this work exploits the GPU architecture at a low level, using shared memory and coalesced memory accesses.
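SISAL itself solves a nontrivial constrained optimisation problem and is not reproduced here. As context for the abstract, this is the linear mixing model that spectral unmixing inverts, with a simple least-squares abundance estimation given known endmembers (an illustrative baseline, not SISAL and not the paper's CUDA implementation):

```python
import numpy as np

def unmix_abundances(Y, M):
    """Estimate abundances A in the linear mixing model Y = M @ A + noise
    by ordinary least squares, followed by a crude projection onto the
    physical constraints (clip to non-negative, renormalise to sum one).
    Y: bands x pixels observation matrix; M: bands x endmembers."""
    A, *_ = np.linalg.lstsq(M, Y, rcond=None)
    A = np.clip(A, 0.0, None)                # enforce non-negativity
    s = A.sum(axis=0, keepdims=True)
    s[s == 0] = 1.0                          # avoid division by zero
    return A / s                             # enforce sum-to-one
```

Each column of A then gives the fractional contribution of every endmember to one pixel; SISAL's contribution is estimating M itself even when no pure pixel exists.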
Abstract:
Hyperspectral instruments have been incorporated in satellite missions, providing large amounts of high-spectral-resolution data of the Earth's surface. These data can be used in remote sensing applications that often require a real-time or near-real-time response. To avoid delays between hyperspectral image acquisition and its interpretation, the latter usually performed at a ground station, onboard systems have emerged to process the data, reducing the volume of information to transfer from the satellite to the ground station. For this purpose, compact reconfigurable hardware modules, such as field-programmable gate arrays (FPGAs), are widely used. This paper proposes an FPGA-based architecture for hyperspectral unmixing. The method is based on vertex component analysis (VCA) and works without a dimensionality reduction preprocessing step. The architecture has been designed for a low-cost Xilinx Zynq board with a Zynq-7020 system-on-chip based on the Artix-7 FPGA programmable logic, and tested using real hyperspectral data. Experimental results indicate that the proposed implementation can achieve real-time processing while maintaining the method's accuracy, which indicates the potential of the proposed platform to implement high-performance, low-cost embedded systems, opening perspectives for onboard hyperspectral image processing.
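The core loop of VCA can be sketched in a few lines: repeatedly project the data onto a direction orthogonal to the endmembers found so far, and take the pixel with the largest absolute projection as the next endmember. This simplified software sketch omits the SNR-dependent dimensionality projection of the full algorithm and has nothing to do with the paper's FPGA design:

```python
import numpy as np

def vca(Y, p, seed=0):
    """Simplified Vertex Component Analysis. Y: bands x pixels data matrix;
    p: number of endmembers. Returns the indices of the p pixels chosen as
    endmembers. Assumes pure pixels exist and noise is negligible."""
    rng = np.random.default_rng(seed)
    bands, n = Y.shape
    A = np.zeros((bands, p))
    A[-1, 0] = 1.0                         # auxiliary start vector e_bands
    indices = []
    for i in range(p):
        w = rng.standard_normal(bands)     # random direction
        f = w - A @ np.linalg.pinv(A) @ w  # component orthogonal to span(A)
        f /= np.linalg.norm(f)
        v = f @ Y                          # project every pixel onto f
        k = int(np.argmax(np.abs(v)))      # extreme pixel = simplex vertex
        indices.append(k)
        A[:, i] = Y[:, k]                  # add it to the found set
    return indices
```

Because a linear functional over a simplex attains its maximum at a vertex, each iteration picks a pure pixel not yet found, so with noiseless data containing pure pixels the loop recovers all p endmembers.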