Abstract:
Pectus excavatum is the most common deformity of the thorax. Pre-operative diagnosis usually includes Computed Tomography (CT) so that a thoracic prosthesis for anterior chest wall remodeling can be successfully employed. Aiming at the elimination of radiation exposure, this paper presents a novel methodology for replacing CT with a 3D laser scanner (radiation-free) for prosthesis modeling. The complete elimination of CT rests on an accurate determination of rib positions and of the prosthesis placement region from skin surface points alone. The developed solution resorts to a normalized and combined outcome of a set of artificial neural networks (ANNs). Each ANN model was trained with data vectors from 165 male patients, using soft tissue thicknesses (STT) comprising information from the skin and rib cage (automatically determined by image processing algorithms). Tests revealed that rib positions for prosthesis placement and modeling can be estimated with an average error of 5.0 ± 3.6 mm. It was also shown that ANN performance can be improved by introducing a manually determined initial STT value into the ANN normalization procedure (average error of 2.82 ± 0.76 mm). This error range is well below that of current manual prosthesis modeling (approximately 11 mm), so the methodology can provide a valuable, radiation-free procedure for prosthesis personalization.
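As a rough illustration of the kind of ANN regression described above, the sketch below fits a small multilayer perceptron to synthetic skin-geometry features and normalizes the targets by a manually supplied initial STT value. The feature layout, network size, and normalization scheme are assumptions for illustration, not the paper's actual configuration.

```python
# Minimal sketch of ANN-based soft tissue thickness (STT) regression on
# synthetic data; all sizes and the normalization scheme are illustrative.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Hypothetical training set: skin-surface geometry features -> STT (mm).
X = rng.uniform(0.0, 1.0, size=(165, 8))           # 8 illustrative features
y = 10.0 + 5.0 * X[:, 0] + rng.normal(0, 1, 165)   # synthetic STT targets

# Normalization using a manually measured initial STT value, since the paper
# reports this improves accuracy (the exact scheme here is an assumption).
stt0 = 12.0                                        # hypothetical initial STT (mm)
Xn = X / X.max(axis=0)
yn = y / stt0

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
model.fit(Xn, yn)

pred_mm = model.predict(Xn[:5]) * stt0             # de-normalize back to mm
print(pred_mm)
```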
Abstract:
The Wyner-Ziv video coding (WZVC) rate-distortion performance is highly dependent on the quality of the side information, an estimation of the original frame created at the decoder. This paper characterizes WZVC efficiency when motion-compensated frame interpolation (MCFI) techniques are used to generate the side information, a difficult problem in WZVC, especially because the decoder only has some reference decoded frames available. The proposed WZVC compression efficiency rate model relates the power spectral density of the estimation error to the accuracy of the MCFI motion field. Some interesting conclusions may then be derived regarding the impact of motion field smoothness, and of its correlation to the true motion trajectories, on compression performance.
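For intuition, an illustrative form of this kind of spectral rate model (not necessarily the paper's exact expression) is the classical reverse water-filling result for a stationary Gaussian estimation error with power spectral density \Phi_e:

```latex
\[
R(D) \;=\; \frac{1}{8\pi^{2}} \iint
  \max\!\left(0,\; \log_{2}\frac{\Phi_e(\omega_x,\omega_y)}{\theta}\right)
  \, d\omega_x \, d\omega_y ,
\]
```

where the water level \theta is chosen so that the total distortion equals D. A less accurate MCFI motion field raises \Phi_e and hence the rate required at a given distortion, which is the qualitative dependence the model captures.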
Abstract:
Wyner-Ziv (WZ) video coding is a particular case of distributed video coding, the recent video coding paradigm based on the Slepian-Wolf and Wyner-Ziv theorems that exploits the source correlation at the decoder and not at the encoder, as in predictive video coding. Although many improvements have been made over the last years, the performance of state-of-the-art WZ video codecs has still not reached that of state-of-the-art predictive video codecs, especially for high and complex motion video content. This is also true in terms of subjective image quality, mainly because of the considerable amount of blocking artefacts present in the decoded WZ video frames. This paper proposes an adaptive deblocking filter to improve both the subjective and objective quality of the WZ frames in a transform-domain WZ video codec. The proposed filter is an adaptation of the advanced deblocking filter defined in the H.264/AVC (advanced video coding) standard to a WZ video codec. The results obtained confirm the subjective quality improvement, with objective quality gains of up to 0.63 dB overall for sequences with high motion content when large groups of pictures are used.
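To make the deblocking idea concrete, here is a minimal H.264/AVC-style filtering step applied to one vertical block boundary. The thresholds (alpha, beta) and the averaging rule are simplified illustrations of the general mechanism, not the adaptive filter proposed in the paper.

```python
# Toy deblocking step on vertical 4x4-block edges: smooth across a boundary
# only where the discontinuity looks like a coding artefact, not a true edge.
import numpy as np

def deblock_vertical_edge(img, x, alpha=10, beta=3):
    """Conditionally smooth pixels across the vertical boundary at column x."""
    p1, p0 = img[:, x - 2].astype(int), img[:, x - 1].astype(int)
    q0, q1 = img[:, x].astype(int), img[:, x + 1].astype(int)
    # Small step across the edge and flat neighbourhoods -> likely an artefact.
    mask = (np.abs(p0 - q0) < alpha) & (np.abs(p1 - p0) < beta) & (np.abs(q1 - q0) < beta)
    img[:, x - 1] = np.where(mask, (2 * p0 + q0 + p1 + 2) // 4, img[:, x - 1])
    img[:, x]     = np.where(mask, (2 * q0 + p0 + q1 + 2) // 4, img[:, x])
    return img

frame = np.random.randint(0, 256, (16, 16)).astype(np.uint8)
for col in range(4, 16, 4):              # vertical edges of a 4x4 block grid
    deblock_vertical_edge(frame, col)
```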
Abstract:
Wyner-Ziv (WZ) video coding is a particular case of distributed video coding (DVC), the recent video coding paradigm based on the Slepian-Wolf and Wyner-Ziv theorems, which exploits the source temporal correlation at the decoder and not at the encoder, as in predictive video coding. Although some progress has been made in recent years, WZ video coding is still far from the compression performance of predictive video coding, especially for high and complex motion content. The WZ video codec adopted in this study is based on a transform-domain WZ video coding architecture with feedback channel-driven rate control, whose modules have been improved with some recent coding tools. This study proposes a novel motion learning approach that successively improves the rate-distortion (RD) performance of the WZ video codec as decoding proceeds, making use of the already decoded transform bands to improve the decoding process for the remaining transform bands. The results obtained reveal gains of up to 2.3 dB in the RD curves against the same codec without the proposed motion learning approach, for high motion sequences and long group of pictures (GOP) sizes.
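A control-flow sketch of the successive motion learning idea follows: each decoded transform band is used to re-estimate the motion field and rebuild the side information for the bands still to be decoded. Every function below is a simplified stand-in for the codec's real module (motion estimation, motion-compensated interpolation, Slepian-Wolf decoding), not the actual implementation.

```python
# Sketch of decoder-side motion learning across transform bands; all functions
# are placeholders that only show where each real module would plug in.
import numpy as np

def refine_motion(decoded_so_far, reference):
    # Real codec: re-run motion estimation using the already decoded bands.
    return np.zeros((*reference.shape, 2))       # placeholder zero motion field

def build_side_info(reference, motion):
    # Real codec: motion-compensated interpolation of reference frames.
    return reference

def decode_band(band_index, side_info):
    # Real codec: feedback-channel-driven Slepian-Wolf decoding of one band.
    return side_info * 0.0                       # placeholder correction

reference = np.zeros((8, 8))
decoded = np.zeros((8, 8))
for band in range(4):                            # low- to high-frequency bands
    motion = refine_motion(decoded, reference)   # learn from decoded bands
    side_info = build_side_info(reference, motion)
    decoded += decode_band(band, side_info)      # better SI for later bands
```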
Abstract:
The use of iris recognition for human authentication has been spreading in recent years. Daugman proposed a method for iris recognition composed of four stages: segmentation, normalization, feature extraction, and matching. In this paper we propose some modifications and extensions to Daugman's method to cope with noisy images. These modifications are proposed after a study of images from the CASIA and UBIRIS databases. The major modification is to the computationally demanding segmentation stage, for which we propose a faster and equally accurate template matching approach. The extensions to the algorithm address the important issue of pre-processing, which depends on the image database and is mandatory when a non-infrared camera, such as a typical webcam, is used. For this scenario, we propose methods for reflection removal and for pupil enhancement and isolation. The tests, carried out by our C# application on grayscale CASIA and UBIRIS images, show that the template matching segmentation method is more accurate and faster than the previous one for noisy images. The proposed algorithms are found to be efficient and necessary when dealing with non-infrared images and non-uniform illumination.
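As a flavour of what template-matching segmentation looks like, the sketch below slides a dark-disc template over a grayscale image and keeps the best match as the pupil centre. The template radius and the scoring rule are illustrative choices, not the method used in the paper.

```python
# Toy template matching for pupil localization: the pupil is the darkest
# roughly circular region, so the disc with the lowest summed intensity wins.
import numpy as np

def disc_template(radius):
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    return (x**2 + y**2 <= radius**2).astype(float)   # 1 inside the disc

def find_pupil(gray, radius=8):
    t = disc_template(radius)
    h, w = t.shape
    best, best_xy = np.inf, (0, 0)
    for i in range(gray.shape[0] - h):
        for j in range(gray.shape[1] - w):
            score = np.sum(t * gray[i:i + h, j:j + w])  # dark pupil -> low score
            if score < best:
                best, best_xy = score, (i + radius, j + radius)
    return best_xy                                      # estimated pupil centre

img = np.full((64, 64), 200.0)
img[20:36, 30:46] = 20.0                                # synthetic dark pupil
print(find_pupil(img))
```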
Abstract:
A two-terminal optically addressed image processing device based on two stacked sensing/switching p-i-n a-SiC:H diodes is presented. The charge packets are injected optically into the p-i-n sensing photodiode and confined at the illuminated regions, locally changing the electric field profile across the p-i-n switching diode. A red scanner is used for charge readout. The various design parameters and addressing architecture trade-offs are discussed. The influence on the transfer functions of an a-SiC:H sensing absorber optimized for red transmittance and blue collection, or of a floating anode in between, is analysed. Results show that the thin a-SiC:H sensing absorber confines the readout to the switching diode and filters the light, allowing full colour detection at two appropriate voltages. When the floating anode is used, the spectral response broadens, allowing B&W image recognition with improved light-to-dark sensitivity. A physical model supports the image and colour recognition process.
Abstract:
The nanofiltration process for the treatment/valorisation of cork processing wastewaters was studied. A DS-5 DK 20/40 (GE Water Technologies) nanofiltration membrane/module was used, with 2.09 m² of surface area. The hydraulic permeability, determined with pure water, was 5.2 L·h⁻¹·m⁻²·bar⁻¹. The membrane presents rejections of 51% and 99% for NaCl and MgSO4 salts, respectively. Two different regimes were used in the wastewater filtration process: total recycling mode and concentration mode. The first filtration regime showed that the most favourable working transmembrane pressure was 7 bar at 25 °C. In the concentration mode experiments, a 30% decline of the permeate fluxes was observed when a volumetric concentration factor of 5 was reached. The permeate COD, BOD5, colour and TOC rejection values remained well above 90%, which therefore allows the concentration of organic matter (namely the tannin fraction) in the concentrate stream, which can be further used by other industries. The permeate characterization showed that it cannot be directly discharged to the environment, as it does not meet the values of the Portuguese discharge legislation. However, the permeate stream can be recycled to the process (boiling tanks) as it presents no colour and low TOC (< 60 ppm); alternatively, if wastewater discharge is envisaged, the permeate biodegradability was observed to be higher than 0.5, which renders conventional wastewater treatments feasible.
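As a back-of-envelope check using the reported values, and assuming flux scales linearly with pressure via the pure-water permeability (real wastewater fluxes will be lower due to fouling and osmotic effects):

```latex
\[
J_w = L_p \,\Delta P
    = 5.2\ \mathrm{L\,h^{-1}\,m^{-2}\,bar^{-1}} \times 7\ \mathrm{bar}
    \approx 36.4\ \mathrm{L\,h^{-1}\,m^{-2}},
\qquad
Q_p = J_w A \approx 36.4 \times 2.09 \approx 76\ \mathrm{L\,h^{-1}}.
\]
```

The observed 30% flux decline at a volumetric concentration factor of 5 would bring the wastewater permeate flow correspondingly below this pure-water ceiling.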
Abstract:
The rapid growth in genetics and molecular biology, combined with the development of techniques for genetically engineering small animals, has led to increased interest in in vivo small animal imaging. Small animal imaging is frequently applied to mice and rats, which are ubiquitous in modeling human diseases and testing treatments. The use of PET in small animals allows subjects to be used as their own controls, reducing inter-animal variability. This allows longitudinal studies to be performed on the same animal and improves the accuracy of biological models. However, small animal PET still suffers from several limitations: the amounts of radiotracer needed, limited scanner sensitivity, image resolution, and image quantification issues could all clearly benefit from additional research. Because nuclear medicine imaging deals with radioactive decay, the emission of radiation energy through photons and particles, and the detection of these quanta and particles in different materials, the Monte Carlo method is an important simulation tool in both nuclear medicine research and clinical practice. In order to optimize the quantitative use of PET in clinical practice, data- and image-processing methods are also a field of intense interest and development. The evaluation of such methods often relies on simulated data and images, since these offer control of the ground truth. Monte Carlo simulations are widely used for PET simulation since they take into account all the random processes involved in PET imaging, from the emission of the positron to the detection of the photons by the detectors. Simulation techniques have become an important and indispensable complement to a wide range of problems that could not be addressed by experimental or analytical approaches.
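To show the flavour of such simulations, the toy Monte Carlo below samples annihilation events that emit two back-to-back photons at a random angle and records where each line of response meets an idealized 2D detector ring. Omitting attenuation, scatter, positron range, and detector response is a deliberate simplification of what full simulation codes model.

```python
# Toy PET Monte Carlo: back-to-back photon pairs from a source near the
# centre of a detector ring; no attenuation, scatter, or detector blurring.
import numpy as np

rng = np.random.default_rng(1)
R = 40.0                                       # detector ring radius (cm)
src = rng.normal(0.0, 1.0, size=(10000, 2))    # annihilation points (cm)

theta = rng.uniform(0.0, np.pi, size=len(src))
d = np.stack([np.cos(theta), np.sin(theta)], axis=1)   # pair emission direction

# Intersect each photon line with the ring: |src + t*d| = R (two roots).
b = np.einsum('ij,ij->i', src, d)
t = np.sqrt(b**2 - (np.einsum('ij,ij->i', src, src) - R**2))
hit1 = src + (t - b)[:, None] * d              # forward photon detection point
hit2 = src - (t + b)[:, None] * d              # backward photon detection point

print("mean line-of-response length:",
      np.linalg.norm(hit1 - hit2, axis=1).mean())   # ~2R for a central source
```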
Abstract:
7th Mediterranean Conference on Information Systems, MCIS 2012, Guimaraes, Portugal, September 8-10, 2012, Proceedings Series: Lecture Notes in Business Information Processing, Vol. 129
Abstract:
Cork processing wastewater is a complex aqueous mixture of organic compounds that are extracted from cork planks during the boiling process. These compounds, such as polysaccharides and polyphenols, have different biodegradability rates, which depend not only on the nature of the compound but also on its size. The aim of this study is to determine the biochemical oxygen demands (BOD) and biodegradation rate constants (k) for different cork wastewater fractions with different organic matter characteristics. These wastewater fractions were obtained using membrane separation processes, namely nanofiltration (NF) and ultrafiltration (UF). The nanofiltration and ultrafiltration membrane molecular weight cut-offs (MWCO) ranged from 0.125 to 91 kDa. The results showed that the biodegradation rate constant for the cork processing wastewater was around 0.3 d⁻¹, while the k values for the permeates varied between 0.27 and 0.72 d⁻¹, with the lower values observed for permeates generated by the membranes with higher MWCO and the higher values for permeates generated by the membranes with lower MWCO. These higher k values indicate that the biodegradable organic matter permeated by the membranes with tighter MWCO is more readily biodegraded.
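For context, first-order BOD kinetics is the usual model behind such k values: BOD exerted by time t is BOD_t = BOD_u(1 - e^{-kt}), with BOD_u the ultimate demand. Using the reported extremes over a standard 5-day test:

```latex
\[
\frac{\mathrm{BOD}_5}{\mathrm{BOD}_u} = 1 - e^{-kt}
\;\Rightarrow\;
1 - e^{-0.3 \times 5} \approx 0.78,
\qquad
1 - e^{-0.72 \times 5} \approx 0.97,
\]
```

i.e. the permeates from the tighter-MWCO membranes are almost fully degraded within five days, consistent with their organic matter being more readily biodegradable.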
Abstract:
A definition of medium voltage (MV) load diagrams was made, based on the knowledge discovery in databases process. Clustering techniques were used to support agents in the electric power retail markets in obtaining specific knowledge of their customers' consumption habits. Each customer class resulting from the clustering operation is represented by its load diagram. The Two-Step clustering algorithm and the WEACS approach based on evidence accumulation (EAC) were applied to electricity consumption data from a utility client database in order to form the customer classes and to find a set of representative consumption patterns. The WEACS approach is a clustering ensemble combination approach that uses subsampling and weights the partitions in the co-association matrix differently. As a complementary step to the WEACS approach, all the final data partitions produced by the different variations of the method are combined and the Ward link algorithm is used to obtain the final data partition. Experimental results showed that the WEACS approach led to better accuracy than many other clustering approaches. In this paper, the WEACS approach separates the customer population better than the Two-Step clustering algorithm.
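The sketch below shows the plain evidence accumulation (EAC) backbone of such an ensemble: many k-means partitions are combined into a co-association matrix, which is then cut with a Ward-linkage tree. The subsampling and per-partition weighting that distinguish WEACS are omitted, and synthetic data stands in for real consumption profiles.

```python
# Minimal EAC sketch: co-association matrix from repeated k-means runs,
# then Ward-linkage clustering of the accumulated evidence.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=100, centers=3, random_state=0)  # stand-in data
n = len(X)

coassoc = np.zeros((n, n))
n_runs = 30
for run in range(n_runs):
    k = np.random.randint(2, 10)                 # random k per run, as in EAC
    labels = KMeans(n_clusters=k, n_init=5, random_state=run).fit_predict(X)
    coassoc += (labels[:, None] == labels[None, :])
coassoc /= n_runs                                # fraction of co-clusterings

dist = 1.0 - coassoc                             # co-association -> distance
np.fill_diagonal(dist, 0.0)
Z = linkage(squareform(dist), method='ward')     # Ward link on the ensemble
final = fcluster(Z, t=3, criterion='maxclust')   # final customer classes
print(np.bincount(final))
```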
Abstract:
The characteristics of tunable wavelength filters based on a-SiC:H multilayered stacked p-i-n cells are studied both theoretically and experimentally. The optical transducers were produced by PECVD and tested for proper fine tuning of the cyan and yellow fluorescent protein emissions. The active device consists of a p-i′(a-SiC:H)-n/p-i(a-Si:H)-n heterostructure sandwiched between two transparent contacts. Experimental data on spectral response analysis, current-voltage characteristics, and color and transmission rate discrimination are reported. Cyan and yellow fluorescent input channels were transmitted together, each with a specific transmission rate and different intensities. The multiplexed optical signal was analyzed by reading out the generated photocurrents under positive and negative applied voltages. Results show that the optimized optical transducer is capable of combining the transient fluorescent signals into a single output signal without losing any specificity (color and intensity). It acts as a voltage-controlled optical filter: when the applied voltages are chosen appropriately, the transducer can select the cyan and yellow channel emissions separately (wavelength and frequency) and also quantify their relative intensities. A theoretical analysis supported by a numerical simulation is presented.