7 results for Processing of images
in Repositório Científico do Instituto Politécnico de Lisboa - Portugal
Abstract:
The use of iris recognition for human authentication has been spreading in recent years. Daugman proposed a method for iris recognition composed of four stages: segmentation, normalization, feature extraction, and matching. In this paper we propose some modifications and extensions to Daugman's method to cope with noisy images. These modifications are proposed after a study of images from the CASIA and UBIRIS databases. The major modification is to the computationally demanding segmentation stage, for which we propose a faster and equally accurate template matching approach. The extensions to the algorithm address the important issue of pre-processing, which depends on the image database and is mandatory when a non-infrared camera, such as a typical webcam, is used. For this scenario, we propose methods for reflection removal and pupil enhancement and isolation. The tests, carried out by our C# application on grayscale CASIA and UBIRIS images, show that the template matching segmentation method is more accurate and faster than the previous one for noisy images. The proposed algorithms are found to be efficient and necessary when dealing with non-infrared images and non-uniform illumination.
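The abstract describes the template matching segmentation stage only at a high level, and the authors' implementation is a C# application. As a purely illustrative aid, the following Python/OpenCV sketch shows one way a circular-template search for the dark pupil region could be set up; the radius range, template construction, and matching score are assumptions of the sketch and are not taken from the paper.

```python
# Illustrative sketch (not the authors' C# code): locate a dark, roughly circular
# pupil in a grayscale eye image by template matching against dark-disk templates
# of several assumed radii. The radius range and scoring method are hypothetical.
import cv2
import numpy as np

def find_pupil_by_template_matching(gray, radii=range(20, 61, 5)):
    """Return (x, y, r) of the best-matching dark disk in a grayscale image."""
    best_score, best_fit = -1.0, None
    for r in radii:
        size = 2 * r + 1
        template = np.full((size, size), 255, dtype=np.uint8)
        cv2.circle(template, (r, r), r, color=0, thickness=-1)  # dark disk on bright field
        result = cv2.matchTemplate(gray, template, cv2.TM_CCOEFF_NORMED)
        _, max_val, _, max_loc = cv2.minMaxLoc(result)
        if max_val > best_score:
            best_score = max_val
            best_fit = (max_loc[0] + r, max_loc[1] + r, r)  # center coordinates and radius
    return best_fit

# Usage: gray = cv2.imread("eye.png", cv2.IMREAD_GRAYSCALE)
#        cx, cy, r = find_pupil_by_template_matching(gray)
```

In a full pipeline, reflection removal and pupil enhancement (as proposed in the paper for webcam images) would precede such a search, so that specular highlights do not distort the matching score.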
Abstract:
The discovery of X-rays was undoubtedly one of the greatest stimuli for improving the efficiency of healthcare services. The ability to view, non-invasively, inside the human body has greatly facilitated the work of professionals in the diagnosis of diseases. An exclusive focus on image quality (IQ), without understanding how images are obtained, negatively affects efficiency in diagnostic radiology. The equilibrium between benefits and risks is often forgotten. It is necessary to adopt optimization strategies that maximize the benefits (image quality) and minimize the risks (dose to the patient) in radiological facilities. In radiology, the implementation of optimization strategies requires an understanding of the image acquisition process. When a radiographer adopts a certain value of a parameter (tube potential [kVp], tube current-exposure time product [mAs], or additional filtration), it is essential to know its meaning and the impact of its variation on dose and image quality. Without this, any optimization strategy will fail. Worldwide, data show that the use of X-rays has become increasingly frequent. In Cabo Verde, we note an effort by healthcare institutions (e.g., the Ministry of Health) to equip radiological facilities, and the recent installation of a telemedicine system requires the purchase of new radiological equipment. In addition, the transition from screen-film to digital systems is characterized by a rise in patient exposure. Given that this transition is slower in less developed countries, as is the case of Cabo Verde, the need to adopt optimization strategies becomes increasingly pressing. This study was conducted as an attempt to answer that need. Although this work concerns the objective evaluation of image quality, and in medical practice the evaluation is usually subjective (visual evaluation of images by the radiographer/radiologist), studies have reported a correlation between these two types of evaluation (objective and subjective) [5-7], which supports conducting such studies. The purpose of this study is to evaluate the effect of exposure parameters (kVp and mAs), when using additional copper (Cu) filtration, on dose and image quality in a Computed Radiography system.
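The abstract does not state which objective image-quality metric was used, so the following Python sketch is only a generic illustration of one metric commonly reported in such optimization studies, the contrast-to-noise ratio (CNR) between a target region of interest (ROI) and the background; the ROI coordinates and image format below are hypothetical.

```python
# Illustrative sketch: a generic contrast-to-noise ratio (CNR) measurement between
# two rectangular ROIs of a digital radiograph stored as a NumPy array. The ROIs
# and the 12-bit image assumed in the usage example are not from the study.
import numpy as np

def contrast_to_noise_ratio(image, roi_target, roi_background):
    """CNR = |mean(target) - mean(background)| / std(background).

    roi_target and roi_background are (row_slice, col_slice) tuples."""
    target = image[roi_target]
    background = image[roi_background]
    return abs(target.mean() - background.mean()) / background.std()

# Usage with a hypothetical CR image:
# img = np.load("cr_image.npy")
# cnr = contrast_to_noise_ratio(img, (np.s_[100:150], np.s_[100:150]),
#                               (np.s_[300:350], np.s_[300:350]))
```

Optimization studies of this kind often combine such an image-quality metric with the measured dose into a figure of merit so that kVp/mAs/filtration settings can be ranked; whether this particular study does so is not stated in the abstract.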
Abstract:
Introduction: The cellblock is a technique that enables the pathologist to study the morphological detail of residual samples and can be used when it is necessary to perform additional diagnostic techniques. Objective: To demonstrate the processing of residual bronchial washing samples from liquid-based cytology into cellblocks using HistoGel, evaluating the morphology and preservation of the cytological material. Methods: Forty residual samples from bronchial washings in liquid-based cytology were used after the clinical diagnosis had been determined, and 40 cellblocks were subsequently made using HistoGel. For each cellblock, one histological section was prepared for analysis of cell morphology and subsequently stained with the routine histological staining. After microscopic observation, the morphology was evaluated by 3 experts in the field of pathology, based on the following parameters: cellularity, preservation, and background. Results: The average final score of the 3 evaluators, on a scale of 0 to 100, in assessing the morphology of the 40 samples was 55.6. Of the 40 histological sections, 5 were considered not viable for evaluation. Conclusions: The results indicate a median-quality maintenance of morphology. However, it should be noted that an evaluation was not possible in only 5 cases, bearing in mind from the outset that these are residual samples with very scant cellularity. Thus, it is possible to say that the processing of bronchial washings into cellblocks using HistoGel contributes to the concentration of the cytological material, allowing its evaluation and subsequent diagnosis. Additional diagnostic techniques are also shown to be viable in these cellblocks.
Abstract:
Naturally Occurring Radioactive Materials (NORM) are materials found naturally in the environment that contain radioactive isotopes capable of causing negative effects on the health of the workers who handle them. Present in underground work such as mining and tunnel construction in granite zones, these materials are difficult to identify and characterize without appropriate equipment for risk evaluation. The assessment methods are exemplified with a case study applied to the handling and processing of phosphate rock, in which significant amounts of radioactive isotopes were found and, consequently, elevated radon concentrations in enclosed spaces containing these materials. © 2015 Taylor & Francis Group, London.
Abstract:
Amorphous and crystalline sputtered boron carbide thin films have a very high hardness, even surpassing that of bulk crystalline boron carbide (≈41 GPa). However, magnetron-sputtered B-C films have high friction coefficients (CoF), which limits their industrial application. Nanopatterning of material surfaces has been proposed as a solution to decrease the CoF. The contact area of nanopatterned surfaces is decreased due to the nanometre size of the asperities, which results in a significant reduction of adhesion and friction. In the present work, the surface of amorphous and polycrystalline B-C thin films deposited by magnetron sputtering was nanopatterned using infrared femtosecond laser radiation. Successive parallel laser tracks 10 μm apart were overlapped in order to obtain a processed area of about 3 mm². Sinusoidal-like undulations with the same spatial period as the laser tracks were formed on the surface of the amorphous boron carbide films after laser processing. The undulation amplitude increases with increasing laser fluence. The formation of undulations with a 10 μm period was also observed on the surface of the crystalline boron carbide film processed with a pulse energy of 72 μJ. The amplitude of the undulations is about 10 times higher than in the amorphous films processed at the same pulse energy, due to the higher roughness of the films and the consequent increase in laser radiation absorption. Laser-induced periodic surface structure (LIPSS) formation on the surface of the films was achieved for the three B-C films under study. However, LIPSS are formed under different circumstances. Processing of the amorphous films at low pulse energy (72 μJ) results in LIPSS formation only on localized spots of the film surface. LIPSS formation was also observed on top of the undulations formed after laser processing, with 78 μJ, of the amorphous film deposited at 800 °C. Finally, large-area homogeneous LIPSS coverage of the crystalline boron carbide film surface was achieved within a large range of laser fluences, although holes are also formed at the higher laser fluences.
Abstract:
Cellulosic lyotropic liquid crystals have long been regarded as potential materials for producing fibers competitive with spider silk or Kevlar, yet the processing of high-modulus materials from cellulose-based precursors has been hampered by their complex rheological behavior. In this work, by using the Rheo-NMR technique, which combines deuterium NMR with rheology, we investigate the high shear rate regimes that may be of interest for the industrial processing of these materials. Whereas the low shear rate regimes were already investigated by this technique in different works [1-4], the high shear rate range still lacks a detailed study. This work focuses on the orientational order in the system, both under shear and during the subsequent relaxation process after shear cessation, through the analysis of deuterium spectra of the deuterated solvent (water). At the analyzed shear rates the cholesteric order is suppressed and a flow-aligned nematic is observed, which at the higher shear rates develops, after a certain time, periodic perturbations that transiently annihilate the order in the system. During relaxation, the flow-aligned nematic starts losing order due to the onset of the cholesteric helices, leading to a period of very low order in which cholesteric helices with different orientations form from the aligned nematic, followed in the final stage by an increase in order at long relaxation times corresponding to the development of aligned cholesteric domains. This study sheds light on the complex rheological behavior of chiral nematic cellulose-based systems and opens ways to improve their processing. (C) 2015 Elsevier Ltd. All rights reserved.
Abstract:
The development of high spatial resolution airborne and spaceborne sensors has improved the capability of ground-based data collection in the fields of agriculture, geography, geology, mineral identification, detection [2, 3], and classification [4–8]. The signal read by the sensor from a given spatial element of resolution and at a given spectral band is a mixture of components originating from the constituent substances, termed endmembers, located at that element of resolution. This chapter addresses hyperspectral unmixing, which is the decomposition of the pixel spectra into a collection of constituent spectra, or spectral signatures, and their corresponding fractional abundances indicating the proportion of each endmember present in the pixel [9, 10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds when the mixing scale is macroscopic [13]. The nonlinear model holds when the mixing scale is microscopic (i.e., intimate mixtures) [14, 15]. The linear model assumes negligible interaction among distinct endmembers [16, 17]. The nonlinear model assumes that incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [18]. Under the linear mixing model, and assuming that the number of endmembers and their spectral signatures are known, hyperspectral unmixing is a linear problem, which can be addressed, for example, under the maximum likelihood setup [19], the constrained least-squares approach [20], spectral signature matching [21], the spectral angle mapper [22], and subspace projection methods [20, 23, 24]. Orthogonal subspace projection [23] reduces the data dimensionality, suppresses undesired spectral signatures, and detects the presence of a spectral signature of interest. The basic concept is to project each pixel onto a subspace that is orthogonal to the undesired signatures. As shown in Settle [19], the orthogonal subspace projection technique is equivalent to the maximum likelihood estimator. This projection technique was extended by three unconstrained least-squares approaches [24] (signature space orthogonal projection, oblique subspace projection, and target signature space orthogonal projection). Other works using the maximum a posteriori probability (MAP) framework [25] and projection pursuit [26, 27] have also been applied to hyperspectral data. In most cases the number of endmembers and their signatures are not known. Independent component analysis (ICA) is an unsupervised source separation process that has been applied with success to blind source separation, to feature extraction, and to unsupervised recognition [28, 29]. ICA consists of finding a linear decomposition of the observed data that yields statistically independent components. Given that hyperspectral data are, in given circumstances, linear mixtures, ICA comes to mind as a possible tool to unmix this class of data. In fact, the application of ICA to hyperspectral data has been proposed in reference 30, where endmember signatures are treated as sources and the mixing matrix is composed of the abundance fractions, and in references 9, 25, and 31–38, where the sources are the abundance fractions of each endmember. In the first approach we face two problems: (1) the number of samples is limited to the number of channels, and (2) the process of pixel selection, playing the role of mixed sources, is not straightforward.
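The constrained least-squares approach [20] mentioned above admits a simple numerical illustration. The following Python sketch, which is a generic example and not the algorithm of the cited reference, estimates the abundance vector a of a single pixel y ≈ Ma under nonnegativity, with the sum-to-one constraint enforced approximately through a heavily weighted augmented row; the weight delta and the toy spectra are assumptions of the sketch.

```python
# Illustrative sketch: linear unmixing of one pixel by constrained least squares,
# i.e., y ≈ M a with a >= 0 (nonnegativity) and sum(a) = 1 (full additivity).
# The sum-to-one constraint is imposed approximately by appending a heavily
# weighted row of ones to the system; delta is a tuning choice of this sketch.
import numpy as np
from scipy.optimize import nnls

def constrained_unmix(M, y, delta=1e3):
    """M: (bands x endmembers) signature matrix, y: (bands,) pixel spectrum.
    Returns the estimated abundance fractions, shape (endmembers,)."""
    bands, p = M.shape
    M_aug = np.vstack([M, delta * np.ones((1, p))])  # augmented signature matrix
    y_aug = np.append(y, delta)                      # augmented observation
    a, _ = nnls(M_aug, y_aug)                        # nonnegative least squares
    return a

# Usage with a toy 3-band, 2-endmember example:
# M = np.array([[0.9, 0.1], [0.5, 0.5], [0.2, 0.8]])
# y = 0.3 * M[:, 0] + 0.7 * M[:, 1]
# print(constrained_unmix(M, y))  # approximately [0.3, 0.7]
```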
In the second approach, ICA is based on the assumption of mutually independent sources, which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying dependence among the abundances. This dependence compromises the applicability of ICA to hyperspectral images. In addition, hyperspectral data are immersed in noise, which degrades the ICA performance. IFA [39] was introduced as a method for recovering independent hidden sources from their observed noisy mixtures. IFA implements two steps. First, the source densities and the noise covariance are estimated from the observed data by maximum likelihood. Second, the sources are reconstructed by an optimal nonlinear estimator. Although IFA is a well-suited technique to unmix independent sources under noisy observations, the dependence among abundance fractions in hyperspectral imagery compromises, as in the ICA case, the IFA performance. Under the linear mixing model, hyperspectral observations lie in a simplex whose vertices correspond to the endmembers. Several approaches [40–43] have exploited this geometric feature of hyperspectral mixtures [42]. The minimum volume transform (MVT) algorithm [43] determines the simplex of minimum volume containing the data. The MVT-type approaches are complex from the computational point of view. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. Aiming at a lower computational complexity, some algorithms, such as vertex component analysis (VCA) [44], the pixel purity index (PPI) [42], and N-FINDR [45], still find the minimum volume simplex containing the data cloud, but they assume the presence in the data of at least one pure pixel of each endmember. This is a strong requirement that may not hold in some data sets. In any case, these algorithms find the set of purest pixels in the data. Hyperspectral sensors collect spatial images over many narrow contiguous bands, yielding large amounts of data. For this reason, the processing of hyperspectral data, including unmixing, is very often preceded by a dimensionality reduction step to reduce computational complexity and to improve the signal-to-noise ratio (SNR). Principal component analysis (PCA) [46], maximum noise fraction (MNF) [47], and singular value decomposition (SVD) [48] are three well-known projection techniques widely used in remote sensing in general and in unmixing in particular. The newly introduced method [49] exploits the structure of hyperspectral mixtures, namely the fact that spectral vectors are nonnegative. The computational complexity associated with these techniques is an obstacle to real-time implementations. To overcome this problem, band selection [50] and non-statistical [51] algorithms have been introduced. This chapter addresses hyperspectral data source dependence and its impact on the performance of ICA and IFA. The study considers simulated and real data and is based on mutual information minimization. Hyperspectral observations are described by a generative model. This model takes into account the degradation mechanisms normally found in hyperspectral applications, namely signature variability [52–54], abundance constraints, topography modulation, and system noise. The computation of mutual information is based on fitting mixtures of Gaussians (MOG) to the data. The MOG parameters (number of components, means, covariances, and weights) are inferred using a minimum description length (MDL)-based algorithm [55].
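As noted above, unmixing is often preceded by a projection-based dimensionality reduction step. The chapter's own experiments are not reproduced here; the following Python sketch only illustrates a generic PCA of a hyperspectral cube reshaped to a pixel-by-band matrix, with the cube shape and the number of retained components chosen arbitrarily for the example.

```python
# Illustrative sketch: PCA dimensionality reduction of a hyperspectral cube.
# The cube dimensions and number of retained components are arbitrary choices
# for this example, not values used in the chapter.
import numpy as np

def pca_reduce(cube, n_components):
    """cube: (rows, cols, bands) array. Returns (rows, cols, n_components) scores."""
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands).astype(float)
    X -= X.mean(axis=0)                          # center each band
    cov = np.cov(X, rowvar=False)                # (bands x bands) covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)       # eigenvalues in ascending order
    top = eigvecs[:, ::-1][:, :n_components]     # leading principal directions
    return (X @ top).reshape(rows, cols, n_components)

# Usage with a synthetic 100 x 100 cube of 224 bands reduced to 10 components:
# cube = np.random.rand(100, 100, 224)
# scores = pca_reduce(cube, 10)
```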
We study the behavior of the mutual information as a function of the unmixing matrix. The conclusion is that the unmixing matrix minimizing the mutual information might be very far from the true one. Nevertheless, some abundance fractions might be well separated, mainly in the presence of strong signature variability, a large number of endmembers, and high SNR. We end this chapter by sketching a new methodology to blindly unmix hyperspectral data, where abundance fractions are modeled as a mixture of Dirichlet sources. This model enforces the positivity and constant sum (full additivity) constraints on the sources. The mixing matrix is inferred by an expectation-maximization (EM)-type algorithm. This approach is in the vein of references 39 and 56, replacing the independent sources represented by MOG with a mixture of Dirichlet sources. Compared with the geometric-based approaches, the advantage of this model is that there is no need for pure pixels in the observations. The chapter is organized as follows. Section 6.2 presents a spectral radiance model and formulates spectral unmixing as a linear problem accounting for abundance constraints, signature variability, topography modulation, and system noise. Section 6.3 presents a brief summary of the ICA and IFA algorithms. Section 6.4 illustrates the performance of IFA and of some well-known ICA algorithms with experimental data. Section 6.5 studies the limitations of ICA and IFA in unmixing hyperspectral data. Section 6.6 presents results of ICA based on real data. Section 6.7 describes the new blind unmixing scheme and some illustrative examples. Section 6.8 concludes with some remarks.
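The closing paragraph models abundance fractions as a mixture of Dirichlet sources, which by construction enforces positivity and full additivity. The EM inference of the mixing matrix is beyond a short example; the sketch below only shows, with arbitrary parameters, how abundance vectors drawn from a two-component Dirichlet mixture automatically satisfy both constraints.

```python
# Illustrative sketch: sampling abundance vectors from a two-component mixture of
# Dirichlet densities. The mixture weights and Dirichlet parameters are arbitrary;
# every sampled vector is nonnegative and sums to one by construction.
import numpy as np

rng = np.random.default_rng(0)

def sample_dirichlet_mixture(n_samples, weights, alphas):
    """weights: mixture probabilities; alphas: list of Dirichlet parameter vectors."""
    components = rng.choice(len(weights), size=n_samples, p=weights)
    return np.stack([rng.dirichlet(alphas[k]) for k in components])

# Three endmembers, two mixture components favoring different endmembers:
abundances = sample_dirichlet_mixture(
    1000, weights=[0.4, 0.6], alphas=[[8.0, 1.0, 1.0], [1.0, 4.0, 4.0]])
print(abundances.min() >= 0, np.allclose(abundances.sum(axis=1), 1.0))  # True True
```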