991 results for Blind Identification
Abstract:
The development of high-spatial-resolution airborne and spaceborne sensors has improved the capability of ground-based data collection in the fields of agriculture, geography, geology, mineral identification, detection [2, 3], and classification [4–8]. The signal read by the sensor from a given spatial resolution element at a given spectral band is a mixture of components originating from the constituent substances, termed endmembers, located at that resolution element. This chapter addresses hyperspectral unmixing, which is the decomposition of the pixel spectra into a collection of constituent spectra, or spectral signatures, and their corresponding fractional abundances indicating the proportion of each endmember present in the pixel [9, 10]. Depending on the mixing scale at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds when the mixing scale is macroscopic [13]; the nonlinear model holds when the mixing scale is microscopic (i.e., intimate mixtures) [14, 15]. The linear model assumes negligible interaction among distinct endmembers [16, 17], whereas the nonlinear model assumes that incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [18]. Under the linear mixing model, and assuming that the number of endmembers and their spectral signatures are known, hyperspectral unmixing is a linear problem, which can be addressed, for example, by the maximum likelihood setup [19], the constrained least-squares approach [20], spectral signature matching [21], the spectral angle mapper [22], and subspace projection methods [20, 23, 24]. Orthogonal subspace projection [23] reduces the data dimensionality, suppresses undesired spectral signatures, and detects the presence of a spectral signature of interest; the basic concept is to project each pixel onto a subspace that is orthogonal to the undesired signatures. As shown by Settle [19], the orthogonal subspace projection technique is equivalent to the maximum likelihood estimator. This projection technique was extended by three unconstrained least-squares approaches [24] (signature space orthogonal projection, oblique subspace projection, and target signature space orthogonal projection). Other works using the maximum a posteriori probability (MAP) framework [25] and projection pursuit [26, 27] have also been applied to hyperspectral data. In most cases, however, the number of endmembers and their signatures are not known. Independent component analysis (ICA) is an unsupervised source separation process that has been applied with success to blind source separation, feature extraction, and unsupervised recognition [28, 29]. ICA consists of finding a linear decomposition of the observed data into statistically independent components. Given that hyperspectral data are, under certain circumstances, linear mixtures, ICA comes to mind as a possible tool to unmix this class of data. In fact, the application of ICA to hyperspectral data has been proposed in reference 30, where endmember signatures are treated as sources and the mixing matrix is composed of the abundance fractions, and in references 9, 25, and 31–38, where the sources are the abundance fractions of each endmember. The first approach faces two problems: (1) the number of samples is limited to the number of channels, and (2) the process of pixel selection, playing the role of mixed sources, is not straightforward.
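For concreteness, the linear mixing model underlying this abstract can be sketched in a few lines of Python/NumPy. The band, endmember, and pixel counts below, and the random stand-in signatures, are illustrative assumptions, not values from the chapter:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumptions, not values from the chapter):
L, p, N = 224, 3, 1000   # spectral bands, endmembers, pixels

# Endmember signatures as columns of M; random stand-ins for real spectra.
M = rng.uniform(0.0, 1.0, size=(L, p))

# Abundance fractions: non-negative and summing to one at each pixel,
# drawn here from a flat Dirichlet for simplicity.
S = rng.dirichlet(np.ones(p), size=N).T              # p x N

# Linear mixing model: each observed pixel is M @ s plus system noise.
sigma = 0.01
X = M @ S + sigma * rng.standard_normal((L, N))      # L x N observed data
```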
In the second approach, ICA is based on the assumption of mutually independent sources, which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying dependence among the abundances. This dependence compromises the applicability of ICA to hyperspectral images. In addition, hyperspectral data are immersed in noise, which degrades ICA performance. Independent factor analysis (IFA) [39] was introduced as a method for recovering independent hidden sources from their observed noisy mixtures. IFA implements two steps: first, the source densities and noise covariance are estimated from the observed data by maximum likelihood; second, the sources are reconstructed by an optimal nonlinear estimator. Although IFA is a well-suited technique to unmix independent sources under noisy observations, the dependence among abundance fractions in hyperspectral imagery compromises, as in the ICA case, IFA performance. Under the linear mixing model, hyperspectral observations lie in a simplex whose vertices correspond to the endmembers. Several approaches [40–43] have exploited this geometric feature of hyperspectral mixtures [42]. The minimum volume transform (MVT) algorithm [43] determines the simplex of minimum volume containing the data. MVT-type approaches are computationally complex: usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. Aiming at lower computational complexity, algorithms such as vertex component analysis (VCA) [44], the pixel purity index (PPI) [42], and N-FINDR [45] also find the minimum volume simplex containing the data cloud, but they assume the presence in the data of at least one pure pixel per endmember. This is a strong requirement that may not hold in some data sets; in any case, these algorithms find the set of purest pixels in the data. Hyperspectral sensors collect spatial images over many narrow contiguous bands, yielding large amounts of data. For this reason, the processing of hyperspectral data, including unmixing, is very often preceded by a dimensionality reduction step to reduce computational complexity and to improve the signal-to-noise ratio (SNR). Principal component analysis (PCA) [46], maximum noise fraction (MNF) [47], and singular value decomposition (SVD) [48] are three well-known projection techniques widely used in remote sensing in general and in unmixing in particular. A recently introduced method [49] exploits the structure of hyperspectral mixtures, namely the fact that spectral vectors are nonnegative. The computational complexity associated with these techniques is an obstacle to real-time implementations; to overcome this problem, band selection [50] and non-statistical [51] algorithms have been introduced. This chapter addresses hyperspectral data source dependence and its impact on ICA and IFA performance. The study considers simulated and real data and is based on mutual information minimization. Hyperspectral observations are described by a generative model that takes into account the degradation mechanisms normally found in hyperspectral applications, namely signature variability [52–54], abundance constraints, topography modulation, and system noise. The computation of mutual information is based on fitting mixtures of Gaussians (MOG) to the data. The MOG parameters (number of components, means, covariances, and weights) are inferred using a minimum description length (MDL)-based algorithm [55].
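The dependence induced by the constant abundance sum can be made concrete with a small experiment, continuing the simulation sketched above. FastICA from scikit-learn stands in here for the ICA algorithms discussed and is not the chapter's own method:

```python
from sklearn.decomposition import FastICA

# Continuing the simulation above: treat each pixel spectrum (column of X)
# as an observation and ask ICA for p independent sources.
ica = FastICA(n_components=p, random_state=0)
S_hat = ica.fit_transform(X.T).T                     # estimated sources, p x N

# The true abundances are dependent: they sum to one at every pixel, which
# violates ICA's independence assumption. The recovered sources carry no
# such constraint, so their per-pixel sums are arbitrary.
print("true column sums:", S.sum(axis=0)[:5])        # all exactly 1
print("ICA column sums: ", S_hat.sum(axis=0)[:5])    # arbitrary values
```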
We study the behavior of the mutual information as a function of the unmixing matrix. The conclusion is that the unmixing matrix minimizing the mutual information might be very far from the true one. Nevertheless, some abundance fractions might be well separated, mainly in the presence of strong signature variability, a large number of endmembers, and high SNR. We end the chapter by sketching a new methodology to blindly unmix hyperspectral data, in which the abundance fractions are modeled as a mixture of Dirichlet sources. This model enforces the positivity and constant-sum (full additivity) constraints on the sources. The mixing matrix is inferred by an expectation-maximization (EM)-type algorithm. This approach is in the vein of references 39 and 56, replacing the independent sources represented by MOG with a mixture of Dirichlet sources. Compared with the geometric-based approaches, the advantage of this model is that there is no need for pure pixels in the observations. The chapter is organized as follows. Section 6.2 presents a spectral radiance model and formulates spectral unmixing as a linear problem accounting for abundance constraints, signature variability, topography modulation, and system noise. Section 6.3 presents a brief overview of the ICA and IFA algorithms. Section 6.4 illustrates the performance of IFA and of some well-known ICA algorithms on experimental data. Section 6.5 studies the limitations of ICA and IFA in unmixing hyperspectral data. Section 6.6 presents ICA results based on real data. Section 6.7 describes the new blind unmixing scheme and gives some illustrative examples. Section 6.8 concludes with some remarks.
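What "mixture of Dirichlet sources" means for the abundances can be sketched as follows. The two mixture components and their parameters are hypothetical, chosen only to show that positivity and full additivity hold by construction:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical two-component Dirichlet mixture over p = 3 abundance
# fractions; weights and alpha parameters are illustrative only.
weights = np.array([0.6, 0.4])
alphas = np.array([[9.0, 2.0, 1.0],
                   [1.0, 4.0, 6.0]])

N = 1000
comp = rng.choice(2, size=N, p=weights)              # latent component per pixel
S = np.vstack([rng.dirichlet(alphas[k]) for k in comp]).T   # p x N abundances

# Positivity and full additivity hold by construction for every pixel.
assert np.all(S >= 0) and np.allclose(S.sum(axis=0), 1.0)
```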
Abstract:
This paper introduces a new hyperspectral unmixing method called Dependent Component Analysis (DECA). This method decomposes a hyperspectral image into a collection of reflectance (or radiance) spectra of the materials present in the scene (endmember signatures) and the corresponding abundance fractions at each pixel. DECA models the abundance fractions as mixtures of Dirichlet densities, thus enforcing the constraints on abundance fractions imposed by the acquisition process, namely non-negativity and constant sum. The mixing matrix is inferred by a generalized expectation-maximization (GEM)-type algorithm. This method overcomes the limitations of unmixing methods based on Independent Component Analysis (ICA) and on geometry-based approaches. DECA performance is illustrated using simulated and real data.
Abstract:
Hyperspectral unmixing methods aim at the decomposition of a hyperspectral image into a collection of endmember signatures, i.e., the radiance or reflectance of the materials present in the scene, and the corresponding abundance fractions at each pixel in the image. This paper introduces a new unmixing method termed dependent component analysis (DECA). The method is blind and fully automatic, and it overcomes the limitations of unmixing methods based on Independent Component Analysis (ICA) and on geometry-based approaches. DECA is based on the linear mixture model, i.e., each pixel is a linear mixture of the endmember signatures weighted by the corresponding abundance fractions. These abundances are modeled as mixtures of Dirichlet densities, thus enforcing the non-negativity and constant-sum constraints imposed by the acquisition process. The endmember signatures are inferred by a generalized expectation-maximization (GEM)-type algorithm. The paper illustrates the effectiveness of DECA on synthetic and real hyperspectral images.
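As a point of comparison for the constraints DECA enforces, a fully constrained least-squares baseline (non-negative abundances summing to one, with known signatures) can be sketched as below; this is a generic constrained baseline, not the GEM algorithm of the paper:

```python
import numpy as np
from scipy.optimize import minimize

def fcls(x, M):
    """Fully constrained least squares for one pixel: find s >= 0 with
    sum(s) = 1 minimizing ||x - M @ s||^2, given endmember matrix M (L x p).
    A generic constrained baseline, not the GEM algorithm of the paper."""
    p = M.shape[1]
    s0 = np.full(p, 1.0 / p)                         # start at the simplex center
    res = minimize(lambda s: np.sum((x - M @ s) ** 2), s0,
                   method="SLSQP",
                   bounds=[(0.0, 1.0)] * p,
                   constraints=[{"type": "eq", "fun": lambda s: s.sum() - 1.0}])
    return res.x

# Toy check: a pixel mixed 70/30 from two random signatures is recovered.
rng = np.random.default_rng(2)
M = rng.uniform(size=(50, 2))
x = M @ np.array([0.7, 0.3])
print(fcls(x, M))                                    # approximately [0.7, 0.3]
```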
Abstract:
Electric utilities suffer large annual revenue losses due to commercial losses, which are caused mainly by consumer fraud and faulty meters. Automatic detection of such losses is a complex problem, given the large number of consumers, the high cost of each inspection, and the strain inspections place on the relationship between company and consumer. In this context, this paper briefly presents some methodologies applied by utilities to identify consumer fraud.
Abstract:
Diagnostic, parasite characterization, and identification studies were carried out in human patients with cutaneous leishmaniasis lesions in Santiago del Estero, a northern province of Argentina. Diagnostic procedures were biopsies of lesions for smears and inoculation into hamsters, and needle aspiration of material from ulcers for in vitro cultures. The immunodiagnostic techniques applied were IFAT-IgG and the Montenegro skin test. Primary isolation of eight stocks of leishmanial parasites was achieved from patients with active lesions. All stocks were biologically characterized by their behaviour in hamsters, measurements of amastigotes and promastigotes, and growth in vitro. The eight stocks were characterized and identified at the species level by their reactivity to a cross-panel of sub-genus- and species-specific monoclonal antibodies through an indirect immunofluorescence technique and a Dot-ELISA. We conclude from the serodeme analysis of the Argentine stocks that stocks MHOM/AR/92/SE-1; SE-2; SE-4; SE-8; SE-8-I; SE-30; SE-34 and SE-36 are Leishmania (Viannia) braziliensis. Three Leishmania stocks (SE-1, SE-2, and SE-30) did not react with one highly species-specific monoclonal antibody (clone B-18, a Leishmania (Viannia) braziliensis marker), disclosing two serodeme group patterns. Five of the eight soluble extracts of leishmanial promastigotes were electrophoresed on thin-layer starch gels and examined for the enzymes MPI (mannose phosphate isomerase), MDH (malate dehydrogenase), 6PGD (6-phosphogluconate dehydrogenase), NH (nucleoside hydrolase, with 2-deoxyinosine as substrate), SOD (superoxide dismutase), GPI (glucose phosphate isomerase), and ES (esterase). From the isoenzyme studies we concluded that stocks MHOM/AR/92/SE-1, SE-2, SE-4, SE-8, and SE-8-I are isoenzymatically Leishmania (Viannia) braziliensis. More enzymes need to be analyzed before assigning them to a braziliensis zymodeme.
Abstract:
With the objective of standardizing a dot enzyme-linked immunosorbent assay (Dot-ELISA) to detect antigens of fecal bacterial enteropathogens, 250 children under 36 months of age and of both sexes were studied, of whom 162 had acute gastroenteritis. The efficacy of a rapid screening assay for bacterial enteropathogens (enteropathogenic Escherichia coli "EPEC", enteroinvasive Escherichia coli "EIEC", Salmonella spp. and Shigella spp.) was evaluated. The fecal samples were also submitted to a traditional stool culture method for comparison. The concordance index between the two techniques, calculated using the Kappa (k) index for the above-mentioned bacterial strains, was 0.8859, 0.9055, 0.7932, and 0.7829, respectively. These values express an almost perfect degree of concordance for the first two and substantial concordance for the latter two, thus enabling this technique to be applied in the early diagnosis of diarrhea in infants. With a view to increasing the sensitivity and specificity of this immunological test, a study was made of the antigenic preparations obtained from two types of treatment: (1) deproteinization by heating; (2) precipitation and concentration of the lipopolysaccharide antigen (LPS) using an ethanol-acetone solution, which was then heated in the presence of sodium EDTA.
Abstract:
We describe here a simplified RT-PCR for identification of dengue virus types 1 and 2. Five dengue virus strains isolated from Brazilian patients, and yellow fever vaccine 17DD as a negative control, were used in this study. C6/36 cells were infected and supernatants were collected after 7 days. The RT-PCR, done in a single reaction vessel, was carried out following a 1/10 dilution of virus in distilled water or in a detergent mixture containing Nonidet P40. The 50 µl assay reaction mixture included 50 pmol of specific primers amplifying a 482 base pair sequence for dengue type 1 and a 210 base pair sequence for dengue type 2. In other assays, we used dengue virus consensus primers having maximum sequence similarity to the four serotypes, amplifying a 511 base pair sequence. The reaction mixture also contained 0.1 mM of each of the four deoxynucleoside triphosphates, 7.5 U of reverse transcriptase, and 1 U of thermostable Taq DNA polymerase. The mixture was incubated for 5 minutes at 37°C for reverse transcription, followed by 30 cycles of two-step PCR amplification (92°C for 60 seconds, 53°C for 60 seconds) with slow temperature increments. The PCR products were subjected to 1.7% agarose gel electrophoresis and visualized under UV light after staining with ethidium bromide solution. A low virus titer of around 10^3.6 TCID50/ml was detected by RT-PCR for dengue type 1. Specific DNA amplification was observed with all the Brazilian dengue strains using the dengue virus consensus primers. Compared to other RT-PCRs, this assay is less laborious, done in a shorter time, and has a reduced risk of contamination.
Abstract:
Crude Toxoplasma gondii antigens are the raw material used to prepare reagents employed in different serologic tests for the diagnosis of toxoplasmosis, including the IgM and IgG indirect hemagglutination (IgG-HA and IgM-HA) tests. So far, the actual antigenic molecules of the parasite involved in the interaction with agglutinating anti-T. gondii antibodies in these tests are unknown. The absorption of serum samples from toxoplasmosis patients with the IgG-HA reagent (G-toxo-HA) demonstrated that red cells from this reagent were coated with T. gondii antigens with Mr of 39, 35, 30, 27, 22, and 14 kDa. The immune-absorption process with the IgM-HA reagent (M-toxo-HA), in turn, provided antibody eluates which recognized antigenic bands of the parasite corresponding to Mr of 54, 35, and 30 kDa, implying that these antigens coat the red cells of this reagent. The identification of the most relevant antigens for each type of HA reagent seems useful for the inspection of the raw antigenic material, as well as of the reagent batches routinely produced. Moreover, the present findings can be used to modify these reagents in order to improve the performance of HA tests for the diagnosis of toxoplasmosis.
Abstract:
We present a case of prenatal diagnosis of congenital rubella. After birth, in addition to the traditional serologic and clinical examinations to confirm the infection, we were able to identify the virus in the first fluid aspirated from the oropharynx of the newborn, using the polymerase chain reaction (PCR). We propose that this first oropharynx fluid (collected routinely immediately after birth) could be used as a source for the identification of various congenital infection agents, which may not always be easily identified by current methods.
Abstract:
Integrated Master's dissertation in Electrical and Computer Engineering.
Abstract:
We have developed a procedure for the rapid diagnosis of plague that also allows the identification of prominent virulence markers of Yersinia pestis strains. This procedure is based on a single polymerase chain reaction with multiple pairs of primers directed at genes present in the three virulence plasmids as well as in the chromosomal pathogenicity island of the bacterium. The technique allowed the discrimination of strains lacking one or more of the known pathogenic loci, using as template total DNA obtained from bacterial cultures and from simulated blood cultures containing diluted concentrations of bacteria. It also proved effective in confirming the disease in a blood culture from a patient suspected of plague. As the results are obtained in a few hours, this technique will be useful in the methodology of the Plague Control Program.
Abstract:
Treatment with indinavir has been shown to result in marked decreases in viral load and increases in CD4 cell counts in HIV-infected individuals. A randomized double-blind study was performed to evaluate progression to AIDS under indinavir alone (800 mg q8h), zidovudine alone (200 mg q8h), or the combination. 996 antiretroviral-therapy-naive patients with CD4 cell counts of 50-250/mm3 were allocated to treatment. During the trial, the protocol was amended to add lamivudine to the zidovudine-containing arms. The primary endpoint was time to development of an AIDS-defining illness or death. The study was terminated after a protocol-defined interim analysis demonstrated highly significant reductions in progression to a clinical event in the indinavir-containing arms compared to the zidovudine arm (p < 0.0001). Over a median follow-up of 52 weeks (up to 99 weeks), the percent reductions in hazard for the indinavir plus zidovudine and indinavir groups compared to the zidovudine group were 70% and 61%, respectively. Significant reductions in HIV RNA and increases in CD4 cell counts were also seen in the indinavir-containing groups compared to the zidovudine group. Improvements in both CD4 cell count and HIV RNA were associated with a reduced risk of disease progression. All three regimens were generally well tolerated.
Abstract:
Dissertation presented to obtain the Ph.D. degree in Biology at the Instituto de Tecnologia Química e Biológica, Universidade Nova de Lisboa.
Abstract:
A case-control study was conducted to identify risk factors for death from tetanus in the State of Pernambuco, Brazil. Information was obtained from the medical records of 152 cases and 152 controls admitted to the tetanus unit of the State University Hospital in Recife from 1990 to 1995. Variables were grouped in three different sets. Crude and adjusted odds ratios, p-values, and 95% confidence intervals were estimated. Variables selected in the multivariate analysis in each set were controlled for the effect of those selected in the others. All factors related to disease progression (incubation period, time elapsed between the occurrence of the first tetanus symptom and admission, and period of onset) showed a statistically significant association with death from tetanus. Similarly, signs and/or symptoms occurring on admission or in the following 24 hours (second set), namely reflex spasms, neck stiffness, and respiratory signs/symptoms, and respiratory failure requiring artificial ventilation (third set) were associated with death from tetanus even when adjusted for the effect of the others.