980 results for Identification algorithms


Relevance: 20.00%

Abstract:

Hyperspectral imaging sensors provide image data containing both spectral and spatial information from the Earth's surface. The huge data volumes produced by these sensors place stringent requirements on communications, storage, and processing. This paper presents a method, termed hyperspectral signal subspace identification by minimum error (HySime), that infers the signal subspace and determines its dimensionality without any prior knowledge. The identification of this subspace enables a correct dimensionality reduction, yielding gains in algorithm performance and complexity and in data storage. The HySime method is unsupervised and fully automatic, i.e., it does not depend on any tuning parameters. The effectiveness of the proposed method is illustrated using simulated data based on U.S.G.S. laboratory spectra and real hyperspectral data collected by the AVIRIS sensor over Cuprite, Nevada.
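The subspace-identification idea can be illustrated with a small numerical sketch. This is a simplified stand-in, not the actual HySime estimator (which couples an explicit noise estimate with a minimum mean-squared-error criterion): simulate noisy linear mixtures, eigendecompose the sample correlation matrix, and count the eigenvalues that rise clearly above the noise floor.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 50-band pixels that are mixtures of p = 3 signatures plus noise.
bands, pixels, p = 50, 2000, 3
M = rng.random((bands, p))                    # hypothetical endmember signatures
A = rng.dirichlet(np.ones(p), size=pixels).T  # abundances summing to one
sigma = 0.01
X = M @ A + sigma * rng.standard_normal((bands, pixels))

# Eigendecomposition of the sample correlation matrix.
R = X @ X.T / pixels
eigvals = np.linalg.eigh(R)[0][::-1]          # descending order

# Crude subspace-dimension estimate: count eigenvalues clearly above the
# noise power (HySime itself minimizes a mean-squared-error criterion
# using an estimated noise covariance; this threshold is a stand-in).
k = int(np.sum(eigvals > 10 * sigma ** 2))
print(k)  # recovers 3 at this SNR
```

With a clean separation between signal and noise eigenvalues, the count is insensitive to the exact threshold; the interesting regimes are low SNR, where a principled criterion such as HySime's is needed.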

Relevance: 20.00%

Abstract:

Master's degree in Civil Engineering – Structures specialization

Relevance: 20.00%

Abstract:

An IgG2a subclass monoclonal antibody, C6G9, was obtained by immunization of BALB/c mice with Schistosoma mansoni egg antigens. With this monoclonal antibody it was possible to identify a schistosomular antigen with a molecular weight of 46 kilodaltons (kDa), whose expression was evaluated by means of indirect immunofluorescence. The antigen persisted in the integument of the developing schistosomulum for at least 96 hours post-transformation. The monoclonal antibody also reacted with the cercarial surface, but not with that of the adult worm. C6G9 was also able to mediate significant levels of complement-dependent cytotoxicity against newly transformed schistosomula.

Relevance: 20.00%

Abstract:

A fourteen-year schistosomiasis control program in Peri-Peri (Capim Branco, MG) reduced prevalence from 43.5% to 4.4%, incidence from 19.0% to 2.9%, the geometric mean egg count from 281 to 87, and the frequency of the hepatosplenic form from 5.9% to 0.0%. In 1991, three years after the interruption of the program, the prevalence had risen to 19.6%. The district consists of Barbosa (a rural area) and Peri-Peri itself (an urban area); in 1991, the prevalence in the two areas was 28.4% and 16.0%, respectively. A multivariate analysis of risk factors for schistosomiasis indicated domestic agricultural activity (population attributable risk, PAR, of 29.82%), a distance of less than 10 m from home to water source (PAR = 25.93%), and weekly fishing (PAR = 17.21%) as being responsible for infections in the rural area. The recommended control measures for this area are non-manual irrigation and relocation of homes to more than ten meters from irrigation ditches. In the urban area, swimming at weekly intervals (PAR = 20.71%), daily domestic agricultural activity (PAR = 4.07%), and the absence of drinking water in the home (PAR = 4.29%) were responsible for infections. Thus, in the urban area the recommended control measures are the substitution of manual irrigation by an irrigation method that avoids contact with water, the creation of leisure options for the population, and the provision of a domestic water supply. The authors call attention to the need for the efficacy of multivariate analysis of risk factors for schistosomiasis to be evaluated prior to its large-scale use as an indicator of the control measures to be implemented.
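Population attributable risk of the kind quoted above can be evaluated with one standard definition, Levin's formula; the prevalence and relative-risk inputs below are purely illustrative, not the study's data.

```python
# Levin's formula for population attributable risk (PAR):
#   PAR = p_e * (RR - 1) / (1 + p_e * (RR - 1))
# where p_e is the exposure prevalence and RR the relative risk.
# The inputs below are illustrative, not taken from the study.

def population_attributable_risk(p_e: float, rr: float) -> float:
    excess = p_e * (rr - 1.0)
    return excess / (1.0 + excess)

# Example: 30% of the population exposed, relative risk of 2.4.
par = population_attributable_risk(0.30, 2.4)
print(f"{par:.2%}")  # 29.58%
```

Read this way, a PAR of ~30% means that roughly a third of infections in the population would be avoided if the exposure were eliminated, which is exactly how the control recommendations above are derived.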

Relevance: 20.00%

Abstract:

The development of high-spatial-resolution airborne and spaceborne sensors has improved the capability of ground-based data collection in the fields of agriculture, geography, geology, mineral identification, detection [2, 3], and classification [4–8]. The signal read by the sensor from a given spatial resolution element at a given spectral band is a mixture of components originating from the constituent substances, termed endmembers, located at that resolution element. This chapter addresses hyperspectral unmixing, the decomposition of the pixel spectra into a collection of constituent spectra, or spectral signatures, and their corresponding fractional abundances indicating the proportion of each endmember present in the pixel [9, 10]. Depending on the mixing scale at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds when the mixing scale is macroscopic [13]; the nonlinear model holds when the mixing scale is microscopic (i.e., intimate mixtures) [14, 15]. The linear model assumes negligible interaction among distinct endmembers [16, 17], whereas the nonlinear model assumes that incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [18]. Under the linear mixing model, and assuming that the number of endmembers and their spectral signatures are known, hyperspectral unmixing is a linear problem, which can be addressed, for example, by the maximum likelihood setup [19], the constrained least-squares approach [20], spectral signature matching [21], the spectral angle mapper [22], and subspace projection methods [20, 23, 24]. Orthogonal subspace projection [23] reduces the data dimensionality, suppresses undesired spectral signatures, and detects the presence of a spectral signature of interest. The basic concept is to project each pixel onto a subspace that is orthogonal to the undesired signatures.
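The linear mixing model and the orthogonal subspace projection detector can be sketched in a few lines. The random signatures below are hypothetical; the projector annihilates the undesired endmembers before correlating with the target.

```python
import numpy as np

rng = np.random.default_rng(1)
bands = 30

# Hypothetical endmember signatures: one target d, two undesired ones (columns of U).
d = rng.random(bands)
U = rng.random((bands, 2))

# Observed pixel under the linear mixing model: x = d*a_d + U @ a_u + noise.
a_d, a_u = 0.5, np.array([0.3, 0.2])
x = d * a_d + U @ a_u + 0.001 * rng.standard_normal(bands)

# Orthogonal subspace projection: annihilate the undesired signatures,
# then correlate with the target signature.
P = np.eye(bands) - U @ np.linalg.pinv(U)  # projector onto the complement of span(U)
detector = d @ P @ x / (d @ P @ d)         # estimates the target abundance a_d
print(detector)  # close to the true abundance 0.5
```

Because P @ U is (numerically) zero, the undesired abundances drop out of the detector output entirely, which is the "suppresses undesired spectral signatures" property described above.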
As shown by Settle [19], the orthogonal subspace projection technique is equivalent to the maximum likelihood estimator. This projection technique was extended by three unconstrained least-squares approaches [24] (signature space orthogonal projection, oblique subspace projection, and target signature space orthogonal projection). Other works using the maximum a posteriori probability (MAP) framework [25] and projection pursuit [26, 27] have also been applied to hyperspectral data. In most cases, however, the number of endmembers and their signatures are not known. Independent component analysis (ICA) is an unsupervised source separation process that has been applied successfully to blind source separation, feature extraction, and unsupervised recognition [28, 29]. ICA consists of finding a linear decomposition of the observed data that yields statistically independent components. Given that hyperspectral data are, under certain circumstances, linear mixtures, ICA comes to mind as a possible tool to unmix this class of data. In fact, the application of ICA to hyperspectral data has been proposed in reference 30, where endmember signatures are treated as sources and the mixing matrix is composed of the abundance fractions, and in references 9, 25, and 31–38, where the sources are the abundance fractions of each endmember. In the first approach, we face two problems: (1) the number of samples is limited to the number of channels, and (2) the process of pixel selection, which plays the role of mixed sources, is not straightforward. In the second approach, ICA rests on the assumption of mutually independent sources, which does not hold for hyperspectral data, since the sum of the abundance fractions is constant, implying dependence among the abundances. This dependence compromises the applicability of ICA to hyperspectral images. In addition, hyperspectral data are immersed in noise, which degrades ICA performance.
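The sum-to-one dependence just described is easy to verify numerically: abundance fractions drawn on the simplex are necessarily negatively correlated, so they cannot be the independent sources ICA assumes.

```python
import numpy as np

rng = np.random.default_rng(2)

# Abundance fractions drawn on the simplex (they sum to one per pixel),
# as in the linear mixing model with three endmembers.
A = rng.dirichlet(np.ones(3), size=100_000)

# The sum-to-one constraint forces negative correlation between fractions,
# so the "sources" ICA would try to separate are not independent.
corr = np.corrcoef(A.T)
print(corr[0, 1])  # close to -0.5, the theoretical value for Dirichlet(1,1,1)
```

For a symmetric Dirichlet over K endmembers the pairwise correlation is -1/(K - 1), so the dependence weakens as the number of endmembers grows, consistent with the observation later in the chapter that separation improves with many endmembers.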
IFA [39] was introduced as a method for recovering independent hidden sources from their observed noisy mixtures. IFA implements two steps: first, the source densities and noise covariance are estimated from the observed data by maximum likelihood; second, the sources are reconstructed by an optimal nonlinear estimator. Although IFA is a well-suited technique for unmixing independent sources under noisy observations, the dependence among abundance fractions in hyperspectral imagery compromises, as in the ICA case, the IFA performance. Under the linear mixing model, hyperspectral observations lie in a simplex whose vertices correspond to the endmembers. Several approaches [40–43] have exploited this geometric feature of hyperspectral mixtures [42]. The minimum volume transform (MVT) algorithm [43] determines the simplex of minimum volume containing the data. MVT-type approaches are computationally complex: usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum-volume simplex to it. Aiming at lower computational complexity, some algorithms, such as vertex component analysis (VCA) [44], the pixel purity index (PPI) [42], and N-FINDR [45], still find the minimum-volume simplex containing the data cloud, but they assume the presence in the data of at least one pure pixel of each endmember. This is a strong requirement that may not hold in some data sets; in any case, these algorithms find the set of purest pixels in the data. Hyperspectral sensors collect spatial images over many narrow contiguous bands, yielding large amounts of data. For this reason, the processing of hyperspectral data, including unmixing, is very often preceded by a dimensionality reduction step to reduce computational complexity and to improve the signal-to-noise ratio (SNR).
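The pure-pixel assumption behind PPI/N-FINDR-style algorithms can be illustrated with a toy search on simulated data (hypothetical signatures, with one pure pixel of each endmember planted deliberately): project onto random directions and count how often each pixel is an extreme point, in the spirit of PPI.

```python
import numpy as np

rng = np.random.default_rng(3)
bands, pixels, p = 20, 1000, 3

# Simulated scene: mixtures of p signatures, with one pure pixel of each
# endmember planted at indices 0, 1, 2 (the key pure-pixel assumption).
M = rng.random((bands, p))
A = rng.dirichlet(np.full(p, 0.5), size=pixels).T
A[:, :p] = np.eye(p)
X = M @ A

# Toy pure-pixel search in the spirit of PPI: extremes of random 1-D
# projections of a simplex are attained at its vertices, i.e. pure pixels.
scores = np.zeros(pixels, dtype=int)
for _ in range(500):
    w = rng.standard_normal(bands)
    proj = w @ X
    scores[np.argmax(proj)] += 1
    scores[np.argmin(proj)] += 1

candidates = set(np.argsort(scores)[-p:])
print(candidates)  # the planted pure pixels {0, 1, 2}
```

If no pure pixel exists, the same procedure returns the most extreme mixed pixels instead, which is exactly why the pure-pixel requirement is called a strong one above.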
Principal component analysis (PCA) [46], maximum noise fraction (MNF) [47], and singular value decomposition (SVD) [48] are three well-known projection techniques widely used in remote sensing in general and in unmixing in particular. The newly introduced method of reference 49 exploits the structure of hyperspectral mixtures, namely the fact that spectral vectors are nonnegative. The computational complexity associated with these techniques is an obstacle to real-time implementations; to overcome this problem, band selection [50] and non-statistical [51] algorithms have been introduced. This chapter addresses hyperspectral data source dependence and its impact on ICA and IFA performance. The study considers simulated and real data and is based on mutual information minimization. Hyperspectral observations are described by a generative model that takes into account the degradation mechanisms normally found in hyperspectral applications, namely signature variability [52–54], abundance constraints, topography modulation, and system noise. The computation of mutual information is based on fitting mixtures of Gaussians (MOG) to the data; the MOG parameters (number of components, means, covariances, and weights) are inferred using a minimum description length (MDL)-based algorithm [55]. We study the behavior of the mutual information as a function of the unmixing matrix. The conclusion is that the unmixing matrix minimizing the mutual information might be very far from the true one. Nevertheless, some abundance fractions might be well separated, mainly in the presence of strong signature variability, a large number of endmembers, and high SNR. We end this chapter by sketching a new methodology to blindly unmix hyperspectral data, in which abundance fractions are modeled as a mixture of Dirichlet sources. This model enforces the positivity and constant-sum (full additivity) constraints on the sources. The mixing matrix is inferred by an expectation-maximization (EM)-type algorithm.
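Dimensionality reduction by PCA, as used in this preprocessing step, amounts to centering the data and keeping the leading singular vectors; a minimal sketch on simulated low-rank data (hypothetical sizes):

```python
import numpy as np

rng = np.random.default_rng(4)
bands, pixels = 100, 5000

# Low-rank "signal" (rank 4) plus noise, mimicking hyperspectral data.
X = rng.random((bands, 4)) @ rng.random((4, pixels))
X += 0.01 * rng.standard_normal((bands, pixels))

# PCA via SVD: center, decompose, keep the k leading components.
k = 4
mean = X.mean(axis=1, keepdims=True)
U, s, _ = np.linalg.svd(X - mean, full_matrices=False)
Y = U[:, :k].T @ (X - mean)       # reduced representation, k x pixels

# Reconstruction error is small because the signal lives in a
# k-dimensional affine subspace; only noise is discarded.
X_hat = U[:, :k] @ Y + mean
rel_err = np.linalg.norm(X - X_hat) / np.linalg.norm(X)
print(Y.shape, rel_err)
```

Discarding the trailing components removes mostly noise, which is why the reduction both lowers computational cost and improves the effective SNR, as noted above.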
This approach is in the vein of references 39 and 56, replacing the independent sources represented by a MOG with a mixture of Dirichlet sources. Compared with the geometric-based approaches, the advantage of this model is that there is no need for pure pixels in the observations. The chapter is organized as follows. Section 6.2 presents a spectral radiance model and formulates spectral unmixing as a linear problem accounting for abundance constraints, signature variability, topography modulation, and system noise. Section 6.3 presents a brief overview of the ICA and IFA algorithms. Section 6.4 illustrates the performance of IFA and of some well-known ICA algorithms on experimental data. Section 6.5 studies the limitations of ICA and IFA in unmixing hyperspectral data. Section 6.6 presents results of ICA based on real data. Section 6.7 describes the new blind unmixing scheme and gives some illustrative examples. Section 6.8 concludes with some remarks.

Relevance: 20.00%

Abstract:

The emergence of the World Wide Web has provided users with a wealth of opportunities for accessing data and information. This access has become an everyday act for any Web user, casual or experienced, whether to obtain basic information or more complex material. This technological advance has given users access to a vast amount of information, dispersed across the globe and, most of the time, without any kind of link between its sources. Gathering information of interest on a given topic, while having to turn to several sources to collect and compare everything one needs, is a time-consuming process for the user. The goal is to automate this process of collecting information from web pages as far as possible, giving the user automatic analysis and processing algorithms and tools, thereby reducing the time and effort spent on tasks over web pages. This process is called Web Scraping. This work describes an architecture for an automatic, configurable web scraping system based on existing technologies, namely in the context of the semantic web. To this end, the work analyzes the effects of applying Web Scraping through the following steps: • identification and analysis of several web scraping tools; • identification of the human-performed process that complements current web scraping tools; • design of an architecture, complementary to web scraping tools, that supports the user's web scraping process; • development of a prototype based on existing tools and technologies; • experiments in the application domain of Portuguese supermarket pages; • analysis of the results obtained from these experiments.
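The core of the scraping step can be sketched with nothing but the Python standard library; the page markup and class names below are hypothetical stand-ins for a supermarket product page, and a real system would fetch the page over HTTP and handle site-specific markup.

```python
from html.parser import HTMLParser

# Hypothetical product-page fragment standing in for a fetched page.
PAGE = """
<ul>
  <li class="product"><span class="name">Rice 1kg</span><span class="price">1.19</span></li>
  <li class="product"><span class="name">Olive oil</span><span class="price">4.99</span></li>
</ul>
"""

class ProductParser(HTMLParser):
    """Collect (name, price) pairs from spans with class 'name'/'price'."""

    def __init__(self):
        super().__init__()
        self.products = []
        self._field = None

    def handle_starttag(self, tag, attrs):
        cls = dict(attrs).get("class")
        if tag == "span" and cls in ("name", "price"):
            self._field = cls

    def handle_data(self, data):
        if self._field == "name":
            self.products.append({"name": data.strip()})
        elif self._field == "price":
            self.products[-1]["price"] = float(data)
        self._field = None

parser = ProductParser()
parser.feed(PAGE)
print(parser.products)
# [{'name': 'Rice 1kg', 'price': 1.19}, {'name': 'Olive oil', 'price': 4.99}]
```

The semantic-web angle of the thesis enters after this extraction step: once the scraped fields are typed (product, price), they can be published and queried as linked data rather than as loose strings.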

Relevance: 20.00%

Abstract:

Forensic anthropology is a forensic-science discipline concerned with the analysis of human skeletal remains for legal purposes. One of its best-known applications is forensic identification, which consists of determining the biological profile (age, sex, ancestry, and stature) of an individual. This process is often hampered, however, when the body is in an advanced state of decomposition and only skeletal remains exist. In this case, medical fields commonly used in cadaver identification, such as pathology, have to be set aside and other techniques must be applied. In this context, many anthropometric methods have been proposed for characterizing a person through the skeleton. Most of the suggested procedures, however, are based on basic measuring equipment and do not take advantage of contemporary technology. Thus, in partnership with the Northern Delegation of the NMLCF, I. P., this thesis set out to create a computational system based on Computed Tomography (CT) images of skeletal remains that enables forensic identification using open source tools. The work presented covers information management and the acquisition, processing, and visualization of CT images. In the course of this thesis, a database was developed to organize the information for each set of remains, and algorithms were implemented that allow a far broader feature extraction than can be performed manually with classical measuring equipment. The final result of this study is a set of techniques that can be brought together in a computational forensic identification system, thereby creating an application with clear technological advantages.

Relevance: 20.00%

Abstract:

Electric utilities suffer large revenue losses annually due to commercial losses, which are caused mainly by consumer fraud and faulty meters. Automatic detection of such losses is a complex problem, given the large number of consumers and the high cost of each inspection, not to mention the strain on the relationship between company and consumer. Given the above, this paper briefly presents some methodologies applied by utilities to identify consumer fraud.

Relevance: 20.00%

Abstract:

Diagnostic, parasite characterization, and identification studies were carried out in human patients with cutaneous leishmaniasis lesions in Santiago del Estero, a northern province of Argentina. Diagnostic procedures were biopsies of lesions for smears and inoculation in hamsters, and needle aspiration of material from ulcers for "in vitro" cultures. The immunodiagnostic techniques applied were IFAT-IgG and the Montenegro skin test. Primary isolation of eight stocks of leishmanial parasites was achieved from patients with active lesions. All stocks were biologically characterized by their behaviour in hamsters, measurements of amastigotes and promastigotes, and growth "in vitro". The eight stocks were characterized and identified at the species level by their reactivity to a cross-panel of sub-genus- and species-specific monoclonal antibodies through an indirect immunofluorescence technique and a Dot-ELISA. We conclude from the serodeme analysis of the Argentine stocks that stocks MHOM/AR/92/SE-1, SE-2, SE-4, SE-8, SE-8-I, SE-30, SE-34, and SE-36 are Leishmania (Viannia) braziliensis. Three Leishmania stocks (SE-1, SE-2, and SE-30) did not react with one highly species-specific monoclonal antibody (clone B-18, a Leishmania (Viannia) braziliensis marker), disclosing two serodeme group patterns. Five of the eight soluble extracts of leishmanial promastigotes were electrophoresed on thin-layer starch gels and examined for the enzymes MPI (mannose phosphate isomerase), MDH (malate dehydrogenase), 6PGD (6-phosphogluconate dehydrogenase), NH (nucleoside hydrolase, with 2-deoxyinosine as substrate), SOD (superoxide dismutase), GPI (glucose phosphate isomerase), and ES (esterase). From the isoenzyme studies we conclude that stocks MHOM/AR/92/SE-1, SE-2, SE-4, SE-8, and SE-8-I are isoenzymatically Leishmania (Viannia) braziliensis. We need to analyze more enzymes before assigning them to a braziliensis zymodeme.

Relevance: 20.00%

Abstract:

With the objective of standardizing a Dot Enzyme-Linked Immunosorbent Assay (Dot-ELISA) to detect antigens of fecal bacterial enteropathogens, 250 children aged under 36 months, of both sexes, were studied, of whom 162 had acute gastroenteritis. The efficacy of a rapid screening assay for bacterial enteropathogens (enteropathogenic Escherichia coli "EPEC", enteroinvasive Escherichia coli "EIEC", Salmonella spp. and Shigella spp.) was evaluated. The fecal samples were also submitted to a traditional stool culture method for comparison. The concordance between the two techniques, calculated using the Kappa (k) index for the above-mentioned bacterial strains, was 0.8859, 0.9055, 0.7932, and 0.7829, respectively. These values express an almost perfect degree of concordance for the first two and substantial concordance for the latter two, thus enabling this technique to be applied in the early diagnosis of diarrhea in infants. With a view to increasing the sensitivity and specificity of this immunological test, a study was made of the antigenic preparations obtained from two types of treatment: (1) deproteinization by heating; (2) precipitation and concentration of the lipopolysaccharide (LPS) antigen using an ethanol-acetone solution, which was then heated in the presence of sodium EDTA.
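Kappa concordance indices of the kind reported above follow Cohen's formula, which corrects observed agreement for the agreement expected by chance. The sketch below evaluates it on a hypothetical 2x2 agreement table; the counts are illustrative, not the study's data.

```python
# Cohen's kappa for agreement between two diagnostic methods
# (e.g. Dot-ELISA vs. stool culture) on a 2x2 table of counts.
# The counts used below are illustrative, not taken from the study.

def cohens_kappa(a: int, b: int, c: int, d: int) -> float:
    """a = both positive, b = test+/reference-, c = test-/reference+, d = both negative."""
    n = a + b + c + d
    p_obs = (a + d) / n                                         # observed agreement
    p_exp = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2    # chance agreement
    return (p_obs - p_exp) / (1 - p_exp)

k = cohens_kappa(40, 3, 4, 153)
print(round(k, 4))  # 0.8972
```

On the customary scale, values above 0.81 are read as "almost perfect" and 0.61 to 0.80 as "substantial" agreement, which is the interpretation applied to the four indices in the abstract.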

Relevance: 20.00%

Abstract:

We show here a simplified RT-PCR for the identification of dengue virus types 1 and 2. Five dengue virus strains isolated from Brazilian patients, and yellow fever vaccine 17DD as a negative control, were used in this study. C6/36 cells were infected and supernatants were collected after 7 days. The RT-PCR, done in a single reaction vessel, was carried out following a 1/10 dilution of virus in distilled water or in a detergent mixture containing Nonidet P40. The 50 µl reaction mixture included 50 pmol of specific primers amplifying a 482 base pair sequence for dengue type 1 and a 210 base pair sequence for dengue type 2. In other assays, we used dengue virus consensus primers with maximum sequence similarity to the four serotypes, amplifying a 511 base pair sequence. The reaction mixture also contained 0.1 mM of each of the four deoxynucleoside triphosphates, 7.5 U of reverse transcriptase, and 1 U of thermostable Taq DNA polymerase. The mixture was incubated for 5 minutes at 37ºC for reverse transcription, followed by 30 cycles of two-step PCR amplification (92ºC for 60 seconds, 53ºC for 60 seconds) with slow temperature increments. The PCR products were subjected to 1.7% agarose gel electrophoresis and visualized under UV light after staining with ethidium bromide solution. A virus titer as low as 10^3.6 TCID50/ml was detected by RT-PCR for dengue type 1. Specific DNA amplification was observed with all the Brazilian dengue strains using the dengue virus consensus primers. Compared with other RT-PCRs, this assay is less laborious, takes less time, and has a reduced risk of contamination.

Relevance: 20.00%

Abstract:

Crude Toxoplasma gondii antigens represent the raw material used to prepare reagents employed in different serologic tests for the diagnosis of toxoplasmosis, including the IgM and IgG indirect hemagglutination (IgG-HA and IgM-HA) tests. So far, the actual antigenic molecules of the parasite involved in the interaction with agglutinating anti-T. gondii antibodies in these tests are unknown. The absorption of serum samples from toxoplasmosis patients with the IgG-HA reagent (G-toxo-HA) demonstrated that the red cells of this reagent are coated with T. gondii antigens with Mr of 39, 35, 30, 27, 22, and 14 kDa. The immune-absorption process with the IgM-HA reagent (M-toxo-HA), in turn, provided antibody eluates that recognized antigenic bands of the parasite corresponding to Mr of 54, 35, and 30 kDa, implying that these antigens coat the red cells of this reagent. The identification of the most relevant antigens for each type of HA reagent seems useful for the inspection of the raw antigenic material, as well as of the reagent batches routinely produced. Moreover, the present findings can be used to modify these reagents in order to improve the performance of HA tests for the diagnosis of toxoplasmosis.

Relevance: 20.00%

Abstract:

Thesis submitted in fulfillment of the requirements for the degree of Master in Biomedical Engineering

Relevance: 20.00%

Abstract:

We present a case of prenatal diagnosis of congenital rubella. After birth, in addition to the traditional serologic and clinical examinations to confirm the infection, we were able to identify the virus in the first fluid aspirated from the oropharynx of the newborn, using the polymerase chain reaction (PCR). We propose that this first oropharyngeal fluid (collected routinely immediately after birth) could be used as a source for the identification of various congenital infection agents, which may not always be easily identified by current methods.