955 results for Random Subspace Method


Relevance:

20.00%

Publisher:

Abstract:

The choice of an information system is a critical success factor in an organization's performance: it involves multiple decision-makers with often conflicting objectives, and several aggressively marketed alternatives, which makes reaching a consensus particularly complex. The main objective of this work is the analysis and selection of an information system to support school management, in both its pedagogical and administrative components, using a multicriteria decision aid system, MMASSITI (Multicriteria Methodology to Support the Selection of Information Systems/Information Technologies). MMASSITI integrates a multicriteria model that provides a systematic approach to the choice of information systems and is able to produce sustained recommendations within the decision scope. Its application to a case study identified the relevant factors in the selection of a school educational and management information system and yielded a solution that allows the decision maker to compare the quality of the various alternatives.
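The core of a multicriteria model like the one described is the aggregation of per-criterion scores into a ranking. A minimal Python sketch of a weighted-sum aggregation follows; the criteria names, weights and scores are purely illustrative and are not taken from MMASSITI itself.

```python
import numpy as np

# Hypothetical criteria weights (cost, functionality, usability); in a real
# multicriteria study these would come from the decision-makers.
criteria_weights = np.array([0.4, 0.35, 0.25])

# Rows: candidate school information systems; values: normalized scores in [0, 1].
alternatives = {
    "System A": np.array([0.9, 0.6, 0.7]),
    "System B": np.array([0.5, 0.9, 0.8]),
    "System C": np.array([0.7, 0.7, 0.6]),
}

def rank(alts, weights):
    """Return alternatives sorted by weighted aggregate score, best first."""
    scored = {name: float(scores @ weights) for name, scores in alts.items()}
    return sorted(scored.items(), key=lambda kv: kv[1], reverse=True)

for name, score in rank(alternatives, criteria_weights):
    print(f"{name}: {score:.3f}")
```

A full multicriteria methodology adds normalization, weight elicitation and sensitivity analysis on top of this aggregation step.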


One of the main problems of hyperspectral data analysis is the presence of mixed pixels due to the low spatial resolution of such images. Linear spectral unmixing aims at inferring pure spectral signatures and their fractions at each pixel of the scene. The huge data volumes acquired by hyperspectral sensors put stringent requirements on processing and unmixing methods. This letter proposes an efficient implementation of the method called simplex identification via split augmented Lagrangian (SISAL) which exploits the graphics processing unit (GPU) architecture at low level using Compute Unified Device Architecture. SISAL aims to identify the endmembers of a scene, i.e., is able to unmix hyperspectral data sets in which the pure pixel assumption is violated. The proposed implementation is performed in a pixel-by-pixel fashion using coalesced accesses to memory and exploiting shared memory to store temporary data. Furthermore, the kernels have been optimized to minimize the threads divergence, therefore achieving high GPU occupancy. The experimental results obtained for the simulated and real hyperspectral data sets reveal speedups up to 49 times, which demonstrates that the GPU implementation can significantly accelerate the method's execution over big data sets while maintaining the methods accuracy.
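The pixel-by-pixel mapping works because, under the linear model with known endmembers, every pixel can be unmixed independently. A minimal Python/numpy sketch (a pseudoinverse least-squares stand-in, not the SISAL CUDA kernels) illustrates this data parallelism: one batched operation plays the role of one thread per pixel.

```python
import numpy as np

rng = np.random.default_rng(0)

bands, p, n_pixels = 50, 3, 1000
M = rng.random((bands, p))                 # endmember signatures (assumed known)
A = rng.dirichlet(np.ones(p), n_pixels).T  # true abundances, columns sum to one
Y = M @ A                                  # noise-free mixed pixels

# "Pixel-parallel" unmixing: the pseudoinverse is computed once and applied
# to all pixels simultaneously, mirroring a GPU's one-thread-per-pixel layout.
M_pinv = np.linalg.pinv(M)
A_hat = M_pinv @ Y

print(np.allclose(A_hat, A, atol=1e-8))  # → True
```

SISAL itself solves a much harder problem (fitting a minimum-volume simplex without pure pixels), but its per-pixel steps parallelize in exactly this fashion.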


The parallel hyperspectral unmixing problem is considered in this paper. A semisupervised approach is developed under the linear mixture model, in which the physical constraints on the abundances are taken into account. The proposed approach relies on the increasing availability of spectral libraries of materials measured on the ground, instead of resorting to endmember extraction methods. Since libraries are potentially very large and hyperspectral datasets are of high dimensionality, a parallel implementation in a pixel-by-pixel fashion is derived that properly exploits the graphics processing unit (GPU) architecture at low level, thus taking full advantage of the computational power of GPUs. Experimental results obtained for real hyperspectral datasets reveal significant speedup factors, up to 164 times, with regard to an optimized serial implementation.
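Library-based unmixing under the abundance nonnegativity constraint amounts to a nonnegative least-squares fit of each pixel against the library. The following is a minimal sketch using projected gradient descent, written in plain numpy as an assumption of how such a solver can look; it is not the paper's GPU implementation.

```python
import numpy as np

def nnls_pg(L, y, iters=2000):
    """Solve min ||L a - y||^2 s.t. a >= 0 by projected gradient descent."""
    step = 1.0 / np.linalg.norm(L.T @ L, 2)   # 1 / Lipschitz constant of the gradient
    a = np.zeros(L.shape[1])
    for _ in range(iters):
        # Gradient step, then projection onto the nonnegative orthant.
        a = np.maximum(0.0, a - step * (L.T @ (L @ a - y)))
    return a

rng = np.random.default_rng(1)
L = rng.random((60, 8))           # spectral library: 8 candidate materials
a_true = np.zeros(8)
a_true[[1, 4]] = [0.7, 0.3]       # only two materials are actually present
y = L @ a_true                    # observed pixel spectrum

a_hat = nnls_pg(L, y)
print(np.allclose(a_hat, a_true, atol=1e-3))  # → True
```

Because each pixel's problem is independent, running this solver for every pixel is the part that maps naturally onto the GPU.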


Many hyperspectral imagery applications require a response in real time or near real time. To meet this requirement, this paper proposes a parallel unmixing method developed for graphics processing units (GPUs). The method is based on vertex component analysis (VCA), a geometry-based and highly parallelizable method. VCA is a very fast and accurate method that extracts endmember signatures from large hyperspectral datasets without using any a priori knowledge about the constituent spectra. Experimental results obtained for simulated and real hyperspectral datasets reveal considerable acceleration factors, up to 24 times.
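To give a feel for the geometric idea, here is a much-simplified endmember extraction sketch in the spirit of VCA (closer to successive orthogonal projections than to the full algorithm, which adds random projection directions and dimensionality reduction): repeatedly pick the pixel farthest from the subspace spanned by the endmembers found so far. All sizes and data below are synthetic.

```python
import numpy as np

def extract_endmembers(Y, p):
    """Y: (bands, pixels) data matrix; p: number of endmembers to find."""
    idx = [int(np.argmax(np.linalg.norm(Y, axis=0)))]   # start from an extreme pixel
    for _ in range(p - 1):
        E = Y[:, idx]                                   # endmembers found so far
        P = np.eye(Y.shape[0]) - E @ np.linalg.pinv(E)  # projector onto span(E)⊥
        resid = np.linalg.norm(P @ Y, axis=0)           # distance of each pixel to span(E)
        idx.append(int(np.argmax(resid)))
    return idx

rng = np.random.default_rng(2)
M = rng.random((30, 3))                     # ground-truth endmember signatures
A = rng.dirichlet(np.ones(3) * 0.3, 500).T  # abundances, columns sum to one
Y = np.hstack([M, M @ A])                   # pure pixels (cols 0-2) plus mixtures

found = sorted(extract_endmembers(Y, 3))
print(found)  # → [0, 1, 2]
```

Since mixed pixels are convex combinations of the endmembers, the maximum of each (convex) distance function is attained at a pure pixel, which is why the sketch recovers columns 0-2.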


The development of high spatial resolution airborne and spaceborne sensors has improved the capability of ground-based data collection in the fields of agriculture, geography, geology, mineral identification, detection [2, 3], and classification [4–8]. The signal read by the sensor from a given spatial element of resolution and at a given spectral band is a mixture of components originating from the constituent substances, termed endmembers, located at that element of resolution. This chapter addresses hyperspectral unmixing, which is the decomposition of the pixel spectra into a collection of constituent spectra, or spectral signatures, and their corresponding fractional abundances indicating the proportion of each endmember present in the pixel [9, 10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds when the mixing scale is macroscopic [13]. The nonlinear model holds when the mixing scale is microscopic (i.e., intimate mixtures) [14, 15]. The linear model assumes negligible interaction among distinct endmembers [16, 17]. The nonlinear model assumes that incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [18]. Under the linear mixing model, and assuming that the number of endmembers and their spectral signatures are known, hyperspectral unmixing is a linear problem, which can be addressed, for example, under the maximum likelihood setup [19], the constrained least-squares approach [20], spectral signature matching [21], the spectral angle mapper [22], and the subspace projection methods [20, 23, 24]. Orthogonal subspace projection [23] reduces the data dimensionality, suppresses undesired spectral signatures, and detects the presence of a spectral signature of interest. The basic concept is to project each pixel onto a subspace that is orthogonal to the undesired signatures.
As shown in Settle [19], the orthogonal subspace projection technique is equivalent to the maximum likelihood estimator. This projection technique was extended by three unconstrained least-squares approaches [24] (signature space orthogonal projection, oblique subspace projection, and target signature space orthogonal projection). Other works using the maximum a posteriori probability (MAP) framework [25] and projection pursuit [26, 27] have also been applied to hyperspectral data. In most cases the number of endmembers and their signatures are not known. Independent component analysis (ICA) is an unsupervised source separation process that has been applied with success to blind source separation, feature extraction, and unsupervised recognition [28, 29]. ICA consists in finding a linear decomposition of the observed data into statistically independent components. Given that hyperspectral data are, in given circumstances, linear mixtures, ICA comes to mind as a possible tool to unmix this class of data. In fact, the application of ICA to hyperspectral data has been proposed in reference 30, where endmember signatures are treated as sources and the mixing matrix is composed of the abundance fractions, and in references 9, 25, and 31–38, where the sources are the abundance fractions of each endmember. In the first approach, we face two problems: (1) the number of samples is limited to the number of channels, and (2) the process of pixel selection, playing the role of mixed sources, is not straightforward. In the second approach, ICA is based on the assumption of mutually independent sources, which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying dependence among the abundances. This dependence compromises the applicability of ICA to hyperspectral images. In addition, hyperspectral data are immersed in noise, which degrades ICA performance.
IFA [39] was introduced as a method for recovering independent hidden sources from their observed noisy mixtures. IFA implements two steps. First, source densities and noise covariance are estimated from the observed data by maximum likelihood. Second, sources are reconstructed by an optimal nonlinear estimator. Although IFA is a well-suited technique to unmix independent sources under noisy observations, the dependence among abundance fractions in hyperspectral imagery compromises, as in the ICA case, the IFA performance. Under the linear mixing model, hyperspectral observations lie in a simplex whose vertices correspond to the endmembers. Several approaches [40–43] have exploited this geometric feature of hyperspectral mixtures [42]. The minimum volume transform (MVT) algorithm [43] determines the simplex of minimum volume containing the data. The MVT-type approaches are complex from the computational point of view. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum-volume simplex to it. Aiming at a lower computational complexity, some algorithms such as vertex component analysis (VCA) [44], the pixel purity index (PPI) [42], and N-FINDR [45] still find the minimum-volume simplex containing the data cloud, but they assume the presence in the data of at least one pure pixel of each endmember. This is a strong requirement that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. Hyperspectral sensors collect spatial images over many narrow contiguous bands, yielding large amounts of data. For this reason, the processing of hyperspectral data, including unmixing, is very often preceded by a dimensionality reduction step to reduce computational complexity and to improve the signal-to-noise ratio (SNR).
Principal component analysis (PCA) [46], maximum noise fraction (MNF) [47], and singular value decomposition (SVD) [48] are three well-known projection techniques widely used in remote sensing in general and in unmixing in particular. The newly introduced method [49] exploits the structure of hyperspectral mixtures, namely the fact that spectral vectors are nonnegative. The computational complexity associated with these techniques is an obstacle to real-time implementations. To overcome this problem, band selection [50] and non-statistical [51] algorithms have been introduced. This chapter addresses hyperspectral data source dependence and its impact on ICA and IFA performance. The study considers simulated and real data and is based on mutual information minimization. Hyperspectral observations are described by a generative model. This model takes into account the degradation mechanisms normally found in hyperspectral applications, namely signature variability [52–54], abundance constraints, topography modulation, and system noise. The computation of mutual information is based on fitting mixtures of Gaussians (MOG) to the data. The MOG parameters (number of components, means, covariances, and weights) are inferred using a minimum description length (MDL)-based algorithm [55]. We study the behavior of the mutual information as a function of the unmixing matrix. The conclusion is that the unmixing matrix minimizing the mutual information might be very far from the true one. Nevertheless, some abundance fractions might be well separated, mainly in the presence of strong signature variability, a large number of endmembers, and high SNR. We end this chapter by sketching a new methodology to blindly unmix hyperspectral data, where abundance fractions are modeled as a mixture of Dirichlet sources. This model enforces the positivity and constant-sum (full additivity) constraints on the sources. The mixing matrix is inferred by an expectation-maximization (EM)-type algorithm.
This approach is in the vein of references 39 and 56, replacing the independent sources represented by MOG with a mixture of Dirichlet sources. Compared with the geometric-based approaches, the advantage of this model is that there is no need for pure pixels in the observations. The chapter is organized as follows. Section 6.2 presents a spectral radiance model and formulates spectral unmixing as a linear problem accounting for abundance constraints, signature variability, topography modulation, and system noise. Section 6.3 presents a brief overview of the ICA and IFA algorithms. Section 6.4 illustrates the performance of IFA and of some well-known ICA algorithms on experimental data. Section 6.5 studies the limitations of ICA and IFA in unmixing hyperspectral data. Section 6.6 presents results of ICA on real data. Section 6.7 describes the new blind unmixing scheme and some illustrative examples. Section 6.8 concludes with some remarks.
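The generative model discussed above (linear mixtures with abundances that are nonnegative and sum to one, plus system noise) can be sketched in a few lines of Python. Dirichlet-distributed abundances satisfy both constraints by construction; the sizes and noise level below are illustrative assumptions, not the chapter's settings.

```python
import numpy as np

rng = np.random.default_rng(3)

bands, p, n_pixels = 224, 4, 2000
M = rng.random((bands, p))                  # endmember signatures
A = rng.dirichlet(np.ones(p), n_pixels).T   # abundances: Dirichlet per pixel
noise = 0.001 * rng.standard_normal((bands, n_pixels))
Y = M @ A + noise                           # observed data: y = M a + n

# The abundance constraints that the model enforces by construction:
assert np.all(A >= 0)                       # nonnegativity
assert np.allclose(A.sum(axis=0), 1.0)      # sum-to-one (full additivity)
print(Y.shape)  # → (224, 2000)
```

Signature variability and topography modulation, which the chapter also models, would enter as per-pixel perturbations of M and a multiplicative scale on each column of Y.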


In this study, we sought to assess the applicability of GC–MS/MS for the identification and quantification of 36 pesticides in strawberries from integrated pest management (IPM) and organic farming (OF). Citrate versions of QuEChERS (quick, easy, cheap, effective, rugged and safe) using dispersive solid-phase extraction (d-SPE) and disposable pipette extraction (DPX) for cleanup were compared for pesticide extraction. For cleanup, a combination of MgSO4, primary secondary amine and C18 was used in both versions. Significant differences were observed in the recovery results between the two sample preparation versions (DPX and d-SPE). Overall, 86% of the pesticides achieved recoveries (at three spiking levels: 10, 50 and 200 µg/kg) in the range of 70–120%, with <13% RSD. Matrix effects were also evaluated in both versions and in strawberries from the different crop types. Although no significant differences between the two methodologies were observed, the DPX cleanup proved to be faster and easier to execute. The results indicate that QuEChERS with d-SPE or DPX followed by GC–MS/MS analysis achieved reliable quantification and identification of the 36 pesticide residues in strawberries from OF and IPM.
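The recovery and RSD figures quoted above are simple ratios over spiked replicates. A short Python sketch of the arithmetic follows; the replicate values are made up for illustration and are not data from the study.

```python
import numpy as np

def recovery_and_rsd(measured, spiked_level):
    """Mean recovery (%) and relative standard deviation (%) of replicates."""
    measured = np.asarray(measured, dtype=float)
    recovery = 100.0 * measured.mean() / spiked_level
    rsd = 100.0 * measured.std(ddof=1) / measured.mean()
    return recovery, rsd

# Hypothetical replicate results (µg/kg) at the 50 µg/kg spiking level.
replicates = [47.5, 51.0, 49.2, 48.8, 50.5]
rec, rsd = recovery_and_rsd(replicates, 50.0)
print(f"recovery = {rec:.1f}%, RSD = {rsd:.1f}%")  # → recovery = 98.8%, RSD = 2.8%

# The acceptance window used in the abstract: 70-120% recovery, RSD < 13%.
assert 70 <= rec <= 120 and rsd < 13
```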


A simple method of rubella antigen production by treatment with sodium desoxycholate, for use in an enzyme immunoassay (IMT-ELISA), is presented. When this assay was compared with a commercial test (Enzygnost-Rubella, Behring) in a study of 108 sera and 118 filter-paper blood samples, 96.9% (219/226) overall agreement and a correlation coefficient of 0.90 between absorbances were observed. Seven samples showed discordant results, negative by the commercial kit and positive by our test. Four of those 7 samples were available; 3 of them were positive by HI.


More than ever, the number of decision support methods and computer-aided diagnostic systems applied to various areas of medicine is increasing. In breast cancer research, much work has been done to reduce false positives when such systems are used as a double-reading method. In this study, we present a set of data mining techniques applied to build a decision support system for breast cancer diagnosis. The method is geared to assist clinical practice in identifying mammographic findings such as microcalcifications, masses and even normal tissue, in order to avoid misdiagnosis. A reliable database was used, with 410 images from about 115 patients, containing findings previously reviewed by radiologists as microcalcifications, masses and normal tissue. Two feature extraction techniques were used: the gray-level co-occurrence matrix and the gray-level run-length matrix. For classification, we considered various scenarios according to distinct patterns of lesions, and several classifiers, in order to determine the best performance in each case. The classifiers used were Naïve Bayes, Support Vector Machines, k-Nearest Neighbors and Decision Trees (J48 and Random Forests). The results in distinguishing mammographic findings revealed high positive predictive values (PPV) and very good accuracy. Results are also presented for the classification of breast density and the BI-RADS® scale. The best predictive method for all tested groups was the Random Forest classifier, and the best performance was achieved in the distinction of microcalcifications. The conclusions based on the several tested scenarios represent a new perspective in breast cancer diagnosis using data mining techniques.
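The gray-level co-occurrence matrix (GLCM) mentioned as a feature extractor counts how often pairs of gray levels occur at a fixed pixel offset. A minimal numpy sketch for a single offset follows, with two Haralick-style scalar features derived from it; it is a stand-in for the study's feature pipeline, not its code.

```python
import numpy as np

def glcm(image, levels, dx=1, dy=0):
    """Count co-occurrences of gray levels at offset (dy, dx)."""
    m = np.zeros((levels, levels), dtype=int)
    h, w = image.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[image[y, x], image[y + dy, x + dx]] += 1
    return m

# Tiny 3-level test image; a mammographic ROI would be quantized similarly.
img = np.array([[0, 0, 1],
                [0, 2, 1],
                [2, 2, 1]])
m = glcm(img, levels=3)

# Normalize to joint probabilities, then derive texture features.
p_norm = m / m.sum()
contrast = sum(p_norm[i, j] * (i - j) ** 2 for i in range(3) for j in range(3))
energy = float((p_norm ** 2).sum())
print(m)
```

In the study, vectors of such features (over several offsets and statistics) feed the classifiers listed above.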


Faeces of 138 chickens were inoculated on Blaser agar plates. One set of plates was incubated in jars with CampyPak envelopes. The others were incubated in "Zip-lock" plastic bags (7 x X in.), and a microaerophilic atmosphere was generated by exhaling into the bag after holding the breath for 20 sec. The bag was then pressed to evacuate its atmosphere, inflated again, and pressed again (4 times), and finally sealed. Campylobacter was isolated from 127 (96.2%) of the samples incubated in jars with gas-generator envelopes and from 129 (98%) of the specimens incubated in the bags. The proposed methodology offers good savings for cost-conscious laboratories.


The prevalence of Strongyloides stercoralis infection in three areas of Brazil was surveyed by a recently developed faecal culture method (an agar plate culture). Strongyloides infection was confirmed in 11.3% of the 432 subjects examined. The diagnostic efficacy of the agar plate culture was as high as 93.9%, compared to only 28.5% and 26.5% for the Harada-Mori filter paper culture and faecal concentration methods, respectively, when faecal samples were examined simultaneously by the three methods. Among the 49 positive samples, about 60% were confirmed to be positive only by the agar plate culture. These results indicate that the agar plate culture is a sensitive new tool for the correct diagnosis of chronic Strongyloides infection.


A simple and practical method to stain Malassezia furfur and Corynebacterium minutissimum in scales from lesions is described. The scales are collected by pressing small pieces of Scotch tape (about 4 cm in length and 2 cm in width) onto the lesions; upon withdrawal, the furfuraceous scales remain on the glue side. These pieces are then immersed for some minutes in lactophenol cotton blue stain. After absorption of the stain, the scales are washed in running water to remove the excess stain, dried with filter paper, dehydrated by passage through two bottles containing absolute alcohol, and then placed in xylene in a centrifugation tube. The xylene dissolves the tape glue and the scales fall free in the tube. After centrifugation and decantation, the scales concentrated at the bottom of the tube are collected with a platinum loop, placed in Canada balsam on a microscope slide and covered with a cover slip. The preparations are then ready for microscopic examination. Other stains may also be used instead of lactophenol cotton blue. This method is simple, easily performed, and offers good conditions for studying these fungi, as well as being useful for the diagnosis of the diseases they cause.


Sustainable Construction, Materials and Practice, p. 426-432


We show here a simplified RT-PCR for the identification of dengue virus types 1 and 2. Five dengue virus strains isolated from Brazilian patients, and yellow fever vaccine 17DD as a negative control, were used in this study. C6/36 cells were infected and supernatants were collected after 7 days. The RT-PCR, done in a single reaction vessel, was carried out following a 1/10 dilution of virus in distilled water or in a detergent mixture containing Nonidet P40. The 50 µl reaction mixture included 50 pmol of specific primers amplifying a 482 base pair sequence for dengue type 1 and a 210 base pair sequence for dengue type 2. In other assays, we used dengue virus consensus primers having maximum sequence similarity to the four serotypes, amplifying a 511 base pair sequence. The reaction mixture also contained 0.1 mM of the four deoxynucleoside triphosphates, 7.5 U of reverse transcriptase, and 1 U of thermostable Taq DNA polymerase. The mixture was incubated for 5 minutes at 37°C for reverse transcription, followed by 30 cycles of two-step PCR amplification (92°C for 60 seconds, 53°C for 60 seconds) with a slow temperature increment. The PCR products were subjected to 1.7% agarose gel electrophoresis and visualized under UV light after staining with ethidium bromide solution. A low virus titre of around 10^3.6 TCID50/ml was detected by RT-PCR for dengue type 1. Specific DNA amplification was observed with all the Brazilian dengue strains using the dengue virus consensus primers. Compared to other RT-PCRs, this assay is less laborious, takes a shorter time, and has a reduced risk of contamination.
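The cycling protocol described above can be encoded as plain data, which is a convenient way to double-check a thermal-cycler program against the text. The sketch below is only a reading of the abstract's stated steps; the computed duration excludes ramping and hold times.

```python
# Protocol as described: 5 min reverse transcription at 37 °C, then
# 30 two-step PCR cycles of 92 °C / 60 s and 53 °C / 60 s.
protocol = {
    "reverse_transcription": [(37, 300)],   # (°C, seconds)
    "pcr_cycles": 30,
    "cycle_steps": [(92, 60), (53, 60)],    # denaturation, annealing/extension
}

def total_seconds(p):
    """Sum programmed step durations (ignores temperature ramping)."""
    rt = sum(t for _, t in p["reverse_transcription"])
    cycling = p["pcr_cycles"] * sum(t for _, t in p["cycle_steps"])
    return rt + cycling

minutes = total_seconds(protocol) / 60
print(f"minimum programmed run time = {minutes:.0f} min")  # → 65 min
```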


The phlebotomine sand fly Lutzomyia longipalpis has been incriminated as a vector of American visceral leishmaniasis, caused by Leishmania chagasi. However, evidence has accumulated suggesting that it may exist in nature not as a single species but as a species complex. Our goal was to compare four laboratory reference populations of L. longipalpis from distinct geographic regions at the molecular level by RAPD-PCR. We screened genomic DNA for polymorphic sites by PCR amplification with decamer single primers of arbitrary nucleotide sequence. One primer distinguished one population (Marajó Island, Pará State, Brazil) from the other three (Lapinha Cave, Minas Gerais State, Brazil; Melgar, Tolima Department, Colombia; and Liberia, Guanacaste Province, Costa Rica). The population-specific and the conserved RAPD-PCR-amplified fragments were cloned and shown to differ only in the number of internal repeats.