Abstract:
Signal subspace identification is a crucial first step in many hyperspectral processing algorithms, such as target detection, change detection, classification, and unmixing. The identification of this subspace enables a correct dimensionality reduction, yielding gains in algorithm performance, complexity, and data storage. This paper introduces a new minimum mean squared error-based approach to infer the signal subspace in hyperspectral imagery. The method, termed hyperspectral signal identification by minimum error, is eigendecomposition based, unsupervised, and fully automatic (i.e., it does not depend on any tuning parameters). It first estimates the signal and noise correlation matrices and then selects the subset of eigenvalues that best represents the signal subspace in the least squared error sense. State-of-the-art performance of the proposed method is illustrated using simulated and real hyperspectral images.
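To make the eigendecomposition step concrete, here is a minimal NumPy sketch in the spirit of the method: it estimates the signal correlation matrix, eigendecomposes it, and keeps the eigendirections whose projected signal power dominates the projected noise power. The function name, the assumption that the noise correlation matrix has already been estimated, and the exact form of the inclusion test are illustrative choices, not the paper's precise criterion.

```python
import numpy as np

def subspace_mmse(Y, Rn):
    """Illustrative MMSE-flavoured signal subspace selection.

    Y  : (L, N) hyperspectral data, L bands by N pixels.
    Rn : (L, L) noise correlation matrix (assumed already estimated).
    """
    L, N = Y.shape
    Ry = Y @ Y.T / N                      # data correlation matrix
    Rs = Ry - Rn                          # signal correlation estimate
    w, E = np.linalg.eigh(Rs)             # eigendecomposition (ascending order)
    E = E[:, np.argsort(w)[::-1]]         # strongest signal directions first
    # Projected signal and noise power along each eigendirection e_i.
    p_signal = np.einsum('ij,jk,ki->i', E.T, Ry, E)
    p_noise = np.einsum('ij,jk,ki->i', E.T, Rn, E)
    keep = p_signal > 2.0 * p_noise       # assumed form of the MSE sign test
    return E[:, keep]                     # basis of the inferred signal subspace
```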
Abstract:
Independent component analysis (ICA) has recently been proposed as a tool to unmix hyperspectral data. ICA is founded on two assumptions: 1) the observed spectrum vector is a linear mixture of the constituent spectra (endmember spectra) weighted by the corresponding abundance fractions (sources); 2) the sources are statistically independent. Independent factor analysis (IFA) extends ICA to linear mixtures of independent sources immersed in noise. Concerning hyperspectral data, the first assumption is valid whenever the multiple scattering among the distinct constituent substances (endmembers) is negligible and the surface is partitioned according to the fractional abundances. The second assumption, however, is violated, since the sum of the abundance fractions associated with each pixel is constant due to physical constraints in the data acquisition process. Thus, the sources cannot be statistically independent, which compromises the performance of ICA/IFA algorithms in hyperspectral unmixing. This paper studies the impact of hyperspectral source statistical dependence on ICA and IFA performance. We conclude that the accuracy of these methods tends to improve as signature variability, the number of endmembers, and the signal-to-noise ratio increase. In any case, there are always endmembers that are incorrectly unmixed. We arrive at this conclusion by minimizing the mutual information of simulated and real hyperspectral mixtures. The computation of the mutual information is based on fitting mixtures of Gaussians to the observed data. A method to sort ICA and IFA estimates in terms of the likelihood of being correctly unmixed is proposed.
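The dependence induced by the sum-to-one constraint is easy to verify numerically. The snippet below draws abundance fractions from a symmetric Dirichlet distribution (a common simulation choice, not taken from the paper) and shows that the constraint forces the sources to be anti-correlated, which already contradicts the ICA independence assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
p = 3                                         # number of endmembers (illustrative)
S = rng.dirichlet(np.ones(p), size=100_000)   # abundance rows sum to 1
corr = np.corrcoef(S, rowvar=False)
print(np.round(corr, 3))
# Off-diagonal entries are about -1/(p-1) = -0.5 rather than 0: the
# sources are pairwise anti-correlated, so they cannot be independent.
```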
Abstract:
Given a set of mixed spectral (multispectral or hyperspectral) vectors, linear spectral mixture analysis, or linear unmixing, aims at estimating the number of reference substances, also called endmembers, their spectral signatures, and their abundance fractions. This paper presents a new method for unsupervised endmember extraction from hyperspectral data, termed vertex component analysis (VCA). The algorithm exploits two facts: (1) the endmembers are the vertices of a simplex, and (2) the affine transformation of a simplex is also a simplex. In a series of experiments using simulated and real data, the VCA algorithm competes with state-of-the-art methods, with a computational complexity between one and two orders of magnitude lower than that of the best available method.
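A toy sketch of how the simplex geometry can be exploited is given below: each iteration draws a direction orthogonal to the span of the endmembers found so far and takes the pixel with the largest absolute projection as the next vertex. This is only illustrative; the full VCA algorithm additionally handles the projective/affine projection and noise, which are omitted here.

```python
import numpy as np

def vca_sketch(Y, p, seed=0):
    """Toy endmember extraction in the spirit of VCA (illustrative only).

    Y : (L, N) mixed spectra, assumed already reduced to the signal subspace.
    p : number of endmembers to extract.
    """
    rng = np.random.default_rng(seed)
    L, N = Y.shape
    A = np.zeros((L, p))
    indices = []
    for k in range(p):
        # Random direction, with the component lying in the span of the
        # endmembers found so far projected out.
        f = rng.standard_normal(L)
        if k > 0:
            Q, _ = np.linalg.qr(A[:, :k])
            f = f - Q @ (Q.T @ f)
        f /= np.linalg.norm(f)
        # The pixel with the largest |projection| is a simplex vertex.
        idx = int(np.argmax(np.abs(f @ Y)))
        A[:, k] = Y[:, idx]
        indices.append(idx)
    return A, indices
```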
Abstract:
International conference with peer review: 2012 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), 22-27 July 2012, Munich, Germany.
Abstract:
In the new learning environments built with digital technologies, the need to produce quality educational resources, commonly known as Learning Objects, which can support formal and informal distance learning, emerges as one of the biggest challenges that educational institutions will have to face. Because producing such resources is expensive, their reuse and sharing become very important issues. This article presents a Learning Object Repository that aims to store, disseminate, and keep Learning Objects accessible.
Abstract:
Microarrays allow thousands of genes to be monitored simultaneously, quantifying the abundance of their transcripts under the same experimental condition at the same time. Among the various available array technologies, two-channel cDNA microarray experiments, the focus of this work, have arisen in numerous technical protocols associated with genomic studies. Microarray experiments involve many steps, and each one can affect the quality of the raw data. Background correction and normalization are preprocessing techniques used to clean and correct the raw data when undesirable fluctuations arise from technical factors. Several recent studies have shown that no preprocessing strategy outperforms the others in all circumstances, so it seems difficult to provide general recommendations. In this work, it is proposed to use exploratory techniques to visualize the effects of preprocessing methods on the statistical analysis of cancer two-channel microarray data sets in which the cancer types (classes) are known. The arrow plot was used to select differentially expressed genes, and the graph of profiles resulting from correspondence analysis was used to visualize the results. Six background-correction methods and six normalization methods were combined, yielding 36 preprocessing strategies, which were analyzed on a published cDNA microarray database (Liver), available at http://genome-www5.stanford.edu/, whose microarrays were already classified by cancer type. All statistical analyses were performed using the R statistical software.
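For illustration, one of the 36 background/normalization combinations might look like the sketch below for a single two-channel array: plain background subtraction followed by within-array median normalization of the log-ratios. The clipping floor and the particular method pairing are illustrative choices, not the ones evaluated in the study (which was carried out in R).

```python
import numpy as np

def preprocess(R_fg, R_bg, G_fg, G_bg):
    """One illustrative preprocessing combination for a two-channel array.

    Inputs are per-spot foreground/background intensities for the red (R)
    and green (G) channels, as 1-D arrays of equal length.
    """
    # Background correction: subtract local background, floored at 0.5 so
    # the log-ratio stays defined for spots dominated by background.
    R = np.maximum(R_fg - R_bg, 0.5)
    G = np.maximum(G_fg - G_bg, 0.5)
    M = np.log2(R / G)          # log-ratio (differential expression)
    A = 0.5 * np.log2(R * G)    # average log-intensity
    # Median normalization: centre the log-ratios so that the median
    # spot shows no differential expression.
    M = M - np.median(M)
    return M, A
```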
Abstract:
Background: An asynchronous eLearning system was developed for radiographers in order to promote better knowledge of senology and mammography. Objectives: To assess learners' satisfaction. Methods: The target population included radiographers and radiography students, in order to assess eLearning satisfaction across different experience levels in breast imaging. Satisfaction was measured through a questionnaire developed especially for eLearning systems, using a seven-point Likert scale. The main topics covered are content, interface, personalization, and learning community. Results: Overall, 85% of learners were satisfied with the course and 87.5% considered the course successful. The areas that most learners evaluated positively were interface and content (between six and seven points); on the other hand, the learning community presented a wider distribution of answers. Conclusions: The course provides an overall high degree of learner satisfaction, thus providing more effective knowledge gain in breast imaging for radiographers.
Abstract:
Conference: 2nd Experiment@ International Conference, 18-20 September 2013.
Abstract:
Conference: IEEE 24th International Conference on Application-Specific Systems, Architectures and Processors (ASAP), 5-7 June 2013.
Abstract:
Conference: 39th Annual Conference of the IEEE Industrial Electronics Society (IECON), 10-14 November 2013.
Abstract:
Conference: 2nd Experiment@ International Conference, University of Coimbra, Coimbra, Portugal, 18-20 September 2013.
Abstract:
Conference: 16th International Symposium on Wireless Personal Multimedia Communications (WPMC), 24-27 June 2013.
Abstract:
In the last decade, local image features have been widely used in robot visual localization. To assess image similarity, a strategy exploiting these features compares the raw descriptors extracted from the current image to those in the models of places. This paper addresses the ensuing step in this process, where a combining function must be used to aggregate results and assign each place a score. Casting the problem in the multiple classifier systems framework, we compare several candidate combiners with respect to their performance in the visual localization task. A deeper insight into the potential of the sum and product combiners is provided by testing two extensions of these algebraic rules: threshold and weighted modifications. In addition, a voting method previously used in robot visual localization is assessed. All combiners are tested on a visual localization task carried out on a public dataset. It is experimentally demonstrated that the sum rule extensions globally achieve the best performance. The voting method, while competitive with the algebraic rules in their standard form, is shown to be outperformed by both of their modified versions.
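A compact sketch of the combiners under comparison is given below: the sum and product rules, their threshold and weighted modifications, and a voting scheme, all operating on a classifier-by-place score matrix. The exact thresholding and weighting schemes of the paper are not reproduced here; this only shows how such combining functions are typically wired up.

```python
import numpy as np

def combine(scores, rule="sum", weights=None, threshold=None):
    """Score-level combiners for visual place recognition (a sketch).

    scores : (n_classifiers, n_places) similarity scores, assumed in [0, 1].
    """
    s = np.asarray(scores, dtype=float)
    if weights is not None:                       # weighted modification
        s = s * np.asarray(weights, dtype=float)[:, None]
    if threshold is not None:                     # threshold modification
        s = np.where(s >= threshold, s, 0.0)
    if rule == "sum":
        fused = s.sum(axis=0)
    elif rule == "product":
        fused = np.prod(s + 1e-12, axis=0)        # epsilon avoids zero lock-out
    elif rule == "vote":                          # each classifier votes once
        fused = np.bincount(np.argmax(s, axis=1),
                            minlength=s.shape[1]).astype(float)
    else:
        raise ValueError(f"unknown rule: {rule}")
    return int(np.argmax(fused))                  # index of the winning place
```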
Abstract:
Conference: 39th Annual Conference of the IEEE Industrial Electronics Society (IECON), Vienna, Austria, 10-14 November 2013.
Abstract:
It is commonly accepted that the educational environment has been undergoing considerable change due to the use of information and communication technologies (ICT). But learning depends upon actions such as experimenting, visualizing, and demonstrating, through which the learner succeeds in constructing his or her own knowledge. Although it is not easy to achieve these actions through current ICT-supported learning approaches, role-playing games (RPG) may well develop such capacities. The creation of an interactive computer game with RPG characteristics about the 500th anniversary of the city of Funchal, the capital of Madeira Island, carries compelling educational and pedagogical implications, aiming clearly at teaching history and social relations through play. Players interpret different characters in different settings/scenarios, experiencing adventures, meeting challenges, and trying to reach multiple simultaneous goals in the areas of education, entertainment, and social integration across the first 150 years of the history of Funchal. Through this process they will live and understand the social and historical factors of that epoch.