974 results for Acoustic signal classification
Abstract:
Hyperspectral imaging sensors provide image data containing both spectral and spatial information from the Earth's surface. The huge data volumes produced by these sensors put stringent requirements on communications, storage, and processing. This paper presents a method, termed hyperspectral signal subspace identification by minimum error (HySime), that infers the signal subspace and determines its dimensionality without any prior knowledge. The identification of this subspace enables a correct dimensionality reduction, yielding gains in algorithm performance and complexity and in data storage. The HySime method is unsupervised and fully automatic, i.e., it does not depend on any tuning parameters. The effectiveness of the proposed method is illustrated using simulated data based on U.S.G.S. laboratory spectra and real hyperspectral data collected by the AVIRIS sensor over Cuprite, Nevada.
Abstract:
Given a hyperspectral image, determining the number of endmembers and the subspace in which they live, without any prior knowledge, is crucial to the success of hyperspectral image analysis. This paper introduces a new minimum mean squared error based approach to infer the signal subspace in hyperspectral imagery. The method, termed hyperspectral signal identification by minimum error (HySime), is eigendecomposition based and does not depend on any tuning parameters. It first estimates the signal and noise correlation matrices and then selects the subset of eigenvalues that best represents the signal subspace in the least squared error sense. The effectiveness of the proposed method is illustrated using simulated data based on U.S.G.S. laboratory spectra and real hyperspectral data collected by the AVIRIS sensor over Cuprite, Nevada.
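The eigenvalue-selection step can be summarized compactly. The Python sketch below is not the authors' implementation: it assumes a per-pixel noise estimate is already available (the paper itself estimates it from the data) and applies a simplified version of the least squared error criterion, keeping an eigendirection only when the power of the observations along it outweighs twice the noise power along it.

```python
import numpy as np

def hysime_sketch(Y, W):
    """Simplified HySime-style signal subspace estimate.

    Y : (L, N) array of observed spectral vectors (bands x pixels).
    W : (L, N) array with a noise estimate for each pixel (assumed given).
    Returns the estimated subspace dimension k and an (L, k) basis.
    """
    L, N = Y.shape
    X = Y - W                        # rough signal component
    Rn = (W @ W.T) / N               # noise correlation matrix
    Rx = (X @ X.T) / N               # signal correlation matrix
    Ry = (Y @ Y.T) / N               # observation correlation matrix

    # Eigendecomposition of the signal correlation matrix (descending order).
    vals, E = np.linalg.eigh(Rx)
    E = E[:, np.argsort(vals)[::-1]]

    # Keep each eigendirection only if the observation power along it
    # exceeds twice the noise power along it (least squared error trade-off).
    p = np.einsum('li,lm,mi->i', E, Ry, E)   # power of Y along e_i
    s = np.einsum('li,lm,mi->i', E, Rn, E)   # noise power along e_i
    keep = (p - 2.0 * s) > 0
    return int(keep.sum()), E[:, keep]
```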
Abstract:
The development of high spatial resolution airborne and spaceborne sensors has improved the capability of ground-based data collection in the fields of agriculture, geography, geology, mineral identification, detection [2, 3], and classification [4–8]. The signal read by the sensor from a given spatial element of resolution and at a given spectral band is a mixture of components originating from the constituent substances, termed endmembers, located at that element of resolution. This chapter addresses hyperspectral unmixing, which is the decomposition of the pixel spectra into a collection of constituent spectra, or spectral signatures, and their corresponding fractional abundances indicating the proportion of each endmember present in the pixel [9, 10]. Depending on the mixing scale at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds when the mixing scale is macroscopic [13]. The nonlinear model holds when the mixing scale is microscopic (i.e., intimate mixtures) [14, 15]. The linear model assumes negligible interaction among distinct endmembers [16, 17]. The nonlinear model assumes that incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [18]. Under the linear mixing model, and assuming that the number of endmembers and their spectral signatures are known, hyperspectral unmixing is a linear problem, which can be addressed, for example, under the maximum likelihood setup [19], the constrained least-squares approach [20], spectral signature matching [21], the spectral angle mapper [22], and subspace projection methods [20, 23, 24]. Orthogonal subspace projection [23] reduces the data dimensionality, suppresses undesired spectral signatures, and detects the presence of a spectral signature of interest. The basic concept is to project each pixel onto a subspace that is orthogonal to the undesired signatures. As shown in Settle [19], the orthogonal subspace projection technique is equivalent to the maximum likelihood estimator. This projection technique was extended by three unconstrained least-squares approaches [24] (signature space orthogonal projection, oblique subspace projection, and target signature space orthogonal projection). Other works using the maximum a posteriori probability (MAP) framework [25] and projection pursuit [26, 27] have also been applied to hyperspectral data. In most cases the number of endmembers and their signatures are not known. Independent component analysis (ICA) is an unsupervised source separation process that has been applied with success to blind source separation, to feature extraction, and to unsupervised recognition [28, 29]. ICA consists of finding a linear decomposition of observed data yielding statistically independent components. Given that hyperspectral data are, under certain circumstances, linear mixtures, ICA comes to mind as a possible tool to unmix this class of data. In fact, the application of ICA to hyperspectral data has been proposed in reference 30, where endmember signatures are treated as sources and the mixing matrix is composed of the abundance fractions, and in references 9, 25, and 31–38, where sources are the abundance fractions of each endmember. In the first approach, we face two problems: (1) the number of samples is limited to the number of channels, and (2) the process of pixel selection, playing the role of mixed sources, is not straightforward.
In the second approach, ICA is based on the assumption of mutually independent sources, which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying dependence among abundances. This dependence compromises the applicability of ICA to hyperspectral images. In addition, hyperspectral data are immersed in noise, which degrades ICA performance. Independent factor analysis (IFA) [39] was introduced as a method for recovering independent hidden sources from their observed noisy mixtures. IFA implements two steps. First, source densities and noise covariance are estimated from the observed data by maximum likelihood. Second, sources are reconstructed by an optimal nonlinear estimator. Although IFA is a well-suited technique for unmixing independent sources under noisy observations, the dependence among abundance fractions in hyperspectral imagery compromises, as in the ICA case, IFA performance. Under the linear mixing model, hyperspectral observations lie in a simplex whose vertices correspond to the endmembers. Several approaches [40–43] have exploited this geometric feature of hyperspectral mixtures [42]. The minimum volume transform (MVT) algorithm [43] determines the simplex of minimum volume containing the data. MVT-type approaches are computationally complex. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. Aiming at lower computational complexity, some algorithms such as vertex component analysis (VCA) [44], the pixel purity index (PPI) [42], and N-FINDR [45] still find the minimum volume simplex containing the data cloud, but they assume the presence in the data of at least one pure pixel of each endmember. This is a strong requirement that may not hold in some data sets. In any case, these algorithms find the set of purest pixels in the data. Hyperspectral sensors collect spatial images over many narrow contiguous bands, yielding large amounts of data. For this reason, the processing of hyperspectral data, including unmixing, is very often preceded by a dimensionality reduction step to reduce computational complexity and to improve the signal-to-noise ratio (SNR). Principal component analysis (PCA) [46], maximum noise fraction (MNF) [47], and singular value decomposition (SVD) [48] are three well-known projection techniques widely used in remote sensing in general and in unmixing in particular. A newly introduced method [49] exploits the structure of hyperspectral mixtures, namely the fact that spectral vectors are nonnegative. The computational complexity associated with these techniques is an obstacle to real-time implementations. To overcome this problem, band selection [50] and non-statistical [51] algorithms have been introduced. This chapter addresses hyperspectral data source dependence and its impact on ICA and IFA performance. The study considers simulated and real data and is based on mutual information minimization. Hyperspectral observations are described by a generative model. This model takes into account the degradation mechanisms normally found in hyperspectral applications, namely signature variability [52–54], abundance constraints, topography modulation, and system noise. The computation of mutual information is based on fitting mixtures of Gaussians (MOG) to the data. The MOG parameters (number of components, means, covariances, and weights) are inferred using a minimum description length (MDL) based algorithm [55].
We study the behavior of the mutual information as a function of the unmixing matrix. The conclusion is that the unmixing matrix minimizing the mutual information might be very far from the true one. Nevertheless, some abundance fractions might be well separated, mainly in the presence of strong signature variability, a large number of endmembers, and high SNR. We end this chapter by sketching a new methodology to blindly unmix hyperspectral data, where abundance fractions are modeled as a mixture of Dirichlet sources. This model enforces the positivity and constant-sum (full additivity) constraints on the sources. The mixing matrix is inferred by an expectation-maximization (EM)-type algorithm. This approach is in the vein of references 39 and 56, replacing independent sources represented by MOG with a mixture of Dirichlet sources. Compared with the geometric-based approaches, the advantage of this model is that there is no need for pure pixels in the observations. The chapter is organized as follows. Section 6.2 presents a spectral radiance model and formulates spectral unmixing as a linear problem accounting for abundance constraints, signature variability, topography modulation, and system noise. Section 6.3 presents a brief overview of the ICA and IFA algorithms. Section 6.4 illustrates the performance of IFA and of some well-known ICA algorithms with experimental data. Section 6.5 studies the limitations of ICA and IFA in unmixing hyperspectral data. Section 6.6 presents results of ICA based on real data. Section 6.7 describes the new blind unmixing scheme and some illustrative examples. Section 6.8 concludes with some remarks.
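To make the linear mixing model at the heart of the chapter concrete, the short Python sketch below (not taken from the chapter; the endmember signatures, noise level, and Dirichlet parameters are arbitrary illustration values) generates synthetic mixtures y = Ma + n with abundances on the simplex and recovers them with a crude constrained least-squares step when the endmember matrix is known.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear mixing model: y = M a + n, with abundances a >= 0 and sum(a) = 1.
L, p, N = 50, 3, 1000                      # bands, endmembers, pixels
M = rng.uniform(0.0, 1.0, size=(L, p))     # hypothetical endmember signatures
A = rng.dirichlet(np.ones(p), size=N).T    # (p, N) abundances on the simplex
noise = 0.01 * rng.standard_normal((L, N))
Y = M @ A + noise                          # observed spectra

# Crude unmixing when M is known: unconstrained least squares followed by
# clipping and renormalization (a rough stand-in for the fully constrained
# least-squares estimators discussed in the chapter).
A_hat, *_ = np.linalg.lstsq(M, Y, rcond=None)
A_hat = np.clip(A_hat, 0.0, None)
A_hat /= A_hat.sum(axis=0, keepdims=True)

print("mean abundance error:", np.abs(A_hat - A).mean())
```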
Abstract:
Arguably, the most difficult task in text classification is to choose an appropriate set of features that allows machine learning algorithms to provide accurate classification. Most state-of-the-art techniques for this task involve careful feature engineering and a pre-processing stage, which may be too expensive in the emerging context of massive collections of electronic texts. In this paper, we propose efficient methods for text classification based on information-theoretic dissimilarity measures, which are used to define dissimilarity-based representations. These methods dispense with any feature design or engineering by mapping texts into a feature space using universal dissimilarity measures; in this space, classical classifiers (e.g., nearest neighbor or support vector machines) can then be used. The reported experimental evaluation of the proposed methods, on sentiment polarity analysis and authorship attribution problems, reveals that they approximate, and sometimes even outperform, previous state-of-the-art techniques, despite being much simpler, in the sense that they do not require any text pre-processing or feature engineering.
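The paper's specific dissimilarity measures are not reproduced here; as a minimal sketch of the general idea, the snippet below uses the normalized compression distance (one well-known information-theoretic dissimilarity) together with a 1-nearest-neighbor rule in the resulting dissimilarity space. The toy training texts are made up for illustration.

```python
import zlib

def clen(s: bytes) -> int:
    """Compressed length, a computable stand-in for Kolmogorov complexity."""
    return len(zlib.compress(s, 9))

def ncd(x: str, y: str) -> float:
    """Normalized compression distance between two texts."""
    bx, by = x.encode(), y.encode()
    cx, cy, cxy = clen(bx), clen(by), clen(bx + by)
    return (cxy - min(cx, cy)) / max(cx, cy)

def classify(text: str, labelled: list[tuple[str, str]]) -> str:
    """1-nearest-neighbor in the dissimilarity space: the query is compared
    to every labelled text and takes the label of the closest one."""
    return min(labelled, key=lambda pair: ncd(text, pair[0]))[1]

train = [("i loved this film, wonderful acting", "pos"),
         ("terrible plot, a waste of time", "neg")]
print(classify("what a wonderful and lovely movie", train))
```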
Abstract:
Thesis presented in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Electrical and Computer Engineering by the Universidade Nova de Lisboa, Faculdade de Ciências e Tecnologia.
Abstract:
Dissertation submitted in partial fulfilment of the requirements for the Degree of Master of Science in Geospatial Technologies.
Abstract:
Thesis submitted in fulfillment of the requirements for the Degree of Master in Biomedical Engineering.
Abstract:
Dissertation presented to obtain the Ph.D. degree in Biology by Universidade Nova de Lisboa, Instituto de Tecnologia Química e Biológica, Instituto Gulbenkian de Ciência.
Abstract:
This report, developed as part of a six-month curricular internship at the company ASL & Associados, covers the preparation and development of acoustic studies of buildings and of energy certificates for existing buildings. It begins with a description of the host company, presenting its mission, values, and portfolio, which includes design and design review, energy certification, acoustic testing, and project management and supervision. The activities carried out during the internship comprise the energy certification of existing buildings under the new thermal regulation (REH), presenting the calculation simplifications applicable to existing buildings and two specific examples: an autonomous dwelling unit and a single-family building. In the field of acoustics, three buildings were studied: a residential building, a mixed-use building, and a services building. For the acoustic conditioning of the case studies, the calculation methods used in the company were applied, together with a calculation method for assessing and classifying acoustic quality developed by the Laboratório Nacional de Engenharia Civil (LNEC). Finally, reflections on the internship experience were drawn up, along with considerations on the results obtained in view of the objectives set for the internship.
Abstract:
This dissertation presents the legislation in force in Portugal concerning building acoustics, as well as the existing European standards. As a case study, a building rehabilitated under the urban regeneration programme of Porto was analysed; the experimental work consisted of acoustic tests used to assess and validate the construction solutions specified in the building's execution design. For each construction solution, estimates were produced according to different methods prescribed in the standards and in the specialised literature. The acoustic assessment of the building was also carried out using the method prescribed by the Laboratório Nacional de Engenharia Civil (LNEC), resulting in a classification according to the parameters considered, which is presented in this work.
Abstract:
Underwater acoustic networks can be quite effective in establishing communication links between autonomous underwater vehicles (AUVs) and other vehicles or control units, enabling complex vehicle applications and control scenarios. A communications and control framework to support the use of underwater acoustic networks, together with sample application scenarios, is described for single and multi-AUV operation.
Abstract:
This paper analyzes the signals captured during impacts and vibrations of a mechanical manipulator. The Fourier transforms of eighteen different signals are calculated and approximated by trendlines based on a power-law formula. A sensor classification scheme based on the behavior of the frequency spectrum is presented.
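The eighteen signals and the exact trendline procedure are not available here, but the generic recipe (amplitude spectrum via the FFT, then a power-law fit by linear regression in log-log coordinates) can be sketched as follows; the synthetic impact-like signal and its parameters are illustrative assumptions.

```python
import numpy as np

def power_law_fit(signal, fs):
    """Fit |F(f)| ~ c * f**q to the amplitude spectrum of a signal.

    Returns (c, q), obtained by linear regression in log-log coordinates,
    which is the usual way such spectral trendlines are estimated.
    """
    n = len(signal)
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    mask = freqs > 0                      # drop the DC bin before taking logs
    q, log_c = np.polyfit(np.log(freqs[mask]), np.log(spectrum[mask]), 1)
    return np.exp(log_c), q

# Example with a synthetic decaying, impact-like signal (arbitrary parameters).
fs = 1000.0
t = np.arange(0, 1.0, 1.0 / fs)
x = np.exp(-5 * t) * np.sin(2 * np.pi * 50 * t)
c, q = power_law_fit(x, fs)
print(f"trendline: |F(f)| ~ {c:.3g} * f^{q:.2f}")
```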
Abstract:
Electromyography (EMG) is an important tool for gait analysis and the diagnosis of gait disorders. Traditional methods involve equipment that can disturb the analysis and are gradually being replaced by different approaches, such as wearable and wireless systems. Replacing cables with autonomous systems demands technologies capable of meeting the power constraints. This work presents the development of a wireless module for EMG and kinematic data capture, designed with power consumption issues in mind. The module captures and converts the analog myoelectric signal to digital form, synchronously with the capture of kinematic information. Both data streams are time multiplexed and sent to a PC via a Bluetooth link. The work carried out comprised the development of the hardware, the firmware, and a graphical interface running on an external PC. The hardware was developed using the PIC18F14K22, a low-power family of microcontrollers. The link was established via Bluetooth, a protocol designed for low-power communication. An application was also developed to recover the signal and trace it in a graphical user interface (GUI), coordinating the message exchange with the firmware. The results obtained allowed the conceived system to be validated in static conditions and with the subject performing short movements. Although it was not possible to perform tests with more dynamic movements, it is shown that it is possible to capture, transmit, and display the captured data as expected. Some suggestions to improve the system performance were also made.
Abstract:
Acute renal failure (ARF) is common after orthotopic liver transplantation (OLT). The aim of this study was to evaluate the prognostic value of the RIFLE classification for the development of chronic kidney disease (CKD), hemodialysis requirement, and mortality. Patients were categorized as risk (R), injury (I), or failure (F) according to renal function at days 1, 7, and 21. Final renal function was classified according to K/DIGO guidelines. We studied 708 OLT recipients, transplanted between September 1992 and March 2007; mean age 44 ± 12.6 years, mean follow-up 3.6 years (28.8% ≥ 5 years). Renal dysfunction before OLT was known in 21.6%. According to the RIFLE classification, ARF occurred in 33.2%: 16.8% were R class, 8.5% I class, and 7.9% F class. CKD developed in 45.6%, with stages 4 or 5d in 11.3%. Mortality for the R, I, and F classes was, respectively, 10.9%, 13.3%, and 39.3%. Severity of ARF correlated with the development of CKD: stage 3 was associated with all classes of ARF, whereas stages 4 and 5d were associated only with severe ARF. Hemodialysis requirement (23%) and mortality were correlated only with the most severe form of ARF (F class). In conclusion, the RIFLE classification is a useful tool to stratify the severity of early ARF, providing a prognostic indicator for the risk of CKD occurrence and death.
Abstract:
In this paper we study several natural and man-made complex phenomena from the perspective of dynamical systems. For each class of phenomena, the system outputs are time-series records obtained under identical conditions. The time series are viewed as manifestations of the system behavior and are processed to analyze the system dynamics. First, we use the Fourier transform to process the data and approximate the amplitude spectra by means of power law (PL) functions. We interpret the power law parameters as a phenomenological signature of the system dynamics. Second, we adopt the techniques of non-hierarchical clustering and multidimensional scaling to visualize hidden relationships between the complex phenomena. Third, we propose a vector field based analogy to interpret the patterns unveiled by the PL parameters.
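The multidimensional scaling step can be illustrated with a minimal sketch. The power-law parameter values below are made up purely to show the pipeline, and classical MDS is used here, which may differ from the exact variant adopted in the paper.

```python
import numpy as np

def classical_mds(D, dim=2):
    """Classical multidimensional scaling: embed points so that Euclidean
    distances approximate the given dissimilarity matrix D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J                 # double-centred Gram matrix
    vals, vecs = np.linalg.eigh(B)
    idx = np.argsort(vals)[::-1][:dim]
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))

# Each phenomenon is summarized by its power-law parameters (c, q); the
# values below are invented solely to illustrate the procedure.
params = np.array([[1.2, -0.8], [1.1, -0.9], [3.5, -1.6], [3.4, -1.7]])
D = np.linalg.norm(params[:, None, :] - params[None, :, :], axis=-1)
coords = classical_mds(D)
print(coords)   # 2-D map in which phenomena with similar dynamics lie close together
```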