957 results for linear prediction signal subspace fitting
Abstract:
A technique for constructing finite point constellations in n-dimensional spaces from ideals in rings of algebraic integers is described. An algorithm is presented for finding constellations with minimum average energy from a given lattice. For comparison, a numerical table of lattice constellations and group codes is computed for spaces of dimension two, three, and four. © 2001.
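As a rough illustration of the idea, the sketch below carves a finite constellation out of a lattice by keeping the lowest-energy points around a trial centre and re-centring on their centroid; the lattice basis, constellation size, and search radius are assumptions, and this is not the paper's algorithm.

```python
# A minimal sketch (not the paper's algorithm): pick the M lattice points of
# least energy about the origin, then re-centre on the centroid to reduce
# the average energy of the constellation.
import itertools
import numpy as np

def min_energy_constellation(basis, M, coord_range=4):
    """basis: n x n matrix whose rows generate the lattice."""
    n = basis.shape[0]
    coords = itertools.product(range(-coord_range, coord_range + 1), repeat=n)
    points = np.array(list(coords)) @ basis                  # lattice points
    order = np.argsort(np.sum(points**2, axis=1))            # energy about origin
    chosen = points[order[:M]]
    chosen = chosen - chosen.mean(axis=0)                    # re-centre on centroid
    avg_energy = np.mean(np.sum(chosen**2, axis=1))
    return chosen, avg_energy

# Example: a 16-point constellation from the hexagonal (A2) lattice.
A2 = np.array([[1.0, 0.0], [0.5, np.sqrt(3) / 2]])
const, energy = min_energy_constellation(A2, M=16)
print(f"average energy: {energy:.4f}")
```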
Abstract:
Structural health monitoring (SHM) concerns the ability to monitor the state of aerospace, civil, and mechanical systems and to decide the level of damage or deterioration within them. In this sense, this paper deals with the application of a two-step auto-regressive and auto-regressive with exogenous inputs (AR-ARX) model for linear prediction in damage diagnosis of structural systems. The damage detection algorithm is based on monitoring residual errors as damage-sensitive indexes, obtained from vibration response measurements. In complex structures there are many positions under observation and a large amount of data to be handled, which makes visualization of the signals difficult. This paper therefore also investigates data compression using principal component analysis. To establish a threshold value, fuzzy c-means clustering is applied to quantify the damage-sensitive index in an unsupervised learning mode. Tests are carried out on the benchmark problem proposed by IASC-ASCE with different damage patterns. The diagnoses obtained showed high correlation with the actual integrity state of the structure. Copyright © 2007 by ABCM.
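A minimal sketch of the residual-error idea behind the damage-sensitive index (the first, AR, step only): fit an AR model to a baseline vibration signal, reuse its coefficients on a test signal, and compare residual standard deviations. The signals and model order are assumptions; the PCA compression and fuzzy c-means thresholding steps are not shown.

```python
# Baseline AR model reused on a test signal; the ratio of residual standard
# deviations serves as a simple damage-sensitive index.
import numpy as np

def fit_ar(x, p):
    """Least-squares AR(p) coefficients for signal x."""
    X = np.column_stack([x[p - k - 1:len(x) - k - 1] for k in range(p)])
    y = x[p:]
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    return a

def ar_residuals(x, a):
    p = len(a)
    X = np.column_stack([x[p - k - 1:len(x) - k - 1] for k in range(p)])
    return x[p:] - X @ a

rng = np.random.default_rng(0)
baseline = np.sin(0.2 * np.arange(2000)) + 0.1 * rng.standard_normal(2000)
damaged  = np.sin(0.23 * np.arange(2000)) + 0.1 * rng.standard_normal(2000)

a = fit_ar(baseline, p=10)
index = np.std(ar_residuals(damaged, a)) / np.std(ar_residuals(baseline, a))
print(f"damage-sensitive index: {index:.2f}")   # > 1 suggests deviation from baseline
```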
Abstract:
Rubber production in the rubber tree [Hevea brasiliensis (Willd. ex Adr. de Juss.) Muell. Arg.] can be expressed differently in different environments. Thus the objective of the present study was to select progenies that are productive, stable, and responsive over time and across locations. Thirty progenies were assessed by early yield tests at three ages and in three locations. A randomized block design was used with three replications and ten plants per plot, at 3 × 3 m spacing. The REML/BLUP (restricted maximum likelihood/best linear unbiased prediction) mixed linear model procedure was used in the genetic-statistical analyses. In all the individual analyses, the values observed for the progeny mean heritability (ĥ²pa) were greater than those of the additive heritability based on single individuals (ĥ²a) and the within-plot additive heritability (ĥ²ad). In the joint analyses over time, there was genotype × test interaction at the three locations. When 20 % of the best progenies were selected, the predicted genetic gains were: Colina GG = 24.63 %, Selvíria GG = 13.63 %, and Votuporanga GG = 25.39 %. Two progenies were among the best in the analyses over time and across locations. In the joint analysis across locations there was genotype × location interaction only in the first early test. In this test, selecting 20 %, the overall predicted genetic gain was GG = 25.10 %. Identifying progenies with high and stable yield over time and across locations contributes to the efficiency of the genetic breeding program. The relative performance of the progenies varies depending on the age of the early selection test. © 2013 Springer Science+Business Media Dordrecht.
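For readers unfamiliar with BLUP, the sketch below solves Henderson's mixed-model equations for a toy progeny trial and reports the gain from selecting the best 20 %. The variance components are assumed known, relationships are ignored (A = I), and the data are simulated, so this only illustrates the flavour of the REML/BLUP procedure used in the study.

```python
# Henderson's mixed-model equations for y = Xb + Zu + e with known variances.
import numpy as np

rng = np.random.default_rng(1)
n_prog, n_blocks = 30, 3
true_u = rng.normal(0, 2.0, n_prog)                      # progeny effects
obs = []
for j in range(n_blocks):
    for i in range(n_prog):
        obs.append((i, j, 50 + true_u[i] + rng.normal(0, 4.0)))
prog, block, y = map(np.array, zip(*obs))

X = np.eye(n_blocks)[block]                              # fixed block effects
Z = np.eye(n_prog)[prog]                                 # random progeny effects
lam = 4.0**2 / 2.0**2                                    # sigma_e^2 / sigma_a^2

lhs = np.block([[X.T @ X, X.T @ Z],
                [Z.T @ X, Z.T @ Z + lam * np.eye(n_prog)]])
rhs = np.concatenate([X.T @ y, Z.T @ y])
sol = np.linalg.solve(lhs, rhs)
u_hat = sol[n_blocks:]                                   # predicted breeding values

top = np.argsort(u_hat)[::-1][: int(0.2 * n_prog)]       # select best 20 %
gain = 100 * u_hat[top].mean() / y.mean()
print(f"predicted genetic gain: {gain:.1f} %")
```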
Abstract:
Graduate Program in Forest Science - FCA
Abstract:
This work deals with several models of electromagnetic wave propagation. First, models related to the prediction of the electromagnetic signal in indoor environments were analyzed. The models used in this work were Ray Tracing, the Dominant Energy Path (DPM), and FDTD. For the first two models a commercial software package was used, while for the FDTD method an algorithm was developed in which the signal is analyzed in an environment with the same geometry as that used in the software. The results provided by the three models for the reception points analyzed are in agreement. The influence of propagation phenomena on signal intensity is verified. The relevance of this work lies in the fact that, in the literature surveyed, no studies were found comparing the three prediction models mentioned; topics for future research are also proposed.
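As a toy illustration of the update scheme such a solver rests on, the sketch below steps a one-dimensional FDTD grid in free space with a Gaussian source; the grid size, time steps, and source are assumptions, and the actual work uses the full room geometry.

```python
# A minimal 1D FDTD sketch: leapfrog updates of E and H on a staggered grid.
import numpy as np

nz, nt = 400, 800
ez = np.zeros(nz)          # electric field
hy = np.zeros(nz - 1)      # magnetic field, staggered half a cell
c = 0.5                    # Courant number (c0*dt/dz), <= 1 for stability

for t in range(nt):
    hy += c * (ez[1:] - ez[:-1])                     # update H from curl of E
    ez[1:-1] += c * (hy[1:] - hy[:-1])               # update E from curl of H
    ez[nz // 4] += np.exp(-((t - 60) / 20.0) ** 2)   # soft Gaussian source

print(f"peak |Ez| after {nt} steps: {np.abs(ez).max():.3f}")
```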
Abstract:
There is increasing use of multi-pulse, rectifier-fed motor-drive equipment on board more-electric aircraft. Motor drives with feedback control appear as constant power loads to the rectifiers, which can cause instability of the DC filter capacitor voltage at the output of the rectifier. This problem can be exacerbated by interactions between rectifiers that share a common source impedance. For such a system to be analysed, average dynamic models of systems of rectifiers are needed. In this study, an efficient, compact method is presented for deriving approximate, linear, large-signal, average models of two heterogeneous systems of rectifiers fed from a common source impedance. The models give insight into the significant interaction effects that occur between the converters and that arise through the shared source impedance. First, a system comprising a 6-pulse rectifier and a 12-pulse rectifier fed by a doubly wound transformer is considered, followed by a system comprising a 6-pulse rectifier and an autotransformer-fed 12-pulse rectifier. The system models are validated against detailed simulations and laboratory prototypes, and key characteristics of the two system types are compared.
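As a back-of-the-envelope illustration of why a tightly regulated drive can destabilise the DC filter, the sketch below linearises an averaged LC filter feeding a constant power load and checks the eigenvalues; the component values and load power are assumptions, not the systems studied in the paper.

```python
# Small-signal check of an LC filter feeding a constant power load (CPL).
import numpy as np

L, C, R = 1e-3, 500e-6, 0.1        # filter inductance, capacitance, series resistance
Vdc, P = 270.0, 8000.0             # operating DC voltage and load power

# State x = [i_L, v_C]; at the operating point the CPL behaves like a
# negative incremental resistance -Vdc**2 / P, which appears as the
# positive (destabilising) term in the second row.
A = np.array([[-R / L,          -1.0 / L],
              [ 1.0 / C,  P / (C * Vdc**2)]])
eig = np.linalg.eigvals(A)
print("eigenvalues:", eig)
print("stable" if np.all(eig.real < 0) else "unstable (CPL dominates damping)")
```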
Abstract:
Pattern recognition methods have been successfully applied in several functional neuroimaging studies. These methods can be used to infer cognitive states, so-called brain decoding. Using such approaches, it is possible to predict the mental state of a subject or a stimulus class by analyzing the spatial distribution of neural responses. In addition, it is possible to identify the regions of the brain containing the information that underlies the classification. The Support Vector Machine (SVM) is one of the most popular methods used to carry out this type of analysis. The aim of the current study is the evaluation of SVM and Maximum uncertainty Linear Discriminant Analysis (MLDA) in extracting the voxels containing discriminative information for the prediction of mental states. The comparison was carried out using fMRI data from 41 healthy control subjects who participated in two experiments, one involving visual-auditory stimulation and the other based on bimanual finger-tapping sequences. The results suggest that MLDA uses significantly more voxels containing discriminative information (related to different experimental conditions) to classify the data. On the other hand, SVM is more parsimonious and uses fewer voxels to achieve similar classification accuracies. In conclusion, MLDA is mostly focused on extracting all the discriminative information available, while SVM extracts only the information that is sufficient for classification. (C) 2009 Elsevier Inc. All rights reserved.
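A minimal sketch of the comparison's flavour (not the study's pipeline): train a linear SVM and a regularised LDA on synthetic "voxel" data and count how many features carry 90 % of each weight map. Ordinary shrinkage LDA stands in for MLDA here, and the data are simulated.

```python
# Compare how concentrated the linear weight maps of SVM and LDA are.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
n, d, informative = 80, 500, 50
y = rng.integers(0, 2, n)
X = rng.standard_normal((n, d))
X[:, :informative] += 0.8 * y[:, None]        # only the first 50 "voxels" differ

svm = LinearSVC(C=1.0, max_iter=10000).fit(X, y)
lda = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto").fit(X, y)

def voxels_for_90pct_weight(w):
    w = np.abs(w).ravel()
    order = np.argsort(w)[::-1]
    csum = np.cumsum(w[order])
    return int(np.searchsorted(csum, 0.9 * csum[-1]) + 1)

print("SVM voxels carrying 90% of the weight:", voxels_for_90pct_weight(svm.coef_))
print("LDA voxels carrying 90% of the weight:", voxels_for_90pct_weight(lda.coef_))
```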
Abstract:
Motivation: A major issue in cell biology today is how distinct intracellular regions of the cell, like the Golgi apparatus, maintain their unique composition of proteins and lipids. The cell differentially separates Golgi resident proteins from proteins that move through the organelle to other subcellular destinations. We set out to determine whether we could distinguish these two types of transmembrane proteins using computational approaches. Results: A new method has been developed to predict Golgi membrane proteins based on their transmembrane domains. To establish the prediction procedure, we took into consideration the hydrophobicity values and the frequencies of different residues within the transmembrane domains. A simple linear discriminant function was developed with a small number of parameters derived from a dataset of Type II transmembrane proteins of known localization. It discriminates between proteins destined for the Golgi apparatus and those destined for other locations (post-Golgi) with success rates of 89.3% and 85.2%, respectively, on our redundancy-reduced data sets.
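A minimal sketch of a discriminant of this kind: score a transmembrane domain by its mean Kyte-Doolittle hydrophobicity and a couple of residue frequencies, then threshold. The weights, threshold, and example sequence are illustrative assumptions, not the parameters fitted in the paper.

```python
# Toy hydrophobicity-based linear discriminant for a transmembrane domain.
KD = {'A': 1.8, 'R': -4.5, 'N': -3.5, 'D': -3.5, 'C': 2.5, 'Q': -3.5,
      'E': -3.5, 'G': -0.4, 'H': -3.2, 'I': 4.5, 'L': 3.8, 'K': -3.9,
      'M': 1.9, 'F': 2.8, 'P': -1.6, 'S': -0.8, 'T': -0.7, 'W': -0.9,
      'Y': -1.3, 'V': 4.2}

def features(tmd):
    """Mean hydrophobicity plus Phe and Leu frequencies of a TM domain."""
    h = sum(KD[a] for a in tmd) / len(tmd)
    return h, tmd.count('F') / len(tmd), tmd.count('L') / len(tmd)

def golgi_score(tmd, w=(-1.0, 5.0, 3.0), bias=2.0):
    """Linear discriminant: positive score -> predicted Golgi resident."""
    h, f_phe, f_leu = features(tmd)
    return w[0] * h + w[1] * f_phe + w[2] * f_leu + bias

tmd = "LLFVIGGLFLLVSALIIWGF"   # hypothetical 20-residue TM domain
print("Golgi" if golgi_score(tmd) > 0 else "post-Golgi", round(golgi_score(tmd), 2))
```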
Abstract:
Signal peptides and transmembrane helices both contain a stretch of hydrophobic amino acids. This common feature makes it difficult for signal peptide and transmembrane helix predictors to correctly assign identity to stretches of hydrophobic residues near the N-terminal methionine of a protein sequence. The inability to reliably distinguish between N-terminal transmembrane helix and signal peptide is an error with serious consequences for the prediction of protein secretory status or transmembrane topology. In this study, we report a new method for differentiating protein N-terminal signal peptides and transmembrane helices. Based on the sequence features extracted from hydrophobic regions (amino acid frequency, hydrophobicity, and the start position), we set up discriminant functions and examined them on non-redundant datasets with jackknife tests. This method can incorporate other signal peptide prediction methods and achieve higher prediction accuracy. For Gram-negative bacterial proteins, 95.7% of N-terminal signal peptides and transmembrane helices can be correctly predicted (coefficient 0.90). Given a sensitivity of 90%, transmembrane helices can be identified from signal peptides with a precision of 99% (coefficient 0.92). For eukaryotic proteins, 94.2% of N-terminal signal peptides and transmembrane helices can be correctly predicted with coefficient 0.83. Given a sensitivity of 90%, transmembrane helices can be identified from signal peptides with a precision of 87% (coefficient 0.85). The method can be used to complement current transmembrane protein prediction and signal peptide prediction methods to improve their prediction accuracies. (C) 2003 Elsevier Inc. All rights reserved.
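As an illustration of how such operating points are read off a discriminant score, the sketch below sweeps the threshold to 90 % sensitivity for transmembrane helices and reports the precision and Matthews correlation coefficient at that point; the scores are simulated assumptions, not the paper's data.

```python
# Fix sensitivity at 90% and read off precision and MCC from a 2x2 table.
import numpy as np

rng = np.random.default_rng(2)
# 1 = N-terminal transmembrane helix, 0 = signal peptide
labels = np.concatenate([np.ones(300), np.zeros(700)])
scores = np.concatenate([rng.normal(2.0, 1.0, 300), rng.normal(0.0, 1.0, 700)])

thr = np.quantile(scores[labels == 1], 0.10)     # keeps 90% of helices above it
pred = scores >= thr
tp = np.sum(pred & (labels == 1)); fp = np.sum(pred & (labels == 0))
fn = np.sum(~pred & (labels == 1)); tn = np.sum(~pred & (labels == 0))

precision = tp / (tp + fp)
mcc = (tp * tn - fp * fn) / np.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
print(f"sensitivity ~0.90, precision {precision:.2f}, MCC {mcc:.2f}")
```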
Abstract:
The development of high spatial resolution airborne and spaceborne sensors has improved the capability of ground-based data collection in the fields of agriculture, geography, geology, mineral identification, detection [2, 3], and classification [4–8]. The signal read by the sensor from a given spatial element of resolution and at a given spectral band is a mixture of components originating from the constituent substances, termed endmembers, located at that element of resolution. This chapter addresses hyperspectral unmixing, which is the decomposition of the pixel spectra into a collection of constituent spectra, or spectral signatures, and their corresponding fractional abundances indicating the proportion of each endmember present in the pixel [9, 10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds when the mixing scale is macroscopic [13]. The nonlinear model holds when the mixing scale is microscopic (i.e., intimate mixtures) [14, 15]. The linear model assumes negligible interaction among distinct endmembers [16, 17]. The nonlinear model assumes that incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [18]. Under the linear mixing model and assuming that the number of endmembers and their spectral signatures are known, hyperspectral unmixing is a linear problem, which can be addressed, for example, under the maximum likelihood setup [19], the constrained least-squares approach [20], spectral signature matching [21], the spectral angle mapper [22], and the subspace projection methods [20, 23, 24]. Orthogonal subspace projection [23] reduces the data dimensionality, suppresses undesired spectral signatures, and detects the presence of a spectral signature of interest. The basic concept is to project each pixel onto a subspace that is orthogonal to the undesired signatures. As shown in Settle [19], the orthogonal subspace projection technique is equivalent to the maximum likelihood estimator. This projection technique was extended by three unconstrained least-squares approaches [24] (signature space orthogonal projection, oblique subspace projection, target signature space orthogonal projection). Other works using the maximum a posteriori probability (MAP) framework [25] and projection pursuit [26, 27] have also been applied to hyperspectral data. In most cases the number of endmembers and their signatures are not known. Independent component analysis (ICA) is an unsupervised source separation process that has been applied with success to blind source separation, to feature extraction, and to unsupervised recognition [28, 29]. ICA consists of finding a linear decomposition of observed data yielding statistically independent components. Given that hyperspectral data are, in certain circumstances, linear mixtures, ICA comes to mind as a possible tool to unmix this class of data. In fact, the application of ICA to hyperspectral data has been proposed in reference 30, where endmember signatures are treated as sources and the mixing matrix is composed of the abundance fractions, and in references 9, 25, and 31–38, where the sources are the abundance fractions of each endmember. In the first approach, we face two problems: (1) the number of samples is limited to the number of channels, and (2) the process of selecting the pixels that play the role of mixed sources is not straightforward.
In the second approach, ICA is based on the assumption of mutually independent sources, which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying dependence among abundances. This dependence compromises the applicability of ICA to hyperspectral images. In addition, hyperspectral data are immersed in noise, which degrades the ICA performance. IFA [39] was introduced as a method for recovering independent hidden sources from their observed noisy mixtures. IFA implements two steps. First, source densities and noise covariance are estimated from the observed data by maximum likelihood. Second, sources are reconstructed by an optimal nonlinear estimator. Although IFA is a well-suited technique to unmix independent sources under noisy observations, the dependence among abundance fractions in hyperspectral imagery compromises, as in the ICA case, the IFA performance. Under the linear mixing model, hyperspectral observations lie in a simplex whose vertices correspond to the endmembers. Several approaches [40–43] have exploited this geometric feature of hyperspectral mixtures [42]. The minimum volume transform (MVT) algorithm [43] determines the simplex of minimum volume containing the data. The MVT-type approaches are computationally complex. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. Aiming at a lower computational complexity, some algorithms such as vertex component analysis (VCA) [44], the pixel purity index (PPI) [42], and N-FINDR [45] still find the minimum volume simplex containing the data cloud, but they assume the presence in the data of at least one pure pixel of each endmember. This is a strong requirement that may not hold in some data sets. In any case, these algorithms find the set of purest pixels in the data. Hyperspectral sensors collect spatial images over many narrow contiguous bands, yielding large amounts of data. For this reason, very often, the processing of hyperspectral data, including unmixing, is preceded by a dimensionality reduction step to reduce computational complexity and to improve the signal-to-noise ratio (SNR). Principal component analysis (PCA) [46], maximum noise fraction (MNF) [47], and singular value decomposition (SVD) [48] are three well-known projection techniques widely used in remote sensing in general and in unmixing in particular. The newly introduced method [49] exploits the structure of hyperspectral mixtures, namely the fact that spectral vectors are nonnegative. The computational complexity associated with these techniques is an obstacle to real-time implementations. To overcome this problem, band selection [50] and non-statistical [51] algorithms have been introduced. This chapter addresses hyperspectral data source dependence and its impact on ICA and IFA performance. The study considers simulated and real data and is based on mutual information minimization. Hyperspectral observations are described by a generative model. This model takes into account the degradation mechanisms normally found in hyperspectral applications, namely signature variability [52–54], abundance constraints, topography modulation, and system noise. The computation of mutual information is based on fitting mixtures of Gaussians (MOG) to the data. The MOG parameters (number of components, means, covariances, and weights) are inferred using the minimum description length (MDL) based algorithm [55].
We study the behavior of the mutual information as a function of the unmixing matrix. The conclusion is that the unmixing matrix minimizing the mutual information might be very far from the true one. Nevertheless, some abundance fractions might be well separated, mainly in the presence of strong signature variability, a large number of endmembers, and high SNR. We end this chapter by sketching a new methodology to blindly unmix hyperspectral data, where abundance fractions are modeled as a mixture of Dirichlet sources. This model enforces the positivity and constant-sum (full additivity) constraints on the sources. The mixing matrix is inferred by an expectation-maximization (EM)-type algorithm. This approach is in the vein of references 39 and 56, replacing the independent sources represented by MOG with mixtures of Dirichlet sources. Compared with the geometric-based approaches, the advantage of this model is that there is no need for pure pixels in the observations. The chapter is organized as follows. Section 6.2 presents a spectral radiance model and formulates spectral unmixing as a linear problem accounting for abundance constraints, signature variability, topography modulation, and system noise. Section 6.3 presents a brief summary of the ICA and IFA algorithms. Section 6.4 illustrates the performance of IFA and of some well-known ICA algorithms with experimental data. Section 6.5 studies the ICA and IFA limitations in unmixing hyperspectral data. Section 6.6 presents results of ICA based on real data. Section 6.7 describes the new blind unmixing scheme and some illustrative examples. Section 6.8 concludes with some remarks.
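A minimal sketch of the linear mixing model and its inversion when the endmember signatures are known: abundances are estimated by nonnegative least squares and renormalised to sum to one, an approximation to the fully constrained problem discussed above. The endmember spectra, abundances, and noise level are simulated assumptions.

```python
# Linear mixing model: pixel = M @ alpha + noise, with alpha >= 0, sum(alpha) = 1.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(3)
bands, p = 200, 3
M = np.abs(rng.standard_normal((bands, p))) + 0.5        # endmember signatures
alpha = np.array([0.6, 0.3, 0.1])                        # true abundances (sum to 1)
pixel = M @ alpha + 0.01 * rng.standard_normal(bands)    # observed spectrum + noise

a_hat, _ = nnls(M, pixel)        # nonnegativity constraint
a_hat /= a_hat.sum()             # enforce full additivity (sum-to-one) a posteriori
print("estimated abundances:", np.round(a_hat, 3))
```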
Abstract:
We study preconditioning techniques for discontinuous Galerkin discretizations of isotropic linear elasticity problems in primal (displacement) formulation. We propose subspace correction methods based on a splitting of the vector-valued piecewise linear discontinuous finite element space that are optimal with respect to the mesh size and the Lamé parameters. The pure displacement, the mixed, and the traction-free problems are discussed in detail. We present a convergence analysis of the proposed preconditioners and include numerical examples that validate the theory and assess the performance of the preconditioners.
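A minimal sketch of an additive subspace-correction (block-Jacobi) preconditioner used inside conjugate gradients; a 1D Laplacian stands in for the discontinuous Galerkin elasticity operator, and the block size and problem size are assumptions.

```python
# Additive subspace correction: sum of exact solves on local (block) subspaces.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg, LinearOperator

n, blk = 512, 16
A = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

# Pre-factor the diagonal blocks: each block is one "subspace" correction.
blocks = [np.linalg.inv(A[i:i + blk, i:i + blk].toarray())
          for i in range(0, n, blk)]

def apply_prec(r):
    z = np.empty_like(r)
    for k, i in enumerate(range(0, n, blk)):
        z[i:i + blk] = blocks[k] @ r[i:i + blk]          # local solve on each block
    return z

M = LinearOperator((n, n), matvec=apply_prec)
x, info = cg(A, b, M=M)
print("CG converged:", info == 0, " residual:", np.linalg.norm(b - A @ x))
```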
Abstract:
Purpose: Pulmonary hypoplasia is a determinant parameter for extra-uterine life. In recent years, MRI has emerged as a complement to US for evaluating the degree of pulmonary hypoplasia in foetuses with congenital anomalies, using different methods: fetal lung volumetry (FLV) and the lung-to-liver signal intensity ratio (LLSIR). Until now, however, information about the correlation between the MRI prediction and the actual postnatal outcome has been limited. Methods and materials: We retrospectively reviewed the fetal MRI examinations performed at our institution in the last 8 years and selected the cases with suspected fetal pulmonary hypoplasia (n = 30). The pulmonary volumetry data of these foetuses were collected and the lung-to-liver signal intensity ratio (LLSIR) measurements performed. These data were compared with those obtained from a control group of 25 foetuses considered normal at MRI. The data of the study group were also correlated with the autopsy records or the post-natal clinical information of the patients. Results: As expected, the control group showed higher FLV and LLSIR values than the study group at all gestational ages. Higher values of FLV and LLSIR were associated with a better post-natal outcome. Sensitivity, specificity, positive and negative predictive values, and accuracy for the relative LLSIR and the relative FLV showed no significant differences. Conclusion: Our data show that not only the FLV but also the relative LLSIR informs about the degree of fetal lung development. This information may help to predict the fetal outcome and to evaluate the need for neonatal intensive care.
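A minimal sketch of the two quantities involved: the lung-to-liver signal intensity ratio computed from mean ROI intensities, and the diagnostic accuracy figures derived from a 2 × 2 outcome table. All numbers below are illustrative assumptions, not the study's data.

```python
# LLSIR from mean ROI intensities, plus standard diagnostic accuracy metrics.
import numpy as np

lung_roi = np.array([410., 395., 430., 402.])     # mean signal in lung ROIs
liver_roi = np.array([300., 310., 295., 305.])    # mean signal in liver ROIs
llsir = lung_roi.mean() / liver_roi.mean()
print(f"LLSIR: {llsir:.2f}")

# 2x2 table: test (predicted hypoplasia yes/no) versus observed outcome.
tp, fp, fn, tn = 12, 3, 2, 13
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
ppv, npv = tp / (tp + fp), tn / (tn + fn)
accuracy = (tp + tn) / (tp + fp + fn + tn)
print(f"sens {sensitivity:.2f}, spec {specificity:.2f}, "
      f"PPV {ppv:.2f}, NPV {npv:.2f}, accuracy {accuracy:.2f}")
```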
Abstract:
Affiliation: Institut de recherche en immunologie et en cancérologie, Université de Montréal