991 results for linear transformation


Relevance: 70.00%

Abstract:

This paper deals with non-linear transformations for improving the performance of an entropy-based voice activity detector (VAD). The idea of using a non-linear transformation has already been applied in the field of speech linear prediction, or linear predictive coding (LPC), based on source separation techniques, where a score function is added to the classical equations in order to take into account the true distribution of the signal. We explore the possibility of estimating the entropy of frames after calculating their score function, instead of using the original frames. We observe that if the signal is clean, the estimated entropy is essentially the same; if the signal is noisy, however, the frames transformed using the score function may give different entropy for voiced frames than for unvoiced ones. Experimental evidence is given to show that this enables voice activity detection under high noise, where the simple entropy method fails.
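A minimal sketch of the idea: estimate the entropy of a frame before and after applying an empirical score-function transform. The kernel-density score estimator, the finite-difference step, and all parameter values below are illustrative choices, not the paper's method.

```python
import numpy as np
from scipy.stats import gaussian_kde

def frame_entropy(frame, bins=64):
    """Histogram-based Shannon entropy estimate of one signal frame."""
    hist, _ = np.histogram(frame, bins=bins)
    p = hist[hist > 0] / len(frame)
    return -np.sum(p * np.log2(p))

def score_transform(frame, rel_step=1e-4):
    """Empirical score function psi(x) = -p'(x) / p(x) evaluated at the
    frame samples, with p estimated by a Gaussian kernel density and p'
    by a central finite difference."""
    kde = gaussian_kde(frame)
    h = rel_step * np.std(frame)
    p = kde(frame)
    dp = (kde(frame + h) - kde(frame - h)) / (2.0 * h)
    return -dp / np.maximum(p, 1e-12)

# Compare raw vs. score-transformed entropy: on clean speech the two stay
# close, while under heavy noise the transformed entropy separates voiced
# frames from unvoiced ones.
rng = np.random.default_rng(0)
frame = rng.normal(size=400)   # stand-in for one speech frame
print(frame_entropy(frame), frame_entropy(score_transform(frame)))
```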

Relevance: 70.00%

Abstract:

In this paper we explore the use of non-linear transformations to improve the performance of an entropy-based voice activity detector (VAD). The idea of using a non-linear transformation comes from previous work in the field of speech linear prediction (LPC) based on source separation techniques, where a score function was added to the classical equations in order to take the true distribution of the signal into account. We explore the possibility of estimating the entropy of frames after calculating their score function, instead of using the original frames. We observe that if the signal is clean, the estimated entropy is essentially the same; but if the signal is noisy, the frames transformed with the score function yield a different entropy for voiced frames than for unvoiced ones. Experimental results show that this makes it possible to detect voice activity under high noise, where the simple entropy method fails.

Relevance: 70.00%

Abstract:

Graduate Program in Mathematics Education - IGCE

Relevance: 70.00%

Abstract:

A series of motion compensation algorithms is run on the challenge data, including methods that optimize only a linear transformation, only a non-linear transformation, or both: first a linear and then a non-linear transformation. Methods that optimize a linear transformation run an initial segmentation of the area of interest around the left myocardium by means of independent component analysis (ICA) (ICA-*). Methods that optimize non-linear transformations may run directly on the full images or after linear registration. The non-linear motion compensation approaches applied include one method that only registers pairs of images in temporal succession (SERIAL), one method that registers all images to one common reference (AllToOne), one method that was designed to exploit quasi-periodicity in image data acquired during free breathing and was adapted to also be usable with image data acquired with an initial breath-hold (QUASI-P), a method that uses ICA to identify the motion and eliminate it (ICA-SP), and a method that relies on the estimation of a pseudo ground truth (PG) to guide the motion compensation.
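The SERIAL and AllToOne registration topologies are simple to state. Below is a minimal, translation-only sketch, with phase correlation standing in for the challenge's actual linear and non-linear registration methods; the direction conventions and frame-0 reference are illustrative assumptions.

```python
import numpy as np

def phase_corr_shift(fixed, moving):
    """Estimate the integer translation aligning `moving` to `fixed` by
    phase correlation -- a stand-in here for a full registration routine."""
    R = np.fft.fft2(fixed) * np.conj(np.fft.fft2(moving))
    R /= np.maximum(np.abs(R), 1e-12)
    corr = np.fft.ifft2(R).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    if dy > fixed.shape[0] // 2:
        dy -= fixed.shape[0]   # wrap into the signed range
    if dx > fixed.shape[1] // 2:
        dx -= fixed.shape[1]
    return dy, dx

def serial_shifts(frames):
    """SERIAL topology: register each frame to its temporal predecessor and
    chain the shifts back to frame 0 as the common reference."""
    shifts, total = [(0, 0)], np.zeros(2, dtype=int)
    for prev, cur in zip(frames, frames[1:]):
        total += phase_corr_shift(prev, cur)
        shifts.append(tuple(total))
    return shifts

def all_to_one_shifts(frames, ref=0):
    """AllToOne topology: register every frame directly to one reference."""
    return [phase_corr_shift(frames[ref], f) for f in frames]
```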

Relevance: 70.00%

Abstract:

Neuroimaging studies of cortical activation during image transformation tasks have shown that mental rotation may rely on brain regions similar to those underlying visual perceptual mechanisms. The V5 complex, which is specialised for visual motion, is one region that has been implicated. We used functional magnetic resonance imaging (fMRI) to investigate rotational and linear transformation of stimuli. Areas of significant brain activation were identified for each of the primary mental transformation tasks in contrast to its own perceptual reference task, which was cognitively matched in all respects except for the variable of interest. Analysis of group data for perception of rotational and linear motion showed activation in areas corresponding to V5 as defined in earlier studies. Both rotational and linear mental transformations activated Brodmann area (BA) 19 but did not activate V5. An area within the inferior temporal gyrus, representing an inferior satellite area of V5, was activated by both the rotational perception and rotational transformation tasks, but showed no activation in response to linear motion perception or transformation. The findings demonstrate the extent to which the neural substrates for image transformation and perception overlap and are distinct, as well as revealing functional specialisation within the perception and transformation processing systems.

Relevance: 60.00%

Abstract:

Background: Attention deficit hyperactivity disorder (ADHD) is a clinically significant disorder in adulthood, but current diagnostic criteria and instruments do not seem to adequately capture the complexity of the disorder in this developmental phase. Accordingly, there are limited data on the proportion of adults affected by the disorder, especially in developing countries. Method: We assessed a representative household sample of the Brazilian population for ADHD with the Adult ADHD Self-Report Scale (ASRS) Screener, and evaluated the instrument according to the Rasch model of item response theory. Results: The sample comprised 3007 individuals, and the overall prevalence of positive screeners for ADHD was 5.8% [95% confidence interval (CI), 4.8-7.0]. Rasch analyses revealed the misfit of the overall sample to the expectations of the model. Evaluation of the sample stratified by age revealed that data for adolescents showed a significant fit to the model expectations, while items completed by adults did not fit adequately. Conclusions: The lack of fit to the model for adult respondents challenges the possibility of a linear transformation of the ordinal data into interval measures and the use of parametric analyses of the data. This result suggests that diagnostic criteria and instruments for adult ADHD must take a developmental perspective into account. Moreover, it calls for further evaluation of currently employed research methods in light of modern theories of psychometrics. Copyright (C) 2010 John Wiley & Sons, Ltd.
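The point about linear transformation of ordinal scores hinges on the Rasch model placing persons and items on a common logit scale. A minimal sketch of the dichotomous model (not the ASRS-specific analysis; the trait and difficulty values are hypothetical):

```python
import numpy as np

def rasch_prob(theta, b):
    """Dichotomous Rasch model: probability that a respondent with latent
    trait `theta` endorses an item of difficulty `b`. When observed
    responses fit this model, theta and b live on a common interval
    (logit) scale, which is what licenses treating transformed scores as
    interval measures; the misfit reported for adults withdraws that
    license."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

# Endorsement probabilities for one trait value across three hypothetical
# item difficulties.
print(rasch_prob(0.5, np.array([-1.0, 0.0, 1.5])))
```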

Relevance: 60.00%

Abstract:

Cryo-electron microscopy of vitreous sections (CEMOVIS) has recently been shown to provide images of biological specimens with unprecedented quality and resolution. Cutting the sections, however, remains the major difficulty. Here, we examine the parameters influencing the quality of the sections and analyse the resulting artefacts, in particular knife marks, compression, crevasses, and chatter. We propose a model taking into account the interplay between viscous flow and fracture. We confirm that crevasses form on only one side of the section, and define conditions under which they can be avoided. Chatter is an effect of irregular compression due to friction of the section on the knife edge, and conditions to prevent it are also explored. In the absence of crevasses and chatter, the bulk of the section is compressed approximately homogeneously. Within this approximation, it is possible to correct for compression in the bulk of the section by a simple linear transformation. A research program is proposed to test and refine our understanding of the sectioning process.
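If the compression is indeed approximately homogeneous, the correction is a single linear rescaling along the cutting direction. A minimal sketch, where the axis convention and the measured compression ratio are illustrative:

```python
import numpy as np
from scipy.ndimage import zoom

def decompress_section(image, compression, cutting_axis=0):
    """Correct homogeneous section compression with a single linear
    transformation: stretch by 1/compression along the cutting direction.
    `compression` is the measured length ratio (e.g. 0.7 for a section
    compressed to 70% of the block length). Only meaningful when crevasses
    and chatter are absent, so the bulk deforms approximately uniformly."""
    factors = [1.0] * image.ndim
    factors[cutting_axis] = 1.0 / compression
    return zoom(image, factors, order=1)
```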

Relevance: 60.00%

Abstract:

BACKGROUND: Functional brain images such as Single-Photon Emission Computed Tomography (SPECT) and Positron Emission Tomography (PET) have been widely used to guide clinicians in the diagnosis of Alzheimer's Disease (AD). However, the subjectivity involved in their evaluation has favoured the development of Computer Aided Diagnosis (CAD) systems. METHODS: A novel combination of feature extraction techniques is proposed to improve the diagnosis of AD. First, Regions of Interest (ROIs) are selected by means of a t-test carried out on 3D Normalised Mean Square Error (NMSE) features restricted to lie within a predefined brain activation mask. In order to address the small sample-size problem, the dimension of the feature space is further reduced by Large Margin Nearest Neighbours using a rectangular matrix (LMNN-RECT), Principal Component Analysis (PCA), or Partial Least Squares (PLS) (the latter two also analysed with an LMNN transformation). Regarding the classifiers, kernel Support Vector Machines (SVMs) and LMNN using Euclidean, Mahalanobis, and energy-based metrics are compared. RESULTS: Several experiments were conducted in order to evaluate the proposed LMNN-based feature extraction algorithms and their benefits as: i) a linear transformation of the PLS- or PCA-reduced data, ii) a feature reduction technique, and iii) a classifier (with Euclidean, Mahalanobis, or energy-based methodology). The system was evaluated by means of k-fold cross-validation, yielding accuracy, sensitivity, and specificity values of 92.78%, 91.07%, and 95.12% (for SPECT) and 90.67%, 88%, and 93.33% (for PET), respectively, when the NMSE-PLS-LMNN feature extraction method was used in combination with an SVM classifier, thus outperforming recently reported baseline methods. CONCLUSIONS: All the proposed methods turned out to be valid solutions for the presented problem. One advance is the robustness of the LMNN algorithm, which not only provides a higher separation rate between the classes but also (in combination with NMSE and PLS) makes this rate more stable. Another is their generalization ability, since the experiments were performed on two image modalities (SPECT and PET).
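A structural sketch of the NMSE-PLS-LMNN + SVM chain, assuming the NMSE feature extraction from the images is already done. It uses scikit-learn plus the third-party metric-learn package; component counts, kernels, and LMNN settings are illustrative, not the paper's values.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.svm import SVC
from metric_learn import LMNN   # third-party package: metric-learn

def fit_pls_lmnn_svm(X, y, n_components=10):
    """X: NMSE feature vectors (one row per scan); y: 0 = control, 1 = AD.
    PLS reduces the dimension, LMNN learns a linear transformation that
    pulls same-class neighbours together, and a linear SVM classifies in
    the transformed space."""
    pls = PLSRegression(n_components=n_components).fit(X, y)
    Z = pls.transform(X)
    lmnn = LMNN().fit(Z, y)          # default neighbourhood settings
    Zt = lmnn.transform(Z)
    svm = SVC(kernel="linear").fit(Zt, y)
    return pls, lmnn, svm
```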

Relevance: 60.00%

Abstract:

The introduction of unifying concepts in mathematics teaching typically favours the axiomatic approach. It is not surprising that such an approach tends toward an algorithmization of tasks, so as to make their resolution more efficient and the newly taught concept more transparent (Chevallard, 1991). This classical answer, however, obscures the unifying role of the concept and does not encourage the use of its power. In order to improve the learning of a unifying concept, this thesis studies the relevance of a didactic sequence, in engineering education, centred on a unifying concept of linear algebra: the linear transformation (LT). The notion of unification and the question of the meaning of linearity are addressed through the acquisition of problem-solving skills. The sequence of problems to be solved concerns the process of constructing an abstract concept (the LT) over an already mathematized domain, with the intention of bringing out the unifying aspect of the formal notion (Astolfi & Drouin, 1992). Building on results from research in science and mathematics education (Dupin, 1995; Sfard, 1991), we design didactic situations based on modelling elements, seeking to articulate two ways of conceiving the object ("procedural" and "structural") so as to arrive at a resolution strategy that is safer, more economical, and reusable. In particular, we sought to situate the notion in the different mathematical domains where it is applicable: arithmetic, geometric, algebraic, and analytic. The sequence aims to develop links between different mathematical frameworks, and between different representations of the LT in the various mathematical registers, drawing in particular on the historical development of the notion. Moreover, the didactic sequence aims to maintain a balance between the applicability of the tasks to the target professional practice and the theoretical side conducive to structuring the concepts. The study was conducted with Chilean engineering students in their first linear algebra course. We carried out a detailed a priori analysis to strengthen the robustness of the sequence and to prepare the data analysis. Through the analysis of the answers to the entry questionnaire, of the teams' productions, and of the comments received in interviews, we were able to identify the mathematical competencies and the levels of explicitness (Caron, 2004) brought to bear in the use of the LT. The results show the emergence of the unifying role of the LT, even among those whose habits in mathematical problem solving are marked by a procedural orientation, in learning as in teaching. The didactic sequence proved effective for the students' progressive construction of the notion of linear transformation (LT), with its own meaning and properties: the LT thus appears as an economical means of solving problems outside linear algebra, which allows students to abstract its underlying properties. Furthermore, we observed that certain previously taught concepts can act as obstacles to the intended unification. This can bring students back to their starting point, and under these conditions the role of the LT is reduced to revealing partial knowledge rather than guiding the resolution.

Relevance: 60.00%

Abstract:

Independent component analysis (ICA) is a statistical analysis method that expresses observed data (mixtures of sources) as a linear transformation of latent variables (sources) assumed to be non-Gaussian and mutually independent. In some applications, the mixtures of sources are assumed to be groupable in such a way that those belonging to the same group are functions of the same sources. This implies that the coefficients of each column of the mixing matrix can be clustered according to these same groups, and that all the coefficients of some of these groups are zero. In other words, the mixing matrix is assumed to be group-sparse. This assumption eases interpretation and improves the accuracy of the ICA model. With this in mind, we propose to solve the ICA problem with a group-sparse mixing matrix using a method based on the adaptive group LASSO, which penalizes the norm of the groups of coefficients with adaptive weights. In this thesis, we highlight the usefulness of our method in brain imaging applications, more precisely in magnetic resonance imaging. In simulations, we illustrate with an example the effectiveness of our method at shrinking the non-significant groups of coefficients in the mixing matrix to zero. We also show that the accuracy of the proposed method is higher than that of the maximum likelihood estimator penalized by the adaptive LASSO when the mixing matrix is group-sparse.
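In proximal form, the adaptive group LASSO penalty reduces to block soft-thresholding of each group of mixing coefficients. A minimal sketch of that one step; the group structure, the weights, and the surrounding ICA iteration are all assumptions, not the thesis's exact estimator:

```python
import numpy as np

def adaptive_group_prox(A, groups, lam, weights):
    """One proximal (block soft-thresholding) step for an adaptive group
    LASSO penalty on a mixing matrix A (rows = mixtures, cols = sources).
    Within each column, the rows of one group are shrunk jointly, and the
    whole block is set exactly to zero when its norm falls below the
    adaptive threshold lam * w_g."""
    A = A.copy()
    for g, w in zip(groups, weights):
        for j in range(A.shape[1]):
            nrm = np.linalg.norm(A[g, j])
            A[g, j] *= max(0.0, 1.0 - lam * w / max(nrm, 1e-12))
    return A

# Example: rows {0,1} and {2,3,4} form two groups of mixtures; a first-pass
# estimate would typically supply the adaptive weights w_g ~ 1/||A_init[g]||.
A = np.random.default_rng(1).normal(size=(5, 3))
print(adaptive_group_prox(A, [[0, 1], [2, 3, 4]], lam=0.5, weights=[1.0, 2.0]))
```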

Relevance: 60.00%

Abstract:

The technique of constructing a transformation, or regrading, of a discrete data set such that the histogram of the transformed data matches a given reference histogram is commonly known as histogram modification. The technique is widely used for image enhancement and normalization. A method which has been previously derived for producing such a regrading is shown to be "best" in the sense that it minimizes the error between the cumulative histogram of the transformed data and that of the given reference function, over all single-valued, monotone, discrete transformations of the data. Techniques for smoothed regrading, which provide a means of balancing the error in matching a given reference histogram against the information lost with respect to a linear transformation, are also examined. The smoothed regradings are shown to optimize certain cost functionals. Numerical algorithms for generating the smoothed regradings, which are simple and efficient to implement, are described, and practical applications to the processing of LANDSAT image data are discussed.
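A minimal sketch of such a monotone, single-valued regrading for integer-valued data, built from cumulative histograms; the paper's exact minimum-error rule may differ in tie-breaking:

```python
import numpy as np

def regrade(data, ref_hist):
    """Monotone, single-valued regrading of integer data in [0, L) so that
    its histogram approximates `ref_hist` (one weight per output level):
    each input level is sent to the first output level whose reference
    cumulative histogram reaches the input level's cumulative frequency."""
    L = ref_hist.size
    cdf = np.cumsum(np.bincount(data.ravel(), minlength=L)) / data.size
    ref_cdf = np.cumsum(ref_hist) / ref_hist.sum()
    lut = np.clip(np.searchsorted(ref_cdf, cdf), 0, L - 1)
    return lut[data]

# Example: regrade an 8-level image toward a uniform reference histogram.
img = np.random.default_rng(2).integers(0, 8, size=(4, 4))
print(regrade(img, np.ones(8)))
```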

Relevance: 60.00%

Abstract:

Many applications, such as intermittent data assimilation, lead to a recursive application of Bayesian inference within a Monte Carlo context. Popular data assimilation algorithms include sequential Monte Carlo methods and ensemble Kalman filters (EnKFs). These methods differ in the way Bayesian inference is implemented. Sequential Monte Carlo methods rely on importance sampling combined with a resampling step, while EnKFs utilize a linear transformation of Monte Carlo samples based on the classic Kalman filter. While EnKFs have proven to be quite robust even for small ensemble sizes, they are not consistent since their derivation relies on a linear regression ansatz. In this paper, we propose another transform method, which does not rely on any a priori assumptions on the underlying prior and posterior distributions. The new method is based on solving an optimal transportation problem for discrete random variables. © 2013, Society for Industrial and Applied Mathematics
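The EnKF's linear transformation of the Monte Carlo samples is easiest to see in the stochastic ("perturbed observations") variant. A minimal sketch, assuming a matrix observation operator and no localization or inflation; it is not the optimal-transport method the paper proposes:

```python
import numpy as np

def enkf_update(E, H, y, R, rng):
    """One stochastic EnKF analysis step. E is the n x m forecast ensemble
    (one member per column), H the d x n observation operator, y the
    observed vector, R the d x d observation error covariance. The update
    is a linear transformation of the samples built from ensemble
    covariances -- the linear-regression ansatz the text refers to."""
    m = E.shape[1]
    X = E - E.mean(axis=1, keepdims=True)      # state anomalies
    Y = H @ E
    Yp = Y - Y.mean(axis=1, keepdims=True)     # observation anomalies
    Pxy = X @ Yp.T / (m - 1)
    Pyy = Yp @ Yp.T / (m - 1) + R
    K = Pxy @ np.linalg.inv(Pyy)               # ensemble Kalman gain
    D = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, m).T
    return E + K @ (D - Y)                     # perturbed-observation update
```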

Relevance: 60.00%

Abstract:

In this paper we construct common-factor portfolios using a novel linear transformation of standard factor models extracted from large data sets of asset returns. The simple transformation proposed here keeps the basic properties of the usual factor transformations while attaching some new and interesting properties to them. Some theoretical advantages are shown to be present, and their practical importance is confirmed in two applications: the performance of common-factor portfolios is shown to be superior to that of asset returns and factors commonly employed in the finance literature.
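The abstract does not spell out the specific transformation, so the following is only a generic sketch of extracting statistical factors from a returns panel and of the rotation freedom that any such linear transformation of a factor model exploits; it does not reproduce the paper's construction.

```python
import numpy as np

def pc_factors(returns, k):
    """Generic principal-component factor extraction from a T x N panel of
    asset returns. Any invertible k x k matrix Q rotates the factors
    (F -> F @ Q) with loadings remapped to L @ inv(Q).T, leaving the fitted
    model unchanged -- the class of linear transformations to which a
    portfolio-forming construction belongs."""
    X = returns - returns.mean(axis=0)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    L = Vt[:k].T                 # N x k loadings
    F = X @ L                    # T x k factor series
    return F, L
```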

Relevance: 60.00%

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance: 60.00%

Abstract:

Using a canonical formulation, the stability of the rotational motion of artificial satellites is analyzed considering perturbations due to the gravity gradient torque, with Andoyer's variables used to describe the rotational motion. One approach that allows the analysis of the stability of Hamiltonian systems requires the reduction of the Hamiltonian to a normal form. First, equilibrium points are found. Using generalized coordinates, the Hamiltonian is expanded in the neighborhood of the linearly stable equilibrium points. Next, a canonical linear transformation is used to diagonalize the matrix associated with the linear part of the system, and the quadratic part of the Hamiltonian is normalized. Based on a Lie-Hori algorithm, a semi-analytic normalization process is applied and the Hamiltonian is normalized up to fourth order. Once the Hamiltonian is normalized up to order four, the stability of the equilibrium point is analyzed using the theorem of Kovalev and Savchenko. This semi-analytical approach was applied to several data sets of hypothetical satellites; for the satellites considered, stable motion was observed in only a few cases. This work contributes to space missions in which maintaining spacecraft attitude stability is required.
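The diagonalization step has a compact numerical counterpart: the linearized Hamiltonian flow at the equilibrium is governed by the matrix J S, and its spectrum decides linear stability. A minimal sketch for a generic Hessian S, not the satellite-specific Hamiltonian:

```python
import numpy as np

def linear_spectrum(S):
    """Spectrum of the linearized Hamiltonian flow at an equilibrium.
    S is the Hessian of the Hamiltonian in canonical variables z = (q, p),
    so the quadratic part is H2 = 0.5 * z^T S z and the linear system is
    z' = J S z. Purely imaginary eigenvalues +/- i*omega indicate linear
    stability, and the omegas are the frequencies on which the normal form
    (and the canonical linear transformation that diagonalizes the linear
    part) is built."""
    n = S.shape[0] // 2
    J = np.block([[np.zeros((n, n)), np.eye(n)],
                  [-np.eye(n), np.zeros((n, n))]])
    return np.linalg.eigvals(J @ S)
```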