988 results for MEAN VECTOR
Abstract:
This work deals with some classes of linear second order partial differential operators with non-negative characteristic form and underlying non-Euclidean structures. These structures are determined by families of locally Lipschitz-continuous vector fields in R^N, generating metric spaces of Carnot-Carathéodory type. The Carnot-Carathéodory metric related to a family {X_j}, j=1,...,m, is the control distance obtained by minimizing the time needed to go between two points along piecewise trajectories of the vector fields. We are mainly interested in the cases in which a Sobolev-type inequality holds with respect to the X-gradient, and/or the X-control distance is doubling with respect to the Lebesgue measure in R^N. This study is divided into three parts (each corresponding to a chapter), and the subject of each one is a class of operators that includes the class of the subsequent one. In the first chapter, after recalling “X-ellipticity” and related concepts introduced by Kogoj and Lanconelli in [KL00], we show a Maximum Principle for linear second order differential operators for which we only assume a Sobolev-type inequality together with a summability condition on the lower-order terms. Adding some crucial hypotheses on the measure and on the vector fields (doubling property and Poincaré inequality), we are able to obtain some Liouville-type results. This chapter is based on the paper [GL03] by Gutiérrez and Lanconelli. In the second chapter we treat some ultraparabolic equations on Lie groups. In this case R^N is the support of a Lie group, and moreover we require that the vector fields be left invariant. After recalling some results of Cinti [Cin07] about this class of operators and the associated potential theory, we prove a convexity property for mean-value operators of L-subharmonic functions, where L is our differential operator.
In the third chapter we prove a necessary and sufficient condition for the regularity of boundary points for the Dirichlet problem on an open subset of R^N, related to a sub-Laplacian. On a Carnot group we give the essential background for this type of operator and introduce the notion of “quasi-boundedness”. Then we show the strict relationship between this notion, the fundamental solution of the given operator, and the regularity of boundary points.
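For reference, the control distance mentioned in the abstract admits a standard explicit formulation (a textbook definition, not quoted from the thesis itself): the distance between x and y is the shortest time needed to join them along curves whose velocity stays in the unit ball spanned by the vector fields,

```latex
d_X(x, y) = \inf \Big\{\, T > 0 \;:\; \exists\, \gamma \colon [0, T] \to \mathbb{R}^N
  \text{ absolutely continuous},\ \gamma(0) = x,\ \gamma(T) = y, \\
  \dot{\gamma}(t) = \sum_{j=1}^{m} a_j(t)\, X_j(\gamma(t)),
  \quad \sum_{j=1}^{m} a_j(t)^2 \le 1 \ \text{a.e.} \,\Big\}
```

When no such connecting curve exists the infimum is taken to be infinite; the Chow-Rashevskii condition on the fields guarantees finiteness.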
Abstract:
Computing the weighted geometric mean of large sparse matrices is an operation that rapidly becomes intractable as the size of the matrices involved grows. However, if we are not interested in the computation of the matrix function itself, but just in that of its product with a vector, the problem becomes simpler and there is a chance to solve it even when the matrix mean itself would be impossible to compute. Our interest is motivated by the fact that this calculation has some practical applications, related to the preconditioning of some operators arising in domain decomposition of elliptic problems. In this thesis, we explore how such a computation can be efficiently performed. First, we exploit the properties of the weighted geometric mean and find several equivalent ways to express it through real powers of a matrix. Hence, we focus our attention on matrix powers and examine how well-known techniques can be adapted to the solution of the problem at hand. In particular, we consider two broad families of approaches for the computation of f(A) v, namely quadrature formulae and Krylov subspace methods, and generalize them to the pencil case f(A\B) v. Finally, we provide an extensive experimental evaluation of the proposed algorithms and also try to assess how convergence speed and execution time are influenced by some characteristics of the input matrices. Our results suggest that a few elements have some bearing on the performance and that, although there is no best choice in general, knowing the conditioning and the sparsity of the arguments beforehand can considerably help in choosing the best strategy to tackle the problem.
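The quantity the abstract discusses can be illustrated on a small dense example. The sketch below uses one standard expression of the weighted geometric mean of SPD matrices, A #_t B = A^{1/2} (A^{-1/2} B A^{-1/2})^t A^{1/2}, and applies it to a vector; the thesis targets large sparse matrices where this dense route is infeasible and only the action on v is approximated (via quadrature or Krylov methods). The matrices here are made up for illustration.

```python
import numpy as np

def spd_power(M, t):
    """Real power of a symmetric positive definite matrix via eigendecomposition."""
    w, U = np.linalg.eigh(M)
    return U @ np.diag(w ** t) @ U.T

def geometric_mean_action(A, B, t, v):
    """Apply the weighted geometric mean A #_t B to a vector v.

    Dense reference computation using the symmetrized form
    A^{1/2} (A^{-1/2} B A^{-1/2})^t A^{1/2}, which keeps every
    intermediate matrix symmetric positive definite.
    """
    Ah = spd_power(A, 0.5)      # A^{1/2}
    Ahinv = spd_power(A, -0.5)  # A^{-1/2}
    M = Ahinv @ B @ Ahinv       # SPD, so its real power is well defined
    return Ah @ (spd_power(M, t) @ (Ah @ v))
```

By construction the endpoints are recovered exactly: t = 0 gives A v and t = 1 gives B v, which is a convenient sanity check for any approximation scheme.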
Abstract:
Asynchronous level crossing sampling analog-to-digital converters (ADCs) are known to be more energy efficient and produce fewer samples than their equidistantly sampling counterparts. However, as the required threshold voltage is lowered, the number of samples and, in turn, the data rate and the energy consumed by the overall system increase. In this paper, we present a cubic Hermite vector-based technique for online compression of asynchronously sampled electrocardiogram signals. The proposed method performs computationally efficient data compression: the algorithm has O(n) complexity and is thus well suited for asynchronous ADCs. Our algorithm requires no data buffering, maintaining the energy advantage of asynchronous ADCs. The proposed method achieves a compression ratio of up to 90% with achievable percentage root-mean-square difference ratios as low as 0.97. The algorithm preserves the superior feature-to-feature timing accuracy of asynchronously sampled signals. These advantages are achieved in a computationally efficient manner since algorithm boundary parameters for the signals are extracted a priori.
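The paper's own compression pipeline is not reproduced in the abstract; as a minimal sketch of its cubic Hermite building block, the function below evaluates a single Hermite segment between two non-uniformly spaced samples, which is how a reconstruction from endpoint values and slopes would proceed. The parameter names are illustrative, not taken from the paper.

```python
def hermite_eval(t, t0, t1, y0, y1, m0, m1):
    """Evaluate the cubic Hermite segment on [t0, t1] at time t.

    y0, y1 are the sample values at the endpoints and m0, m1 the
    slopes there; non-uniform spacing is handled by the scaling h.
    """
    h = t1 - t0
    s = (t - t0) / h                 # normalized coordinate in [0, 1]
    h00 = 2*s**3 - 3*s**2 + 1        # Hermite basis functions
    h10 = s**3 - 2*s**2 + s
    h01 = -2*s**3 + 3*s**2
    h11 = s**3 - s**2
    return h00*y0 + h10*h*m0 + h01*y1 + h11*h*m1
```

The basis functions guarantee that the segment interpolates both endpoint values and slopes, so consecutive segments join smoothly, which is what preserves feature timing in a reconstructed ECG.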
Abstract:
Purpose: To define a range of normality for the vectorial parameters Ocular Residual Astigmatism (ORA) and topography disparity (TD) and to evaluate their relationship with visual, refractive, anterior and posterior corneal curvature, pachymetric and corneal volume data in normal healthy eyes. Methods: This study comprised a total of 101 consecutive normal healthy eyes of 101 patients ranging in age from 15 to 64 years. In all cases, a complete corneal analysis was performed using a Scheimpflug photography-based topography system (Pentacam system, Oculus Optikgeräte GmbH). Anterior corneal topographic data were imported from the Pentacam system to the iASSORT software (ASSORT Pty. Ltd.), which allowed the calculation of the ocular residual astigmatism (ORA) and topography disparity (TD). Linear regression analysis was used to obtain a linear expression relating ORA and posterior corneal astigmatism (PCA). Results: Mean magnitude of ORA was 0.79 D (SD: 0.43), with a normality range from 0 to 1.63 D. Ninety eyes (89.1%) showed against-the-rule ORA. A weak although statistically significant correlation was found between the magnitudes of posterior corneal astigmatism and ORA (r = 0.34, p < 0.01). Regression analysis showed the presence of a linear relationship between these two variables, although with very limited predictability (R2: 0.08). Mean magnitude of TD was 0.89 D (SD: 0.50), with a normality range from 0 to 1.87 D. Conclusion: The magnitude of the vector parameters ORA and TD is lower than 1.9 D in the healthy human eye.
Abstract:
Purpose. We aimed to characterize the distribution of the vector parameters ocular residual astigmatism (ORA) and topography disparity (TD) in a sample of clinical and subclinical keratoconus eyes, and to evaluate their diagnostic value to discriminate between these conditions and healthy corneas. Methods. This study comprised a total of 43 keratoconic eyes (27 patients, 17–73 years) (keratoconus group), 11 subclinical keratoconus eyes (eight patients, 11–54 years) (subclinical keratoconus group) and 101 healthy eyes (101 patients, 15–64 years) (control group). In all cases, a complete corneal analysis was performed using a Scheimpflug photography-based topography system. Anterior corneal topographic data was imported from it to the iASSORT software (ASSORT Pty. Ltd), which allowed the calculation of ORA and TD. Results. Mean magnitude of the ORA was 3.23 ± 2.38, 1.16 ± 0.50 and 0.79 ± 0.43 D in the keratoconus, subclinical keratoconus and control groups, respectively (p < 0.001). Mean magnitude of the TD was 9.04 ± 8.08, 2.69 ± 2.42 and 0.89 ± 0.50 D in the keratoconus, subclinical keratoconus and control groups, respectively (p < 0.001). Good diagnostic performance of ORA (cutoff point: 1.21 D, sensitivity 83.7 %, specificity 87.1 %) and TD (cutoff point: 1.64 D, sensitivity 93.3 %, specificity 92.1 %) was found for the detection of keratoconus. The diagnostic ability of these parameters for the detection of subclinical keratoconus was more limited (ORA: cutoff 1.17 D, sensitivity 60.0 %, specificity 84.2 %; TD: cutoff 1.29 D, sensitivity 80.0 %, specificity 80.2 %). Conclusion. The vector parameters ORA and TD are able to discriminate with good levels of precision between keratoconus and healthy corneas. For the detection of subclinical keratoconus, only TD seems to be valid.
Abstract:
Background: The residue-wise contact order (RWCO) describes the sequence separations between a residue of interest and its contacting residues in a protein sequence. It is a new kind of one-dimensional protein structure that represents the extent of long-range contacts and is considered a generalization of contact order. Together with secondary structure, accessible surface area, the B factor, and contact number, RWCO provides comprehensive and indispensable information for reconstructing the protein three-dimensional structure from a set of one-dimensional structural properties. Accurately predicting RWCO values could have many important applications in protein three-dimensional structure prediction and protein folding rate prediction, and give deep insights into protein sequence-structure relationships. Results: We developed a novel approach to predict residue-wise contact order values in proteins based on support vector regression (SVR), starting from primary amino acid sequences. We explored seven different sequence encoding schemes to examine their effects on the prediction performance, including local sequence in the form of PSI-BLAST profiles, local sequence plus amino acid composition, local sequence plus molecular weight, local sequence plus secondary structure predicted by PSIPRED, local sequence plus molecular weight and amino acid composition, local sequence plus molecular weight and predicted secondary structure, and local sequence plus molecular weight, amino acid composition and predicted secondary structure. When using local sequences with multiple sequence alignments in the form of PSI-BLAST profiles, we could predict the RWCO distribution with a Pearson correlation coefficient (CC) between the predicted and observed RWCO values of 0.55, and a root mean square error (RMSE) of 0.82, based on a well-defined dataset with 680 protein sequences.
Moreover, by incorporating global features such as molecular weight and amino acid composition we could further improve the prediction performance, raising the CC to 0.57 and lowering the RMSE to 0.79. In addition, combining the secondary structure predicted by PSIPRED was found to significantly improve the prediction performance and yielded the best prediction accuracy, with a CC of 0.60 and an RMSE of 0.78, at least comparable to the other existing methods. Conclusion: The SVR method shows a prediction performance competitive with, or at least comparable to, the previously developed linear regression-based methods for predicting RWCO values. In contrast to support vector classification (SVC), SVR is very good at estimating the raw value profiles of the samples. The successful application of the SVR approach in this study reinforces the fact that support vector regression is a powerful tool for extracting the protein sequence-structure relationship and for estimating protein structural profiles from amino acid sequences.
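The two evaluation metrics quoted throughout this abstract (CC and RMSE between predicted and observed RWCO values) are standard and easy to state concretely; a minimal sketch, with made-up toy data rather than the paper's 680-sequence dataset:

```python
import numpy as np

def cc_and_rmse(observed, predicted):
    """Pearson correlation coefficient and root mean square error,
    the two metrics used to score predicted RWCO profiles."""
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    cc = np.corrcoef(observed, predicted)[0, 1]
    rmse = np.sqrt(np.mean((observed - predicted) ** 2))
    return cc, rmse
```

Note that the two metrics are complementary: a prediction shifted by a constant keeps a perfect CC of 1 while its RMSE grows, which is why the abstract reports both.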
Abstract:
This report outlines the derivation and application of a non-zero mean, polynomial-exponential covariance function based Gaussian process which forms the prior wind field model used in 'autonomous' disambiguation. It is principally used since the non-zero mean permits the computation of realistic local wind vector prior probabilities which are required when applying the scaled-likelihood trick, as the marginals of the full wind field prior. As the full prior is multi-variate normal, these marginals are very simple to compute.
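The final remark above, that marginals of a multivariate normal prior are trivial to compute, can be made concrete: marginalizing is just selecting the corresponding sub-vector of the mean and sub-block of the covariance. A minimal sketch (variable names are illustrative):

```python
import numpy as np

def mvn_marginal(mean, cov, idx):
    """Marginal distribution of a multivariate normal over the components
    listed in idx: the sub-vector of the mean and the matching sub-block
    of the covariance matrix. No integration is needed."""
    idx = np.asarray(idx)
    return mean[idx], cov[np.ix_(idx, idx)]
```

In the wind-field setting, idx would select the two velocity components at one grid node, giving the local wind vector prior needed for the scaled-likelihood trick.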
Abstract:
We derive a mean field algorithm for binary classification with Gaussian processes which is based on the TAP approach originally proposed in the statistical physics of disordered systems. The theory also yields an approximate leave-one-out estimator for the generalization error which is computed at no extra computational cost. We show that from the TAP approach it is possible to derive both a simpler 'naive' mean field theory and support vector machines (SVM) as limiting cases. For both mean field algorithms and support vector machines, simulation results for three small benchmark data sets are presented. They show (1) that one may get state-of-the-art performance by using the leave-one-out estimator for model selection and (2) that the built-in leave-one-out estimators are extremely precise when compared to the exact leave-one-out estimate. The latter result is taken as strong support for the internal consistency of the mean field approach.
Abstract:
In this chapter, we first elaborate on the well-known relationship between Gaussian processes (GP) and Support Vector Machines (SVM). Second, we present approximate solutions for two computational problems arising in GP and SVM. The first one is the calculation of the posterior mean for GP classifiers using a 'naive' mean field approach. The second one is a leave-one-out estimator for the generalization error of SVM based on a linear response method. Simulation results on a benchmark dataset show similar performances for the GP mean field algorithm and the SVM algorithm. The approximate leave-one-out estimator is found to be in very good agreement with the exact leave-one-out error.
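The posterior mean that the chapter approximates for classifiers has an exact closed form in the simpler regression setting, which is the standard starting point for the GP-SVM connection. As a sketch (exact GP regression, not the chapter's mean field approximation for classification; kernel and noise level are illustrative):

```python
import numpy as np

def rbf_kernel(x1, x2, length_scale=1.0):
    """Squared-exponential covariance between two sets of 1-D inputs."""
    d2 = (x1[:, None] - x2[None, :]) ** 2
    return np.exp(-0.5 * d2 / length_scale**2)

def gp_posterior_mean(x_train, y_train, x_test, noise=1e-6):
    """Exact GP regression posterior mean: K_* (K + sigma^2 I)^{-1} y."""
    K = rbf_kernel(x_train, x_train)
    K_star = rbf_kernel(x_test, x_train)
    alpha = np.linalg.solve(K + noise * np.eye(len(x_train)), y_train)
    return K_star @ alpha
```

For classification the likelihood is non-Gaussian, this integral is no longer tractable, and approximations such as the naive mean field approach of the chapter take over.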
Abstract:
Natural language understanding (NLU) aims to map sentences to their semantic meaning representations. Statistical approaches to NLU normally require fully-annotated training data where each sentence is paired with its word-level semantic annotations. In this paper, we propose a novel learning framework which trains Hidden Markov Support Vector Machines (HM-SVMs) without the use of expensive fully-annotated data. In particular, our learning approach takes as input a training set of sentences labeled with abstract semantic annotations encoding underlying embedded structural relations and automatically induces derivation rules that map sentences to their semantic meaning representations. The proposed approach has been tested on the DARPA Communicator Data and achieved 93.18% in F-measure, which outperforms the previously proposed approaches of training the hidden vector state model or conditional random fields from unaligned data, with relative error reduction rates of 43.3% and 10.6%, respectively.
Abstract:
Data fluctuation in multiple measurements of Laser Induced Breakdown Spectroscopy (LIBS) greatly affects the accuracy of quantitative analysis. A new LIBS quantitative analysis method based on the Robust Least Squares Support Vector Machine (RLS-SVM) regression model is proposed. The usual way to enhance the analysis accuracy is to improve the quality and consistency of the emission signal, such as by averaging the spectral signals or spectrum standardization over a number of laser shots. The proposed method focuses more on how to enhance the robustness of the quantitative analysis regression model. The proposed RLS-SVM regression model originates from the Weighted Least Squares Support Vector Machine (WLS-SVM) but has an improved segmented weighting function and residual error calculation according to the statistical distribution of measured spectral data. Through the improved segmented weighting function, the information on the spectral data in the normal distribution will be retained in the regression model while the information on the outliers will be restrained or removed. Copper elemental concentration analysis experiments of 16 certified standard brass samples were carried out. The average value of relative standard deviation obtained from the RLS-SVM model was 3.06% and the root mean square error was 1.537%. The experimental results showed that the proposed method achieved better prediction accuracy and better modeling robustness compared with the quantitative analysis methods based on Partial Least Squares (PLS) regression, standard Support Vector Machine (SVM) and WLS-SVM. It was also demonstrated that the improved weighting function had better comprehensive performance in model robustness and convergence speed, compared with the four known weighting functions.
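The paper's improved segmented weighting function is not reproduced in the abstract; as a hedged sketch of the idea it builds on, the function below implements the classical segmented (Hampel-type) weighting used in WLS-SVM as introduced by Suykens et al., with the usual cutoffs c1 = 2.5 and c2 = 3: residuals near the center keep full weight, a transition band is down-weighted linearly, and outliers are nearly removed. The constants and robust scale estimate are the standard ones, not the paper's improved variant.

```python
import numpy as np

def segmented_weights(residuals, c1=2.5, c2=3.0):
    """Classical WLS-SVM segmented weighting (Suykens et al.), shown for
    contrast with the paper's improved variant. Residuals are standardized
    by a robust MAD-based scale, then weighted piecewise."""
    residuals = np.asarray(residuals, dtype=float)
    # Robust scale estimate: 1.483 * median absolute deviation
    s = 1.483 * np.median(np.abs(residuals - np.median(residuals)))
    z = np.abs(residuals) / s
    w = np.ones_like(z)                    # full weight inside |z| <= c1
    mid = (z > c1) & (z <= c2)
    w[mid] = (c2 - z[mid]) / (c2 - c1)     # linear taper in the transition band
    w[z > c2] = 1e-4                       # outliers effectively removed
    return w
```

Retraining the LS-SVM with these weights is what restrains the influence of outlying spectra while keeping the normally distributed measurements intact, which is the robustness mechanism the abstract describes.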
Abstract:
Background - Clostridium difficile is a bacterial healthcare-associated infection that may be transferred by houseflies (Musca domestica) due to their close ecological association with humans and cosmopolitan nature. Aim - To determine the ability of M. domestica to transfer C. difficile both mechanically and following ingestion. Methods - M. domestica were exposed to independent suspensions of vegetative cells and spores of C. difficile, then sampled on to selective agar plates immediately post-exposure and at 1-h intervals to assess the mechanical transfer of C. difficile. Fly excreta were cultured and alimentary canals were dissected to determine internalization of cells and spores. Findings - M. domestica exposed to vegetative cell suspensions and spore suspensions of C. difficile were able to transfer the bacteria mechanically for up to 4 h upon subsequent contact with surfaces. The greatest numbers of colony-forming units (CFUs) per fly were transferred immediately following exposure (mean CFUs 123.8 ± 66.9 for the vegetative cell suspension and 288.2 ± 83.2 for the spore suspension). After 1 h, this had reduced (21.2 ± 11.4 for the vegetative cell suspension and 19.9 ± 9 for spores). Mean C. difficile CFUs isolated from the M. domestica alimentary canal were 35 ± 6.5, and mean C. difficile CFUs per faecal spot were 1.04 ± 0.58. C. difficile could be recovered from fly excreta for up to 96 h. Conclusion - This study describes the potential for M. domestica to contribute to environmental persistence and spread of C. difficile in hospitals, highlighting flies as realistic vectors of this micro-organism in clinical areas.