942 results for Explicit method, Mean square stability, Stochastic orthogonal Runge-Kutta, Chebyshev method


Relevance:

100.00%

Publisher:

Abstract:

A comparative analysis of the restoration of continuous signals by different kinds of approximation is performed. A software product is proposed that identifies the optimal method for restoring different original signals (Lagrange polynomial, Kotelnikov interpolation series, linear and cubic splines, Haar wavelet, and Kotelnikov-Shannon wavelet) based on the criterion of minimum mean-square deviation. Practical recommendations on the selection of the approximation function for different classes of signals are given.
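As a concrete illustration of this selection criterion, here is a minimal sketch (not the authors' software product; the test signal and the two candidate methods are illustrative) that restores a sampled sine with a Lagrange polynomial and a linear spline, then picks the method with the smaller root-mean-square deviation on a dense grid:

```python
import math

def lagrange_interp(xs, ys, x):
    # Evaluate the Lagrange polynomial through the nodes (xs, ys) at x.
    total = 0.0
    for i in range(len(xs)):
        term = ys[i]
        for j in range(len(xs)):
            if j != i:
                term *= (x - xs[j]) / (xs[i] - xs[j])
        total += term
    return total

def linear_interp(xs, ys, x):
    # Piecewise-linear interpolation on sorted nodes xs.
    for i in range(len(xs) - 1):
        if xs[i] <= x <= xs[i + 1]:
            t = (x - xs[i]) / (xs[i + 1] - xs[i])
            return (1 - t) * ys[i] + t * ys[i + 1]
    return ys[-1]

def rms_deviation(approx, signal, grid):
    # Mean-square-deviation criterion used to rank restoration methods.
    return math.sqrt(sum((approx(x) - signal(x)) ** 2 for x in grid) / len(grid))

signal = math.sin                                   # original signal
xs = [i * math.pi / 4 for i in range(9)]            # 9 samples on [0, 2*pi]
ys = [signal(x) for x in xs]
grid = [i * 2 * math.pi / 200 for i in range(201)]  # dense evaluation grid

methods = {
    "Lagrange polynomial": lambda x: lagrange_interp(xs, ys, x),
    "linear spline": lambda x: linear_interp(xs, ys, x),
}
errors = {name: rms_deviation(f, signal, grid) for name, f in methods.items()}
best = min(errors, key=errors.get)
```

On this smooth signal the global polynomial wins; for signals with discontinuities the ranking can reverse, which is why the criterion has to be evaluated per class of signal.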

An ab initio structure prediction approach adapted to the peptide-major histocompatibility complex (MHC) class I system is presented. Based on structure comparisons of a large set of peptide-MHC class I complexes, a molecular dynamics protocol is proposed using simulated annealing (SA) cycles to sample the conformational space of the peptide in its fixed MHC environment. A set of 14 peptide-human leukocyte antigen (HLA) A0201 and 27 peptide-non-HLA A0201 complexes for which X-ray structures are available is used to test the accuracy of the prediction method. For each complex, 1000 peptide conformers are obtained from the SA sampling. A graph theory clustering algorithm based on heavy atom root-mean-square deviation (RMSD) values is applied to the sampled conformers. The clusters are ranked using cluster size and mean effective or conformational free energies, with solvation free energies computed using the Generalized Born MV 2 (GB-MV2) and Poisson-Boltzmann (PB) continuum models. The final conformation is chosen as the center of the best-ranked cluster. With conformational free energies, the overall prediction success is 83% using a 1.00 Angstrom crystal RMSD criterion for main-chain atoms, and 76% using a 1.50 Angstrom RMSD criterion for heavy atoms. The prediction success is even higher for the set of 14 peptide-HLA A0201 complexes: 100% of the peptides have main-chain RMSD values ≤1.00 Angstrom and 93% of the peptides have heavy atom RMSD values ≤1.50 Angstrom. This structure prediction method can be applied to complexes of natural or modified antigenic peptides in their MHC environment with the aim of performing rational structure-based optimizations of tumor vaccines.
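The clustering step can be sketched as follows, assuming conformers are given as lists of heavy-atom coordinates: build a graph joining conformer pairs whose RMSD falls below a cutoff, take connected components as clusters, and pick as center the member closest on average to the rest. The graph construction and the toy data are illustrative, not the paper's exact algorithm:

```python
import math
from itertools import combinations

def rmsd(a, b):
    # Heavy-atom RMSD between two conformers given as lists of xyz triples.
    sq = sum((p - q) ** 2 for atom_a, atom_b in zip(a, b)
             for p, q in zip(atom_a, atom_b))
    return math.sqrt(sq / len(a))

def rmsd_clusters(conformers, cutoff):
    # Connected components of the graph whose edges join conformer pairs
    # with RMSD <= cutoff; returned largest cluster first.
    n = len(conformers)
    adj = {i: [] for i in range(n)}
    for i, j in combinations(range(n), 2):
        if rmsd(conformers[i], conformers[j]) <= cutoff:
            adj[i].append(j)
            adj[j].append(i)
    seen, clusters = set(), []
    for s in range(n):
        if s in seen:
            continue
        comp, stack = [], [s]
        seen.add(s)
        while stack:
            u = stack.pop()
            comp.append(u)
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        clusters.append(comp)
    return sorted(clusters, key=len, reverse=True)

def cluster_center(conformers, members):
    # Member with the smallest mean RMSD to the rest of its cluster.
    return min(members, key=lambda i: sum(rmsd(conformers[i], conformers[j])
                                          for j in members))
```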

Neutrality tests in quantitative genetics provide a statistical framework for the detection of selection on polygenic traits in wild populations. However, the existing method based on comparisons of divergence at neutral markers and quantitative traits (Q(st)-F(st)) suffers from several limitations that hinder a clear interpretation of the results with typical empirical designs. In this article, we propose a multivariate extension of this neutrality test based on empirical estimates of the among-populations (D) and within-populations (G) covariance matrices by MANOVA. A simple pattern is expected under neutrality: D = 2F(st)/(1 - F(st))G, so that neutrality implies both proportionality of the two matrices and a specific value of the proportionality coefficient. This pattern is tested using Flury's framework for matrix comparison [common principal-component (CPC) analysis], a well-known tool in G matrix evolution studies. We show the importance of using a Bartlett adjustment of the test for the small sample sizes typically found in empirical studies. We propose a dual test: (i) that the proportionality coefficient is not different from its neutral expectation [2F(st)/(1 - F(st))] and (ii) that the MANOVA estimates of mean square matrices between and among populations are proportional. These two tests combined provide a more stringent test for neutrality than the classic Q(st)-F(st) comparison and avoid several statistical problems. Extensive simulations of realistic empirical designs suggest that these tests correctly detect the expected pattern under neutrality and have enough power to efficiently detect mild to strong selection (homogeneous, heterogeneous, or mixed) when it is occurring on a set of traits. This method also provides a rigorous and quantitative framework for disentangling the effects of different selection regimes and of drift on the evolution of the G matrix. 
We discuss practical requirements for the proper application of our test in empirical studies and potential extensions.
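The neutral expectation above, D = [2F(st)/(1 - F(st))]G, can be sketched as a direct proportionality check. This is a minimal element-wise version with a fixed tolerance; the actual test uses Flury's CPC framework with a Bartlett adjustment, which is not shown here:

```python
def neutral_coefficient(fst):
    # Expected proportionality coefficient between D and G under neutrality:
    # D = [2 * Fst / (1 - Fst)] * G.
    return 2.0 * fst / (1.0 - fst)

def is_proportional(d_matrix, g_matrix, rho, tol=1e-9):
    # Element-wise check that D equals rho * G within a tolerance.
    return all(
        abs(d - rho * g) <= tol
        for d_row, g_row in zip(d_matrix, g_matrix)
        for d, g in zip(d_row, g_row)
    )
```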

The author studies the error and complexity of the discrete random walk Monte Carlo technique for radiosity, using both the shooting and gathering methods, and shows that the shooting method exhibits a lower complexity than the gathering one and, under some constraints, has a linear complexity. This is an improvement over a previous result that pointed to an O(n log n) complexity. Three unbiased estimators are given and compared for each method, and closed forms and bounds are obtained for their variances. The expected value of the mean square error (MSE) is also bounded. Some of the results obtained are also shown.
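A generic absorption random-walk estimator for the radiosity system B = E + ρFB can be sketched as follows. This is a textbook gathering-style walk on a toy two-patch scene, not one of the paper's three estimators; all names and the scene are illustrative:

```python
import random

def gathering_walk(start, emission, form_factors, reflectivity, rng):
    # One absorption random walk for B = E + rho * F * B: score the emission
    # at every visited patch; survive each bounce with probability rho (the
    # reflectivity), then move to patch j with probability F[patch][j].
    score = emission[start]
    patch = start
    while rng.random() < reflectivity:
        patch = rng.choices(range(len(emission)),
                            weights=form_factors[patch], k=1)[0]
        score += emission[patch]
    return score

def estimate_radiosity(start, emission, form_factors, reflectivity,
                       n_walks, seed=0):
    # Unbiased Monte Carlo estimate of the radiosity of patch `start`.
    rng = random.Random(seed)
    total = sum(gathering_walk(start, emission, form_factors, reflectivity, rng)
                for _ in range(n_walks))
    return total / n_walks

# Toy two-patch enclosure: patch 0 emits 1, both patches reflect half the light.
emission = [1.0, 0.0]
form_factors = [[0.0, 1.0], [1.0, 0.0]]
b0 = estimate_radiosity(0, emission, form_factors, 0.5, n_walks=20000)
# Analytic solution of B = E + 0.5 * F * B gives B0 = 4/3.
```

The per-walk score is unbiased for B(start), and the MSE of the average decays as the single-walk variance divided by the number of walks.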

BACKGROUND Functional brain images such as Single-Photon Emission Computed Tomography (SPECT) and Positron Emission Tomography (PET) have been widely used to guide clinicians in the diagnosis of Alzheimer's Disease (AD). However, the subjectivity involved in their evaluation has favoured the development of Computer Aided Diagnosis (CAD) systems. METHODS A novel combination of feature extraction techniques is proposed to improve the diagnosis of AD. Firstly, Regions of Interest (ROIs) are selected by means of a t-test carried out on 3D Normalised Mean Square Error (NMSE) features restricted to be located within a predefined brain activation mask. In order to address the small-sample-size problem, the dimension of the feature space was further reduced by: Large Margin Nearest Neighbours using a rectangular matrix (LMNN-RECT), Principal Component Analysis (PCA) or Partial Least Squares (PLS) (the latter two also analysed with an LMNN transformation). Regarding the classifiers, kernel Support Vector Machines (SVMs) and LMNN using Euclidean, Mahalanobis and Energy-based metrics were compared. RESULTS Several experiments were conducted in order to evaluate the proposed LMNN-based feature extraction algorithms and their benefits as: i) a linear transformation of the PLS- or PCA-reduced data, ii) a feature reduction technique, and iii) a classifier (with Euclidean, Mahalanobis or Energy-based metrics). The system was evaluated by means of k-fold cross-validation, yielding accuracy, sensitivity and specificity values of 92.78%, 91.07% and 95.12% (for SPECT) and 90.67%, 88% and 93.33% (for PET), respectively, when the NMSE-PLS-LMNN feature extraction method was used in combination with an SVM classifier, thus outperforming recently reported baseline methods. CONCLUSIONS All the proposed methods turned out to be valid solutions for the presented problem. One of the advances is the robustness of the LMNN algorithm, which not only provides a higher separation rate between the classes but also, in combination with NMSE and PLS, makes this rate more stable. Another advance is the generalization ability of the methods, since the experiments were performed on two image modalities (SPECT and PET).
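The reported accuracy, sensitivity and specificity can be computed from a confusion matrix as below; this is a generic sketch with toy labels, not the paper's evaluation pipeline:

```python
def diagnostic_metrics(y_true, y_pred):
    # Accuracy, sensitivity (true-positive rate) and specificity
    # (true-negative rate) for binary labels, 1 = AD, 0 = control.
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return accuracy, sensitivity, specificity
```

In k-fold cross-validation these quantities are accumulated over the held-out folds before being reported.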

We present a novel numerical approach for the comprehensive, flexible, and accurate simulation of poro-elastic wave propagation in 2D polar coordinates. An important application of this method and its extensions will be the modeling of complex seismic wave phenomena in fluid-filled boreholes, which represents a major, and as of yet largely unresolved, computational problem in exploration geophysics. In view of this, we consider a numerical mesh, which can be arbitrarily heterogeneous, consisting of two or more concentric rings representing the fluid in the center and the surrounding porous medium. The spatial discretization is based on a Chebyshev expansion in the radial direction, a Fourier expansion in the azimuthal direction, and a Runge-Kutta integration scheme for the time evolution. A domain decomposition method based on the method of characteristics is used to match the fluid-solid boundary conditions. This multi-domain approach allows for significant reductions of the number of grid points in the azimuthal direction for the inner grid domain and thus for corresponding increases of the time step and enhancements of computational efficiency. The viability and accuracy of the proposed method have been rigorously tested and verified through comparisons with analytical solutions as well as with the results obtained with a corresponding, previously published, and independently benchmarked solution for 2D Cartesian coordinates. Finally, the proposed numerical solution also satisfies the reciprocity theorem, which indicates that the inherent singularity associated with the origin of the polar coordinate system is adequately handled.
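The Runge-Kutta time evolution mentioned above can be illustrated with the classical fourth-order step; this is a generic sketch on a scalar test equation, not the paper's poro-elastic update:

```python
def rk4_step(f, t, u, dt):
    # One classical fourth-order Runge-Kutta step for du/dt = f(t, u).
    k1 = f(t, u)
    k2 = f(t + dt / 2.0, u + dt / 2.0 * k1)
    k3 = f(t + dt / 2.0, u + dt / 2.0 * k2)
    k4 = f(t + dt, u + dt * k3)
    return u + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

# Test problem with a known solution: u' = -u, u(0) = 1, so u(1) = e^-1.
u, t, dt = 1.0, 0.0, 0.01
for _ in range(100):
    u = rk4_step(lambda s, v: -v, t, u, dt)
    t += dt
```

The fourth-order accuracy is what allows the larger time steps gained by coarsening the inner azimuthal grid to be exploited without losing precision.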

By means of classical Itô calculus we decompose option prices as the sum of the classical Black-Scholes formula, with volatility parameter equal to the root-mean-square future average volatility, plus a term due to correlation and a term due to the volatility of the volatility. This decomposition allows us to develop first- and second-order approximation formulas for option prices and implied volatilities in the Heston volatility framework, as well as to study their accuracy. Numerical examples are given.
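The building blocks of this decomposition can be sketched as follows: the classical Black-Scholes call price and the root-mean-square of a discretised future volatility path. Function names are illustrative, and the Heston-specific correction terms are not shown:

```python
import math

def norm_cdf(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def black_scholes_call(spot, strike, rate, sigma, maturity):
    # Classical Black-Scholes price of a European call.
    d1 = ((math.log(spot / strike) + (rate + 0.5 * sigma ** 2) * maturity)
          / (sigma * math.sqrt(maturity)))
    d2 = d1 - sigma * math.sqrt(maturity)
    return spot * norm_cdf(d1) - strike * math.exp(-rate * maturity) * norm_cdf(d2)

def rms_volatility(path):
    # Root-mean-square of a discretised future (average) volatility path;
    # this is the volatility parameter fed to the Black-Scholes term.
    return math.sqrt(sum(v * v for v in path) / len(path))
```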

Introduction: Prior repeated sprints (6) have become an interesting method for resolving the debate surrounding the principal factors that limit oxygen uptake (V'O2) kinetics at the onset of exercise [i.e., muscle O2 delivery (5) or metabolic inertia (3)]. The aim of this study was to compare the effects of two repeated-sprint sets of 6x6s, separated by different recovery durations between the sprints, on V'O2 and muscular de-oxygenation [HHb] kinetics during a subsequent heavy-intensity exercise. Methods: 10 male subjects performed a 6-min constant-load cycling test (T50) at an intensity corresponding to half of the difference between V'O2max and the ventilatory threshold. Then, they performed two repeated-sprint sets of 6x6s all-out, separated by different recovery durations between the sprints (S1: 30s and S2: 3min), followed, after 7 min of recovery, by the T50 (S1T50 and S2T50, respectively). V'O2, [HHb] of the vastus lateralis (VL) and surface electromyography activity [i.e., root-mean-square (RMS) and the median frequency of the power density spectrum (MDF)] from VL and vastus medialis (VM) were recorded throughout T50. Models using a bi-exponential function for the overall T50 and a mono-exponential function for the first 90s of T50 were used to define the V'O2 and [HHb] kinetics, respectively. Results: The mean V'O2 value was higher in S1 (2.9±0.3l.min-1) than in S2 (1.2±0.3l.min-1) (p<0.001). Peripheral blood flow was increased after the sprints, as attested by a higher basal heart rate (HRbaseline) (S1T50: +22%; S2T50: +17%; p≤0.008). The [HHb] time delay was shorter for S1T50 and S2T50 than for T50 (-22% for both; p≤0.007), whereas the mean response time of V'O2 was accelerated only after S1 (S1T50: 32.3±2.5s; S2T50: 34.4±2.6s; T50: 35.7±5.4s; p=0.031). There were no significant differences in RMS between the three conditions (p>0.05). MDF of VM was higher during the first 3 min in S1T50 than in T50 (+6%; p≤0.05). 
Conclusion: The study shows that V'O2 kinetics were accelerated by prior repeated sprints with a short (30s) but not a long (3min) inter-sprint recovery, even though the [HHb] kinetics were accelerated and peripheral blood flow was enhanced after both sprint sets. S1, inducing a greater PCr depletion (1) and a change in the pattern of fibre recruitment (increase in MDF) compared with S2, may decrease metabolic inertia (2), stimulate oxidative phosphorylation activation (4) and accelerate V'O2 kinetics at the beginning of a subsequent high-intensity exercise.
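The mono-exponential model used for the first 90 s of the response can be sketched as below; parameter names are illustrative, and in the study the baseline, amplitude, time delay and time constant are fitted to the measured responses:

```python
import math

def mono_exponential(t, baseline, amplitude, time_delay, tau):
    # Kinetic response: constant baseline before the time delay, then an
    # exponential rise toward (baseline + amplitude) with time constant tau.
    if t < time_delay:
        return baseline
    return baseline + amplitude * (1.0 - math.exp(-(t - time_delay) / tau))
```

A shorter time delay or a smaller tau corresponds to the "accelerated kinetics" reported above.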

We present a novel numerical approach for the comprehensive, flexible, and accurate simulation of poro-elastic wave propagation in cylindrical coordinates. An important application of this method is the modeling of complex seismic wave phenomena in fluid-filled boreholes, which represents a major, and as of yet largely unresolved, computational problem in exploration geophysics. In view of this, we consider a numerical mesh consisting of three concentric domains representing the borehole fluid in the center, the borehole casing and the surrounding porous formation. The spatial discretization is based on a Chebyshev expansion in the radial direction, Fourier expansions in the other directions, and a Runge-Kutta integration scheme for the time evolution. A domain decomposition method based on the method of characteristics is used to match the boundary conditions at the fluid/porous-solid and porous-solid/porous-solid interfaces. The viability and accuracy of the proposed method have been tested and verified in 2D polar coordinates through comparisons with analytical solutions as well as with the results obtained with a corresponding, previously published, and independently benchmarked solution for 2D Cartesian coordinates. The proposed numerical solution also satisfies the reciprocity theorem, which indicates that the inherent singularity associated with the origin of the polar coordinate system is handled adequately.
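The radial Chebyshev expansion collocates on Gauss-Lobatto points. A minimal sketch follows, with an affine map onto a concentric ring; the mapping function is illustrative, not the paper's mesh generator:

```python
import math

def chebyshev_lobatto_nodes(n):
    # Chebyshev-Gauss-Lobatto points x_j = cos(pi * j / n), j = 0..n,
    # on [-1, 1]; they cluster near the interval ends, where the matching
    # of boundary conditions between concentric domains takes place.
    return [math.cos(math.pi * j / n) for j in range(n + 1)]

def map_to_radial_interval(nodes, r_inner, r_outer):
    # Affine map from the reference interval [-1, 1] to [r_inner, r_outer].
    return [r_inner + (x + 1.0) * (r_outer - r_inner) / 2.0 for x in nodes]
```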


The estimation of unavailable soil variables from other, related, measured variables can be achieved through pedotransfer functions (PTFs), mainly saving time and reducing cost. Great differences among soils, however, can yield undesirable results when applying this method. This study discusses the application of PTFs developed by several authors using a variety of soils of different characteristics to evaluate the soil water contents of two Brazilian lowland soils. Comparisons are made between PTF-evaluated data and field-measured data, using statistical and geostatistical tools such as mean error, root mean square error, semivariograms, cross-validation, and regression coefficients. The eight PTFs tested to evaluate gravimetric soil water contents (Ug) at the tensions of 33 kPa and 1,500 kPa presented a tendency to overestimate Ug33 kPa and underestimate Ug1,500 kPa. The PTFs were ranked according to their performance and also with respect to their potential in describing the structure of the spatial variability of the set of measured values. Although none of the PTFs changed the distribution pattern of the data, all resulted in means and variances statistically different from those observed for the measured values. The PTFs that presented the best predictive values of Ug33 kPa and Ug1,500 kPa were not the same ones that had the best performance in reproducing the structure of spatial variability of these variables.
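The statistical tools named above can be sketched generically: mean error (sign indicates over- or underestimation), root mean square error, and an RMSE-based ranking of candidate PTFs. The PTF names and data are illustrative:

```python
import math

def mean_error(predicted, observed):
    # Positive values indicate overestimation on average.
    return sum(p - o for p, o in zip(predicted, observed)) / len(observed)

def root_mean_square_error(predicted, observed):
    return math.sqrt(sum((p - o) ** 2
                         for p, o in zip(predicted, observed)) / len(observed))

def rank_ptfs(predictions, observed):
    # Rank candidate PTFs from lowest to highest RMSE against the
    # field-measured values.
    return sorted(predictions,
                  key=lambda name: root_mean_square_error(predictions[name],
                                                          observed))
```

As the abstract notes, a PTF that wins on RMSE need not best reproduce the spatial-variability structure, so the geostatistical ranking is computed separately.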

In this paper we describe the results of a simulation study performed to elucidate the robustness of the Lindstrom and Bates (1990) approximation method under non-normality of the residuals, in different situations. Concerning the fixed effects, the observed coverage probabilities and the true bias and mean square error values show that some aspects of this inferential approach are not completely reliable. When the true distribution of the residuals is asymmetrical, the true coverage is markedly lower than the nominal one. The best results are obtained for the skew-normal distribution, and not for the normal distribution. On the other hand, the results are partially reversed concerning the random effects. Soybean genotype data are used to illustrate the methods and to motivate the simulation scenarios.
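The coverage part of such a study can be illustrated on a deliberately simplified target (the mean of i.i.d. data rather than a nonlinear mixed model): a nominal 95% interval is built from each simulated sample and the fraction of intervals covering the true value is recorded. The distributions and sample sizes are illustrative:

```python
import math
import random

def ci_coverage(n, n_sims, draw, true_mean, rng, z=1.96):
    # Monte Carlo estimate of the true coverage of a nominal 95%
    # normal-theory confidence interval for a mean.
    hits = 0
    for _ in range(n_sims):
        xs = [draw(rng) for _ in range(n)]
        m = sum(xs) / n
        s = math.sqrt(sum((x - m) ** 2 for x in xs) / (n - 1))
        half = z * s / math.sqrt(n)
        hits += m - half <= true_mean <= m + half
    return hits / n_sims

rng = random.Random(42)
cov_symmetric = ci_coverage(30, 2000, lambda r: r.gauss(0.0, 1.0), 0.0, rng)
cov_asymmetric = ci_coverage(30, 2000, lambda r: r.expovariate(1.0), 1.0, rng)
```

With asymmetric (here exponential) data the estimated coverage tends to fall below the nominal level, mirroring the pattern reported for the fixed effects.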

In this paper we analyse, using Monte Carlo simulation, the possible consequences of incorrect assumptions about the true structure of the random effects covariance matrix and the true correlation pattern of the residuals for the performance of an estimation method for nonlinear mixed models. The procedure under study is the well-known linearization method due to Lindstrom and Bates (1990), implemented in the nlme library of S-Plus and R. Its performance is studied in terms of bias, mean square error (MSE), and true coverage of the associated asymptotic confidence intervals. Setting aside other criteria, such as the convenience of avoiding over-parameterised models, it seems worse to erroneously assume some structure than to assume no structure when the latter would be adequate.
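The performance criteria used above reduce to simple summaries over the simulated estimates, linked by the decomposition MSE = bias² + variance (population form over the simulation runs). A minimal sketch:

```python
def bias(estimates, truth):
    # Average estimate minus the true parameter value.
    return sum(estimates) / len(estimates) - truth

def mse(estimates, truth):
    # Mean squared deviation of the estimates from the truth.
    return sum((e - truth) ** 2 for e in estimates) / len(estimates)

def variance(estimates):
    # Population variance of the estimates across simulation runs.
    m = sum(estimates) / len(estimates)
    return sum((e - m) ** 2 for e in estimates) / len(estimates)
```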

This paper focuses on four alternatives for the analysis of experiments in square lattices as far as the estimation of variance components and some genetic parameters is concerned: 1) intra-block analysis with adjusted treatments and blocks within unadjusted repetitions; 2) lattice analysis as complete randomized blocks; 3) intra-block analysis with unadjusted treatments and blocks within adjusted repetitions; 4) lattice analysis as complete randomized blocks, utilizing the adjusted treatment means obtained from the analysis with recovery of inter-block information, and taking as the mean square of the error the mean effective variance of this same analysis. For the four alternatives, estimators and estimates were obtained for the variance components and heritability coefficients. The classification of material was also studied. The present study suggests that for each experiment, and depending on the objectives of the analysis, one should determine which alternative is preferable, mainly in cases where a negative estimate is obtained for the variance component due to effects of blocks within adjusted repetitions.
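A common textbook estimator of entry-mean heritability from ANOVA mean squares is sketched below; it is illustrative and not necessarily identical to the estimators derived for the four lattice alternatives. Note that a negative mean-square difference produces a negative genotypic variance estimate, the situation flagged in the abstract:

```python
def heritability_from_mean_squares(ms_treatment, ms_error, n_reps):
    # Genotypic variance component from the ANOVA expectation
    # E[MS_treatment] = sigma2_e + r * sigma2_g, then entry-mean
    # heritability h2 = sigma2_g / (sigma2_g + sigma2_e / r).
    var_g = (ms_treatment - ms_error) / n_reps
    return var_g / (var_g + ms_error / n_reps)
```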

In the first part of the study, nine estimators of the first-order autoregressive parameter are reviewed and a new estimator is proposed. The relationships and discrepancies between the estimators are discussed in order to achieve a clear differentiation. In the second part of the study, the precision of the estimation of autocorrelation is studied. The performance of the ten lag-one autocorrelation estimators is compared in terms of Mean Square Error (combining bias and variance) using data series generated by Monte Carlo simulation. The results show that there is no single optimal estimator for all conditions, suggesting that the estimator ought to be chosen according to the sample size and to the information available on the possible direction of the serial dependence. Additionally, the probability of labelling an actually existing autocorrelation as statistically significant is explored using Monte Carlo sampling. The power estimates obtained are quite similar among the tests associated with the different estimators. These estimates evidence the small probability of detecting autocorrelation in series with fewer than 20 measurement times.
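One conventional lag-one estimator can be sketched as below; the study reviews nine variants plus a new one, which differ largely in centring and denominator choices, so this is only one representative form:

```python
def lag_one_autocorrelation(xs):
    # Conventional lag-1 estimator:
    # r1 = sum_t (x_t - m)(x_{t+1} - m) / sum_t (x_t - m)^2,
    # with m the sample mean of the whole series.
    n = len(xs)
    m = sum(xs) / n
    num = sum((xs[t] - m) * (xs[t + 1] - m) for t in range(n - 1))
    den = sum((x - m) ** 2 for x in xs)
    return num / den
```

For a perfectly alternating series of length n this estimator returns -(n - 1)/n rather than -1, a small-sample bias of exactly the kind the MSE comparison quantifies.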