942 results for Explicit method, Mean square stability, Stochastic orthogonal Runge-Kutta, Chebyshev method
Abstract:
Introduction: Prior repeated sprints (6) have become an interesting method for resolving the debate over the principal factors that limit oxygen-uptake (V'O2) kinetics at the onset of exercise [i.e., muscle O2 delivery (5) or metabolic inertia (3)]. The aim of this study was to compare the effects of two repeated-sprint sets of 6×6 s, with different recovery durations between sprints, on V'O2 and muscular deoxygenation [HHb] kinetics during subsequent heavy-intensity exercise. Methods: Ten male subjects performed a 6-min constant-load cycling test (T50) at an intensity corresponding to half the difference between V'O2max and the ventilatory threshold. They then performed two all-out repeated-sprint sets of 6×6 s with different recovery durations between sprints (S1: 30 s; S2: 3 min), each followed, after 7 min of recovery, by the T50 (S1T50 and S2T50, respectively). V'O2, [HHb] of the vastus lateralis (VL), and surface electromyographic activity [i.e., root mean square (RMS) and median frequency of the power density spectrum (MDF)] of the VL and vastus medialis (VM) were recorded throughout T50. A bi-exponential function for the overall T50 and a mono-exponential function for the first 90 s of T50 were used to model the V'O2 and [HHb] kinetics, respectively. Results: Mean V'O2 was higher in S1 (2.9±0.3 L·min-1) than in S2 (1.2±0.3 L·min-1; p<0.001). Peripheral blood flow was increased after the sprints, as attested by a higher baseline heart rate (HRbaseline) (S1T50: +22%; S2T50: +17%; p≤0.008). The [HHb] time delay was shorter for S1T50 and S2T50 than for T50 (-22% for both; p≤0.007), whereas the mean response time of V'O2 was accelerated only after S1 (S1T50: 32.3±2.5 s; S2T50: 34.4±2.6 s; T50: 35.7±5.4 s; p=0.031). There were no significant differences in RMS between the three conditions (p>0.05). The MDF of the VM was higher during the first 3 min in S1T50 than in T50 (+6%; p≤0.05).
Conclusion: This study shows that V'O2 kinetics were speeded up by prior repeated sprints with a short (30 s) but not a long (3 min) inter-sprint recovery, even though [HHb] kinetics were accelerated and peripheral blood flow was enhanced after both sprint sets. S1, which induced greater PCr depletion (1) and a change in the fibre-recruitment pattern (increased MDF) compared with S2, may decrease metabolic inertia (2), stimulate oxidative phosphorylation activation (4), and accelerate V'O2 kinetics at the onset of subsequent high-intensity exercise.
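The mono-exponential kinetics model described in the Methods can be sketched as a simple fit. The code below uses synthetic data with assumed parameter values (amplitude, time delay, time constant), not the study's measurements, and a coarse grid search rather than the study's fitting software:

```python
import numpy as np

def mono_exp(t, A, TD, tau):
    # y(t) = A * (1 - exp(-(t - TD)/tau)) for t >= TD, else 0
    return np.where(t >= TD,
                    A * (1.0 - np.exp(-np.maximum(t - TD, 0.0) / tau)),
                    0.0)

rng = np.random.default_rng(0)
t = np.arange(0.0, 90.0, 1.0)
# Synthetic "observed" response: assumed A=2.0, TD=12 s, tau=32 s, plus noise.
y_obs = mono_exp(t, 2.0, 12.0, 32.0) + rng.normal(0.0, 0.02, t.size)

# Coarse grid search over (TD, tau); amplitude A is solved in closed form
# for each candidate by linear least squares.
best = None
for TD in np.arange(5.0, 20.0, 0.5):
    for tau in np.arange(20.0, 45.0, 0.5):
        basis = mono_exp(t, 1.0, TD, tau)
        A = float(basis @ y_obs) / float(basis @ basis)
        sse = float(np.sum((y_obs - A * basis) ** 2))
        if best is None or sse < best[0]:
            best = (sse, A, TD, tau)

sse, A, TD, tau = best   # fitted parameters
```

The grid search is only for illustration; in practice a nonlinear least-squares routine would be used for the bi-exponential overall fit.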
Abstract:
A statewide study was conducted to develop regression equations for estimating flood-frequency discharges for ungaged stream sites in Iowa. Thirty-eight selected basin characteristics were quantified and flood-frequency analyses were computed for 291 streamflow-gaging stations in Iowa and adjacent States. A generalized-skew-coefficient analysis was conducted to determine whether generalized skew coefficients could be improved for Iowa. Station skew coefficients were computed for 239 gaging stations in Iowa and adjacent States, and an isoline map of generalized-skew-coefficient values was developed for Iowa using variogram modeling and kriging methods. The skew map provided the lowest mean square error in the generalized-skew-coefficient analysis and was used to revise generalized skew coefficients for flood-frequency analyses for gaging stations in Iowa. Regional regression analysis, using generalized least-squares regression and data from 241 gaging stations, was used to develop equations for three hydrologic regions defined for the State. The regression equations can be used to estimate flood discharges that have recurrence intervals of 2, 5, 10, 25, 50, 100, 200, and 500 years for ungaged stream sites in Iowa. One-variable equations were developed for each of the three regions, and multi-variable equations were developed for two of the regions. Two sets of equations are presented for two of the regions because the one-variable equations are easier for users to apply, whereas the multi-variable equations have greater predictive accuracy. The standard error of prediction ranges from about 34 to 45 percent for the one-variable equations and from about 31 to 42 percent for the multi-variable equations. A region-of-influence regression method was also investigated for estimating flood-frequency discharges for ungaged stream sites in Iowa.
A comparison of the regional and region-of-influence regression methods, based on ease of application and root mean square errors, determined the regional regression method to be the better estimation method for Iowa. Techniques for estimating flood-frequency discharges for streams in Iowa are presented for determining (1) regional regression estimates for ungaged sites on ungaged streams; (2) weighted estimates for gaged sites; and (3) weighted estimates for ungaged sites on gaged streams. The technique for determining regional regression estimates for ungaged sites on ungaged streams requires determining which of four possible examples applies to the location of the stream site and its basin. Illustrations for determining which example applies to an ungaged stream site, and for applying both the one-variable and multi-variable regression equations, are provided for the estimation techniques.
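A weighted estimate for a gaged site combines the station flood-frequency estimate with the regression estimate. A common variance-weighted form, applied in log space, can be sketched as below; the discharges and variances are illustrative numbers, not values from the report:

```python
import math

def weighted_log_estimate(logq_station, var_station, logq_regression, var_regression):
    # Inverse-variance weighting of the two log-discharge estimates.
    w_s = 1.0 / var_station
    w_r = 1.0 / var_regression
    return (w_s * logq_station + w_r * logq_regression) / (w_s + w_r)

# Illustrative only: station estimate 850 cfs (variance 0.02 log-units^2),
# regression estimate 700 cfs (variance 0.05 log-units^2).
logq = weighted_log_estimate(math.log10(850.0), 0.02,
                             math.log10(700.0), 0.05)
q_weighted = 10.0 ** logq   # falls between the two inputs, nearer the
                            # lower-variance station estimate
```

The weighted value always lies between the two component estimates and is pulled toward the one with the smaller variance.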
Abstract:
The objective of this work was to evaluate a generalized response function of radiation use efficiency (RUE) in rice to atmospheric CO2 concentration [f(CO2)]. Experimental data on RUE at different CO2 concentrations were collected from rice trials performed in several locations around the world. RUE data were then normalized so that all RUE values at the current CO2 concentration were equal to 1. The response function was obtained by fitting normalized RUE versus CO2 concentration to a Morgan-Mercer-Flodin (MMF) function, using Marquardt's method to estimate the model coefficients. Goodness of fit was measured by the standard deviation of the estimated coefficients, the coefficient of determination (R²), and the root mean square error (RMSE). The f(CO2) describes a nonlinear sigmoidal response of RUE in rice as a function of atmospheric CO2 concentration. It has an ecophysiological basis and therefore yields a robust function that can easily be coupled to rice simulation models, while covering the range of CO2 emissions in the next generation of climate scenarios for the 21st century.
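The MMF form fitted to the normalized RUE data can be sketched as follows. The coefficients below are illustrative placeholders chosen so that the response equals 1 at an assumed current concentration of 380 ppm; they are not the paper's fitted values:

```python
# Morgan-Mercer-Flodin (MMF) sigmoid: f(x) = (a*b + c*x^d) / (b + x^d),
# with lower asymptote a and upper asymptote c.
# Placeholder coefficients (a=0.5, c=1.5, d=3, b=380^3) chosen so f(380)=1.
def mmf(co2, a=0.5, b=380.0 ** 3, c=1.5, d=3.0):
    return (a * b + c * co2 ** d) / (b + co2 ** d)

# Normalized RUE rises monotonically with CO2 and saturates at c:
f_current = mmf(380.0)   # = 1 by construction
f_high = mmf(700.0)      # elevated-CO2 scenario, > 1
```

A real application would replace the placeholder coefficients with the values estimated by Marquardt's (Levenberg-Marquardt) nonlinear least squares.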
Abstract:
In this paper we propose a method for computing JPEG quantization matrices for a given mean square error or PSNR. We then employ our method to compute JPEG standard progressive operation mode definition scripts using a quantization approach. It is therefore no longer necessary to use a trial-and-error procedure to obtain a desired PSNR and/or definition script, which reduces cost. Firstly, we establish a relationship between a Laplacian source and its uniform quantization error. We apply this model to the coefficients obtained in the discrete cosine transform stage of the JPEG standard. An image may then be compressed using the JPEG standard under a global MSE (or PSNR) constraint and a set of local constraints determined by the JPEG standard and visual criteria. Secondly, we study the JPEG standard progressive operation mode from a quantization-based approach. A relationship between the measured image quality at a given stage of the coding process and a quantization matrix is found. Thus, the definition-script construction problem can be reduced to a quantization problem. Simulations show that our method generates better quantization matrices than the classical method based on scaling the JPEG default quantization matrix. The estimation of PSNR usually has an error smaller than 1 dB, and this figure decreases for high PSNR values. Definition scripts may be generated avoiding an excessive number of stages and removing small stages that do not contribute a noticeable image-quality improvement during decoding.
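The global MSE/PSNR constraint relies on the standard relationship between the two measures for 8-bit images (peak value 255); a minimal sketch of the conversion in both directions:

```python
import math

# For 8-bit images: PSNR = 10 * log10(255^2 / MSE).
def psnr_from_mse(mse):
    return 10.0 * math.log10(255.0 ** 2 / mse)

# Inverse mapping: the MSE target implied by a desired PSNR.
def mse_from_psnr(psnr):
    return 255.0 ** 2 / 10.0 ** (psnr / 10.0)
```

Targeting a PSNR thus fixes the admissible MSE budget that the quantization-matrix design must meet.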
Abstract:
This paper presents a Bayesian approach to the design of transmit prefiltering matrices in closed-loop schemes robust to channel estimation errors. The algorithms are derived for a multiple-input multiple-output (MIMO) orthogonal frequency division multiplexing (OFDM) system. Two different optimization criteria are analyzed: the minimization of the mean square error and the minimization of the bit error rate. In both cases, the transmitter design is based on the singular value decomposition (SVD) of the conditional mean of the channel response, given the channel estimate. The performance of the proposed algorithms is analyzed, and their relationship with existing algorithms is indicated. As with other previously proposed solutions, the minimum bit error rate algorithm converges to the open-loop transmission scheme for very poor CSI estimates.
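The SVD-based design diagonalizes the channel into parallel subchannels. A generic (non-Bayesian) sketch of this step on a random channel matrix, assuming the matrix is known exactly for illustration:

```python
import numpy as np

# Random 4x4 complex "channel" matrix (stand-in for the conditional-mean
# channel estimate used in the paper).
rng = np.random.default_rng(1)
H = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))

# SVD: H = U diag(s) V^H.
U, s, Vh = np.linalg.svd(H)
F = Vh.conj().T        # transmit prefilter (right singular vectors)
G = U.conj().T         # receive filter (left singular vectors, Hermitian)

# The effective channel G H F is diagonal: independent parallel subchannels
# with gains given by the singular values s.
H_eff = G @ H @ F
```

Power loading and the Bayesian robustness to estimation errors would be layered on top of this eigenmode decomposition.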
Abstract:
A comparative performance analysis of four geolocation methods in terms of their theoretical root mean square positioning errors is provided. Comparison is established in two different ways: strict and average. In the strict type, methods are examined for a particular geometric configuration of base stations (BSs) with respect to mobile position, which determines a given noise profile affecting the respective time-of-arrival (TOA) or time-difference-of-arrival (TDOA) estimates. In the average type, methods are evaluated in terms of the expected covariance matrix of the position error over an ensemble of random geometries, so that comparison is geometry independent. Exact semianalytical equations and associated lower bounds (depending solely on the noise profile) are obtained for the average covariance matrix of the position error in terms of the so-called information matrix specific to each geolocation method. Statistical channel models inferred from field trials are used to define realistic prior probabilities for the random geometries. A final evaluation provides extensive results relating the expected position error to channel model parameters and the number of base stations.
Abstract:
In this letter, we obtain the Maximum Likelihood Estimator of position in the framework of Global Navigation Satellite Systems. This theoretical result is the basis of a completely different approach to the positioning problem, in contrast to the conventional two-step position estimation, which consists of estimating the synchronization parameters of the in-view satellites and then performing a position estimation with that information. To the authors' knowledge, this is a novel approach which copes with signal fading and mitigates multipath and jamming interference. In addition, the concept of Position-based Synchronization is introduced, which states that synchronization parameters can be recovered from a user position estimation. We provide computer simulation results showing the robustness of the proposed approach in fading multipath channels. The root mean square error performance of the proposed algorithm is compared to those achieved with state-of-the-art synchronization techniques. A Sequential Monte Carlo based method is used to deal with the multivariate optimization problem resulting from the ML solution in an iterative way.
Abstract:
BACKGROUND AND PURPOSE: Knowledge of cerebral blood flow (CBF) alterations in cases of acute stroke could be valuable in the early management of these cases. Among imaging techniques affording evaluation of cerebral perfusion, perfusion CT studies involve sequential acquisition of cerebral CT sections obtained in an axial mode during the IV administration of iodinated contrast material. They are thus very easy to perform in emergency settings. Perfusion CT values of CBF have proved to be accurate in animals, and perfusion CT affords plausible values in humans. The purpose of this study was to validate perfusion CT studies of CBF by comparison with the results provided by stable xenon CT, which have been reported to be accurate, and to evaluate acquisition and processing modalities of CT data, notably the possible deconvolution methods and the selection of the reference artery. METHODS: Twelve stable xenon CT and perfusion CT cerebral examinations were performed within an interval of a few minutes in patients with various cerebrovascular diseases. CBF maps were obtained from perfusion CT data by deconvolution using singular value decomposition and least mean square methods. The CBF values were compared with the stable xenon CT results in multiple regions of interest through linear regression analysis and bilateral t tests for matched variables. RESULTS: Linear regression analysis showed good correlation between perfusion CT and stable xenon CT CBF values (singular value decomposition method: R(2) = 0.79, slope = 0.87; least mean square method: R(2) = 0.67, slope = 0.83). Bilateral t tests for matched variables did not identify a significant difference between the two imaging methods (P >.1). Both deconvolution methods were equivalent (P >.1). The choice of the reference artery is a major concern and has a strong influence on the final perfusion CT CBF map.
CONCLUSION: Perfusion CT studies of CBF achieved with adequate acquisition parameters and processing lead to accurate and reliable results.
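The deconvolution step that produces the CBF maps can be sketched with the singular-value-decomposition approach on synthetic, noise-free curves. The arterial input function (AIF) and residue function below are assumed shapes, not patient data:

```python
import numpy as np

# Tissue curve c(t) = (AIF * R)(t) (discrete convolution, CBF folded into R);
# deconvolution by the AIF recovers the scaled residue function R.
def conv_matrix(aif, dt):
    n = aif.size
    A = np.zeros((n, n))
    for i in range(n):
        A[i, : i + 1] = aif[i::-1] * dt   # lower-triangular Toeplitz
    return A

dt = 1.0
t = np.arange(30.0)
aif = np.exp(-t / 3.0)        # assumed arterial input function
r_true = np.exp(-t / 8.0)     # assumed residue function (CBF = 1)
A = conv_matrix(aif, dt)
c = A @ r_true                # simulated tissue curve

# Truncated-SVD deconvolution: discard tiny singular values (regularization;
# with noisy data the threshold would be much larger than here).
U, s, Vt = np.linalg.svd(A)
inv_s = np.where(s > 1e-8 * s.max(), 1.0 / s, 0.0)
r_est = Vt.T @ (inv_s * (U.T @ c))
```

With real, noisy data the truncation threshold is the key tuning parameter; here the noise-free system lets the recovery be essentially exact.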
Abstract:
A new, quantitative inference model for environmental reconstruction (transfer function), based for the first time on the simultaneous analysis of multigroup species, has been developed. Quantitative reconstructions based on palaeoecological transfer functions provide a powerful tool for addressing questions of environmental change in a wide range of environments, from oceans to mountain lakes, and over a range of timescales, from decades to millions of years. Much progress has been made in the development of inferences based on multiple proxies, but usually these have been considered separately, and the different numeric reconstructions compared and reconciled post hoc. This paper presents a new method to combine information from multiple biological groups at the reconstruction stage. The aim of the multigroup work was to test the potential of the new approach to make improved inferences of past environmental change by improving upon current reconstruction methodologies. The taxonomic groups analysed include diatoms, chironomids and chrysophyte cysts. We test the new methodology using two cold-environment training sets, namely mountain lakes from the Pyrenees and the Alps. The use of multiple groups, as opposed to single groupings, was found to increase the reconstruction skill only slightly, as measured by the root mean square error of prediction (leave-one-out cross-validation), in the case of alkalinity, dissolved inorganic carbon and altitude (a surrogate for air temperature), but not for pH or dissolved CO2. Reasons why the improvement was smaller than might have been anticipated are discussed. These include the different life-forms, environmental responses and reaction times of the groups under study.
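The leave-one-out RMSE of prediction (RMSEP) used above as the skill measure can be sketched for a simple linear transfer function; the predictor matrix and targets below are synthetic stand-ins for species-assemblage data:

```python
import numpy as np

# Leave-one-out cross-validated RMSE of prediction: refit the model n times,
# each time holding one sample out, and score the held-out predictions.
def loo_rmsep(X, y):
    n = len(y)
    errors = []
    for i in range(n):
        mask = np.arange(n) != i
        coef, *_ = np.linalg.lstsq(X[mask], y[mask], rcond=None)
        errors.append(y[i] - X[i] @ coef)
    return float(np.sqrt(np.mean(np.square(errors))))

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(40), rng.normal(size=(40, 3))])
y = X @ np.array([1.0, 0.5, -0.2, 0.8])   # exactly linear synthetic target
```

On this noise-free linear example the RMSEP is essentially zero; with real assemblage data it quantifies the transfer function's out-of-sample error.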
Abstract:
Prediction filters are well-known models for signal estimation in communications, control and many other areas. The classical method for deriving linear prediction coding (LPC) filters is often based on the minimization of a mean square error (MSE). Consequently, only second-order statistics are required, but the estimation is optimal only if the residue is independent and identically distributed (iid) Gaussian. In this paper, we derive the ML estimate of the prediction filter. Relationships with robust estimation of auto-regressive (AR) processes, with blind deconvolution, and with source separation based on mutual information minimization are then detailed. The algorithm, based on the minimization of a high-order statistics criterion, uses on-line estimation of the residue statistics. Experimental results highlight the interest of this approach.
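The classical MSE-minimizing LPC solution referred to above reduces to the Yule-Walker normal equations over the signal's autocorrelation. A sketch on a synthetic AR(2) signal, whose coefficients are assumed for illustration:

```python
import numpy as np

# MSE-optimal LPC coefficients via the autocorrelation (Yule-Walker) method:
# solve R a = r, where R is the Toeplitz autocorrelation matrix.
def lpc(x, order):
    r = np.array([np.dot(x[: len(x) - k], x[k:]) for k in range(order + 1)])
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, r[1:])

# Synthetic AR(2) process: x[n] = 1.2 x[n-1] - 0.5 x[n-2] + e[n].
rng = np.random.default_rng(0)
e = rng.normal(size=5000)
x = np.zeros(5000)
for n in range(2, 5000):
    x[n] = 1.2 * x[n - 1] - 0.5 * x[n - 2] + e[n]

a = lpc(x, 2)   # estimated predictor coefficients, close to [1.2, -0.5]
```

Only the second-order statistics (autocorrelations) enter this estimate, which is exactly the limitation the paper's ML/high-order-statistics approach addresses.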
Abstract:
Does Independent Component Analysis (ICA) denature EEG signals? We applied ICA to two groups of subjects (mild Alzheimer patients and control subjects). The aim of this study was to examine whether the ICA method can reduce both group differences and within-subject variability. We found that ICA diminished the leave-one-out root mean square error (RMSE) of validation (from 0.32 to 0.28), indicative of a reduction in group difference. More interestingly, ICA reduced the inter-subject variability within each group (σ = 2.54 in the δ range before ICA, σ = 1.56 after; Bartlett p = 0.046 after Bonferroni correction). Additionally, we present a method to limit the impact of human error (≈ 13.8%, with 75.6% inter-cleaner agreement) during ICA cleaning and reduce human bias. These findings suggest a novel usefulness of ICA in clinical EEG in Alzheimer's disease for the reduction of subject variability.
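ICA itself can be sketched with a minimal symmetric FastICA iteration (tanh nonlinearity) on a synthetic two-source mixture. This is a generic stand-in, not the study's EEG cleaning pipeline:

```python
import numpy as np

# Two synthetic sources (sine and square wave), linearly mixed.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 8.0, 2000)
S = np.vstack([np.sin(2 * np.pi * 1.3 * t),
               np.sign(np.sin(2 * np.pi * 0.7 * t))])
A = np.array([[1.0, 0.6], [0.4, 1.0]])
X = A @ S

# Whitening: decorrelate and normalize the mixtures.
Xc = X - X.mean(axis=1, keepdims=True)
cov = Xc @ Xc.T / Xc.shape[1]
d, E = np.linalg.eigh(cov)
Z = E @ np.diag(1.0 / np.sqrt(d)) @ E.T @ Xc

# Symmetric FastICA with g = tanh, g' = 1 - tanh^2.
W = rng.normal(size=(2, 2))
for _ in range(200):
    G = np.tanh(W @ Z)
    W_new = (G @ Z.T) / Z.shape[1] - np.diag((1 - G ** 2).mean(axis=1)) @ W
    U, _, Vt = np.linalg.svd(W_new)   # symmetric decorrelation
    W = U @ Vt

S_est = W @ Z   # recovered sources, up to sign/permutation/scale
```

Recovered components match the true sources up to the usual ICA ambiguities of sign, ordering and scale.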
Abstract:
The CBS-4M, CBS-QB3, G2, G2(MP2), G3 and G3(MP2) model chemistry methods have been used to calculate proton and electron affinities for a set of molecular and atomic systems. Agreement with the experimental values for these electronic properties is quite good, considering the uncertainty in the experimental data. A comparison among the six theories using statistical analysis (average value, standard deviation and root mean square) showed the best performance for CBS-QB3 in obtaining these properties.
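The three summary statistics used in the comparison can be sketched directly; the deviations (calculated minus experimental) below are illustrative numbers, not the paper's results:

```python
import math

# Illustrative calculated-minus-experimental deviations (e.g., in kcal/mol).
dev = [0.3, -0.5, 0.1, 0.8, -0.2]
n = len(dev)

mean = sum(dev) / n                                      # average deviation
std = math.sqrt(sum((d - mean) ** 2 for d in dev) / (n - 1))  # sample std dev
rms = math.sqrt(sum(d * d for d in dev) / n)             # root mean square
```

Note that the RMS penalizes both systematic offset and scatter, whereas the mean and standard deviation separate the two, which is why all three are reported together.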
Abstract:
In this paper, studies based on Multilayer Perceptron Artificial Neural Network and Least Squares Support Vector Machine (LS-SVM) techniques are applied to determine the concentration of Soil Organic Matter (SOM), and the performances of the two techniques are compared. SOM concentrations and mid-infrared spectral data are used as input parameters for both techniques. Multivariate regressions were performed for a set of 1117 spectra of soil samples, with concentrations ranging from 2 to 400 g kg-1. The LS-SVM resulted in a root mean square error of prediction of 3.26 g kg-1, which is comparable to the deviation of the Walkley-Black method (2.80 g kg-1).
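An LS-SVM regression in its basic dual form can be sketched as follows; the one-dimensional toy data and hyperparameter values are stand-ins for the mid-infrared spectra and SOM concentrations:

```python
import numpy as np

def rbf_kernel(A, B, sigma2=1.0):
    # Gaussian (RBF) kernel between row-sample matrices A and B.
    d2 = np.square(A[:, None, :] - B[None, :, :]).sum(axis=-1)
    return np.exp(-d2 / sigma2)

def lssvm_fit(X, y, gamma=100.0, sigma2=1.0):
    # Solve the LS-SVM KKT linear system:
    # [0   1^T          ] [b    ]   [0]
    # [1   K + I/gamma  ] [alpha] = [y]
    n = len(y)
    K = rbf_kernel(X, X, sigma2)
    M = np.block([[np.zeros((1, 1)), np.ones((1, n))],
                  [np.ones((n, 1)), K + np.eye(n) / gamma]])
    sol = np.linalg.solve(M, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]          # bias b, dual coefficients alpha

def lssvm_predict(X_train, X_new, b, alpha, sigma2=1.0):
    return rbf_kernel(X_new, X_train, sigma2) @ alpha + b

# Toy 1-D regression standing in for the spectral calibration.
X = np.linspace(0.0, 6.0, 40).reshape(-1, 1)
y = np.sin(X).ravel()
b, alpha = lssvm_fit(X, y)
rmse = float(np.sqrt(np.mean((lssvm_predict(X, X, b, alpha) - y) ** 2)))
```

Unlike a standard SVM, every training point gets a (dense) dual coefficient, and training reduces to one linear solve; `gamma` trades fit against smoothness.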
Abstract:
The objective of this work is to demonstrate the efficient use of Principal Component Analysis (PCA) as a method to pre-process the original multivariate data, i.e., to rewrite them as a new matrix whose principal components are sorted by their accumulated variance. An Artificial Neural Network (ANN) with the backpropagation algorithm is trained using, as input, this pre-processed data set derived from the PCA method, representing 90.02% of the accumulated variance of the original data. The training goal is to model dissolved oxygen using information from other physical and chemical parameters. The water samples used in the experiments were gathered from the Paraíba do Sul River in São Paulo State, Brazil. The smallest Mean Square Error (MSE) is used to compare the results of the different architectures and choose the best one. This method allowed a reduction of more than 20% in the input data, which directly shortened the time and computational effort of the ANN training.
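The PCA pre-processing step (retaining the leading components up to a target share of accumulated variance) can be sketched as follows, on synthetic data rather than the river measurements:

```python
import numpy as np

# Synthetic correlated data standing in for the water-quality measurements.
rng = np.random.default_rng(42)
X = rng.normal(size=(200, 10)) @ rng.normal(size=(10, 10))

# PCA via SVD of the mean-centered data.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
var_ratio = s ** 2 / np.sum(s ** 2)       # variance explained per component

# Keep the smallest k whose accumulated variance reaches the 90% target
# (the paper reports retaining 90.02%).
k = int(np.searchsorted(np.cumsum(var_ratio), 0.90)) + 1
scores = Xc @ Vt[:k].T                    # reduced input for the ANN
```

The `scores` matrix, with k ≤ 10 columns instead of the original 10 variables, is what would then be fed to the backpropagation network.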