956 results for estimation error


Relevance: 30.00%

Abstract:

The clinical demand for a device to monitor Blood Pressure (BP) in ambulatory scenarios with minimal use of inflation cuffs is increasing. Based on the so-called Pulse Wave Velocity (PWV) principle, this paper introduces and evaluates a novel concept of BP monitor that can be fully integrated within a chest sensor. After a preliminary calibration, the sensor provides non-occlusive beat-by-beat estimations of Mean Arterial Pressure (MAP) by measuring the Pulse Transit Time (PTT) of arterial pressure pulses travelling from the ascending aorta towards the subcutaneous vasculature of the chest. In a cohort of 15 healthy male subjects, a total of 462 simultaneous readings consisting of reference MAP and chest PTT were acquired. Each subject was recorded on three different days: D, D+3 and D+14. Overall, the implemented protocol induced MAP values ranging from 80 ± 6 mmHg at baseline to 107 ± 9 mmHg during isometric handgrip maneuvers. Agreement between reference and chest-sensor MAP values was tested using the intraclass correlation coefficient (ICC = 0.78) and Bland-Altman analysis (mean error = 0.7 mmHg, standard deviation = 5.1 mmHg). The cumulative percentage of MAP values provided by the chest sensor falling within ±5 mmHg of the reference MAP readings was 70%, within ±10 mmHg 91%, and within ±15 mmHg 98%. These results indicate that the chest sensor complies with the British Hypertension Society (BHS) requirements for Grade A BP monitors when applied to MAP readings. Grade A performance was maintained even two weeks after the initial subject-dependent calibration. In conclusion, this paper introduces a sensor and a calibration strategy to perform MAP measurements at the chest. The encouraging performance of the presented technique paves the way towards an ambulatory-compliant, continuous and non-occlusive BP monitoring system.
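
A minimal sketch (in Python, with synthetic stand-in readings, not the study's data) of the agreement metrics quoted above: the Bland-Altman bias and SD of differences, and the BHS-style cumulative percentages of readings within ±5, ±10 and ±15 mmHg of the reference.

```python
import numpy as np

def agreement_metrics(map_ref, map_sensor):
    """Bland-Altman bias, SD of differences, and cumulative percentages
    of paired MAP readings (mmHg) within fixed thresholds."""
    diff = np.asarray(map_sensor) - np.asarray(map_ref)
    bias, sd = diff.mean(), diff.std(ddof=1)
    within = {t: 100.0 * np.mean(np.abs(diff) <= t) for t in (5, 10, 15)}
    return bias, sd, within

# Synthetic paired readings, illustrative values only.
rng = np.random.default_rng(0)
ref = rng.normal(95, 10, 462)
sensor = ref + rng.normal(0.7, 5.1, 462)
print(agreement_metrics(ref, sensor))
```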

Relevance: 30.00%

Abstract:

A number of experimental methods have been reported for estimating the number of genes in a genome, or the closely related coding density of a genome, defined as the fraction of base pairs in codons. Recently, DNA sequence data representative of the genome as a whole have become available for several organisms, making the problem of estimating coding density amenable to sequence analytic methods. Estimates of coding density for a single genome vary widely, so that methods with characterized error bounds have become increasingly desirable. We present a method to estimate the protein coding density in a corpus of DNA sequence data, in which a ‘coding statistic’ is calculated for a large number of windows of the sequence under study, and the distribution of the statistic is decomposed into two normal distributions, assumed to be the distributions of the coding statistic in the coding and noncoding fractions of the sequence windows. The accuracy of the method is evaluated using known data and application is made to the yeast chromosome III sequence and to C. elegans cosmid sequences. It can also be applied to fragmentary data, for example a collection of short sequences determined in the course of STS mapping.
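
The decomposition step lends itself to a short illustration. Below is a hedged sketch, assuming a generic per-window coding statistic and synthetic values (not the paper's statistic or data): a two-component Gaussian mixture is fitted and the mixing weight of the higher-mean component serves as the estimated coding fraction.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Stand-in per-window coding-statistic values: a mix of "noncoding"
# and "coding" windows with different means.
stat = np.concatenate([rng.normal(0.0, 1.0, 700),   # noncoding windows
                       rng.normal(2.5, 0.8, 300)])  # coding windows

gm = GaussianMixture(n_components=2, random_state=0).fit(stat.reshape(-1, 1))
coding_component = np.argmax(gm.means_.ravel())     # higher-mean component
print("estimated coding fraction:", gm.weights_[coding_component])
```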

Relevance: 30.00%

Abstract:

Purpose: The objective of this study is to investigate the feasibility of detecting and quantifying 3D cerebrovascular wall motion from a single 3D rotational x-ray angiography (3DRA) acquisition within a clinically acceptable time, and of computing, from the estimated motion field, strain measures for further biomechanical modeling of the cerebrovascular wall. Methods: The whole motion cycle of the cerebral vasculature is modeled using a 4D B-spline transformation, which is estimated from a 4D to 2D + t image registration framework. The registration is performed by optimizing a single similarity metric between the entire 2D + t measured projection sequence and the corresponding forward projections of the deformed volume at their exact time instants. The joint use of two acceleration strategies, together with their implementation on graphics processing units, is also proposed so as to reach computation times close to clinical requirements. For further characterizing vessel wall properties, an approximation of the wall thickness changes is obtained through a strain calculation. Results: Evaluation on in silico and in vitro pulsating phantom aneurysms demonstrated an accurate estimation of wall motion curves. In general, the error was below 10% of the maximum pulsation, even when a substantially inhomogeneous intensity pattern was present. Experiments on in vivo data provided realistic aneurysm and vessel wall motion estimates, whereas in regions where motion was neither visible nor anatomically possible, no motion was detected. The use of the acceleration strategies enabled completing the estimation process for one entire cycle in 5-10 min without degrading the overall performance. The strain map extracted from our motion estimation provided a realistic deformation measure of the vessel wall. Conclusions: The authors' technique has demonstrated that it can provide accurate and robust 4D estimates of cerebrovascular wall motion within a clinically acceptable time, although it has to be applied to a larger patient population prior to possible wide application to routine endovascular procedures. In particular, for the first time, this feasibility study has shown that in vivo cerebrovascular motion can be obtained intraprocedurally from a 3DRA acquisition. Results have also shown the potential of performing strain analysis using this imaging modality, thus making possible the future modeling of biomechanical properties of the vascular wall.
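
As an illustration of the accuracy measure quoted above (error expressed as a percentage of the maximum pulsation), here is a minimal sketch on synthetic wall-motion curves; the curves and the error model are invented for the example.

```python
import numpy as np

def pulsation_error_pct(true_curve, est_curve):
    """Max absolute error between motion curves, as % of peak pulsation."""
    true_curve, est_curve = np.asarray(true_curve), np.asarray(est_curve)
    max_pulsation = true_curve.max() - true_curve.min()
    return 100.0 * np.max(np.abs(est_curve - true_curve)) / max_pulsation

t = np.linspace(0, 1, 100)                  # one cardiac cycle
true = 0.2 * np.sin(2 * np.pi * t) ** 2     # mm, synthetic pulsation
est = true + 0.01 * np.sin(6 * np.pi * t)   # estimate with a small error
print(f"{pulsation_error_pct(true, est):.1f}% of max pulsation")
```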

Relevance: 30.00%

Abstract:

In this article we propose using small area estimators to improve the estimates of both the small and large area parameters. When the objective is to estimate parameters at both levels accurately, optimality is achieved by a mixed sample design of fixed and proportional allocations. In the mixed sample design, once a sample size has been determined, one fraction of it is distributed proportionally among the different small areas while the rest is evenly distributed among them. We use Monte Carlo simulations to assess the performance of the direct estimator and two composite covariant-free small area estimators, for different sample sizes and different sample distributions. Performance is measured in terms of the Mean Squared Error (MSE) of both small and large area parameters. It is found that the adoption of small area composite estimators opens the possibility of 1) reducing sample size when precision is given, or 2) improving precision for a given sample size.
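
A short sketch of the mixed sample design just described; the floor-and-remainder rounding rule is an illustrative choice, not taken from the article. A fraction f of the total sample is allocated proportionally to area sizes and the remainder is split evenly.

```python
import numpy as np

def mixed_allocation(n, pop_sizes, f):
    """Allocate n units: a fraction f proportionally to pop_sizes,
    the remaining (1 - f) evenly across areas."""
    pop = np.asarray(pop_sizes, dtype=float)
    prop = f * n * pop / pop.sum()          # proportional share
    even = (1 - f) * n / len(pop)           # equal share per area
    alloc = np.floor(prop + even).astype(int)
    alloc[: n - alloc.sum()] += 1           # distribute rounding remainder
    return alloc

print(mixed_allocation(1000, [50_000, 20_000, 5_000, 1_000], f=0.5))
```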

Relevance: 30.00%

Abstract:

A national survey designed for estimating a specific population quantity is sometimes used for estimation of this quantity also for a small area, such as a province. Budget constraints do not allow a greater sample size for the small area, and so other means of improving estimation have to be devised. We investigate such methods and assess them by a Monte Carlo study. We explore how a complementary survey can be exploited in small area estimation. We use the context of the Spanish Labour Force Survey (EPA) and the Barometer in Spain for our study.
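
One standard way to exploit a complementary survey, sketched below under the assumption of inverse-variance weighting (a common textbook choice, not necessarily the scheme studied here), is a composite estimator combining the direct and auxiliary estimates.

```python
def composite_estimate(direct, aux, var_direct, var_aux):
    """Combine two unbiased estimates with inverse-variance weights."""
    w = var_aux / (var_direct + var_aux)   # weight on the direct estimate
    return w * direct + (1 - w) * aux

# Direct provincial estimate (high variance, small sample) combined with a
# complementary-survey estimate; numbers are illustrative.
print(composite_estimate(direct=0.14, aux=0.11, var_direct=4e-4, var_aux=1e-4))
```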

Relevance: 30.00%

Abstract:

This paper demonstrates that, contrary to conventional wisdom, measurement error biases in panel data estimation of convergence using OLS with fixed effects are huge, not trivial. It does so by way of the "skipping estimation": taking data from every m years of the sample (where m is an integer greater than or equal to 2), as opposed to every single year. It is shown that the estimated speed of convergence from OLS with fixed effects is biased upwards by as much as 7 to 15%.
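
A hedged sketch of the skipping idea on a synthetic balanced panel (the data-generating process is invented for illustration): keep every m-th period, then run the within (fixed effects) OLS estimator.

```python
import numpy as np

def skip_and_within_ols(y, x, m):
    """Subsample every m-th period, then fixed-effects (within) OLS slope.
    y and x have shape (n_units, n_periods)."""
    y_s, x_s = y[:, ::m], x[:, ::m]              # skip to every m-th year
    y_d = y_s - y_s.mean(axis=1, keepdims=True)  # demean within units
    x_d = x_s - x_s.mean(axis=1, keepdims=True)
    return (x_d * y_d).sum() / (x_d ** 2).sum()

rng = np.random.default_rng(2)
x = rng.normal(size=(50, 30))
y = 0.9 * x + rng.normal(size=(50, 30))          # true slope 0.9
print(skip_and_within_ols(y, x, m=5))
```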

Relevance: 30.00%

Abstract:

We evaluated the accuracy of skinfold thicknesses, BMI and waist circumference for the prediction of percentage body fat (PBF) in a representative sample of 372 Swiss children aged 6-13 years. PBF was measured using dual-energy X-ray absorptiometry. On the basis of a preliminary bootstrap selection of predictors, seven regression models were evaluated. All models included sex, age and pubertal stage plus one of the following predictors: (1) log-transformed triceps skinfold (logTSF); (2) logTSF and waist circumference; (3) log-transformed sum of triceps and subscapular skinfolds (logSF2); (4) log-transformed sum of triceps, biceps, subscapular and supra-iliac skinfolds (logSF4); (5) BMI; (6) waist circumference; (7) BMI and waist circumference. The adjusted determination coefficient (R² adj) and the root mean squared error (RMSE; kg) were calculated for each model. LogSF4 (R² adj 0.85; RMSE 2.35) and logSF2 (R² adj 0.82; RMSE 2.54) were similarly accurate at predicting PBF and superior to logTSF (R² adj 0.75; RMSE 3.02), logTSF combined with waist circumference (R² adj 0.78; RMSE 2.85), BMI (R² adj 0.62; RMSE 3.73), waist circumference (R² adj 0.58; RMSE 3.89), and BMI combined with waist circumference (R² adj 0.63; RMSE 3.66) (P < 0.001 for all values of R² adj). The finding that logSF4 was only modestly superior to logSF2 and that logTSF was better than BMI and waist circumference at predicting PBF has important implications for paediatric epidemiological studies aimed at disentangling the effect of body fat on health outcomes.
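
A minimal sketch of the evaluation loop behind these numbers: regress percentage body fat on sex, age and a log-transformed skinfold term, then compute the adjusted R² and RMSE. The data below are synthetic stand-ins, and pubertal stage is omitted for brevity.

```python
import numpy as np

def adj_r2_rmse(y, X):
    """Adjusted R^2 and RMSE of an OLS fit (X includes the intercept)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    n, p = X.shape
    ss_res, ss_tot = (resid ** 2).sum(), ((y - y.mean()) ** 2).sum()
    r2_adj = 1 - (ss_res / (n - p)) / (ss_tot / (n - 1))
    return r2_adj, np.sqrt(ss_res / n)

rng = np.random.default_rng(3)
n = 372
sex = rng.integers(0, 2, n)
age = rng.uniform(6, 13, n)
log_tsf = rng.normal(2.2, 0.4, n)                 # log triceps skinfold
pbf = 5 + 2 * sex - 0.3 * age + 8 * log_tsf + rng.normal(0, 2.5, n)
X = np.column_stack([np.ones(n), sex, age, log_tsf])
print(adj_r2_rmse(pbf, X))
```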

Relevance: 30.00%

Abstract:

Presented is an accurate swimming velocity estimation method using an inertial measurement unit (IMU) by employing a simple biomechanical constraint of motion along with Gaussian process regression to deal with sensor inherent errors. Experimental validation shows a velocity RMS error of 9.0 cm/s and high linear correlation when compared with a commercial tethered reference system. The results confirm the practicality of the presented method to estimate swimming velocity using a single low-cost, body-worn IMU.
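
A hedged sketch of the error-correction step, assuming the Gaussian process models the residual between the raw IMU-derived velocity and a tethered reference (the paper's exact features and pipeline are not reproduced here).

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(4)
v_imu = rng.uniform(0.8, 1.8, (80, 1))   # raw IMU velocity (m/s), synthetic
v_ref = v_imu[:, 0] + 0.1 * np.sin(4 * v_imu[:, 0]) + rng.normal(0, 0.02, 80)

gp = GaussianProcessRegressor(RBF() + WhiteKernel(), normalize_y=True)
gp.fit(v_imu, v_ref - v_imu[:, 0])       # learn the sensor-inherent error

v_new = np.array([[1.2], [1.5]])
v_corrected = v_new[:, 0] + gp.predict(v_new)
print(v_corrected)
```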

Relevance: 30.00%

Abstract:

A method is proposed for the estimation of absolute binding free energy of interaction between proteins and ligands. Conformational sampling of the protein-ligand complex is performed by molecular dynamics (MD) in vacuo and the solvent effect is calculated a posteriori by solving the Poisson or the Poisson-Boltzmann equation for selected frames of the trajectory. The binding free energy is written as a linear combination of the buried surface upon complexation, SASbur, the electrostatic interaction energy between the ligand and the protein, Eelec, and the difference of the solvation free energies of the complex and the isolated ligand and protein, deltaGsolv. The method uses the buried surface upon complexation to account for the non-polar contribution to the binding free energy because it is less sensitive to the details of the structure than the van der Waals interaction energy. The parameters of the method are developed for a training set of 16 HIV-1 protease-inhibitor complexes of known 3D structure. A correlation coefficient of 0.91 was obtained with an unsigned mean error of 0.8 kcal/mol. When applied to a set of 25 HIV-1 protease-inhibitor complexes of unknown 3D structures, the method provides a satisfactory correlation between the calculated binding free energy and the experimental pIC50 without reparametrization.
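
A minimal sketch of fitting the linear scoring function dG_bind = a·SASbur + b·Eelec + c·deltaGsolv + d by least squares on a training set; the 16 data points below are synthetic stand-ins, not the HIV-1 protease set.

```python
import numpy as np

rng = np.random.default_rng(5)
sas_bur = rng.uniform(600, 1200, 16)   # buried surface (A^2), synthetic
e_elec = rng.uniform(-80, -20, 16)     # electrostatic energy (kcal/mol)
dg_solv = rng.uniform(5, 40, 16)       # solvation free energy difference
dg_exp = (-0.01 * sas_bur + 0.05 * e_elec + 0.1 * dg_solv
          + rng.normal(0, 0.5, 16))    # "experimental" values, synthetic

X = np.column_stack([sas_bur, e_elec, dg_solv, np.ones(16)])
coef, *_ = np.linalg.lstsq(X, dg_exp, rcond=None)
pred = X @ coef
print("r =", np.corrcoef(pred, dg_exp)[0, 1],
      " mean |err| =", np.mean(np.abs(pred - dg_exp)))
```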

Relevance: 30.00%

Abstract:

The physical disector is a method of choice for estimating unbiased neuron numbers; nevertheless, calibration is needed to evaluate each counting method. The validity of this method can be assessed by comparing the estimated cell number with the true number determined by a direct counting method in serial sections. We reconstructed 1/5 of rat lumbar dorsal root ganglia taken from two experimental conditions. From each ganglion, images of 200 adjacent semi-thin sections were used to reconstruct a volumetric dataset (stack of voxels). On these stacks the number of sensory neurons was estimated and counted, respectively, by the physical disector and direct counting methods. Also, using the coordinates of nuclei from the direct counting, we simulated, with a Matlab program, disector pairs separated by increasing distances in a ganglion model. The comparison between the results of these approaches clearly demonstrates that the physical disector method provides a valid and reliable estimate of the number of sensory neurons only when the distance between consecutive disector pairs is 60 µm or smaller. In these conditions the error between the results of the physical disector and direct counting does not exceed 6%. In contrast, when the distance between two pairs is larger than 60 µm (70-200 µm) the error increases rapidly to 27%. We conclude that the physical disector method provides a reliable estimate of the number of rat sensory neurons only when the separating distance between consecutive disector pairs is no larger than 60 µm.
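
A hedged sketch of the spacing experiment: given the z-coordinates of all nuclei (the direct count), place disector pairs at a chosen spacing and scale the sampled count to the whole depth. Geometry and section thickness are deliberately simplified; with the synthetic uniform data used here, the error grows with spacing purely through sampling variance.

```python
import numpy as np

def disector_estimate(z_nuclei, depth, spacing, thickness=1.0):
    """Count nuclei whose leading point falls in sampled slabs placed every
    `spacing` microns, then scale the count to the full depth."""
    starts = np.arange(0.0, depth - thickness, spacing)
    hits = sum(np.sum((z_nuclei >= s) & (z_nuclei < s + thickness))
               for s in starts)
    sampled = len(starts) * thickness
    return hits * depth / sampled        # scale to the whole volume

rng = np.random.default_rng(6)
z = rng.uniform(0, 200, 5000)            # true nuclei positions (microns)
for spacing in (20, 60, 120):
    est = disector_estimate(z, depth=200, spacing=spacing)
    print(spacing, f"error = {100 * abs(est - 5000) / 5000:.1f}%")
```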

Relevance: 30.00%

Abstract:

In the first part of the study, nine estimators of the first-order autoregressive parameter are reviewed and a new estimator is proposed. The relationships and discrepancies between the estimators are discussed in order to achieve a clear differentiation. In the second part of the study, the precision in the estimation of autocorrelation is studied. The performance of the ten lag-one autocorrelation estimators is compared in terms of Mean Square Error (combining bias and variance) using data series generated by Monte Carlo simulation. The results show that there is not a single optimal estimator for all conditions, suggesting that the estimator ought to be chosen according to sample size and to the information available of the possible direction of the serial dependence. Additionally, the probability of labelling an actually existing autocorrelation as statistically significant is explored using Monte Carlo sampling. The power estimates obtained are quite similar among the tests associated with the different estimators. These estimates evidence the small probability of detecting autocorrelation in series with less than 20 measurement times.
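
A minimal sketch of one cell of such a Monte Carlo study, using only the conventional lag-one estimator: simulate AR(1) series of a given length and combine the estimator's bias and variance into a Mean Square Error.

```python
import numpy as np

def r1(x):
    """Conventional lag-one autocorrelation estimator."""
    d = x - x.mean()
    return (d[:-1] * d[1:]).sum() / (d ** 2).sum()

def mse_r1(phi, n, reps=10_000, seed=7):
    """Monte Carlo MSE (bias^2 + variance) of r1 on AR(1) series."""
    rng = np.random.default_rng(seed)
    ests = np.empty(reps)
    for i in range(reps):
        e = rng.normal(size=n)
        x = np.empty(n)
        x[0] = e[0]
        for t in range(1, n):
            x[t] = phi * x[t - 1] + e[t]
        ests[i] = r1(x)
    return (ests.mean() - phi) ** 2 + ests.var()

print(mse_r1(phi=0.3, n=20))   # short series: large MSE, hence low power
```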

Relevance: 30.00%

Abstract:

Image registration has been proposed as an automatic method for recovering cardiac displacement fields from Tagged Magnetic Resonance Imaging (tMRI) sequences. Initially performed as a set of pairwise registrations, these techniques have evolved to the use of 3D+t deformation models, requiring metrics of joint image alignment (JA). However, only linear combinations of cost functions defined with respect to the first frame have been used. In this paper, we have applied k-Nearest Neighbors Graph (kNNG) estimators of the α-entropy (Hα) to measure the joint similarity between frames, and to combine the information provided by different cardiac views in a unified metric. Experiments performed on six subjects showed a significantly higher accuracy (p < 0.05) with respect to a standard pairwise alignment (PA) approach in terms of mean positional error and variance with respect to manually placed landmarks. The developed method was used to study strains in patients with myocardial infarction, showing a consistency between strain, infarction location, and coronary occlusion. This paper also presents an interesting clinical application of graph-based metric estimators, showing their value for solving practical problems found in medical imaging.
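
A hedged sketch of a kNN-graph entropy estimate of the kind referred to above (Hero-Michel style; normalizing constants are omitted, so values are meaningful only for comparison): the power-weighted length of the k-nearest-neighbour graph yields a Rényi α-entropy estimate up to an additive constant, which is enough for use as an alignment metric.

```python
import numpy as np
from scipy.spatial import cKDTree

def knn_alpha_entropy(x, k=4, alpha=0.5):
    """kNN-graph estimate of the Renyi alpha-entropy, up to an additive
    constant; x has shape (n_samples, dim)."""
    n, d = x.shape
    dist, _ = cKDTree(x).query(x, k=k + 1)   # column 0 is the point itself
    gamma = d * (1 - alpha)
    length = np.sum(dist[:, 1:] ** gamma)    # power-weighted graph length
    return np.log(length / n ** alpha) / (1 - alpha)

rng = np.random.default_rng(8)
tight = rng.normal(0, 0.5, (500, 2))         # well-aligned joint samples
loose = rng.normal(0, 2.0, (500, 2))         # poorly aligned: higher entropy
print(knn_alpha_entropy(tight), knn_alpha_entropy(loose))
```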

Relevance: 30.00%

Abstract:

PURPOSE: To use measurement by cycling power meters (Pmes) to evaluate the accuracy of commonly used models for estimating uphill cycling power (Pest). Experiments were designed to explore the influence of wind speed and steepness of climb on the accuracy of Pest. The authors hypothesized that the random error in Pest would be largely influenced by windy conditions, that the bias would be diminished in steeper climbs, and that windy conditions would induce a larger bias in Pest. METHODS: Sixteen well-trained cyclists performed 15 uphill-cycling trials (range: length 1.3-6.3 km, slope 4.4-10.7%) in a random order. Trials included different riding positions in a group (lead or follow) and different wind speeds. Pmes was quantified using a power meter, and Pest was calculated with a methodology used by journalists reporting on the Tour de France. RESULTS: Overall, the difference between Pmes and Pest was -0.95% (95%CI: -10.4%, +8.5%) for all trials and 0.24% (-6.1%, +6.6%) in conditions without wind (<2 m/s). The relationship between percent slope and the error between Pest and Pmes was considered trivial. CONCLUSIONS: Aerodynamic drag (affected by wind velocity and orientation, frontal area, drafting, and speed) is the most confounding factor. The mean estimated values are close to the power-output values measured by power meters, but the random error is between ±6% and ±10%. Moreover, at the power outputs (>400 W) produced by professional riders, this error is likely to be higher. This observation calls into question the validity of releasing individual values without reporting the range of random errors.
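
A minimal sketch of the class of physical model typically used for such estimates: power as (rolling resistance + gravity + aerodynamic drag) times ground speed. The coefficient values are illustrative assumptions, and the wind term is the one the study identifies as the dominant confounder.

```python
import numpy as np

def estimate_power(v, slope, mass, crr=0.004, cda=0.35, rho=1.2,
                   v_wind=0.0, g=9.81):
    """Power (W) = (rolling + gravity + aerodynamic drag) * ground speed.
    v is ground speed (m/s), slope is the grade (rise/run), mass in kg."""
    theta = np.arctan(slope)
    f_roll = crr * mass * g * np.cos(theta)
    f_grav = mass * g * np.sin(theta)
    f_aero = 0.5 * rho * cda * (v + v_wind) ** 2   # headwind adds to speed
    return (f_roll + f_grav + f_aero) * v

# 6 m/s up an 8% grade, 75 kg rider+bike, still air vs a 2 m/s headwind:
print(estimate_power(6, 0.08, 75), estimate_power(6, 0.08, 75, v_wind=2))
```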

Relevance: 30.00%

Abstract:

The objective of this work was to evaluate an estimation system for rice yield in Brazil, based on simple agrometeorological models and on the technological level of production systems. This estimation system incorporates the conceptual basis proposed by Doorenbos & Kassam for potential and attainable yields, with empirical adjustments for maximum yield and crop sensitivity to water deficit, considering five categories of rice yield. Rice yield was estimated from 2000/2001 to 2007/2008, and compared to IBGE yield data. Regression analyses between model estimates and data from IBGE surveys resulted in significant coefficients of determination, with less dispersion in the South than in the North and Northeast regions of the country. The index of model efficiency (E1') ranged from 0.01 in the lower yield classes to 0.45 in the higher ones, and the mean absolute error ranged from 58 to 250 kg ha⁻¹, respectively.
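
The Doorenbos & Kassam adjustment mentioned above has a compact form, 1 − Ya/Yp = ky(1 − ETa/ETp), sketched here with illustrative numbers (the values are not from the study).

```python
def attainable_yield(yp, ky, eta, etp):
    """Attainable yield Ya (same units as yp) under water deficit, where
    yp is potential yield, ky the crop sensitivity to water deficit, and
    eta/etp the ratio of actual to potential evapotranspiration."""
    return yp * (1 - ky * (1 - eta / etp))

# Potential rice yield 8000 kg/ha, ky = 1.1, moderate water deficit:
print(attainable_yield(yp=8000, ky=1.1, eta=400, etp=500))
```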

Relevance: 30.00%

Abstract:

In this paper we propose a method for computing JPEG quantization matrices for a given mean square error (MSE) or PSNR. We then employ our method to compute JPEG standard progressive operation mode definition scripts using a quantization approach. Therefore, it is no longer necessary to use a trial-and-error procedure to obtain a desired PSNR and/or definition script, reducing cost. Firstly, we establish a relationship between a Laplacian source and its uniform quantization error. We apply this model to the coefficients obtained in the discrete cosine transform stage of the JPEG standard. An image may then be compressed using the JPEG standard under a global MSE (or PSNR) constraint and a set of local constraints determined by the JPEG standard and visual criteria. Secondly, we study the JPEG standard progressive operation mode from a quantization-based approach. A relationship is found between the measured image quality at a given stage of the coding process and a quantization matrix. Thus, the definition script construction problem can be reduced to a quantization problem. Simulations show that our method generates better quantization matrices than the classical method based on scaling the JPEG default quantization matrix. The estimation of PSNR usually has an error smaller than 1 dB, and this figure decreases for high PSNR values. Definition scripts may be generated avoiding an excessive number of stages and removing small stages that do not contribute a noticeable image-quality improvement during the decoding process.
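
A hedged sketch of the core relationship, replacing the paper's closed-form model with Monte Carlo: for a DCT coefficient modelled as a Laplacian source, the uniform-quantization MSE grows with the step q, so a step meeting a per-coefficient MSE budget can be found by bisection. The Laplacian scale and the 40 dB target below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(9)
x = rng.laplace(0.0, 20.0, 200_000)   # Laplacian model of one DCT coefficient

def quant_mse(q):
    """Monte Carlo MSE of mid-tread uniform quantization with step q."""
    return np.mean((x - q * np.round(x / q)) ** 2)

def step_for_mse(target_mse, lo=1e-3, hi=1e3):
    """Bisection on the step: quantization MSE grows with q."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if quant_mse(mid) < target_mse else (lo, mid)
    return 0.5 * (lo + hi)

# A PSNR of 40 dB over 8-bit samples corresponds to MSE = 255^2 / 10^(40/10):
target = 255 ** 2 / 10 ** 4.0
print(step_for_mse(target))
```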