890 results for "monotone estimating"
Abstract:
This manuscript details a technique for estimating gesture accuracy within the context of motion-based health video games using the Microsoft Kinect. We created a physical therapy game that requires players to imitate clinically significant reference gestures. Player performance is represented by the degree of similarity between the performed and reference gestures and is quantified by collecting the Euler angles of the player's gestures, converting them to a three-dimensional vector, and comparing the magnitude of the difference between the vectors. Lower difference values represent greater gestural correspondence and therefore greater player performance. A group of thirty-one subjects was tested. Subjects achieved gestural correspondence sufficient to complete the game's objectives while also improving their ability to perform reference gestures accurately.
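The comparison described above can be sketched in a few lines: treat each gesture's three Euler angles as a 3-D vector and take the magnitude of the difference as the (lower-is-better) performance score. This is a minimal illustration of the idea, not the game's actual implementation; the function name and tuple layout are assumptions.

```python
import math

def gesture_difference(player_angles, reference_angles):
    """Magnitude of the difference between two gestures, each given as a
    (roll, pitch, yaw) tuple of Euler angles treated as a 3-D vector.
    Lower values mean closer correspondence to the reference gesture."""
    return math.sqrt(sum((p - r) ** 2
                         for p, r in zip(player_angles, reference_angles)))
```

In practice the score would be computed per joint and per frame and then aggregated, but the per-sample comparison reduces to this Euclidean distance.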
Abstract:
We explore the macroeconomic effects of a compression in the long-term bond yield spread within the context of the Great Recession of 2007–09 via a time-varying parameter structural VAR model. We identify a “pure” spread shock defined as a shock that leaves the policy rate unchanged, which allows us to characterize the macroeconomic consequences of a decline in the yield spread induced by central banks’ asset purchases within an environment in which the policy rate is constrained by the effective zero lower bound. Two key findings stand out. First, compressions in the long-term yield spread exert a powerful effect on both output growth and inflation. Second, conditional on available estimates of the impact of the Federal Reserve’s and the Bank of England’s asset purchase programs on long-term yield spreads, our counterfactual simulations suggest that U.S. and U.K. unconventional monetary policy actions have averted significant risks both of deflation and of output collapses comparable to those that took place during the Great Depression.
Abstract:
A study was performed to determine the feasibility of using a measurement of ribeye depth (RED) from a longitudinal ultrasound image to estimate ribeye area (REA). The correlation between RED obtained with ultrasound and REA from a tracing was high for both implanted (r = .49) and non-implanted (r = .45) steers. The mean bias between predicted REA and actual REA was not different from zero. This analysis shows that RED could be an accurate indicator of REA.
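Predicting REA from RED amounts to a simple least-squares calibration. The sketch below shows the mechanics with made-up illustrative numbers; the actual coefficients would come from the study's tracing data.

```python
import statistics

def fit_simple_regression(x, y):
    """Ordinary least squares fit y ≈ a + b*x; returns (intercept, slope)."""
    mx, my = statistics.mean(x), statistics.mean(y)
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

# Hypothetical example: predicting ribeye area (REA, sq. in.) from
# ultrasound ribeye depth (RED, in.); values are illustrative only.
red = [5.1, 5.8, 6.2, 6.9, 7.4]
rea = [10.9, 11.8, 12.5, 13.6, 14.2]
intercept, slope = fit_simple_regression(red, rea)
predicted_rea = intercept + slope * 6.5  # prediction for a new RED reading
```

The reported mean bias being indistinguishable from zero corresponds to the regression residuals averaging to zero on validation data.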
Abstract:
Source verification and pooling of feeder cattle into larger lots resulted in higher selling prices compared with more typical sales at a southern Iowa auction market. After accounting for the higher prices due to larger lot sizes, cattle that received a specified management program and were source verified as to origin received additional price premiums. The data do not distinguish between the value of the specific management program and the value of the source verification process. However, cow–calf producers participating in the program took home more money.
Abstract:
Fossil pollen data from stratigraphic cores are irregularly spaced in time due to non-linear age–depth relations. Moreover, their marginal distributions may vary over time. We address these features in a nonparametric regression model with errors that are monotone transformations of a latent continuous-time Gaussian process Z(T). Although Z(T) is unobserved, due to monotonicity it can, under suitable regularity conditions, be recovered, facilitating further computations such as estimation of the long-memory parameter and the Hermite coefficients. The estimation of Z(T) itself involves estimation of the marginal distribution function of the regression errors. These issues are considered in proposing a plug-in algorithm for optimal bandwidth selection and construction of confidence bands for the trend function. Some high-resolution time series of pollen records from Lago di Origlio in Switzerland, which go back ca. 20,000 years, are used to illustrate the methods.
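Because the errors are monotone transformations of Z(T), the latent Gaussian values can be approximated by pushing each residual through an estimate of its marginal distribution function and then through the standard-normal quantile function. The sketch below shows this normal-scores step only, using a rank-based empirical CDF; it is a simplified stand-in for the paper's full plug-in algorithm and assumes distinct residual values.

```python
from statistics import NormalDist

def normal_scores(residuals):
    """Approximate the latent Gaussian values: rank-based empirical CDF
    followed by the standard-normal inverse CDF (normal scores)."""
    n = len(residuals)
    rank = {v: i + 1 for i, v in enumerate(sorted(residuals))}  # assumes no ties
    nd = NormalDist()
    # Hazen plotting position (rank - 0.5)/n keeps probabilities strictly in (0, 1).
    return [nd.inv_cdf((rank[v] - 0.5) / n) for v in residuals]
```

The recovered scores preserve the ordering of the residuals, which is all that monotonicity guarantees; long-memory and Hermite-coefficient estimation would then operate on these transformed values.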
Abstract:
Several methods based on Kriging have recently been proposed for calculating a probability of failure involving costly-to-evaluate functions. A closely related problem is to estimate the set of inputs leading to a response exceeding a given threshold. Now, estimating such a level set—and not solely its volume—and quantifying uncertainties on it are not straightforward. Here we use notions from random set theory to obtain an estimate of the level set, together with a quantification of estimation uncertainty. We give explicit formulae in the Gaussian process set-up and provide a consistency result. We then illustrate how space-filling versus adaptive design strategies may sequentially reduce level set estimation uncertainty.
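In the Gaussian process set-up, a natural plug-in estimate of the level set keeps the inputs whose posterior probability of exceeding the threshold is at least one half. The fragment below illustrates that excursion-probability idea from posterior means and standard deviations; it is a simplified sketch, not the paper's random-set estimator, and the function names are assumptions.

```python
from statistics import NormalDist

def excursion_probability(mean, sd, threshold):
    """P(f(x) > threshold) when the posterior at x is N(mean, sd**2)."""
    return 1.0 - NormalDist(mean, sd).cdf(threshold)

def level_set_estimate(points, means, sds, threshold, alpha=0.5):
    """Plug-in estimate of {x : f(x) > threshold}: keep points whose
    posterior excursion probability is at least alpha."""
    return [x for x, m, s in zip(points, means, sds)
            if excursion_probability(m, s, threshold) >= alpha]
```

Uncertainty on the estimate can then be read off the same probabilities: points with excursion probability near 0 or 1 are classified confidently, while values near alpha mark the region where adaptive designs would place new evaluations.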
Abstract:
In this article, we develop the a priori and a posteriori error analysis of hp-version interior penalty discontinuous Galerkin finite element methods for strongly monotone quasi-Newtonian fluid flows in a bounded Lipschitz domain Ω ⊂ ℝd, d = 2, 3. In the latter case, computable upper and lower bounds on the error are derived in terms of a natural energy norm, which are explicit in the local mesh size and local polynomial degree of the approximating finite element method. A series of numerical experiments illustrate the performance of the proposed a posteriori error indicators within an automatic hp-adaptive refinement algorithm.
Abstract:
OBJECTIVES: The aim of this study was to determine whether the Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI)- or Cockcroft-Gault (CG)-based estimated glomerular filtration rates (eGFRs) performs better in the cohort setting for predicting moderate/advanced chronic kidney disease (CKD) or end-stage renal disease (ESRD). METHODS: A total of 9521 persons in the EuroSIDA study contributed 133 873 eGFRs. Poisson regression was used to model the incidence of moderate and advanced CKD (confirmed eGFR < 60 and < 30 mL/min/1.73 m², respectively) or ESRD (fatal/nonfatal) using CG and CKD-EPI eGFRs. RESULTS: Of 133 873 eGFR values, the ratio of CG to CKD-EPI was ≥ 1.1 in 22 092 (16.5%) and the difference between them (CG minus CKD-EPI) was ≥ 10 mL/min/1.73 m² in 20 867 (15.6%). Differences between CKD-EPI and CG were much greater when CG was not standardized for body surface area (BSA). A total of 403 persons developed moderate CKD using CG [incidence 8.9/1000 person-years of follow-up (PYFU); 95% confidence interval (CI) 8.0-9.8] and 364 using CKD-EPI (incidence 7.3/1000 PYFU; 95% CI 6.5-8.0). CG-derived eGFRs performed as well as CKD-EPI-derived eGFRs in predicting ESRD (n = 36) and death (n = 565), as measured by the Akaike information criterion. CG-based moderate and advanced CKDs were associated with ESRD [adjusted incidence rate ratio (aIRR) 7.17; 95% CI 2.65-19.36 and aIRR 23.46; 95% CI 8.54-64.48, respectively], as were CKD-EPI-based moderate and advanced CKDs (aIRR 12.41; 95% CI 4.74-32.51 and aIRR 12.44; 95% CI 4.83-32.03, respectively). CONCLUSIONS: Differences between eGFRs using CG adjusted for BSA or CKD-EPI were modest. In the absence of a gold standard, the two formulae predicted clinical outcomes with equal precision and can be used to estimate GFR in HIV-positive persons.
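The BSA standardization that drives much of the CG/CKD-EPI difference is mechanical: CG estimates creatinine clearance in mL/min, which must be rescaled to 1.73 m² of body surface area before it is comparable with CKD-EPI's mL/min/1.73 m². The sketch below uses the standard published Cockcroft-Gault and Du Bois formulae; the CKD-EPI equation itself (a piecewise function of sex, race, and creatinine) is omitted for brevity.

```python
def cockcroft_gault(age, weight_kg, serum_cr_mg_dl, female):
    """Cockcroft-Gault creatinine clearance in mL/min."""
    crcl = (140 - age) * weight_kg / (72.0 * serum_cr_mg_dl)
    return crcl * 0.85 if female else crcl

def bsa_dubois(height_cm, weight_kg):
    """Du Bois body surface area in m^2."""
    return 0.007184 * height_cm ** 0.725 * weight_kg ** 0.425

def cg_standardized(age, weight_kg, height_cm, serum_cr_mg_dl, female):
    """CG clearance rescaled to 1.73 m^2 of BSA, so its units match
    CKD-EPI's mL/min/1.73 m^2."""
    crcl = cockcroft_gault(age, weight_kg, serum_cr_mg_dl, female)
    return crcl * 1.73 / bsa_dubois(height_cm, weight_kg)
```

Without the final rescaling step, heavier patients get systematically higher CG values, which is the unstandardized discrepancy the abstract notes.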
Abstract:
Long-term measurements of CO2 flux can be obtained using the eddy covariance technique, but these datasets are affected by gaps which hinder the estimation of robust long-term means and annual ecosystem exchanges. We compare results obtained using three gap-fill techniques: multiple regression (MR), multiple imputation (MI), and artificial neural networks (ANNs), applied to a one-year dataset of hourly CO2 flux measurements collected in Lutjewad, over a flat agricultural area near the Wadden Sea dike in the north of the Netherlands. The dataset was separated into two subsets: a learning set and a validation set. The performances of the gap-filling techniques were analysed by calculating statistical criteria: coefficient of determination (R2), root mean square error (RMSE), mean absolute error (MAE), maximum absolute error (MaxAE), and mean square bias (MSB). The gap-fill accuracy is seasonally dependent, with better results in cold seasons. The highest accuracy is obtained using the ANN technique, which is also less sensitive to environmental/seasonal conditions. We argue that filling gaps directly on measured CO2 fluxes is more advantageous than the common method of filling gaps on calculated net ecosystem change, because ANN is an empirical method and smaller scatter is expected when gap filling is applied directly to measurements.
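The validation criteria listed above are all computed from observed/predicted pairs on the validation subset. A compact sketch of those criteria follows; note that "mean square bias" is taken here as the squared mean error, which is a common reading but may differ from the paper's exact definition.

```python
def gapfill_scores(observed, predicted):
    """Validation criteria for a gap-filling technique, computed on
    held-out (observed, predicted) pairs."""
    n = len(observed)
    errors = [p - o for o, p in zip(observed, predicted)]
    mean_o = sum(observed) / n
    ss_res = sum(e * e for e in errors)
    ss_tot = sum((o - mean_o) ** 2 for o in observed)
    return {
        "R2": 1.0 - ss_res / ss_tot,
        "RMSE": (ss_res / n) ** 0.5,
        "MAE": sum(abs(e) for e in errors) / n,
        "MaxAE": max(abs(e) for e in errors),
        "MSB": (sum(errors) / n) ** 2,  # squared mean error (assumed definition)
    }
```

Comparing MR, MI, and ANN then reduces to computing this dictionary for each technique on the same validation set, season by season.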
Abstract:
Robot-assisted therapy has become increasingly common in neurorehabilitation. Sophisticated controllers have been developed for robots to assist and cooperate with the patient. It is difficult for the patient to judge to what extent the robot contributes to the execution of a movement. Therefore, methods to comprehensively quantify the patient's contribution and provide feedback are of key importance. We developed a method to comprehensively estimate the patient's contribution by combining kinematic measures and the motor assistance applied. Inverse dynamic models of the robot and the passive human arm calculate the torques required to move the robot and the arm and build, together with the recorded motor torque, a metric (in percentage) that represents the patient's contribution to the movement. To evaluate the developed metric, 12 nondisabled subjects and 7 patients with neurological problems simulated instructed movement contributions. The results are compared with a common performance metric. The estimation yielded highly satisfactory results for both groups, even though the arm model used was strongly simplified. Displaying this metric to patients during therapy can potentially motivate them to actively participate in the training.
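One plausible reading of the percentage metric is: whatever share of the inverse-dynamics torque is not supplied by the robot's motor must come from the patient. The sketch below implements that reading, clamped to [0, 100]; it is a hypothetical simplification for a single joint and a single time sample, not the authors' exact model.

```python
def patient_contribution(required_torque, motor_torque):
    """Share of the required joint torque (from the inverse dynamic models
    of robot plus passive arm) supplied by the patient rather than the
    robot's motor, as a percentage clamped to [0, 100].
    Hypothetical single-joint, single-sample reading of the metric."""
    if required_torque == 0:
        return 0.0
    share = (required_torque - motor_torque) / required_torque * 100.0
    return max(0.0, min(100.0, share))
```

Over a full movement, such per-sample shares would be aggregated (e.g. averaged over time and joints) before being displayed to the patient.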
Abstract:
Environmental data sets of pollutant concentrations in air, water, and soil frequently include unquantified sample values reported only as being below the analytical method detection limit. These values, referred to as censored values, should be considered in the estimation of distribution parameters, as each represents some value of pollutant concentration between zero and the detection limit. Most of the currently accepted methods for estimating the population parameters of environmental data sets containing censored values rely upon the assumption of an underlying normal (or transformed normal) distribution. This assumption can result in unacceptable levels of error in parameter estimation due to the unbounded left tail of the normal distribution. With the beta distribution, which is bounded on the same range as a distribution of concentrations, [0 ≤ x ≤ 1], parameter estimation errors resulting from improper distribution bounds are avoided. This work developed a method that uses the beta distribution to estimate population parameters from censored environmental data sets and evaluated its performance in comparison to currently accepted methods that rely upon an underlying normal (or transformed normal) distribution. Data sets were generated assuming typical values encountered in environmental pollutant evaluation for mean, standard deviation, and number of variates. For each set of model values, data sets were generated assuming that the data were distributed either normally, lognormally, or according to a beta distribution. For varying levels of censoring, two established methods of parameter estimation (regression on normal order statistics and regression on lognormal order statistics) were used to estimate the known mean and standard deviation of each data set. The method developed for this study, employing a beta distribution assumption, was also used to estimate parameters, and the relative accuracies of all three methods were compared.
For data sets of all three distribution types, and for censoring levels up to 50%, the performance of the new method equaled, if not exceeded, that of the two established methods. Because of its robustness in parameter estimation regardless of distribution type or censoring level, the method employing the beta distribution should be considered for full development in estimating parameters for censored environmental data sets.
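For context, the "regression on order statistics" baselines work by regressing the (log-)detected values against the normal quantiles of their plotting positions, with the censored observations occupying the lowest ranks. A minimal sketch of the lognormal variant follows; it assumes a single detection limit below every detected value and uses the Blom plotting position, both simplifying assumptions.

```python
import math
from statistics import NormalDist

def lognormal_ros(detects, n_censored):
    """Regression on lognormal order statistics: regress log-detects on
    normal quantiles of their plotting positions, with the n_censored
    smallest ranks treated as below the detection limit.
    Returns (meanlog, sdlog) estimates for the full population."""
    n = n_censored + len(detects)
    nd = NormalDist()
    xs, ys = [], []
    for i, v in enumerate(sorted(detects), start=n_censored + 1):
        p = (i - 0.375) / (n + 0.25)      # Blom plotting position
        xs.append(nd.inv_cdf(p))
        ys.append(math.log(v))
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return my - slope * mx, slope         # intercept ≈ meanlog, slope ≈ sdlog
```

A beta-distribution analogue would replace the normal quantile function with beta quantiles on the bounded [0, 1] scale, which is where the bounded-support advantage discussed above enters.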