892 results for "estimating conditional probabilities"
Abstract:
OBJECTIVES: The aim of this study was to determine whether the Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI)- or Cockcroft-Gault (CG)-based estimated glomerular filtration rates (eGFRs) perform better in the cohort setting for predicting moderate/advanced chronic kidney disease (CKD) or end-stage renal disease (ESRD). METHODS: A total of 9521 persons in the EuroSIDA study contributed 133 873 eGFRs. Poisson regression was used to model the incidence of moderate and advanced CKD (confirmed eGFR < 60 and < 30 mL/min/1.73 m², respectively) or ESRD (fatal/nonfatal) using CG and CKD-EPI eGFRs. RESULTS: Of the 133 873 eGFR values, the ratio of CG to CKD-EPI was ≥ 1.1 in 22 092 (16.5%) and the difference between them (CG minus CKD-EPI) was ≥ 10 mL/min/1.73 m² in 20 867 (15.6%). Differences between CKD-EPI and CG were much greater when CG was not standardized for body surface area (BSA). A total of 403 persons developed moderate CKD using CG [incidence 8.9/1000 person-years of follow-up (PYFU); 95% confidence interval (CI) 8.0-9.8] and 364 using CKD-EPI (incidence 7.3/1000 PYFU; 95% CI 6.5-8.0). CG-derived eGFRs predicted ESRD (n = 36) and death (n = 565) as well as CKD-EPI-derived eGFRs did, as measured by the Akaike information criterion. CG-based moderate and advanced CKD were associated with ESRD [adjusted incidence rate ratio (aIRR) 7.17; 95% CI 2.65-19.36 and aIRR 23.46; 95% CI 8.54-64.48, respectively], as were CKD-EPI-based moderate and advanced CKD (aIRR 12.41; 95% CI 4.74-32.51 and aIRR 12.44; 95% CI 4.83-32.03, respectively). CONCLUSIONS: Differences between eGFRs using CG adjusted for BSA and CKD-EPI were modest. In the absence of a gold standard, the two formulae predicted clinical outcomes with equal precision and can be used to estimate GFR in HIV-positive persons.
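As a rough illustration of the modelling step described above, the sketch below fits a Poisson regression of event counts on covariates with log person-years as an exposure offset, so that exponentiated coefficients are incidence rate ratios with 95% CIs. The aggregated data, the column names (events, pyfu, ckd_stage, age_group), and the use of statsmodels are my own assumptions, not the study's code.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical aggregated data: event counts and person-years per stratum.
df = pd.DataFrame({
    "events":    [12, 30, 9, 25, 11, 18],
    "pyfu":      [2100.0, 1800.0, 650.0, 540.0, 160.0, 120.0],
    "ckd_stage": ["none", "none", "moderate", "moderate", "advanced", "advanced"],
    "age_group": ["<50", ">=50", "<50", ">=50", "<50", ">=50"],
})

# Poisson GLM with log(person-years) as the exposure offset; exponentiated
# coefficients are (adjusted) incidence rate ratios.
fit = smf.glm(
    "events ~ C(ckd_stage, Treatment('none')) + C(age_group)",
    data=df,
    family=sm.families.Poisson(),
    offset=np.log(df["pyfu"]),
).fit()

print(np.exp(fit.params))      # incidence rate ratios
print(np.exp(fit.conf_int()))  # 95% confidence intervals
```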
Abstract:
Long-term measurements of CO2 flux can be obtained using the eddy covariance technique, but these datasets are affected by gaps, which hinder the estimation of robust long-term means and annual ecosystem exchanges. We compare results obtained using three gap-fill techniques: multiple regression (MR), multiple imputation (MI), and artificial neural networks (ANNs), applied to a one-year dataset of hourly CO2 flux measurements collected in Lutjewad, over a flat agricultural area near the Wadden Sea dike in the north of the Netherlands. The dataset was separated into two subsets: a learning set and a validation set. The performance of the gap-filling techniques was analysed by calculating statistical criteria: the coefficient of determination (R2), root mean square error (RMSE), mean absolute error (MAE), maximum absolute error (MaxAE), and mean square bias (MSB). Gap-fill accuracy is seasonally dependent, with better results in cold seasons. The highest accuracy is obtained with the ANN technique, which is also less sensitive to environmental/seasonal conditions. We argue that filling gaps directly on measured CO2 fluxes is more advantageous than the common approach of filling gaps on the calculated net ecosystem exchange, because the ANN is an empirical method and smaller scatter is expected when gap filling is applied directly to the measurements.
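To make the comparison criteria concrete, here is a minimal sketch (not the paper's code) that computes the five statistics named above on a held-out validation subset. The definition used for the mean square bias (the squared mean residual) is an assumption, and the variable names and numbers are purely illustrative.

```python
import numpy as np

def gap_fill_scores(observed: np.ndarray, predicted: np.ndarray) -> dict:
    """Compare gap-filled predictions against withheld flux measurements."""
    resid = predicted - observed
    ss_res = np.sum(resid ** 2)
    ss_tot = np.sum((observed - observed.mean()) ** 2)
    return {
        "R2": 1.0 - ss_res / ss_tot,
        "RMSE": np.sqrt(np.mean(resid ** 2)),
        "MAE": np.mean(np.abs(resid)),
        "MaxAE": np.max(np.abs(resid)),
        "MSB": np.mean(resid) ** 2,   # one common definition: squared mean residual
    }

# Example: hourly fluxes withheld from the learning set vs. gap-filled values.
rng = np.random.default_rng(0)
obs = rng.normal(-2.0, 3.0, size=500)          # illustrative flux values
pred = obs + rng.normal(0.0, 1.0, size=500)    # imperfect gap-fill
print(gap_fill_scores(obs, pred))
```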
Abstract:
This paper addresses the issue of matching statistical and non-rigid shapes, and introduces an Expectation Conditional Maximization-based deformable shape registration (ECM-DSR) algorithm. Similar to previous works, we cast the statistical and non-rigid shape registration problem into a missing data framework and handle the unknown correspondences with Gaussian Mixture Models (GMM). The registration problem is then solved by fitting the GMM centroids to the data. But unlike previous works where equal isotropic covariances are used, our new algorithm uses heteroscedastic covariances whose values are iteratively estimated from the data. A previously introduced virtual observation concept is adopted here to simplify the estimation of the registration parameters. Based on this concept, we derive closed-form solutions to estimate parameters for statistical or non-rigid shape registrations in each iteration. Our experiments conducted on synthesized and real data demonstrate that the ECM-DSR algorithm has various advantages over existing algorithms.
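For readers unfamiliar with this family of methods, the sketch below illustrates the general GMM/EM registration idea in a deliberately simplified form: an affine transform with a single isotropic variance re-estimated each iteration. It is not the ECM-DSR algorithm (which uses heteroscedastic covariances and the virtual-observation construction); the function and variable names are my own.

```python
import numpy as np

def gmm_affine_register(source, target, n_iter=50):
    """Fit an affine map A, t so that source @ A.T + t explains target via a GMM
    whose centroids are the transformed source points (CPD-style sketch)."""
    M, d = target.shape
    A, t = np.eye(d), np.zeros(d)
    diff0 = target[:, None, :] - source[None, :, :]
    sigma2 = np.mean(np.sum(diff0 ** 2, axis=2)) / d
    for _ in range(n_iter):
        moved = source @ A.T + t                         # current GMM centroids
        diff = target[:, None, :] - moved[None, :, :]    # (M, N, d)
        logp = -np.sum(diff ** 2, axis=2) / (2.0 * sigma2)
        P = np.exp(logp - logp.max(axis=1, keepdims=True))
        P /= P.sum(axis=1, keepdims=True)                # E-step: soft correspondences
        # M-step: closed-form weighted affine fit.
        Np = P.sum()
        w = P.sum(axis=0)                                # per-source-point weights
        mu_s = (w @ source) / Np
        mu_t = (P.sum(axis=1) @ target) / Np
        S, T = source - mu_s, target - mu_t
        A = (T.T @ P @ S) @ np.linalg.inv(S.T @ (w[:, None] * S))
        t = mu_t - A @ mu_s
        moved = source @ A.T + t
        diff = target[:, None, :] - moved[None, :, :]
        sigma2 = max(float(np.sum(P * np.sum(diff ** 2, axis=2)) / (Np * d)), 1e-8)
    return A, t

# Example: recover a known affine map from noisy 2-D points.
rng = np.random.default_rng(3)
Y = rng.uniform(-1, 1, size=(80, 2))                     # source points
A_true = np.array([[1.2, 0.3], [-0.2, 0.9]])
X = Y @ A_true.T + np.array([0.5, -0.25]) + rng.normal(0, 0.01, size=Y.shape)
A_est, t_est = gmm_affine_register(Y, X)
print(A_est, t_est)
```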
Abstract:
Robot-assisted therapy has become increasingly common in neurorehabilitation. Sophisticated controllers have been developed for robots to assist and cooperate with the patient. It is difficult for the patient to judge to what extent the robot contributes to the execution of a movement. Therefore, methods that comprehensively quantify the patient's contribution and provide feedback are of key importance. We developed a method to comprehensively estimate the patient's contribution by combining kinematic measures with the applied motor assistance. Inverse dynamic models of the robot and of the passive human arm calculate the torques required to move the robot and the arm; together with the recorded motor torque, these yield a metric (expressed as a percentage) that represents the patient's contribution to the movement. To evaluate the developed metric, 12 nondisabled subjects and 7 patients with neurological problems simulated instructed movement contributions. The results are compared with a common performance metric. The estimation shows very satisfactory results for both groups, even though the arm model used was strongly simplified. Displaying this metric to patients during therapy can potentially motivate them to actively participate in the training.
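The abstract does not give the exact formula, so the sketch below only illustrates the general idea: express the share of the required torque (from the inverse dynamic models) that is not supplied by the robot's motor as a percentage. The simple ratio, the clipping, and the names used here are assumptions, not the authors' metric.

```python
import numpy as np

def patient_contribution(tau_required: np.ndarray, tau_motor: np.ndarray) -> np.ndarray:
    """Percentage of the required torque not supplied by the robot's motor.

    tau_required: torque needed to move robot + passive arm (inverse dynamics).
    tau_motor:    torque actually supplied by the robot's motor.
    """
    eps = 1e-9                                   # avoid division by zero
    share = (tau_required - tau_motor) / (np.abs(tau_required) + eps)
    return np.clip(share, 0.0, 1.0) * 100.0

# Example: the robot supplies most of the torque early on, less later.
tau_req = np.array([4.0, 4.0, 4.0, 4.0])         # N*m, from the dynamic models
tau_mot = np.array([3.5, 2.0, 1.0, 0.0])         # N*m, recorded motor torque
print(patient_contribution(tau_req, tau_mot))    # ~[12.5, 50., 75., 100.]
```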
Abstract:
Statistical physicists assume a probability distribution over micro-states to explain thermodynamic behavior. The question of this paper is whether these probabilities are part of a best system and can thus be interpreted as Humean chances. I consider two strategies: a globalist strategy, as suggested by Loewer, and a localist strategy, as advocated by Frigg and Hoefer. Both strategies fail because the systems they are part of have rivals that are roughly equally good, while ontic probabilities should be part of a clearly winning system. I conclude with the diagnosis that well-defined micro-probabilities underestimate the robust character of explanations in statistical physics.
Abstract:
The talk starts out with a short introduction to the philosophy of probability. I highlight the need to interpret probabilities in the sciences and motivate objectivist accounts of probabilities. Very roughly, according to such accounts, ascriptions of probabilities have truth-conditions that are independent of personal interests and needs. But objectivist accounts are pointless if they do not provide an objectivist epistemology, i.e., if they do not determine well-defined methods to support or falsify claims about probabilities. In the rest of the talk I examine recent philosophical proposals for an objectivist methodology. Most of them take up ideas well-known from statistics. I nevertheless find some proposals incompatible with objectivist aspirations.
Abstract:
How do probabilistic models represent their targets and how do they allow us to learn about them? The answer to this question depends on a number of details, in particular on the meaning of the probabilities involved. To classify the options, a minimalist conception of representation (Suárez 2004) is adopted: modelers devise substitutes ("sources") of their targets and investigate them to infer something about the target. Probabilistic models allow us to infer probabilities about the target from probabilities about the source. This leads to a framework in which we can systematically distinguish between different models of probabilistic modeling. I develop a fully Bayesian view of probabilistic modeling, but I argue that, as an alternative, Bayesian degrees of belief about the target may be derived from ontic probabilities about the source. Remarkably, some accounts of ontic probabilities can avoid problems if they are supposed to apply to sources only.
Abstract:
Cells infected with MuSVts110 express a viral RNA which contains an inherent conditional defect in RNA splicing. It has been shown previously that splicing of the MuSVts110 primary transcript is essential for morphological transformation of 6m2 cells in vitro. A growth temperature of 33°C is permissive for viral RNA splicing, and, consequently, 6m2 cells appear morphologically transformed at this temperature. However, 6m2 cells appear phenotypically normal when incubated at 39°C, the non-permissive temperature for viral RNA splicing. After a shift from 39°C to 33°C, the coordinate splicing of previously synthesized and newly transcribed MuSVts110 RNA was achieved. By S1 nuclease analysis of total RNA isolated at various times, 5′ splice site cleavage of the MuSVts110 transcript appeared to occur 60 minutes after the shift to 33°C, and 30 minutes prior to detectable exon ligation. In addition, consistent with the permissive temperatures and the kinetic timeframe of viral RNA splicing after a shift to 33°C, four temperature-sensitive blockades to primer extension were identified 26-75 bases upstream of the 3′ splice site. These blockades likely reflect four branchpoint sequences utilized in the formation of MuSVts110 lariat splicing intermediates. The 54-5A4 cell line is a spontaneous revertant of 6m2 cells and appears transformed at all growth temperatures. Primer extension sequence analysis has shown that a five-base deletion occurred at the 3′ splice site in MuSVts110 RNA, allowing the expression of a viral transforming protein in 54-5A4 in the absence of RNA splicing, whereas in the parental 6m2 cell line a splicing event is necessary to generate a similar transforming protein. As a consequence of this deletion, splicing cannot occur, and the formation of the four MuSVts110 branched intermediates was not observed at any temperature in 54-5A4 cells. However, 5′ splice site cleavage was still detected at 33°C. Finally, we investigated the role of the 1488 bp deletion that occurred in the generation of MuSVts110 in the activation of temperature-sensitive viral RNA splicing. This deletion appears solely responsible for splice site activation. Whether intron size is the crucial factor in MuSVts110 RNA splicing or whether inhibitory sequences were removed by the deletion is currently unknown. (Abstract shortened with permission of author.)
Abstract:
Environmental data sets of pollutant concentrations in air, water, and soil frequently include unquantified sample values reported only as being below the analytical method detection limit. These values, referred to as censored values, should be considered in the estimation of distribution parameters, as each represents some value of pollutant concentration between zero and the detection limit. Most of the currently accepted methods for estimating the population parameters of environmental data sets containing censored values rely upon the assumption of an underlying normal (or transformed normal) distribution. This assumption can result in unacceptable levels of error in parameter estimation due to the unbounded left tail of the normal distribution. With the beta distribution, which is bounded over the same range as a distribution of concentrations, [0 ≤ x ≤ 1], parameter estimation errors resulting from improper distribution bounds are avoided. This work developed a method that uses the beta distribution to estimate population parameters from censored environmental data sets and evaluated its performance in comparison to currently accepted methods that rely upon an underlying normal (or transformed normal) distribution. Data sets were generated assuming typical values encountered in environmental pollutant evaluation for the mean, standard deviation, and number of variates. For each set of model values, data sets were generated assuming that the data were distributed either normally, lognormally, or according to a beta distribution. For varying levels of censoring, two established methods of parameter estimation, regression on normal ordered statistics and regression on lognormal ordered statistics, were used to estimate the known mean and standard deviation of each data set. The method developed for this study, employing a beta distribution assumption, was also used to estimate the parameters, and the relative accuracies of the three methods were compared. For data sets of all three distribution types, and for censoring levels up to 50%, the performance of the new method equaled, if not exceeded, that of the two established methods. Because of its robustness in parameter estimation regardless of distribution type or censoring level, the method employing the beta distribution should be considered for full development in estimating parameters for censored environmental data sets.
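As a rough sketch of the general idea (not the dissertation's procedure), the code below fits a beta distribution to left-censored data by maximum likelihood on data scaled to [0, 1]: uncensored values contribute the log density, and values below the detection limit contribute the log CDF at that limit. The common detection limit, the SciPy-based optimisation, and all names are my own assumptions.

```python
import numpy as np
from scipy import optimize, stats

def fit_beta_censored(values, detect_limit, censored_mask):
    """MLE of beta(a, b) on data in [0, 1] with left-censoring at detect_limit."""
    def neg_loglik(params):
        a, b = np.exp(params)                          # keep shape parameters positive
        ll = stats.beta.logpdf(values[~censored_mask], a, b).sum()
        ll += censored_mask.sum() * stats.beta.logcdf(detect_limit, a, b)
        return -ll
    res = optimize.minimize(neg_loglik, x0=np.log([1.0, 1.0]), method="Nelder-Mead")
    a, b = np.exp(res.x)
    return a, b, a / (a + b)                           # shape parameters and implied mean

# Example with simulated concentrations already scaled to [0, 1].
rng = np.random.default_rng(1)
x = rng.beta(2.0, 8.0, size=300)
dl = 0.05                                              # detection limit
censored = x < dl
x_obs = np.where(censored, dl, x)                      # censored values reported at the DL
print(fit_beta_censored(x_obs, dl, censored))
```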
Abstract:
The spatial distribution of the American lobster Homarus americanus is influenced by many factors, which are often difficult to quantify. We implemented a modeling approach for quantifying season-, size-, and sex-specific lobster spatial distribution in the Gulf of Maine with respect to environmental and spatial variables including bottom temperature, bottom salinity, latitude, longitude, depth, distance offshore, and 2 substratum features. Lobster distribution was strongly associated with temperature and depth, and differed seasonally by sex. In offshore waters in the fall, females were dominant at higher latitudes and males at lower latitudes. This segregation was not apparent in the spring although females were still dominant at higher latitudes in offshore waters. Juveniles and adults were also distributed differently; juveniles were more abundant at the lower latitudes in inshore waters, while adults were more widespread along the entire coast. These patterns are consistent with the ecology of the American lobster. This study provides a tool to evaluate changes in lobster spatial distribution with respect to changes in key habitat and other environmental variables, and consequently could be of value for the management of the American lobster.