37 results for Random error

at University of Queensland eSpace - Australia


Relevance: 100.00%

Abstract:

The reliability of measurement refers to unsystematic error in observed responses. Investigations of the prevalence of random error in stated estimates of willingness to pay (WTP) are important to an understanding of why tests of validity in CV can fail. However, published reliability studies have tended to adopt empirical methods that have practical and conceptual limitations when applied to WTP responses. This contention is supported in a review of contingent valuation reliability studies that demonstrates important limitations of existing approaches to WTP reliability. It is argued that empirical assessments of the reliability of contingent values may be better dealt with by using multiple indicators to measure the latent WTP distribution. This latent variable approach is demonstrated with data obtained from a WTP study for stormwater pollution abatement. Attitude variables were employed as a way of assessing the reliability of open-ended WTP (with benchmarked payment cards) for stormwater pollution abatement. The results indicated that participants' decisions to pay were reliably measured, but not the magnitude of the WTP bids. This finding highlights the need to better discern what is actually being measured in WTP studies. (C) 2003 Elsevier B.V. All rights reserved.

Relevance: 60.00%

Abstract:

Objectives: To compare the population modelling programs NONMEM and P-PHARM during investigation of the pharmacokinetics of tacrolimus in paediatric liver-transplant recipients. Methods: Population pharmacokinetic analysis was performed using NONMEM and P-PHARM on retrospective data from 35 paediatric liver-transplant patients receiving tacrolimus therapy. The same data were presented to both programs. Maximum likelihood estimates were sought for apparent clearance (CL/F) and apparent volume of distribution (V/F). Covariates screened for influence on these parameters were weight, age, gender, post-operative day, days of tacrolimus therapy, transplant type, biliary reconstructive procedure, liver function tests, creatinine clearance, haematocrit, corticosteroid dose, and potential interacting drugs. Results: A satisfactory model was developed in both programs with a single categorical covariate - transplant type - providing stable parameter estimates and small, normally distributed (weighted) residuals. In NONMEM, the continuous covariates - age and liver function tests - improved modelling further. Mean parameter estimates were CL/F (whole liver) = 16.3 l/h, CL/F (cut-down liver) = 8.5 l/h and V/F = 565 l in NONMEM, and CL/F = 8.3 l/h and V/F = 155 l in P-PHARM. Individual Bayesian parameter estimates were CL/F (whole liver) = 17.9 +/- 8.8 l/h, CL/F (cut-down liver) = 11.6 +/- 18.8 l/h and V/F = 712 +/- 792 l in NONMEM, and CL/F (whole liver) = 12.8 +/- 3.5 l/h, CL/F (cut-down liver) = 8.2 +/- 3.4 l/h and V/F = 221 +/- 164 l in P-PHARM. Marked interindividual kinetic variability (38-108%) and residual random error (approximately 3 ng/ml) were observed. P-PHARM was more user friendly and readily provided informative graphical presentation of results. NONMEM allowed a wider choice of errors for statistical modelling and coped better with complex covariate data sets.
Conclusion: Results from parametric modelling programs can vary due to different algorithms employed to estimate parameters, alternative methods of covariate analysis and variations and limitations in the software itself.

Relevance: 60.00%

Abstract:

Patient outcomes in transplantation would improve if dosing of immunosuppressive agents was individualized. The aim of this study is to develop a population pharmacokinetic model of tacrolimus in adult liver transplant recipients and test this model in individualizing therapy. Population analysis was performed on data from 68 patients. Estimates were sought for apparent clearance (CL/F) and apparent volume of distribution (V/F) using the nonlinear mixed effects model program (NONMEM). Factors screened for influence on these parameters were weight, age, sex, transplant type, biliary reconstructive procedure, postoperative day, days of therapy, liver function test results, creatinine clearance, hematocrit, corticosteroid dose, and interacting drugs. The predictive performance of the developed model was evaluated through Bayesian forecasting in an independent cohort of 36 patients. No linear correlation existed between tacrolimus dosage and trough concentration (r(2) = 0.005). Mean individual Bayesian estimates for CL/F and V/F were 26.5 +/- 8.2 (SD) L/hr and 399 +/- 185 L, respectively. CL/F was greater in patients with normal liver function. V/F increased with patient weight. CL/F decreased with increasing hematocrit. Based on the derived model, a 70-kg patient with an aspartate aminotransferase (AST) level less than 70 U/L would require a tacrolimus dose of 4.7 mg twice daily to achieve a steady-state trough concentration of 10 ng/mL. A 50-kg patient with an AST level greater than 70 U/L would require a dose of 2.6 mg. Marked interindividual variability (43% to 93%) and residual random error (3.3 ng/mL) were observed. Predictions made using the final model were reasonably nonbiased (0.56 ng/mL), but imprecise (4.8 ng/mL). Pharmacokinetic information obtained will assist in tacrolimus dosing; however, further investigation into reasons for the pharmacokinetic variability of tacrolimus is required.
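The dosing arithmetic in this abstract can be checked approximately with the standard one-compartment steady-state relation, average concentration = dosing rate / (CL/F). This is a minimal sketch using the cohort mean CL/F only, not the paper's full covariate model (which adjusts for AST, weight and hematocrit):

```python
def avg_steady_state_conc(dose_mg, interval_h, cl_over_f_l_h):
    """Average steady-state concentration (ng/mL) for repeated dosing:
    C_avg = dosing rate / apparent clearance; 1 mg/L = 1000 ng/mL."""
    dose_rate = dose_mg / interval_h           # mg/h
    conc_mg_per_l = dose_rate / cl_over_f_l_h  # mg/L
    return conc_mg_per_l * 1000.0              # ng/mL

# 4.7 mg every 12 h at the cohort mean CL/F of 26.5 L/h gives an average
# concentration of about 14.8 ng/mL; the quoted trough of 10 ng/mL sits
# below the average, as expected for intermittent dosing.
```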

Relevance: 60.00%

Abstract:

Background: Hospital performance reports based on administrative data should distinguish differences in quality of care between hospitals from case mix related variation and random error effects. A study was undertaken to determine which of 12 diagnosis-outcome indicators measured across all hospitals in one state had significant risk adjusted systematic (or special cause) variation (SV) suggesting differences in quality of care. For those that did, we determined whether SV persists within hospital peer groups, whether indicator results correlate at the individual hospital level, and how many adverse outcomes would be avoided if all hospitals achieved indicator values equal to the best performing 20% of hospitals. Methods: All patients admitted during a 12 month period to 180 acute care hospitals in Queensland, Australia with heart failure (n = 5745), acute myocardial infarction (AMI) (n = 3427), or stroke (n = 2955) were entered into the study. Outcomes comprised in-hospital deaths, long hospital stays, and 30 day readmissions. Regression models produced standardised, risk adjusted diagnosis specific outcome event ratios for each hospital. Systematic and random variation in ratio distributions for each indicator were then apportioned using hierarchical statistical models. Results: Only five of 12 (42%) diagnosis-outcome indicators showed significant SV across all hospitals (long stays and same diagnosis readmissions for heart failure; in-hospital deaths and same diagnosis readmissions for AMI; and in-hospital deaths for stroke). Significant SV was only seen for two indicators within hospital peer groups (same diagnosis readmissions for heart failure in tertiary hospitals and in-hospital mortality for AMI in community hospitals). Only two pairs of indicators showed significant correlation. If all hospitals emulated the best performers, at least 20% of AMI and stroke deaths, heart failure long stays, and heart failure and AMI readmissions could be avoided.
Conclusions: Diagnosis-outcome indicators based on administrative data require validation as markers of significant risk adjusted SV. Validated indicators allow quantification of realisable outcome benefits if all hospitals achieved best performer levels. The overall level of quality of care within single institutions cannot be inferred from the results of one or a few indicators.
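The idea of apportioning systematic versus random variation in standardised outcome ratios can be illustrated with a crude method-of-moments version. The study itself uses hierarchical models; the simple Poisson sampling-variance subtraction below is only a sketch of the concept:

```python
def excess_variation(observed, expected):
    """Crude moment estimate of systematic (between-hospital) variation:
    the variance of standardised O/E ratios minus the average Poisson
    sampling variance 1/E. A positive value hints at special-cause
    variation beyond chance; a proper analysis would fit a hierarchical
    model instead of this subtraction."""
    ratios = [o / e for o, e in zip(observed, expected)]
    mean_r = sum(ratios) / len(ratios)
    var_r = sum((r - mean_r) ** 2 for r in ratios) / (len(ratios) - 1)
    sampling_var = sum(1.0 / e for e in expected) / len(expected)
    return max(0.0, var_r - sampling_var)
```

With observed counts exactly matching expectation the estimate is zero; widely dispersed ratios yield a positive excess.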

Relevance: 20.00%

Abstract:

We investigate here a modification of the discrete random pore model [Bhatia SK, Vartak BJ, Carbon 1996;34:1383], by including an additional rate constant which takes into account the different reactivity of the initial pore surface having attached functional groups and hydrogens, relative to the subsequently exposed surface. It is observed that the relative initial reactivity has a significant effect on the conversion and structural evolution, underscoring the importance of initial surface chemistry. The model is tested against experimental data on chemically controlled char oxidation and steam gasification at various temperatures. It is seen that the variations of the reaction rate and surface area with conversion are better represented by the present approach than earlier random pore models. The results clearly indicate the improvement of model predictions in the low conversion region, where the effect of the initially attached functional groups and hydrogens is more significant, particularly for char oxidation. It is also seen that, for the data examined, the initial surface chemistry is less important for steam gasification as compared to the oxidation reaction. Further development of the approach must also incorporate the dynamics of surface complexation, which is not considered here.
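The classical random pore model rate, and one illustrative way of blending a rate constant for the initial functional-group-bearing surface with one for the subsequently exposed surface, can be sketched as follows. The exponential decay form is an assumption for illustration only, not the formulation used in the paper:

```python
import math

def rpm_rate(X, k, psi):
    """Classical random pore model reaction rate at conversion X
    (Bhatia & Perlmutter form): r = k (1 - X) sqrt(1 - psi ln(1 - X))."""
    return k * (1.0 - X) * math.sqrt(1.0 - psi * math.log(1.0 - X))

def rpm_rate_two_k(X, k_initial, k_bulk, psi, decay=10.0):
    """Illustrative two-rate-constant variant (not the paper's exact
    model): the initial surface reacts with k_initial, and its influence
    decays with conversion toward the bulk constant k_bulk, so the
    initial surface chemistry matters most at low conversion."""
    k_eff = k_bulk + (k_initial - k_bulk) * math.exp(-decay * X)
    return rpm_rate(X, k_eff, psi)
```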

Relevance: 20.00%

Abstract:

This article describes a method to turn astronomical imaging into a random number generator by using the positions of incident cosmic rays and hot pixels to generate bit streams. We subject the resultant bit streams to a battery of standard benchmark statistical tests for randomness and show that these bit streams are statistically the same as a perfect random bit stream. Strategies for improving and building upon this method are outlined.
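One common way to turn event positions into a debiased bit stream, sketched here as a general illustration rather than the authors' exact pipeline, is to take coordinate parity bits and pass them through a von Neumann extractor:

```python
def events_to_bits(events):
    """Raw bits: the least-significant bit of each event's x and y pixel
    coordinate (events are cosmic-ray hits or hot pixels)."""
    bits = []
    for x, y in events:
        bits.append(x & 1)
        bits.append(y & 1)
    return bits

def von_neumann_extract(bits):
    """Debias a Bernoulli stream: read non-overlapping pairs, map
    01 -> 0 and 10 -> 1, and discard 00 and 11 (von Neumann's trick).
    The output is unbiased if pairs are independent and identically
    distributed, at the cost of throwing roughly 3/4 of the bits away."""
    out = []
    for a, b in zip(bits[0::2], bits[1::2]):
        if a != b:
            out.append(a)
    return out
```

A real pipeline would still need the benchmark statistical tests mentioned in the abstract to validate the output stream.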

Relevance: 20.00%

Abstract:

Genetic markers that distinguish fungal genotypes are important tools for genetic analysis of heterokaryosis and parasexual recombination in fungi. Random amplified polymorphic DNA (RAPD) markers that distinguish two races of biotype B of Colletotrichum gloeosporioides infecting the legume Stylosanthes guianensis were sought. Eighty-five arbitrary oligonucleotide primers were used to generate 895 RAPD bands but only two bands were found to be specifically amplified from DNA of the race 3 isolate. These two RAPD bands were used as DNA probes and hybridised only to DNA of the race 3 isolate. Both RAPD bands hybridised to a dispensable 1.2 Mb chromosome of the race 3 isolate. No other genotype-specific chromosomes or DNA sequences were identified in either the race 2 or race 3 isolates. The RAPD markers hybridised to a 2 Mb chromosome in all races of the genetically distinct biotype A pathogen which infects other species of Stylosanthes as well as S. guianensis. The experiments indicate that RAPD analysis is a potentially useful tool for obtaining genotype- and chromosome-specific DNA probes in closely related isolates of one biotype of this fungal pathogen.

Relevance: 20.00%

Abstract:

A significant problem in the collection of responses to potentially sensitive questions, such as those relating to illegal, immoral or embarrassing activities, is non-sampling error due to refusal to respond or false responses. Eichhorn & Hayre (1983) suggested the use of scrambled responses to reduce this form of bias. This paper considers a linear regression model in which the dependent variable is unobserved, but for which its sum or product with a scrambling random variable of known distribution is known. The performance of two likelihood-based estimators is investigated, namely a Bayesian estimator achieved through a Markov chain Monte Carlo (MCMC) sampling scheme, and a classical maximum-likelihood estimator. These two estimators and an estimator suggested by Singh, Joarder & King (1996) are compared. Monte Carlo results show that the Bayesian estimator outperforms the classical estimators in almost all cases, and the relative performance of the Bayesian estimator improves as the responses become more scrambled.
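The Eichhorn & Hayre multiplicative scrambling idea behind this paper can be sketched with the simple moment estimator E[X] = E[Y]/E[S]; the Bayesian MCMC and maximum-likelihood estimators studied in the paper are more sophisticated than this:

```python
import random

def estimate_mean(scrambled, scrambler_mean):
    """Moment estimate of E[X] from multiplicatively scrambled responses
    Y = X * S: since S is drawn independently of X, E[Y] = E[X] * E[S]."""
    return sum(scrambled) / len(scrambled) / scrambler_mean

# Simulation: each respondent multiplies the true answer by a private
# draw S ~ Uniform(0.5, 1.5), so E[S] = 1 and the interviewer never
# sees the raw answer, only the scrambled one.
rng = random.Random(42)
true_x = [rng.gauss(50.0, 10.0) for _ in range(10_000)]
scrambled = [x * rng.uniform(0.5, 1.5) for x in true_x]
est = estimate_mean(scrambled, 1.0)  # close to the true mean of 50
```

Privacy comes from the respondent's private draw of S; the analyst recovers only aggregate quantities.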

Relevance: 20.00%

Abstract:

Perceived depth was measured for three types of stereograms with the colour/texture of half-occluded (monocular) regions either similar to or dissimilar to that of binocular regions or background. In a two-panel random dot stereogram the monocular region was filled with texture either similar or different to the far panel, or left blank. In unpaired background stereograms the monocular region either matched the background or was different in colour or texture, and in phantom stereograms the monocular region matched the partially occluded object or was a different colour or texture. In all three cases depth was considerably impaired when the monocular texture did not match either the background or the more distant surface. The content and context of monocular regions, as well as their position, are important in determining their role as occlusion cues and thus in three-dimensional layout. We compare coincidence and accidental view accounts of these effects. (C) 2002 Elsevier Science Ltd. All rights reserved.

Relevance: 20.00%

Abstract:

We show that quantum feedback control can be used as a quantum-error-correction process for errors induced by a weak continuous measurement. In particular, when the error model is restricted to one, perfectly measured, error channel per physical qubit, quantum feedback can act to perfectly protect a stabilizer codespace. Using the stabilizer formalism we derive an explicit scheme, involving feedback and an additional constant Hamiltonian, to protect an (n-1)-qubit logical state encoded in n physical qubits. This works for both Poisson (jump) and white-noise (diffusion) measurement processes. Universal quantum computation is also possible in this scheme. As an example, we show that detected-spontaneous emission error correction with a driving Hamiltonian can greatly reduce the amount of redundancy required to protect a state from that which has been previously postulated [e.g., Alber et al., Phys. Rev. Lett. 86, 4402 (2001)].

Relevance: 20.00%

Abstract:

We discuss quantum error correction for errors that occur at random times as described by a conditional Poisson process. We show how a class of such errors, detected spontaneous emission, can be corrected by continuous closed-loop feedback.

Relevance: 20.00%

Abstract:

A two-component survival mixture model is proposed to analyse a set of ischaemic stroke-specific mortality data. The survival experience of stroke patients after index stroke may be described by a subpopulation of patients in the acute condition and another subpopulation of patients in the chronic phase. To adjust for the inherent correlation of observations due to random hospital effects, a mixture model of two survival functions with random effects is formulated. Assuming a Weibull hazard in both components, an EM algorithm is developed for the estimation of fixed effect parameters and variance components. A simulation study is conducted to assess the performance of the two-component survival mixture model estimators. Simulation results confirm the applicability of the proposed model in a small sample setting. Copyright (C) 2004 John Wiley & Sons, Ltd.
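The two-component mixture form described here can be sketched directly with fixed parameters only; the hospital random effects and EM estimation from the paper are omitted:

```python
import math

def weibull_survival(t, shape, scale):
    """Weibull survival function S(t) = exp(-(t/scale)^shape)."""
    return math.exp(-((t / scale) ** shape))

def mixture_survival(t, p_acute, acute, chronic):
    """Two-component mixture survival:
    S(t) = p * S_acute(t) + (1 - p) * S_chronic(t),
    where acute and chronic are illustrative (shape, scale) pairs for
    the acute-phase and chronic-phase subpopulations."""
    return (p_acute * weibull_survival(t, *acute)
            + (1 - p_acute) * weibull_survival(t, *chronic))
```

In the paper, both components carry a shared hospital random effect and the mixing proportion and Weibull parameters are estimated jointly by EM.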

Relevance: 20.00%

Abstract:

This paper presents a method for estimating the posterior probability density of the cointegrating rank of a multivariate error correction model. A second contribution is the careful elicitation of the prior for the cointegrating vectors derived from a prior on the cointegrating space. This prior obtains naturally from treating the cointegrating space as the parameter of interest in inference and overcomes problems previously encountered in Bayesian cointegration analysis. Using this new prior and Laplace approximation, an estimator for the posterior probability of the rank is given. The approach performs well compared with information criteria in Monte Carlo experiments. (C) 2003 Elsevier B.V. All rights reserved.

Relevance: 20.00%

Abstract:

A new conceptual model for soil pore-solid structure is formalized. Soil pore-solid structure is proposed to comprise spatially abutting elements each with a value which is its membership to the fuzzy set "pore," termed porosity. These values have a range between zero (all solid) and unity (all pore). Images are used to represent structures in which the elements are pixels and the value of each is a porosity. Two-dimensional random fields are generated by allocating each pixel a porosity by independently sampling a statistical distribution. These random fields are reorganized into other pore-solid structural types by selecting parent points which have a specified local region of influence. Pixels of larger or smaller porosity are aggregated about the parent points and within the region of interest by controlled swapping of pixels in the image. This creates local regions of homogeneity within the random field. This is similar to the process known as simulated annealing. The resulting structures are characterized using one- and two-dimensional variograms and functions describing their connectivity. A variety of examples of structures created by the model is presented and compared. Extension to three dimensions presents no theoretical difficulties and is currently under development.