988 results for statistical estimation


Relevance: 30.00%

Publisher:

Abstract:

BACKGROUND: Estimating glomerular filtration rate (eGFR) using a common formula for both adult and pediatric populations is challenging. Using inulin clearances (iGFRs), this study investigates whether there is a precise age cutoff beyond which the Modification of Diet in Renal Disease (MDRD), Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI), or Cockcroft-Gault (CG) formulas can be applied with acceptable precision. The performance of the new Schwartz formula according to age is also evaluated. METHOD: We compared 503 iGFRs from 503 children aged between 33 months and 18 years to eGFRs. To define the most precise age cutoff for each formula, a circular binary segmentation method was applied to the formulas' bias values as a function of the children's ages. Bias was defined as the difference between iGFR and eGFR. To validate the identified cutoffs, the 30% accuracy (the percentage of eGFRs falling within ±30% of iGFR) was calculated. RESULTS: For MDRD, CKD-EPI, and CG, the best age cutoffs were ≥14.3, ≥14.2, and ≤10.8 years, respectively. The lowest mean bias and highest accuracy were -17.11 and 64.7% for MDRD, 27.4 and 51% for CKD-EPI, and 8.31 and 77.2% for CG. The Schwartz formula performed best below the age of 10.9 years. CONCLUSION: For the MDRD and CKD-EPI formulas, mean bias decreased with increasing age, and these formulas were more accurate beyond age cutoffs of 14.3 and 14.2 years, respectively. For the CG and Schwartz formulas, the lowest mean bias and best accuracy occurred below age cutoffs of 10.8 and 10.9 years, respectively. Nevertheless, the accuracies of all the formulas remained below the National Kidney Foundation Kidney Disease Outcomes Quality Initiative target required for validation in these age groups; therefore, none of these formulas can be used to estimate GFR in child and adolescent populations.
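
The bias and 30% accuracy computations are simple to reproduce. Below is a minimal Python sketch, assuming paired arrays of measured (inulin) and estimated GFR; the sample values are hypothetical, not the study's data.

```python
import numpy as np

def bias_and_p30(igfr, egfr):
    """Mean bias (iGFR - eGFR) and 30% accuracy: the percentage of
    estimates falling within +/-30% of the measured (inulin) GFR."""
    igfr = np.asarray(igfr, dtype=float)
    egfr = np.asarray(egfr, dtype=float)
    bias = np.mean(igfr - egfr)
    p30 = np.mean(np.abs(egfr - igfr) <= 0.30 * igfr) * 100
    return bias, p30

# Hypothetical paired measurements (ml/min/1.73 m^2)
igfr = [95.0, 60.0, 110.0, 45.0]
egfr = [88.0, 75.0, 100.0, 52.0]
print(bias_and_p30(igfr, egfr))
```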

Relevance: 30.00%

Publisher:

Abstract:

AIMS: To estimate the effect of a nursing intervention in home-dwelling older adults on the occurrence and course of delirium and concomitant cognitive and functional impairment. METHODS: A pilot randomized clinical trial using a before/after design was conducted with older patients who had been discharged from hospital with a medical prescription for home care. A total of 51 patients were randomized into the experimental group (EG) and 52 into the control group (CG). In addition to usual home care, nursing interventions were delivered to the EG by a geriatric nurse specialist at 48 h, 72 h, 7 days, 14 days, and 21 days after discharge. All patients were monitored for symptoms of delirium using the Confusion Assessment Method. Cognitive and functional status were measured with the Mini-Mental State Examination and the Katz and Lawton indexes. RESULTS: No statistically significant differences in symptoms of delirium (p = 0.085), cognitive impairment (p = 0.151), or functional status (p = 0.235) were found between the EG and CG at study entry and at 1 month. After adjustment, significant differences in favor of the EG were found for symptoms of delirium (p = 0.046), cognitive impairment (p = 0.015), and functional status (p = 0.033). CONCLUSION: Nursing interventions to detect delirium at home are feasible and well accepted, and they showed a promising effect on improving delirium.

Relevance: 30.00%

Publisher:

Abstract:

The present study evaluates the performance of four methods for estimating the regression coefficients used to make statistical decisions about intervention effectiveness in single-case designs. Ordinary least squares estimation is compared with two correction techniques that deal with general trend and one that eliminates autocorrelation whenever it is present. Type I error rates and statistical power are studied under experimental conditions defined by the presence or absence of a treatment effect (change in level or in slope), general trend, and serial dependence. The results show that empirical Type I error rates do not approximate the nominal ones in the presence of autocorrelation or general trend when ordinary or generalized least squares is applied. The techniques that control for trend show lower false alarm rates but prove insufficiently sensitive to existing treatment effects. Consequently, using the statistical significance of regression coefficients to detect treatment effects is not recommended for short data series.
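
The Type I error inflation can be illustrated with a short simulation. This is a minimal sketch, not the study's actual design: OLS tests a level change between two phases of a short series whose errors follow an AR(1) process, with no true treatment effect present.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def empirical_type_i_error(n=20, phi=0.6, alpha=0.05, reps=2000):
    """Share of false positives when the OLS slope test is applied to
    a phase dummy (baseline vs. treatment) under AR(1) errors."""
    phase = np.r_[np.zeros(n // 2), np.ones(n // 2)]
    hits = 0
    for _ in range(reps):
        e = np.empty(n)
        e[0] = rng.standard_normal()
        for t in range(1, n):       # serially dependent errors
            e[t] = phi * e[t - 1] + rng.standard_normal()
        _, _, _, pvalue, _ = stats.linregress(phase, e)
        hits += pvalue < alpha
    return hits / reps

print(empirical_type_i_error())     # typically well above the nominal 0.05
```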

Relevance: 30.00%

Publisher:

Abstract:

Most current methods for adult skeletal age-at-death estimation are based on American samples comprising individuals of European and African ancestry. Our limited understanding of population variability hampers efforts to apply these techniques to skeletal populations around the world, especially in global forensic contexts. Furthermore, documented skeletal samples are rare, limiting our ability to test the techniques. The objective of this paper is to test three macroscopic pelvic methods (1: Suchey-Brooks; 2: Lovejoy; 3: Buckberry and Chamberlain) on a documented modern Spanish sample. These methods were selected because they are popular among Spanish anthropologists and because they have never been tested on a Spanish sample. The study sample consists of 80 individuals (55 ♂ and 25 ♀) of known sex and age from the Valladolid collection. Results indicate that, for all three methods, levels of bias and inaccuracy increase with age. The Lovejoy method performs poorly (27%) compared with Suchey-Brooks (71%) and Buckberry and Chamberlain (86%). However, the correlations between phases and chronological age are low and comparable across the three methods (< 0.395). The apparent accuracy of the Suchey-Brooks and Buckberry and Chamberlain methods largely stems from the broad width of the methods' estimated intervals. This study suggests that before these three methodologies are applied systematically to Spanish populations, further statistical modeling and research into the covariance of chronological age with morphological change is necessary. Future methods should be developed specifically for various world populations and should allow for both precision and flexibility in age estimation.
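
Bias and inaccuracy here are the usual signed and absolute mean errors. A minimal sketch with hypothetical ages, not the Valladolid data:

```python
import numpy as np

def bias_inaccuracy(estimated, chronological):
    """Bias: mean signed error (estimated - chronological);
    inaccuracy: mean absolute error."""
    est = np.asarray(estimated, dtype=float)
    chron = np.asarray(chronological, dtype=float)
    return np.mean(est - chron), np.mean(np.abs(est - chron))

# Hypothetical midpoints of one method's estimated intervals (years)
estimated = [32.0, 45.0, 51.0, 60.0]
chronological = [30.0, 52.0, 47.0, 71.0]
print(bias_inaccuracy(estimated, chronological))
```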

Relevance: 30.00%

Publisher:

Abstract:

Due to the rise of criminal, civil, and administrative judicial situations involving people lacking valid identity documents, age estimation of living persons has become an important operational procedure for numerous forensic and medicolegal services worldwide. The chronological age of a given person is generally estimated from the observed degree of maturity of selected physical attributes by means of statistical methods. However, their application in the forensic framework suffers from some conceptual and practical drawbacks, as recently noted in the specialised literature. The aim of this paper is therefore to offer an alternative solution for overcoming these limits by reiterating the utility of a probabilistic Bayesian approach to age estimation. This approach allows one to deal transparently with the uncertainty surrounding the age estimation process and to produce all the relevant information in the form of a posterior probability distribution over the chronological age of the person under investigation. Furthermore, this probability distribution can also be used to evaluate, in a coherent way, the possibility that the examined individual is younger or older than a given legal age threshold of particular legal interest. The main novelty introduced by this work is the development of a probabilistic graphical model, i.e. a Bayesian network, for dealing with the problem at hand. The use of this kind of probabilistic tool can significantly facilitate the application of the proposed methodology: examples are presented based on data related to the ossification status of the medial clavicular epiphysis. The reliability and advantages of this probabilistic tool are presented and discussed.
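
The core computation behind such a Bayesian network is a posterior over age given the observed maturity indicator. A minimal discrete sketch; the logistic stage-given-age likelihood and all numbers are hypothetical stand-ins, not the paper's clavicle data.

```python
import numpy as np

ages = np.arange(10, 31)                             # candidate ages (years)
prior = np.ones_like(ages, dtype=float) / len(ages)  # uniform age prior

def stage_likelihood(age):
    """P(observed ossification stage | age): a smooth logistic curve
    standing in for stage frequencies from a reference study."""
    return 1.0 / (1.0 + np.exp(-(age - 19) / 1.5))

posterior = prior * stage_likelihood(ages)
posterior /= posterior.sum()

# Posterior probability that the person has reached the age of 18
print(posterior[ages >= 18].sum())
```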

Relevance: 30.00%

Publisher:

Abstract:

The degree of fusion at the anterior aspect of the sacral vertebrae was scored in 242 male and female skeletons from the Lisbon documented collection, ranging in age from 16 to 59 years. Statistical tests indicate a sex difference, with earlier fusion in young females than in young males, as well as a clear association between degree of fusion and age. Similar results have been found in documented skeletal samples from Coimbra and Sassari, and the recommendations made by those authors regarding age estimation were positively tested in the Lisbon collection. Although more research on geographically diverse samples is required, a general picture of the pattern of sacral fusion and its associations with age and sex is emerging. We also provide a practical example of the usefulness of the sacrum for age estimation in a forensic setting, a mass grave from the Spanish Civil War. It is concluded that scoring the degree of fusion of the sacral vertebrae, especially of S1-S2, can be a simple tool for assigning skeletons to broad age groups, and it should be implemented as another resource for age estimation in the study of human skeletal remains.

Relevance: 30.00%

Publisher:

Abstract:

Construction of multiple sequence alignments is a fundamental task in bioinformatics. Multiple sequence alignments are used as a prerequisite in many bioinformatics methods, and consequently the quality of such methods can depend critically on the quality of the alignment. However, automatic construction of a multiple sequence alignment for a set of remotely related sequences does not always yield biologically relevant alignments. Therefore, there is a need for an objective approach to evaluating the quality of automatically aligned sequences. The profile hidden Markov model is a powerful approach in comparative genomics. In the profile hidden Markov model, the symbol probabilities are estimated at each conserved alignment position. This can increase the dimension of the parameter space and cause overfitting. Both of these research problems are related to conservation. We have developed statistical measures for quantifying the conservation of multiple sequence alignments. Two types of methods are considered: those identifying conserved residues in an alignment position, and those calculating positional conservation scores. The positional conservation score was exploited in a statistical prediction model for assessing the quality of multiple sequence alignments. The residue conservation score was used as part of the emission probability estimation method proposed for profile hidden Markov models. The predicted alignment quality scores correlated highly with the true alignment quality scores, indicating that our method is reliable for assessing the quality of any multiple sequence alignment. Comparison of the emission probability estimation method with the maximum likelihood method showed that the number of estimated parameters in the model decreased dramatically while the same level of accuracy was maintained. In conclusion, we have shown that conservation can be used successfully in a statistical model for alignment quality assessment and in the estimation of emission probabilities in profile hidden Markov models.
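
One common positional conservation score is one minus the normalized Shannon entropy of an alignment column; the thesis' exact measures may differ, but the sketch below conveys the idea.

```python
import numpy as np
from collections import Counter

def positional_conservation(column, alphabet_size=20):
    """1 = fully conserved column, 0 = uniform residue distribution.
    Gaps are simply ignored in this sketch."""
    residues = [r for r in column if r != '-']
    counts = np.array(list(Counter(residues).values()), dtype=float)
    p = counts / counts.sum()
    entropy = -(p * np.log2(p)).sum()
    return 1.0 - entropy / np.log2(alphabet_size)

# One column from a toy protein alignment
print(positional_conservation("AAAAVA-A"))
```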

Relevance: 30.00%

Publisher:

Abstract:

Mathematical models often contain parameters that need to be calibrated from measured data. The emergence of efficient Markov Chain Monte Carlo (MCMC) methods has made the Bayesian approach a standard tool in quantifying the uncertainty in the parameters. With MCMC, the parameter estimation problem can be solved in a fully statistical manner, and the whole distribution of the parameters can be explored, instead of obtaining point estimates and using, e.g., Gaussian approximations. In this thesis, MCMC methods are applied to parameter estimation problems in chemical reaction engineering, population ecology, and climate modeling. Motivated by the climate model experiments, the methods are developed further to make them more suitable for problems where the model is computationally intensive. After the parameters are estimated, one can start to use the model for various tasks. Two such tasks are studied in this thesis: optimal design of experiments, where the task is to design the next measurements so that the parameter uncertainty is minimized, and model-based optimization, where a model-based quantity, such as the product yield in a chemical reaction model, is optimized. In this thesis, novel ways to perform these tasks are developed, based on the output of MCMC parameter estimation. A separate topic is dynamical state estimation, where the task is to estimate the dynamically changing model state, instead of static parameters. For example, in numerical weather prediction, an estimate of the state of the atmosphere must constantly be updated based on the recently obtained measurements. In this thesis, a novel hybrid state estimation method is developed, which combines elements from deterministic and random sampling methods.
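
The MCMC workhorse here is a random-walk Metropolis sampler. Below is a minimal sketch for a hypothetical first-order reaction model y = exp(-k t), not one of the thesis' actual case studies.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated data from the "true" model
t = np.linspace(0.0, 5.0, 20)
k_true, sigma = 0.8, 0.05
y_obs = np.exp(-k_true * t) + sigma * rng.standard_normal(t.size)

def log_post(k):
    """Gaussian likelihood with a flat prior on k > 0."""
    if k <= 0:
        return -np.inf
    resid = y_obs - np.exp(-k * t)
    return -0.5 * np.sum(resid**2) / sigma**2

chain, k = [], 0.5
lp = log_post(k)
for _ in range(5000):
    prop = k + 0.05 * rng.standard_normal()   # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:  # Metropolis acceptance
        k, lp = prop, lp_prop
    chain.append(k)

print(np.mean(chain[1000:]), np.std(chain[1000:]))  # posterior summary
```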

Relevance: 30.00%

Publisher:

Abstract:

The radiation balance (Rn) is the fraction of incident solar radiation at the Earth's surface that is available for use in several natural processes, such as biological metabolism, water loss by vegetated surfaces, temperature variation in farming systems, and organic decomposition. The present study aimed to assess and validate the performance of two models for estimating Rn in the city of Ponta Grossa, Paraná State, Brazil. To this end, radiometric data were collected from 04/01/2008 to 04/30/2011 by an automatic weather station installed at the Experimental Station of the State University of Ponta Grossa. We performed a linear regression analysis comparing net radiometer measurements with Rn estimates obtained from the classical Brunt method and from the proposed method. Both models showed excellent performance, as confirmed by the statistical parameters applied. However, the alternative method has the advantage of requiring only global solar radiation, air temperature, and relative humidity values.
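
The validation step can be sketched as a regression of measured against estimated values plus an error summary; the study's actual statistical parameters may include further indices.

```python
import numpy as np
from scipy import stats

def validate(estimated, measured):
    """Slope, intercept, R^2, and RMSE of model estimates vs.
    net radiometer measurements."""
    est = np.asarray(estimated, dtype=float)
    meas = np.asarray(measured, dtype=float)
    slope, intercept, r, _, _ = stats.linregress(est, meas)
    rmse = np.sqrt(np.mean((est - meas) ** 2))
    return slope, intercept, r**2, rmse

# Hypothetical daily Rn values (MJ m^-2 day^-1)
measured = [9.8, 12.1, 7.4, 14.0, 10.6]
estimated = [10.2, 11.7, 7.9, 13.2, 10.1]
print(validate(estimated, measured))
```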

Relevance: 30.00%

Publisher:

Abstract:

Design of flight control laws, verification of performance predictions, and implementation of flight simulations are tasks that require a mathematical model of the aircraft dynamics. These dynamical models are characterized by coefficients (aerodynamic derivatives) whose values must be determined from flight tests. This work outlines the use of the Extended Kalman Filter (EKF) for obtaining the aerodynamic derivatives of an aircraft. The EKF has several advantages over the more traditional least-squares (LS) method. Among them, the most important are: there are no restrictions on linearity or on the form in which the parameters appear in the mathematical model describing the system, and the parameters are not required to be time invariant. The EKF uses the statistical properties of the process and observation noise to produce estimates based on the mean square error of the estimates themselves. In contrast, the LS method minimizes a cost function based on the plant output behavior. Results for the estimation of some longitudinal aerodynamic derivatives from simulated data are presented.
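
A standard way to estimate parameters with the EKF is state augmentation: the unknown coefficient is appended to the state vector and filtered jointly with it. A minimal scalar sketch under that assumption, not the aircraft model itself:

```python
import numpy as np

rng = np.random.default_rng(2)

# True system: x_t = a*x_{t-1} + w,  y_t = x_t + v; estimate [x, a].
a_true, q, r = 0.9, 0.01, 0.1
x_true = 1.0
z, P = np.array([1.0, 0.5]), np.eye(2)   # augmented estimate and covariance

for _ in range(200):
    x_true = a_true * x_true + np.sqrt(q) * rng.standard_normal()
    y = x_true + np.sqrt(r) * rng.standard_normal()

    # Predict: f([x, a]) = [a*x, a]; F is the Jacobian of f
    F = np.array([[z[1], z[0]], [0.0, 1.0]])
    z = np.array([z[1] * z[0], z[1]])
    P = F @ P @ F.T + np.diag([q, 1e-6])  # tiny noise keeps a adaptive

    # Update with y = [1, 0] @ z + v
    H = np.array([[1.0, 0.0]])
    S = H @ P @ H.T + r
    K = (P @ H.T) / S
    z = z + (K * (y - z[0])).ravel()
    P = (np.eye(2) - K @ H) @ P

print(z[1])   # estimate of a; should move toward 0.9
```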

Relevance: 30.00%

Publisher:

Abstract:

State-of-the-art predictions of atmospheric states rely on large-scale numerical models of chaotic systems. This dissertation studies numerical methods for state and parameter estimation in such systems. The motivation comes from weather and climate models, and a methodological perspective is adopted. The dissertation comprises three parts: state estimation, parameter estimation, and chemical data assimilation with real atmospheric satellite data. In the state estimation part, a new filtering technique based on a combination of ensemble and variational Kalman filtering approaches is presented, tested, and discussed. This new filter is developed for large-scale Kalman filtering applications. In the parameter estimation part, three different techniques for parameter estimation in chaotic systems are considered. The methods are studied using the parameterized Lorenz 95 system, which is a benchmark model for data assimilation. In addition, a dilemma related to the uniqueness of weather and climate model closure parameters is discussed. In the data-oriented part, data from the Global Ozone Monitoring by Occultation of Stars (GOMOS) satellite instrument are considered, and an alternative algorithm for retrieving atmospheric parameters from the measurements is presented. The validation study presents the first global comparisons between two unique satellite-borne datasets of vertical profiles of nitrogen trioxide (NO3), retrieved using the GOMOS and Stratospheric Aerosol and Gas Experiment III (SAGE III) satellite instruments. The GOMOS NO3 observations are also used in a chemical state estimation study to retrieve stratospheric temperature profiles. The main result of this dissertation is the use of likelihood calculations based on Kalman filtering outputs. The concept has previously been used together with stochastic differential equations and in time series analysis. In this work, it is applied to chaotic dynamical systems and combined with Markov chain Monte Carlo (MCMC) methods for statistical analysis. In particular, this methodology is advocated for numerical weather prediction (NWP) and climate model applications. In addition, the concept is shown to be useful for estimating filter-specific parameters related, e.g., to model error covariance matrix parameters.
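
The likelihood-via-Kalman-filter idea rests on the prediction error decomposition: the filter's innovations are Gaussian with a known variance, so the log-likelihood accumulates along the filter pass and can be handed to an MCMC sampler. A minimal scalar sketch:

```python
import numpy as np

def kf_loglik(y, a, q, r, x0=0.0, p0=1.0):
    """Log-likelihood of the model x_t = a*x_{t-1} + w, y_t = x_t + v,
    accumulated from Kalman filter innovations; this is the quantity an
    MCMC sampler would evaluate for each proposed (a, q, r)."""
    x, p, ll = x0, p0, 0.0
    for yt in y:
        x, p = a * x, a * a * p + q            # prediction step
        s = p + r                              # innovation variance
        innov = yt - x
        ll += -0.5 * (np.log(2 * np.pi * s) + innov**2 / s)
        k = p / s                              # Kalman gain and update
        x, p = x + k * innov, (1 - k) * p
    return ll

y = np.array([0.9, 0.7, 0.8, 0.3, 0.5])       # toy observations
print(kf_loglik(y, a=0.9, q=0.01, r=0.1))
```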

Relevance: 30.00%

Publisher:

Abstract:

The use of the limiting dilution assay (LDA) for assessing the frequency of responders in a cell population is a method extensively used by immunologists. A series of studies addressing the statistical method of choice in an LDA have been published. However, none of these studies has addressed how many wells should be employed in a given assay. The objective of this study was to demonstrate how a researcher can predict the number of wells needed to obtain results with a given accuracy, and thereby choose an experimental design better suited to one's expectations. We present the rationale underlying the expected relative error computation, based on simple binomial distributions. A series of simulated in machina experiments were performed to test the validity of the a priori computation of expected errors, confirming the predictions. A step-by-step procedure for the relative error estimation is given. We also discuss the constraints under which an LDA must be performed.
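
The a priori computation follows from treating each well as one Bernoulli trial, negative with probability (1-f)^N when no responder is seeded. A minimal simulation sketch with hypothetical frequencies and well counts:

```python
import numpy as np

rng = np.random.default_rng(3)

def expected_relative_error(f, cells_per_well, wells, reps=5000):
    """Simulate many LDAs, estimate f from the fraction of negative
    wells, and return the mean relative error of the estimate."""
    p_neg = (1.0 - f) ** cells_per_well
    neg = rng.binomial(wells, p_neg, size=reps)
    neg = np.clip(neg, 1, wells - 1)            # avoid log(0) edge cases
    f_hat = -np.log(neg / wells) / cells_per_well
    return np.mean(np.abs(f_hat - f)) / f

for wells in (24, 96, 384):
    print(wells, expected_relative_error(1e-3, 1000, wells))
```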

Relevance: 30.00%

Publisher:

Abstract:

Our objective is to develop a diffusion Monte Carlo (DMC) algorithm to estimate the exact expectation values, $\langle \Psi_0 | \hat{A} | \Psi_0 \rangle$, of multiplicative operators, such as polarizabilities and high-order hyperpolarizabilities, for isolated atoms and molecules. The existing forward-walking pure diffusion Monte Carlo (FW-PDMC) algorithm that attempts this has a serious bias. On the other hand, the DMC algorithm with minimal stochastic reconfiguration provides unbiased estimates of the energies, but the expectation values $\langle \Psi_0 | \hat{A} | \Psi \rangle$ are contaminated by $\Psi$, a user-specified approximate wave function, when $\hat{A}$ does not commute with the Hamiltonian. We modified the latter algorithm to obtain the exact expectation values for these operators while at the same time eliminating the bias. To compare the efficiency of the FW-PDMC and modified DMC algorithms, we calculated simple properties of the H atom, such as various functions of the coordinates and polarizabilities. Using three non-exact wave functions, one of moderate quality and the others very crude, the results are in each case within statistical error of the exact values.
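
The flavor of the underlying DMC machinery, stripped of the modifications discussed above, can be shown on a toy problem: walkers for the 1-D harmonic oscillator with branching weights and a population-controlled reference energy. This is a generic textbook sketch, not the FW-PDMC or the modified algorithm.

```python
import numpy as np

rng = np.random.default_rng(4)

# DMC for V(x) = x^2/2; the exact ground-state energy is 0.5.
n_target, dt, steps = 2000, 0.01, 4000
x = rng.standard_normal(n_target)      # walker positions
e_ref = 0.0                            # reference (trial) energy

for _ in range(steps):
    x = x + np.sqrt(dt) * rng.standard_normal(x.size)    # diffusion
    w = np.exp(-(0.5 * x**2 - e_ref) * dt)               # branching weights
    copies = (w + rng.uniform(size=x.size)).astype(int)  # stochastic rounding
    x = np.repeat(x, copies)                             # birth/death step
    # steer the population back toward its target size
    e_ref += 0.1 * np.log(n_target / max(x.size, 1))

print(e_ref)   # fluctuates around the exact value 0.5
```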

Relevance: 30.00%

Publisher:

Abstract:

The attached file was created with Scientific WorkPlace (LaTeX).

Relevance: 30.00%

Publisher:

Abstract:

The regime-switching GARCH model is the foundation of this thesis. This model offers rich dynamics for modeling financial data by combining a GARCH structure with time-varying parameters. Unfortunately, this flexibility gives rise to a path dependence problem, which has prevented maximum likelihood estimation of the model since its introduction nearly 20 years ago. The first half of this thesis provides a solution to this problem by developing two methodologies for computing the maximum likelihood estimator of the regime-switching GARCH model. The first proposed estimation technique is based on the Monte Carlo EM algorithm and on importance sampling, while the second consists of a generalization of the model approximations introduced over the last two decades, known as collapsing procedures. This generalization establishes a methodological link between these approximations and the particle filter. The discovery of this relationship is important because it justifies the validity of the collapsing approach for estimating the regime-switching GARCH model. The second half of this thesis is motivated by the financial crisis of the late 2000s, during which poor risk assessment at several financial firms led to numerous institutional failures. Using a broad set of 78 econometric models, including several generalizations of the regime-switching GARCH model, it is shown that model risk plays a very important role in the assessment and management of long-term investment risk in the context of segregated funds. Although the financial literature has devoted considerable research to advancing econometric models with the aim of improving the pricing and hedging of financial products, approaches for measuring the effectiveness of a dynamic hedging strategy have evolved little. This thesis offers a methodological contribution in this area by proposing a regression-based statistical framework for better measuring that effectiveness.
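
The path dependence arises because the GARCH variance at time t depends on the entire regime history, so the exact likelihood sums over exponentially many paths. Collapsing procedures sidestep this by aggregating the regime-specific variances at each step. A minimal Gray/Klaassen-style sketch for a zero-mean two-regime GARCH(1,1), run on placeholder Gaussian returns rather than real financial data; it is not the thesis' generalized procedure.

```python
import numpy as np

def collapsed_loglik(y, omega, alpha, beta, P, p0):
    """y: returns; omega, alpha, beta: length-2 regime parameters;
    P: 2x2 regime transition matrix; p0: initial regime probabilities."""
    p = np.asarray(p0, dtype=float)    # filtered regime probabilities
    h_bar = np.var(y)                  # collapsed (aggregated) variance
    e2_prev, ll = np.var(y), 0.0
    for yt in y:
        h = omega + alpha * e2_prev + beta * h_bar   # per-regime variances
        pred = p @ P                                  # one-step regime probs
        dens = pred * np.exp(-0.5 * yt**2 / h) / np.sqrt(2 * np.pi * h)
        ll += np.log(dens.sum())
        p = dens / dens.sum()          # Hamilton-filter update
        h_bar = p @ h                  # collapse across regimes
        e2_prev = yt**2
    return ll

rng = np.random.default_rng(5)
y = 0.01 * rng.standard_normal(500)
print(collapsed_loglik(y,
                       omega=np.array([1e-6, 5e-6]),
                       alpha=np.array([0.05, 0.15]),
                       beta=np.array([0.90, 0.80]),
                       P=np.array([[0.98, 0.02], [0.05, 0.95]]),
                       p0=np.array([0.5, 0.5])))
```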