968 results for "Imputation model approach"


Relevance: 100.00%

Abstract:

We propose a new method for fitting proportional hazards models with error-prone covariates. Regression coefficients are estimated by solving an estimating equation that is the average of the partial likelihood scores based on imputed true covariates. For the purpose of imputation, a linear spline model is assumed on the baseline hazard. We discuss consistency and asymptotic normality of the resulting estimators, and propose a stochastic approximation scheme to obtain the estimates. The algorithm is easy to implement, and reduces to the ordinary Cox partial likelihood approach when the measurement error has a degenerate distribution. Simulations indicate high efficiency and robustness. We consider the special case where error-prone replicates are available on the unobserved true covariates. As expected, increasing the number of replicates for the unobserved covariates increases efficiency and reduces bias. We illustrate the practical utility of the proposed method with an Eastern Cooperative Oncology Group clinical trial in which a genetic marker, c-myc expression level, is subject to measurement error.
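The estimating-equation idea can be sketched numerically. The toy below is a deliberately simplified stand-in, not the authors' algorithm: the true covariate is imputed from a normal measurement-error model with known variances alone (whereas the paper's imputation uses a linear-spline baseline hazard), there is no censoring, and the averaged score is solved by bisection rather than stochastic approximation. All parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)                      # true covariate (unobserved)
w = x + rng.normal(scale=0.5, size=n)       # error-prone surrogate
t = rng.exponential(1.0 / np.exp(0.7 * x))  # event times; true beta = 0.7

def cox_score(beta, z, t):
    """Cox partial-likelihood score for one covariate (no censoring, no ties)."""
    z = z[np.argsort(t)]
    s = 0.0
    for i in range(len(z)):            # risk set = subjects still event-free
        e = np.exp(beta * z[i:])
        s += z[i] - np.sum(z[i:] * e) / np.sum(e)
    return s

# impute X given W under the assumed normal measurement-error model
sigma_x2, sigma_u2 = 1.0, 0.25
mu = w * sigma_x2 / (sigma_x2 + sigma_u2)            # E[X | W]
sd = np.sqrt(sigma_x2 * sigma_u2 / (sigma_x2 + sigma_u2))

def avg_score(beta, m=10):
    """Average the partial-likelihood score over m imputations of X."""
    return np.mean([cox_score(beta, mu + sd * rng.normal(size=n), t)
                    for _ in range(m)])

lo, hi = 0.0, 2.0                    # the averaged score decreases in beta
for _ in range(20):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if avg_score(mid) > 0 else (lo, mid)
beta_hat = 0.5 * (lo + hi)
print(round(beta_hat, 2))
```

With sigma_u2 = 0 the imputation collapses to the observed covariate and the scheme reduces to the ordinary Cox partial-likelihood fit, mirroring the degenerate-error case noted above; with error, this crude imputation (which ignores the survival information) is somewhat attenuated, which is precisely the gap a survival-informed imputation closes.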

Abstract:

Two studies investigated interactions between health providers and patients, using Semin and Fiedler's linguistic category model. In Study 1 the linguistic category model was used to examine perceptions of the levels of linguistic intergroup bias in descriptions of conversations with health professionals in hospitals. Results indicated a favourable linguistic bias toward health professionals in satisfactory conversations but low levels of linguistic intergroup bias in unsatisfactory conversations. In Study 2, the language of patients and health professionals in videotaped interactions was examined for levels of linguistic intergroup bias. Interpersonally salient interactions showed less linguistic intergroup bias than did intergroup ones. Results also indicated that health professionals have high levels of control in all types of medical encounters with patients. Nevertheless, the extent to which patients are able to interact with health professionals as individuals, rather than only as professionals, is a key determinant of satisfaction with the interaction.

Abstract:

We consider a mixture model approach to the regression analysis of competing-risks data. Attention is focused on inference concerning the effects of factors on both the probability of occurrence and the hazard rate conditional on each of the failure types. These two quantities are specified in the mixture model using the logistic model and the proportional hazards model, respectively. We propose a semi-parametric mixture method to estimate the logistic and regression coefficients jointly, whereby the component-baseline hazard functions are completely unspecified. Estimation is by maximum likelihood based on the full likelihood, implemented via an expectation-conditional maximization (ECM) algorithm. Simulation studies are performed to compare the performance of the proposed semi-parametric method with a fully parametric mixture approach. The results show that when the component-baseline hazard is monotonic increasing, the semi-parametric and fully parametric mixture approaches are comparable for mildly and moderately censored samples. When the component-baseline hazard is not monotonic increasing, the semi-parametric method consistently provides less biased estimates than a fully parametric approach and is comparable in efficiency in the estimation of the parameters for all levels of censoring. The methods are illustrated using a real data set of prostate cancer patients treated with different dosages of the drug diethylstilbestrol. Copyright (C) 2003 John Wiley & Sons, Ltd.
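The ECM idea can be illustrated with a fully parametric toy version. The sketch below substitutes exponential component baselines for the unspecified ones, so it corresponds to the fully parametric comparator rather than to the semi-parametric method itself; the data-generating values and iteration counts are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
x = rng.binomial(1, 0.5, n).astype(float)
# true model: P(type 1 | x) = logistic(0 + 1*x); exponential hazard per type
p1_true = 1 / (1 + np.exp(-(0.0 + 1.0 * x)))
g = rng.random(n) < p1_true                      # latent failure type
t = np.where(g, rng.exponential(1 / 1.0, n), rng.exponential(1 / 0.3, n))
c = rng.exponential(2.0, n)                      # independent censoring
obs_t = np.minimum(t, c)
delta = t <= c
cause = np.where(delta, np.where(g, 1, 2), 0)    # 0 = censored

ah, bh, l1, l2 = 0.0, 0.0, 0.5, 0.5              # starting values
for _ in range(100):
    # E-step: posterior P(type 1) -- known for observed failures, weighted
    # by component survival functions for censored subjects
    pi1 = 1 / (1 + np.exp(-(ah + bh * x)))
    s1, s2 = np.exp(-l1 * obs_t), np.exp(-l2 * obs_t)
    w = np.where(cause == 1, 1.0,
        np.where(cause == 2, 0.0, pi1 * s1 / (pi1 * s1 + (1 - pi1) * s2)))
    # CM-step 1: exponential rates = weighted events / weighted exposure
    l1 = np.sum(w * delta) / np.sum(w * obs_t)
    l2 = np.sum((1 - w) * delta) / np.sum((1 - w) * obs_t)
    # CM-step 2: Newton steps on the weighted logistic log-likelihood
    for _ in range(3):
        pi1 = 1 / (1 + np.exp(-(ah + bh * x)))
        grad = np.array([np.sum(w - pi1), np.sum((w - pi1) * x)])
        h = pi1 * (1 - pi1)
        H = np.array([[np.sum(h), np.sum(h * x)],
                      [np.sum(h * x), np.sum(h * x * x)]])
        ah, bh = np.array([ah, bh]) + np.linalg.solve(H, grad)
print(round(bh, 2), round(l1, 2), round(l2, 2))
```

The E-step assigns each censored subject a posterior probability of each latent failure type; the conditional maximization steps then update the logistic coefficients and the component hazards against those weights.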

Abstract:

This article describes the main research results of a new methodology in which the stages and strategies of the technology integration process are identified and described, and a set of principles and recommendations is presented. The MIPO model described in this paper results from an effort to understand the main success features of good practices in the web environment, integrated in the information systems/information technology context. The initial model was created based on experience and a literature review. It was then tested in information systems/technology course units at a higher education school, and adapted through four cycles of action research combined with a case study. The information, concepts and procedures presented here support teachers and instructors, instructional designers and planning teams – anyone who wants to develop effective b-learning instruction.

Abstract:

Abstract (translated from Finnish): A height site index application for forests on drained peatland areas.

Abstract:

Finite cluster models and a variety of ab initio wave functions have been used to study the electronic structure of bulk KNiF3. Several electronic states, including the ground state and some charge-transfer excited states, have been considered. The study of the cluster-model wave functions has permitted an understanding of the nature of the chemical bond in the electronic ground state. This is found to be highly ionic and the different ionic and covalent contributions to the bonding have been identified and quantified. Finally, we have studied the charge-transfer excited states leading to the optical gap and have found that calculated and experimental values are in good agreement. The wave functions corresponding to these excited states have also been analyzed and show that although KNiF3 may be described as a ligand-to-metal charge-transfer insulator there is a strong configuration mixing with the metal-to-metal charge-transfer states.

Abstract:

Background: The MLPA (multiplex ligation-dependent probe amplification) method is a potentially useful semi-quantitative method to detect copy number alterations in targeted regions. In this paper, we propose a method for the normalization procedure based on a non-linear mixed model, as well as a new approach for determining the statistical significance of altered probes based on a linear mixed model. This method establishes a threshold by using different tolerance intervals that accommodate the specific random error variability observed in each test sample.

Results: Through simulation studies we have shown that our proposed method outperforms two existing methods that are based on simple threshold rules or iterative regression. We have illustrated the method using a controlled MLPA assay in which targeted regions vary in copy number in individuals suffering from disorders such as Prader-Willi syndrome, DiGeorge syndrome or autism, and in which the proposed method showed the best performance.

Conclusion: Using the proposed mixed model, we are able to determine thresholds to decide whether a region is altered. These thresholds are specific for each individual and incorporate experimental variability, resulting in improved sensitivity and specificity, as the examples with real data have revealed.
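The thresholding idea — flag a probe when its normalized ratio falls outside an interval calibrated to the sample's own variability — can be sketched as follows. This is a rough stand-in using a robust median/MAD interval, not the paper's mixed-model tolerance intervals; probe counts, intensities, and the tolerance factor are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
n_probes = 40
ref = rng.normal(1000, 60, size=n_probes)           # reference intensities
test = ref * rng.normal(1.0, 0.05, size=n_probes)   # test sample, 2 copies
test[5] *= 1.5    # simulated duplication (3 copies instead of 2)
test[17] *= 0.5   # simulated deletion (1 copy instead of 2)

# normalize on the log scale, then flag probes outside an interval scaled to
# THIS sample's own residual spread (a crude stand-in for per-sample
# tolerance intervals)
r = np.log2(test / ref)
med = np.median(r)
mad = 1.4826 * np.median(np.abs(r - med))  # robust spread estimate
k = 3.5                                    # tolerance factor (illustrative)
altered = np.where(np.abs(r - med) > k * mad)[0]
print(altered.tolist())
```

Because the spread estimate comes from the test sample itself, a noisier run automatically widens its own flagging interval — the same intuition as the per-sample tolerance intervals described above.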

Abstract:

PURPOSE: A homozygous mutation in the H6 family homeobox 1 (HMX1) gene is responsible for a new oculoauricular defect leading to eye and auricular developmental abnormalities as well as early retinal degeneration (MIM 612109). However, the HMX1 pathway remains poorly understood, and as a first step toward better understanding its function, we sought to identify its target genes. METHODS: We developed a predictive promoter model (PPM) approach using a comparative transcriptomic analysis of the retina at P15 in a mouse model lacking functional Hmx1 (dmbo mouse) and its wild-type counterpart. The PPM was based on the hypothesis that HMX1 binding site (HMX1-BS) clusters should be over-represented in the promoters of HMX1 target genes. The most differentially expressed genes in the microarray experiment that contained HMX1-BS clusters were used to generate the PPM, which was then statistically validated. Finally, we developed two genome-wide target prediction methods: one focused on the conservation of PPM features between human and mouse, and one based on the co-occurrence of HMX1-BS pairs fitting the PPM in human or in mouse independently. RESULTS: The PPM construction revealed that the sarcoglycan gamma (35 kDa dystrophin-associated glycoprotein) (Sgcg), teashirt zinc finger homeobox 2 (Tshz2), and solute carrier family 6 (neurotransmitter transporter, glycine) (Slc6a9) genes represent Hmx1 targets in the mouse retina at P15. Moreover, the genome-wide target prediction revealed that mouse genes belonging to the retinal axon guidance pathway are targeted by Hmx1. Expression of these three genes was experimentally validated using a quantitative reverse transcription PCR approach. The inhibitory activity of Hmx1 on Sgcg, as well as on protein tyrosine phosphatase, receptor type O (Ptpro) and Sema3f, two further targets identified by the PPM, was validated with luciferase assays.
CONCLUSIONS: Gene expression analysis between wild-type and dmbo mice allowed us to develop a PPM that identified the first target genes of Hmx1.
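The binding-site-cluster hypothesis behind the PPM can be illustrated with a simple motif scan over promoter sequences. The 6-mer below is a hypothetical homeodomain-like site, not the actual HMX1-BS, and the window size, site-count threshold, and sequences are all illustrative.

```python
import re

# HYPOTHETICAL homeodomain-like site used only for illustration; the real
# HMX1 binding site from the paper is not reproduced here. The lookahead
# pattern lets overlapping occurrences be counted.
MOTIF = re.compile(r"(?=TAATTG)")

def has_site_cluster(promoter, window=100, min_sites=2):
    """True if any window-bp stretch of the promoter holds >= min_sites hits."""
    hits = [m.start() for m in MOTIF.finditer(promoter.upper())]
    for i, start in enumerate(hits):
        if sum(1 for p in hits[i:] if p - start < window) >= min_sites:
            return True
    return False

# a "target-like" promoter with two clustered sites vs. a background promoter
target = "GCGC" * 10 + "TAATTG" + "AC" * 20 + "TAATTG" + "GCGC" * 10
background = "GCGC" * 20 + "TAATTG" + "GCGC" * 40
print(has_site_cluster(target), has_site_cluster(background))  # True False
```

Ranking differentially expressed genes by this kind of cluster score, and then keeping clusters conserved between human and mouse promoters, is the flavor of filtering the two genome-wide prediction methods describe.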

Abstract:

Prediction of stock market valuation is a common interest to all market participants. Theoretically sound market valuation can be achieved by discounting future earnings of equities to present. Competing valuation models seek variables that explain the equity market valuation and that could be used to predict it. In this paper we test the contemporaneous relationship between stock prices, forward-looking earnings and long-term government bond yields. We test this so-called Fed model in a long- and short-term time series analysis, using the cointegration framework to test the dynamics of the relationship. The data used in this study span over four decades of varying market conditions between 1964 and 2007, using data from the United States. The empirical results of our analysis do not give support for the Fed model. We are able to show that long-term government bonds do not play a statistically significant role in this relationship. The effect of the forward earnings yield on stock market prices is significant, and thus we suggest the use of standard valuation ratios when trying to predict the future paths of equity prices. Also, changes in long-term government bond yields do not have a significant short-term impact on stock prices.
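The cointegration test can be sketched with a two-step Engle-Granger procedure on simulated series: one regressor sharing a stochastic trend with prices (playing the role of the forward earnings yield) and one independent random walk (playing the role of the bond yield). The data are simulated, not the study's 1964-2007 sample, and the Dickey-Fuller statistic below uses no augmentation lags.

```python
import numpy as np

rng = np.random.default_rng(3)
T = 500
trend = np.cumsum(rng.normal(size=T))               # common stochastic trend
log_p = trend + rng.normal(scale=0.3, size=T)       # "log price"
ey = 0.5 * trend + rng.normal(scale=0.3, size=T)    # "forward earnings yield"
bond = np.cumsum(rng.normal(size=T))                # unrelated random walk

def eg_resid(y, x):
    """Step 1: residuals from the cointegrating regression y = a + b*x."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

def df_t(u):
    """Step 2: Dickey-Fuller t-statistic (no lags) on the residuals."""
    du, lag = np.diff(u), u[:-1]
    rho = np.sum(lag * du) / np.sum(lag * lag)
    resid = du - rho * lag
    se = np.sqrt(np.sum(resid**2) / (len(du) - 1) / np.sum(lag * lag))
    return rho / se

t_ey = df_t(eg_resid(log_p, ey))      # cointegrated pair: strongly negative
t_bond = df_t(eg_resid(log_p, bond))  # spurious pair: typically modest
print(round(t_ey, 1), round(t_bond, 1))
```

A statistic below the Engle-Granger critical value (roughly -3.3 at the 5% level for two variables) rejects "no cointegration"; only the trend-sharing regressor does so here, consistent with the study's finding that forward earnings yields, not bond yields, carry the relationship.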

Abstract:

Ethnographic methods were used to study a weekly after-school physical activity program over an eight-month period. Based on Hellison’s Teaching Personal and Social Responsibility (TPSR) model, the program sought to foster positive life skills amongst youth. The study investigated how the developed program influenced this life skills education experience. Several themes were identified from the data revolving around culture, life skills, pedagogy, and lessons learned. The data suggest that the positive environment developed within the program positively influenced youths’ life skills education experience. The topic of ethnicity as it relates to the experience of marginalized youth in physical activity settings is also discussed. This study supports the TPSR literature and suggests that efforts to establish caring relationships and empower youth contribute to a positive atmosphere where life skills education can occur. Beyond this, practical tools were developed through this study to help others deliver life skills education.

Abstract:

We explore the potential for making statistical decadal predictions of sea surface temperatures (SSTs) in a perfect model analysis, with a focus on the Atlantic basin. Various statistical methods (Lagged correlations, Linear Inverse Modelling and Constructed Analogue) are found to have significant skill in predicting the internal variability of Atlantic SSTs for up to a decade ahead in control integrations of two different global climate models (GCMs), namely HadCM3 and HadGEM1. Statistical methods which consider non-local information tend to perform best, but which is the most successful statistical method depends on the region considered, GCM data used and prediction lead time. However, the Constructed Analogue method tends to have the highest skill at longer lead times. Importantly, the regions of greatest prediction skill can be very different to regions identified as potentially predictable from variance explained arguments. This finding suggests that significant local decadal variability is not necessarily a prerequisite for skillful decadal predictions, and that the statistical methods are capturing some of the dynamics of low-frequency SST evolution. In particular, using data from HadGEM1, significant skill at lead times of 6–10 years is found in the tropical North Atlantic, a region with relatively little decadal variability compared to interannual variability. This skill appears to come from reconstructing the SSTs in the far north Atlantic, suggesting that the more northern latitudes are optimal for SST observations to improve predictions. We additionally explore whether adding sub-surface temperature data improves these decadal statistical predictions, and find that, again, it depends on the region, prediction lead time and GCM data used. Overall, we argue that the estimated prediction skill motivates the further development of statistical decadal predictions of SSTs as a benchmark for current and future GCM-based decadal climate predictions.
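The constructed analogue method mentioned above can be sketched in a few lines: the current anomaly state is expressed as a least-squares combination of past states from a control run, and the same weights are applied to the states one lead time later. The toy "model world" below is illustrative, and the unregularized least-squares fit is cruder than practical versions, which typically truncate the library (e.g., via EOFs) to avoid overfitting.

```python
import numpy as np

rng = np.random.default_rng(4)
# toy "model world": 8 grid points whose anomalies share one slowly evolving
# pattern plus noise -- a stand-in for SST fields from a GCM control run
n_years, n_grid = 200, 8
pattern = rng.normal(size=n_grid)
amp = np.zeros(n_years)
for yr in range(1, n_years):          # AR(1) amplitude with long memory
    amp[yr] = 0.9 * amp[yr - 1] + rng.normal(scale=0.3)
field = np.outer(amp, pattern) + rng.normal(scale=0.2, size=(n_years, n_grid))

train, target_year, lead = field[:150], 160, 5
state = field[target_year]            # "current" anomaly state

# constructed analogue: weights that best rebuild the current state from the
# library of past states, then carried forward by the lead time
library = train[:-lead].T             # grid points x library years
w, *_ = np.linalg.lstsq(library, state, rcond=None)
forecast = train[lead:].T @ w         # the same library years, `lead` later
truth = field[target_year + lead]
skill = np.corrcoef(forecast, truth)[0, 1]
print(round(skill, 2))
```

With more library years than grid points the fit reproduces the current state exactly, noise included — which is why operational constructed-analogue schemes damp or truncate the weights before propagating them forward.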

Abstract:

Economic theory makes no predictions about social factors affecting decisions under risk. We examine situations in which a decision maker decides for herself and another person under conditions of payoff equality, and compare them to individual decisions. By estimating a structural model, we find that responsibility leaves utility curvature unaffected, but accentuates the subjective distortion of very small and very large probabilities for both gains and losses. We also find that responsibility reduces loss aversion, but that these results only obtain under some specific definitions of the latter. These results serve to generalize and reconcile some of the still largely contradictory findings in the literature. They also have implications for financial agency, which we discuss.
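The probability-distortion finding is usually parameterized with a weighting function. The paper's exact parametric family is not reproduced here; Prelec's one-parameter form is a common choice in such structural models and shows the shape involved — with alpha < 1, very small probabilities are overweighted and very large ones underweighted, and accentuating the distortion corresponds to pushing alpha further below 1.

```python
import math

def prelec(p, alpha=0.65):
    """Prelec one-parameter probability weighting function:
    w(p) = exp(-(-ln p)^alpha). With alpha < 1, small probabilities are
    overweighted, large ones underweighted; the fixed point is p = 1/e."""
    return math.exp(-((-math.log(p)) ** alpha))

# w(0.01) > 0.01 and w(0.99) < 0.99 for alpha < 1 (alpha value illustrative)
for p in (0.01, 0.5, 0.99):
    print(p, round(prelec(p), 3))
```

Estimating alpha separately for individual and for-another-person decisions is one way to express "responsibility accentuates the distortion of extreme probabilities" as a testable parameter shift.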

Abstract:

Economists and policymakers have long been concerned with increasing the supply of health professionals in rural and remote areas. This work seeks to understand which factors influence physicians’ choice of practice location right after completing residency. Unlike previous papers, we analyse the Brazilian misallocation and assess the particularities of developing countries. We use a discrete choice model approach with a multinomial logit specification. Two rich databases are employed, containing the location and wages of formally employed physicians as well as details of their postgraduate training. Our main findings are that amenities matter, that physicians have a strong tendency to remain in the region where they completed residency, and that salaries are significant in the choice of urban, but not rural, communities. We conjecture this is due to attachments built during training and to infrastructure concerns.
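A multinomial logit location choice can be sketched directly. The attributes, coefficient values, and three-location choice set below are entirely hypothetical — they merely illustrate how a strong residency-region attachment term dominates the predicted choice probabilities.

```python
import numpy as np

# hypothetical attributes for 3 locations: standardized wage, amenity index,
# and an indicator for "same region as residency" -- names are illustrative
X = np.array([
    [1.2, 0.8, 1.0],   # urban, residency region
    [0.9, 0.5, 0.0],   # urban, other region
    [1.5, 0.1, 0.0],   # rural, other region
])
beta = np.array([0.6, 0.9, 1.4])   # assumed taste parameters

def mnl_probs(X, beta):
    """Multinomial logit: P(j) = exp(v_j) / sum_k exp(v_k)."""
    v = X @ beta
    e = np.exp(v - v.max())        # subtract max for numerical stability
    return e / e.sum()

p = mnl_probs(X, beta)
print(np.round(p, 3))
```

Even with the highest wage attached to the rural alternative, the residency-region option receives the bulk of the probability under these coefficients — the qualitative pattern the abstract reports. Fitting beta by maximum likelihood on observed choices is the estimation step.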