899 results for Models performance


Relevance: 30.00%

Abstract:

Background: The aim of the present study was to evaluate the feasibility of using a telephone survey to understand the herd and management factors that may influence the performance (i.e. safety and efficacy) of a vaccine against porcine circovirus type 2 (PCV2) across a large number of herds, and to estimate customers' satisfaction.

Results: Datasets from 227 pig herds that currently applied or had applied a PCV2 vaccine were analysed. Since 1-, 2- and 3-site production systems were surveyed, the herds were allocated to one of two subsets, within which only the applicable variables out of 180 were analysed. Group 1 comprised herds with sows, suckling pigs and nursery pigs, whereas herds in Group 2 in all cases kept fattening pigs. Overall, 14 variables evaluating subjective satisfaction with one particular PCV2 vaccine were combined into an abstract dependent variable for the subsequent models, characterized by a binary outcome from a cluster analysis: good/excellent satisfaction (green cluster) and moderate satisfaction (red cluster). The other 166 variables, comprising information about diagnostics, vaccination, housing and management, were treated as independent variables. In Group 1, herds using the vaccine because of recognised PCV2-related health problems (wasting, mortality or porcine dermatitis and nephropathy syndrome) had a 2.4-fold increased chance (1/OR) of belonging to the green cluster. In the final model for Group 1, the diagnosis of diseases other than PCV2, a reason for vaccine administration other than PCV2-associated diseases, and the use of a single injection of iron had a significant influence on allocation to the green cluster (P < 0.05). In Group 2, only an unchanged or delayed time of vaccination influenced satisfaction (P < 0.05).

Conclusion: The methodology and statistical approach used in this study were feasible for scientifically assessing 'satisfaction' and for determining the factors that influence farmers' and vets' opinions about the safety and efficacy of a new vaccine.
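The final step described above (a logistic model relating a herd factor to the binary satisfaction cluster, with odds ratios below one inverted for reporting) can be sketched as follows. This is simulated, illustrative code with hypothetical variable names, not the study's analysis.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical data: binary satisfaction cluster (1 = green, good/excellent;
# 0 = red, moderate) and a binary herd-level predictor (vaccine used because of
# recognised PCV2-related problems). Names and numbers are illustrative only.
rng = np.random.default_rng(0)
pcv2_reason = rng.integers(0, 2, size=200)
true_logit = -0.2 + 0.875 * pcv2_reason            # exp(0.875) is roughly 2.4
green_cluster = rng.binomial(1, 1.0 / (1.0 + np.exp(-true_logit)))

X = sm.add_constant(pcv2_reason)
fit = sm.Logit(green_cluster, X).fit(disp=False)

odds_ratio = float(np.exp(fit.params[1]))
# When a model is coded the other way round (OR < 1 for the green cluster),
# the inverted value 1/OR is reported as a "fold increase in chance".
print(f"OR = {odds_ratio:.2f}, 1/OR = {1 / odds_ratio:.2f}")
```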

Relevance: 30.00%

Abstract:

We examine the linkages between import policy and export performance, extending classic macroeconomic trade effects to more recent concepts from the modern literature on gravity models. We also examine these effects empirically with a panel of global and bilateral trade spanning 15 years. Our emphasis on the role of exporters' own import policy (i.e. tariffs) as an explanation of trade volumes contrasts with the recent emphasis on importer policy in the gravity literature. It also reinforces the growing body of evidence on the importance of the economic environment (policy and infrastructure) in explaining relative export performance and is in line with the literature on global value chains.
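As a rough sketch of the kind of gravity specification discussed here, augmented with the exporter's own import tariff, the following uses simulated data and made-up coefficients; it is not the paper's estimation.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated bilateral-trade sketch of a gravity equation augmented with the
# exporter's own import tariff; variable names and coefficients are illustrative.
rng = np.random.default_rng(6)
n = 1000
df = pd.DataFrame({
    "ln_gdp_exporter": rng.normal(10, 1, n),
    "ln_gdp_importer": rng.normal(10, 1, n),
    "ln_distance": rng.normal(8, 0.5, n),
    "exporter_tariff": rng.uniform(0, 0.2, n),   # exporter's average import tariff
})
df["ln_exports"] = (1.0 * df["ln_gdp_exporter"] + 0.8 * df["ln_gdp_importer"]
                    - 1.2 * df["ln_distance"] - 3.0 * df["exporter_tariff"]
                    + rng.normal(0, 0.5, n))

# A negative coefficient on exporter_tariff reflects the paper's emphasis:
# an exporter's own import barriers depress its export performance.
gravity = smf.ols("ln_exports ~ ln_gdp_exporter + ln_gdp_importer + ln_distance"
                  " + exporter_tariff", data=df).fit()
print(gravity.params)
```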

Relevance: 30.00%

Abstract:

The most influential theoretical account in time psychophysics assumes the existence of a unitary internal clock based on neural counting. The distinct timing hypothesis, on the other hand, suggests an automatic timing mechanism for the processing of durations in the sub-second range and a cognitively controlled timing mechanism for the processing of durations in the range of seconds. Although several psychophysical approaches can be applied to identify the internal structure of interval timing in the second and sub-second range, the existing data provide a puzzling picture of rather inconsistent results. In the present chapter, we introduce confirmatory factor analysis (CFA) to further elucidate the internal structure of interval timing performance in the sub-second and second range. More specifically, we investigated whether CFA would support the notion of a unitary timing mechanism or of distinct timing mechanisms underlying interval timing in the sub-second and second range, respectively. The assumption of two distinct timing mechanisms that are completely independent of each other was not supported by our data. The model assuming a unitary timing mechanism underlying interval timing in both the sub-second and second range fitted the empirical data much better. Finally, we also tested a third model assuming two distinct but functionally related mechanisms. The correlation between the two latent variables representing the hypothesized timing mechanisms was rather high, and a comparison of fit indices indicated that the assumption of two associated timing mechanisms described the observed data better than a single latent variable. The models are discussed in the light of the existing psychophysical and neurophysiological data.
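A minimal sketch of this kind of one-factor versus two-correlated-factor comparison is shown below. It assumes the third-party semopy package and its lavaan-style model syntax (not software named in the chapter) and uses simulated scores as stand-ins for the timing measures.

```python
import numpy as np
import pandas as pd
from semopy import Model, calc_stats

# Simulated stand-ins for three sub-second (s1-s3) and three second-range (l1-l3)
# timing scores; the shared component g induces a sizeable latent correlation.
rng = np.random.default_rng(1)
g = rng.normal(size=400)
sub_f = 0.8 * g + 0.6 * rng.normal(size=400)
sec_f = 0.8 * g + 0.6 * rng.normal(size=400)
df = pd.DataFrame(
    {f"s{i}": sub_f + rng.normal(scale=0.7, size=400) for i in (1, 2, 3)}
    | {f"l{i}": sec_f + rng.normal(scale=0.7, size=400) for i in (1, 2, 3)}
)

one_factor = Model("timing =~ s1 + s2 + s3 + l1 + l2 + l3")
two_factor = Model("""sub =~ s1 + s2 + s3
                      sec =~ l1 + l2 + l3
                      sub ~~ sec""")          # two correlated latent factors
for m in (one_factor, two_factor):
    m.fit(df)

# Compare chi-square, CFI, RMSEA, AIC, ... across the competing models.
print(calc_stats(one_factor))
print(calc_stats(two_factor))
```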

Relevance: 30.00%

Abstract:

The importance of performance expectancies for predicting behavior has long been highlighted in research on expectancy-value models. These models do not take into account that expectancies may vary in terms of their certainty. The study tested the following predictions: task experience leads to a higher certainty of expectancies; certainty and mean expectancies are empirically distinguishable; and expectancies held with high certainty are more accurate for predicting performance. A total of 273 Grade 8 students reported their performance expectancy and the certainty of that expectation with regard to a mathematics examination immediately before and after the examination. Actual grades on the examination were also assessed. The results supported the predictions: there was an increase in certainty between the two times of measurement; expectancies and certainty were unrelated at both times of measurement; and for students initially reporting higher certainty, the accuracy of the performance expectancy (i.e., the relation between expectancy and performance) was higher than for students reporting lower certainty. For students with lower certainty, accuracy increased after they had gained experience with the examination. The data indicate that it may be useful to include certainty as an additional variable in expectancy-value models.
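One common way to test whether certainty moderates the expectancy-performance relation is an interaction term in a regression, as in the sketch below. The data and variable names are simulated for illustration and are not the study's.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Sketch of testing moderation with an expectancy x certainty interaction.
rng = np.random.default_rng(7)
n = 273
expectancy = rng.normal(size=n)
certainty = rng.uniform(0, 1, size=n)
# Grades are simulated to track expectancies more closely when certainty is high.
grade = (0.2 + 0.6 * certainty) * expectancy + rng.normal(0, 0.8, n)
df = pd.DataFrame({"expectancy": expectancy, "certainty": certainty, "grade": grade})

fit = smf.ols("grade ~ expectancy * certainty", data=df).fit()
print(fit.params)  # a positive expectancy:certainty coefficient indicates moderation
```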

Relevance: 30.00%

Abstract:

The importance of performance expectancies for the prediction of the regulation of behavior and actual performance has long been established. Building on theories from the field of social cognition, we suggest that the level of a performance expectancy and the certainty of that expectancy have a joint influence on an individual's beliefs and behavior. In two studies (one cross-sectional, using a sample of secondary school students, and one longitudinal, using a sample of university students) we found that expectancies more strongly predicted persistence, and subsequent performance, the more certain the expectancy was. This pattern held even when prior performance was controlled, as in Study 2. The data indicate that it may be useful to include certainty as an additional variable in expectancy models.

Relevance: 30.00%

Abstract:

Multidimensional talent models represent the current state of the art. However, it remains unclear how the different dimensions interact. Based on current theories of human development, person-oriented approaches seem particularly appropriate for talent research. The present study adopts this approach by examining how a holistic system consisting of the dimensions motivation, motor behaviour and stage of development relates to athletic performance. For this purpose, we examined which patterns were formed by the constructs net hope (Elbe et al., 2003), motor abilities (3 motor tests; Höner et al., 2014), technical skills (3 motor tests; Höner et al., 2014) and the percentage of predicted adult height achieved so far (Mirwald et al., 2002), and how these patterns were related to subsequent sporting success. A total of 119 young elite football players were questioned and tested three times at intervals of one year, beginning at the age of 12. At the age of 15, the performance level the players had reached was recorded (national, regional or no talent card). At all three measuring points, four patterns were identified which displayed partial structural and high individual stability. As expected, players showing above-average values in all factors were significantly more likely to advance to the highest performance level (OR = 2.2, p < .01). Physically strong, precociously developed players, though having some technical weaknesses, had good chances of reaching the middle performance level (OR = 1.6, p = .01). Players showing below-average values had a one-and-a-half times higher probability of advancing to the lowest performance level (p < .01). The results point to the importance of holistic approaches for the medium-term prediction of performance among promising football talents and thus provide valuable clues for their selection and promotion.
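The person-oriented step (grouping players into patterns and relating a pattern to later success via an odds ratio) could be sketched as below. The clustering algorithm, data and variable names are hypothetical choices for illustration, not the study's method.

```python
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Sketch: identify four player patterns, then compute a simple odds ratio for
# reaching the top level; all data and names are made up.
rng = np.random.default_rng(2)
n = 119
df = pd.DataFrame({
    "net_hope": rng.normal(size=n),
    "motor_abilities": rng.normal(size=n),
    "technical_skills": rng.normal(size=n),
    "pct_adult_height": rng.normal(size=n),
})
z = StandardScaler().fit_transform(df)
df["pattern"] = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(z)
# In practice the cluster centroids would be inspected to label the pattern
# with above-average values on all constructs.

top_card = rng.integers(0, 2, size=n)              # 1 = national talent card (fake)
in_pattern = (df["pattern"] == 0).to_numpy()
a = np.sum(in_pattern & (top_card == 1)); b = np.sum(in_pattern & (top_card == 0))
c = np.sum(~in_pattern & (top_card == 1)); d = np.sum(~in_pattern & (top_card == 0))
print("odds ratio:", (a * d) / (b * c))
```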

Relevance: 30.00%

Abstract:

When considering data from many trials, it is likely that some of them present a markedly different intervention effect or exert an undue influence on the summary results. We develop a forward search algorithm for identifying outlying and influential studies in meta-analysis models. The forward search algorithm starts by fitting the hypothesized model to a small subset of likely outlier-free studies and then proceeds by adding studies to the set one by one, at each step choosing the study closest to the model fitted to the current set. As each study is added, plots of the estimated parameters and measures of fit are monitored, and outliers are identified by sharp changes in these forward plots. We apply the proposed outlier detection method to two real data sets: a meta-analysis of 26 studies that examines the effect of writing-to-learn interventions on academic achievement adjusting for three possible effect modifiers, and a meta-analysis of 70 studies that compares a fluoride toothpaste treatment to placebo for preventing dental caries in children. A simple simulated example is used to illustrate the steps of the proposed methodology, and a small-scale simulation study is conducted to evaluate the performance of the proposed method. Copyright © 2016 John Wiley & Sons, Ltd.
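The forward search loop described above can be illustrated with a simple fixed-effect meta-analysis. This is a minimal sketch on made-up effect sizes, not the paper's algorithm in full (which monitors several parameters and fit measures and handles effect modifiers).

```python
import numpy as np

# Minimal sketch of a forward search for outliers in a fixed-effect meta-analysis;
# the effect sizes and variances below are made up, not the paper's data.
y = np.array([0.32, 0.28, 0.35, 0.30, 0.95, 0.27, 0.33, 0.31])   # study effects
v = np.array([0.02, 0.03, 0.02, 0.04, 0.02, 0.03, 0.02, 0.03])   # their variances

def pooled(idx):
    """Inverse-variance (fixed-effect) pooled estimate over the studies in idx."""
    w = 1.0 / v[idx]
    return np.sum(w * y[idx]) / np.sum(w)

# Start from a small subset of likely outlier-free studies (closest to the median).
order = np.argsort(np.abs(y - np.median(y)))
in_set = list(order[:3])
trajectory = []                                   # monitored forward plot

while len(in_set) < len(y):
    mu = pooled(np.array(in_set))
    trajectory.append(mu)
    outside = [i for i in range(len(y)) if i not in in_set]
    # Add the study whose standardized deviation from the current fit is smallest.
    nxt = min(outside, key=lambda i: abs(y[i] - mu) / np.sqrt(v[i]))
    in_set.append(nxt)
trajectory.append(pooled(np.array(in_set)))

# A sharp jump when a study enters (here the 0.95 effect, added last) flags it.
print("entry order:", in_set)
print("pooled estimate along the search:", np.round(trajectory, 3))
```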

Relevance: 30.00%

Abstract:

Previous multicast research often makes commonly accepted but unverified assumptions about network topologies and group member distributions in simulation studies. In this paper, we propose a framework to systematically evaluate multicast performance for different protocols. We identify a series of metrics and carry out extensive simulation studies on these metrics with different topological models and group member distributions for three case studies. Our simulation results indicate that realistic topology and group membership models are crucial to accurate multicast performance evaluation. These results can provide guidance for multicast researchers in performing realistic simulations and facilitate the design and development of multicast protocols.
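To give a flavour of how a single multicast metric can change with the topology model, the sketch below compares shortest-path-tree cost on two generated topologies with uniformly placed group members. The metric, topology models and parameters are illustrative assumptions, not the paper's framework.

```python
import random
import networkx as nx

# Sketch: one multicast metric (shortest-path-tree cost) under two topology models
# with uniform group membership; the paper's protocols and metric set are not reproduced.
random.seed(0)

def sp_tree_cost(g, source, members):
    """Number of distinct links in the union of shortest paths source -> members."""
    links = set()
    for m in members:
        path = nx.shortest_path(g, source, m)
        for u, v in zip(path[:-1], path[1:]):
            links.add(frozenset((u, v)))
    return len(links)

topologies = {
    "random (Erdos-Renyi)": nx.gnp_random_graph(200, 0.04, seed=1),
    "power-law (Barabasi-Albert)": nx.barabasi_albert_graph(200, 3, seed=1),
}
for name, g in topologies.items():
    if not nx.is_connected(g):                     # keep the largest component
        g = g.subgraph(max(nx.connected_components(g), key=len)).copy()
    members = random.sample(list(g.nodes), 20)     # uniform group membership model
    print(name, "tree cost:", sp_tree_cost(g, members[0], members[1:]))
```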

Relevance: 30.00%

Abstract:

This paper examines the relationship between house price levels, school performance, and the racial and ethnic composition of Connecticut school districts between 1995 and 2000. A panel of Connecticut school districts over both time and labor market areas is used to estimate a simultaneous equations model describing the determinants of these variables. Specifically, school district changes in price level, school performance, and racial and ethnic composition depend upon each other, labor-market-wide changes in these variables, and the deviation of each school district from the overall metropolitan area. The specification is based on the differencing of dependent variables, as opposed to the use of level or fixed effects models, and on lagging level variables beyond the period over which change is considered; as a result the model is robust to persistence in the sample. Identification of the simultaneous system arises from the presence of multiple labor market areas in the sample and the assumption that labor-market-wide changes in a variable do not directly influence the allocation of households across towns within a labor market area. We find that towns in labor markets that experience an inflow of minority households have greater increases in percent minority if those towns already have a substantial minority population. We find evidence that this sorting process is reflected in housing price changes in the low-priced segment of the housing market, not in the middle and upper segments.
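The data-construction step implied by this specification (town-level changes over time and each town's deviation from its labor-market-wide change) can be sketched with pandas as below. The columns and values are hypothetical, not the study's dataset.

```python
import pandas as pd

# Sketch of the differencing step: town-level changes and deviations from the
# labor-market (metro) mean change. Column names are hypothetical.
df = pd.DataFrame({
    "town":   ["A", "A", "B", "B", "C", "C"],
    "metro":  ["M1", "M1", "M1", "M1", "M2", "M2"],
    "year":   [1995, 2000, 1995, 2000, 1995, 2000],
    "log_price":    [11.9, 12.1, 12.3, 12.4, 11.7, 12.0],
    "pct_minority": [0.10, 0.14, 0.30, 0.38, 0.05, 0.06],
})

df = df.sort_values(["town", "year"])
chg = df.groupby("town")[["log_price", "pct_minority"]].diff().dropna()
chg["metro"] = df.loc[chg.index, "metro"].values

# Metro-wide change and each town's deviation from it (used for identification).
metro_chg = chg.groupby("metro")[["log_price", "pct_minority"]].transform("mean")
deviation = chg[["log_price", "pct_minority"]] - metro_chg
print(pd.concat([chg, deviation.add_suffix("_dev")], axis=1))
```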

Relevance: 30.00%

Abstract:

Random Forests™ is reported to be one of the most accurate classification algorithms for complex data analysis. It shows excellent performance even when most predictors are noisy and the number of variables is much larger than the number of observations. In this thesis, Random Forests was applied to a large-scale lung cancer case-control study. A novel way of automatically selecting prognostic factors was proposed, and a synthetic positive control was used to validate the Random Forests method. Throughout this study we showed that Random Forests can deal with a large number of weak input variables without overfitting and can account for non-additive interactions between these input variables. Random Forests can also be used for variable selection without being adversely affected by collinearities.

Random Forests can deal with large-scale data sets without rigorous data preprocessing and has a robust variable importance ranking measure. We propose a novel variable selection method in the context of Random Forests that uses the data noise level as the cut-off value to determine the subset of important predictors. This new approach enhanced the ability of the Random Forests algorithm to automatically identify important predictors in complex data. The cut-off value can also be adjusted based on the results of the synthetic positive control experiments.

When the data set had a high variable-to-observation ratio, Random Forests complemented the established logistic regression. This study suggests that Random Forests is recommended for such high-dimensionality data: one can use Random Forests to select the important variables and then use logistic regression or Random Forests itself to estimate the effect sizes of the predictors and to classify new observations.

We also found that the mean decrease in accuracy is a more reliable variable ranking measure than the mean decrease in Gini.
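The general idea of screening predictors against a noise-level cut-off can be sketched with a synthetic pure-noise column, as below. This is an illustration of the idea only; the thesis's actual cut-off rule, synthetic positive control and case-control data are not reproduced.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Sketch: keep predictors whose importance exceeds that of an appended noise column.
X, y = make_classification(n_samples=500, n_features=30, n_informative=5,
                           n_redundant=2, random_state=0)
rng = np.random.default_rng(0)
X_aug = np.column_stack([X, rng.normal(size=len(X))])   # pure-noise control column

rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_aug, y)
importances = rf.feature_importances_

noise_level = importances[-1]                 # importance attained by pure noise
selected = np.where(importances[:-1] > noise_level)[0]
print("predictors kept above the noise level:", selected)
```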

Relevance: 30.00%

Abstract:

With the recognition of the importance of evidence-based medicine, there is an emerging need for methods to systematically synthesize available data. Specifically, methods that provide accurate estimates of test characteristics for diagnostic tests are needed to help physicians make better clinical decisions. To provide more flexible approaches for the meta-analysis of diagnostic tests, we developed three Bayesian generalized linear models. Two of these models, a bivariate normal model and a binomial model, analyzed pairs of sensitivity and specificity values while incorporating the correlation between these two outcome variables. Noninformative independent uniform priors were used for the variances of sensitivity and specificity and for the correlation; we also applied an inverse Wishart prior to check the sensitivity of the results. The third model was a multinomial model in which the test results were modeled as multinomial random variables. All three models can include specific imaging techniques as covariates in order to compare performance, with vague normal priors assigned to the coefficients of the covariates. The computations were carried out using the 'Bayesian inference Using Gibbs Sampling' (BUGS) implementation of Markov chain Monte Carlo techniques.

We investigated the properties of the three proposed models through extensive simulation studies. We also applied these models to a previously published meta-analysis dataset on cervical cancer as well as to an unpublished melanoma dataset. In general, our findings show that the point estimates of sensitivity and specificity were consistent between the Bayesian and frequentist bivariate normal and binomial models. However, in the simulation studies, the estimates of the correlation coefficient from the Bayesian bivariate models were not as good as those obtained from frequentist estimation, regardless of which prior distribution was used for the covariance matrix. The Bayesian multinomial model consistently underestimated sensitivity and specificity regardless of the sample size and correlation coefficient. In conclusion, the Bayesian bivariate binomial model provides the most flexible framework for future applications because of the following strengths: (1) it facilitates direct comparison between different tests; (2) it captures the variability in both sensitivity and specificity simultaneously as well as the intercorrelation between the two; and (3) it can be applied directly to sparse data without ad hoc corrections.
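A stripped-down sketch of the binomial random-effects idea is shown below, assuming the PyMC library rather than BUGS. For brevity the study-level logit sensitivities and specificities are given independent priors here, whereas the dissertation's bivariate model also models their correlation; the 2x2 counts are fabricated.

```python
import numpy as np
import pymc as pm

# Hypothetical per-study counts (TP, FN, TN, FP); not the dissertation's data.
TP = np.array([20, 15, 30, 12]); FN = np.array([5, 4, 6, 8])
TN = np.array([50, 40, 60, 45]); FP = np.array([10, 12, 9, 15])

with pm.Model():
    mu_se = pm.Normal("mu_se", 0.0, 2.0)          # mean logit sensitivity
    mu_sp = pm.Normal("mu_sp", 0.0, 2.0)          # mean logit specificity
    sd_se = pm.HalfNormal("sd_se", 2.0)
    sd_sp = pm.HalfNormal("sd_sp", 2.0)
    logit_se = pm.Normal("logit_se", mu_se, sd_se, shape=len(TP))
    logit_sp = pm.Normal("logit_sp", mu_sp, sd_sp, shape=len(TN))
    pm.Binomial("tp", n=TP + FN, p=pm.math.sigmoid(logit_se), observed=TP)
    pm.Binomial("tn", n=TN + FP, p=pm.math.sigmoid(logit_sp), observed=TN)
    trace = pm.sample(1000, tune=1000, chains=2, progressbar=False)

print(trace.posterior[["mu_se", "mu_sp"]].mean())
```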

Relevance: 30.00%

Abstract:

Objective. The study reviewed one year of Texas hospital discharge data and Trauma Registry data for the 22 trauma service regions in Texas to identify regional variations in capacity, process of care and clinical outcomes for trauma patients, and to analyze the statistical associations among capacity, process of care and outcomes.

Methods. Cross-sectional study design covering one year of state-wide Texas data. Indicators of trauma capacity, trauma care processes and clinical outcomes were defined, and data were collected on each indicator. Descriptive analyses of regional variation in trauma capacity, process of care and clinical outcomes were conducted for all trauma centers, for Level I and II trauma centers, and for Level III and IV trauma centers. Multilevel regression models were fitted to test the relations among trauma capacity, process of care and outcome measures at all trauma centers, at Level I and II trauma centers and at Level III and IV trauma centers, while controlling for confounders such as age, gender, race/ethnicity, injury severity, level of trauma center and urbanization.

Results. Significant regional variation was found among the 22 trauma service regions across Texas in trauma capacity, process of care and clinical outcomes. The regional trauma bed rate (the average number of staffed beds per 100,000 population) varied significantly by trauma service region. Pre-hospital trauma care processes (EMS time, transfer time and triage) also varied significantly by region, as did clinical outcomes including mortality, hospital and intensive care unit length of stay, and hospital charges. In multilevel regression analysis, the average trauma bed rate was significantly related to the trauma care processes (ambulance delivery time, transfer time and triage) at all trauma centers after controlling for age, gender, race/ethnicity, injury severity, level of trauma center and urbanization. At Level III and IV centers, transfer time was the only process of care significantly associated with the regional average trauma bed rate. Among the outcome measures, only trauma mortality was significantly associated with the regional average trauma bed rate at all trauma centers, and only hospital charges were statistically related to the trauma bed rate at Level I and II trauma centers. The effect of confounders such as age, gender, race/ethnicity, injury severity and urbanization on processes and outcomes varied significantly by level of trauma center.

Conclusions. Regional variation in trauma capacity, process and outcomes in Texas was extensive. Trauma capacity, age, gender, race/ethnicity, injury severity, level of trauma center and urbanization were significantly associated with trauma process and clinical outcomes, depending on the level of trauma center.

Key words: regionalized trauma systems, trauma capacity, pre-hospital trauma care, process, trauma outcomes, trauma performance, evaluation measures, regional variations
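A minimal sketch of the multilevel setup (a random intercept per trauma service region, with regional capacity and patient-level confounders as fixed effects) is given below. All data, variable names and effect sizes are fabricated for illustration; the study's actual models and outcomes are not reproduced.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Sketch of a random-intercept multilevel model relating a continuous outcome
# to regional trauma capacity.
rng = np.random.default_rng(3)
n = 600
df = pd.DataFrame({
    "region": rng.integers(0, 22, size=n),         # 22 trauma service regions
    "bed_rate": rng.uniform(1, 10, size=n),        # staffed trauma beds per 100,000
    "age": rng.integers(16, 90, size=n),
    "iss": rng.integers(1, 40, size=n),            # injury severity score
})
region_effect = rng.normal(scale=1.0, size=22)[df["region"].to_numpy()]
df["los"] = (3 + 0.2 * df["iss"] - 0.1 * df["bed_rate"]
             + region_effect + rng.normal(0, 2.0, n))   # hospital length of stay

model = smf.mixedlm("los ~ bed_rate + age + iss", data=df, groups=df["region"])
print(model.fit().params)
```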

Relevance: 30.00%

Abstract:

Although the area under the receiver operating characteristic curve (AUC) is the most popular measure of the performance of prediction models, it has limitations, especially when it is used to evaluate the added discrimination of a new biomarker in the model. Pencina et al. (2008) proposed two indices, the net reclassification improvement (NRI) and the integrated discrimination improvement (IDI), to supplement the improvement in the AUC (IAUC). Their NRI and IDI are based on binary outcomes in case-control settings and do not involve time-to-event outcomes. However, many disease outcomes are time-dependent and the onset time can be censored. Measuring the discrimination potential of a prognostic marker without considering time to event can lead to biased estimates. In this dissertation, we have extended the NRI and IDI to survival analysis settings and derived the corresponding sample estimators and asymptotic tests. Simulation studies were conducted to compare the performance of the time-dependent NRI and IDI with Pencina's NRI and IDI. For illustration, we have applied the proposed method to a breast cancer study.

Key words: Prognostic model, Discrimination, Time-dependent NRI and IDI
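For reference, the binary (uncensored) NRI and IDI that the dissertation extends can be computed as in the sketch below; the risks and outcomes are simulated, and the dissertation's time-dependent estimators themselves are not reproduced here.

```python
import numpy as np

# Sketch of the binary (category-free) NRI and IDI on made-up predicted risks.
rng = np.random.default_rng(4)
event = rng.integers(0, 2, size=300)
p_old = np.clip(0.3 * event + rng.uniform(0, 0.6, size=300), 0, 1)  # old-model risk
p_new = np.clip(p_old + 0.1 * (2 * event - 1) * rng.uniform(size=300), 0, 1)

up, down = p_new > p_old, p_new < p_old
ev, ne = event == 1, event == 0

nri = (up[ev].mean() - down[ev].mean()) + (down[ne].mean() - up[ne].mean())
idi = (p_new[ev].mean() - p_old[ev].mean()) - (p_new[ne].mean() - p_old[ne].mean())
print(f"NRI = {nri:.3f}, IDI = {idi:.3f}")
# Time-dependent versions replace the event indicator with estimated event status
# at a landmark time (e.g. via censoring-adjusted, Kaplan-Meier-type weighting).
```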

Relevance: 30.00%

Abstract:

The sensitivity of interferon-γ release assays for the detection of Mycobacterium tuberculosis (MTB) infection or disease is affected by conditions that depress host immunity (such as HIV). It is critical to determine whether these assays are affected by diabetes and related conditions (i.e. hyperglycemia, chronic hyperglycemia, or being overweight/obese), given that immune impairment is thought to underlie susceptibility to tuberculosis (TB) in people with diabetes. This is important for tuberculosis control because of the millions of type 2 diabetes patients at risk for tuberculosis worldwide.

The objective of this study was to identify host characteristics, including diabetes, that may affect the sensitivity of two commercially available interferon-γ (IFN-γ) release assays (IGRAs), the QuantiFERON®-TB Gold (QFT-G) and the T-SPOT®.TB, in active TB patients. We further explored whether IFN-γ secretion in response to MTB antigens (ESAT-6 and CFP-10) is associated with diabetes and its defining characteristics (high blood glucose, high HbA1c, high BMI). To achieve these objectives, the sensitivity of the QFT-G and T-SPOT.TB assays was evaluated in newly diagnosed adults with confirmed tuberculosis (positive smear for acid-fast bacilli and/or positive culture for MTB) enrolled at Texas and Mexico study sites between March 2006 and April 2009. Univariate and multivariate models were constructed to identify host characteristics associated with IGRA result and with the level of IFN-γ secretion.

QFT-G was positive in 68% of tuberculosis patients. Those with diabetes, chronic hyperglycemia or obesity were more likely to have a positive QFT-G result and secreted higher levels of IFN-γ in response to the mycobacterial antigens (p < 0.05). Previous BCG vaccination was the only other host characteristic associated with the QFT-G result, whereby a higher proportion of non-BCG-vaccinated persons were QFT-G positive compared with vaccinated persons. In a separate group of patients, the T-SPOT.TB was 94% sensitive, with similar performance in all tuberculosis patients regardless of host characteristics.

In summary, we have demonstrated the validity of QFT-G and T-SPOT.TB for supporting the diagnosis of TB in patients with a range of host characteristics, most notably in patients with diabetes. We also confirmed that TB patients with diabetes and associated characteristics (chronic hyperglycemia or high BMI) secreted higher titers of IFN-γ when stimulated with MTB-specific antigens, in comparison to patients without these characteristics. Together, these findings suggest that the mechanism by which diabetes increases the risk of TB may not be explained by an inability to secrete IFN-γ, a key cytokine for TB control.

Relevance: 30.00%

Abstract:

Strategies are compared for the development of a linear regression model with stochastic (multivariate normal) regressor variables and the subsequent assessment of its predictive ability. Bias and mean squared error of four estimators of predictive performance are evaluated in simulated samples from 32 population correlation matrices. Models including all of the available predictors are compared with those obtained using selected subsets. The subset selection procedures investigated include two stopping rules, $C_p$ and $S_p$, each combined with an 'all possible subsets' or 'forward selection' of variables. The estimators of performance utilized include parametric ($\mathrm{MSEP}_m$) and non-parametric (PRESS) assessments in the entire sample, and two data-splitting estimates restricted to a random or balanced (Snee's DUPLEX) 'validation' half sample. The simulations were performed as a designed experiment, with population correlation matrices representing a broad range of data structures.

The techniques examined for subset selection do not generally result in improved predictions relative to the full model. Approaches using 'forward selection' result in slightly smaller prediction errors and less biased estimators of predictive accuracy than 'all possible subsets' approaches, but no differences are detected between the performances of $C_p$ and $S_p$. In every case, prediction errors of models obtained by subset selection in either of the half splits exceed those obtained using all predictors and the entire sample.

Only the random split estimator is conditionally (on $\beta$) unbiased; however, $\mathrm{MSEP}_m$ is unbiased on average and PRESS is nearly so in unselected (fixed form) models. When subset selection techniques are used, $\mathrm{MSEP}_m$ and PRESS always underestimate prediction errors, by as much as 27 percent (on average) in small samples. Despite their bias, the mean squared errors (MSE) of these estimators are at least 30 percent less than that of the unbiased random split estimator. The DUPLEX split estimator suffers from large MSE as well as bias, and seems of little value within the context of stochastic regressor variables.

To maximize predictive accuracy while retaining a reliable estimate of that accuracy, it is recommended that the entire sample be used for model development, and a leave-one-out statistic (e.g. PRESS) be used for assessment.
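The leave-one-out PRESS statistic recommended above can be computed without refitting the model n times by using the hat-matrix shortcut, as in the sketch below; the simulated data are illustrative only.

```python
import numpy as np

# Sketch of PRESS for a linear model via the hat-matrix identity
# e_(i) = e_i / (1 - h_ii), so PRESS = sum_i (e_i / (1 - h_ii))^2.
rng = np.random.default_rng(5)
n, p = 60, 4
X = np.column_stack([np.ones(n), rng.normal(size=(n, p))])
beta = np.array([1.0, 0.5, -0.3, 0.0, 0.2])
y = X @ beta + rng.normal(scale=1.0, size=n)

b_hat = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ b_hat
H = X @ np.linalg.inv(X.T @ X) @ X.T               # hat matrix
h = np.diag(H)

press = np.sum((resid / (1 - h)) ** 2)             # leave-one-out prediction errors
mse = np.sum(resid ** 2) / (n - X.shape[1])
print(f"PRESS = {press:.2f}, residual MSE = {mse:.2f}")
```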