30 results for "score test information matrix artificial regression"
Abstract:
Multiple regression analysis is a statistical technique that allows a dependent variable to be predicted from more than one independent variable and also identifies the influential independent variables. In this study, multiple regression analysis is applied to experimental data to predict the room mean velocity and to determine the parameters that most influence it. More than 120 experiments for four different heat source locations were carried out in a test chamber with a high-level wall-mounted air supply terminal at air change rates of 3-6 ACH. The influence of environmental parameters such as supply air momentum, room heat load, Archimedes number and local temperature ratio was examined by two methods: simple regression analysis incorporated into scatter matrix plots, and multiple stepwise regression analysis. It is concluded that, when a heat source is located along the jet centre line, the supply momentum mainly influences the room mean velocity regardless of the plume strength. However, when the heat source is located outside the jet region, the local temperature ratio (the inverse of the local heat removal effectiveness) is a major influencing parameter.
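Below is a minimal Python sketch of forward stepwise multiple regression of the kind described in this abstract, ranking candidate predictors by the gain in R-squared. The predictor names (supply_momentum, room_heat_load, archimedes_number, local_temp_ratio), the synthetic data and the stopping tolerance are illustrative assumptions, not the study's measurements.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 120
X = {                                        # hypothetical environmental predictors
    "supply_momentum":   rng.uniform(0.0, 1.0, n),
    "room_heat_load":    rng.uniform(0.0, 1.0, n),
    "archimedes_number": rng.uniform(0.0, 1.0, n),
    "local_temp_ratio":  rng.uniform(0.0, 1.0, n),
}
# synthetic "room mean velocity" driven mainly by the supply momentum
y = 0.8 * X["supply_momentum"] + 0.3 * X["local_temp_ratio"] + 0.05 * rng.standard_normal(n)

def r_squared(cols):
    """R^2 of an ordinary least-squares fit of y on an intercept plus the named columns."""
    A = np.column_stack([np.ones(n)] + [X[c] for c in cols])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    return 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

selected, remaining = [], list(X)
while remaining:
    base = r_squared(selected) if selected else 0.0
    gains = {c: r_squared(selected + [c]) for c in remaining}
    best = max(gains, key=gains.get)
    if gains[best] - base < 0.01:            # stop when the R^2 gain is negligible (assumed tolerance)
        break
    selected.append(best)
    remaining.remove(best)
print("predictors in order of entry:", selected)
```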
Abstract:
Vitamin E absorption requires the presence of fat; however, limited information exists on the influence of fat quantity on optimal absorption. In the present study we compared the absorption of stable-isotope-labelled vitamin E following meals of varying fat content and source. In a randomised four-way cross-over study, eight healthy individuals consumed a capsule containing 150 mg H-2-labelled RRR-alpha-tocopheryl acetate with each of four test meals: toast with butter (17.5 g fat), cereal with full-fat milk (17.5 g fat), cereal with semi-skimmed milk (2.7 g fat) and water (0 g fat). Blood was taken at 0, 0.5, 1, 1.5, 2, 3, 6 and 9 h following ingestion, chylomicrons were isolated, and H-2-labelled alpha-tocopherol was analysed in the chylomicron and plasma samples. There was a significant time (P<0.001) and treatment (P<0.001) effect on H-2-labelled alpha-tocopherol concentration in both chylomicrons and plasma between the test meals. H-2-labelled alpha-tocopherol concentration was significantly greater with the higher-fat toast and butter meal than with the low-fat cereal meal or water (P<0.001), and there was a trend towards a greater concentration compared with the high-fat cereal meal (P=0.065). The H-2-labelled alpha-tocopherol concentration was also significantly greater with the high-fat cereal meal than with the low-fat cereal meal (P<0.05). The H-2-labelled alpha-tocopherol concentration following either the low-fat cereal meal or water was low. These results demonstrate that both the amount of fat and the food matrix influence vitamin E absorption. These factors should be considered by consumers and in the design of future vitamin E intervention studies.
Abstract:
If soy isoflavones are to be effective in preventing or treating a range of diseases, they must be bioavailable, so the factors that may alter their bioavailability need to be elucidated. To date, however, there is little information on whether the pharmacokinetic profile following ingestion of a defined dose is influenced by the food matrix in which the isoflavone is given or by the processing method used. Three different foods (cookies, chocolate bars and juice) were prepared, and their isoflavone contents were determined. We compared the urinary and serum concentrations of daidzein, genistein and equol following the consumption of the three foods, each of which contained 50 mg of isoflavones. After the technological processing of the different test foods, differences in aglycone levels were observed. Plasma levels of the isoflavone precursor daidzein were not altered by the food matrix. Urinary daidzein recovery was similar for all three foods, with a total urinary output of 33-34% of the ingested dose. Peak genistein concentrations were attained in serum earlier following consumption of a liquid matrix rather than a solid matrix, although total urinary recovery of genistein was lower following ingestion of juice than of the two other foods.
Abstract:
Using the classical Parzen window estimate as the target function, the kernel density estimation is formulated as a regression problem and the orthogonal forward regression technique is adopted to construct sparse kernel density estimates. The proposed algorithm incrementally minimises a leave-one-out test error score to select a sparse kernel model, and a local regularisation method is incorporated into the density construction process to further enforce sparsity. The kernel weights are finally updated using the multiplicative nonnegative quadratic programming algorithm, which has the ability to reduce the model size further. Except for the kernel width, the proposed algorithm has no other parameters that need tuning, and the user is not required to specify any additional criterion to terminate the density construction procedure. Two examples are used to demonstrate the ability of this regression-based approach to effectively construct a sparse kernel density estimate with comparable accuracy to that of the full-sample optimised Parzen window density estimate.
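A minimal Python sketch of the regression formulation this abstract describes: the Parzen window estimate evaluated at the training points serves as the target vector, and each column of the kernel design matrix is a candidate Gaussian kernel centred on a training point. The Gaussian kernel choice, the width and the placeholder data are assumptions; the sparse selection and weight-update steps are not reproduced here.

```python
import numpy as np

def gaussian_kernel_matrix(x, centres, width):
    """Phi[i, j] = Gaussian kernel centred at centres[j], evaluated at x[i]."""
    diff = x[:, None, :] - centres[None, :, :]
    sq_dist = np.sum(diff * diff, axis=2)
    norm = (2.0 * np.pi * width ** 2) ** (x.shape[1] / 2.0)
    return np.exp(-sq_dist / (2.0 * width ** 2)) / norm

rng = np.random.default_rng(1)
x = rng.normal(size=(200, 1))                # training sample (placeholder data)
width = 0.4                                  # kernel width, the single tuning parameter

Phi = gaussian_kernel_matrix(x, x, width)    # full kernel design matrix
target = Phi.mean(axis=1)                    # Parzen window estimate at each training point

# The density model is p(x) ~ Phi @ beta with beta >= 0 and sum(beta) = 1;
# sparse construction then selects a few columns of Phi to fit `target`.
print(Phi.shape, target.shape)
```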
Abstract:
An automatic algorithm is derived for constructing kernel density estimates based on a regression approach that directly optimizes generalization capability. Computational efficiency of the density construction is ensured using an orthogonal forward regression, and the algorithm incrementally minimizes the leave-one-out test score. Local regularization is incorporated into the density construction process to further enforce sparsity. Examples are included to demonstrate the ability of the proposed algorithm to effectively construct a very sparse kernel density estimate with comparable accuracy to that of the full sample Parzen window density estimate.
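A minimal sketch of the leave-one-out (LOO) test score that such construction procedures minimise. For a linear-in-parameters model the LOO residuals follow from the PRESS identity e_loo_i = e_i / (1 - h_ii), where h_ii are the diagonal entries of the hat matrix; the design matrix and data below are placeholders, not the paper's kernel model.

```python
import numpy as np

def loo_score(A, y):
    """Mean squared leave-one-out error of a least-squares regression of y on A."""
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    hat_diag = np.einsum("ij,ji->i", A, np.linalg.pinv(A))   # diagonal of A (A'A)^-1 A'
    return np.mean((resid / (1.0 - hat_diag)) ** 2)

rng = np.random.default_rng(2)
A = rng.normal(size=(100, 5))                                # placeholder design matrix
y = A @ np.array([1.0, 0.5, 0.0, 0.0, 0.2]) + 0.1 * rng.standard_normal(100)
print("LOO score:", loo_score(A, y))
```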
Abstract:
Using the classical Parzen window (PW) estimate as the desired response, kernel density estimation is formulated as a regression problem and the orthogonal forward regression technique is adopted to construct sparse kernel density (SKD) estimates. The proposed algorithm incrementally minimises a leave-one-out test score to select a sparse kernel model, and a local regularisation method is incorporated into the density construction process to further enforce sparsity. The kernel weights of the selected sparse model are finally updated using the multiplicative nonnegative quadratic programming algorithm, which ensures the nonnegative and unity constraints on the kernel weights and has the desired ability to reduce the model size further. Except for the kernel width, the proposed method has no other parameters that need tuning, and the user is not required to specify any additional criterion to terminate the density construction procedure. Several examples demonstrate the ability of this simple regression-based approach to effectively construct an SKD estimate with comparable accuracy to that of the full-sample optimised PW density estimate.
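A minimal sketch of a multiplicative nonnegative quadratic programming (MNQP) style weight update for min (1/2) b'Bb - v'b subject to b >= 0 and sum(b) = 1, with B = Phi'Phi and v = Phi'(target), both entrywise nonnegative for Gaussian kernels. The exact update form, the clipping step and the placeholder data are assumptions in the spirit of this family of algorithms, not a verbatim reproduction of the paper's equations.

```python
import numpy as np

def mnqp_weights(Phi, target, n_iter=200):
    """Nonnegative, sum-to-one kernel weights fitted to the Parzen window target."""
    B = Phi.T @ Phi                            # entrywise nonnegative for Gaussian kernels
    v = Phi.T @ target
    m = Phi.shape[1]
    beta = np.full(m, 1.0 / m)                 # feasible starting point
    for _ in range(n_iter):
        c = beta / (B @ beta)                  # multiplicative factors
        h = (1.0 - c @ v) / c.sum()            # Lagrange-style term for the unity constraint
        beta = np.maximum(c * (v + h), 0.0)    # clip tiny negatives (a simplification)
        beta /= beta.sum()                     # re-impose sum(beta) = 1 after clipping
        # weights that shrink to (near) zero effectively prune kernels
    return beta

rng = np.random.default_rng(3)
x = rng.normal(size=(100, 1))
width = 0.3
Phi = np.exp(-((x - x.T) ** 2) / (2 * width ** 2)) / np.sqrt(2 * np.pi * width ** 2)
beta = mnqp_weights(Phi, Phi.mean(axis=1))
print("sum of weights:", beta.sum(), "nonzero weights:", int(np.sum(beta > 1e-6)))
```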
Abstract:
Using the classical Parzen window (PW) estimate as the target function, the sparse kernel density estimator is constructed in a forward-constrained regression (FCR) manner. The proposed algorithm selects significant kernels one at a time, while the leave-one-out (LOO) test score is minimized subject to a simple positivity constraint in each forward stage. The model parameter estimation in each forward stage is simply the solution of a jackknife parameter estimator for a single parameter, subject to the same positivity constraint check. For each selected kernel, the associated kernel width is updated via the Gauss-Newton method with the model parameter estimate fixed. The proposed approach is simple to implement and the associated computational cost is very low. Numerical examples are employed to demonstrate the efficacy of the proposed approach.
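A minimal sketch of the forward-constrained selection loop described above: at each stage one kernel is added, its single weight has a closed-form least-squares estimate subject to a positivity check, and the residual is deflated. The LOO scoring and the Gauss-Newton width refinement of the abstract are omitted; the kernel width, placeholder data and stopping rule are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.normal(size=(150, 1))
width = 0.4                                   # assumed fixed kernel width
Phi = np.exp(-((x - x.T) ** 2) / (2 * width ** 2)) / np.sqrt(2 * np.pi * width ** 2)
target = Phi.mean(axis=1)                     # Parzen window target

denom = np.einsum("ij,ij->j", Phi, Phi)       # squared norm of every candidate column
residual = target.copy()
selected, weights = [], []
for _ in range(10):                           # cap on model size (assumed)
    proj = Phi.T @ residual                   # phi_k' r for every candidate
    est = proj / denom                        # closed-form single-parameter estimate
    gain = np.where(est > 0.0, proj * est, -np.inf)   # error reduction, positive weights only
    gain[selected] = -np.inf                  # never reselect a kernel
    k = int(np.argmax(gain))
    if not np.isfinite(gain[k]) or gain[k] < 1e-8:
        break                                 # stop when no admissible kernel helps
    selected.append(k)
    weights.append(est[k])
    residual = residual - est[k] * Phi[:, k]
print("selected kernel indices:", selected)
```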
Abstract:
A generalized or tunable-kernel model is proposed for probability density function estimation based on an orthogonal forward regression procedure. Each stage of the density estimation process determines a tunable kernel, namely its center vector and diagonal covariance matrix, by minimizing a leave-one-out test criterion. The kernel mixing weights of the constructed sparse density estimate are finally updated using the multiplicative nonnegative quadratic programming algorithm to ensure the nonnegative and unity constraints, and this weight-updating process additionally has the desired ability to further reduce the model size. The proposed tunable-kernel model has advantages, in terms of model generalization capability and model sparsity, over the standard fixed-kernel model that restricts kernel centers to the training data points and employs a single common kernel variance for every kernel. On the other hand, it does not optimize all the model parameters together and thus avoids the high-dimensional, ill-conditioned nonlinear optimization associated with the conventional finite mixture model. Several examples are included to demonstrate the ability of the proposed tunable-kernel model to construct a very compact and accurate density estimate.
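A minimal sketch of the "tunable kernel" idea: each mixture component has its own centre vector and diagonal covariance instead of being pinned to a training point with a single shared width. The component parameters below are made-up illustrations; the orthogonal-forward-regression and LOO optimisation of these parameters is not reproduced here.

```python
import numpy as np

def tunable_gaussian(x, centre, diag_cov):
    """Gaussian density with a free centre vector and a diagonal covariance matrix."""
    diff = x - centre
    norm = np.prod(np.sqrt(2.0 * np.pi * diag_cov))
    return np.exp(-0.5 * np.sum(diff * diff / diag_cov, axis=-1)) / norm

def mixture_density(x, centres, diag_covs, weights):
    """Sparse mixture p(x) = sum_k w_k * N(x; c_k, diag(S_k)), with weights summing to 1."""
    comps = np.stack([tunable_gaussian(x, c, s) for c, s in zip(centres, diag_covs)])
    return weights @ comps

# Illustrative two-component density estimate in 2-D (all numbers are assumptions)
x = np.array([[0.1, -0.2], [1.5, 2.0]])
centres = [np.array([0.0, 0.0]), np.array([1.0, 2.0])]
covs = [np.array([0.5, 0.3]), np.array([1.0, 0.8])]
print(mixture_density(x, centres, covs, np.array([0.6, 0.4])))
```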
Abstract:
This paper derives an efficient algorithm for constructing sparse kernel density (SKD) estimates. The algorithm first selects a very small subset of significant kernels using an orthogonal forward regression (OFR) procedure based on the D-optimality experimental design criterion. The weights of the resulting sparse kernel model are then calculated using a modified multiplicative nonnegative quadratic programming algorithm. Unlike most SKD estimators, the proposed D-optimality regression approach is an unsupervised construction algorithm and does not require an empirical desired response for the kernel selection task. The strength of the D-optimality OFR stems from the fact that the algorithm automatically selects a small subset of the most significant kernels associated with the largest eigenvalues of the kernel design matrix, which accounts for most of the energy of the kernel training data, and this also guarantees the most accurate kernel weight estimates. The proposed method is also computationally attractive in comparison with many existing SKD construction algorithms. Extensive numerical investigation demonstrates the ability of this regression-based approach to efficiently construct a very sparse kernel density estimate with excellent test accuracy, and our results show that the proposed method compares favourably with other existing sparse methods, in terms of test accuracy, model sparsity and complexity, for constructing kernel density estimates.
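A minimal sketch of unsupervised kernel selection with a D-optimality flavour: columns of the kernel design matrix are chosen greedily to maximise det(Phi_S' Phi_S), which in an orthogonal forward regression equals the product of the squared norms of the Gram-Schmidt-orthogonalised columns. The placeholder data, kernel width and subset size are assumptions, and the paper's MNQP weight update is omitted.

```python
import numpy as np

rng = np.random.default_rng(5)
x = rng.normal(size=(120, 1))
width = 0.4
Phi = np.exp(-((x - x.T) ** 2) / (2 * width ** 2)) / np.sqrt(2 * np.pi * width ** 2)

n_select = 8                              # assumed size of the sparse subset
residual_cols = Phi.copy()                # candidate columns, progressively orthogonalised
selected = []
for _ in range(n_select):
    energy = np.sum(residual_cols ** 2, axis=0)
    energy[selected] = -np.inf            # never reselect a kernel
    k = int(np.argmax(energy))            # largest orthogonalised energy = largest det gain
    selected.append(k)
    q = residual_cols[:, k] / np.linalg.norm(residual_cols[:, k])
    residual_cols -= np.outer(q, q @ residual_cols)   # deflate all remaining candidates
print("selected kernel indices:", selected)
print("log det of selected Gram matrix:",
      np.linalg.slogdet(Phi[:, selected].T @ Phi[:, selected])[1])
```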
Abstract:
Chatterbox Challenge is an annual web-based contest for artificial conversational entities (ACE). The 2010 instantiation was the tenth consecutive contest, held between March and June in the 60th year following the publication of Alan Turing's influential disquisition 'Computing Machinery and Intelligence'. Loosely based on Turing's viva voce interrogator-hidden witness imitation game, a thought experiment to ascertain a machine's capacity to respond satisfactorily to unrestricted questions, the contest provides a platform for technology comparison and evaluation. This paper provides an insight into the emotion content of the entries since the 2005 Chatterbox Challenge. The authors find that synthetic textual systems, none of which are backed by academic or industry funding, are, on the whole and more than half a century since Weizenbaum's natural language understanding experiment, little further than Eliza in terms of expressing emotion in dialogue. This may be a failure on the part of the academic AI community, which has largely ignored the Turing test as an engineering challenge.
Abstract:
Objectives: Our objective was to test the performance of CA125 in classifying serum samples from a cohort of women with malignant and benign ovarian neoplasms and from age-matched healthy controls, and to assess whether combining information from matrix-assisted laser desorption/ionization (MALDI) time-of-flight profiling could improve diagnostic performance. Materials and Methods: Serum samples from women with ovarian neoplasms and healthy volunteers were subjected to CA125 assay and MALDI time-of-flight mass spectrometry (MS) profiling. Models were built from training data sets using discriminatory MALDI MS peaks in combination with CA125 values, and their ability to classify blinded test samples was assessed. These were compared with models using CA125 threshold levels from 193 patients with ovarian cancer, 290 with benign neoplasms, and 2236 postmenopausal healthy controls. Results: Using a CA125 cutoff of 30 U/mL, an overall sensitivity of 94.8% (96.6% specificity) was obtained when comparing malignancies versus healthy postmenopausal controls, whereas a cutoff of 65 U/mL provided a sensitivity of 83.9% (99.6% specificity). High classification accuracies were obtained for early-stage cancers (93.5% sensitivity). Reasons for the high accuracies include recruitment bias, restriction to postmenopausal women, and inclusion of only primary invasive epithelial ovarian cancer cases. The combination of MS profiling information with CA125 did not significantly improve the specificity/accuracy compared with classifications based on CA125 alone. Conclusions: We report unexpectedly good performance of serum CA125 using threshold classification in discriminating healthy controls and women with benign masses from those with invasive ovarian cancer. This highlights the dependence of diagnostic tests on the characteristics of the study population and the crucial need for authors to provide sufficient relevant details to allow comparison. Our study also shows that MS profiling information adds little to diagnostic accuracy. This finding is in contrast with other reports and shows the limitations of serum MS profiling for biomarker discovery and as a diagnostic tool.
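A minimal sketch of the cutoff-based classification the abstract evaluates: sensitivity and specificity of a single CA125 threshold. The CA125 values and labels below are invented illustrations, not data from the study.

```python
import numpy as np

def sens_spec(values, is_cancer, cutoff):
    """Sensitivity and specificity of the rule `value >= cutoff => classify as cancer`."""
    pred = values >= cutoff
    sensitivity = np.mean(pred[is_cancer])      # true positives / all cancers
    specificity = np.mean(~pred[~is_cancer])    # true negatives / all controls
    return sensitivity, specificity

values = np.array([12.0, 22.0, 35.0, 70.0, 150.0, 18.0, 28.0, 90.0])   # hypothetical CA125 (U/mL)
is_cancer = np.array([False, False, False, True, True, False, False, True])
for cutoff in (30.0, 65.0):
    s, p = sens_spec(values, is_cancer, cutoff)
    print(f"cutoff {cutoff:>4} U/mL: sensitivity {s:.2f}, specificity {p:.2f}")
```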
Abstract:
This study investigated whether children’s fears could be un-learned using Rachman’s indirect pathways for learning fear. We hypothesised that positive information and modelling a non-anxious response are effective methods of un-learning fears acquired through verbal information. One hundred and seven children aged 6–8 years received negative information about one animal and no information about another. Fear beliefs and behavioural avoidance were measured. Children were randomised to receive positive verbal information, modelling, or a control task. Fear beliefs and behavioural avoidance were measured again. Positive information and modelling led to lower fear beliefs and behavioural avoidance than the control condition. Positive information was more effective than modelling in reducing fear beliefs, and both methods significantly reduced behavioural avoidance. The results support Rachman’s indirect pathways as viable fear un-learning pathways and support associative learning theories.
Abstract:
We evaluate a number of real estate sentiment indices to ascertain the current and forward-looking information content that may be useful for forecasting demand and supply activities. Analyzing the dynamic relationships within a Vector Auto-Regression (VAR) framework and using quarterly US data over 1988-2010, we test the efficacy of several sentiment measures by comparing them with other coincident economic indicators. Overall, our analysis suggests that real estate sentiment conveys valuable information that can help predict changes in real estate returns. These findings have important implications for investment decisions, from consumers' as well as institutional investors' perspectives.
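A minimal sketch of the kind of VAR analysis described: real estate returns and a sentiment index in one system, lag order chosen by an information criterion, a Granger-causality check of sentiment for returns, and a short forecast. It uses statsmodels' VAR on simulated quarterly placeholders, not the study's 1988-2010 US series; the column names are illustrative.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(6)
n = 92                                        # roughly quarterly observations, 1988-2010
sentiment = rng.standard_normal(n).cumsum() * 0.1
returns = 0.3 * np.roll(sentiment, 1) + rng.standard_normal(n) * 0.5
data = pd.DataFrame({"re_returns": returns[1:], "sentiment": sentiment[1:]})

model = VAR(data)
results = model.fit(maxlags=4, ic="aic")      # lag order selected by AIC
print(results.summary())

# Does sentiment help predict returns (Granger causality)?
print(results.test_causality("re_returns", ["sentiment"], kind="f").summary())

# One-year-ahead (4-quarter) forecast from the last observed lags
print(results.forecast(data.values[-results.k_ar:], steps=4))
```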
Abstract:
BACKGROUND: Using continuing professional development (CPD) as part of the revalidation of pharmacy professionals has been proposed in the UK but not implemented. We developed a CPD Outcomes Framework (‘the framework’) for scoring CPD records, where the score range was -100 to +150 based on demonstrable relevance and impact of the CPD on practice. OBJECTIVE: This exploratory study aimed to test the outcome of training people to use the framework, through distance-learning material (active intervention), by comparing CPD scores before and after training. SETTING: Pharmacy professionals were recruited in the UK in Reading, Banbury, Southampton, Kingston-upon-Thames and Guildford in 2009. METHOD: We conducted a randomised, double-blinded, parallel-group, before-and-after study. The control group simply received information on new CPD requirements through the post; the active intervention group also received the framework and associated training. Altogether 48 participants (25 control, 23 active) completed the study. All participants submitted CPD records to the research team before and after receiving the posted resources. The records (n=226) were scored blindly by the researchers using the framework. A subgroup of CPD records (n=96) submitted first (before-stage) and rewritten (after-stage) were analysed separately. MAIN OUTCOME MEASURE: Scores for CPD records received before and after distributing group-dependent material through the post. RESULTS: Using a linear-regression model, both analyses found an increase in CPD scores in favour of the active intervention group. For the complete set of records, the effect was a mean difference of 9.9 (95% CI = 0.4 to 19.3), p-value = 0.04. For the subgroup of rewritten records, the effect was a mean difference of 17.3 (95% CI = 5.6 to 28.9), p-value = 0.0048. CONCLUSION: The intervention improved participants’ CPD behaviour. Training pharmacy professionals to use the framework resulted in better CPD activities and CPD records, potentially helpful for revalidation of pharmacy professionals. IMPACT: • Using a bespoke CPD outcomes framework improves the value of pharmacy professionals’ CPD activities and CPD records, with the potential to improve patient care. • The CPD outcomes framework could be helpful to pharmacy professionals internationally who want to improve the quality of their CPD activities and CPD records. • Regulators and officials across Europe and beyond can assess the suitability of the CPD outcomes framework for use in pharmacy CPD and revalidation in their own setting.