912 results for validation tests of PTO


Relevance: 100.00%

Abstract:

BACKGROUND: Prostate cancer (PCa) is a very heterogeneous disease with respect to clinical outcome. This study explored differential DNA methylation in a priori selected genes to diagnose PCa and predict clinical failure (CF) in high-risk patients.

METHODS: A quantitative multiplex, methylation-specific PCR assay was developed to assess promoter methylation of the APC, CCND2, GSTP1, PTGS2 and RARB genes in formalin-fixed, paraffin-embedded tissue samples from 42 patients with benign prostatic hyperplasia and in radical prostatectomy specimens of patients with high-risk PCa, encompassing training and validation cohorts of 147 and 71 patients, respectively. Log-rank tests and univariate and multivariate Cox models were used to investigate the prognostic value of DNA methylation.

RESULTS: Hypermethylation of APC, CCND2, GSTP1, PTGS2 and RARB was highly cancer-specific. However, only GSTP1 methylation was significantly associated with CF in both independent high-risk PCa cohorts. Importantly, trichotomization into low, moderate and high GSTP1 methylation level subgroups was highly predictive of CF. In multivariate analysis, patients with either a low or high GSTP1 methylation level, as compared with the moderate methylation group, were at a higher risk of CF in both the training (hazard ratio [HR], 3.65; 95% CI, 1.65 to 8.07) and validation sets (HR, 4.27; 95% CI, 1.03 to 17.72), as well as in the combined cohort (HR, 2.74; 95% CI, 1.42 to 5.27).

CONCLUSIONS: Classification of primary high-risk tumors into three subtypes based on DNA methylation can be combined with clinico-pathological parameters for a more informative risk stratification of these PCa patients.
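
As a rough illustration of the kind of survival analysis described above, the following sketch fits a Cox model for clinical failure with a trichotomized GSTP1 methylation level, coding the low and high groups against the moderate reference group. It assumes a lifelines-style workflow; the column names, input file and tertile cut-points are hypothetical, not the study's actual data or thresholds.

```python
# Minimal sketch: Cox regression of clinical failure (CF) on a trichotomized
# GSTP1 methylation level, in the spirit of the abstract's analysis.
# Column names ('time_to_cf', 'cf_event', 'gstp1_pct') are hypothetical.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("cohort.csv")  # hypothetical input file

# Trichotomize methylation into low / moderate / high groups, here by
# tertiles (the study's actual cut-points are not given in the abstract).
df["gstp1_group"] = pd.qcut(df["gstp1_pct"], q=3,
                            labels=["low", "moderate", "high"])
# Code the extreme groups (low or high) against the moderate reference group.
df["extreme"] = (df["gstp1_group"] != "moderate").astype(int)

cph = CoxPHFitter()
cph.fit(df[["time_to_cf", "cf_event", "extreme"]],
        duration_col="time_to_cf", event_col="cf_event")
cph.print_summary()  # hazard ratio for 'extreme' = exp(coef)
```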

Relevance: 100.00%

Abstract:

Sizes and powers of selected two-sample tests of the equality of survival distributions are compared by simulation for small samples from unequally, randomly censored exponential distributions. The tests investigated include parametric tests (F, Score, Likelihood, Asymptotic), log-rank tests (Mantel, Peto-Peto), and Wilcoxon-type tests (Gehan, Prentice). Equal-sized samples, n = 8, 16, 32, with 1000 (size) and 500 (power) simulation trials, are compared for 16 combinations of the censoring proportions 0%, 20%, 40%, and 60%. For n = 8 and 16, the Asymptotic, Peto-Peto, and Wilcoxon tests perform at nominal 5% size expectations, but the F, Score and Mantel tests exceeded the 5% size confidence limits for one third of the censoring combinations. For n = 32, all tests showed proper size, with the Peto-Peto test the most conservative in the presence of unequal censoring. Powers of all tests are compared for exponential hazard ratios of 1.4 and 2.0. There is little difference in the power characteristics of the tests within the classes considered. The Mantel test showed 90% to 95% power efficiency relative to the parametric tests. Wilcoxon-type tests have the lowest relative power but are robust to differential censoring patterns. A modified Peto-Peto test shows power comparable to the Mantel test. For n = 32, a specific Weibull-exponential comparison of crossing survival curves suggests that the relative powers of log-rank and Wilcoxon-type tests depend on the scale parameter of the Weibull distribution: Wilcoxon-type tests appear more powerful than log-rank tests for late-crossing survival curves and less powerful for early-crossing ones. Guidelines for the appropriate selection of two-sample tests are given.
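
A minimal sketch of one cell of such a simulation study is given below: two equal-sized exponential samples under the null hypothesis, with unequal random censoring, compared by the log-rank (Mantel) test, the empirical size being the rejection rate over the trials. The sample size and censoring proportions are illustrative choices, not the thesis's exact design.

```python
# Minimal size simulation: equal exponential hazards (null true), unequal
# random censoring (~20% in group A, ~40% in group B), log-rank test.
import numpy as np
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)
n, trials, alpha = 16, 1000, 0.05
rejections = 0
for _ in range(trials):
    t_a, t_b = rng.exponential(1.0, n), rng.exponential(1.0, n)
    # Exponential censoring times tuned to the target censoring proportions:
    c_a = rng.exponential(4.0, n)   # P(censored) = 1/(1+4)   = 0.20
    c_b = rng.exponential(1.5, n)   # P(censored) = 1/(1+1.5) = 0.40
    obs_a, obs_b = np.minimum(t_a, c_a), np.minimum(t_b, c_b)
    res = logrank_test(obs_a, obs_b,
                       event_observed_A=(t_a <= c_a).astype(int),
                       event_observed_B=(t_b <= c_b).astype(int))
    rejections += res.p_value < alpha
print(f"empirical size: {rejections / trials:.3f}")  # should be near 0.05
```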

Relevance: 100.00%

Abstract:

This article investigates experimentally the application of health monitoring techniques to assess the damage in a particular kind of hysteretic (metallic) damper, called the web plastifying damper, subjected to cyclic loading. In general terms, hysteretic dampers are increasingly used as passive control systems in advanced earthquake-resistant structures. Nonparametric statistical processing of the signals obtained from simple vibration tests of the web plastifying damper is used here to propose an area index of damage. This area index of damage is compared with an alternative energy-based index of damage proposed in past research, which is based on the decomposition of the load-displacement curve experienced by the damper. The index of damage has been proven to accurately predict the level of damage and the proximity to failure of the web plastifying damper, but obtaining the load-displacement curve for its direct calculation requires costly instrumentation. For this reason, the aim of this study is to estimate the index of damage indirectly from simple vibration tests, which call for much simpler and cheaper instrumentation, through the auxiliary area index of damage. The web plastifying damper is a particular type of hysteretic damper that uses the out-of-plane plastic deformation of the web of I-section steel segments as a source of energy dissipation. Four I-section steel segments with similar geometry were subjected to the same pattern of cyclic loading, and the damage was evaluated with both indexes at several stages of the loading process. A good correlation was found between the area index of damage and the index of damage. Based on this correlation, simple formulae are proposed to estimate the index of damage from the area index of damage.
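
Since the energy-based index of damage builds on the load-displacement curve, a natural building block is the energy dissipated per cycle, i.e. the area enclosed by the hysteresis loop. The sketch below computes that area with the shoelace formula on a synthetic loop; it illustrates the underlying quantity, not the paper's index itself.

```python
# Energy dissipated in one loading cycle = area enclosed by the closed
# load-displacement hysteresis loop, computed with the shoelace formula.
import numpy as np

def loop_area(displacement, load):
    """Enclosed area of a closed load-displacement cycle (shoelace formula)."""
    x, y = np.asarray(displacement), np.asarray(load)
    return 0.5 * abs(np.sum(x * np.roll(y, -1) - np.roll(x, -1) * y))

# Illustrative elliptical hysteresis loop (synthetic, not measured data):
theta = np.linspace(0, 2 * np.pi, 500, endpoint=False)
disp = 10.0 * np.cos(theta)            # displacement, mm
force = 5.0 * np.cos(theta - 0.5)      # load, kN; phase lag => dissipation
print(f"dissipated energy per cycle: {loop_area(disp, force):.1f} kN*mm")
```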

Relevance: 100.00%

Abstract:

This paper reports extensive tests of empirical equations developed by different authors for harbour breakwater overtopping. First, the existing equations are compiled and evaluated as tools for estimating the overtopping rates on sloping and vertical breakwaters. These equations are then tested using data obtained in a number of laboratory studies performed at the Centre for Harbours and Coastal Studies of CEDEX, Spain. It was found that the recommended application ranges of the empirical equations typically deviate from those revealed by the experimental tests. In addition, a neural network model developed within the European CLASH project is tested. The wind effects on overtopping are also assessed using a reduced-scale physical model.
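
Many of the empirical equations for sloping breakwaters share an exponential form relating dimensionless discharge to dimensionless freeboard. The sketch below implements a generic Owen-type expression of that family; the coefficients a and b are illustrative placeholders, not values from the paper or from CLASH.

```python
# Generic Owen-type overtopping formula:
#   Q* = a * exp(-b * R*), with
#   Q* = q / (g * Hs * Tm)  (dimensionless discharge)
#   R* = Rc / (Tm * sqrt(g * Hs))  (dimensionless freeboard)
# Coefficients a and b below are illustrative, not fitted values.
import math

def overtopping_rate(Hs, Tm, Rc, a=0.01, b=20.0, g=9.81):
    """Mean overtopping discharge q (m^3/s per m run) from an Owen-type formula."""
    R_star = Rc / (Tm * math.sqrt(g * Hs))   # dimensionless freeboard
    Q_star = a * math.exp(-b * R_star)       # dimensionless discharge
    return Q_star * g * Hs * Tm

# Example: Hs = 2 m, Tm = 6 s, crest freeboard Rc = 3 m
print(f"q = {overtopping_rate(2.0, 6.0, 3.0):.4e} m^3/s per m")
```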

Relevance: 100.00%

Abstract:

Social identity theory offers an important lens to improve understanding of founders as enterprising individuals, the venture creation process, and its outcomes. Yet further advances are hindered by the lack of valid scales to measure founders’ social identities. Drawing on social identity theory and a systematic classification of founders’ social identities (Darwinians, Communitarians, and Missionaries), we develop and test a corresponding 15-item scale in the Alpine region and validate it in 13 additional countries and regions. The scale makes it possible to identify founders’ social identities and to relate them to processes and outcomes in entrepreneurship. The scale is available online in 15 languages.
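
For illustration, a typed identity scale of this kind is often scored by averaging the items of each subscale and assigning the respondent the identity with the highest mean. The sketch below assumes a five-items-per-type mapping; both the mapping and the decision rule are assumptions for illustration, not the published scoring key.

```python
# Hypothetical scoring of a 15-item, three-type founder identity scale:
# mean of each assumed 5-item subscale, dominant type = highest mean.
import numpy as np

SUBSCALES = {"Darwinian": slice(0, 5),       # assumed item mapping
             "Communitarian": slice(5, 10),
             "Missionary": slice(10, 15)}

def dominant_identity(item_scores):
    """item_scores: 15 Likert responses, ordered by the assumed mapping."""
    x = np.asarray(item_scores, dtype=float)
    means = {name: x[idx].mean() for name, idx in SUBSCALES.items()}
    return max(means, key=means.get)

print(dominant_identity([5, 4, 5, 4, 5,  2, 1, 2, 2, 1,  3, 3, 2, 3, 2]))
# -> "Darwinian"
```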

Relevance: 100.00%

Abstract:

cont.
VI. The application of standard measurements to school administration. [By] D.C. Bliss.
VII. A half-year's progress in the achievement of one school system: A. The progress as measured by the Thorndike visual vocabulary test; B. The progress as measured by the Courtis tests, series B. [By] H.G. Childs.
VIII. Courtis tests in arithmetic: value to superintendents and teacher. [By] S.A. Courtis.
IX. Use of standard tests at Salt Lake City, Utah. [By] E. P. Cubberley.
X. Reading. [By] C.H. Judd.
XI. Studies by the Bureau of research and efficiency of Kansas City, Mo. [By] George Melcher.
XII. The effects of efficiency tests in reading on a city school system. [By] E.E. Oberholtzer.
XIII. Investigation of spelling in the schools of Oakland, Cal. [By] J.B. Sears.
XIV. Standard tests as aids in the classification and promotion of pupils. [By] Daniel Starch.
XV. The use of mental tests in the school. [By] G.M. Whipple.

Relevance: 100.00%

Abstract:

Approximately half of current contact lens wearers suffer from dryness and discomfort, particularly towards the end of the day. Contact lens practitioners have a number of dry eye tests available to help them predict which of their patients may be at risk of contact lens drop-out and advise them accordingly. This thesis set out to rationalize these tests, to see whether any are of more diagnostic significance than others. This doctorate found the following.
(1) The Keratograph, a device which permits an automated, examiner-independent technique for measuring non-invasive tear break-up time (NITBUT), consistently measured NITBUT shorter than the Tearscope. When measuring central corneal curvature, the spherical equivalent power of the cornea was measured as significantly flatter than with a validated automated keratometer.
(2) Non-invasive and invasive tear break-up times correlated significantly with each other, but not with the other tear metrics. Symptomology, assessed using the OSDI questionnaire, correlated more with tests indicating possible damage to the ocular surface (including LWE, LIPCOF and conjunctival staining) than with tests of either tear volume or stability. Cluster analysis showed some statistically significant groups of patients with different sign and symptom profiles; the largest cluster demonstrated poor tear quality with both non-invasive and invasive tests, low tear volume and more symptoms.
(3) Care should be taken in fitting patients new to contact lenses if they have a NITBUT of less than 10 s or an OSDI comfort rating greater than 4.2, as they are more likely to drop out within the first 6 months. Cluster analysis was not found to be beneficial in predicting which patients will succeed with lenses and which will not. A combination of the OSDI questionnaire and a NITBUT measurement was most useful both in diagnosing dry eye and in predicting contact lens drop-out.
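
The reported cut-offs in finding (3) amount to a simple screening rule, sketched below; the function and argument names are hypothetical, and the rule simply restates the thesis's two thresholds.

```python
# Screening rule from the reported cut-offs: flag prospective wearers as at
# elevated risk of early drop-out when NITBUT < 10 s or OSDI comfort > 4.2.
def at_risk_of_dropout(nitbut_seconds: float, osdi_comfort: float) -> bool:
    """True if the patient matches either reported risk cut-off."""
    return nitbut_seconds < 10.0 or osdi_comfort > 4.2

print(at_risk_of_dropout(nitbut_seconds=8.5, osdi_comfort=3.0))   # True
print(at_risk_of_dropout(nitbut_seconds=12.0, osdi_comfort=2.0))  # False
```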

Relevance: 100.00%

Abstract:

A sizeable amount of the testing in eye care requires either the identification of targets such as letters, to assess functional vision, or the subjective evaluation of imagery by an examiner. Computers can render a variety of different targets on their monitors and can be used to store and analyse ophthalmic images. However, existing computing hardware tends to be large, screen resolutions are often too low, and objective assessments of ophthalmic images are unreliable. Recent advances in mobile computing hardware and computer-vision systems can be used to enhance clinical testing in optometry. High-resolution touch screens embedded in mobile devices can render targets at a wide variety of distances and can be used to record and respond to patient responses, automating testing methods. This has opened up new opportunities in computerised near-vision testing. Equally, new image processing techniques can be used to increase the validity and reliability of objective computer-vision systems.

Three novel apps, for assessing reading speed, contrast sensitivity and amplitude of accommodation, were created by the author to demonstrate the potential of mobile computing to enhance clinical measurement. The reading speed app could present sentences effectively, control illumination and automate the testing procedure for reading speed assessment. The contrast sensitivity app made use of a bit-stealing technique and a swept-frequency target to rapidly assess a patient's full contrast sensitivity function at both near and far distances. Finally, customised electronic hardware was created and interfaced to an app on a smartphone to allow free-space measurement of the amplitude of accommodation. A new geometrical model of the tear film and a ray-tracing simulation of a Placido disc topographer were produced to provide insights into the effect of tear film breakdown on ophthalmic images. Furthermore, a new computer-vision system, using a novel eyelash segmentation technique, was created to demonstrate the potential of computer vision for the clinical assessment of tear stability.

Studies undertaken by the author to assess the validity and repeatability of the novel apps found that their repeatability was comparable to, or better than, existing clinical methods for reading speed and contrast sensitivity assessment. Furthermore, the apps offered reduced examination times in comparison with their paper-based equivalents. The reading speed and amplitude of accommodation apps correlated highly with existing methods of assessment, supporting their validity. Questions remain over the validity of using a swept-frequency sine-wave target to assess contrast sensitivity functions, as no clinical test provides the same range of spatial frequencies and contrasts, nor equivalent assessment at distance and near. A validation study of the new computer-vision system found that the author's tear metric correlated better with existing subjective measures of tear film stability than that of a competing computer-vision system; however, its repeatability was poor in comparison with the subjective measures, owing to eyelash interference. The new mobile apps, computer-vision system and studies outlined in this thesis provide further insight into the potential of applying mobile and image-processing technology to enhance clinical testing by eye care professionals.
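
To make the swept-frequency idea concrete, the sketch below renders a Campbell-Robson-style chart with NumPy: spatial frequency sweeps along one axis while contrast falls off along the other. The sweep range, contrast range and image size are illustrative, not the app's actual parameters, and the bit-stealing step for extra grey levels is omitted.

```python
# Swept-frequency sine-wave target: frequency sweeps along x, contrast
# decreases along y (Campbell-Robson-style chart). Parameters illustrative.
import numpy as np

W, H = 800, 600
x = np.linspace(0.0, 1.0, W)
y = np.linspace(0.0, 1.0, H)[:, None]

freq = 2.0 * 32.0 ** x                       # instantaneous frequency, 2 -> 64 cycles
phase = 2.0 * np.pi * np.cumsum(freq) / W    # integrate frequency for a smooth sweep
contrast = 10.0 ** (-2.0 * y)                # 1.0 at the top row down to 0.01

grating = 0.5 + 0.5 * contrast * np.sin(phase)   # luminance in [0, 1]
image = (grating * 255).astype(np.uint8)         # 8-bit grayscale image
```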

Relevance: 100.00%

Abstract:

In 2010, the American Association of State Highway and Transportation Officials (AASHTO) released a safety analysis software system known as SafetyAnalyst. SafetyAnalyst implements the empirical Bayes (EB) method, which requires the use of Safety Performance Functions (SPFs). The system is equipped with a set of national default SPFs, and the software calibrates the default SPFs to represent the agency's safety performance. However, it is recommended that agencies generate agency-specific SPFs whenever possible. Many investigators support the view that the agency-specific SPFs represent the agency data better than the national default SPFs calibrated to agency data. Furthermore, it is believed that the crash trends in Florida are different from the states whose data were used to develop the national default SPFs. In this dissertation, Florida-specific SPFs were developed using the 2008 Roadway Characteristics Inventory (RCI) data and crash and traffic data from 2007-2010 for both total and fatal and injury (FI) crashes. The data were randomly divided into two sets, one for calibration (70% of the data) and another for validation (30% of the data). The negative binomial (NB) model was used to develop the Florida-specific SPFs for each of the subtypes of roadway segments, intersections and ramps, using the calibration data. Statistical goodness-of-fit tests were performed on the calibrated models, which were then validated using the validation data set. The results were compared in order to assess the transferability of the Florida-specific SPF models. The default SafetyAnalyst SPFs were calibrated to Florida data by adjusting the national default SPFs with local calibration factors. The performance of the Florida-specific SPFs and SafetyAnalyst default SPFs calibrated to Florida data were then compared using a number of methods, including visual plots and statistical goodness-of-fit tests. The plots of SPFs against the observed crash data were used to compare the prediction performance of the two models. Three goodness-of-fit tests, represented by the mean absolute deviance (MAD), the mean square prediction error (MSPE), and Freeman-Tukey R2 (R2FT), were also used for comparison in order to identify the better-fitting model. The results showed that Florida-specific SPFs yielded better prediction performance than the national default SPFs calibrated to Florida data. The performance of Florida-specific SPFs was further compared with that of the full SPFs, which include both traffic and geometric variables, in two major applications of SPFs, i.e., crash prediction and identification of high crash locations. The results showed that both SPF models yielded very similar performance in both applications. These empirical results support the use of the flow-only SPF models adopted in SafetyAnalyst, which require much less effort to develop compared to full SPFs.
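
As a sketch of the modelling step described above, the code below fits a flow-only SPF of the common form N = exp(b0) · AADT^b1 · L by negative binomial regression, with segment length as an offset, and computes two of the named fit measures (MAD and MSPE). The column names and input file are hypothetical, and for brevity the measures are computed on the calibration data rather than a held-out validation set.

```python
# Flow-only SPF via negative binomial regression, plus MAD and MSPE.
# Data columns ('crashes', 'aadt', 'length_mi') are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("segments.csv")  # hypothetical calibration file

# SPF form: N = exp(b0) * AADT^b1 * L, with length entering as an offset.
X = sm.add_constant(np.log(df["aadt"]))
model = sm.GLM(df["crashes"], X,
               family=sm.families.NegativeBinomial(),
               offset=np.log(df["length_mi"])).fit()
pred = model.predict(X, offset=np.log(df["length_mi"]))

mad = np.mean(np.abs(df["crashes"] - pred))    # mean absolute deviance
mspe = np.mean((df["crashes"] - pred) ** 2)    # mean square prediction error
print(model.params, mad, mspe)
```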

Relevance: 100.00%

Abstract:

The article presents a study of a CEFR B2-level reading subtest that forms part of the Slovenian national secondary school leaving examination in English as a foreign language, and compares test-takers’ actual performance (objective difficulty) with test-takers’ and experts’ perceptions of item difficulty (subjective difficulty). The study also analyses the test-takers’ comments on item difficulty obtained from a while-reading questionnaire. The results are discussed in the framework of existing research in the field of (the assessment of) reading comprehension, and are addressed with regard to their implications for item writing, FL teaching and curriculum development.
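
One standard way to make this comparison is to take the proportion of correct responses per item as the objective difficulty (facility value) and correlate it with mean perceived-difficulty ratings, as sketched below on toy data; this is offered as the generic method, not the article's exact analysis.

```python
# Objective item difficulty (facility values) vs. perceived difficulty.
import numpy as np
from scipy.stats import spearmanr

# rows = test-takers, cols = items; 1 = correct, 0 = incorrect (toy data)
rng = np.random.default_rng(1)
responses = rng.integers(0, 2, size=(200, 20))
perceived = rng.uniform(1, 5, size=20)   # mean questionnaire rating per item

objective = responses.mean(axis=0)       # facility value per item
rho, p = spearmanr(objective, perceived)
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
# In real data a negative rho is expected: easier items, lower perceived difficulty.
```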

Relevance: 100.00%

Abstract:

Patterns of cognitive change over micro-longitudinal timescales (i.e., ranging from hours to days) are associated with a wide range of age-related health and functional outcomes. However, practical issues with conducting high-frequency assessments make investigations of micro-longitudinal cognition costly and burdensome to run. One way of addressing this is to develop cognitive assessments that can be performed by older adults, in their own homes, without a researcher being present. Here, we address the question of whether reliable and valid cognitive data can be collected over micro-longitudinal timescales using unsupervised cognitive tests.

In study 1, 48 older adults completed two touchscreen cognitive tests, on three occasions, in controlled conditions, alongside a battery of standard tests of cognitive function. In study 2, 40 older adults completed the same two computerized tasks on multiple occasions, over three separate week-long periods, in their own homes, without a researcher present. Here, the tasks were incorporated into a wider touchscreen system (Novel Assessment of Nutrition and Ageing, NANA) developed to assess multiple domains of health and behavior. Standard tests of cognitive function were also administered before participants used the NANA system.

Performance on the two NANA cognitive tasks showed convergent validity with, and similar levels of reliability to, the standard cognitive battery in both studies. Completion and accuracy rates were also very high. These results show that reliable and valid cognitive data can be collected from older adults using unsupervised computerized tests, thus affording new opportunities for the investigation of cognitive function.
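
A minimal sketch of the kind of reliability and validity checks reported here: test-retest correlation between repeated unsupervised sessions, and convergent correlation against a supervised standard measure, computed on simulated toy scores rather than the study's data.

```python
# Test-retest reliability and convergent validity on simulated scores.
import numpy as np

rng = np.random.default_rng(2)
true_ability = rng.normal(0, 1, 40)                 # 40 participants
session1 = true_ability + rng.normal(0, 0.3, 40)    # unsupervised, occasion 1
session2 = true_ability + rng.normal(0, 0.3, 40)    # unsupervised, occasion 2
standard = true_ability + rng.normal(0, 0.3, 40)    # supervised standard battery

retest_r = np.corrcoef(session1, session2)[0, 1]    # test-retest reliability
validity_r = np.corrcoef(session1, standard)[0, 1]  # convergent validity
print(f"test-retest r = {retest_r:.2f}, convergent r = {validity_r:.2f}")
```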

Relevance: 100.00%

Abstract:

The ability to estimate the impact of ongoing climate change on the hydrological behaviour of hydro-systems is a necessity for anticipating the inevitable and necessary adaptations our societies must consider. In this context, this doctoral project presents a study evaluating the sensitivity of future hydrological projections to: (i) the non-robustness of hydrological model parameter identification, (ii) the use of several equifinal parameter sets, and (iii) the use of different hydrological model structures. To quantify the impact of the first source of uncertainty on model outputs, four climatically contrasted sub-periods are first identified within the observed records. The models are calibrated on each of these four periods and the resulting outputs are analysed in calibration and validation following the four configurations of the differential split-sample test (Klemeš, 1986; Wilby, 2005; Seiller et al., 2012; Refsgaard et al., 2014). To study the second source of uncertainty, the equifinality of parameter sets is then taken into account by considering, for each calibration type, the outputs associated with equifinal parameter sets. Finally, to evaluate the third source of uncertainty, five hydrological models of different levels of complexity (GR4J, MORDOR, HSAMI, SWAT and HYDROTEL) are applied to the Au Saumon river catchment in Quebec. The three sources of uncertainty are evaluated both under past observed climatic conditions and under future climatic conditions. The results show that, given the evaluation method followed in this doctorate, the use of hydrological models of different levels of complexity is the main source of variability in streamflow projections under future climatic conditions, followed by the lack of robustness of parameter identification. The hydrological projections generated by an ensemble of equifinal parameter sets are close to those associated with the optimal parameter set. Consequently, more effort should be invested in improving model robustness for climate change impact studies, in particular by developing more appropriate model structures and by proposing calibration procedures that increase their robustness. This work provides a detailed answer on our capacity to diagnose the impacts of climate change on the water resources of the Au Saumon catchment, and proposes an original methodological framework that can be directly applied or adapted to other hydro-climatic contexts.
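
A sketch of the differential split-sample loop is given below, with a deliberately trivial one-parameter runoff model standing in for GR4J, MORDOR, HSAMI, SWAT or HYDROTEL; the toy data, model and Nash-Sutcliffe score are illustrative only.

```python
# Differential split-sample test (Klemeš, 1986): calibrate on one
# climatically contrasted sub-period, validate on the others.
import numpy as np

rng = np.random.default_rng(3)
# Toy (precipitation, "observed" flow) records for four contrasted sub-periods
periods = {}
for name, mean_p in [("wet", 8.0), ("dry", 2.0), ("warm", 5.0), ("cold", 4.0)]:
    p = rng.gamma(2.0, mean_p / 2.0, 365)
    periods[name] = (p, 0.6 * p + rng.normal(0, 0.5, 365))

def calibrate(p, q):
    """Least-squares runoff ratio: a trivial stand-in for real calibration."""
    return float(np.sum(p * q) / np.sum(p * p))

def nse(sim, obs):
    """Nash-Sutcliffe efficiency of simulated vs. observed flow."""
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

for cal, (pc, qc) in periods.items():
    k = calibrate(pc, qc)
    for val, (pv, qv) in periods.items():
        if val != cal:  # contrasting climates probe parameter robustness
            print(f"cal on {cal:4s}, val on {val:4s}: NSE = {nse(k * pv, qv):.3f}")
```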

Relevance: 100.00%

Abstract:

The aim of this paper was to obtain evidence of the validity of the LSB-50 (de Rivera & Abuín, 2012), a screening measure of psychopathology, in Argentinean adolescents. The sample consisted of 1002 individuals (49.7% male, 50.3% female) between 12 and 18 years old (M = 14.98; SD = 1.99). A cross-validation study and factorial invariance studies were performed on samples divided by sex and age to test whether a seven-factor structure corresponding to seven clinical scales (Hypersensitivity, Obsessive-Compulsive, Anxiety, Hostility, Somatization, Depression, and Sleep Disturbance) was adequate for the LSB-50. The seven-factor structure proved to be suitable for all the subsamples. Next, the fit of the seven-factor structure was studied simultaneously in the aforementioned subsamples through hierarchical models that imposed different equivalence constraints. Results indicated the invariance of the seven clinical dimensions of the LSB-50. Ordinal alphas showed good internal consistency for all the scales. Finally, the correlations with a diagnostic measure of psychopathology (PAI-A) indicated moderate convergence. It is concluded that the analyses performed provide robust evidence of construct validity for the LSB-50.
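
For illustration, the sketch below computes classical Cronbach's alpha for one scale from a respondents-by-items score matrix; the study reports ordinal alphas, which additionally require polychoric correlations, so this is the simpler analogue rather than the study's exact procedure.

```python
# Classical Cronbach's alpha: alpha = k/(k-1) * (1 - sum(item var)/var(total)).
import numpy as np

def cronbach_alpha(items):
    """items: 2-D array, rows = respondents, cols = items of one scale."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars / total_var)

rng = np.random.default_rng(4)
latent = rng.normal(0, 1, (300, 1))
scores = latent + rng.normal(0, 0.8, (300, 7))  # 7 items of one toy scale
print(f"alpha = {cronbach_alpha(scores):.2f}")
```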
