22 results for computer prediction


Relevance:

20.00%

Publisher:

Abstract:

This thesis describes an ancillary project to the Early Diagnosis of Mesothelioma and Lung Cancer in Prior Asbestos Workers study and was conducted to determine the effects of asbestos exposure, pulmonary function, and cigarette smoking in the prediction of pulmonary fibrosis. A total of 613 workers who were occupationally exposed to asbestos for an average of 25.9 (SD = 14.69) years were sampled from Sarnia, Ontario. A structured questionnaire was administered during a face-to-face interview, along with a low-dose computed tomography (LDCT) scan of the thorax. Of these, 65 workers (10.7%, 95% CI 8.12-12.24) had LDCT-detected pulmonary fibrosis. The model predicting fibrosis included the variables age, smoking (dichotomized), post-FVC% splines, and post-FEV1% splines. This model had a receiver operating characteristic area under the curve of 0.738. The calibration of the model was evaluated with the R statistical program, and the bootstrap optimism-corrected calibration slope was 0.692. Thus, our model demonstrated moderate predictive performance.
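The reported AUC of 0.738 summarizes discrimination: the probability that a randomly chosen worker with fibrosis receives a higher predicted risk than a randomly chosen worker without it. A minimal pure-Python sketch of that computation (the labels and scores below are invented illustrative values, not data from the study):

```python
def roc_auc(labels, scores):
    """Area under the ROC curve, computed as the probability that a
    randomly chosen positive case scores higher than a randomly
    chosen negative case (ties count as 1/2)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical predicted fibrosis risks; 1 = LDCT-detected fibrosis
labels = [1, 1, 1, 0, 0, 0, 0, 0]
scores = [0.82, 0.64, 0.30, 0.55, 0.21, 0.15, 0.40, 0.09]
print(round(roc_auc(labels, scores), 3))  # → 0.867
```

An AUC of 0.5 corresponds to chance-level discrimination and 1.0 to perfect separation, which is why 0.738 is characterized as moderate.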

Relevance:

20.00%

Publisher:

Abstract:

The purpose of this study is to examine the impact of the choice of cut-off points, sampling procedures, and the business cycle on the accuracy of bankruptcy prediction models. Misclassification can result in erroneous predictions, leading to prohibitive costs to firms, investors, and the economy. To test the impact of the choice of cut-off points and sampling procedures, three bankruptcy prediction models are assessed: Bayesian, Hazard, and Mixed Logit. A salient feature of the study is that the analysis includes both parametric and nonparametric bankruptcy prediction models. A sample of firms from the Lynn M. LoPucki Bankruptcy Research Database in the U.S. was used to evaluate the relative performance of the three models. The choice of a cut-off point and sampling procedures were found to affect the rankings of the various models. In general, the results indicate that the empirical cut-off point estimated from the training sample resulted in the lowest misclassification costs for all three models. Although the Hazard and Mixed Logit models resulted in lower costs of misclassification in the randomly selected samples, the Mixed Logit model did not perform as well across varying business cycles. In general, the Hazard model has the highest predictive power. However, the higher predictive power of the Bayesian model, when the ratio of the cost of Type I errors to the cost of Type II errors is high, is relatively consistent across all sampling methods. Such an advantage of the Bayesian model may make it more attractive in the current economic environment. This study extends recent research comparing the performance of bankruptcy prediction models by identifying the conditions under which a model performs better. It also addresses the concerns of a range of user groups, including auditors, shareholders, employees, suppliers, rating agencies, and creditors, with respect to assessing failure risk.
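The interplay between the cut-off point and the Type I/Type II cost ratio can be made concrete: an empirical cut-off is the candidate threshold that minimizes total misclassification cost on the training sample. A hedged sketch (the probabilities, labels, and cost weights below are invented for illustration; in the study they would come from the fitted models):

```python
def misclassification_cost(labels, probs, cutoff, c_type1, c_type2):
    """Total cost at a given cutoff. Type I error: a bankrupt firm
    (label 1) classified as healthy; Type II error: a healthy firm
    (label 0) flagged as bankrupt."""
    type1 = sum(1 for y, p in zip(labels, probs) if y == 1 and p < cutoff)
    type2 = sum(1 for y, p in zip(labels, probs) if y == 0 and p >= cutoff)
    return c_type1 * type1 + c_type2 * type2

def empirical_cutoff(labels, probs, c_type1, c_type2):
    """Cut-off estimated from the training sample: the candidate
    threshold with the lowest total misclassification cost."""
    candidates = sorted(set(probs)) + [1.01]  # 1.01 = classify all healthy
    return min(candidates,
               key=lambda c: misclassification_cost(labels, probs, c,
                                                    c_type1, c_type2))

# Toy training sample: 1 = bankrupt; Type I errors cost 10x Type II
labels = [1, 1, 0, 0, 0]
probs = [0.8, 0.4, 0.6, 0.3, 0.1]
print(empirical_cutoff(labels, probs, c_type1=10, c_type2=1))  # → 0.4
```

With a high Type I/Type II cost ratio, the chosen cut-off drops so that fewer bankrupt firms are missed, which is the mechanism behind the Bayesian model's advantage described above.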

Relevance:

20.00%

Publisher:

Abstract:

Please consult the paper edition of this thesis to read. It is available on the 5th Floor of the Library at Call Number: Z 9999 E38 D56 1992

Relevance:

20.00%

Publisher:

Abstract:

While the influence of computer technology has been widely studied in a variety of contexts, the drawing teaching studio is a particularly interesting context because of the juxtaposition of a traditional medium and computer technology. For this study, 5 Canadian postsecondary teachers engaged in a 2-round Delphi interview process to discuss their responses to the influence of computer technology on their drawing pedagogy. Data sources consisted of transcribed interviews. Findings indicated that artist teachers are both cautious to embrace and curious to explore appropriate uses of computer technology in their drawing pedagogy. Artist teachers are both critical and optimistic about the influence of computer technology.

Relevance:

20.00%

Publisher:

Abstract:

Later-born siblings of children with autism spectrum disorder (ASD) are considered at biological risk for ASD and the broader autism phenotype. Early screening may detect early signs of ASD and facilitate intervention as soon as possible. This follow-up study revisits and re-examines a second-degree autism screener for children at biological risk of autism, the Parent Observation Early Markers Scale (POEMS; Feldman et al., 2012). Using available follow-up information, 110 children (the original 108 infants plus 2 infants recruited after the completion of the original study) were divided into three groups: a diagnosed group (n = 13), a lost-diagnosis group (n = 5), and an undiagnosed group (n = 92). The POEMS continued to show acceptable predictive validity. The POEMS total scores and mean number of elevated items were significantly higher in the diagnosed group than in the undiagnosed group. The lost-diagnosis group did not differ from the undiagnosed group on POEMS total scores and elevated items at any age, but it had significantly lower total scores and fewer elevated items than the diagnosed group starting at 18 months. Both ASD core and subsidiary behaviours differentiated the diagnosed and undiagnosed groups from 9 to 36 months of age. Using 70 as a cut-off score, sensitivity, specificity, and positive predictive value (PPV) were .69, .84, and .38, respectively. The study provides further evidence that the POEMS may serve as a low-cost early screener for ASD in at-risk children and may pinpoint specific developmental and behavioural problems amenable to very early intervention.
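The reported sensitivity (.69), specificity (.84), and PPV (.38) all follow from the 2x2 table induced by the cut-off score of 70. A small sketch of that calculation (the scores and diagnoses below are invented toy values, not the study's data):

```python
def screening_metrics(scores, diagnosed, cutoff=70):
    """Sensitivity, specificity, and positive predictive value for a
    screener that flags children scoring at or above the cutoff."""
    tp = sum(1 for s, d in zip(scores, diagnosed) if s >= cutoff and d)
    fn = sum(1 for s, d in zip(scores, diagnosed) if s < cutoff and d)
    fp = sum(1 for s, d in zip(scores, diagnosed) if s >= cutoff and not d)
    tn = sum(1 for s, d in zip(scores, diagnosed) if s < cutoff and not d)
    return tp / (tp + fn), tn / (tn + fp), tp / (tp + fp)

# Invented POEMS-style totals; True = later ASD diagnosis
scores = [90, 80, 60, 75, 50, 40]
diagnosed = [True, True, True, False, False, False]
sens, spec, ppv = screening_metrics(scores, diagnosed)
print(round(sens, 2), round(spec, 2), round(ppv, 2))
```

A low PPV alongside good sensitivity and specificity, as in the study, is typical when the screened condition is rare in the sample: most flags come from the much larger undiagnosed group.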

Relevance:

20.00%

Publisher:

Abstract:

As a result of mutations in genes, which are simple changes in our DNA, we can have undesirable phenotypes known as genetic diseases or disorders. These small changes, which happen frequently, can have extreme results. Understanding and identifying these changes, and associating the mutated genes with genetic diseases, can play an important role in our health by enabling better diagnostic and therapeutic strategies for these diseases. Years of experiments have produced a vast amount of data on the human genome and on different genetic diseases, but these data still need to be processed properly to extract useful information. This work is an effort to analyze some useful datasets and to apply different techniques to associate genes with genetic diseases. Two genetic diseases were studied here: Parkinson’s disease and breast cancer. Using genetic programming, we analyzed the complex network around the known disease genes of these diseases and, based on that, generated a ranking of genes by their relevance to each disease. To generate these rankings, centrality measures were calculated for all nodes in the complex network surrounding the known disease genes of the given genetic disease. Using genetic programming, all nodes were assigned scores based on the similarity of their centrality measures to those of the known disease genes. The results showed that this method is successful at finding patterns in centrality measures, and that the highly ranked genes are good candidate disease genes for further study. Using standard benchmark tests, we tested our approach against ENDEAVOUR and CIPHER, two well-known disease gene ranking frameworks, and obtained comparable results.
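A toy illustration of the ranking idea: compute a centrality measure for every node in a small network, then rank candidate genes by how similar their centrality is to that of the known disease genes. The thesis evolves the similarity scoring with genetic programming over several centrality measures; this sketch substitutes a single hand-written similarity on degree centrality, and the gene names and edges are entirely hypothetical:

```python
from statistics import mean

# Hypothetical interaction network: edges between made-up gene names
edges = [("G1", "G2"), ("G1", "G3"), ("G2", "G3"),
         ("G2", "G4"), ("G3", "G5"), ("G4", "G5"), ("G5", "G6")]

def degree_centrality(edges):
    """Degree of each node, normalized by the maximum possible degree."""
    nodes = {n for e in edges for n in e}
    deg = {n: 0 for n in nodes}
    for a, b in edges:
        deg[a] += 1
        deg[b] += 1
    return {n: d / (len(nodes) - 1) for n, d in deg.items()}

def rank_candidates(edges, known_disease_genes):
    """Rank non-disease nodes by closeness of their centrality to the
    mean centrality of the known disease genes (a fixed stand-in for
    the similarity function the thesis evolves with GP)."""
    cent = degree_centrality(edges)
    target = mean(cent[g] for g in known_disease_genes)
    candidates = [n for n in cent if n not in known_disease_genes]
    return sorted(candidates, key=lambda n: abs(cent[n] - target))

# G5's centrality matches the known genes exactly, so it ranks first
print(rank_candidates(edges, ["G2", "G3"])[0])  # → G5
```

Real disease-gene networks would use richer centralities (closeness, betweenness, eigenvector), which is precisely the feature set the genetic program combines.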

Relevance:

20.00%

Publisher:

Abstract:

Feature selection plays an important role in knowledge discovery and data mining nowadays. In traditional rough set theory, feature selection using a reduct (a minimal set of attributes that discerns all objects) is an important area. Nevertheless, the original definition of a reduct is restrictive, so previous research proposed taking into account not only the horizontal reduction of information by feature selection, but also a vertical reduction considering suitable subsets of the original set of objects. Following that work, a new approach to generating bireducts using a multi-objective genetic algorithm was proposed. Although genetic algorithms have been used to calculate reducts in some previous works, we did not find any work where genetic algorithms were adopted to calculate bireducts. Compared to earlier work in this area, the proposed method has less randomness in generating bireducts. The genetic algorithm estimated the quality of each bireduct by the values of two objective functions as evolution progressed, so a set of bireducts with optimized values of these objectives was obtained. Different fitness evaluation methods and genetic operators, such as crossover and mutation, were applied, and the resulting prediction accuracies were compared. Five datasets were used to test the proposed method, and two datasets were used for a comparison study. Statistical analysis using the one-way ANOVA test was performed to determine the significance of differences between the results. The experiments showed that the proposed method was able to reduce the number of bireducts needed to achieve good prediction accuracy. The influence of different genetic operators and fitness evaluation strategies on the prediction accuracy was also analyzed. The prediction accuracies of the proposed method are comparable with the best results in the machine learning literature, and some of them outperform those results.
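For context, the classical reduct condition that bireducts generalize can be checked directly: an attribute subset must discern every pair of objects with different decision values, and it must be minimal under that condition. A small sketch on a toy decision table (bireducts additionally select a subset of the objects, which is omitted here for brevity):

```python
def discerns_all(table, decisions, attrs):
    """True if the attribute subset distinguishes every pair of objects
    that carry different decision values (discernibility condition)."""
    for i in range(len(table)):
        for j in range(i + 1, len(table)):
            if decisions[i] != decisions[j]:
                if all(table[i][a] == table[j][a] for a in attrs):
                    return False  # an undiscerned pair exists
    return True

def is_reduct(table, decisions, attrs):
    """A reduct satisfies discernibility and is minimal: removing any
    single attribute breaks the condition."""
    if not discerns_all(table, decisions, attrs):
        return False
    return all(not discerns_all(table, decisions, attrs - {a})
               for a in attrs)

# Toy decision table: rows are objects, columns are attributes 0..2
table = [(0, 0, 1), (0, 1, 1), (1, 0, 0), (1, 1, 0)]
decisions = [0, 0, 1, 1]
print(is_reduct(table, decisions, {0}))  # → True: attribute 0 suffices
```

A genetic algorithm searching for (bi)reducts would use a check like this inside its fitness evaluation, trading off the size of the attribute subset against the number of objects it discerns.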