672 results for Clinical-prediction Rules
Abstract:
Recommender systems are widely used online to help users find products and other items they may be interested in, based on what is known about each user from their profile. Often, however, user profiles contain little information, making it difficult for a recommender system to make quality recommendations. This problem is known as the cold-start problem. Here we investigate using association rules as a source of information to expand a user profile and thus avoid this problem. Our experiments show that association rules can be used to noticeably improve the performance of a recommender system in the cold-start situation. Furthermore, we show that this improvement in performance can be achieved while using non-redundant rule sets. This demonstrates that non-redundant rules do not cause a loss of information and are just as informative as a set of association rules that contains redundancy.
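As a rough illustration of the idea of expanding a sparse profile with association rules, the sketch below mines rules from a toy transaction matrix and adds rule consequents to a cold-start profile. It assumes the mlxtend library; the item names, thresholds, and expansion heuristic are hypothetical and do not reproduce the experimental setup of the paper.

```python
# Minimal sketch: expanding a sparse user profile with association rules
# (illustrative only; item names and thresholds are hypothetical).
import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules

# Toy transaction matrix: rows are users, columns are items (True = liked/bought).
transactions = pd.DataFrame(
    [[1, 1, 0, 1], [1, 1, 1, 0], [0, 1, 1, 1], [1, 1, 1, 1]],
    columns=["item_a", "item_b", "item_c", "item_d"],
).astype(bool)

itemsets = apriori(transactions, min_support=0.5, use_colnames=True)
rules = association_rules(itemsets, metric="confidence", min_threshold=0.7)

# Cold-start profile with a single known item; infer likely additions from rule consequents.
profile = {"item_a"}
expanded = set(profile)
for _, rule in rules.iterrows():
    if set(rule["antecedents"]).issubset(profile):
        expanded |= set(rule["consequents"])
print(expanded)  # e.g. {'item_a', 'item_b'} depending on the mined rules
```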
Abstract:
Background: In response to the need for more comprehensive quality assessment within Australian residential aged care facilities, the Clinical Care Indicator (CCI) Tool was developed to collect outcome data as a means of making inferences about quality. A national trial of its effectiveness and a Brisbane-based trial of its use within the quality improvement context determined the CCI Tool represented a potentially valuable addition to the Australian aged care system. This document describes the next phase in the CCI Tool's development, the aims of which were to establish the validity and reliability of the CCI Tool and to develop quality indicator thresholds (benchmarks) for use in Australia. The CCI Tool is now known as the ResCareQA (Residential Care Quality Assessment). Methods: The study aims were achieved through a combination of quantitative data analysis and expert panel consultations using a modified Delphi process. The expert panel consisted of experienced aged care clinicians, managers, and academics; they were initially consulted to determine face and content validity of the ResCareQA, and later to develop thresholds of quality. To analyse its psychometric properties, ResCareQA forms were completed for all residents (N=498) of nine aged care facilities throughout Queensland. Kappa statistics were used to assess inter-rater and test-retest reliability, and Cronbach's alpha coefficient was calculated to determine internal consistency. For concurrent validity, equivalent items on the ResCareQA and the Resident Classification Scales (RCS) were compared using Spearman's rank order correlations, while discriminative validity was assessed using the known-groups technique, comparing ResCareQA results between groups with differing care needs, as well as between male and female residents. Rank-ordered facility results for each clinical care indicator (CCI) were circulated to the panel; upper and lower thresholds for each CCI were nominated by panel members and refined through a Delphi process. These thresholds indicate excellent care at one extreme and questionable care at the other. Results: Minor modifications were made to the assessment, and it was renamed the ResCareQA. Agreement on its content was reached after two Delphi rounds; the final version contains 24 questions across four domains, enabling generation of 36 CCIs. Both test-retest and inter-rater reliability were sound, with median kappa values of 0.74 (test-retest) and 0.91 (inter-rater); internal consistency was not as strong, with a Cronbach's alpha of 0.46. Because the ResCareQA does not provide a single combined score, comparisons for concurrent validity were made with the RCS on an item-by-item basis, with most resultant correlations being quite low. Discriminative validity analyses, however, revealed highly significant differences in the total number of CCIs between high care and low care groups (t(199)=10.77, p=0.000), while the differences between male and female residents were not significant (t(414)=0.56, p=0.58). Clinical outcomes varied both within and between facilities; agreed upper and lower thresholds were finalised after three Delphi rounds. Conclusions: The ResCareQA provides a comprehensive, easily administered means of monitoring quality in residential aged care facilities that can be reliably used on multiple occasions. The relatively modest internal consistency score was likely due to the multi-factorial nature of quality, and the absence of an aggregate result for the assessment.
Measurement of concurrent validity proved difficult in the absence of a gold standard, but the sound discriminative validity results suggest that the ResCareQA has acceptable validity and could be confidently used as an indication of care quality within Australian residential aged care facilities. The thresholds, while preliminary due to the small sample size, enable users to make judgements about quality within and between facilities. Thus it is recommended that the ResCareQA be adopted for wider use.
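For readers unfamiliar with the statistics named above, the snippet below computes Cohen's kappa, Cronbach's alpha, and Spearman's rank correlation on synthetic data; it is illustrative only and does not use the ResCareQA dataset.

```python
# Illustrative computation of the reliability/validity statistics named above,
# on synthetic data (not the ResCareQA dataset).
import numpy as np
from scipy.stats import spearmanr
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)

# Inter-rater reliability: two raters scoring the same binary indicator.
rater1 = rng.integers(0, 2, size=100)
rater2 = np.where(rng.random(100) < 0.9, rater1, 1 - rater1)  # ~90% raw agreement
print("kappa:", cohen_kappa_score(rater1, rater2))

# Internal consistency: Cronbach's alpha over k items (rows = residents).
items = rng.normal(size=(100, 24)) + rng.normal(size=(100, 1))  # shared factor
k = items.shape[1]
alpha = (k / (k - 1)) * (1 - items.var(axis=0, ddof=1).sum()
                         / items.sum(axis=1).var(ddof=1))
print("Cronbach's alpha:", alpha)

# Concurrent validity: Spearman rank correlation between two equivalent items.
rho, p = spearmanr(items[:, 0], items[:, 1])
print("Spearman rho:", rho, "p:", p)
```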
Abstract:
Prognostics and asset life prediction is one of the key research areas in engineering asset health management. We previously developed the Explicit Hazard Model (EHM) to effectively and explicitly predict asset life using three types of information: population characteristics, condition indicators, and operating environment indicators. In earlier work we studied the application of both the semi-parametric EHM and the non-parametric EHM to survival probability estimation in the reliability field. The survival time in these models depends not only upon the age of the monitored asset, but also upon the condition and operating environment information obtained. This paper is a further study of the semi-parametric and non-parametric EHMs applied to the hazard and residual life prediction of a set of resistance elements. The resistance elements were used as corrosion sensors for measuring the atmospheric corrosion rate in a laboratory experiment. In this paper, the hazard of the resistance element estimated using the semi-parametric EHM and the non-parametric EHM is compared to the traditional Weibull model and the Aalen Linear Regression Model (ALRM), respectively. Because the semi-parametric EHM assumes a Weibull distribution for its baseline hazard, the hazard estimated using this model is compared to the traditional Weibull model. The hazard estimated using the non-parametric EHM is compared to the ALRM, a well-known non-parametric covariate-based hazard model. Finally, the residual life of the resistance element predicted using both EHMs is compared to the actual life data.
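A hedged sketch of the comparison baselines mentioned above: fitting a population-only Weibull hazard and a covariate-based (Cox) hazard to synthetic failure-time data using the lifelines library. The EHM itself is the authors' model and is not reproduced here; the variable names and parameters are hypothetical.

```python
# Hedged sketch: a Weibull hazard and a covariate-based (Cox) hazard fitted to
# synthetic failure-time data. The Cox model only stands in for a generic
# covariate-based hazard; it is not the EHM.
import numpy as np
import pandas as pd
from lifelines import WeibullFitter, CoxPHFitter

rng = np.random.default_rng(1)
n = 200
corrosion_rate = rng.normal(1.0, 0.2, n)               # hypothetical condition indicator
lifetime = rng.weibull(1.5, n) * 100 / corrosion_rate   # faster corrosion -> shorter life
observed = np.ones(n)                                    # no censoring in this toy example

# Baseline (population-only) Weibull hazard.
wf = WeibullFitter().fit(lifetime, observed)
print(wf.hazard_at_times([20, 50, 80]))

# Covariate-based hazard using the condition indicator.
df = pd.DataFrame({"duration": lifetime, "event": observed,
                   "corrosion_rate": corrosion_rate})
cph = CoxPHFitter().fit(df, duration_col="duration", event_col="event")
cph.print_summary()
```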
Abstract:
Frequently there is a disconnect, either perceived or actual, between theoretical principles and laboratory practice in science education, and this holds true for clinical microbiology, where traditionally knowledge is delivered in ‘chunks’ in a lecture format with the misguided belief that students have to know ‘everything about everything’. This preoccupation with content delivery often leaves no time for active class discussion or reflection. Moreover, laboratory classes are treated as add-ons to the process, rather than as an integrated part of the whole learning experience. In redesigning our units (subjects) we have bridged the gap between the theory and practice of clinical bacteriology. In doing so, we have seen a transformation in the learning experiences of our students and in the way we teach.
Abstract:
Scoliosis is a spinal deformity that requires surgical correction in progressive cases. In order to optimize surgical outcomes, patient-specific finite element models are being developed by our group. In this paper, a single rod anterior correction procedure is simulated for a group of six scoliosis patients. For each patient, personalised model geometry was derived from low-dose CT scans, and clinically measured intra-operative corrective forces were applied. However, tissue material properties were not patient-specific, being derived from existing literature. Clinically, the patient group had a mean initial Cobb angle of 47.3 degrees, which was corrected to 17.5 degrees after surgery. The mean simulated post-operative Cobb angle for the group was 18.1 degrees. Although this represents good agreement between clinical and simulated corrections, the discrepancy between clinical and simulated Cobb angle for individual patients varied between -10.3 and +8.6 degrees, with only three of the six patients matching the clinical result to within the accepted Cobb measurement error of ±5 degrees. The results of this study suggest that spinal tissue material properties play an important role in governing the correction obtained during surgery, and that patient-specific modelling approaches must address the question of how to prescribe patient-specific soft tissue properties for spine surgery simulation.
Abstract:
The increasing popularity of motorcycles in Australia is a significant concern, as motorcycle riders represent 15% of all road fatalities and an even greater proportion of serious injuries. This study assessed the psychosocial factors influencing motorcycle riders’ intentions to perform both safe and risky riding behaviours. Using an extended theory of planned behaviour (TPB), motorcycle riders (n = 229) from Queensland, Australia were surveyed to assess their riding attitudes, subjective norm (general and specific), perceived behavioural control (PBC), group norm, self-identity, sensation seeking, and aggression, as well as their intentions, in relation to three safe (e.g., handle my motorcycle skilfully) and three risky (e.g., bend road rules to get through traffic) riding behaviours. Although there was variability in the predictors of intention across the behaviours, results revealed that safer rider intentions were most consistently predicted by PBC, while riskier intentions were predicted by attitudes and sensation seeking. The TPB was able to explain a greater proportion of the variance for intentions to perform risky behaviours. Overall, this study has provided insight into the complexity of factors contributing to rider intentions and suggests that different practical strategies need to be adopted to encourage safe riding decisions and to reduce risky ones.
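As a rough illustration of the analysis style described (predicting intentions from TPB and extended predictors), the sketch below fits an ordinary least squares regression on synthetic data; the variable names, coefficients, and outcome are hypothetical and do not reproduce the study's data or exact modelling approach.

```python
# Illustrative only: regressing a riding intention on TPB and extended predictors,
# using synthetic data and hypothetical variable names.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 229
df = pd.DataFrame({
    "attitude": rng.normal(size=n),
    "subjective_norm": rng.normal(size=n),
    "pbc": rng.normal(size=n),
    "sensation_seeking": rng.normal(size=n),
})
# Toy outcome loosely following the reported pattern for risky intentions.
df["intention_risky"] = (0.5 * df["attitude"] + 0.4 * df["sensation_seeking"]
                         + rng.normal(scale=0.5, size=n))

model = smf.ols("intention_risky ~ attitude + subjective_norm + pbc + sensation_seeking",
                data=df).fit()
print(model.summary())
```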
Abstract:
Developing safe and sustainable road systems is a common goal in all countries. Applications to assist with road asset management and crash minimization are sought universally. This paper presents a data mining methodology using decision trees for modeling the crash proneness of road segments using available road and crash attributes. The models quantify the concept of crash proneness and demonstrate that road segments with only a few crashes have more in common with non-crash roads than roads with higher crash counts. This paper also examines ways of dealing with highly unbalanced data sets encountered in the study.
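A minimal sketch of the kind of model described: a decision tree classifying road segments as crash-prone, with class weighting as one simple way to handle the unbalanced data. The features, data, and parameters are hypothetical.

```python
# Minimal sketch: decision tree for crash proneness with class weighting for
# unbalanced data (hypothetical features and synthetic data).
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(3)
n = 5000
X = np.column_stack([
    rng.normal(60, 20, n),      # speed limit
    rng.exponential(5000, n),   # traffic volume
    rng.uniform(0, 1, n),       # curvature index
])
y = (rng.random(n) < 0.05).astype(int)  # ~5% crash-prone segments (unbalanced)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
tree = DecisionTreeClassifier(max_depth=4, class_weight="balanced", random_state=0)
tree.fit(X_tr, y_tr)
print(classification_report(y_te, tree.predict(X_te)))
```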
Abstract:
Association rule mining has contributed to many advances in the area of knowledge discovery. However, the quality of the discovered association rules is a major concern and has drawn increasing attention recently. One problem with the quality of the discovered association rules is the huge size of the extracted rule set. Often a huge number of rules can be extracted from a dataset, but many of them are redundant to other rules and thus useless in practice. Mining non-redundant rules is a promising approach to solving this problem. In this paper, we first propose a definition for redundancy, then propose a concise representation, called a Reliable basis, for representing non-redundant association rules. The Reliable basis contains a set of non-redundant rules which are derived using frequent closed itemsets and their generators, instead of the frequent itemsets usually used by traditional association rule mining approaches. An important contribution of this paper is that we propose to use the certainty factor as the criterion to measure the strength of the discovered association rules. Using this criterion, we can ensure the elimination of as many redundant rules as possible without reducing the inference capacity of the remaining extracted non-redundant rules. We prove that redundancy elimination based on the proposed Reliable basis does not reduce the strength of belief in the extracted rules. We also prove that all association rules, together with their supports and confidences, can be retrieved from the Reliable basis without accessing the dataset. Therefore the Reliable basis is a lossless representation of association rules. Experimental results show that the proposed Reliable basis can significantly reduce the number of extracted rules. We also conduct experiments on the application of association rules to the area of product recommendation. The experimental results show that the non-redundant association rules extracted using the proposed method retain the same inference capacity as the entire rule set. This result indicates that using only non-redundant rules is sufficient to solve real problems, without needing to use the entire rule set.
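As a small illustration of the certainty factor criterion mentioned above, the function below computes CF for a rule A -> B from its confidence and the consequent's support, using one common formulation; the Reliable basis construction itself (frequent closed itemsets and their generators) is not reproduced here.

```python
# Hedged sketch: certainty factor of a rule A -> B from its confidence and the
# consequent's support (one common formulation).
def certainty_factor(confidence: float, consequent_support: float) -> float:
    """CF in [-1, 1]; positive when the rule raises belief in the consequent."""
    if confidence > consequent_support:
        return (confidence - consequent_support) / (1.0 - consequent_support)
    if confidence < consequent_support:
        return (confidence - consequent_support) / consequent_support
    return 0.0

# Example: conf(A -> B) = 0.9, support(B) = 0.6  =>  CF = 0.75
print(certainty_factor(0.9, 0.6))
```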
Abstract:
Background: Waist circumference has been identified as a valuable predictor of cardiovascular risk in children. The development of waist circumference percentiles and cut-offs for various ethnic groups is necessary because of differences in body composition. The purpose of this study was to develop waist circumference percentiles for Chinese children and to explore optimal waist circumference cut-off values for predicting cardiovascular risk factor clustering in this population. Methods: Height, weight, and waist circumference were measured in 5529 children (2830 boys and 2699 girls) aged 6-12 years randomly selected from southern and northern China. Blood pressure, fasting triglycerides, low-density lipoprotein cholesterol, high-density lipoprotein cholesterol, and glucose were obtained in a subsample (n = 1845). Smoothed percentile curves were produced using the LMS method. Receiver-operating characteristic analysis was used to derive the optimal age- and gender-specific waist circumference thresholds for predicting the clustering of cardiovascular risk factors. Results: Gender-specific waist circumference percentiles were constructed. The waist circumference thresholds were at the 90th and 84th percentiles for Chinese boys and girls respectively, with sensitivity and specificity ranging from 67% to 83%. The odds ratios of a clustering of cardiovascular risk factors among boys and girls with values above the cut-off points were 10.349 (95% confidence interval 4.466 to 23.979) and 8.084 (95% confidence interval 3.147 to 20.767) respectively, compared with their counterparts. Conclusions: Percentile curves for waist circumference of Chinese children are provided. The cut-off point for waist circumference to predict cardiovascular risk factor clustering is at the 90th and 84th percentiles for Chinese boys and girls, respectively.
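A brief sketch of deriving a cut-off from receiver-operating characteristic analysis, here using Youden's J statistic on synthetic data; the abstract does not state the exact criterion used in the study, so this is illustrative only.

```python
# Illustrative sketch of deriving a waist-circumference cut-off via ROC analysis
# (Youden's J), using synthetic data rather than the study sample.
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(4)
n = 1845
risk_cluster = (rng.random(n) < 0.15).astype(int)     # risk-factor clustering (0/1)
waist = rng.normal(60, 8, n) + 8 * risk_cluster        # cm, hypothetical

fpr, tpr, thresholds = roc_curve(risk_cluster, waist)
youden_j = tpr - fpr
best = np.argmax(youden_j)
print(f"optimal cut-off ~ {thresholds[best]:.1f} cm, "
      f"sensitivity {tpr[best]:.2f}, specificity {1 - fpr[best]:.2f}")
```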
Abstract:
Estimating and predicting the degradation processes of engineering assets is crucial for reducing cost and ensuring the productivity of enterprises. Assisted by modern condition monitoring (CM) technologies, most asset degradation processes can be revealed by various degradation indicators extracted from CM data. Maintenance strategies developed using these degradation indicators (i.e. condition-based maintenance) are more cost-effective, because unnecessary maintenance activities are avoided while an asset is still in a decent health state. A practical difficulty in condition-based maintenance (CBM) is that, in most situations, degradation indicators extracted from CM data can only partially reveal asset health states. Underestimating this uncertainty in the relationships between degradation indicators and health states can cause excessive false alarms or failures without pre-alarms. The state space model provides an efficient approach to describing a degradation process using indicators that only partially reveal health states. However, existing state space models that describe asset degradation processes largely depend on assumptions such as discrete time, discrete state, linearity, and Gaussianity. The discrete time assumption requires that failures and inspections only happen at fixed intervals. The discrete state assumption entails discretising continuous degradation indicators, which requires expert knowledge and often introduces additional errors. The linear and Gaussian assumptions are not consistent with the nonlinear and irreversible degradation processes in most engineering assets. This research proposes a Gamma-based state space model without the discrete time, discrete state, linear and Gaussian assumptions to model partially observable degradation processes. Monte Carlo-based algorithms are developed to estimate model parameters and asset remaining useful lives. In addition, this research also develops a continuous state partially observable semi-Markov decision process (POSMDP) to model a degradation process that follows the Gamma-based state space model and is subject to various maintenance strategies. Optimal maintenance strategies are obtained by solving the POSMDP. Simulation studies are performed in MATLAB; case studies using data from an accelerated life test of a gearbox and from the liquefied natural gas industry are also conducted. The results show that the proposed Monte Carlo-based EM algorithm can estimate model parameters accurately. The results also show that the proposed Gamma-based state space model fits the monotonically increasing degradation data from the accelerated life test of the gearbox better than linear and Gaussian state space models. Furthermore, both the simulation studies and the case studies show that the prediction algorithm based on the Gamma-based state space model can identify the mean value and confidence interval of asset remaining useful lives accurately. In addition, the simulation study shows that the proposed maintenance strategy optimisation method based on the POSMDP is more flexible than one that assumes a predetermined strategy structure and uses renewal theory. Moreover, the simulation study also shows that the proposed maintenance optimisation method can obtain more cost-effective strategies than a recently published maintenance strategy optimisation method, by optimising the next maintenance activity and the waiting time until the next maintenance activity simultaneously.
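A simplified sketch of one ingredient of the approach: a gamma-process degradation state and a Monte Carlo estimate of remaining useful life by forward simulation to a failure threshold. All parameters are hypothetical, and the full Gamma-based state space model, EM parameter estimation, and POSMDP optimisation are not reproduced here.

```python
# Simplified sketch: gamma-distributed degradation increments and a Monte Carlo
# estimate of remaining useful life (RUL); parameters are hypothetical.
import numpy as np

rng = np.random.default_rng(5)
shape_per_hour, scale = 0.2, 0.5      # gamma increment parameters
failure_threshold = 20.0
current_state = 8.0                   # estimated health state at current time

def simulate_rul(state, n_paths=10000, max_hours=500):
    """Monte Carlo RUL: simulate gamma increments until the threshold is crossed."""
    ruls = np.full(n_paths, max_hours, dtype=float)
    levels = np.full(n_paths, state)
    alive = np.ones(n_paths, dtype=bool)
    for t in range(1, max_hours + 1):
        levels[alive] += rng.gamma(shape_per_hour, scale, alive.sum())
        failed_now = alive & (levels >= failure_threshold)
        ruls[failed_now] = t
        alive &= ~failed_now
    return ruls

rul_samples = simulate_rul(current_state)
print("mean RUL:", rul_samples.mean(),
      "90% interval:", np.percentile(rul_samples, [5, 95]))
```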
Abstract:
In today’s electronic world, vast amounts of knowledge are stored within many datasets and databases. Often the default format of this data means that the knowledge within is not immediately accessible; rather, it has to be mined and extracted, which requires automated tools that are effective and efficient. Association rule mining is one approach to obtaining the knowledge stored within datasets / databases; it yields frequent patterns and association rules between the items / attributes of a dataset with varying levels of strength. However, this is also association rule mining’s downside: the number of rules that can be found is usually very large. In order to effectively use the association rules (and the knowledge within), the number of rules needs to be kept manageable, so a method is needed to reduce the number of association rules without losing knowledge in the process. Thus the idea of non-redundant association rule mining was born. A second issue with association rule mining is determining which rules are interesting. The standard approach has been to use support and confidence, but these have their limitations. Approaches that use information about the dataset’s structure to measure association rules are limited, but could yield useful association rules if tapped. Finally, while it is important to be able to obtain interesting association rules from a dataset in a manageable quantity, it is equally important to be able to apply them in a practical way, where the knowledge they contain can be taken advantage of. Association rules show items / attributes that appear together frequently. Recommendation systems also look at patterns and items / attributes that occur together frequently in order to make a recommendation to a person. It should therefore be possible to bring the two together. In this thesis we look at these three issues and propose approaches to address them. For discovering non-redundant rules we propose enhanced approaches to rule mining in multi-level datasets that allow hierarchically redundant association rules to be identified and removed, without information loss. For discovering interesting association rules based on the dataset’s structure, we propose three measures for use in multi-level datasets. Lastly, we propose and demonstrate an approach that allows association rules to be used practically and effectively in a recommender system, while at the same time improving the recommender system’s performance. This becomes especially evident when looking at the user cold-start problem for a recommender system; in fact, our proposal helps to solve this serious problem facing recommender systems.
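As a toy illustration of flagging hierarchically redundant rules in a multi-level dataset, the sketch below marks a specific-level rule as redundant when the rule over its items' parent categories already holds with at least the same confidence. The taxonomy, rules, and redundancy criterion here are illustrative and are not the thesis's formal definition.

```python
# Hedged sketch: one simple way to flag hierarchical redundancy in multi-level
# rules; the taxonomy and criterion are illustrative only.
parent = {"cola": "soft_drink", "chips": "snack",
          "soft_drink": "beverage", "snack": "food"}

rules = {  # (antecedent, consequent) -> confidence
    (("cola",), ("chips",)): 0.80,
    (("soft_drink",), ("snack",)): 0.82,
}

def lift_to_parents(itemset):
    """Replace each item with its parent category from the taxonomy."""
    return tuple(sorted(parent.get(i, i) for i in itemset))

for (ant, con), conf in rules.items():
    general = (lift_to_parents(ant), lift_to_parents(con))
    if general != (ant, con) and rules.get(general, 0.0) >= conf:
        print("hierarchically redundant:", ant, "->", con)
```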
Abstract:
In contrast to conventional methods for structural reliability evaluation, such as first/second-order reliability methods (FORM/SORM) or Monte Carlo simulation based on corresponding limit state functions, this paper proposes a novel approach based on a dynamic object oriented Bayesian network (DOOBN) for predicting the structural reliability of a steel bridge element. The DOOBN approach can effectively model the deterioration processes of a steel bridge element and predict its structural reliability over time. This approach is also able to achieve Bayesian updating with observed information from measurements, monitoring and visual inspection. Moreover, the computational capacity embedded in the approach can be used to facilitate integrated management and maintenance optimization in a bridge system. A steel bridge girder is used to validate the proposed approach. The predicted results are compared with those evaluated by the FORM method.
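For context on the FORM comparison mentioned above, the snippet below contrasts a Monte Carlo failure-probability estimate with the FORM result for a simple linear limit state g = R - S with normal variables; it is not the bridge-girder DOOBN model, and the parameters are hypothetical.

```python
# Illustrative comparison of Monte Carlo and FORM failure probabilities for a
# linear limit state g = R - S with normal resistance R and load effect S.
import numpy as np
from scipy.stats import norm

mu_R, sigma_R = 30.0, 3.0   # resistance (hypothetical units)
mu_S, sigma_S = 20.0, 4.0   # load effect

# FORM: for a linear limit state with normal variables, beta is exact.
beta = (mu_R - mu_S) / np.hypot(sigma_R, sigma_S)
print("FORM beta:", beta, "Pf:", norm.cdf(-beta))

# Monte Carlo simulation of the same limit state.
rng = np.random.default_rng(6)
n = 1_000_000
g = rng.normal(mu_R, sigma_R, n) - rng.normal(mu_S, sigma_S, n)
print("MC Pf:", (g < 0).mean())
```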
Abstract:
A model to predict the buildup of mainly traffic-generated volatile organic compounds or VOCs (toluene, ethylbenzene, ortho-xylene, meta-xylene, and para-xylene) on urban road surfaces is presented. The model required three traffic parameters, namely average daily traffic (ADT), volume to capacity ratio (V/C), and surface texture depth (STD), and two chemical parameters, namely total suspended solids (TSS) and total organic carbon (TOC), as predictor variables. Principal component analysis and two-phase factor analysis were performed to characterize the model calibration parameters. Traffic congestion was found to be the underlying cause of traffic-related VOC buildup on urban roads. The model calibration was optimized using orthogonal experimental design. Partial least squares regression was used for model prediction. It was found that a better optimized orthogonal design could be achieved by including the latent factors of the data matrix into the design. The model performed fairly accurately for three different land uses as well as five different particle size fractions. The relative prediction errors were 10–40% for the different size fractions and 28–40% for the different land uses, while the coefficients of variation of the predicted intersite VOC concentrations were in the range of 25–45% for the different size fractions. Considering the sizes of the data matrices, these coefficients of variation were within the acceptable interlaboratory range for analytes at ppb concentration levels.
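A minimal sketch of partial least squares regression, the prediction method named above, using synthetic predictors loosely named after the study's variables (ADT, V/C, STD, TSS, TOC) and a synthetic response; all values are hypothetical.

```python
# Minimal sketch: PLS regression on synthetic predictors and a synthetic VOC
# load response (hypothetical values; not the study's data).
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
n = 60
X = np.column_stack([
    rng.uniform(1000, 40000, n),   # ADT
    rng.uniform(0.1, 1.2, n),      # V/C
    rng.uniform(0.4, 1.0, n),      # STD (mm)
    rng.uniform(5, 200, n),        # TSS
    rng.uniform(1, 50, n),         # TOC
])
y = 0.002 * X[:, 0] * X[:, 1] + 0.5 * X[:, 4] + rng.normal(scale=5, size=n)

pls = PLSRegression(n_components=3)
print("CV R^2:", cross_val_score(pls, X, y, cv=5).mean())
```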
Abstract:
Asset health inspections can produce two types of indicators: (1) direct indicators (e.g. the thickness of a brake pad, or the crack depth on a gear) which directly relate to a failure mechanism; and (2) indirect indicators (e.g. indicators extracted from vibration signals and oil analysis data) which can only partially reveal a failure mechanism. While direct indicators provide a more precise indication of asset health condition, they are often more difficult to obtain than indirect indicators. The state space model provides an efficient approach to estimating direct indicators from indirect indicators. However, existing state space models for estimating direct indicators largely depend on assumptions such as discrete time, discrete state, linearity, and Gaussianity. The discrete time assumption requires fixed inspection intervals. The discrete state assumption entails discretising continuous degradation indicators, which often introduces additional errors. The linear and Gaussian assumptions are not consistent with the nonlinear and irreversible degradation processes in most engineering assets. This paper proposes a state space model without these assumptions. Monte Carlo-based algorithms are developed to estimate the model parameters and the remaining useful life. These algorithms are evaluated using numerical simulations in MATLAB. The results show that both the parameters and the remaining useful life are estimated accurately. Finally, the new state space model is used to process vibration and crack depth data from an accelerated test of a gearbox. In this application, the new state space model provides a better fit than a state space model with linear and Gaussian assumptions.
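A simplified bootstrap particle filter illustrating the general idea of estimating a hidden direct indicator (e.g. crack depth) from a noisy indirect indicator (e.g. a vibration feature). The state and observation models and all parameters are hypothetical stand-ins and do not reproduce the proposed model or its Monte Carlo algorithms.

```python
# Simplified bootstrap particle filter: track a hidden, monotonically growing
# degradation state from a noisy indirect measurement (all parameters hypothetical).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(8)
T, n_particles = 50, 2000
shape, scale = 0.3, 0.2          # gamma growth increments (monotone degradation)
obs_gain, obs_noise = 2.0, 0.5   # indirect indicator ~ gain * state + noise

# Simulate a "true" degradation path and its indirect observations.
true_state = np.cumsum(rng.gamma(shape, scale, T))
observations = obs_gain * true_state + rng.normal(0, obs_noise, T)

particles = np.zeros(n_particles)
estimates = []
for y in observations:
    particles += rng.gamma(shape, scale, n_particles)          # propagate
    weights = norm.pdf(y, loc=obs_gain * particles, scale=obs_noise)
    weights /= weights.sum()
    idx = rng.choice(n_particles, n_particles, p=weights)      # resample
    particles = particles[idx]
    estimates.append(particles.mean())

print("final true state:", true_state[-1], "estimate:", estimates[-1])
```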