988 results for Loss labeling (classification)


Relevance: 20.00%

Publisher:

Abstract:

The Driver Behaviour Questionnaire (DBQ) remains the most widely used self-report scale globally for assessing crash risk and aberrant driving behaviours among motorists. However, the scale also attracts criticism regarding its perceived limited ability to accurately identify those most at risk of crash involvement. This study reports on the use of the DBQ to examine the self-reported driving behaviours (and crash outcomes) of drivers in three separate Australian fleet samples (N = 443, N = 3414 and N = 4792), and on whether combining the samples increases the tool's predictive ability. Fleet employees in three organisations completed either online or paper versions of the questionnaire. Factor-analytic techniques identified three- or four-factor solutions in each of the separate studies, and the combined sample produced the expected factors of (a) errors, (b) highway-code violations and (c) aggressive driving violations. Highway-code violations (and mean scores) were comparable across the studies. However, across the three samples, multivariate analyses revealed that exposure to the road, rather than the DBQ constructs, was the best predictor of crash involvement at work. Furthermore, combining the scores to produce a sample of 8649 drivers did not improve the predictive ability of the tool for identifying crashes (0.4% correctly identified) or demerit-point loss (0.3%). The paper outlines the major findings of this comparative sample study with regard to using self-report measurement tools to identify "at risk" drivers, as well as the application of such data to future research endeavours.

Abstract:

Determination of sequence similarity is a central issue in computational biology, a problem addressed primarily through BLAST, an alignment-based heuristic which has underpinned much of the analysis and annotation of the genomic era. Despite their success, alignment-based approaches scale poorly with increasing dataset size and are not robust under structural sequence rearrangements. Successive waves of innovation in sequencing technologies, the so-called Next Generation Sequencing (NGS) approaches, have led to an explosion in data availability, challenging existing methods and motivating novel approaches to sequence representation and similarity scoring, including the adaptation of existing methods from other domains such as information retrieval. In this work, we investigate locality-sensitive hashing of sequences through binary document signatures, applying the method to a bacterial protein classification task in which the goal is to predict the gene family to which a given query protein belongs. Experiments carried out on a pair of small but biologically realistic datasets (the full protein repertoires of families of Chlamydia and Staphylococcus aureus genomes respectively) show that a measure of similarity obtained by locality-sensitive hashing gives highly accurate results while offering a number of avenues for substantial performance improvements over BLAST.
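The locality-sensitive-hashing idea can be sketched as follows. This is a minimal illustration, not the authors' implementation: each protein sequence is reduced to a feature-hashed k-mer count vector, and a random-hyperplane signature turns that vector into a short binary document signature whose bit agreement approximates cosine similarity. The sequences, k, and dimensions below are illustrative choices.

```python
import random
import zlib

def kmer_counts(seq, k=3, dim=1024):
    # Feature-hash each overlapping k-mer into a fixed-size count vector.
    vec = [0.0] * dim
    for i in range(len(seq) - k + 1):
        vec[zlib.crc32(seq[i:i + k].encode()) % dim] += 1.0
    return vec

def make_planes(dim=1024, bits=64, seed=0):
    # One random Gaussian hyperplane per signature bit.
    rng = random.Random(seed)
    return [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(bits)]

def signature(vec, planes):
    # Each bit records which side of a hyperplane the vector falls on.
    return [1 if sum(p * x for p, x in zip(plane, vec)) >= 0 else 0
            for plane in planes]

def bit_agreement(sig_a, sig_b):
    # Fraction of matching bits; approximates the cosine similarity
    # of the underlying k-mer vectors.
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)
```

A query protein could then be assigned to the gene family whose members' signatures agree with it most often; because signatures are short bit strings, comparison is far cheaper than alignment and robust to rearrangements of the k-mer content.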

Abstract:

This study examines the roles that the size of a victimised organisation and the size of the victim's loss play in attitudes regarding the acceptability of 12 questionable consumer actions. A sample of 815 American adults rated each scenario on a scale anchored by "very acceptable" and "very unacceptable". The size of the victimised organisation was shown to influence consumers' opinions, with more disdain directed towards consumers who take advantage of smaller businesses. Similarly, the respondents tended to be more critical of these actions when the loss incurred by the victimised organisation was large. A 2×2 matrix delineated the extent to which opinions regarding the 12 actions differed depending upon the mediating variable under scrutiny.

Abstract:

Most surgeons cement the tibial component in total knee replacement surgery. Mid-term registry data from a number of countries, including the United Kingdom and Australia, support the excellent survivorship of cemented tibial components. In spite of this success, results can always be improved, and cementing technique can play a role. Cementing technique on the tibia is not standardised, and surgeons still differ about the best ways to deliver cement into the cancellous bone of the upper tibia. Questions remain regarding whether to use a gun or a syringe to inject the cement into the cancellous bone of the tibial plateau. The ideal depth of cement penetration into the tibial plateau is debated, though most reports suggest that 4 mm to 10 mm is ideal. Thicker mantles are thought to be dangerous due to the risk of bone necrosis, but there is little in the literature to support this contention.

Abstract:

Purpose: Endometrial adenocarcinoma (EC) is the most common gynaecologic cancer. Up to 90% of EC patients are obese, which poses a health threat to patients post-treatment. Standard treatment for EC includes hysterectomy, although this has significant side effects for obese women at high risk of surgical complications and for women of childbearing age. This trial investigates the effectiveness of non-surgical, conservative treatment options for obese women with early-stage EC. The primary aim is to determine the efficacy of a levonorgestrel intrauterine device (LNG-IUD), with or without metformin (an antidiabetic drug) and with or without a weight-loss intervention, in achieving a pathological complete response (pCR) in EC at six months from study treatment initiation. The secondary aim is to enhance understanding of the molecular processes involved and to predict treatment response by investigating EC biomarkers.

Methods: An open-label, three-armed, randomised, phase-II, multi-centre trial of LNG-IUD ± metformin ± weight-loss intervention. 165 participants from 28 centres are randomly assigned in a 3:3:5 ratio to the treatment arms. Clinical, quality-of-life and health-behaviour data will be collected at baseline, six weeks, three months and six months. EC biomarkers will be assessed at baseline, three months and six months.

Conclusions: There is limited prospective evidence for conservative treatment of EC. Trial results could benefit patients and reduce health-system costs through a reduction in hospitalisations and a lower incidence of the adverse events currently observed with standard treatment.

Abstract:

Fine-grained leaf classification has concentrated on the use of traditional shape and statistical features to classify ideal images. In this paper we evaluate the effectiveness of traditional hand-crafted features and propose the use of deep convolutional neural network (ConvNet) features. We introduce a range of condition variations to explore the robustness of these features, including translation, scaling, rotation, shading and occlusion. Evaluations on the Flavia dataset demonstrate that in ideal imaging conditions, combining traditional and ConvNet features yields state-of-the-art performance with an average accuracy of 97.3% ± 0.6%, compared to traditional features, which obtain an average accuracy of 91.2% ± 1.6%. Further experiments show that this combined classification approach consistently outperforms the best set of traditional features by an average of 5.7% across all of the evaluated condition variations.
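The feature-combination step can be sketched minimally, under stated assumptions: the hand-crafted shape statistics and the "ConvNet" extractor below are illustrative stand-ins (the stand-in merely mean-pools pixel intensities), not the paper's actual descriptors. The pattern shown is normalising each modality separately before concatenating into one feature vector.

```python
import math

def handcrafted_features(width, height, area, perimeter):
    # Simple leaf-shape statistics (illustrative hand-crafted descriptors).
    aspect_ratio = width / height
    rectangularity = area / (width * height)
    circularity = 4 * math.pi * area / perimeter ** 2
    return [aspect_ratio, rectangularity, circularity]

def convnet_features(image, dim=8):
    # Stand-in for ConvNet activations: mean-pool pixel intensities
    # into `dim` bins. A real pipeline would use a pre-trained network.
    flat = [px for row in image for px in row]
    step = max(1, len(flat) // dim)
    return [sum(flat[i:i + step]) / step for i in range(0, step * dim, step)]

def l2_normalize(vec):
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

def combined_feature(shape_feats, cnn_feats):
    # Normalise each modality separately, then concatenate, so neither
    # feature family dominates the classifier's distance computations.
    return l2_normalize(shape_feats) + l2_normalize(cnn_feats)
```

Per-modality normalisation is one common way to combine descriptors of different scales; the paper does not specify its fusion scheme, so this is an assumption for illustration.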

Abstract:

Hospital emergency departments record a patient's injuries as narrative text. For statistical reporting, this text data must be mapped to pre-defined codes. Existing research in this field uses the Naïve Bayes probabilistic method to build classifiers for this mapping. In this paper, we focus on providing guidance on the selection of a classification method. We build a number of classifiers from different families: decision tree, probabilistic, neural network, instance-based, ensemble-based and kernel-based linear classifiers. Extensive pre-processing is carried out to ensure the quality of the data and, hence, of the classification outcome. Records with a null entry in the injury description are removed. Misspellings are corrected by finding and replacing each misspelt word with a sound-alike word. Meaningful phrases are identified and kept, rather than discarding part of a phrase as a stop word. Abbreviations appearing in many forms of entry are manually identified and normalised to a single form. Clustering is used to discriminate between non-frequent and frequent terms. This process reduced the number of text features dramatically, from about 28,000 to 5000. The medical narrative-text injury dataset under consideration is composed of many short documents. The data can be characterised as high-dimensional and sparse: few features are irrelevant, but features are correlated with one another. Therefore, matrix factorisation techniques such as Singular Value Decomposition (SVD) and Non-negative Matrix Factorization (NNMF) have been used to map the processed feature space to a lower-dimensional one, and classifiers have been built on the reduced feature space. In experiments, a set of tests is conducted to establish which classification method is best for medical text classification.
The Non-negative Matrix Factorization with Support Vector Machine method achieves 93% precision, higher than all the tested traditional classifiers. We also find that TF-IDF weighting, which works well for long-text classification, is inferior to binary weighting for short-document classification. Another finding is that the top-n terms should be removed in consultation with medical experts, as their removal affects classification performance.
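The weighting comparison can be illustrated with a minimal sketch. The documents and vocabulary below are invented for illustration, and the rest of the pipeline (SVD/NNMF followed by an SVM) is omitted; only the difference between binary and TF-IDF term weighting on short injury narratives is shown.

```python
import math

def binary_vector(doc, vocab):
    # Binary weighting: 1 if the term occurs at all, 0 otherwise.
    present = set(doc.split())
    return [1.0 if term in present else 0.0 for term in vocab]

def tfidf_vector(doc, docs, vocab):
    # Classic TF-IDF weighting for comparison (smoothed IDF).
    n = len(docs)
    counts = {}
    for word in doc.split():
        counts[word] = counts.get(word, 0) + 1
    out = []
    for term in vocab:
        df = sum(term in d.split() for d in docs)       # document frequency
        idf = math.log((1 + n) / (1 + df)) + 1          # smoothed IDF
        out.append(counts.get(term, 0) * idf)
    return out
```

In very short narratives, term frequency is almost always 0 or 1, so TF-IDF mostly re-weights present terms by document frequency; binary vectors keep all present terms on an equal footing, which matches the finding above that binary weighting works better for short documents.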