7 results for level sets

in Deakin Research Online - Australia


Relevance:

60.00%

Abstract:

The thickness of the retinal nerve fiber layer (RNFL) has become a diagnostic measure for glaucoma assessment. To measure this thickness, accurate segmentation of the RNFL in optical coherence tomography (OCT) images is essential. Identifying a suitable segmentation algorithm will help improve the accuracy of RNFL thickness measurement. This paper investigates the performance of six algorithms for segmenting the RNFL in OCT images: normalised cuts, region growing, k-means clustering, active contour, and two level-set methods, the Piecewise Gaussian Method (PGM) and the Kernelized Method (KM). The performance of the six algorithms is determined through a set of experiments on OCT retinal images, using an experimental procedure to measure the performance of the tested algorithms. The measured segmentation precision-recall results of the six algorithms are compared and discussed.
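One of the six algorithms, k-means clustering, can be illustrated on an intensity profile. The sketch below is a minimal 1-D k-means, not the paper's implementation; the intensity values are hypothetical and simply mimic a bright RNFL band between darker layers.

```python
import random

def kmeans_1d(values, k, iters=50, seed=0):
    """Minimal 1-D k-means: cluster pixel intensities into k groups."""
    random.seed(seed)
    centers = random.sample(values, k)
    labels = [0] * len(values)
    for _ in range(iters):
        # Assignment step: each value goes to its nearest centre.
        labels = [min(range(k), key=lambda c: abs(v - centers[c]))
                  for v in values]
        # Update step: each centre becomes the mean of its members.
        for c in range(k):
            members = [v for v, l in zip(values, labels) if l == c]
            if members:
                centers[c] = sum(members) / len(members)
    return labels, centers

# Hypothetical A-scan intensities: a bright band amid darker tissue.
profile = [10, 12, 11, 80, 85, 82, 78, 20, 18, 22]
labels, centers = kmeans_1d(profile, k=2)
```

With two clusters the bright band (values near 80) and the darker background separate cleanly; the layer boundary is then read off where the label changes.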

Relevance:

30.00%

Abstract:

Background: Men who were part of an Australian petroleum industry cohort had previously been found to have an excess of lympho-hematopoietic cancer. Occupational benzene exposure is a possible cause of this excess.

Methods: We conducted a case-control study of lympho-hematopoietic cancer nested within the existing cohort study to examine the role of benzene exposure. Cases identified between 1981 and 1999 (N = 79) were age-matched to 5 control subjects from the cohort. We estimated each subject's benzene exposure using occupational histories, local site-specific information, and an algorithm using Australian petroleum industry monitoring data.

Results: Matched analyses showed that the risk of leukemia was increased at cumulative exposures above 2 ppm-years and at exposure intensities of the highest-exposed job above 0.8 ppm. Risk increased with higher exposures; for the 13 case-sets with greater than 8 ppm-years cumulative exposure, the odds ratio was 11.3 (95% confidence interval = 2.85-45.1). The risk of leukemia was not associated with start date or duration of employment. The association with type of workplace was explained by cumulative exposure. There is limited evidence that short-term high exposures carry more risk than the same amount of exposure spread over a longer period. The risks of acute nonlymphocytic leukemia and chronic lymphocytic leukemia were raised for the highest-exposed workers. No association was found between non-Hodgkin lymphoma or multiple myeloma and benzene exposure, nor between tobacco or alcohol consumption and any of the cancers.

Conclusions: We found an excess risk of leukemia associated with cumulative benzene exposures and benzene exposure intensities that were considerably lower than reported in previous studies. No evidence was found of a threshold cumulative exposure below which there was no risk.
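The odds ratio with its 95% confidence interval, as reported above, is the standard effect measure for a matched case-control comparison. A minimal sketch of the unmatched 2x2-table version with a Woolf (log-based) interval is below; the counts are hypothetical and are not the study's data.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio for a 2x2 table with a Woolf 95% confidence interval.
      a: exposed cases,   b: exposed controls
      c: unexposed cases, d: unexposed controls
    """
    or_ = (a * d) / (b * c)
    # Standard error of log(OR) under the Woolf approximation.
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts for illustration only.
or_, lo, hi = odds_ratio_ci(13, 20, 10, 120)
```

A wide interval such as the study's 2.85-45.1 reflects the small number of highly exposed case-sets: the standard error of log(OR) is driven by the reciprocal of the smallest cell count.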


Relevance:

30.00%

Abstract:

Selecting a suitable proximity measure is one of the fundamental tasks in clustering. How to effectively utilize all available side information, including instance-level information in the form of pair-wise constraints and attribute-level information in the form of attribute order preferences, is an essential problem in metric learning. In this paper, we propose a learning framework in which both pair-wise constraints and attribute order preferences can be incorporated simultaneously. The underlying theory and the related parameter-adjusting technique are described in detail. Experimental results on benchmark data sets demonstrate the effectiveness of the proposed method.
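To make the idea of learning a metric from pair-wise constraints concrete, here is a deliberately simple sketch (not the paper's framework): a diagonal weighting that down-weights features on which must-link pairs disagree, so constrained pairs end up closer under the learned distance. All data and function names are hypothetical.

```python
def learn_diag_weights(X, must_link, eps=1e-6):
    """Illustrative diagonal metric learning: features on which
    must-link pairs differ a lot receive small weights."""
    d = len(X[0])
    weights = []
    for j in range(d):
        spread = sum((X[i][j] - X[k][j]) ** 2 for i, k in must_link)
        weights.append(1.0 / (spread / len(must_link) + eps))
    return weights

def wdist(x, y, w):
    """Weighted Euclidean distance under the learned diagonal metric."""
    return sum(wj * (a - b) ** 2 for wj, a, b in zip(w, x, y)) ** 0.5

# Toy data: feature 0 is noisy for the must-link pair (0, 1); feature 1
# agrees for them but separates instance 2.
X = [[0.0, 1.0], [5.0, 1.1], [0.2, 9.0]]
w = learn_diag_weights(X, must_link=[(0, 1)])
```

After learning, the must-link pair is closer to each other than either is to the unconstrained instance, which is the qualitative behavior any constraint-based metric learner should exhibit; the paper's contribution is additionally honoring attribute order preferences, which this sketch omits.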

Relevance:

30.00%

Abstract:

Static detection of polymorphic malware variants plays an important role in improving system security. Control flow has been shown to be an effective characteristic for representing polymorphic malware instances. In our research, we propose a similarity search of malware using novel distance metrics over malware signatures. We describe a malware signature by the set of control flow graphs the malware contains. We propose two approaches, using the first for pre-filtering. First, we use a distance metric based on the distance between feature vectors, where the feature vector is a decomposition of the set of graphs into either fixed-size k-subgraphs or q-gram strings of the high-level source after decompilation. We also propose a more effective but less computationally efficient distance metric based on the minimum matching distance, which uses the string edit distances between programs' decompiled flow graphs and the linear sum assignment problem to construct a minimum-sum-weight matching between two sets of graphs. We implement the distance metrics in a complete malware variant detection system. The evaluation shows that our approach is highly effective in terms of a limited false positive rate, and our system detects more malware variants than other algorithms.
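The minimum matching distance combines two standard ingredients: string edit distance between flow-graph signatures, and a minimum-sum-weight assignment between the two signature sets. The sketch below uses the textbook dynamic-programming edit distance and, instead of a proper linear-sum-assignment solver, a brute-force search over permutations that is only viable for tiny sets; the signature strings are hypothetical.

```python
from itertools import permutations

def edit_distance(s, t):
    """Classic dynamic-programming (Levenshtein) edit distance."""
    m, n = len(s), len(t)
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cur[j] = min(prev[j] + 1,          # deletion
                         cur[j - 1] + 1,       # insertion
                         prev[j - 1] + (s[i - 1] != t[j - 1]))  # substitution
        prev = cur
    return prev[n]

def min_matching_distance(sigs_a, sigs_b):
    """Minimum-sum-weight matching between two equal-size sets of
    flow-graph string signatures (brute force; fine for tiny sets)."""
    cost = [[edit_distance(a, b) for b in sigs_b] for a in sigs_a]
    n = len(sigs_a)
    return min(sum(cost[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

# Hypothetical signature strings for two malware variants.
d = min_matching_distance(["WIIE", "IWIE"], ["WIE", "IWIE"])
```

In practice the assignment step would use the Hungarian algorithm (e.g. `scipy.optimize.linear_sum_assignment`) rather than enumerating permutations, since signature sets can be large.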

Relevance:

30.00%

Abstract:

This paper presents an empirical investigation of novel multi-level ensemble meta classifiers for detecting and monitoring the progression of cardiac autonomic neuropathy (CAN) in diabetes patients. Our experiments relied on an extensive database and concentrated on ensembles of ensembles, or multi-level meta classifiers, for the classification of CAN progression. First, we carried out a thorough investigation comparing the performance of various base classifiers on several known sets of the most essential features in this database, and determined that Random Forest significantly and consistently outperforms all other base classifiers in this new application. Second, we used the feature selection and ranking implemented in Random Forest to identify a new set of features, which turned out better than all other sets previously considered for this large and well-known database; Random Forest remained the best classifier on the new feature set as well. Third, we investigated meta classifiers and new multi-level meta classifiers based on Random Forest, which improved its performance. The results show that the novel multi-level meta classifiers achieved further improvement, with outcomes significantly better than those previously published in the literature for cardiac autonomic neuropathy.
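The "ensembles of ensembles" structure can be sketched with plain majority voting: each group of base classifiers votes, and a meta level votes over the group decisions. This is a minimal illustration of the two-level idea only; the paper's base learners are Random Forests, whereas the threshold "classifiers" below are hypothetical stand-ins.

```python
from collections import Counter

def majority(votes):
    """Return the most common vote."""
    return Counter(votes).most_common(1)[0][0]

def meta_classify(sample, base_groups):
    """Two-level ensemble: each group of base classifiers votes,
    then the meta level votes over the group decisions."""
    group_votes = [majority([clf(sample) for clf in group])
                   for group in base_groups]
    return majority(group_votes)

# Hypothetical threshold 'classifiers' over a single feature value.
groups = [
    [lambda x: int(x > 1), lambda x: int(x > 2), lambda x: int(x > 10)],
    [lambda x: int(x > 0), lambda x: int(x > 3), lambda x: int(x > 4)],
    [lambda x: int(x > 8), lambda x: int(x > 9), lambda x: int(x > 7)],
]
pred = meta_classify(5, groups)
```

The point of the extra level is that a group of weak or correlated learners is first condensed into one decision, so that a single unreliable group cannot dominate the final vote.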

Relevance:

30.00%

Abstract:

Extracting knowledge from transaction records and the personal data of credit card holders has great profit potential for the banking industry. The challenge is to detect or predict bankruptcies and to keep and recruit profitable customers. However, grouping and targeting credit card customers by traditional data-driven mining often does not directly meet the needs of the banking industry, because data-driven mining automatically generates classification outputs that are imprecise, meaningless, and beyond users' control. In this paper, we provide a novel domain-driven classification method that takes advantage of multiple criteria and multiple constraint-level programming for intelligent credit scoring. The method produces a set of customers' scores that makes the classification results actionable and controllable through human interaction during the scoring process. Domain knowledge and experts' experience parameters are built into the criteria and constraint functions of the mathematical programming, and human-machine conversation is employed to generate an efficient and precise solution. Experiments on various data sets validated the effectiveness and efficiency of the proposed method. © 2006 IEEE.
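The spirit of domain-driven scoring, in which expert-set criteria weights and constraints steer the classification rather than a purely data-driven model, can be caricatured as follows. This is a toy sketch, not the paper's multiple-criteria mathematical programming; the criteria names, weights, and floors are all hypothetical knobs a human analyst would adjust.

```python
def credit_score(record, weights):
    """Weighted multi-criteria aggregate score."""
    return sum(weights[k] * record[k] for k in weights)

def classify(record, weights, cutoff, floors):
    """Score-based accept/reject with expert-set constraints.

    Any criterion below its floor is rejected outright, regardless of
    the aggregate score; this mimics hard domain constraints."""
    if any(record[k] < floors.get(k, float("-inf")) for k in weights):
        return "reject"
    return "accept" if credit_score(record, weights) >= cutoff else "reject"

# Hypothetical, analyst-chosen parameters (normalised criteria in [0, 1]).
weights = {"payment_history": 0.6, "utilization": 0.4}
floors = {"payment_history": 0.3}
good = {"payment_history": 0.9, "utilization": 0.7}
risky = {"payment_history": 0.2, "utilization": 0.95}
```

Because the weights, cutoff, and floors are explicit parameters rather than opaque model internals, an analyst can tune them interactively, which is the "controllable by human interaction" property the abstract emphasizes.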

Relevance:

30.00%

Abstract:

Our aim is to estimate the perspective-effected geometric distortion of a scene from a video feed. In contrast to most related previous work, in this task we are constrained to use only low-level, spatiotemporally local motion features. This particular challenge arises in many semiautomatic surveillance systems that alert a human operator to potential abnormalities in the scene. Low-level spatiotemporally local motion features are sparse (and thus require comparatively little storage space) and sufficiently powerful in the context of video abnormality detection to reduce the need for human intervention by more than 100-fold. This paper makes three significant contributions. First, we describe a dense algorithm for perspective estimation, which uses motion features to estimate the perspective distortion at each image locus and then polls all such local estimates to arrive at the globally best estimate. Second, we present an alternative coarse algorithm that subdivides the image frame into blocks, uses motion features to derive block-specific motion characteristics, and constrains the relationships between these characteristics, with the perspective estimate emerging from a global optimization scheme. Third, we report the results of an evaluation using nine large data sets acquired with existing closed-circuit television cameras not installed specifically for the purposes of this paper. Our findings demonstrate that both proposed methods are successful, with accuracy matching that of human labeling using complete visual data (which, by the constraints of the setup, was unavailable to our algorithms).
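The core intuition behind pooling local motion features into a global perspective estimate can be sketched very simply: under perspective, objects moving at similar real-world speeds appear faster lower in the frame (closer to the camera), so the apparent speed of sparse motion features varies roughly linearly with image row, and a least-squares fit over all features recovers that trend. This is an illustrative reduction, not the paper's dense or coarse algorithm; the (row, speed) pairs are hypothetical.

```python
def fit_line(pts):
    """Ordinary least-squares fit of speed = a * row + b, pooled over
    all sparse local motion features (pts = [(row, speed), ...])."""
    n = len(pts)
    sx = sum(x for x, _ in pts)
    sy = sum(y for _, y in pts)
    sxx = sum(x * x for x, _ in pts)
    sxy = sum(x * y for x, y in pts)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# Hypothetical (image_row, apparent_speed) observations: features lower
# in the frame (larger row) move faster.
feats = [(100, 2.0), (200, 4.1), (300, 5.9), (400, 8.0)]
a, b = fit_line(feats)
```

A positive slope `a` is the recovered perspective trend; each local feature contributes one noisy observation, and pooling many of them is what makes the sparse, low-level representation sufficient for a stable global estimate.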