952 results for evaluation algorithm


Relevance:

30.00%

Publisher:

Abstract:

Thermodynamics Conference 2013 (Statistical Mechanics and Thermodynamics Group of the Royal Society of Chemistry), The University of Manchester, 3-6 September 2013.

Relevance:

30.00%

Publisher:

Abstract:

We introduce a new second-order method of texture analysis called Adaptive Multi-Scale Grey Level Co-occurrence Matrix (AMSGLCM), based on the well-known Grey Level Co-occurrence Matrix (GLCM) method. The method deviates significantly from GLCM in that features are extracted not via a fixed 2D weighting function of co-occurrence matrix elements, but by a variable summation of matrix elements in 3D localized neighborhoods. We subsequently present a new methodology for extracting optimized, highly discriminant features from these localized areas using adaptive Gaussian weighting functions. Genetic Algorithm (GA) optimization is used to produce a set of features whose classification worth is evaluated in terms of discriminatory power and feature correlation. We critically appraised the performance of our method and GLCM in pairwise classification of images from visually similar texture classes, captured from Markov Random Field (MRF) synthesized, natural, and biological origins. In these cross-validated classification trials, our method demonstrated significant benefits over GLCM, including increased feature discriminatory power, automatic feature adaptability, and significantly improved classification performance.
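
For orientation, here is a minimal sketch of the classical GLCM baseline the method builds on, not the authors' AMSGLCM: a co-occurrence count at a fixed offset followed by a fixed 2D weighting, the Haralick contrast feature. The grey-level count and offset are illustrative choices.

import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    # Quantise the image to `levels` grey levels, then count pairs of
    # levels co-occurring at the fixed offset (dx, dy).
    q = (img.astype(float) / (img.max() + 1e-9) * (levels - 1)).astype(int)
    m = np.zeros((levels, levels))
    h, w = q.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[q[y, x], q[y + dy, x + dx]] += 1
    return m / m.sum()

def contrast(m):
    # Haralick contrast: a fixed 2D weighting of co-occurrence matrix
    # elements, the kind of feature AMSGLCM replaces with adaptive,
    # localized Gaussian weights.
    i, j = np.indices(m.shape)
    return ((i - j) ** 2 * m).sum()

# e.g. contrast(glcm(np.random.randint(0, 256, (64, 64))))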

Relevance:

30.00%

Publisher:

Abstract:

To maximise data output from single-shot astronomical images, the rejection of cosmic rays is important. We present the results of a benchmark trial comparing various cosmic ray rejection algorithms. The procedures assess the relative performance and characteristics of each process in cosmic ray detection, rates of false detection of true objects, and the quality of image cleaning and reconstruction. The cosmic ray rejection algorithms developed by Rhoads (2000, PASP, 112, 703), van Dokkum (2001, PASP, 113, 1420), Pych (2004, PASP, 116, 148), and the IRAF task xzap by Dickinson are tested using both simulated and real data. It is found that detection efficiency is independent of the density of cosmic rays in an image, being more strongly affected by the density of real objects in the field. As expected, spurious detections and alterations to real data in the cleaning process are also significantly increased by high object densities. We find Rhoads' linear filtering method to produce the best performance in the detection of cosmic ray events; however, the popular van Dokkum algorithm exhibits the highest overall performance in terms of detection and cleaning.
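
For a concrete sense of the task, a naive single-image detector is sketched below, assuming only that cosmic ray hits are sharp outliers above a local median estimate of the image; it is none of the benchmarked algorithms, and the threshold and window size are illustrative.

import numpy as np
from scipy.ndimage import median_filter

def flag_cosmic_rays(img, k=5.0, size=5):
    # Model a cosmic ray hit as a pixel standing far above a local
    # median estimate of the smooth sky/object signal.
    resid = img - median_filter(img, size=size)
    mad = np.median(np.abs(resid - np.median(resid)))
    sigma = 1.4826 * mad  # robust standard deviation from the MAD
    return resid > k * sigma  # boolean mask of suspected hits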

Relevance:

30.00%

Publisher:

Abstract:

To examine the effect of an algorithm-based sedation guideline developed in a North American intensive care unit (ICU) on the duration of mechanical ventilation of patients in an Australian ICU. The intervention was tested in a pre-intervention, post-intervention comparative investigation in a 14-bed adult intensive care unit. Adult mechanically ventilated patients were selected consecutively (n = 322). The pre-intervention and post-intervention groups were similar except for a higher number of patients with a neurological diagnosis in the pre-intervention group. An algorithm-based sedation guideline including a sedation scale was introduced using a multifaceted implementation strategy. The median duration of ventilation was 5.6 days in the post-intervention group, compared with 4.8 days in the pre-intervention group (P = 0.99). The length of stay was 8.2 days in the post-intervention group versus 7.1 days in the pre-intervention group (P = 0.04). There were no statistically significant differences in the other secondary outcomes, including the score on the Experience of Treatment in ICU 7-item questionnaire, the number of tracheostomies, and the number of self-extubations. Records of compliance with recording the sedation score during both phases revealed that patients were slightly more deeply sedated when the guideline was used. The use of the algorithm-based sedation guideline did not reduce the duration of mechanical ventilation in the setting of this study.

Relevance:

30.00%

Publisher:

Abstract:

The Leximancer system is a relatively new method for transforming lexical co-occurrence information from natural language into semantic patterns in an unsupervised manner. It employs two stages of co-occurrence information extraction (semantic and relational), using a different algorithm for each stage. The algorithms used are statistical, but they employ nonlinear dynamics and machine learning. This article is an attempt to validate the output of Leximancer, using a set of evaluation criteria taken from content analysis that are appropriate for knowledge discovery tasks.
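
A minimal sketch of the raw lexical co-occurrence signal such a system starts from (not Leximancer's actual two-stage algorithm); the window size is an arbitrary assumption.

from collections import Counter

def cooccurrence(tokens, window=5):
    # Count word pairs appearing within `window` tokens of each other:
    # the raw signal a semantic-extraction stage would work from.
    counts = Counter()
    for i, w in enumerate(tokens):
        for v in tokens[i + 1:i + window]:
            if v != w:
                counts[frozenset((w, v))] += 1
    return counts

# e.g. cooccurrence("the cat sat on the mat".split())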

Relevance:

30.00%

Publisher:

Abstract:

Objective: The description and evaluation of the performance of a new real-time seizure detection algorithm in the newborn infant. Methods: The algorithm includes parallel fragmentation of the EEG signal into waves; wave-feature extraction and averaging; and elementary, preliminary and final detection. The algorithm detects EEG waves with heightened regularity, using wave intervals, amplitudes and shapes. The performance of the algorithm was assessed using event-based as well as liberal and conservative time-based approaches, and compared with the performance of Gotman's and Liu's algorithms. Results: The algorithm was assessed on multi-channel EEG records of 55 neonates, including 17 with seizures. The algorithm showed sensitivities ranging from 83% to 95% with positive predictive values (PPV) of 48-77%, and 2.0 false positive detections per hour. In comparison, Gotman's algorithm (with a 30 s gap-closing procedure) displayed sensitivities of 45-88% and PPV of 29-56%, with 7.4 false positives per hour; Liu's algorithm displayed sensitivities of 96-99% and PPV of 10-25%, with 15.7 false positives per hour. Conclusions: The wave-sequence analysis based algorithm displayed higher sensitivity, higher PPV and a substantially lower level of false positives than the two previously published algorithms. Significance: The proposed algorithm provides a basis for major improvements in neonatal seizure detection and monitoring. Published by Elsevier Ireland Ltd. on behalf of the International Federation of Clinical Neurophysiology.
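
A sketch of event-based scoring of the kind reported above, under the assumption that a detection falling within some tolerance of an annotated seizure onset counts as a true positive; the tolerance and matching rule are illustrative, not the authors' exact protocol.

def event_metrics(true_onsets, detections, tol=30.0):
    # Sensitivity is computed over annotated events, PPV over
    # detections; times are in seconds.
    hit = lambda d: any(abs(d - t) <= tol for t in true_onsets)
    found = sum(any(abs(d - t) <= tol for d in detections) for t in true_onsets)
    matched = sum(1 for d in detections if hit(d))
    sensitivity = found / len(true_onsets) if true_onsets else 0.0
    ppv = matched / len(detections) if detections else 0.0
    return sensitivity, ppv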

Relevance:

30.00%

Publisher:

Abstract:

Background: Determination of the subcellular location of a protein is essential to understanding its biochemical function. This information can provide insight into the function of hypothetical or novel proteins. These data are difficult to obtain experimentally but have become especially important since many whole genome sequencing projects have been completed and many resulting protein sequences still lack detailed functional information. To address this paucity of data, many computational prediction methods have been developed. However, these methods have varying levels of accuracy and perform differently depending on the sequences presented to the underlying algorithm. It is therefore useful to compare these methods and monitor their performance. Results: To perform a comprehensive survey of prediction methods, we selected only methods that accepted large batches of protein sequences, were publicly available, and were able to predict localization to at least nine of the major subcellular locations (nucleus, cytosol, mitochondrion, extracellular region, plasma membrane, Golgi apparatus, endoplasmic reticulum (ER), peroxisome, and lysosome). The selected methods were CELLO, MultiLoc, Proteome Analyst, pTarget and WoLF PSORT. These methods were evaluated using 3763 mouse proteins from SwissProt that represent the source of the training sets used in the development of the individual methods. In addition, an independent evaluation set of 2145 mouse proteins from LOCATE, with a bias towards subcellular localizations underrepresented in SwissProt, was used. The sensitivity and specificity were calculated for each method and compared to a theoretical value based on what might be observed by random chance. Conclusion: No individual method had a sufficient level of sensitivity across both evaluation sets to enable reliable application to hypothetical proteins. All methods showed lower performance on the LOCATE dataset, and variable performance on individual subcellular localizations was observed. Proteins localized to the secretory pathway were the most difficult to predict, while nuclear and extracellular proteins were predicted with the highest sensitivity.
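
For reference, a sketch of the one-vs-rest sensitivity/specificity computation used in such comparisons, assuming flat lists of true and predicted location labels; this is generic evaluation methodology, not any of the five predictors.

def per_class_metrics(y_true, y_pred, classes):
    # Sensitivity and specificity per subcellular location, directly
    # comparable against a random-chance baseline.
    out = {}
    for c in classes:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        tn = sum(t != c and p != c for t, p in zip(y_true, y_pred))
        out[c] = (tp / (tp + fn) if tp + fn else 0.0,   # sensitivity
                  tn / (tn + fp) if tn + fp else 0.0)   # specificity
    return out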

Relevance:

30.00%

Publisher:

Abstract:

Automatic Term Recognition (ATR) is a fundamental processing step preceding more complex tasks such as semantic search and ontology learning. Of the large number of methodologies available in the literature, only a few are able to handle both single- and multi-word terms. In this paper we present a comparison of five such algorithms and propose a combined approach using a voting mechanism. We evaluated the six approaches using two different corpora and show that the voting algorithm performs best on one corpus (a collection of texts from Wikipedia) and less well on the Genia corpus (a standard life science corpus). This indicates that the choice and design of corpus has a major impact on the evaluation of term recognition algorithms. Our experiments also showed that single-word terms can be equally important and occupy a fairly large proportion of terms in certain domains. As a result, algorithms that ignore single-word terms may cause problems for tasks built on top of ATR. Effective ATR systems also need to take into account both the unstructured text and its structured aspects, which means information extraction techniques need to be integrated into the term recognition process.
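
The abstract does not spell out the voting mechanism; a simple Borda-style combiner over the ranked term lists produced by several ATR methods is one plausible reading, sketched here under that assumption.

def vote(rankings, top_k=100):
    # Each method awards a candidate term points inversely
    # proportional to the rank it assigns the term; terms ranked
    # highly by several methods float to the top.
    scores = {}
    for ranked_terms in rankings:
        for rank, term in enumerate(ranked_terms[:top_k]):
            scores[term] = scores.get(term, 0) + (top_k - rank)
    return sorted(scores, key=scores.get, reverse=True)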

Relevance:

30.00%

Publisher:

Abstract:

A theoretical model is presented which describes selection in a genetic algorithm (GA) under a stochastic fitness measure and correctly accounts for finite population effects. Although this model describes a number of selection schemes, we only consider Boltzmann selection in detail here as results for this form of selection are particularly transparent when fitness is corrupted by additive Gaussian noise. Finite population effects are shown to be of fundamental importance in this case, as the noise has no effect in the infinite population limit. In the limit of weak selection we show how the effects of any Gaussian noise can be removed by increasing the population size appropriately. The theory is tested on two closely related problems: the one-max problem corrupted by Gaussian noise and generalization in a perceptron with binary weights. The averaged dynamics can be accurately modelled for both problems using a formalism which describes the dynamics of the GA using methods from statistical mechanics. The second problem is a simple example of a learning problem and by considering this problem we show how the accurate characterization of noise in the fitness evaluation may be relevant in machine learning. The training error (negative fitness) is the number of misclassified training examples in a batch and can be considered as a noisy version of the generalization error if an independent batch is used for each evaluation. The noise is due to the finite batch size and in the limit of large problem size and weak selection we show how the effect of this noise can be removed by increasing the population size. This allows the optimal batch size to be determined, which minimizes computation time as well as the total number of training examples required.
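
A minimal sketch of the setting analysed above: Boltzmann selection where every fitness evaluation is corrupted by additive Gaussian noise. The inverse temperature beta and the noise level are illustrative parameters.

import numpy as np

def boltzmann_select(population, fitness, beta=1.0, noise_std=0.5, rng=None):
    # Corrupt each fitness evaluation with additive Gaussian noise,
    # then draw parents with probability proportional to
    # exp(beta * noisy_fitness).
    rng = rng or np.random.default_rng()
    noisy = np.asarray(fitness, float) + rng.normal(0.0, noise_std, len(population))
    w = np.exp(beta * (noisy - noisy.max()))  # shift for numerical stability
    idx = rng.choice(len(population), size=len(population), p=w / w.sum())
    return [population[i] for i in idx]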

Relevance:

30.00%

Publisher:

Abstract:

Dry eye is a common yet complex condition. Intrinsic and extrinsic factors can cause dysfunction of the lids, lacrimal glands, meibomian glands, ocular surface cells, or neural network. These problems would ultimately be expressed at the tear film-ocular surface interface. The manifestations of these problems are experienced as symptoms such as grittiness, discomfort, burning sensation, hyperemia, and secondary epiphora in some cases. Accurate investigation of dry eye is crucial to correct management of the condition. Techniques can be classed according to their investigation of tear production, tear stability, and surface damage (including histological tests). The application, validity, reliability, compatibility, protocols, and indications for these are important. The use of a diagnostic algorithm may lead to more accurate diagnosis and management. The lack of correlation between signs and symptoms seems to favor tear film osmolarity, an objective biomarker, as the best current clue to correct diagnosis.

Relevance:

30.00%

Publisher:

Abstract:

Link adaptation is a critical component of IEEE 802.11 systems. In this paper, we analytically model a retransmission-based Auto Rate Fallback (ARF) link adaptation algorithm. Both packet collisions and packet corruptions are modeled within the algorithm. The models can provide insights into the dynamics of link adaptation algorithms and the configuration of algorithm parameters. It is also observed that when the number of competing stations is high, packet collisions can largely affect the performance of ARF and make ARF operate at the lowest data rate, even when no packet corruption occurs. This is in contrast to the existing assumption that packet collisions will not affect the correct operation of ARF and can be ignored in the evaluation of ARF. The work presented in this paper can provide guidelines for configuring link adaptation algorithms and designing new link adaptation algorithms for future high-speed 802.11 systems. © 2006 IEEE.
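
For reference, a sketch of the classical retransmission-based ARF state machine: step the rate down after two consecutive failures and up after ten consecutive successes. The 802.11b rate set and the conventional thresholds are shown, and the probe/fallback timer is omitted; this is the textbook scheme, not necessarily the paper's exact model.

class ARF:
    def __init__(self, rates=(1.0, 2.0, 5.5, 11.0)):
        self.rates, self.i = rates, len(rates) - 1  # start at the top rate
        self.ok = self.bad = 0

    def report(self, success):
        # Called once per transmission attempt with its outcome.
        if success:
            self.ok, self.bad = self.ok + 1, 0
            if self.ok >= 10 and self.i < len(self.rates) - 1:
                self.i, self.ok = self.i + 1, 0   # move up a rate
        else:
            self.bad, self.ok = self.bad + 1, 0
            if self.bad >= 2 and self.i > 0:
                self.i, self.bad = self.i - 1, 0  # fall back a rate

    @property
    def rate(self):
        return self.rates[self.i]  # current data rate in Mbit/s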

Relevance:

30.00%

Publisher:

Abstract:

In this article we discuss the possibility of using genetic algorithms in cryptanalysis. We develop and describe a genetic algorithm for finding the secret key of a block permutation cipher, where the key is a permutation of the first n natural numbers. Our algorithm finds the exact key length and recovers the key with controlled accuracy. Evaluation of the experimental results shows that almost fully automatic cryptanalysis is possible.
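
A sketch of the two ingredients such a GA needs: decryption under a candidate permutation key, and a fitness measure for candidate keys. The bigram-frequency fitness (and its values) is an assumption for illustration; the abstract does not state the measure used.

import math

# Illustrative English bigram frequencies (rough values).
BIGRAM_FREQ = {"th": 0.027, "he": 0.023, "in": 0.020, "er": 0.018, "an": 0.016}

def decrypt(ciphertext, key):
    # Undo a block permutation cipher: within each block, the character
    # at position i of the ciphertext came from plaintext position key[i].
    n, out = len(key), []
    for b in range(0, len(ciphertext) - n + 1, n):
        block = ciphertext[b:b + n]
        plain = [""] * n
        for i, k in enumerate(key):
            plain[k] = block[i]
        out.append("".join(plain))
    return "".join(out)

def fitness(ciphertext, key):
    # Assumed GA fitness: log-likelihood of the decryption under a
    # bigram model of the plaintext language (higher is better).
    text = decrypt(ciphertext, key)
    return sum(math.log(BIGRAM_FREQ.get(text[i:i + 2], 1e-6))
               for i in range(len(text) - 1))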

Relevance:

30.00%

Publisher:

Abstract:

Evaluating the reliability of the manufacturing process is a crucial task in the product development process. Process reliability is a measure of the production ability of a reconfigurable manufacturing system (RMS), serving as an integrated performance indicator of the production process under specified technical constraints, including time, cost and quality. An integrated framework for evaluating manufacturing process reliability within the product development process is presented. A mathematical model and an algorithm based on the universal generating function (UGF) are developed for calculating the reliability of the manufacturing process with respect to task intensity and process capacity, which are both independent random variables. Rework strategies of the RMS are analyzed under different task intensities based on process reliability, and the optimization of rework strategies based on process reliability is then discussed.
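
A minimal sketch of the UGF machinery under the stated assumptions: process capacity and task intensity are independent discrete random variables, a u-function maps a performance level to its probability, and reliability is P(capacity >= intensity). Names and the composition operator are illustrative.

from collections import defaultdict

def ugf_compose(u1, u2, op):
    # Compose two independent elements: combine performance levels
    # with `op` (e.g. sum capacities of parallel machines) and
    # multiply the corresponding probabilities.
    out = defaultdict(float)
    for g1, p1 in u1.items():
        for g2, p2 in u2.items():
            out[op(g1, g2)] += p1 * p2
    return dict(out)

def process_reliability(capacity_u, intensity_u):
    # Reliability = P(process capacity >= task intensity).
    return sum(pc * pi
               for c, pc in capacity_u.items()
               for g, pi in intensity_u.items() if c >= g)

# e.g. two parallel machines, each up (capacity 5) with probability 0.9:
# line = ugf_compose({0: 0.1, 5: 0.9}, {0: 0.1, 5: 0.9}, lambda a, b: a + b)
# process_reliability(line, {8: 1.0})  # -> 0.81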

Relevance:

30.00%

Publisher:

Abstract:

The convex hull describes the extent or shape of a set of data and is used ubiquitously in computational geometry. Common algorithms to construct the convex hull of a finite set of n points (x, y) range from O(n log n) time to O(n) time. However, it is often the case that a heuristic procedure is applied to reduce the original set of n points to a set of s < n points that contains the hull, thereby accelerating the final hull-finding procedure. We present an algorithm to precondition data before building a 2D convex hull with integer coordinates, with three distinct advantages. First, for all practical purposes, it is linear; second, no explicit sorting of the data is required; and third, the reduced set of s points is constructed to form an ordered set that can be pipelined directly into an O(n) time convex hull algorithm. Under these criteria a fast (or O(n)) preconditioner in principle creates a fast convex hull (approximately O(n)) for an arbitrary set of points. The paper empirically evaluates and quantifies the acceleration generated by the method against the most common convex hull algorithms. An extra acceleration of at least four times compared with existing preconditioning methods is found in experiments on a dataset.
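
For contrast, a sketch of the best-known throw-away preconditioner in this spirit, the Akl-Toussaint heuristic, not the paper's pipelined method: points strictly inside the quadrilateral spanned by the four x/y extremes can never be hull vertices, so they are discarded before the final hull pass.

def cross(o, a, b):
    # z-component of (a - o) x (b - o); positive if b lies to the
    # left of the directed line o -> a.
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def akl_toussaint_filter(points):
    left, right = min(points), max(points)
    low = min(points, key=lambda p: (p[1], p[0]))
    high = max(points, key=lambda p: (p[1], p[0]))
    quad = [left, low, right, high]  # counter-clockwise, y increasing upward
    def strictly_inside(p):
        return all(cross(quad[i], quad[(i + 1) % 4], p) > 0 for i in range(4))
    return [p for p in points if not strictly_inside(p)]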
