866 results for the Fuzzy Colour Segmentation Algorithm
Abstract:
"UILU-ENG 79 1747"--Cover.
Abstract:
The story of how feathers evolved is far from over. In 1868, Thomas Huxley declared that dinosaurs gave rise to birds. He based his claim on Compsognathus, a 150-million-year-old dinosaur fossil from Solnhofen, Germany, whose delicate hind legs were remarkably similar to those of table fowl. The discovery seven years earlier of Archaeopteryx, a fossil bird with a long bony tail, toothed jaws and clawed fingers, had convinced many people that birds were somehow related to reptiles. But Compsognathus was the fossil that placed dinosaurs firmly in the middle of this complex evolutionary equation. Wings, claimed Huxley, must have grown out of rudimentary forelimbs. And feathers? Whether Compsognathus had them, Huxley could only guess. Nevertheless, his theory clearly required that scales had somehow transformed into feathers. The question was not just how, but why?
Abstract:
To examine the effect of an algorithm-based sedation guideline developed in a North American intensive care unit (ICU) on the duration of mechanical ventilation of patients in an Australian ICU. The intervention was tested in a pre-intervention, post-intervention comparative investigation in a 14-bed adult intensive care unit. Adult mechanically ventilated patients were selected consecutively (n = 322). The pre-intervention and post-intervention groups were similar except for a higher number of patients with a neurological diagnosis in the pre-intervention group. An algorithm-based sedation guideline including a sedation scale was introduced using a multifaceted implementation strategy. The median duration of ventilation was 5.6 days in the post-intervention group, compared with 4.8 days in the pre-intervention group (P = 0.99). The length of stay was 8.2 days in the post-intervention group versus 7.1 days in the pre-intervention group (P = 0.04). There were no statistically significant differences in the other secondary outcomes, including the score on the Experience of Treatment in ICU 7-item questionnaire, the number of tracheostomies and the number of self-extubations. Records of compliance with recording the sedation score during both phases revealed that patients were slightly more deeply sedated when the guideline was used. The use of the algorithm-based sedation guideline did not reduce the duration of mechanical ventilation in the setting of this study.
Abstract:
A simple method for training the dynamical behavior of a neural network is derived. It is applicable to any training problem in discrete-time networks with arbitrary feedback. The algorithm resembles back-propagation in that an error function is minimized using a gradient-based method, but the optimization is carried out in the hidden part of state space, either instead of, or in addition to, weight space. Computational results are presented for some simple dynamical training problems, one of which requires response to a signal 100 time steps in the past.
Abstract:
A simple method for training the dynamical behavior of a neural network is derived. It is applicable to any training problem in discrete-time networks with arbitrary feedback. The method resembles back-propagation in that it is a least-squares, gradient-based optimization method, but the optimization is carried out in the hidden part of state space instead of weight space. A straightforward adaptation of this method to feedforward networks offers an alternative to training by conventional back-propagation. Computational results are presented for simple dynamical training problems, with varied success. The failures appear to arise when the method converges to a chaotic attractor. A patch-up for this problem is proposed. The patch-up involves a technique for implementing inequality constraints which may be of interest in its own right.
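The abstract describes optimizing an error function over the hidden-state trajectory rather than (only) the weights. A minimal sketch of that idea, assuming a tiny tanh recurrent network, a soft penalty tying each state to its predecessor under the dynamics, and plain gradient descent with numerical gradients; all sizes, targets and step sizes here are illustrative, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)
T, n = 10, 3                                  # time steps, hidden units
W = rng.normal(scale=0.5, size=(n, n))        # recurrent weights
S = rng.normal(scale=0.1, size=(T, n))        # hidden-state trajectory (free variables)
target = np.array([0.5, -0.5, 0.0])           # desired state at the final step

def loss():
    # Soft dynamics constraint: each state should follow from its predecessor,
    # plus a squared output error at the final time step.
    dyn = sum(np.sum((S[t + 1] - np.tanh(S[t] @ W)) ** 2) for t in range(T - 1))
    out = np.sum((S[-1] - target) ** 2)
    return dyn + out

def num_grad(x, eps=1e-5):
    # Central-difference gradient of loss() with respect to array x (in place).
    g = np.zeros_like(x)
    it = np.nditer(x, flags=["multi_index"])
    for _ in it:
        idx = it.multi_index
        old = x[idx]
        x[idx] = old + eps
        hi = loss()
        x[idx] = old - eps
        lo = loss()
        x[idx] = old
        g[idx] = (hi - lo) / (2 * eps)
    return g

initial = loss()
lr = 0.05
for _ in range(200):                          # descend in both state and weight space
    gW, gS = num_grad(W), num_grad(S)
    W -= lr * gW
    S -= lr * gS
print(initial, loss())
```

The point of the sketch is only the structure of the objective: the state trajectory `S` is itself a decision variable, so the dynamics enter as a penalty rather than being unrolled as in back-propagation through time.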
Abstract:
The existing assignment problems for assigning n jobs to n individuals are limited to considerations of cost or profit measured as crisp values. However, in many real applications, costs are not deterministic numbers. This paper develops a procedure based on the Data Envelopment Analysis (DEA) method to solve assignment problems with fuzzy costs or fuzzy profits for each possible assignment. It aims to obtain the points with maximum membership values for the fuzzy parameters while maximizing the profit or minimizing the assignment cost. In this method, a discrete approach is first presented to rank the fuzzy numbers. Then, corresponding to each fuzzy number, a crisp number is introduced using the efficiency concept. A numerical example is used to illustrate the usefulness of this new method. © 2012 Operational Research Society Ltd. All rights reserved.
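The paper's DEA-based ranking is more involved than this, but the overall pipeline it describes (reduce each fuzzy cost to a crisp representative, then solve an ordinary assignment problem) can be sketched with a simple centroid defuzzification and brute-force enumeration. The triangular fuzzy costs below are invented for illustration:

```python
from itertools import permutations

# Hypothetical triangular fuzzy costs (low, mode, high) for a 3x3 assignment.
fuzzy_cost = [
    [(2, 3, 5), (6, 7, 9), (4, 5, 6)],
    [(1, 2, 4), (3, 4, 5), (7, 8, 9)],
    [(5, 6, 8), (2, 3, 4), (1, 2, 3)],
]

def centroid(tri):
    # Crisp representative of a triangular fuzzy number (a stand-in for the
    # paper's DEA efficiency-based crisp number).
    a, b, c = tri
    return (a + b + c) / 3.0

crisp = [[centroid(t) for t in row] for row in fuzzy_cost]

# Brute-force minimum-cost assignment: p[i] is the column assigned to row i.
n = len(crisp)
best = min(permutations(range(n)),
           key=lambda p: sum(crisp[i][p[i]] for i in range(n)))
cost = sum(crisp[i][best[i]] for i in range(n))
print(best, round(cost, 2))
```

For realistic sizes the enumeration would be replaced by the Hungarian algorithm; the sketch only shows where the defuzzification step slots into the assignment problem.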
Abstract:
Data envelopment analysis (DEA) is a methodology for measuring the relative efficiencies of a set of decision making units (DMUs) that use multiple inputs to produce multiple outputs. Crisp input and output data are fundamentally indispensable in conventional DEA. However, the observed values of the input and output data in real-world problems are sometimes imprecise or vague. Many researchers have proposed various fuzzy methods for dealing with the imprecise and ambiguous data in DEA. In this study, we provide a taxonomy and review of the fuzzy DEA methods. We present a classification scheme with four primary categories, namely, the tolerance approach, the α-level based approach, the fuzzy ranking approach and the possibility approach. We discuss each classification scheme and group the fuzzy DEA papers published in the literature over the past 20 years. To the best of our knowledge, this paper appears to be the only review and complete source of references on fuzzy DEA. © 2011 Elsevier B.V. All rights reserved.
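As a small illustration of the α-level based approach named in the taxonomy: a fuzzy input or output is replaced, at each membership level α, by the crisp interval of values whose membership is at least α. For a triangular fuzzy number this cut has a closed form (the numbers below are illustrative):

```python
def alpha_cut(tri, alpha):
    """Interval of a triangular fuzzy number (a, b, c) at membership level alpha.

    At alpha = 0 the cut is the full support [a, c]; at alpha = 1 it collapses
    to the single point b with full membership.
    """
    a, b, c = tri
    return (a + alpha * (b - a), c - alpha * (c - b))

print(alpha_cut((2, 5, 9), 0.5))
```

An α-level DEA model then solves the efficiency problem over these intervals (typically an optimistic and a pessimistic bound per DMU) for a grid of α values.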
Abstract:
This paper aims at the development of procedures and algorithms for applying artificial intelligence tools to acquire, process and analyze various types of knowledge. The proposed environment integrates techniques of knowledge and decision-process modeling such as neural networks and fuzzy logic-based reasoning methods. The problem of identifying complex processes with the use of neuro-fuzzy systems is solved. The proposed classifier has been successfully applied to building a decision support system for solving a managerial problem.
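The abstract does not specify the neuro-fuzzy machinery; as a minimal illustration of the fuzzy-reasoning side, here is a two-rule zero-order Takagi-Sugeno inference with triangular membership functions. The variable, rule breakpoints and rule outputs are made-up values, not the paper's model:

```python
def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def infer(temp):
    # Rule 1: IF temp IS low  THEN output = 0.2
    # Rule 2: IF temp IS high THEN output = 0.9
    w_low = tri(temp, 0, 10, 20)
    w_high = tri(temp, 10, 20, 30)
    # Weighted average of the rule consequents (zero-order Sugeno).
    return (w_low * 0.2 + w_high * 0.9) / (w_low + w_high)

print(infer(15))
```

In a neuro-fuzzy identifier the membership parameters and rule consequents are the quantities tuned by the neural-network training, rather than fixed by hand as here.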
Abstract:
Decision making and technical decision analysis demand computer-aided techniques and, increasingly, support by formal methods. In recent years fuzzy decision analysis and related techniques have gained importance as efficient methods for planning and optimization in fields such as production planning, financial and economic modeling, forecasting and classification. Hierarchical modeling of the decision situation is also one of the most popular modeling methods. It is shown how to use the fuzzy hierarchical model in combination with other methods of Multiple Criteria Decision Making. We propose a novel approach to overcome the inherent limitations of hierarchical methods by exploiting multiple criteria decision making.
Abstract:
Background: Vigabatrin (VGB) is an anti-epileptic medication which has been linked to peripheral constriction of the visual field. Documenting the natural history associated with continued VGB exposure is important when making decisions about the risks and benefits associated with the treatment. Due to its speed, the Swedish Interactive Threshold Algorithm (SITA) has become the algorithm of choice when carrying out Full Threshold automated static perimetry. SITA uses prior distributions of normal and glaucomatous visual field behaviour to estimate threshold sensitivity. As the abnormal model is based on glaucomatous behaviour, this algorithm has not been validated for VGB recipients. We aim to assess the clinical utility of the SITA algorithm for accurately mapping VGB-attributed field loss. Methods: The sample comprised one randomly selected eye of 16 patients diagnosed with epilepsy and exposed to VGB therapy. A clinical diagnosis of VGB-attributed visual field loss was documented in 44% of the group. The mean age was 39.3 ± 14.5 years and the mean deviation was -4.76 ± 4.34 dB. Each patient was examined with the Full Threshold, SITA Standard and SITA Fast algorithms. Results: SITA Standard was on average approximately twice as fast (7.6 minutes) and SITA Fast approximately three times as fast (4.7 minutes) as examinations completed using the Full Threshold algorithm (15.8 minutes). In the clinical environment, the visual field outcome with both SITA algorithms was equivalent to visual field examination using the Full Threshold algorithm in terms of visual inspection of the grey scale plots, defect area and defect severity. Conclusions: Our research shows that both SITA algorithms are able to accurately map visual field loss attributed to VGB. As patients diagnosed with epilepsy are often vulnerable to fatigue, the time saving offered by SITA Fast means that this algorithm has a significant advantage for use with VGB recipients.
Abstract:
The Dendritic Cell Algorithm is an immune-inspired algorithm originally based on the function of natural dendritic cells. The original instantiation of the algorithm is highly stochastic. While the performance of the algorithm is good when applied to large real-time datasets, it is difficult to analyse due to the number of random elements. In this paper a deterministic version of the algorithm is proposed, implemented and tested using a port scan dataset to provide a controllable system. This version has a controllable number of parameters, which are experimented with in this paper. In addition, the effects of time windows and of varying the number of cells are examined, both of which are shown to influence the algorithm. Finally, a novel metric for the assessment of the algorithm's output is introduced and proves to be more sensitive than the metric used with the original Dendritic Cell Algorithm.
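A hedged sketch of a deterministic DCA-style cell's signal processing, using the commonly cited dDCA weightings (cumulative signal csm = danger + safe; context value k = danger - 2*safe). The migration threshold and the signal stream below are illustrative values, not the paper's parameters:

```python
def process(stream, migration_threshold=10.0):
    """Run one cell over a stream of (danger, safe) signal pairs.

    The cell accumulates signals until its cumulative signal magnitude (csm)
    exceeds the migration threshold, then reports its context value k:
    positive k suggests an anomalous context, negative k a normal one.
    """
    csm = 0.0   # cumulative signal magnitude: drives migration
    k = 0.0     # context value accumulated since the last migration
    outputs = []
    for danger, safe in stream:
        csm += danger + safe
        k += danger - 2.0 * safe      # safe signals weigh twice as heavily
        if csm >= migration_threshold:
            outputs.append(k)         # cell migrates and presents its context
            csm, k = 0.0, 0.0         # a fresh cell takes its place
    return outputs

print(process([(5, 1), (6, 0), (1, 5), (0, 6)]))
```

Because every quantity here is a deterministic function of the input stream, repeated runs give identical output, which is exactly the controllability property the abstract is after.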
Abstract:
Background: In 2009 Malawi introduced a new protocol to screen potential blood donors for anaemia, using the WHO Haemoglobin Colour Scale (HCS) for initial screening. Published studies of the accuracy of the HCS for screening potential blood donors show varying levels of accuracy, and opinion varies on whether it is an appropriate screening test. The aim of the study was to assess the validity of the HCS as a screening test, by comparison with HemoCue, in potential blood donors in Malawi. Study design and methods: This was a blinded prospective study of potential blood donors aged over 18 years at the Malawi Blood Transfusion Service in Blantyre, Malawi. Capillary blood samples were analysed using the HCS and HemoCue, independently of each other. The sensitivity and specificity of correctly identifying ineligible blood donors (Hb ≤ 12 g/dL) were calculated. Results: Of 242 participants, 234 (96.7%) were correctly allocated and 8 (3.3%) were wrongly allocated on the basis of the HCS compared with HemoCue; all were subjects who were wrongly accepted as donors when their haemoglobin results were ≤ 12.0 g/dL. This gave a sensitivity of 100% and a specificity of 96.7% for detecting donor eligibility. The negative predictive value of the HCS was 100%, but the positive predictive value for identifying ineligible donors on the basis of anaemia was only 20%. Conclusions: Initial screening with the HCS correctly predicts eligibility for blood donation in the majority of potential blood donors, at considerable cost saving compared with use of HemoCue as the first-line anaemia screening test; however, by this method a small number of anaemic patients were allowed to donate blood.
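The reported measures follow the standard 2x2 diagnostic-accuracy definitions. A minimal sketch with hypothetical counts (these are not the study's raw table, only invented numbers chosen to show how sensitivity, specificity and predictive values are derived from a confusion table):

```python
def screen_metrics(tp, fp, fn, tn):
    """Standard diagnostic-accuracy measures from a 2x2 confusion table."""
    sensitivity = tp / (tp + fn)   # true positives among all with the condition
    specificity = tn / (tn + fp)   # true negatives among all without it
    ppv = tp / (tp + fp)           # positive predictive value
    npv = tn / (tn + fn)           # negative predictive value
    return sensitivity, specificity, ppv, npv

# Hypothetical counts for illustration.
sens, spec, ppv, npv = screen_metrics(tp=2, fp=8, fn=0, tn=232)
print(sens, spec, ppv, npv)
```

Note how a high specificity can coexist with a very low positive predictive value when the condition being flagged is rare in the screened population, which is the pattern the abstract reports.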
Abstract:
Oscillometric blood pressure (BP) monitors are currently used to diagnose hypertension in both home and clinical settings. These monitors take BP measurements once every 15 minutes over a 24-hour period and provide a reliable and accurate system that is minimally invasive. Although intermittent cuff measurements have proven to be a good indicator of BP, a continuous BP monitor is highly desirable for the diagnosis of hypertension and other cardiac diseases. However, no such devices currently exist. A novel algorithm has been developed based on the Pulse Transit Time (PTT) method, which would allow non-invasive and continuous BP measurement. PTT is defined as the time it takes the BP wave to propagate from the heart to a specified point on the body. After an initial BP measurement, PTT algorithms can track BP over short periods of time, known as calibration intervals. After this time has elapsed, a new BP measurement is required to recalibrate the algorithm. Using the PhysioNet database as a basis, the new algorithm was developed and tested on 15 patients, each tested 3 times over a period of 30 minutes. The predicted BP of the algorithm was compared to the arterial BP of each patient. It was established that this new algorithm is capable of tracking BP over 12 minutes without the need for recalibration under the BHS standard, a 100% improvement over previously reported calibration intervals. The algorithm was incorporated into a new system based on its requirements and was tested using three volunteers. The results mirrored those previously observed, providing accurate BP measurements when a 12-minute calibration interval was used. This new system provides a significant improvement to the existing method, allowing BP to be monitored continuously and non-invasively, on a beat-to-beat basis over 24 hours, adding major clinical and diagnostic value.
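The abstract does not give the algorithm's internals; a minimal sketch of the general PTT idea, assuming the transit time is measured from an ECG R-peak to the foot (local minimum) of the following PPG pulse and mapped to BP with a made-up linear calibration. The constants `a` and `b` and the toy signal are assumptions, not the paper's values:

```python
def ptt_seconds(r_peak_idx, ppg, fs):
    """Time from the R-peak sample to the next pulse foot of the PPG signal."""
    i = r_peak_idx + 1
    while i + 1 < len(ppg) and ppg[i + 1] < ppg[i]:
        i += 1                           # walk down to the local minimum
    return (i - r_peak_idx) / fs

def bp_from_ptt(ptt, a=-250.0, b=180.0):
    """Illustrative calibration: shorter PTT (faster wave) maps to higher BP."""
    return a * ptt + b

fs = 100                                 # sampling rate in Hz
ppg = [5, 4, 3, 2, 1, 2, 3, 4, 5]        # toy PPG segment with its foot at index 4
ptt = ptt_seconds(0, ppg, fs)
print(ptt, bp_from_ptt(ptt))
```

In a real system the calibration constants would be re-fitted from each cuff measurement at the start of every calibration interval, which is exactly the recalibration step the abstract describes stretching from roughly 6 to 12 minutes.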