930 results for classification algorithm


Relevance: 20.00%

Publisher:

Abstract:

A Wiener system is a linear time-invariant filter followed by an invertible nonlinear distortion. Assuming that the input signal is an independent and identically distributed (iid) sequence, we propose an algorithm for estimating the input signal by observing only the output of the Wiener system. The algorithm is based on minimizing the mutual information of the output samples by means of a steepest-descent gradient approach.
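The system model above can be sketched in a few lines. The FIR filter `h` and the `tanh` distortion below are illustrative assumptions, not the paper's choices, and the inversion shown assumes the system is known; the paper's contribution is doing this blindly, by minimizing the mutual information of the output samples with gradient descent.

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(0)

# iid source sequence (the signal to be recovered)
s = rng.uniform(-1, 1, 5000)

# Wiener system: LTI filter followed by an invertible nonlinearity.
h = [1.0, 0.5]                        # minimum-phase FIR filter (illustrative)
x = np.tanh(lfilter(h, [1.0], s))     # observed output

# With h and the nonlinearity known, inversion is exact:
# undo the distortion, then deconvolve the filter.
v = np.arctanh(np.clip(x, -1 + 1e-12, 1 - 1e-12))
s_hat = lfilter([1.0], h, v)

print(np.max(np.abs(s_hat - s)))      # ~0: exact recovery when the system is known
```

The blind setting replaces the known `h` and `tanh` with parametrized estimates updated by the mutual-information gradient.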

Abstract:

This paper proposes a very simple method for increasing the algorithm speed for separating sources from post-nonlinear (PNL) mixtures or inverting Wiener systems. The method is based on a pertinent initialization of the inverse system, whose computational cost is very low. The nonlinear part is roughly approximated by pushing the observations to be Gaussian; this provides a surprisingly good approximation even when the basic assumption is not fully satisfied. The linear part is initialized so that the outputs are decorrelated. Experiments show an impressive speed improvement.
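The two initialization steps can be realized cheaply as below. The rank-based Gaussianization and eigenvalue whitening are standard techniques chosen here for illustration; they are one plausible reading of "pushing the observations to be Gaussian" and "outputs are decorrelated", not necessarily the paper's exact recipe.

```python
import numpy as np
from scipy.stats import norm, rankdata

def gaussianize(x):
    """Rank-based mapping that pushes a 1-D sample toward N(0, 1):
    empirical CDF followed by the inverse Gaussian CDF."""
    u = (rankdata(x) - 0.5) / len(x)   # empirical CDF values in (0, 1)
    return norm.ppf(u)

def decorrelate(X):
    """Whiten the rows of X (variables = rows, observations = columns)
    so that the outputs are exactly decorrelated."""
    X = X - X.mean(axis=1, keepdims=True)
    d, E = np.linalg.eigh(np.cov(X))
    return E @ np.diag(d ** -0.5) @ E.T @ X

rng = np.random.default_rng(1)
# Toy post-nonlinear observation: distorted mixture of two sources.
S = rng.laplace(size=(2, 2000))
X = np.tanh(0.4 * (np.array([[1.0, 0.6], [0.5, 1.0]]) @ S))

Y = decorrelate(np.vstack([gaussianize(x) for x in X]))
print(np.round(np.cov(Y), 3))          # ~ identity: a decorrelated starting point
```

A separation algorithm started from `Y` begins much closer to the solution than one started from the raw observations `X`.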

Abstract:

In this paper we propose the use of the independent component analysis (ICA) [1] technique for improving the classification rate of decision trees and multilayer perceptrons [2], [3]. Using ICA in the preprocessing stage makes the structure of both classifiers simpler and therefore improves their generalization properties. The hypothesis behind the proposed preprocessing is that ICA transforms the feature space into a space where the components are independent and aligned with the axes, and which is therefore better adapted to the way a decision tree is constructed. The inference of the weights of a multilayer perceptron also becomes easier, because the gradient search in weight space follows independent trajectories. The result is that the classifiers are less complex and, on some databases, the error rate is lower. The idea is also applicable to regression.
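A small synthetic demonstration of the hypothesis, using scikit-learn's `FastICA` and `DecisionTreeClassifier`. The data, mixing matrix, and tree depth are assumptions made for illustration; nothing here comes from the paper's experiments.

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# The label depends on one latent independent component, but the observed
# features are a mixture of the components, so the axis-aligned splits of
# a decision tree fit the raw features awkwardly.
S = rng.laplace(size=(2000, 2))                # independent components
A = np.array([[1.0, 0.8], [0.3, 1.0]])         # mixing matrix (illustrative)
X = S @ A.T
y = (S[:, 0] > 0).astype(int)                  # label tied to one component

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

raw = DecisionTreeClassifier(max_depth=3, random_state=0).fit(Xtr, ytr)

# Preprocess with ICA: the components become axis-aligned again.
ica = FastICA(n_components=2, random_state=0)
Ztr, Zte = ica.fit_transform(Xtr), ica.transform(Xte)
pre = DecisionTreeClassifier(max_depth=3, random_state=0).fit(Ztr, ytr)

print(f"raw features: {raw.score(Xte, yte):.3f}")
print(f"ICA features: {pre.score(Zte, yte):.3f}")  # typically higher here
```

After ICA a single axis-aligned split can separate the classes, so the tree at the same depth is both simpler and more accurate on this construction.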

Abstract:

Diagnosis of community-acquired legionella pneumonia (CALP) is currently performed by means of laboratory techniques which may delay diagnosis by several hours. To determine whether artificial neural networks (ANNs) can discriminate CALP from non-legionella community-acquired pneumonia (NLCAP) and serve as a standard tool for clinicians, we prospectively studied 203 patients with community-acquired pneumonia (CAP) diagnosed by laboratory tests. Twenty-one clinical and analytical variables were recorded to train a neural net with two classes (LCAP or NLCAP). In this paper we deal with the problems of diagnosis, feature selection, ranking of the features as a function of their classification importance, and the design of a classifier under the criterion of maximizing the area under the ROC (receiver operating characteristic) curve, which gives a good trade-off between true positives and false positives. In order to guarantee the validity of the statistics, the train-validation-test databases were rotated by the jackknife technique, and a multi-start procedure was used to make the system insensitive to local maxima.
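The ROC-area criterion used above has a convenient probabilistic reading: it equals the Mann-Whitney statistic, the probability that a randomly chosen positive case scores higher than a randomly chosen negative one. A minimal sketch (the scores and labels are made up for illustration):

```python
import numpy as np

def roc_auc(scores, labels):
    """ROC area via the Mann-Whitney U statistic: the fraction of
    positive/negative pairs ranked correctly, ties counting half."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).sum() \
         + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

scores = np.array([0.9, 0.8, 0.7, 0.6, 0.55, 0.5, 0.4, 0.3])
labels = np.array([1,   1,   0,   1,   0,    1,   0,   0])
print(roc_auc(scores, labels))  # 0.8125
```

Maximizing this quantity directly, rather than raw accuracy, is what balances sensitivity against specificity in an imbalanced diagnostic setting.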

Abstract:

The Commission on Classification and Terminology and the Commission on Epidemiology of the International League Against Epilepsy (ILAE) have charged a Task Force to revise concepts, definition, and classification of status epilepticus (SE). The proposed new definition of SE is as follows: status epilepticus is a condition resulting either from the failure of the mechanisms responsible for seizure termination or from the initiation of mechanisms that lead to abnormally prolonged seizures (after time point t1). It is a condition that can have long-term consequences (after time point t2), including neuronal death, neuronal injury, and alteration of neuronal networks, depending on the type and duration of seizures. This definition is conceptual, with two operational dimensions: the first is the length of the seizure and the time point (t1) beyond which the seizure should be regarded as "continuous seizure activity." The second time point (t2) is the time of ongoing seizure activity after which there is a risk of long-term consequences. In the case of convulsive (tonic-clonic) SE, both time points (t1 at 5 min and t2 at 30 min) are based on animal experiments and clinical research. This evidence is incomplete, and there is furthermore considerable variation, so these time points should be considered the best estimates currently available. Data are not yet available for other forms of SE, but as knowledge and understanding increase, time points can be defined for specific forms of SE based on scientific evidence and incorporated into the definition, without changing the underlying concepts. A new diagnostic classification system of SE is proposed, which will provide a framework for clinical diagnosis, investigation, and therapeutic approaches for each patient. There are four axes: (1) semiology; (2) etiology; (3) electroencephalography (EEG) correlates; and (4) age.
Axis 1 (semiology) lists different forms of SE, divided into those with prominent motor symptoms, those without prominent motor symptoms, and currently indeterminate conditions (such as acute confusional states with epileptiform EEG patterns). Axis 2 (etiology) is divided into subcategories of known and unknown causes. Axis 3 (EEG correlates) adopts the latest consensus-panel recommendations to use the following descriptors for the EEG: name of pattern, morphology, location, time-related features, modulation, and effect of intervention. Finally, axis 4 divides age groups into neonatal, infancy, childhood, adolescence and adulthood, and elderly.

Abstract:

Although fetal anatomy can be adequately viewed in new multi-slice MR images, many critical limitations remain for quantitative data analysis. To this end, several research groups have recently developed advanced image processing methods, often denoted super-resolution (SR) techniques, to reconstruct a high-resolution (HR) motion-free volume from a set of clinical low-resolution (LR) images. The reconstruction is usually modeled as an inverse problem in which the regularization term plays a central role in the reconstruction quality. The literature has been drawn to Total Variation energies because of their edge-preserving ability, but only standard explicit steepest-descent gradient techniques have been applied for optimization. In a preliminary work, it was shown that novel fast convex optimization techniques can be successfully applied to design an efficient Total Variation optimization algorithm for the super-resolution problem. In this work, two major contributions are presented. First, we briefly review the Bayesian and variational dual formulations of current state-of-the-art methods dedicated to fetal MRI reconstruction. Second, we present an extensive quantitative evaluation of our previously introduced SR algorithm on both simulated fetal and real clinical data (with both normal and pathological subjects). Specifically, we study the robustness of regularization terms in the presence of residual registration errors, and we present a novel strategy for automatically selecting the weight of the regularization term relative to the data fidelity term. Our results show that our TV implementation is highly robust to motion artifacts and that it offers the best trade-off between speed and accuracy for fetal MRI recovery in comparison with state-of-the-art methods.
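The role of the TV regularizer in such an energy can be seen on a 1-D toy problem. The sketch below minimizes a denoising energy (data fidelity plus smoothed Total Variation) by explicit steepest descent; it is a stand-in under stated assumptions, not the paper's 3-D SR energy, and the paper uses faster convex primal-dual solvers rather than this simple descent.

```python
import numpy as np

def tv_denoise_1d(y, lam=0.5, eps=0.05, step=0.02, iters=2000):
    """Steepest-descent minimization of
        0.5 * ||x - y||^2 + lam * sum_i sqrt((x[i+1]-x[i])^2 + eps^2),
    i.e. a data-fidelity term plus a smoothed Total Variation term."""
    x = y.copy()
    for _ in range(iters):
        dx = np.diff(x)
        g = dx / np.sqrt(dx**2 + eps**2)   # d(smoothed TV)/d(dx)
        grad_tv = np.concatenate([[-g[0]], g[:-1] - g[1:], [g[-1]]])
        x -= step * ((x - y) + lam * grad_tv)
    return x

rng = np.random.default_rng(0)
clean = np.concatenate([np.zeros(50), np.ones(50)])   # piecewise-constant signal
noisy = clean + 0.2 * rng.standard_normal(100)
denoised = tv_denoise_1d(noisy)
print(np.mean((denoised - clean)**2) < np.mean((noisy - clean)**2))  # True
```

The edge-preserving property is visible here: the plateau noise is smoothed away while the jump at the midpoint survives almost intact, which is exactly why TV suits motion-free volume recovery.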

Abstract:

Adult and pediatric laryngotracheal stenoses (LTS) comprise a wide array of conditions that require precise preoperative assessment and classification to improve comparison of different therapeutic modalities in matched series of patients. This consensus paper of the European Laryngological Society proposes a five-step endoscopic airway assessment and a standardized reporting system to better differentiate fresh, incipient from mature, cicatricial LTS; simple one-level from complex multilevel LTS; and "healthy" from "severely morbid" patients. The proposed scoring system, which integrates all of these parameters, may be used to help define different groups of LTS patients, choose the best treatment modality for each individual patient, and assess distinct post-treatment outcomes accordingly.

Abstract:

INTRODUCTION: The decline of malaria and the scale-up of rapid diagnostic tests call for a revision of the Integrated Management of Childhood Illness (IMCI) guidelines. A new algorithm (ALMANACH) running on mobile technology was developed based on the latest evidence. The objective was to ensure that ALMANACH was safe while keeping a low rate of antibiotic prescription. METHODS: Consecutive children aged 2-59 months with acute illness were managed using ALMANACH (2 intervention facilities) or standard practice (2 control facilities) in Tanzania. Primary outcomes were the proportion of children cured at day 7 and the proportion who received antibiotics on day 0. RESULTS: 130/842 (15.4%) in the ALMANACH arm and 241/623 (38.7%) in the control arm were diagnosed with an infection in need of antibiotics, while 3.8% and 9.6% had malaria. 815/838 (97.3%; 96.1-98.4%) were cured at D7 using ALMANACH versus 573/623 (92.0%; 89.8-94.1%) using standard practice (p<0.001). Of the 23 children not cured at D7 using ALMANACH, 44% had skin problems, 30% pneumonia, 26% upper respiratory infection and 13% likely viral infection at D0. Secondary hospitalization occurred for one child using ALMANACH and for one child, who eventually died, using standard practice. At D0, antibiotics were prescribed to 15.4% (12.9-17.9%) using ALMANACH versus 84.3% (81.4-87.1%) using standard practice (p<0.001); 2.3% (1.3-3.3%) versus 3.2% (1.8-4.6%) received an antibiotic secondarily. CONCLUSION: Management of children using ALMANACH improved clinical outcomes and reduced antibiotic prescription by 80%. This was achieved through more accurate diagnoses and hence better identification of the children who did and did not need antibiotic treatment. Building the algorithm on mobile technology allows easy access and rapid updates of the decision chart. TRIAL REGISTRATION: Pan African Clinical Trials Registry PACTR201011000262218.

Abstract:

AIMS: c-Met is an emerging biomarker in pancreatic ductal adenocarcinoma (PDAC), but there is no consensus regarding the immunostaining scoring method for this marker. We aimed to assess the prognostic value of c-Met overexpression in resected PDAC and to elaborate a robust and reproducible scoring method for c-Met immunostaining in this setting. METHODS AND RESULTS: c-Met immunostaining was graded according to the validated MetMab score, a classic visual scale combining surface and intensity (SI score), or a simplified score (high c-Met: ≥20% of tumour cells with strong membranous staining) in stage I-II PDAC. A computer-assisted classification method (Aperio software) was developed. Clinicopathological parameters were correlated with disease-free survival (DFS) and overall survival (OS). One hundred and forty-nine patients were analysed retrospectively in a two-step process. Thirty-seven samples (whole slides) were analysed as a pre-run test. Reproducibility values were optimal with the simplified score (kappa = 0.773); high c-Met expression (7/37) was associated with shorter DFS [hazard ratio (HR) 3.456, P = 0.0036] and OS (HR 4.257, P = 0.0004). c-Met expression was concordant on whole slides and tissue microarrays in 87.9% of samples, and quantifiable with a specific computer-assisted algorithm. In the whole cohort (n = 131), patients with c-Met-high tumours (36/131) had significantly shorter DFS (9.3 versus 20.0 months, HR 2.165, P = 0.0005) and OS (18.2 versus 35.0 months, HR 1.832, P = 0.0098) in univariate and multivariate analyses. CONCLUSIONS: The simplified c-Met expression score is an independent prognostic marker in stage I-II PDAC that may help to identify patients with a high risk of tumour relapse and poor survival.

Abstract:

The main objective of the study is to form a framework that provides tools to recognise and classify items whose demand is not smooth but varies greatly in size and/or frequency. The framework is then combined with two other classification methods to form a three-dimensional classification model. Forecasting and inventory control of these abnormal demand items is difficult, so another objective of this study is to find out which statistical forecasting method is most suitable for abnormal demand items; the accuracy of the different methods is measured by comparing the forecasts to the actual demand. Moreover, the study also aims at finding proper alternatives for the inventory control of abnormal demand items. The study is quantitative and the methodology is a case study. The research methods consist of theory, numerical data, a current-state analysis, and testing of the framework in the case company. The results of the study show that the framework makes it possible to recognise and classify the abnormal demand items. It was also found that the inventory performance of abnormal demand items differs significantly from that of smoothly demanded items, which makes the recognition of abnormal demand items very important.
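One widely used scheme for recognising non-smooth demand of the kind described above is the Syntetos-Boylan classification by demand interval and size variability. The sketch below implements that standard scheme with its usual cut-off values as an illustration; it is not necessarily the framework developed in the thesis.

```python
import numpy as np

def classify_demand(series, adi_cut=1.32, cv2_cut=0.49):
    """Syntetos-Boylan demand classification:
    ADI  = average interval between nonzero demand periods,
    CV^2 = squared coefficient of variation of nonzero demand sizes.
    Cut-off values are the standard ones from the literature."""
    nz = series[series > 0]
    if len(nz) < 2:
        return "insufficient data"
    adi = len(series) / len(nz)
    cv2 = (nz.std() / nz.mean()) ** 2
    if adi <= adi_cut:
        return "smooth" if cv2 <= cv2_cut else "erratic"
    return "intermittent" if cv2 <= cv2_cut else "lumpy"

smooth = np.array([10, 9, 11, 10, 10, 9, 11, 10])
lumpy  = np.array([0, 0, 40, 0, 0, 0, 2, 0, 0, 90, 0, 0])
print(classify_demand(smooth), classify_demand(lumpy))  # smooth lumpy
```

Items falling into the intermittent and lumpy quadrants are exactly the "abnormal demand" items for which standard smoothing forecasts and inventory policies tend to break down.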

Abstract:

The fossil crown wasp Electrostephanus petiolatus Brues comb. rev. (Stephanidae, Electrostephaninae) is re-described from a single male preserved in middle Eocene Baltic amber. The holotype was lost or destroyed around the time of World War II, and subsequent interpretations of its identity have been based solely on the brief descriptive comments provided by Brues in his original account. The new specimen matches the original description and illustration provided by Brues in every detail, and we hereby consider them to be conspecific, selecting the specimen as a neotype for the purpose of stabilizing the nomenclature for this fossil species. This neotype exhibits a free first metasomal tergum and sternum, contrary to the assertion of previous workers who indicated these to be fused. Accordingly, this species does indeed belong to the genus Electrostephanus Brues rather than to Denaeostephanus Engel & Grimaldi (Stephaninae). Electrostephanus petiolatus is transferred to a new subgenus, Electrostephanodes n. subgen., based on its elongate pseudo-petiole and slender gaster, but may eventually warrant generic status as the phylogenetic placement of these fossil lineages continues to be clarified. A revised key to the Baltic amber crown wasps is provided.

Abstract:

Recent advances in machine learning methods increasingly enable the automatic construction of various types of computer-assisted tools that have been difficult or laborious to program by human experts. The tasks for which such tools are needed arise in many areas, here especially in the fields of bioinformatics and natural language processing. Machine learning methods may not work satisfactorily if they are not appropriately tailored to the task in question, but their learning performance can often be improved by taking advantage of deeper insight into the application domain or the learning problem at hand. This thesis considers the development of kernel-based learning algorithms that incorporate this kind of prior knowledge of the task in question in an advantageous way. Moreover, computationally efficient algorithms for training the learning machines for specific tasks are presented. In the context of kernel-based learning methods, prior knowledge is often incorporated by designing appropriate kernel functions. Another well-known way is to develop cost functions that fit the task under consideration. For disambiguation tasks in natural language, we develop kernel functions that take into account the positional information and the mutual similarities of words. It is shown that the use of this information significantly improves the disambiguation performance of the learning machine. Further, we design a new cost function that is better suited to the task of information retrieval, and to more general ranking problems, than the cost functions designed for regression and classification. We also consider other applications of the kernel-based learning algorithms, such as text categorization and pattern recognition in differential display. We develop computationally efficient algorithms for training the considered learning machines with the proposed kernel functions.
We also design a fast cross-validation algorithm for regularized least-squares type learning algorithms. Further, an efficient version of the regularized least-squares algorithm that can be used together with the new cost function for preference learning and ranking tasks is proposed. In summary, we demonstrate that the incorporation of prior knowledge is possible and beneficial, and that novel advanced kernels and cost functions can be used efficiently in learning algorithms.
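Fast cross-validation for regularized least-squares typically rests on a closed-form leave-one-out identity. The sketch below shows the standard hat-matrix shortcut for ridge/RLS regression as an illustration of the kind of speedup involved; the thesis's own algorithms and the toy data here are assumptions, but the identity itself is exact.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, lam = 200, 5, 1.0
X = rng.standard_normal((n, d))
y = X @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)

# Hat matrix of regularized least-squares: predictions = H @ y.
H = X @ np.linalg.solve(X.T @ X + lam * np.eye(d), X.T)
resid = y - H @ y

# Closed-form leave-one-out residuals: e_i = r_i / (1 - H_ii),
# i.e. n-fold cross-validation for the price of a single fit.
loo_fast = resid / (1 - np.diag(H))

# Naive leave-one-out for comparison: refit the model n times.
loo_naive = np.empty(n)
for i in range(n):
    m = np.ones(n, bool); m[i] = False
    w = np.linalg.solve(X[m].T @ X[m] + lam * np.eye(d), X[m].T @ y[m])
    loo_naive[i] = y[i] - X[i] @ w

print(np.allclose(loo_fast, loo_naive))  # True
```

The naive loop costs n model fits; the shortcut reuses one factorization, which is what makes exhaustive cross-validation practical for kernel RLS learners.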