791 results for Rule-Based Classification
Abstract:
The diagnosis of idiopathic Parkinson's disease (IPD) is entirely clinical. The fact that neuronal damage begins 5-10 years before the occurrence of sub-clinical signs underlines the importance of preclinical diagnosis. A new approach for in-vivo pathophysiological assessment of IPD-related neurodegeneration was implemented based on recently developed neuroimaging methods. It is based on non-invasive magnetic resonance data sensitive to brain tissue property changes that precede macroscopic atrophy in the early stages of IPD. This research aims to determine the brain tissue property changes induced by neurodegeneration that can be linked to clinical phenotypes, which will allow us to create a predictive model for early diagnosis in IPD. We hypothesized that the degree of disease progression in IPD patients would have a differential and specific impact on the brain tissue properties used to create a predictive model of motor and non-motor impairment in IPD. We studied the potential of in-vivo quantitative imaging sensitive to neurodegeneration-related brain tissue characteristics to detect changes in patients with IPD. We carried out methodological work within the well-established SPM8 framework to estimate the sensitivity of tissue probability maps for automated tissue classification for detection of early IPD. We performed whole-brain multi-parameter mapping at high resolution, followed by voxel-based morphometric (VBM) analysis and voxel-based quantification (VBQ) comparing healthy subjects to IPD patients. We found a non-significant trend toward tissue property changes in the olfactory bulb area in the MT and R1 parameters (p < 0.001). Compared with the IPD patients, the healthy group presented bilaterally higher MT and R1 intensity in this specific functional region. These results did not correlate with age, severity or duration of disease. We failed to demonstrate any changes with the R2* parameter. We interpreted our findings as demyelination of the olfactory tract, which manifests clinically as anosmia. However, the lack of correlation with duration or severity complicates its implications for the creation of a predictive model of impairment in IPD.
Abstract:
OBJECTIVE To validate nursing-language terms specific to physical-motor rehabilitation and map them to the terms of ICNP® 2.0. METHOD A methodological study based on document analysis, with collection and analysis of terms from 1,425 records. RESULTS 825 terms were obtained after the methodological procedure, of which 226 had not yet been included in the ICNP® 2.0. These terms were distributed as follows: 47 on the Focus axis; 15 on the Judgment axis; 31 on the Action axis; 25 on the Location axis; 102 on the Means axis; three on the Time axis; and three on the Client axis. All terms not present in the ICNP® were validated by experts, reaching an agreement index ≥0.80. CONCLUSION The ICNP® is applicable to, and used in, nursing care for physical-motor rehabilitation.
Abstract:
Studying the geographic variation of phenotypic traits can provide key information about the potential adaptive function of alternative phenotypes. Gloger's rule posits that animals should be dark- vs. light-colored in warm and humid vs. cold and dry habitats, respectively. The rule is based on the assumption that melanin pigments and/or dark coloration confer selective advantages in warm and humid regions. This rule may not apply, however, if genes for color are acting on other traits conferring fitness benefits in specific climes. Covariation between coloration and climate will therefore depend on the relative importance of coloration or melanin pigments and of the genetically correlated physiological and behavioral processes that enable an animal to deal with climatic factors. The Barn Owl (Tyto alba) displays three melanin-based plumage traits, and we tested whether geographic variation in these traits at the scale of the North American continent supported Gloger's rule. An analysis of variation of pheomelanin-based reddish coloration and of the number and size of black feather spots in 1,369 museum skin specimens showed that geographic variation was correlated with ambient temperature and precipitation. Owls were darker red in color and displayed larger but fewer black feather spots in colder regions. Owls also exhibited more and larger black spots in regions where the climate was dry in winter. We propose that the associations between pigmentation and ambient temperature are of opposite sign for reddish coloration and spot size vs. the number of spots because the selection exerted by climate (or a correlated variable) is plumage-trait-specific, or because plumage traits are genetically correlated with different adaptations.
Abstract:
The potential of type-2 fuzzy sets for managing high levels of uncertainty in the subjective knowledge of experts or in numerical information has, in recent years, been explored mainly in control and pattern classification systems. One of the main challenges in designing a type-2 fuzzy logic system is how to estimate the parameters of the type-2 fuzzy membership function (T2MF) and the Footprint of Uncertainty (FOU) from imperfect and noisy datasets. This paper presents an automatic approach for learning and tuning Gaussian interval type-2 membership functions (IT2MFs), with application to multi-dimensional pattern classification problems. T2MFs and their FOUs are tuned according to the uncertainties in the training dataset by a combination of genetic algorithm (GA) and cross-validation techniques. In our GA-based approach, the structure of the chromosome has fewer genes than in other GA methods, and chromosome initialization is more precise. The proposed approach addresses the application of the interval type-2 fuzzy logic system (IT2FLS) to the problem of nodule classification in a lung Computer Aided Detection (CAD) system. The designed IT2FLS is compared with its type-1 fuzzy logic system (T1FLS) counterpart. The results demonstrate that the IT2FLS outperforms the T1FLS by more than 30% in terms of classification accuracy.
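As a concrete illustration of the kind of membership function being tuned, the sketch below implements one common parameterization of a Gaussian interval type-2 membership function, with an uncertain mean in [m1, m2] and a fixed standard deviation; the area between the lower and upper curves is the footprint of uncertainty. The function name and parameter values are illustrative and are not taken from the paper.

```python
import numpy as np

def gaussian_it2mf(x, m1, m2, sigma):
    """Gaussian interval type-2 MF with uncertain mean in [m1, m2].

    Returns (lower, upper) membership values; the gap between the two
    curves is the footprint of uncertainty (FOU).
    """
    x = np.asarray(x, dtype=float)
    # Upper MF: 1 inside [m1, m2], Gaussian tails outside.
    upper = np.where(
        x < m1, np.exp(-0.5 * ((x - m1) / sigma) ** 2),
        np.where(x > m2, np.exp(-0.5 * ((x - m2) / sigma) ** 2), 1.0),
    )
    # Lower MF: the smaller of the two Gaussians centred at m1 and m2.
    lower = np.minimum(
        np.exp(-0.5 * ((x - m1) / sigma) ** 2),
        np.exp(-0.5 * ((x - m2) / sigma) ** 2),
    )
    return lower, upper

# Example: membership interval of a single feature value under one fuzzy set.
lo, up = gaussian_it2mf(np.array([0.3]), m1=0.2, m2=0.4, sigma=0.1)
print(lo, up)   # lower <= upper; the gap reflects the FOU
```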
Abstract:
OBJECTIVES: Jean Cruveilhier has always been described as a pioneer in pathological anatomy. Almost nothing has been reported concerning his exceptional methodology, which combined pre-mortem clinical description and syndromic classification of neurological and neurosurgical diseases with meticulous post-mortem dissections. Cruveilhier's methodology heralded the birth of the anatomoclinical method built up by Jean-Martin Charcot and the French neurological school during the 19th century. The aim of our work is to extract the quintessence of Cruveilhier's contributions to skull base pathology through his cogent clinical descriptions coupled with exceptional lithographs of anterior skull base, suprasellar and cerebello-pontine angle tumors. METHODS: We reviewed the masterwork of Jean Cruveilhier on pathological anatomy and selected the chapters dedicated to central nervous system pathologies, mainly skull base diseases. A systematic review was performed on Pubmed/Medline and Google Scholar using the keywords "Jean Cruveilhier", "Skull base pathology" and "Anatomoclinical method". RESULTS: Among his descriptions, Cruveilhier dedicated large chapters to neurosurgical diseases including brain tumors, cerebrovascular pathologies, malformations of the central nervous system, hydrocephalus, brain infections and spinal cord compressions. CONCLUSION: This work emphasizes the role of Jean Cruveilhier in the birth of the anatomoclinical method, particularly in neuroscience, during a 19th century rich in epistemological evolution toward evidence-based medicine, through the prism of Cruveilhier's contribution to skull base pathology.
Abstract:
Most research on single machine scheduling has assumed the linearity of job holding costs, which is arguably not appropriate in some applications. This motivates our study of a model for scheduling $n$ classes of stochastic jobs on a single machine, with the objective of minimizing the total expected holding cost (discounted or undiscounted). We allow general holding cost rates that are separable, nondecreasing and convex on the number of jobs in each class. We formulate the problem as a linear program over a certain greedoid polytope, and establish that it is solved optimally by a dynamic (priority) index rule, which extends the classical Smith's rule (1956) for the linear case. Unlike Smith's indices, defined for each class, our new indices are defined for each extended class, consisting of a class and a number of jobs in that class, and yield an optimal dynamic index rule: work at each time on a job whose current extended class has larger index. We further show that the indices possess a decomposition property, as they are computed separately for each class, and interpret them in economic terms as marginal expected cost rate reductions per unit of expected processing time. We establish the results by deploying a methodology recently introduced by us [J. Niño-Mora (1999). "Restless bandits, partial conservation laws, and indexability," forthcoming in Advances in Applied Probability Vol. 33 No. 1, 2001], based on the satisfaction by performance measures of partial conservation laws (PCL) (which extend the generalized conservation laws of Bertsimas and Niño-Mora (1996)): PCL provide a polyhedral framework for establishing the optimality of index policies with special structure in scheduling problems under admissible objectives, which we apply to the model of concern.
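For orientation, the sketch below implements only the classical Smith's rule that the paper's dynamic index rule extends: with linear holding costs, job classes are served in decreasing order of holding cost rate divided by expected processing time. The job names and figures are made up for illustration; the paper's extended-class indices for convex costs are not reproduced here.

```python
from typing import List, Tuple

def smith_rule_order(jobs: List[Tuple[str, float, float]]) -> List[str]:
    """Order jobs by the classical Smith (1956) index for linear holding costs.

    Each job is (name, holding_cost_rate, expected_processing_time); the index
    is cost rate per unit of expected processing time, served highest first.
    """
    return [name for name, cost, proc in
            sorted(jobs, key=lambda j: j[1] / j[2], reverse=True)]

# Example: three job classes with linear holding costs.
jobs = [("A", 3.0, 2.0), ("B", 5.0, 4.0), ("C", 1.0, 0.5)]
print(smith_rule_order(jobs))  # ['C', 'A', 'B']
```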
Abstract:
Recently, kernel-based machine learning methods have gained great popularity in many data analysis and data mining fields: pattern recognition, biocomputing, speech and vision, engineering, remote sensing, etc. The paper describes the use of kernel methods to approach the processing of large datasets from environmental monitoring networks. Several typical problems of the environmental sciences and their solutions provided by kernel-based methods are considered: classification of categorical data (soil type classification), mapping of continuous environmental and pollution information (pollution of soil by radionuclides), and mapping with auxiliary information (climatic data from the Aral Sea region). Promising developments, such as automatic emergency hot-spot detection and monitoring network optimization, are discussed as well.
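As one hedged illustration of the mapping task mentioned above (interpolating a continuous pollution field from scattered monitoring stations), the sketch below fits an RBF-kernel support vector regression to synthetic station data and predicts the field on a regular grid. The data, kernel choice and hyperparameters are stand-ins, not the values used in the paper.

```python
import numpy as np
from sklearn.svm import SVR

# Synthetic stand-in for monitoring-network data: 2-D station coordinates and
# a measured concentration at each station (real data would come from the network).
rng = np.random.default_rng(0)
coords = rng.uniform(0, 100, size=(200, 2))           # station locations (km)
conc = np.exp(-((coords - 50) ** 2).sum(1) / 500) + 0.05 * rng.normal(size=200)

# RBF-kernel support vector regression as one kernel-based mapping method.
model = SVR(kernel="rbf", C=10.0, gamma=0.01, epsilon=0.01).fit(coords, conc)

# Predict the field on a regular grid to produce a pollution map.
gx, gy = np.meshgrid(np.linspace(0, 100, 50), np.linspace(0, 100, 50))
grid = np.column_stack([gx.ravel(), gy.ravel()])
pollution_map = model.predict(grid).reshape(gx.shape)
print(pollution_map.shape)   # (50, 50)
```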
Abstract:
This paper presents 3-D brain tissue classification schemes using three recent, promising energy minimization methods for Markov random fields: graph cuts, loopy belief propagation and tree-reweighted message passing. The classification is performed using the well known finite Gaussian mixture Markov random field model. Results from the above methods are compared with the widely used iterated conditional modes algorithm. The evaluation is performed on a dataset containing simulated T1-weighted MR brain volumes with varying noise and intensity non-uniformities. The comparisons are performed in terms of energies as well as based on ground truth segmentations, using various quantitative metrics.
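For context, the sketch below shows the baseline method the abstract compares against: an iterated-conditional-modes update for a Gaussian-mixture MRF with a Potts smoothness prior, reduced to a 2-D slice for brevity. Function names, the toy image and all parameter values are illustrative, not the study's 3-D volumes or settings.

```python
import numpy as np

def icm_segment(image, means, variances, beta=1.0, n_iter=10):
    """Iterated conditional modes for a Gaussian-mixture MRF on a 2-D slice.

    Each pixel's label minimizes a Gaussian data term plus a Potts smoothness
    term counting 4-neighbours that carry a different label (weight beta).
    """
    K = len(means)
    # Gaussian negative log-likelihood per class (data term), shape (H, W, K).
    data_cost = np.stack(
        [0.5 * np.log(2 * np.pi * variances[k])
         + (image - means[k]) ** 2 / (2 * variances[k]) for k in range(K)],
        axis=-1)
    labels = data_cost.argmin(-1)               # maximum-likelihood initialisation
    shifts = [(0, 1), (0, -1), (1, 1), (1, -1)]  # (axis, shift) for 4-neighbourhood
    for _ in range(n_iter):
        padded = np.pad(labels, 1, mode="edge")
        # Potts penalty: for each class k, count neighbours whose label differs from k.
        potts = np.stack(
            [sum(np.roll(padded, s, axis=a)[1:-1, 1:-1] != k for a, s in shifts)
             for k in range(K)], axis=-1)
        labels = (data_cost + beta * potts).argmin(-1)
    return labels

# Toy example: a noisy two-region image segmented into 2 tissue classes.
img = np.r_[np.zeros((8, 16)), np.ones((8, 16))] + 0.3 * np.random.randn(16, 16)
print(icm_segment(img, means=[0.0, 1.0], variances=[0.09, 0.09]).shape)
```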
Abstract:
The emphasis on integrated care implies new incentives that promote coordination between levels of care. Considering a population as a whole, the resource allocation system has to adapt to this environment. This research aims to design a model that allows for morbidity-related prospective and concurrent capitation payment. The model can be applied in publicly funded health systems and managed competition settings. Methods: We analyze the application of hybrid risk adjustment versus either prospective or concurrent risk adjustment formulae in the context of funding total health expenditures for the population of an integrated healthcare delivery organization in Catalonia during the years 2004 and 2005. Results: The hybrid model reimburses integrated care organizations while avoiding excessive risk transfer and maximizing incentives for efficiency in provision. At the same time, it eliminates incentives for risk selection for a specific set of high-risk individuals through the use of concurrent reimbursement, in order to assure a proper classification of patients. Conclusion: Prospective risk adjustment is used to transfer the financial risk to the health provider and therefore provide incentives for efficiency. Within the context of a National Health System, such transfer of financial risk is illusory, and the government has to cover the deficits. Hybrid risk adjustment is useful to provide the right combination of incentives for efficiency and an appropriate level of risk transfer for integrated care organizations.
Abstract:
This paper considers a general and informationally efficient approach to determine the optimal access pricing rule for interconnected networks. It shows that there exists a simple rule that achieves the Ramsey outcome as the unique equilibrium when networks compete in linear prices without network-based price discrimination. The approach is informationally efficient in the sense that the regulator is required to know only the marginal cost structure, i.e. the marginal cost of making and terminating a call. The approach is general in that access prices can depend not only on the marginal costs but also on the retail prices, which can be observed by consumers and therefore by the regulator as well. In particular, I consider the set of linear access pricing rules which includes any fixed access price, the Efficient Component Pricing Rule (ECPR) and the Modified ECPR as special cases. I show that in this set, there is a unique rule that implements the Ramsey outcome as the unique equilibrium independently of the underlying demand conditions.
Abstract:
Given $n$ independent replicates of a jointly distributed pair $(X,Y)\in {\cal R}^d \times {\cal R}$, we wish to select from a fixed sequence of model classes ${\cal F}_1, {\cal F}_2, \ldots$ a deterministic prediction rule $f: {\cal R}^d \to {\cal R}$ whose risk is small. We investigate the possibility of empirically assessing the {\em complexity} of each model class, that is, the actual difficulty of the estimation problem within each class. The estimated complexities are in turn used to define an adaptive model selection procedure, which is based on complexity penalized empirical risk. The available data are divided into two parts. The first is used to form an empirical cover of each model class, and the second is used to select a candidate rule from each cover based on empirical risk. The covering radii are determined empirically to optimize a tight upper bound on the estimation error. An estimate is chosen from the list of candidates in order to minimize the sum of class complexity and empirical risk. A distinguishing feature of the approach is that the complexity of each model class is assessed empirically, based on the size of its empirical cover. Finite sample performance bounds are established for the estimates, and these bounds are applied to several non-parametric estimation problems. The estimates are shown to achieve a favorable tradeoff between approximation and estimation error, and to perform as well as if the distribution-dependent complexities of the model classes were known beforehand. In addition, it is shown that the estimate can be consistent, and even possess near optimal rates of convergence, when each model class has an infinite VC or pseudo dimension. For regression estimation with squared loss we modify our estimate to achieve a faster rate of convergence.
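Schematically, the selection step described above can be written as follows; the notation is illustrative rather than taken verbatim from the paper: $\hat{f}_k$ is the candidate chosen by empirical risk from the empirical cover of ${\cal F}_k$, $\widehat{R}(\cdot)$ is the empirical risk on the second part of the data, and $\widehat{C}_k$ is the complexity estimated from the size of that cover.

```latex
\[
  \hat{k} \;=\; \arg\min_{k \ge 1}
    \Bigl\{ \widehat{R}\bigl(\hat{f}_k\bigr) + \widehat{C}_k \Bigr\},
  \qquad
  \hat{f} \;=\; \hat{f}_{\hat{k}} .
\]
```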
Abstract:
To be diagnostically useful, structural MRI must reliably distinguish Alzheimer's disease (AD) from normal aging in individual scans. Recent advances in statistical learning theory have led to the application of support vector machines to MRI for detection of a variety of disease states. The aims of this study were to assess how successfully support vector machines assigned individual diagnoses and to determine whether data-sets combined from multiple scanners and different centres could be used to obtain effective classification of scans. We used linear support vector machines to classify the grey matter segment of T1-weighted MR scans from pathologically proven AD patients and cognitively normal elderly individuals obtained from two centres with different scanning equipment. Because the clinical diagnosis of mild AD is difficult we also tested the ability of support vector machines to differentiate control scans from patients without post-mortem confirmation. Finally we sought to use these methods to differentiate scans between patients suffering from AD from those with frontotemporal lobar degeneration. Up to 96% of pathologically verified AD patients were correctly classified using whole brain images. Data from different centres were successfully combined achieving comparable results from the separate analyses. Importantly, data from one centre could be used to train a support vector machine to accurately differentiate AD and normal ageing scans obtained from another centre with different subjects and different scanner equipment. Patients with mild, clinically probable AD and age/sex matched controls were correctly separated in 89% of cases which is compatible with published diagnosis rates in the best clinical centres. This method correctly assigned 89% of patients with post-mortem confirmed diagnosis of either AD or frontotemporal lobar degeneration to their respective group. Our study leads to three conclusions: Firstly, support vector machines successfully separate patients with AD from healthy aging subjects. Secondly, they perform well in the differential diagnosis of two different forms of dementia. Thirdly, the method is robust and can be generalized across different centres. This suggests an important role for computer based diagnostic image analysis for clinical practice.
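As a rough illustration of the classification set-up described above (a linear support vector machine applied to grey-matter features from segmented T1 scans), the sketch below uses scikit-learn with a synthetic stand-in for the imaging data; the subject counts, feature dimension and cross-validation protocol are assumptions, not the study's.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

# Hypothetical stand-in for the study data: one row per subject, columns are
# grey-matter voxel values from the segmented scan; label 1 = AD, 0 = control.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 5000))          # 60 subjects, 5,000 voxel features
y = np.repeat([0, 1], 30)

# Linear SVM as in the abstract; scaling and cross-validation are one common
# way to estimate classification accuracy, not necessarily the authors' protocol.
clf = make_pipeline(StandardScaler(), LinearSVC(C=1.0, max_iter=5000))
print(cross_val_score(clf, X, y, cv=5).mean())
```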
Abstract:
INTRODUCTION: A clinical decision rule to improve the accuracy of a diagnosis of influenza could help clinicians avoid unnecessary use of diagnostic tests and treatments. Our objective was to develop and validate a simple clinical decision rule for diagnosis of influenza. METHODS: We combined data from 2 studies of influenza diagnosis in adult outpatients with suspected influenza: one set in California and one in Switzerland. Patients in both studies underwent a structured history and physical examination and had a reference standard test for influenza (polymerase chain reaction or culture). We randomly divided the dataset into derivation and validation groups and then evaluated simple heuristics and decision rules from previous studies and 3 rules based on our own multivariate analysis. Cutpoints for stratification of risk groups in each model were determined using the derivation group before evaluating them in the validation group. For each decision rule, the positive predictive value and likelihood ratio for influenza in low-, moderate-, and high-risk groups, and the percentage of patients allocated to each risk group, were reported. RESULTS: The simple heuristics (fever and cough; fever, cough, and acute onset) were helpful when positive but not when negative. The most useful and accurate clinical rule assigned 2 points for fever plus cough, 2 points for myalgias, and 1 point each for duration <48 hours and chills or sweats. The risk of influenza was 8% for 0 to 2 points, 30% for 3 points, and 59% for 4 to 6 points; the rule performed similarly in derivation and validation groups. Approximately two-thirds of patients fell into the low- or high-risk group and would not require further diagnostic testing. CONCLUSION: A simple, valid clinical rule can be used to guide point-of-care testing and empiric therapy for patients with suspected influenza.
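The point values and risk strata reported in the abstract translate directly into a small scoring function; the sketch below is purely illustrative of that rule (not validated clinical software), using the cut-points quoted above.

```python
def influenza_score(fever_and_cough, myalgias, onset_under_48h, chills_or_sweats):
    """Point score from the rule described above (0-6 points)."""
    return (2 * fever_and_cough + 2 * myalgias
            + onset_under_48h + chills_or_sweats)

def risk_group(score):
    """Map the score to the risk strata reported in the abstract."""
    if score <= 2:
        return "low (~8% risk of influenza)"
    if score == 3:
        return "moderate (~30% risk of influenza)"
    return "high (~59% risk of influenza)"

# Example: fever plus cough and acute onset, no myalgias, no chills/sweats -> 3 points.
s = influenza_score(True, False, True, False)
print(s, risk_group(s))   # 3 moderate (~30% risk of influenza)
```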
Abstract:
The classical binary classification problem is investigated when it is known in advance that the posterior probability function (or regression function) belongs to some class of functions. We introduce and analyze a method which effectively exploits this knowledge. The method is based on minimizing the empirical risk over a carefully selected "skeleton" of the class of regression functions. The skeleton is a covering of the class based on a data-dependent metric, especially fitted for classification. A new scale-sensitive dimension is introduced which is more useful for the studied classification problem than other, previously defined, dimension measures. This fact is demonstrated by performance bounds for the skeleton estimate in terms of the new dimension.
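In schematic form (with illustrative notation, not the paper's), the skeleton estimate minimizes the empirical classification risk over the finite, data-dependent cover $S_n$ of the regression-function class, each $f \in S_n$ inducing the usual plug-in classifier that thresholds $f$ at $1/2$:

```latex
\[
  \hat{f}_n \;=\; \arg\min_{f \in S_n}\,
    \frac{1}{n} \sum_{i=1}^{n} \mathbf{1}\{\, g_f(X_i) \neq Y_i \,\},
  \qquad
  g_f(x) \;=\; \mathbf{1}\{\, f(x) \ge 1/2 \,\}.
\]
```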
Abstract:
The principal objective of knot theory is to provide a simple way of classifying and ordering all knot types. Here, we propose a natural classification of knots based on their intrinsic position in the knot space that is defined by the set of knots to which a given knot can be converted by individual intersegmental passages. In addition, we characterize various knots using a set of simple quantum numbers that can be determined upon inspection of the minimal crossing diagram of a knot. These numbers include: crossing number; average three-dimensional writhe; number of topological domains; and the average relaxation value