982 results for Classification Rules


Relevance: 30.00%

Abstract:

We conducted a qualitative, multicenter study using a focus group design to explore the lived experiences of persons with any kind of primary sleep disorder with regard to functioning and contextual factors, using six open-ended questions related to the International Classification of Functioning, Disability and Health (ICF) components. We classified the results using the ICF as a frame of reference. We identified the meaningful concepts within the transcribed data and then linked them to ICF categories according to established linking rules. The six focus groups with 27 participants yielded a total of 6986 relevant concepts, which were linked to a total of 168 different second-level ICF categories. From the patient perspective, the ICF components (1) Body Functions, (2) Activities & Participation, and (3) Environmental Factors were equally represented, while (4) Body Structures appeared markedly less frequently. Out of the total number of concepts, 1843 concepts (26%) were assigned to the ICF component Personal Factors, which is not yet classified but could indicate important aspects of resource management and strategy development of those who have a sleep disorder. Therefore, treatment of patients with sleep disorders must not be limited to anatomical and (patho-)physiological changes, but should also consider a more comprehensive view that includes patients' demands, strategies and resources in daily life and the contextual circumstances surrounding the individual.

Relevance: 30.00%

Abstract:

Population coding is widely regarded as a key mechanism for achieving reliable behavioral decisions. We previously introduced reinforcement learning for population-based decision making by spiking neurons. Here we generalize population reinforcement learning to spike-based plasticity rules that take account of the postsynaptic neural code. We consider spike/no-spike, spike count and spike latency codes. The multi-valued and continuous-valued features in the postsynaptic code allow for a generalization of binary decision making to multi-valued decision making and continuous-valued action selection. We show that code-specific learning rules speed up learning for both discrete classification and continuous regression tasks. The suggested learning rules also speed up learning with increasing population size, in contrast to standard reinforcement learning rules. Continuous action selection is further shown to explain realistic learning speeds in the Morris water maze. Finally, we introduce the concept of action perturbation, as opposed to the classical weight or node perturbation, as an exploration mechanism underlying reinforcement learning. Exploration in the action space greatly increases the speed of learning compared to exploration in the neuron or weight space.
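
The action-perturbation idea can be made concrete in a few lines. Below is a minimal sketch, assuming a linear population readout and a continuous-valued action; the constants and the quadratic reward are illustrative assumptions, not taken from the paper. The key point is that the exploration noise is injected into the action itself rather than into weights or individual neurons.

```python
import numpy as np

rng = np.random.default_rng(0)
POP_SIZE, ETA, SIGMA = 50, 0.05, 0.2   # illustrative constants

w = rng.normal(0, 0.1, POP_SIZE)       # readout weights over the population

def reward(action, target=1.0):
    return -(action - target) ** 2     # toy continuous-valued task

for trial in range(500):
    x = rng.normal(0, 1, POP_SIZE)     # population activity (abstracted)
    a_nominal = w @ x                  # nominal action from the readout
    # Perturb the ACTION, not the weights or neurons:
    a_explore = a_nominal + SIGMA * rng.normal()
    # The reward difference attributes credit to the perturbation direction:
    delta_r = reward(a_explore) - reward(a_nominal)
    w += ETA * delta_r * (a_explore - a_nominal) * x   # gradient-like update
```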

Relevance: 30.00%

Abstract:

The classification of neuroendocrine neoplasms (NENs) has been evolving steadily over the last decades. Important prognostic factors of NENs are their proliferative activity and the presence or absence of necrosis. These factors are reported in NENs of all body sites; however, the terminology as well as the exact rules of classification differ according to the location of the primary tumor. Only in gastroenteropancreatic (GEP) NENs is a formal grading performed. This grading is based on proliferation as assessed by the mitotic count and/or Ki-67 proliferation index. In the lung, NEN grading is an intrinsic part of the tumor designation, with typical carcinoids corresponding to neuroendocrine tumor (NET) G1 and atypical carcinoids to NET G2; however, the presence or absence of necrotic foci is as important as proliferation for the differentiation between typical and atypical carcinoids. Immunohistochemical markers can be used to demonstrate neuroendocrine differentiation. Synaptophysin and chromogranin A are, to date, the most reliable and most commonly used for this purpose. Beyond this, other markers can be helpful, for example in the situation of a NET metastasis of unknown primary, where a hormonal profile or a panel of transcription factors can give hints to the primary site. Many immunohistochemical markers have been shown to correlate with prognosis but are not used in clinical practice, for example cytokeratin 19 and KIT expression in pancreatic NETs. There is no predictive biomarker in use, with the exception of somatostatin receptor (SSTR) 2 expression for predicting the amenability of a tumor to in vivo SSTR targeting for imaging or therapy.
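
To make the grading logic concrete, here is a hedged sketch of a GEP-NEN grading rule using commonly cited WHO-style cut-offs (Ki-67 index in percent, mitotic count per 10 high-power fields). The exact thresholds are illustrative assumptions, not quoted from this abstract; when the two criteria disagree, the higher of the resulting grades is taken.

```python
# Illustrative WHO-style cut-offs; assumptions, not quoted from the abstract.
def grade_from_ki67(ki67_percent: float) -> int:
    if ki67_percent < 3:
        return 1
    return 2 if ki67_percent <= 20 else 3

def grade_from_mitoses(mitoses_per_10hpf: float) -> int:
    if mitoses_per_10hpf < 2:
        return 1
    return 2 if mitoses_per_10hpf <= 20 else 3

def gep_nen_grade(ki67_percent: float, mitoses_per_10hpf: float) -> str:
    # When the two criteria disagree, the higher grade applies.
    g = max(grade_from_ki67(ki67_percent), grade_from_mitoses(mitoses_per_10hpf))
    return f"G{g}"

print(gep_nen_grade(ki67_percent=2.0, mitoses_per_10hpf=1))    # G1
print(gep_nen_grade(ki67_percent=15.0, mitoses_per_10hpf=25))  # G3
```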

Relevance: 30.00%

Abstract:

To deliver sample estimates provided with the necessary probability foundation to permit generalization from the sample data subset to the whole target population being sampled, probability sampling strategies are required to satisfy three necessary but not sufficient conditions: (i) all inclusion probabilities must be greater than zero in the target population to be sampled; if some sampling units have an inclusion probability of zero, then a map accuracy assessment does not represent the entire target region depicted in the map to be assessed; (ii) the inclusion probabilities must be (a) knowable for nonsampled units and (b) known for those units selected in the sample, since the inclusion probability determines the weight attached to each sampling unit in the accuracy estimation formulas; if the inclusion probabilities are unknown, so are the estimation weights. This original work presents a novel (to the best of these authors' knowledge, the first) probability sampling protocol for quality assessment and comparison of thematic maps generated from spaceborne/airborne Very High Resolution (VHR) images, where: (I) an original Categorical Variable Pair Similarity Index (CVPSI, proposed in two different formulations) is estimated as a fuzzy degree of match between a reference and a test semantic vocabulary, which may not coincide, and (II) both symbolic pixel-based thematic quality indicators (TQIs) and sub-symbolic object-based spatial quality indicators (SQIs) are estimated with a degree of uncertainty in measurement, in compliance with the well-known Quality Assurance Framework for Earth Observation (QA4EO) guidelines. Like a decision tree, any protocol (guidelines for best practice) comprises a set of rules, equivalent to structural knowledge, and an order of presentation of the rule set, known as procedural knowledge. The combination of these two levels of knowledge makes an original protocol worth more than the sum of its parts. The several degrees of novelty of the proposed probability sampling protocol are highlighted in this paper, at the levels of understanding of both structural and procedural knowledge, in comparison with related multi-disciplinary works selected from the existing literature. In the experimental session the proposed protocol is tested for accuracy validation of preliminary classification maps automatically generated by the Satellite Image Automatic Mapper™ (SIAM™) software product from two WorldView-2 images and one QuickBird-2 image provided by DigitalGlobe for testing purposes. In these experiments, collected TQIs and SQIs are statistically valid, statistically significant, consistent across maps and in agreement with theoretical expectations, visual (qualitative) evidence and quantitative quality indexes of operativeness (OQIs) claimed for SIAM™ by related papers. As a subsidiary conclusion, the statistically consistent and statistically significant accuracy validation of the SIAM™ pre-classification maps proposed in this contribution, together with OQIs claimed for SIAM™ by related works, makes the operational (automatic, accurate, near real-time, robust, scalable) SIAM™ software product eligible for opening up new inter-disciplinary research and market opportunities in accordance with the visionary goal of the Global Earth Observation System of Systems (GEOSS) initiative and the QA4EO international guidelines.
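
Condition (ii) matters because the inclusion probabilities are exactly what weight each sampled unit in the accuracy estimators. Below is a minimal sketch of that dependence, assuming a simple inverse-inclusion-probability (Horvitz-Thompson style) estimate of overall thematic accuracy; the numbers are made up for illustration and this is not the paper's full protocol.

```python
import numpy as np

# Each sampled unit i has a known, strictly positive inclusion probability
# pi_i (condition i) and an agreement indicator (1 = map label matches the
# reference label). Values below are invented for illustration.
incl_prob = np.array([0.10, 0.10, 0.02, 0.05])  # pi_i > 0 for every unit
correct   = np.array([1,    0,    1,    1   ])

# The estimation weight of each unit is the inverse of its inclusion
# probability; unknown pi_i would make these weights unknown too.
weights = 1.0 / incl_prob
overall_accuracy = np.sum(weights * correct) / np.sum(weights)
print(f"Estimated overall accuracy: {overall_accuracy:.3f}")
```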

Relevance: 30.00%

Abstract:

This paper examines empirically the impacts of sharing rules of origin (RoOs) with other ASEAN+1 free trade agreements (FTAs) on ASEAN-Korea FTA/ASEAN-China FTA utilization in Thai exports in 2011. Our careful empirical analysis suggests that the harmonization of RoOs across FTAs plays some role in reducing the costs yielded by the spaghetti bowl phenomenon. In particular, harmonization to "change-in-tariff classification (CTC) or real value-added content (RVC)" plays a relatively positive role in not seriously discouraging firms' use of multiple FTA schemes. On the other hand, harmonization to CTC or CTC&RVC hinders firms from using those schemes.

Relevance: 30.00%

Abstract:

Accompanied by "Revision no. 1- " ( v.) Published: [Springfield, 1939- ]

Relevance: 30.00%

Abstract:

"Effective July 28, 1980."--Cover.

Relevance: 30.00%

Abstract:

Description based on: 1911.

Relevance: 30.00%

Abstract:

"The preliminary American second edition of A.L.A. catalog rules, on Part I of which the present volume is based, was prepared by: American Library Association, Catalog Code Revision Committee." The 1st ed., published in 1908, has title: Catalog rules, author and title entries.

Relevance: 30.00%

Abstract:

Most of the modern developments with classification trees are aimed at improving their predictive capacity. This article considers a curiously neglected aspect of classification trees, namely the reliability of predictions that come from a given classification tree. Because a node of a tree represents, in the limit, a point in the predictor space, the aim of this article is the development of localized assessments of the reliability of prediction rules. A classification tree may be used either to provide a probability forecast, where for each node the membership probabilities for each class constitute the prediction, or a true classification, where each new observation is predictively assigned to a unique class. Correspondingly, two types of reliability measure will be derived, namely prediction reliability and classification reliability. We use bootstrapping methods as the main tool to construct these measures. We also provide a suite of graphical displays by which they may be easily appreciated. In addition to providing some estimate of the reliability of specific forecasts of each type, these measures can also be used to guide future data collection to improve the effectiveness of the tree model. The motivating example we give has a binary response, namely the presence or absence of a species of Eucalypt, Eucalyptus cloeziana, at a given sampling location in response to a suite of environmental covariates (although the methods are not restricted to binary response data).
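
A minimal sketch of the bootstrap idea, assuming a scikit-learn decision tree and toy presence/absence data (both are illustrative stand-ins, not the paper's data or exact procedure): refit the tree on bootstrap resamples and summarize how stable the membership probability (prediction reliability) and the assigned class (classification reliability) are for a new site.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.utils import resample

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                  # toy environmental covariates
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # toy presence/absence response
x_new = np.array([[0.2, -0.1, 0.4]])           # a new sampling location

probs, labels = [], []
for b in range(200):
    Xb, yb = resample(X, y, random_state=b)    # bootstrap resample
    tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(Xb, yb)
    p = tree.predict_proba(x_new)[0, 1]        # P(presence) at the new site
    probs.append(p)
    labels.append(int(p >= 0.5))               # hard classification

# Spread of the forecast probability across resamples -> prediction reliability;
# agreement of the hard assignments -> classification reliability.
print("prediction reliability (std of P[presence]):", np.std(probs))
print("classification reliability (vote agreement):", np.mean(labels))
```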

Relevance: 30.00%

Abstract:

Systematic protocols that use decision rules or scores are seen to improve consistency and transparency in classifying the conservation status of species. When applying these protocols, assessors are typically required to decide on estimates for attributes that are inherently uncertain. Input data and resulting classifications are usually treated as though they are exact and hence without operator error. We investigated the impact of data interpretation on the consistency of protocols of extinction risk classifications and diagnosed causes of discrepancies when they occurred. We tested three widely used systematic classification protocols employed by the World Conservation Union, NatureServe, and the Florida Fish and Wildlife Conservation Commission. We provided 18 assessors with identical information for 13 different species to infer estimates for each of the required parameters for the three protocols. The threat classification of several of the species varied from low risk to high risk, depending on who did the assessment. This occurred across the three protocols investigated. Assessors tended to agree on their placement of species in the highest (50-70%) and lowest risk categories (20-40%), but there was poor agreement on which species should be placed in the intermediate categories. Furthermore, the correspondence between the three classification methods was unpredictable, with large variation among assessors. These results highlight the importance of peer review and consensus among multiple assessors in species classifications and the need to be cautious with assessments carried out by a single assessor. Greater consistency among assessors requires wide use of training manuals and formal methods for estimating parameters that allow uncertainties to be represented, carried through chains of calculations, and reported transparently.

Relevance: 30.00%

Abstract:

We show that the classification of bi-partite pure entangled states when local quantum operations are restricted, e.g., constrained by local superselection rules, yields a structure that is analogous in many respects to that of mixed-state entanglement, including such exotic phenomena as bound entanglement and activation. This analogy aids in resolving several conceptual puzzles in the study of entanglement under restricted operations. Specifically, we demonstrate that several types of quantum optical states that possess confusing entanglement properties are analogous to bound entangled states. Also, the classification of pure-state entanglement under restricted operations can be much simpler than for mixed state entanglement. For instance, in the case of local Abelian superselection rules all questions concerning distillability can be resolved.

Relevance: 30.00%

Abstract:

This thesis presents an investigation into the application of methods of uncertain reasoning to the biological classification of river water quality. Existing biological methods for reporting river water quality are critically evaluated, and the adoption of a discrete biological classification scheme advocated. Reasoning methods for managing uncertainty are explained, in which the Bayesian and Dempster-Shafer calculi are cited as primary numerical schemes. Elicitation of qualitative knowledge on benthic invertebrates is described. The specificity of benthic response to changes in water quality leads to the adoption of a sensor model of data interpretation, in which a reference set of taxa provide probabilistic support for the biological classes. The significance of sensor states, including that of absence, is shown. Novel techniques of directly eliciting the required uncertainty measures are presented. Bayesian and Dempster-Shafer calculi were used to combine the evidence provided by the sensors. The performance of these automatic classifiers was compared with the expert's own discrete classification of sampled sites. Variations of sensor data weighting, combination order and belief representation were examined for their effect on classification performance. The behaviour of the calculi under evidential conflict and alternative combination rules was investigated. Small variations in evidential weight and the inclusion of evidence from sensors absent from a sample improved classification performance of Bayesian belief and support for singleton hypotheses. For simple support, inclusion of absent evidence decreased classification rate. The performance of Dempster-Shafer classification using consonant belief functions was comparable to Bayesian and singleton belief. Recommendations are made for further work in biological classification using uncertain reasoning methods, including the combination of multiple-expert opinion, the use of Bayesian networks, and the integration of classification software within a decision support system for water quality assessment.
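
The evidence-pooling step at the heart of the Dempster-Shafer classifier is Dempster's rule of combination. Here is a minimal sketch for a two-class frame {good, poor}, with the whole frame standing for ignorance; the mass values are illustrative, not elicited values from the thesis.

```python
from itertools import product

FRAME = frozenset({"good", "poor"})  # frame of discernment

def combine(m1: dict, m2: dict) -> dict:
    """Dempster's rule: intersect focal elements, renormalize conflict away."""
    combined, conflict = {}, 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + x * y
        else:
            conflict += x * y  # mass falling on the empty set
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

# Two "sensor" taxa, each lending probabilistic support to the classes:
m_taxon1 = {frozenset({"good"}): 0.7, FRAME: 0.3}
m_taxon2 = {frozenset({"good"}): 0.5, frozenset({"poor"}): 0.2, FRAME: 0.3}
print(combine(m_taxon1, m_taxon2))
```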

Relevance: 30.00%

Abstract:

Transition P Systems are a parallel and distributed computational model based on the notion of the cellular membrane structure. Each membrane determines a region that encloses a multiset of objects and evolution rules. Transition P Systems evolve through transitions between two consecutive configurations, which are determined by the membrane structure and the multisets present inside the membranes. Moreover, transitions between two consecutive configurations proceed by an exhaustive, non-deterministic and parallel application of the subset of active evolution rules inside each membrane of the P system. To establish this subset of active evolution rules, however, the useful and applicable rules must first be computed. Hence, computation of the applicable evolution rules subset is critical for the efficiency of the whole evolution process, because it is performed in parallel inside each membrane in every evolution step. The work presented here shows the advantages of incorporating decision trees into the evolution rules applicability algorithm. To this end, we present the formalizations needed to treat applicability as a classification problem, the method by which the necessary decision tree is automatically generated, and the new applicability algorithm based on it.
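
The basic applicability test being optimized is simple multiset containment: a rule u → v is applicable in a membrane when its left-hand multiset u fits inside the region's current multiset. A minimal sketch follows, with illustrative rule names and objects (not from the paper); the paper's decision-tree approach aims to speed up exactly this per-step, per-membrane computation.

```python
from collections import Counter

def applicable(lhs: Counter, region: Counter) -> bool:
    """A rule is applicable iff its left-hand multiset is contained in the region."""
    return all(region[obj] >= n for obj, n in lhs.items())

# Illustrative membrane contents and rule left-hand sides:
region = Counter({"a": 3, "b": 1})
rules = {
    "r1": Counter({"a": 2}),          # applicable: needs 2 a's, region has 3
    "r2": Counter({"a": 1, "b": 2}),  # not applicable: needs 2 b's, region has 1
}

active = [name for name, lhs in rules.items() if applicable(lhs, region)]
print(active)  # ['r1']
```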

Relevance: 30.00%

Abstract:

A major drawback of artificial neural networks is their black-box character. Rule extraction algorithms are therefore becoming increasingly important for explaining the rules embedded in trained neural networks. In this paper, we use a method for symbolic knowledge extraction from neural networks once they have been trained on the desired function. The basis of this method is the weights of the trained neural network. The method allows knowledge extraction from neural networks with continuous inputs and outputs, as well as rule extraction. An example application is shown, based on extracting the average load demand of a power plant.
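
As a heavily hedged illustration of the general family of weight-based rule extraction (not the paper's specific algorithm): after training, inputs whose weight magnitudes dominate can be turned into antecedents of a symbolic IF-THEN rule. All names and values below are invented for illustration.

```python
import numpy as np

# Stand-ins for a trained single-output network's input weights; the
# feature names and values are hypothetical, not from the paper.
names = ["temperature", "hour_of_day", "humidity", "noise_input"]
w = np.array([0.9, 1.4, 0.1, 0.02])  # trained weights (stand-in values)

# Keep only the dominant inputs as rule antecedents:
threshold = 0.5 * np.max(np.abs(w))
antecedents = [
    f"{name} is {'HIGH' if wi > 0 else 'LOW'}"
    for name, wi in zip(names, w)
    if abs(wi) >= threshold
]
print("IF " + " AND ".join(antecedents) + " THEN load_demand is HIGH")
# -> IF temperature is HIGH AND hour_of_day is HIGH THEN load_demand is HIGH
```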