850 results for "Local classification method"
Abstract:
In this paper, we propose a system for authenticating local bee pollen against fraudulent samples using image processing and classification techniques. Our system is based on the colour properties of bee pollen loads and the use of one-class classifiers to reject unknown pollen samples. The latter classification techniques allow us to tackle the major difficulty of the problem: the existence of many possible fraudulent pollen types. Also presented is a multi-classifier model with an ambiguity discovery process to fuse the output of the one-class classifiers. The method is validated by authenticating Spanish bee pollen types, with the overall accuracy of the final system being 94%. The system is therefore able to rapidly reject non-local pollen samples with inexpensive hardware and without the need to send the product to a laboratory.
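The rejection idea in this abstract — train only on the known local class and treat everything dissimilar as fraudulent — can be sketched with a one-class classifier on colour features. The synthetic RGB feature values, the `OneClassSVM` choice and the `nu` setting below are illustrative assumptions, not details taken from the paper.

```python
# Sketch of one-class authentication on colour features (synthetic data;
# the cluster centres and nu threshold are illustrative assumptions).
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
# "Local" pollen loads: mean RGB colour features clustered around one point.
local = rng.normal(loc=[0.8, 0.6, 0.2], scale=0.05, size=(200, 3))
# Unknown/fraudulent samples drawn from a different colour region.
foreign = rng.normal(loc=[0.4, 0.3, 0.5], scale=0.05, size=(50, 3))

# Train only on the local class; anything dissimilar is rejected (-1).
clf = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05).fit(local)

accept_local = (clf.predict(local) == 1).mean()
reject_foreign = (clf.predict(foreign) == -1).mean()
print(f"local accepted: {accept_local:.2f}, foreign rejected: {reject_foreign:.2f}")
```

Because the model never sees fraudulent examples during training, new fraud types it was never shown are still rejected, which is the property the abstract highlights.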
Abstract:
Most of the modern developments with classification trees are aimed at improving their predictive capacity. This article considers a curiously neglected aspect of classification trees, namely the reliability of predictions that come from a given classification tree. In the sense that a node of a tree represents a point in the predictor space in the limit, the aim of this article is the development of localized assessments of the reliability of prediction rules. A classification tree may be used either to provide a probability forecast, where for each node the membership probabilities for each class constitute the prediction, or a true classification, where each new observation is predictively assigned to a unique class. Correspondingly, two types of reliability measure will be derived, namely prediction reliability and classification reliability. We use bootstrapping methods as the main tool to construct these measures. We also provide a suite of graphical displays by which they may be easily appreciated. In addition to providing some estimate of the reliability of specific forecasts of each type, these measures can also be used to guide future data collection to improve the effectiveness of the tree model. The motivating example we give has a binary response, namely the presence or absence of a species of Eucalypt, Eucalyptus cloeziana, at a given sampling location in response to a suite of environmental covariates (although the methods are not restricted to binary response data).
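The bootstrapping idea can be illustrated in miniature: refit the tree on resampled training sets and measure how much the predicted class probability at a query point varies across refits. The toy dataset, tree depth and use of the standard deviation as the reliability summary below are assumptions for the demo, not the article's own measures.

```python
# Minimal sketch of bootstrap-based prediction reliability for a
# classification tree: predictions near the class boundary should be
# less stable across resamples than predictions far from it.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(300, 2))
y = (X[:, 0] + 0.3 * rng.normal(size=300) > 0).astype(int)  # noisy boundary at x=0

query = np.array([[0.05, 0.0], [0.9, 0.0]])  # near vs. far from the boundary
probs = []
for _ in range(200):  # bootstrap resamples of the training data
    idx = rng.integers(0, len(X), len(X))
    tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X[idx], y[idx])
    probs.append(tree.predict_proba(query)[:, 1])
probs = np.array(probs)

# Prediction (un)reliability: spread of the class-1 probability across resamples.
spread = probs.std(axis=0)
print("std near boundary:", spread[0], "std far from boundary:", spread[1])
```

The larger spread at the near-boundary point is exactly the kind of localized reliability signal the article proposes to report per node.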
Abstract:
Text classification is essential for narrowing down the number of documents relevant to a particular topic for further perusal, especially when searching through large biomedical databases. Protein-protein interactions are an example of such a topic, with databases being devoted specifically to them. This paper proposes a semi-supervised learning algorithm via local learning with class priors (LL-CP) for biomedical text classification, where unlabeled data points are classified in a vector space based on their proximity to labeled nodes. The algorithm has been evaluated on a corpus of biomedical documents to identify abstracts containing information about protein-protein interactions, with promising results. Experimental results show that LL-CP outperforms traditional semi-supervised learning algorithms such as SVM, and it also performs better than local learning without incorporating class priors.
Abstract:
The paper describes a classification-based, learning-oriented interactive method for solving linear multicriteria optimization problems. The method allows the decision makers to describe their preferences with greater flexibility, accuracy and reliability. The method is realized in an experimental software system supporting the solution of multicriteria optimization problems.
Abstract:
This article presents the principal results of the Ph.D. thesis Investigation and classification of doubly resolvable designs by Stela Zhelezova (Institute of Mathematics and Informatics, BAS), successfully defended at the Specialized Academic Council for Informatics and Mathematical Modeling on 22 February 2010.
Abstract:
Over recent decades, remote sensing has emerged as an effective tool for improving agriculture productivity. In particular, many works have dealt with the problem of identifying characteristics or phenomena of crops and orchards on different scales using remotely sensed images. Since natural processes are scale dependent and most of them are hierarchically structured, the determination of optimal study scales is mandatory in understanding these processes and their interactions. The concept of multi-scale/multi-resolution inherent to OBIA methodologies allows the scale problem to be dealt with, but for that, multi-scale and hierarchical segmentation algorithms are required. The question that remains unsolved is to determine the suitable segmentation scale that allows different objects and phenomena to be characterized in a single image. In this work, an adaptation of the Simple Linear Iterative Clustering (SLIC) algorithm to perform a multi-scale hierarchical segmentation of satellite images is proposed. The selection of the optimal multi-scale segmentation for different regions of the image is carried out by evaluating the intra-variability and inter-heterogeneity of the regions obtained on each scale with respect to the parent regions defined by the coarsest scale. To achieve this goal, an objective function that combines weighted variance and the global Moran index has been used. Two different kinds of experiment have been carried out, generating the number of regions on each scale through linear and dyadic approaches. This methodology has allowed, on the one hand, the detection of objects on different scales and, on the other hand, their representation in a single image. Altogether, the procedure provides the user with a better comprehension of the land cover, the objects on it and the phenomena occurring.
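The two terms of the objective function named in this abstract can be computed directly: intra-region (weighted) variance measures homogeneity within regions, while global Moran's I over region means measures spatial autocorrelation between regions. The toy image, binary adjacency matrix and trade-off weight `alpha` below are illustrative assumptions; the paper's exact weighting is not reproduced.

```python
# Illustrative computation of a scale-selection objective combining
# weighted intra-region variance and global Moran's I over region means.
import numpy as np

def weighted_variance(values, labels):
    # Area-weighted variance of pixel values within each region.
    total = 0.0
    for r in np.unique(labels):
        v = values[labels == r]
        total += v.size * v.var()
    return total / values.size

def morans_i(region_means, adjacency):
    # Global Moran's I over region means with a binary adjacency matrix.
    z = region_means - region_means.mean()
    n = len(region_means)
    return (n / adjacency.sum()) * (z @ adjacency @ z) / (z @ z)

# Toy image: two flat halves, segmented exactly along the true boundary.
img = np.hstack([np.zeros((4, 4)), np.ones((4, 4))])
good = np.hstack([np.zeros((4, 4), int), np.ones((4, 4), int)])

wv = weighted_variance(img.ravel(), good.ravel())
means = np.array([img[good == 0].mean(), img[good == 1].mean()])
adj = np.array([[0, 1], [1, 0]])  # the two regions touch
alpha = 0.5                        # assumed trade-off weight
objective = alpha * wv + (1 - alpha) * morans_i(means, adj)
print("weighted variance:", wv, "objective:", objective)
```

A good segmentation yields low within-region variance and low (here negative) Moran's I, since adjacent regions should be dissimilar; a scale is preferred when this combined score is minimized.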
Abstract:
Congenital vertebral malformations are common in brachycephalic “screw-tailed” dog breeds such as French bulldogs, English bulldogs, Boston terriers, and Pugs. These vertebral malformations disrupt normal vertebral column anatomy and biomechanics, potentially leading to deformity of the vertebral column and subsequent neurological dysfunction. The initial aim of this work was to study and determine whether the congenital vertebral malformations identified in these breeds could be translated into a radiographic classification scheme used in humans, giving an improved classification with clear and well-defined terminology, in the expectation that this would facilitate future study and clinical management in the veterinary field. Two observers who were blinded to the neurologic status of the dogs classified each vertebral malformation based on the human classification scheme of McMaster and were able to translate them successfully into a new classification scheme for veterinary use. The following aim was to assess the nature and the impact of vertebral column deformity engendered by these congenital vertebral malformations in the target breeds. As no gold standard exists in veterinary medicine for the calculation of the degree of deformity, it was elected to adapt the human equivalent, termed the Cobb angle, as a potential standard reference tool for use in veterinary practice. For the validation of the Cobb angle measurement method, a computerised semi-automatic technique was used and assessed by multiple independent observers. They observed not only that kyphosis was the most common vertebral column deformity but also that patients with such a deformity were more likely to suffer from neurological deficits, especially if their Cobb angle was above 35 degrees.
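The Cobb angle mentioned here is, in essence, the angle between the two endplate lines bounding a curvature. A minimal sketch of that geometry, assuming the endplate lines are already extracted as slopes (the semi-automatic extraction step is not shown):

```python
# Hedged sketch of a Cobb-style angle: the angle (in degrees) between
# the two endplate lines bounding a curvature, given their slopes.
import math

def cobb_angle(slope_upper, slope_lower):
    # Angle between two lines, from the difference of their inclinations.
    return abs(math.degrees(math.atan(slope_upper) - math.atan(slope_lower)))

# Example: endplates tilted +20 and -20 degrees give a 40-degree angle.
s = math.tan(math.radians(20))
angle = cobb_angle(s, -s)
print(angle)
```

Against the 35-degree threshold reported in the abstract, such a 40-degree kyphotic curve would flag an elevated risk of neurological deficits.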
Abstract:
We propose an alternative crack propagation algorithm which effectively circumvents the variable transfer procedure adopted with classical mesh adaptation algorithms. The present alternative consists of two stages: a mesh-creation stage, where a local damage model is employed with the objective of defining a crack-conforming mesh, and a subsequent analysis stage with a localization limiter in the form of a modified screened Poisson equation, which is exempt from crack path calculations. In the second stage, the crack naturally occurs within the refined region. A staggered scheme for the standard equilibrium and screened Poisson equations is used in this second stage. Element subdivision is based on edge split operations using a constitutive quantity (damage). To assess the robustness and accuracy of this algorithm, we use five quasi-brittle benchmarks, all successfully solved.
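A screened Poisson equation of the kind named here, -l²u'' + u = d, acts as a localization limiter by smearing a sharp source field over a length scale l. A 1-D finite-difference sketch, where the length scale, grid and damage spike are all made-up demo values rather than the paper's setup:

```python
# Illustrative 1-D screened Poisson solve, -l^2 u'' + u = d: a localised
# damage spike d is regularised into a smooth field u of width ~l.
import numpy as np

n, h, l = 101, 0.01, 0.05
d = np.zeros(n)
d[n // 2] = 1.0                        # localised damage spike

# Tridiagonal finite-difference system with a simple reflecting closure
# at the ends (first-order Neumann boundary treatment).
c = (l / h) ** 2
A = np.zeros((n, n))
for i in range(n):
    A[i, i] = 1.0 + 2.0 * c
    if i > 0:
        A[i, i - 1] = -c
    if i < n - 1:
        A[i, i + 1] = -c
A[0, 0] = A[-1, -1] = 1.0 + c
u = np.linalg.solve(A, d)
print("peak:", u.max(), "nodes above 1e-3:", (u > 1e-3).sum())
```

The smeared field is what drives edge-split refinement over a finite band rather than along a single mesh line, which is why no explicit crack path calculation is needed.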
Abstract:
Diabetic Retinopathy (DR) is a complication of diabetes that can lead to blindness if not readily discovered. Automated screening algorithms have the potential to improve identification of patients who need further medical attention. However, the identification of lesions must be accurate to be useful for clinical application. The bag-of-visual-words (BoVW) algorithm employs a maximum-margin classifier in a flexible framework that is able to detect the most common DR-related lesions such as microaneurysms, cotton-wool spots and hard exudates. BoVW bypasses the need for pre- and post-processing of the retinographic images, as well as the need for specific ad hoc techniques for the identification of each type of lesion. An extensive evaluation of the BoVW model, using three large retinal image datasets (DR1, DR2 and Messidor) with different resolutions and collected by different healthcare personnel, was performed. The results demonstrate that the BoVW classification approach can identify different lesions within an image without having to utilize different algorithms for each lesion, reducing processing time and providing a more flexible diagnostic system. Our BoVW scheme is based on sparse low-level feature detection with a Speeded-Up Robust Features (SURF) local descriptor, and mid-level features based on semi-soft coding with max pooling. The best BoVW representation for retinal image classification achieved an area under the receiver operating characteristic curve (AUC-ROC) of 97.8% (exudates) and 93.5% (red lesions) under a cross-dataset validation protocol. To assess the accuracy of detecting cases that require referral within one year, the sparse extraction technique associated with semi-soft coding and max pooling obtained an AUC of 94.2 ± 2.0%, outperforming current methods.
These results indicate that, for retinal image classification tasks in clinical practice, BoVW matches and, in some instances, surpasses results obtained using dense detection (widely believed to be the best choice in many vision problems) for the low-level descriptors.
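The BoVW pipeline described above has three steps: extract local descriptors, quantize them against a learned codebook, and train a maximum-margin classifier on the pooled codes. A minimal sketch, with synthetic descriptors standing in for SURF features and plain hard-assignment histogram pooling standing in for the paper's semi-soft coding with max pooling:

```python
# Minimal bag-of-visual-words sketch: k-means codebook over local
# descriptors, histogram encoding per image, then a linear max-margin
# classifier. Descriptors are synthetic stand-ins for SURF features.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

rng = np.random.default_rng(3)
K = 8  # codebook size (assumed; real systems use hundreds of words)

def fake_descriptors(lesion):
    # 20-40 local descriptors per image; "lesion" images shift the distribution.
    n = int(rng.integers(20, 40))
    return rng.normal(loc=2.0 if lesion else 0.0, scale=1.0, size=(n, 16))

images = [fake_descriptors(i % 2 == 1) for i in range(60)]
labels = np.array([i % 2 for i in range(60)])

# Step 1-2: learn the visual codebook from all descriptors pooled together.
codebook = KMeans(n_clusters=K, n_init=5, random_state=0).fit(np.vstack(images))

def encode(desc):
    # Hard-assignment histogram of codeword counts, L1-normalised.
    h = np.bincount(codebook.predict(desc), minlength=K).astype(float)
    return h / h.sum()

# Step 3: one fixed-length vector per image feeds a max-margin classifier.
X = np.array([encode(d) for d in images])
clf = LinearSVC().fit(X, labels)
acc = (clf.predict(X) == labels).mean()
print("training accuracy:", acc)
```

Because every lesion type is just a different histogram pattern over the same codebook, a single pipeline covers all lesions, which is the flexibility the abstract emphasizes.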
Abstract:
The present work compared the local injection of mononuclear cells into the spinal cord lateral funiculus with the alternative approach of local delivery with fibrin sealant after ventral root avulsion (VRA) and reimplantation. For that, female adult Lewis rats were divided into the following groups: avulsion only; reimplantation with fibrin sealant; root repair with fibrin sealant associated with mononuclear cells; and repair with fibrin sealant and injected mononuclear cells. Cell therapy resulted in greater survival of spinal motoneurons up to four weeks post-surgery, especially when mononuclear cells were added to the fibrin glue. Injection of mononuclear cells into the lateral funiculus yielded results similar to reimplantation alone. Additionally, mononuclear cells added to the fibrin glue increased neurotrophic factor gene transcript levels in the spinal cord ventral horn. Regarding motor recovery, evaluated by the functional peroneal index as well as paw print pressure, cell-treated rats performed as well as reimplanted-only animals, and significantly better than the avulsion-only subjects. The results herein demonstrate that mononuclear cell therapy is neuroprotective, increasing levels of brain-derived neurotrophic factor (BDNF) and glial-derived neurotrophic factor (GDNF). Moreover, the fibrin sealant approach for delivering mononuclear cells gave the best and longest-lasting results.
Abstract:
In this paper, space adaptivity is introduced to control the error in the numerical solution of hyperbolic systems of conservation laws. The reference numerical scheme is a new version of the discontinuous Galerkin method, which uses an implicit diffusive term in the direction of the streamlines, for stability purposes. The decision whether to refine or to unrefine the grid in a certain location is taken according to the magnitude of wavelet coefficients, which are indicators of local smoothness of the numerical solution. Numerical solutions of the nonlinear Euler equations illustrate the efficiency of the method. © Springer 2005.
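The refinement criterion this abstract describes — use the magnitude of wavelet coefficients as a local smoothness indicator — can be shown in one dimension with a Haar transform: detail coefficients are large only where the solution is rough. The step profile, Haar basis and threshold value below are illustrative assumptions, not the paper's discretization.

```python
# Sketch of wavelet-based refinement flagging: fine-scale Haar detail
# coefficients are large near a shock and tiny in smooth regions.
import numpy as np

def haar_details(u):
    # One level of the Haar transform: one detail coefficient per cell pair.
    return (u[0::2] - u[1::2]) / np.sqrt(2.0)

x = np.linspace(0, 1, 64, endpoint=False)
u = np.where(x <= 0.5, 1.0, 0.0)        # step profile: a "shock"
u = u + 0.01 * np.sin(2 * np.pi * x)    # smooth background variation

d = np.abs(haar_details(u))
refine = d > 0.1                        # refine where details are large
print("cells flagged for refinement:", refine.sum())
```

Only the pair of cells straddling the discontinuity is flagged; everywhere else the smooth background produces negligible coefficients, so the grid can stay coarse there.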
Abstract:
Time-use research takes the heterogeneity of ageing into account, analysing it multifactorially and offering a glimpse of older adults' lifestyles. This study aimed to describe the time use of 75 elderly women (68.04 ± 8.36 years) through their daily activities. Instruments used: a cognition test (Clock Completion Test), a personal data form, and a structured interview (Time Diary) for reporting daily activities. Daily activities were coded and classified using the Australian classification for time-use studies, which distributes them into nine main activity groups. The physical (location) and social (social partners) contexts of the activities were recorded. Most of the time was devoted to obligatory activities (domestic and personal care activities). The largest share of free time was devoted to passive leisure (watching television), with little involvement in physical activity. Being at home and being with family members or alone were the most frequent physical and social contexts. The study provided a glimpse of the group's lifestyle. Individual factors such as age, gender, educational level, marital status and socioeconomic status probably influenced the time-use patterns found.