812 results for Boolean-like laws. Fuzzy implications. Fuzzy rule-based systems. Fuzzy set theories
Abstract:
In order to develop applications for visual interpretation of medical images, the early detection and evaluation of microcalcifications in digital mammograms is very important, since their presence is often associated with a high incidence of breast cancer. Accurate classification into benign and malignant groups would help improve diagnostic sensitivity as well as reduce the number of unnecessary biopsies. The challenge here is the selection of useful features to distinguish benign from malignant microcalcifications. Our purpose in this work is to analyse a microcalcification evaluation method based on a set of shape-based features extracted from the digitised mammography. The segmentation of the microcalcifications is performed using a fixed-tolerance region growing method to extract boundaries of calcifications with manually selected seed pixels. Taking into account that shapes and sizes of clustered microcalcifications have been associated with a high risk of carcinoma based on different subjective measures, such as whether or not the calcifications are irregular, linear, vermiform, branched, rounded or ring-like, our efforts were addressed to obtaining a feature set related to shape. The identification of the parameters concerning the malignant character of the microcalcifications was performed on a set of 146 mammograms whose real diagnoses were known in advance from biopsies. This allowed the following shape-based parameters to be identified as the relevant ones: Number of clusters, Number of holes, Area, Feret elongation, Roughness, and Elongation. Further experiments on a set of 70 new mammograms showed that the performance of the classification scheme is close to the mean performance of three expert radiologists, which makes the proposed method a candidate for assisting diagnosis and encourages further investigation into adding new features, not only shape-related ones.
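The fixed-tolerance region growing step described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the tolerance value, 4-connectivity, and the toy image are all assumptions made for the example.

```python
# Minimal sketch of fixed-tolerance region growing from a manually
# selected seed pixel. Pixels are added while their intensity stays
# within `tolerance` of the seed intensity (4-connected flood fill).
from collections import deque

def region_grow(image, seed, tolerance=10):
    """Return the set of (row, col) pixels 4-connected to `seed` whose
    intensity lies within `tolerance` of the seed's intensity."""
    rows, cols = len(image), len(image[0])
    seed_value = image[seed[0]][seed[1]]
    region = {seed}
    frontier = deque([seed])
    while frontier:
        r, c = frontier.popleft()
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and (nr, nc) not in region
                    and abs(image[nr][nc] - seed_value) <= tolerance):
                region.add((nr, nc))
                frontier.append((nr, nc))
    return region

# Example: a bright calcification-like blob on a dark background.
img = [[0,   0,   0, 0],
       [0, 200, 210, 0],
       [0, 205,   0, 0],
       [0,   0,   0, 0]]
blob = region_grow(img, seed=(1, 1), tolerance=20)
```

The boundary of the returned pixel set is what the shape-based features (area, elongation, roughness, and so on) would then be computed from.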
Abstract:
In an essay on anger, the ancient philosopher Seneca warns of the futility of harboring negative emotions given the imminence of death—the ultimate human equalizer. Ancient philosophers like Seneca believed that emotions are based on cognitions (beliefs) and are therefore modifiable through spiritual exercises. Modern research shows that the emotional and cognitive aspects of human psychology are malleable (nurture), but also require gene expression (nature). A parallel between individual behavior and socio-political forces suggests a framework for the current environmental crisis—another human equalizer. Two critical questions arise: Is the amassed experience of the last few centuries sufficient to lead to corrective measures that would avoid environmental degradation? Or would a catastrophic event with significant long-term environmental degradation have to occur before corrective measures reach consensus at the socio-political level?
Abstract:
This thesis aims to present the experience gained in developing an intelligent supervisory system to improve the management of wastewater treatment plants, implementing it in a real plant (the Granollers WWTP) and evaluating its day-to-day operation in typical plant situations. This supervisory system combines and integrates classical control tools for treatment plants (an automatic controller of the dissolved oxygen level in the biological reactor, the use of descriptive process models...) with tools from the field of artificial intelligence (knowledge-based systems, specifically expert systems and case-based systems, and neural networks). The document is structured in 9 chapters. A first, introductory part reviews the current state of WWTP control and explains why the management of these processes is so complex (chapter 1). This introductory chapter, together with chapter 2, which presents the background of this thesis, serves to establish the objectives of this work (chapter 3). Chapter 4 then describes the peculiarities and specific features of the plant chosen for implementing the supervisory system. Chapters 5 and 6 of this document present the work carried out to develop the rule-based system or expert system (chapter 6) and the case-based system (chapter 7). Chapter 8 describes the integration of these two reasoning tools into a distributed multi-level architecture. Finally, a last chapter covers the evaluation (verification and validation), first of each tool separately and then of the overall system, against real situations arising at the treatment plant.
Abstract:
The implementation of Decision Support Systems (DSS) in urban wastewater treatment plants (WWTPs) facilitates the application of more efficient knowledge-based techniques for process management, ensuring effluent water quality while minimizing the environmental cost of plant operation. Knowledge-based systems are characterized by their ability to work with poorly structured domains in which much of the relevant information is qualitative and/or uncertain. These are precisely the traits found in biological treatment systems, and consequently in a WWTP. However, the high complexity of DSSs makes their design, development and deployment in a real plant very costly, so it becomes decisive to generate a protocol that facilitates their export to WWTPs of similar technology. The objective of this thesis is precisely the development of a protocol that enables the systematic export of DSSs and the reuse of previously acquired process knowledge. The work is developed around the case study resulting from exporting the original DSS prototype implemented at the Granollers WWTP to the Montornès WWTP. This DSS integrates two types of knowledge-based systems: rule-based systems (computer programs that emulate human reasoning and its problem-solving capacity using the same sources of information) and case-based reasoning systems (knowledge-based computer programs that aim to solve the abnormal situations the plant is currently experiencing by recalling the action taken in a similar past situation). The work is structured in several chapters, the first of which introduces the reader to the world of decision support systems and to the domain of wastewater treatment.
Next, the objectives are set out and the materials and methods used are described. The DSS prototype developed for the Granollers WWTP is then presented. Once the prototype has been presented, the first protocol, proposed by the author of this thesis in his earlier research project, is described. The results obtained from the practical application of the protocol to generate a new DSS for a different treatment plant, starting from the prototype, are then presented. This practical application allows the protocol to evolve into a better export plan. Finally, it can be concluded that the new protocol reduces the time needed to carry out the export process, even though the number of steps required has increased, which means the new protocol is more systematic.
Abstract:
Dual-system models suggest that English past tense morphology involves two processing routes: rule application for regular verbs and memory retrieval for irregular verbs (Pinker, 1999). In second language (L2) processing research, Ullman (2001a) suggested that both verb types are retrieved from memory, but more recently Clahsen and Felser (2006) and Ullman (2004) argued that past tense rule application can be automatised with experience by L2 learners. To address this controversy, we tested highly proficient Greek-English learners with naturalistic or classroom L2 exposure, compared to native English speakers, in a self-paced reading task involving past tense forms embedded in plausible sentences. Our results suggest that, irrespective of the type of exposure, proficient L2 learners with extended L2 exposure apply rule-based processing.
Abstract:
This project is concerned with the way that illustrations, photographs, diagrams and graphs, and typographic elements interact to convey ideas on the book page. A framework for graphic description is proposed to elucidate this graphic language of ‘complex texts’. The model is built up from three main areas of study, with reference to a corpus of contemporary children’s science books. First, a historical survey puts the subjects for study in context. Then a multidisciplinary discussion of graphic communication provides a theoretical underpinning for the model; this leads to various proposals, such as the central importance of ratios and relationships among parts in creating meaning in graphic communication. Lastly, a series of trials in description contributes to the structure of the model itself. At the heart of the framework is an organising principle that integrates descriptive models from the fields of design, literary criticism, art history, and linguistics, among others, as well as novel categories designed specifically for book design. Broadly, design features are described in terms of elemental component parts (micro-level), larger groupings of these (macro-level), and finally in terms of overarching, ‘whole book’ qualities (meta-level). Various features of book design emerge at different levels; for instance, the presence of nested discursive structures, a form of graphic recursion in editorial design, is proposed at the macro-level. Across these three levels are the intersecting categories of ‘rule’ and ‘context’, offering different perspectives with which to describe graphic characteristics. Context-based features are contingent on social and cultural environment, the reader’s previous knowledge, and the actual conditions of reading; rule-based features relate to the systematic or codified aspects of graphic language.
The model aims to be a frame of reference for graphic description, of use in different forms of qualitative or quantitative research and as a heuristic tool in practice and teaching.
Abstract:
Individual differences in cognitive style can be characterized along two dimensions: ‘systemizing’ (S, the drive to analyze or build ‘rule-based’ systems) and ‘empathizing’ (E, the drive to identify another's mental state and respond to this with an appropriate emotion). Discrepancies between these two dimensions in one direction (S > E) or the other (E > S) are associated with sex differences in cognition: on average more males show an S > E cognitive style, while on average more females show an E > S profile. The neurobiological basis of these different profiles remains unknown. Since individuals may be typical or atypical for their sex, it is important to move away from the study of sex differences and towards the study of differences in cognitive style. Using structural magnetic resonance imaging we examined how neuroanatomy varies as a function of the discrepancy between E and S in 88 adult males from the general population. Selecting just males allows us to study discrepant E-S profiles in a pure way, unconfounded by other factors related to sex and gender. An increasing S > E profile was associated with increased gray matter volume in cingulate and dorsal medial prefrontal areas which have been implicated in processes related to cognitive control, monitoring, error detection, and probabilistic inference. An increasing E > S profile was associated with larger hypothalamic and ventral basal ganglia regions which have been implicated in neuroendocrine control, motivation and reward. These results suggest an underlying neuroanatomical basis linked to the discrepancy between these two important dimensions of individual differences in cognitive style.
Abstract:
The addition of small quantities of nanoparticles to conventional and sustainable thermoplastics leads to property enhancements with considerable potential in many areas of application, including food packaging [1], lightweight composites and high performance materials [2]. In the case of sustainable polymers [3], the addition of nanoparticles may well enhance properties sufficiently that the portfolio of possible applications is greatly increased. Most engineered nanoparticles are highly stable, and these exist as nanoparticles prior to compounding with the polymer resin. They remain as nanoparticles during the active use of the packaging material as well as in the subsequent waste and recycling streams. It is also possible to construct the nanoparticles within the polymer films during processing from organic compounds selected to present minimal or no potential health hazards [4]. In both cases the characterisation of the resultant nanostructured polymers presents a number of challenges. Foremost amongst these are the coupled challenges of the nanoscale of the particles and the low fraction present in the polymer matrix. Very low fractions of nanoparticles are only effective if the dispersion of the particles is good. This continues to be an issue in the process engineering, but of course bad dispersion is much easier to see than good dispersion. In this presentation we show the merits of a combined scattering (neutron and x-ray) and microscopy (SEM, TEM, AFM) approach. We explore this methodology using rod-like, plate-like and spheroidal particles, including metallic particles, plate-like and rod-like clay dispersions and nanoscale particles based on carbon such as nanotubes and graphene flakes. We will draw on a range of material systems, many explored in partnership with other members of Napolynet. The value of adding nanoscale particles is that their scale matches the scale of the structure in the polymer matrix.
Although this can lead to difficulties in separating the effects in scattering experiments, the result in morphological studies means that both the nanoparticles and the polymer morphology are revealed.
Abstract:
Ensemble learning techniques generate multiple classifiers, so-called base classifiers, whose combined classification results are used in order to increase the overall classification accuracy. In most ensemble classifiers the base classifiers are based on the Top Down Induction of Decision Trees (TDIDT) approach. However, an alternative approach for the induction of rule-based classifiers is the Prism family of algorithms. Prism algorithms produce modular classification rules that do not necessarily fit into a decision tree structure. Prism classification rulesets achieve a comparable and sometimes higher classification accuracy than decision tree classifiers when the data is noisy and large. Yet Prism still suffers from overfitting on noisy and large datasets. In practice ensemble techniques tend to reduce overfitting; however, there exists no ensemble learner for modular classification rule inducers such as the Prism family of algorithms. This article describes the first development of an ensemble learner based on the Prism family of algorithms, in order to enhance Prism’s classification accuracy by reducing overfitting.
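The modular rule induction that distinguishes Prism from TDIDT can be sketched as a greedy "separate and conquer" loop: a rule for a target class is grown by repeatedly adding the attribute-value test with the highest precision on the instances still covered. The sketch below illustrates only this core idea under simplifying assumptions (a single rule for one class, a tiny hand-made dataset); the actual Prism family iterates over all classes and removes covered instances between rules.

```python
# Greedy induction of one Prism-style modular rule for a target class.
def induce_rule(instances, target_class):
    """instances: list of (attribute_dict, class_label) pairs.
    Returns a rule as a dict of attribute -> required value."""
    rule = {}          # conjunction of attribute-value tests
    covered = instances
    # Keep specialising until the rule covers only the target class.
    while covered and any(label != target_class for _, label in covered):
        best = None    # (precision, attribute, value)
        for attrs, _ in covered:
            for a, v in attrs.items():
                if a in rule:
                    continue
                subset = [(x, y) for x, y in covered if x.get(a) == v]
                hits = sum(1 for _, y in subset if y == target_class)
                precision = hits / len(subset)
                if best is None or precision > best[0]:
                    best = (precision, a, v)
        if best is None:
            break      # no further test available
        _, a, v = best
        rule[a] = v
        covered = [(x, y) for x, y in covered if x.get(a) == v]
    return rule

# Illustrative toy data (attribute names are made up for the example).
data = [({"outlook": "sunny", "windy": "no"},  "play"),
        ({"outlook": "sunny", "windy": "yes"}, "stay"),
        ({"outlook": "rain",  "windy": "no"},  "stay")]
rule = induce_rule(data, "play")
```

An ensemble learner in the spirit of the article would train many such rule inducers on resampled data and combine their votes, rather than relying on a single ruleset.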
Abstract:
This article investigates the determinants of union inclusiveness towards agency workers in Western Europe, using an index which combines unionization rates with dimensions of collective agreements covering agency workers. Using fuzzy-set Qualitative Comparative Analysis, we identify two combinations of conditions leading to inclusiveness: the ‘Northern path’ includes high union density, high bargaining coverage and high union authority, and is consistent with the power resources approach. The ‘Southern path’ combines high union authority, high bargaining coverage, statutory regulations of agency work and working-class orientation, showing that ideology rather than institutional incentives shapes union strategies towards the marginal workforce.
Abstract:
Anti-spoofing is attracting growing interest in biometrics, considering the variety of fake materials and new means to attack biometric recognition systems. New unseen materials continuously challenge state-of-the-art spoofing detectors, calling for additional systematic approaches to target anti-spoofing. By incorporating liveness scores into the biometric fusion process, recognition accuracy can be enhanced, but traditional sum-rule based fusion algorithms are known to be highly sensitive to single spoofed instances. This paper investigates 1-median filtering as a spoofing-resistant generalised alternative to the sum rule, targeting the problem of partial multibiometric spoofing where m out of n biometric sources to be combined are attacked. Augmenting previous work, this paper investigates the dynamic detection and rejection of liveness-recognition pair outliers for spoofed samples in a true multi-modal configuration, with its inherent challenge of normalisation. As a further contribution, a bootstrap aggregating (bagging) classifier for fingerprint spoof detection is presented. Experiments on the latest face video databases (Idiap Replay-Attack Database and CASIA Face Anti-Spoofing Database) and a fingerprint spoofing database (Fingerprint Liveness Detection Competition 2013) illustrate the efficiency of the proposed techniques.
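The sensitivity of sum-rule fusion to a single spoofed score, which motivates the median-based alternative above, is easy to demonstrate. The sketch below is a toy illustration, not the paper's method: the scores, the 0.5 acceptance threshold, and the use of the one-dimensional median (the simplest case of a 1-median) are all assumptions made for the example.

```python
# Toy comparison of sum-rule (mean) fusion versus median fusion when one
# of n matchers is fooled by a spoof. Scores are similarity scores in
# [0, 1]; the fused score is compared against an illustrative threshold.
import statistics

def sum_rule(scores, threshold=0.5):
    # Sum rule is equivalent to thresholding the mean score.
    return statistics.mean(scores) >= threshold

def median_rule(scores, threshold=0.5):
    # Median fusion ignores a single extreme (spoofed) score.
    return statistics.median(scores) >= threshold

impostor = [0.3, 0.25, 0.2]    # genuine impostor: all matchers score low
spoofed  = [0.3, 0.25, 0.99]   # same impostor, but one matcher is spoofed
```

With these numbers the single spoofed score drags the mean above the threshold and the sum rule falsely accepts, while the median stays low and rejects, which is the robustness property the paper exploits.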
Abstract:
The circumsporozoite protein (CSP) of Plasmodium vivax, a major target for malaria vaccine development, has immunodominant B-cell epitopes mapped to central nonapeptide repeat arrays. To determine whether rearrangements of repeat motifs during mitotic DNA replication of parasites create significant CSP diversity under conditions of low effective meiotic recombination rates, we examined csp alleles from sympatric P. vivax isolates systematically sampled from an area of low malaria endemicity in Brazil over a period of 14 months. Nine unique csp types, comprising six different nonapeptide repeats, were observed in 45 isolates analyzed. Identical or nearly identical repeats predominated in most arrays, consistent with their recent expansion. We found strong linkage disequilibrium at sites across the chromosome 8 segment flanking the csp locus, consistent with rare meiotic recombination in this region. We conclude that CSP repeat diversity may not be severely constrained by rare meiotic recombination in areas of low malaria endemicity. New repeat variants may be readily created by nonhomologous recombination even when meiotic recombination is rare, with potential implications for CSP-based vaccine development. (C) 2010 Elsevier B.V. All rights reserved.
Abstract:
There is a family of well-known external clustering validity indexes to measure the degree of compatibility or similarity between two hard partitions of a given data set, including partitions with different numbers of categories. A unified, fully equivalent set-theoretic formulation for an important class of such indexes was derived and extended to the fuzzy domain in a previous work by the author [Campello, R.J.G.B., 2007. A fuzzy extension of the Rand index and other related indexes for clustering and classification assessment. Pattern Recognition Lett., 28, 833-841]. However, the proposed fuzzy set-theoretic formulation is not valid as a general approach for comparing two fuzzy partitions of data. Instead, it is an approach for comparing a fuzzy partition against a hard referential partition of the data into mutually disjoint categories. In this paper, generalized external indexes for comparing two data partitions with overlapping categories are introduced. These indexes can be used as general measures for comparing two partitions of the same data set into overlapping categories. An important issue that is seldom touched upon in the literature is also addressed in the paper, namely, how to compare two partitions of different subsamples of data. A number of pedagogical examples and three simulation experiments are presented and analyzed in detail. A review of recent related work compiled from the literature is also provided. (c) 2010 Elsevier B.V. All rights reserved.
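The classical (hard) Rand index that these fuzzy and overlapping extensions generalize is simple to state: the fraction of object pairs on which the two partitions agree, i.e. pairs placed together in both or apart in both. A minimal sketch, with illustrative label vectors:

```python
# Classical Rand index between two hard partitions given as label lists.
# Works even when the partitions have different numbers of categories.
from itertools import combinations

def rand_index(labels_a, labels_b):
    pairs = list(combinations(range(len(labels_a)), 2))
    agreements = 0
    for i, j in pairs:
        same_a = labels_a[i] == labels_a[j]   # together in partition A?
        same_b = labels_b[i] == labels_b[j]   # together in partition B?
        if same_a == same_b:                  # agree on this pair
            agreements += 1
    return agreements / len(pairs)

# Two 4-object partitions that disagree on exactly one of the 6 pairs.
r = rand_index([0, 0, 1, 1], [0, 0, 1, 2])
```

The fuzzy and overlapping generalizations discussed in the paper replace these crisp pair-agreement counts with set-theoretic quantities computed from membership degrees.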
Abstract:
Delineation of commuting regions has always been based on statistical units, often municipalities or wards. However, using these units has certain disadvantages, as their land areas differ considerably. Much information is lost with the larger spatial base units, and distortions occur in self-containment values, the main criterion in rule-based delineation procedures. Alternatively, one can start from relatively small standard-size units such as hexagons. In this way, much greater detail in spatial patterns is obtained. In this paper, regions are built by means of intrazonal maximization (Intramax) on the basis of hexagons. The use of geoprocessing tools, specifically developed for the processing of commuting data, speeds up processing time considerably. The results of the Intramax analysis are evaluated with travel-to-work area constraints, and comparisons are made with commuting fields, accessibility to employment, commuting flow density and network commuting flow size. From selected steps in the regionalization process, a hierarchy of nested commuting regions emerges, revealing the complexity of commuting patterns.
Abstract:
HydroShare is an online, collaborative system being developed for open sharing of hydrologic data and models. The goal of HydroShare is to enable scientists to easily discover and access hydrologic data and models, retrieve them to their desktop or perform analyses in a distributed computing environment that may include grid, cloud or high performance computing model instances as necessary. Scientists may also publish outcomes (data, results or models) into HydroShare, using the system as a collaboration platform for sharing data, models and analyses. HydroShare is expanding the data sharing capability of the CUAHSI Hydrologic Information System by broadening the classes of data accommodated, creating new capability to share models and model components, and taking advantage of emerging social media functionality to enhance information about and collaboration around hydrologic data and models. One of the fundamental concepts in HydroShare is that of a Resource. All content is represented using a Resource Data Model that separates system and science metadata and has elements common to all resources as well as elements specific to the types of resources HydroShare will support. These will include different data types used in the hydrology community and models and workflows that require metadata on execution functionality. The HydroShare web interface and social media functions are being developed using the Drupal content management system. A geospatial visualization and analysis component enables searching, visualizing, and analyzing geographic datasets. The integrated Rule-Oriented Data System (iRODS) is being used to manage federated data content and perform rule-based background actions on data and model resources, including parsing to generate metadata catalog information and the execution of models and workflows. 
This presentation will introduce the HydroShare functionality developed to date, describe key elements of the Resource Data Model and outline the roadmap for future development.