967 results for Classification models
Abstract:
Ignoring small-scale heterogeneities in Arctic land cover may bias estimates of water, heat and carbon fluxes in large-scale climate and ecosystem models. We investigated subpixel-scale heterogeneity in CHRIS/PROBA and Landsat-7 ETM+ satellite imagery over ice-wedge polygonal tundra in the Lena Delta of Siberia, and the associated implications for evapotranspiration (ET) estimation. Field measurements were combined with aerial and satellite data to link fine-scale (0.3 m resolution) with coarse-scale (up to 30 m resolution) land cover data. A large portion of the total wet tundra (80%) and water body area (30%) appeared in the form of patches less than 0.1 ha in size, which could not be resolved with satellite data. Wet tundra and small water bodies represented about half of the total ET in summer. Their contribution was reduced to 20% in fall, when ET rates from dry tundra were highest instead. Inclusion of subpixel-scale water bodies increased the total water surface area of the Lena Delta from 13% to 20%. The actual land/water proportions within each composite satellite pixel were best captured with Landsat data using a statistical downscaling approach, which is recommended for reliable large-scale modelling of water, heat and carbon exchange from permafrost landscapes.
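As a hedged illustration of such a statistical downscaling step, the Python sketch below regresses the subpixel water fraction, known only for pixels covered by fine-resolution reference data, on coarse-pixel band reflectances, then predicts fractions for all pixels; the band set, the synthetic data and the regression choice are assumptions for demonstration, not the study's actual method details.

import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
# Synthetic 30 m Landsat-like pixels: [red, NIR, SWIR] reflectances (assumed bands).
bands = rng.uniform(0.02, 0.4, size=(500, 3))
# Water darkens NIR/SWIR, so build a synthetic "true" water fraction accordingly.
true_frac = np.clip(1.2 - 2.0 * bands[:, 1] - 1.5 * bands[:, 2], 0, 1)

train = rng.choice(500, size=100, replace=False)  # pixels with fine-scale truth
model = LinearRegression().fit(bands[train], true_frac[train])

pred_frac = np.clip(model.predict(bands), 0, 1)   # subpixel fractions, not hard classes
print("water area, hard 50% threshold :", (pred_frac > 0.5).mean())
print("water area, subpixel estimate  :", pred_frac.mean())

The point of the comparison in the last two lines is that a hard per-pixel classification misses the many small water patches, whereas the fractional estimate retains them.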
Abstract:
The growing interest in quantifying the cultural and creative industries and in making visible the economic contribution of culture-related activities demands, first of all, the construction of internationally comparable analytical frameworks. Three major bodies currently address this issue, and their comparative study is the focus of this article: the UNESCO Framework for Cultural Statistics (FCS-2009), the European Framework for Cultural Statistics (ESSnet-Culture 2012) and the methodological resource of the “Convenio Andrés Bello” group for working with the Satellite Accounts on Culture in Ibero-America (CAB-2015). Measurements of the cultural sector provide the information necessary for the sound planning of cultural policies, which in turn sustain industries and promote cultural diversity. The text identifies the differences between the three models at three levels of analysis: the sectors, the cultural activities, and the criteria each framework uses to assign activities to sectors. The end result is that the cultural statistics of countries that implement different frameworks cannot be compared.
Abstract:
Although ancestral polymorphism and incomplete lineage sorting models are commonly applied at the population level, they have increasingly been invoked and tested to explain deep radiations. Hypotheses are put forward that ancestral polymorphisms are the likely reason for paraphyletic taxa at the class level in the diatoms, based on an ancient rapid radiation of the entire group. Models of ancestral deep coalescence are invoked to explain paraphyly and molecular evolution at the class level in the diatoms. Other examples at more recent divergences are also documented. Whether or not the paraphyletic groups seen in the diatoms at all taxonomic levels should be formally recognized is discussed. The continued use of the terms centric and pennate diatoms is substantiated: additional evidence supports their use both as descriptive terms for the two groups and, for the latter, as a taxonomic group, because new morphological evidence from the auxospores justifies the formal classification of the basal and core araphids as new subclasses of pennate diatoms in the class Bacillariophyceae. Keys to the higher levels of the diatoms, showing how the terms centric and araphid diatoms can be defined, are provided.
Abstract:
The major function of this model is to access the UCI Wisconsin Breast Cancer dataset [1] and classify the data items into two categories, normal and anomalous. This kind of classification can be referred to as anomaly detection, which discriminates anomalous from normal behaviour in computer systems. One popular approach to anomaly detection is Artificial Immune Systems (AIS): adaptive systems inspired by theoretical immunology and by observed immune functions, principles and models, applied to problem solving. The Dendritic Cell Algorithm (DCA) [2] is an AIS algorithm developed specifically for anomaly detection, and it has been successfully applied to intrusion detection in computer security. Agent-based modelling is believed to be an ideal approach for implementing AIS, as intelligent agents can naturally represent the immune entities in AIS. This model evaluates the feasibility of re-implementing the DCA in an agent-based simulation environment called AnyLogic, where the immune entities of the DCA are represented by intelligent agents. Successful implementation would make it possible to build more complex and adaptive AIS models in agent-based simulation environments.
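Since the DCA is described here only at a high level, the following is a minimal, heavily simplified sketch of a DCA-style anomaly classifier in Python. It assumes the Wisconsin Breast Cancer features have already been mapped to per-item "danger" and "safe" signals; the migration threshold, cell count and signal weighting are illustrative assumptions, not the published DCA parameterization, and this is not the AnyLogic agent-based implementation discussed above.

import numpy as np

def dca_classify(danger, safe, n_cells=10, migration=5.0, threshold=0.5, seed=0):
    """Flag items as anomalous when most dendritic cells that presented
    them matured in a 'danger' context (MCAV > threshold)."""
    rng = np.random.default_rng(seed)
    n = len(danger)
    mature_votes = np.zeros(n)
    presented = np.zeros(n)
    for _ in range(n_cells):
        csm = semi = mat = 0.0          # per-cell signal accumulators
        sampled = []
        for i in rng.permutation(n):    # each cell samples items in random order
            csm += danger[i] + safe[i]  # costimulation drives migration
            semi += safe[i]             # safe signal -> semi-mature context
            mat += danger[i]            # danger signal -> mature context
            sampled.append(i)
            if csm > migration:         # cell migrates and presents its items
                for j in sampled:
                    presented[j] += 1
                    mature_votes[j] += (mat > semi)
                csm = semi = mat = 0.0
                sampled = []
        for j in sampled:               # flush the cell at the end of the pass
            presented[j] += 1
            mature_votes[j] += (mat > semi)
    mcav = np.divide(mature_votes, presented,
                     out=np.zeros(n), where=presented > 0)
    return mcav > threshold             # True = anomalous

# Toy usage: items with high danger and low safe signals get flagged.
danger = np.array([0.9, 0.1, 0.8, 0.05])
safe   = np.array([0.1, 0.9, 0.2, 0.95])
print(dca_classify(danger, safe))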
Abstract:
International audience
Abstract:
Part 8: Business Strategies Alignment
Abstract:
During the last few decades there has been increasing international recognition of studies analysing changes in family models, focusing on the determinants of female employment and the problems of work-family balance (Lewis, 2001; Petit & Hook, 2005; Saraceno, Crompton & Lyonette, 2006, 2008; Pfau-Effinger, 2012). The majority of these studies have examined work-family balance problems and the effectiveness of family and gender policies in encouraging female employment (Korpi et al., 2013). In Spain, special attention has been given to the family policies implemented, the employability of women and the role of the father in the family (Flaquer et al., 2015; Meil, 2015); far less emphasis has been placed on the analysis of cultural family models (González and Jurado, 2012; Crespi and Moreno, 2016). The purpose of this paper is to present some first results on the influence of socio-demographic factors on expectations and attitudes about family models. The study offers an analytical reflection on the determinants of family ambivalence in Spain from the cultural and institutional dimensions, and characterizes Spanish preferences among family models following the Pfau-Effinger (2004) classification of family living arrangements. The rationale for the study is twofold: on the one hand, studies focused on this objective in Spain are scarce; on the other, studies carried out in the international context have confirmed the analytical value of researching attitude and value change to explain the meaning and trends of family change. Preliminary results are also presented from a multinomial analysis of the influence of socio-demographic factors on the family model chosen by individuals in Spain (father and mother working full time; mother part-time, father full-time; mother not at work, father full-time; mother and father part-time). The database used is the International Social Survey Programme: Family and Changing Gender Roles IV (ISSP 2012). Spain is the only Southern European country that participated in the survey, and for this reason it is treated as a representative case study.
Abstract:
We study the problem of detecting sentences describing adverse drug reactions (ADRs) and frame it as binary classification. We investigate different neural network (NN) architectures for ADR classification. In particular, we propose two new neural network models: the Convolutional Recurrent Neural Network (CRNN), obtained by concatenating convolutional neural networks with recurrent neural networks, and the Convolutional Neural Network with Attention (CNNA), obtained by adding attention weights to convolutional neural networks. We evaluate the various NN architectures on a Twitter dataset containing informal language and on an Adverse Drug Effects (ADE) dataset constructed by sampling MEDLINE case reports. Experimental results show that on both datasets all the NN architectures considerably outperform traditional maximum entropy classifiers trained on n-grams with different weighting strategies. On the Twitter dataset, all the NN architectures perform similarly, but on the ADE dataset the plain CNN performs better than its more complex variants. Nevertheless, CNNA allows the attention weights of words to be visualised when making classification decisions, and is hence more appropriate for extracting the word subsequences that describe ADRs.
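As an illustration of the CNNA idea, here is a minimal sketch of a convolutional text classifier with an attention layer in PyTorch. It follows the description above (convolutions over word embeddings, attention weights over positions, binary output), but all layer sizes and hyperparameters are assumptions, and this is not the authors' implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class CNNAttention(nn.Module):
    def __init__(self, vocab_size, embed_dim=100, n_filters=64, kernel=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # 1-D convolution over the word sequence (padding preserves length).
        self.conv = nn.Conv1d(embed_dim, n_filters, kernel, padding=kernel // 2)
        self.attn = nn.Linear(n_filters, 1)   # one attention score per position
        self.out = nn.Linear(n_filters, 2)    # binary ADR / non-ADR logits

    def forward(self, tokens):                      # tokens: (batch, seq_len)
        x = self.embed(tokens).transpose(1, 2)      # (batch, embed, seq)
        h = torch.relu(self.conv(x)).transpose(1, 2)    # (batch, seq, filters)
        a = F.softmax(self.attn(h).squeeze(-1), dim=1)  # attention weights
        ctx = torch.bmm(a.unsqueeze(1), h).squeeze(1)   # attention-weighted sum
        return self.out(ctx), a   # logits plus weights for visualisation

# Toy usage with random token ids.
model = CNNAttention(vocab_size=5000)
logits, weights = model(torch.randint(0, 5000, (8, 40)))
print(logits.shape, weights.shape)  # torch.Size([8, 2]) torch.Size([8, 40])

Returning the attention weights alongside the logits is what enables the per-word visualisation the abstract highlights as CNNA's advantage.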
Abstract:
In knowledge technology work, as expressed by the scope of this conference, there are a number of communities, each uncovering new methods, theories, and practices. The Library and Information Science (LIS) community is one such community. This community, through tradition and innovation, theories and practice, organizes knowledge and develops knowledge technologies formed by iterative research hewn to the values of equal access and discovery for all. The Information Modeling community is another contributor to knowledge technologies. It concerns itself with the construction of symbolic models that capture the meaning of information and organize it in ways that are computer-based but human-understandable. A recent paper that examines certain assumptions in information modeling builds a bridge between these two communities, offering a forum for a discussion of common aims from a common perspective. In a June 2000 article, Parsons and Wand separate classes from instances in information modeling in order to free instances from what they call the “tyranny” of classes. They attribute a number of problems in information modeling to inherent classification, that is, to disregard for the fact that instances can be conceptualized independently of any class assignment. By separating instances from classes, Parsons and Wand strike a sonorous chord with classification theory as understood in LIS. In the practice community and in the publications of LIS, faceted classification has shifted the paradigm of knowledge organization theory in the twentieth century. Here, with the proposal of inherent classification and the resulting layered information modeling, a clear line joins the LIS classification theory community and the information modeling community. Both communities have their eyes turned toward networked resource discovery, and with this conceptual conjunction a new paradigmatic conversation can take place. Parsons and Wand propose that the layered information model can facilitate schema integration, schema evolution, and interoperability. These three spheres of information modeling have their own connotations, but they are not distant from the aims of classification research in LIS. In this new conceptual conjunction, established by Parsons and Wand, information modeling through the layered information model can expand the horizons of classification theory beyond LIS, promoting a cross-fertilization of ideas on the interoperability of subject access tools such as classification schemes, thesauri, taxonomies, and ontologies. This paper examines the common ground between the layered information model and faceted classification, establishing a vocabulary and outlining some common principles. It then turns to the issue of schema, the horizons of conventional classification, and the differences between Information Modeling and Library and Information Science. Finally, a framework is proposed that deploys an interpretation of the layered information modeling approach in a knowledge technologies context. In order to design subject access systems that will integrate, evolve and interoperate in a networked environment, knowledge organization specialists must consider a semantic class independence of the kind Parsons and Wand propose for information modeling.
Abstract:
As the universe of knowledge and its subjects change over time, indexing languages, such as classification schemes, accommodate that change by restructuring. Restructuring indexing languages affects the work of indexers and cataloguers. Subjects may split or be lumped together. They may disappear, only to reappear later. And new subjects may emerge that were assumed to be already present but were not clearly articulated (Miksa, 1998). In this context we have the complex relationship between the indexing language, the text being described, and the already described collection (Tennis, 2007). It is possible to imagine indexers placing a document into an outdated class because it is the one they have already used for their collection. However, doing this erases the semantics of the present indexing language. Given this range of choice in the context of indexing language change, the question arises: what does this look like in practice? How often does it occur? Further, what does this phenomenon tell us about subjects in indexing languages? Does the practice we observe in reaction to indexing language change provide evidence of conceptual models of subjects and subject creation? If that evidence is incomplete but gets us close, what evidence do we still require?
Abstract:
Models based on species distributions are widely used and serve important purposes in ecology, biogeography and conservation. Their continuous predictions of environmental suitability are commonly converted into a binary classification of predicted (or potential) presences and absences, whose accuracy is then evaluated through a number of measures that have been the subject of recent reviews. We propose four additional measures that analyse observation-prediction mismatch from a different angle, namely from the perspective of the predicted rather than the observed area, and that add to the existing toolset of model evaluation methods. We explain how these measures complement the view provided by the existing measures, allowing further insight into distribution model predictions. We also describe how they can be particularly useful when models are used to forecast the spread of diseases or of invasive species, and to predict modifications in species' distributions under climate and land-use change.
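To make the predicted-area perspective concrete, here is a small Python sketch that computes classical observed-area measures alongside measures whose denominators are the predicted presences and absences. The measure names and formulas are illustrative assumptions in the spirit of the abstract, not necessarily the four measures the authors define.

import numpy as np

def prediction_mismatch(observed, predicted):
    obs = np.asarray(observed, dtype=bool)
    pred = np.asarray(predicted, dtype=bool)
    tp = np.sum(obs & pred);  fp = np.sum(~obs & pred)
    fn = np.sum(obs & ~pred); tn = np.sum(~obs & ~pred)
    return {
        # Classical view: denominators are the observed classes.
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        # Predicted-area view: denominators are the predicted classes.
        "over_prediction_rate": fp / (tp + fp),   # predicted presences that are absences
        "under_prediction_rate": fn / (fn + tn),  # predicted absences that are presences
        # Relative size of the predicted vs. observed presence area.
        "potential_presence_increment": (tp + fp) / (tp + fn) - 1,
    }

obs  = [1, 1, 1, 0, 0, 0, 0, 1]
pred = [1, 1, 0, 1, 1, 0, 0, 1]
for name, value in prediction_mismatch(obs, pred).items():
    print(f"{name}: {value:.2f}")

The design point is simply the change of denominator: the same confusion matrix yields different, complementary error rates depending on whether one conditions on what was observed or on what was predicted.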
Abstract:
The semiarid region of northeastern Brazil, the Caatinga, is extremely important due to its biodiversity and endemism. Measurements of plant physiology are crucial for calibrating the Dynamic Global Vegetation Models (DGVMs) currently used to simulate the responses of vegetation in the face of global change. In field work carried out in an area of preserved Caatinga forest in Petrolina, Pernambuco, measurements of carbon assimilation (in response to light and CO2) were performed on 11 individuals of Poincianella microphylla, a native species that is abundant in this region. These data were used to calibrate the maximum carboxylation velocity (Vcmax) used in the INLAND model. The calibration techniques used were Multiple Linear Regression (MLR) and the data mining techniques Classification And Regression Trees (CART) and K-MEANS clustering. The results were compared to the UNCALIBRATED model. Simulated Gross Primary Productivity (GPP) reached 72% of observed GPP when using the calibrated Vcmax values, whereas the UNCALIBRATED approach accounted for only 42% of observed GPP. This work thus shows the benefits of calibrating DGVMs with field ecophysiological measurements, especially in areas where field data are scarce or non-existent, such as the Caatinga.
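A minimal scikit-learn sketch of the three calibration strategies named above, applied to predicting Vcmax from environmental covariates; the covariate names and the synthetic data are assumptions for illustration, not the field measurements from Petrolina.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(42)
# Hypothetical per-leaf measurements: [light, leaf temperature, CO2].
X = rng.uniform([0, 20, 300], [2000, 40, 800], size=(110, 3))
vcmax = 30 + 0.01 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 3, 110)

# 1) Multiple linear regression: Vcmax as a linear function of covariates.
mlr = LinearRegression().fit(X, vcmax)

# 2) CART: a regression tree that splits the covariate space into boxes,
#    each with its own calibrated Vcmax.
cart = DecisionTreeRegressor(max_depth=3).fit(X, vcmax)

# 3) K-means: cluster the measurements, then use each cluster's mean
#    Vcmax as the calibrated value for conditions falling in that cluster.
km = KMeans(n_clusters=4, n_init=10).fit(X)
cluster_vcmax = np.array([vcmax[km.labels_ == k].mean() for k in range(4)])

x_new = np.array([[1500, 30, 400]])
print("MLR    :", mlr.predict(x_new))
print("CART   :", cart.predict(x_new))
print("K-means:", cluster_vcmax[km.predict(x_new)])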
Abstract:
In recent years a great deal of effort has been put into the development of new techniques for automatic object classification, driven in part by applications such as medical imaging and driverless cars. To this end, several mathematical models have been developed, from logistic regression to neural networks. A crucial aspect of these so-called classification algorithms is the use of algebraic tools to represent and approximate the input data. In this thesis, we examine two different models for image classification based on a particular tensor decomposition, the Tensor-Train (TT) decomposition. Tensor approaches preserve the multidimensional structure of the data and the neighbouring relations among pixels. Furthermore, unlike other tensor decompositions, the Tensor-Train does not suffer from the curse of dimensionality, making it an extremely powerful strategy when dealing with high-dimensional data. It also allows data compression when combined with truncation strategies that reduce memory requirements without spoiling classification performance. The first model we propose is based on a direct decomposition of the database by means of the TT decomposition to find basis vectors used to classify a new object. The second is a tensor dictionary learning model, based on the TT decomposition, in which the terms of the decomposition are estimated using a proximal alternating linearized minimization algorithm with a spectral stepsize.
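For concreteness, the following Python sketch implements the standard TT-SVD procedure, which factorizes a d-way array into a chain of 3-way cores via successive truncated SVDs; rank truncation provides the compression mentioned above. The test tensor and maximum rank are arbitrary, and this is not the thesis' classification pipeline.

import numpy as np

def tt_svd(tensor, max_rank=8):
    """Decompose a d-way array into Tensor-Train cores via successive SVDs."""
    dims = tensor.shape
    cores, rank = [], 1
    c = tensor.reshape(1, -1)              # current remainder, shape (rank, rest)
    for k in range(len(dims) - 1):
        c = c.reshape(rank * dims[k], -1)
        u, s, vt = np.linalg.svd(c, full_matrices=False)
        r = min(max_rank, len(s))          # truncating here gives compression
        cores.append(u[:, :r].reshape(rank, dims[k], r))
        c = s[:r, None] * vt[:r]           # carry the remainder forward
        rank = r
    cores.append(c.reshape(rank, dims[-1], 1))
    return cores

def tt_reconstruct(cores):
    """Contract the TT cores back into a full array (for checking)."""
    out = cores[0]
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=([-1], [0]))
    return out.squeeze(axis=(0, -1))

t = np.random.rand(6, 7, 8, 9)
cores = tt_svd(t, max_rank=8)
print([c.shape for c in cores])            # chain of 3-way cores
err = np.linalg.norm(t - tt_reconstruct(cores)) / np.linalg.norm(t)
print("relative reconstruction error:", err)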
Diffusive models and chaos indicators for non-linear betatron motion in circular hadron accelerators
Abstract:
Understanding the complex dynamics of beam-halo formation and evolution in circular particle accelerators is crucial for the design of current and future rings, particularly those using superconducting magnets, such as the CERN Large Hadron Collider (LHC), its luminosity upgrade HL-LHC, and the proposed Future Circular Hadron Collider (FCC-hh). A recent diffusive framework, which describes the evolution of the beam distribution by means of a Fokker-Planck equation whose diffusion coefficient is derived from the Nekhoroshev theorem, has been proposed to describe the long-term behaviour of beam dynamics and particle losses. In this thesis, we discuss the theoretical foundations of this framework and propose an original measurement protocol based on collimator scans, with a view to measuring the Nekhoroshev-like diffusion coefficient from beam loss data. The available LHC collimator scan data, unfortunately collected without the proposed measurement protocol, have been successfully analysed using this framework. The approach is also applied to datasets from detailed measurements of the impact of so-called long-range beam-beam compensators on beam losses, also at the LHC. Furthermore, dynamic indicators have been studied as a tool for exploring the phase-space properties of realistic accelerator lattices in single-particle tracking simulations. By first examining the classification performance of known and new indicators in detecting the chaotic character of initial conditions for a modulated Hénon map, and then applying this knowledge to realistic accelerator lattices, we have tried to identify a connection between the presence of chaotic regions in phase space and Nekhoroshev-like diffusive behaviour, providing new tools to the accelerator physics community.
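As a rough illustration of the diffusive framework, here is a Python sketch that integrates a 1-D Fokker-Planck equation in the action variable I with a Nekhoroshev-like diffusion coefficient D(I) proportional to exp(-2 (I*/I)^(1/(2 kappa))), using explicit finite differences and an absorbing boundary standing in for a collimator. All parameter values (I*, kappa, grid, time step, initial profile) are assumptions for demonstration, not the thesis' fitted values.

import numpy as np

def nekhoroshev_diffusion(I, I_star=1.0, kappa=0.33):
    # D(I) ~ exp(-2 (I*/I)**(1/(2*kappa))); vanishes rapidly at small action.
    return np.exp(-2.0 * (I_star / np.maximum(I, 1e-12)) ** (1.0 / (2.0 * kappa)))

n, I_max, dt, steps = 100, 1.0, 1e-4, 20000
I = np.linspace(0.0, I_max, n)
dI = I[1] - I[0]
rho = np.exp(-I / 0.2)                 # initial, roughly exponential beam profile
rho[-1] = 0.0                          # absorbing boundary: the collimator
D_half = nekhoroshev_diffusion(0.5 * (I[1:] + I[:-1]))  # D at cell faces

losses = []
for _ in range(steps):
    grad = D_half * np.diff(rho) / dI          # D * drho/dI at cell faces
    rho[1:-1] += dt * np.diff(grad) / dI       # drho/dt = d/dI (D drho/dI)
    rho[-1] = 0.0                              # re-impose absorption
    losses.append(D_half[-1] * rho[-2] / dI)   # outgoing flux -> beam loss signal

print("surviving beam fraction:", rho.sum() / np.exp(-I / 0.2).sum())

The recorded losses play the role of the beam loss data mentioned above: in the framework's spirit, one would fit the Nekhoroshev-like parameters so that the simulated loss signal matches the measured one during a collimator scan.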