895 results for classification and equivalence classes
Injuries of non-lethal child physical abuse to the crania and orofacial regions: a scientific review
Abstract:
This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Abstract:
Marine protection has been emphasized through global and European conventions, which highlight the need to establish special areas of conservation. Classification and habitat mapping have been developed to enhance the assessment of the marine environment, improve the spatial and strategic planning of human activities, and help implement ecosystem-based management. The European Nature Information System (EUNIS) is a comprehensive habitat classification system, developed by the European Environment Agency (EEA) in collaboration with experts from institutions throughout Europe, to facilitate the harmonised description and collection of habitat and biotope data.
Abstract:
In this article we describe ViDRILO, a semantic localization dataset for indoor environments. The dataset provides five sequences of frames acquired with a mobile robot in two similar office buildings under different lighting conditions. Each frame consists of a point cloud representation of the scene and a perspective image. The frames are annotated not only with the semantic category of the scene but also with the presence or absence of each item in a list of predefined objects. In addition to the frames and annotations, the dataset is distributed with a set of tools for its use in both place classification and object recognition tasks. The large number of labeled frames, in conjunction with the annotation scheme, makes this dataset different from existing ones. ViDRILO is released as a benchmark for problems such as multimodal place classification and object recognition, 3D reconstruction, and point cloud data compression.
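The annotation scheme described above (a scene category plus per-object presence flags, attached to an image and a point cloud) can be sketched as a small data structure. This is a hypothetical illustration: the field names, file paths, and object list below are invented, not ViDRILO's actual file format.

```python
# Hypothetical sketch of a ViDRILO-style frame annotation. Field names,
# paths, and the object list are invented for illustration only.
from dataclasses import dataclass, field

OBJECT_LIST = ["extinguisher", "computer", "printer", "bench"]  # illustrative subset

@dataclass
class Frame:
    image_path: str                # perspective image of the scene
    cloud_path: str                # point cloud representation
    scene_category: str            # label for place classification
    objects_present: dict = field(default_factory=dict)  # object name -> bool

frame = Frame(
    image_path="seq1/img_0001.png",
    cloud_path="seq1/cloud_0001.pcd",
    scene_category="corridor",
    objects_present={o: False for o in OBJECT_LIST} | {"extinguisher": True},
)
```

Place classification would use `scene_category` as the label, while object recognition would use the boolean presence vector.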
Abstract:
Generating sample models for testing a model transformation is no easy task. This paper explores the use of classifying terms and stratified sampling for developing richer test cases for model transformations. Classifying terms are used to define the equivalence classes that characterize the relevant subgroups for the test cases. From each equivalence class of object models, several representative models are chosen depending on the required sample size. We compare our results with test suites developed using random sampling, and conclude that by using an ordered and stratified approach the coverage and effectiveness of the test suite can be significantly improved.
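The sampling strategy the abstract describes can be sketched as follows: a classifying term maps each object model to an equivalence class, and a fixed number of representatives is then drawn from each class. This is a minimal, hypothetical sketch; the toy "models" and the classifying term below are invented, not the paper's.

```python
# Sketch of stratified sampling over equivalence classes induced by a
# classifying term. Toy data; the real technique operates on object models
# of a metamodel.
import random

def stratified_sample(models, classifying_term, per_class, seed=0):
    """Group models by classifying term, then sample from each class."""
    classes = {}
    for m in models:
        classes.setdefault(classifying_term(m), []).append(m)
    rng = random.Random(seed)
    sample = []
    for members in classes.values():
        k = min(per_class, len(members))
        sample.extend(rng.sample(members, k))
    return sample

# Toy "models": dicts with a varying number of elements.
models = [{"n_elements": n} for n in range(20)]
# Classifying term: partitions models into "small" and "large".
term = lambda m: "small" if m["n_elements"] < 10 else "large"
picked = stratified_sample(models, term, per_class=3)
```

By construction, `picked` contains representatives of every equivalence class, which is the coverage guarantee random sampling lacks.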
Abstract:
Master's dissertation, Universidade de Brasília, Instituto de Geociências, 2016.
Abstract:
Support Vector Machines (SVMs) are widely used classifiers for detecting physiological patterns in Human-Computer Interaction (HCI). Their success is due to their versatility, their robustness, and the wide availability of free dedicated toolboxes. Frequently in the literature, insufficient details about the SVM implementation and/or parameter selection are reported, making it impossible to reproduce the analysis and results of a study. In order to perform an optimized classification and report a proper description of the results, a comprehensive critical overview of the application of SVMs is necessary. The aim of this paper is to review the usage of SVMs in the determination of brain and muscle patterns for HCI, focusing on electroencephalography (EEG) and electromyography (EMG) techniques. In particular, an overview of the basic principles of SVM theory is outlined, together with a description of several relevant literature implementations. Furthermore, details concerning the reviewed papers are listed in tables, and statistics on SVM use in the literature are presented. The suitability of SVMs for HCI is discussed and critical comparisons with other classifiers are reported.
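As a minimal illustration of the implementation details the review asks authors to report (regularization strength, iteration budget, step-size schedule), here is a linear SVM trained by Pegasos-style hinge-loss subgradient descent on invented toy data. Real EEG/EMG pipelines use dedicated toolboxes; this sketch only makes the hyperparameters explicit.

```python
# Minimal linear SVM trained with hinge-loss subgradient descent
# (Pegasos-style, with an unregularized bias term). All data and
# hyperparameters below are invented for illustration.
import random

def train_linear_svm(X, y, lam=0.01, epochs=300, seed=0):
    """Return (w, b) approximately minimizing lam/2*|w|^2 + average hinge loss."""
    rng = random.Random(seed)
    w = [0.0] * len(X[0])
    b = 0.0
    t = 0
    for _ in range(epochs):
        order = list(range(len(X)))
        rng.shuffle(order)
        for i in order:
            t += 1
            eta = 1.0 / (lam * t)  # Pegasos step-size schedule
            score = sum(wj * xj for wj, xj in zip(w, X[i])) + b
            if y[i] * score < 1:   # margin violated: hinge subgradient step
                w = [(1 - eta * lam) * wj + eta * y[i] * xj
                     for wj, xj in zip(w, X[i])]
                b += eta * y[i]
            else:                  # only shrink w (regularizer gradient)
                w = [(1 - eta * lam) * wj for wj in w]
    return w, b

def predict(w, b, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0 else -1

# Toy, linearly separable points standing in for two signal classes.
X = [[0.0, 0.0], [0.2, 0.1], [1.0, 1.0], [0.9, 1.1]]
y = [-1, -1, 1, 1]
w, b = train_linear_svm(X, y)
preds = [predict(w, b, x) for x in X]
```

Reporting `lam`, `epochs`, and the seed, as done above, is exactly the kind of detail whose absence the review criticizes.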
Abstract:
Security defects are common in large software systems because of their size and complexity. Although efficient development processes, testing, and maintenance policies are applied to software systems, a large number of vulnerabilities can still remain despite these measures. Some vulnerabilities stay in a system from one release to the next because they cannot be easily reproduced through testing. These vulnerabilities endanger the security of the systems. We propose vulnerability classification and prediction frameworks based on vulnerability reproducibility. The frameworks are effective in identifying the types and locations of vulnerabilities at an early stage and in improving the security of the software in subsequent versions (referred to as releases). We expand an existing concept of software bug classification to vulnerability classification (easily reproducible and hard to reproduce) to develop a classification framework that differentiates between these vulnerabilities based on code fixes and textual reports. We then investigate the potential correlations between the vulnerability categories, classical software metrics, and other runtime environmental factors of reproducibility to develop a vulnerability prediction framework. The classification and prediction frameworks help developers adopt corresponding mitigation or elimination actions and develop appropriate test cases. The vulnerability prediction framework also helps security experts focus their effort on the top-ranked vulnerability-prone files. As a result, the frameworks decrease the number of attacks that exploit security vulnerabilities in the next versions of the software. To build the classification and prediction frameworks, different machine learning techniques (C4.5 Decision Tree, Random Forest, Logistic Regression, and Naive Bayes) are employed.
The effectiveness of the proposed frameworks is assessed based on collected software security defects of Mozilla Firefox.
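One of the techniques the abstract names, Naive Bayes over textual reports, can be sketched in a few lines. The reports and labels below are invented stand-ins for "easily reproducible" versus "hard to reproduce" vulnerability descriptions; the study itself also uses C4.5, Random Forest, and Logistic Regression.

```python
# Tiny multinomial Naive Bayes over words in textual vulnerability reports.
# Training documents and labels are invented for illustration.
import math
from collections import Counter, defaultdict

def train_nb(docs):
    class_counts = Counter(label for _, label in docs)
    word_counts = defaultdict(Counter)
    vocab = set()
    for text, label in docs:
        for w in text.split():
            word_counts[label][w] += 1
            vocab.add(w)
    return class_counts, word_counts, vocab

def classify(text, class_counts, word_counts, vocab):
    total_docs = sum(class_counts.values())
    best, best_lp = None, float("-inf")
    for label, n_docs in class_counts.items():
        lp = math.log(n_docs / total_docs)  # log prior
        denom = sum(word_counts[label].values()) + len(vocab)
        for w in text.split():
            lp += math.log((word_counts[label][w] + 1) / denom)  # Laplace smoothing
        if lp > best_lp:
            best, best_lp = label, lp
    return best

docs = [
    ("crash on fixed input deterministic stack trace", "easy"),
    ("deterministic crash reproduced with attached input", "easy"),
    ("race condition intermittent failure under load", "hard"),
    ("heap corruption intermittent timing dependent", "hard"),
]
model = train_nb(docs)
```

A report mentioning deterministic crashes then scores highest under the "easy" class, while one mentioning intermittent races scores as "hard".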
Abstract:
This CPM project focuses on the document approval process that the Division of State Human Resources consulting team uses for classification and compensation requests, e.g., job reclassifications, PD update requests, and salary requests. The ultimate goal is to become more efficient by using electronic signatures and electronic form filling to streamline the current document approval process.
Abstract:
The work of knowledge organization requires a particular set of tools. For instance, we need standards of content description like the Anglo-American Cataloging Rules, 2nd edition, Resource Description and Access (RDA), Cataloging Cultural Objects, and Describing Archives: A Content Standard. When we intellectualize the process of knowledge organization, that is, when we do basic theoretical research in knowledge organization, we need another set of tools. For this latter exercise we need constructs. Constructs are ideas with many conceptual elements, largely considered subjective. They allow us to be inventive, as well as to see a particular point of view in knowledge organization. For example, Patrick Wilson's ideas of exploitative control and descriptive control, or S. R. Ranganathan's fundamental categories, are constructs. They allow us to identify functional requirements, or operationalizations of functional requirements, or at least come close to them for our systems and schemes. They also allow us to carry out meaningful evaluation. What is even more interesting, from a research point of view, is that constructs, once offered to the community, can be contested and reinterpreted, and this has an effect on how we view knowledge organization systems and processes. Fundamental categories are again a good example, in that some members of the Classification Research Group (CRG) argued against Ranganathan's point of view. The CRG posited more fundamental categories than Ranganathan's five: Personality, Matter, Energy, Space, and Time (Ranganathan, 1967). The CRG needed significantly more fundamental categories for their work. And these are just two voices in this space; we can also consider the fundamental categories of Johannes Kaiser (1911), Shera and Egan, Barbara Kyle (Vickery, 1960), and Eric de Grolier (1962).
We can also reference contemporary work that continues the comparison and analysis of fundamental categories (e.g., Dousa, 2011). In all these cases we are discussing a construct. The fundamental category is not discovered; it is constructed by a classificationist. This is done because it is useful in engaging in the act of classification. And while we are accustomed to using constructs or debating their merit in one knowledge organization activity or another, we have not analyzed their structure, nor have we created a typology. In an effort to probe the epistemological dimension of knowledge organization, we think it would be a fruitful exercise to do so, because we might benefit from clarity around not only our terminology but the manner in which we talk about our terminology. We are all creative workers examining what is available to us, but doing so through particular lenses (constructs), identifying particular constructs. Knowing these constructs, and being able to refer to them, is what we would consider a core competency for knowledge organization researchers.
Abstract:
The concept of physical activity is conceived in different ways. Several factors directly and indirectly affect the perception that individuals build around it, leading to different definitions of physical activity from various perspectives and dimensions, among which a purely biological notion predominates. This study aims to analyze how physical activity is conceived, in concept and in practice, across social classes, considering the social determinants and social determination models of health. In order to understand how authors in the scientific literature conceive physical activity and its relation to social classes, from the theoretical perspective of the social determinants of health and the theory of social determination, we carried out a documentary review and content analysis of the concepts and practices of physical activity considered over the last 10 years. The PubMed and BVS (Biblioteca Virtual de Salud) databases were selected for their worldwide emphasis on health publications. The review shows that physical activity is predominantly conceived from a biological perspective that imposes a reductionist view. The relations between physical activity and social classes are clearly established; however, these relations may diverge depending on the concept of social class, the context and orientation of the authors, and the populations under study. The documented, reviewed, and analyzed studies show a clear tendency toward the determinants model; nevertheless, some studies orient their analyses toward the social determination model. As for the concept of social classes, the authors consider a combination of cultural and economic factors without committing to a specific concept.
Abstract:
In Portugal, veterinary pathology is developing rapidly, and in recent years we have witnessed the emergence of private laboratories and the restructuring of universities, polytechnics, and public laboratories. The Portuguese Society of Animal Pathology, through its actions and its associates, has kept the discussion open among its peers in order to standardize the criteria for the description, classification, and evaluation of the cases that are the subject of our daily work. One of the latest challenges is associated with the use of routine histochemical techniques and immunohistochemistry, in an effort to establish standardized panels for tumour diagnosis, which could eventually reduce the cost of each analysis. For this purpose a simple survey was built, in which all collaborators answered questions about the markers used for carcinoma, sarcoma, and round cell tumour diagnosis, as well as general questions related to the subject. We obtained twenty-one responses, from public and private laboratories. In general, immunohistochemical and histochemical methods are used for diagnosis in most cases. Wide-spectrum cytokeratins are universally used to confirm carcinoma, and vimentin for sarcoma. The CD3 marker is used by all laboratories to identify T lymphocytes. For the diagnosis of B-cell lymphoma, the marker used is not consensual. Each laboratory has different markers for more specific situations, and only two labs perform PCR techniques for diagnosis. These data will be presented to promote extended discussion, namely to reach a consensus when different markers are used.
Abstract:
The semiarid region of northeastern Brazil, the Caatinga, is extremely important due to its biodiversity and endemism. Measurements of plant physiology are crucial to the calibration of Dynamic Global Vegetation Models (DGVMs), which are currently used to simulate the responses of vegetation to global changes. In fieldwork carried out in an area of preserved Caatinga forest located in Petrolina, Pernambuco, measurements of carbon assimilation (in response to light and CO2) were performed on 11 individuals of Poincianella microphylla, a native species that is abundant in this region. These data were used to calibrate the maximum carboxylation velocity (Vcmax) used in the INLAND model. The calibration techniques used were Multiple Linear Regression (MLR) and data mining techniques such as Classification And Regression Trees (CART) and K-MEANS. The results were compared to the uncalibrated model. Simulated Gross Primary Productivity (GPP) reached 72% of observed GPP when using the calibrated Vcmax values, whereas the uncalibrated approach accounted for 42% of observed GPP. Thus, this work shows the benefits of calibrating DGVMs using field ecophysiological measurements, especially in areas where field data are scarce or non-existent, such as the Caatinga.
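One of the calibration steps the abstract names, Multiple Linear Regression, reduces in the single-predictor case to ordinary least squares. The sketch below fits invented (x, Vcmax-like) pairs, not the study's data; CART and K-MEANS would instead partition the observations before fitting.

```python
# Closed-form ordinary least squares for one predictor: the simplest
# instance of the MLR calibration step. Data points are invented.
def ols(xs, ys):
    """Return (slope, intercept) minimizing squared error."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

# Hypothetical predictor/response pairs lying exactly on y = 2x.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]
slope, intercept = ols(xs, ys)
```

With several predictors the same least-squares principle applies, with the slope replaced by a coefficient vector.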
Abstract:
We studied the Paraíba do Sul river watershed, São Paulo state (PSWSP), Southeastern Brazil, in order to assess land use and cover (LULC) and their implications for the amount of carbon (C) stored in the forest cover between 1985 and 2015. The region covers an area of 1,395,975 ha. We used images made by the Operational Land Imager sensor (OLI/Landsat-8) to produce mappings, and image segmentation techniques to produce vectors with homogeneous characteristics. The training samples and the samples used for classification and validation were collected from the segmented image. To quantify the C stocked in aboveground live biomass (AGLB), we used an indirect method and applied literature-based reference values. The recovery of 205,690 ha of secondary Native Forest (NF) after 1985 sequestered 9.7 Tg (teragrams) of C. Considering the whole NF area (455,232 ha), the amount of C accumulated along the whole watershed was 35.5 Tg, and the whole Eucalyptus crop (EU) area (113,600 ha) sequestered 4.4 Tg of C. Thus, the total amount of C sequestered in the whole watershed (NF + EU) was 39.9 Tg of C, or 145.6 Tg of CO2, and the NF areas were responsible for the largest C stock in the watershed (89%). Therefore, the increase in NF cover contributes positively to the reduction of CO2 concentration in the atmosphere, and Reducing Emissions from Deforestation and Forest Degradation (REDD+) may become one of the most promising compensation mechanisms for farmers who increase forest cover on their farms.
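The C-to-CO2 conversion behind such totals follows from molar masses: one mole of carbon (12 g) becomes one mole of CO2 (44 g), so CO2 mass = C mass × 44/12. A quick check with the stock values reconstructed from the abstract (its own 145.6 Tg figure presumably reflects rounding of intermediate stocks):

```python
# Stoichiometric conversion from carbon mass to CO2 mass.
# Stock values are taken from the abstract above.
C_TO_CO2 = 44.0 / 12.0   # molar mass of CO2 over molar mass of C

nf_c = 35.5              # Tg C in Native Forest
eu_c = 4.4               # Tg C in Eucalyptus crop
total_c = nf_c + eu_c
total_co2 = total_c * C_TO_CO2
```

This yields 39.9 Tg C and roughly 146 Tg CO2, in line with the abstract's rounded figures.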
Comparison of Explicit and Implicit Methods of Cross-Cultural Learning in an International Classroom
Abstract:
The paper addresses a gap in the literature concerning the difference between enhanced and non-enhanced cross-cultural learning in an international classroom. The objective of the described research was to clarify whether the environment of international classrooms could enhance cross-cultural competences significantly enough, or whether an additional focus on cross-cultural learning as an explicit objective of learning activities would add substantially to the experience. The research question was defined as "how can a specific exercise focused on cross-cultural learning enhance the cross-cultural skills of university students in an international classroom?". Surveys were conducted among international students in three leading Central European universities, in Lithuania, Poland, and Hungary, to measure the increase in their cross-cultural competences. The Lithuanian and Polish classes were composed of international students and concentrated on International Management/Business topics (explicit method). The Hungarian survey was done in a general business class that happened to be international in its composition (implicit method). Overall, our findings show that the implicit method resulted in comparable, and in some respects even stronger, effectiveness than the explicit method. The study method included analyses of students' individual increases in each study dimension and the construction of a compound measure to note the overall results. Our findings confirm the power of the international classroom as a stimulating environment for latent cross-cultural learning, even without specific exercises focused on cross-cultural learning itself. However, the specific exercise did induce additional learning, especially related to cross-cultural awareness and communication with representatives of other cultures, even though the extent of that learning may be interpreted as underwhelming.
The main conclusion from the study is that the diversity of the students engaged in a project provided an environment that supported cross-cultural learning, even without specific culture-focused reflections or exercises.
Abstract:
The fast development of Information and Communication Technologies (ICT) offers new opportunities to realize future smart cities. To understand, manage, and forecast a city's behavior, it is necessary to analyze different kinds of data from the most varied dataset acquisition systems. The aim of this research activity, in the framework of Data Science and Complex Systems Physics, is to provide stakeholders with new knowledge tools to improve the sustainability of mobility demand in future cities. From this perspective, the governance of the mobility demand generated by large tourist flows is becoming a vital issue for the quality of life in the historical centers of Italian cities, and it will worsen in the near future due to the continuous globalization process. Another critical theme is sustainable mobility, which aims to reduce private transportation in cities and improve multimodal mobility. We analyze the statistical properties of urban mobility in Venice, Rimini, and Bologna using different datasets provided by companies and local authorities. We develop algorithms and tools for cartography extraction, trip reconstruction, multimodality classification, and mobility simulation. We show the existence of characteristic mobility paths and of statistical properties that depend on the means of transport and the kind of user. Finally, we use our results to model and simulate the overall behavior of the cars moving in the Emilia-Romagna region and of the pedestrians moving in Venice, with software able to replicate in silico the demand for mobility and its dynamics.
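A multimodality-classification step like the one mentioned above often starts from simple kinematic rules. The speed thresholds below are an invented illustration of such a rule, not the authors' algorithm.

```python
# Hypothetical heuristic: label a trip segment by its average speed (km/h).
# Thresholds are invented for illustration; real pipelines combine speed
# with GPS traces, network matching, and dataset-specific calibration.
def classify_mode(avg_speed_kmh):
    if avg_speed_kmh < 7:
        return "pedestrian"
    if avg_speed_kmh < 25:
        return "bicycle"
    return "motorized"

segments = [3.5, 15.0, 60.0]
modes = [classify_mode(s) for s in segments]
```

Per-mode statistics (path lengths, travel times) can then be accumulated over the labeled segments.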