3 results for DATA as Art : ART as Data
in AMS Tesi di Laurea - Alm@DL - Università di Bologna
Abstract:
Ontology design and population, core aspects of semantic technologies, have recently become fields of great interest due to the increasing need for domain-specific knowledge bases that can boost the use of the Semantic Web. For building such knowledge resources, the state-of-the-art tools for ontology design require a great deal of human work: producing meaningful schemas and populating them with domain-specific data is a difficult and time-consuming task, even more so when knowledge must be modelled at web scale. The primary aim of this work is to investigate a novel and flexible methodology for automatically learning ontologies from textual data, lightening the human workload required for conceptualizing domain-specific knowledge and for populating the extracted schema with real data, and thus speeding up the whole ontology production process. Here computational linguistics plays a fundamental role, from automatically identifying facts in natural language and extracting frames of relations among recognized entities, to producing linked data with which existing knowledge bases can be extended or new ones created. In the state of the art, automatic ontology learning systems are mainly based on plain pipelines of linguistic classifiers performing tasks such as Named Entity Recognition, entity resolution, taxonomy and relation extraction [11]. These approaches present some weaknesses, especially in capturing the structures through which the meaning of complex concepts is expressed [24]. Humans, in fact, tend to organize knowledge in well-defined patterns, which include participant entities and meaningful relations linking entities with each other. In the literature, these structures have been called Semantic Frames by Fillmore [20] or, more recently, Knowledge Patterns [23]. Some NLP studies have recently shown the possibility of performing more accurate deep parsing, with the ability to logically understand the structure of discourse [7]. In this work, some of these technologies have been investigated and employed to produce accurate ontology schemas. The long-term goal is to collect large amounts of semantically structured information from the web of crowds, through an automated process, in order to identify and investigate the cognitive patterns used by humans to organize their knowledge.
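As a rough illustration of the pipelined baseline the abstract contrasts with frame-based approaches, the sketch below chains named entity recognition with a naive subject-verb-object relation extraction over a dependency parse. It assumes spaCy and its small English model purely for illustration; the thesis does not prescribe this toolkit, and the example sentence is invented.

```python
# Minimal sketch of a "plain-pipelined" approach: NER plus naive
# subject-verb-object relation extraction from a dependency parse.
# Requires spaCy and: python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")

def extract_triples(text):
    """Return (subject, relation, object) triples found in the text."""
    doc = nlp(text)
    triples = []
    for tok in doc:
        if tok.pos_ == "VERB":
            subjects = [c for c in tok.children if c.dep_ in ("nsubj", "nsubjpass")]
            objects = [c for c in tok.children if c.dep_ in ("dobj", "attr")]
            for s in subjects:
                for o in objects:
                    triples.append((s.text, tok.lemma_, o.text))
    return triples

text = "Charles Fillmore introduced semantic frames. Frames organize participant entities."
for subj, rel, obj in extract_triples(text):
    print(f"<{subj}> <{rel}> <{obj}>")
```

Such a pipeline treats each relation in isolation, which is exactly the limitation that frame- and pattern-oriented modelling tries to overcome.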
Abstract:
Data Distribution Management (DDM) is a core part of the High Level Architecture (HLA) standard: its goal is to optimize the resources used by simulation environments to exchange data. It has to filter and match the information generated during a simulation so that each federate (a simulation entity) receives only the information it needs. This must be done quickly and accurately in order to achieve good performance and avoid the transmission of irrelevant data; otherwise, network resources may saturate quickly. The main topic of this thesis is the implementation of a super partes DDM testbed that evaluates the quality of DDM approaches of all kinds: it supports both region-based and grid-based approaches, and it can accommodate other methods not yet devised. It ranks them using three factors: execution time, memory usage, and distance from the optimal solution. A prearranged set of instances is already available, but the creation of instances with user-provided parameters is also supported. The thesis is structured as follows. We start by introducing DDM and HLA and describing in detail what they do. Then, in the first chapter, we describe the state of the art, providing an overview of the best-known resolution approaches and the pseudocode of the most interesting ones. The third chapter describes how the testbed we implemented is structured. In the fourth chapter we present and compare the results obtained from the execution of the four approaches we implemented. The software described in this thesis can be downloaded from SourceForge at the following link: https://sourceforge.net/projects/ddmtestbed/. It is licensed under the GNU General Public License version 3.0 (GPLv3).
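To make the matching problem concrete, here is a minimal brute-force sketch of DDM region matching: every update region is tested against every subscription region for overlap along all dimensions. The function names and the example regions are hypothetical and are not taken from the testbed; region- and grid-based algorithms aim to beat this naive baseline.

```python
# Brute-force DDM matching: a region is a list of (lower, upper) extents,
# one per routing-space dimension; two regions match if they overlap in
# every dimension.
from itertools import product

def regions_overlap(r1, r2):
    """True if the two regions overlap along every dimension."""
    return all(lo1 <= hi2 and lo2 <= hi1
               for (lo1, hi1), (lo2, hi2) in zip(r1, r2))

def brute_force_match(updates, subscriptions):
    """Return the set of (update_index, subscription_index) overlapping pairs."""
    return {(i, j)
            for (i, u), (j, s) in product(enumerate(updates), enumerate(subscriptions))
            if regions_overlap(u, s)}

updates = [[(0, 10), (0, 10)], [(20, 30), (5, 15)]]   # two update regions in 2D
subscriptions = [[(5, 25), (8, 12)]]                  # one subscription region
print(brute_force_match(updates, subscriptions))      # {(0, 0), (1, 0)}
```

The three ranking factors mentioned above map directly onto this baseline: smarter approaches try to reach the same set of matched pairs (distance from the optimal solution) in less time and with less memory.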
Abstract:
The astrophysical context of this thesis project concerns the comprehension of the mutual interaction between accretion onto a Super Massive Black Hole (SMBH) and the Star Formation (SF) taking place in the host galaxy. This is one of the key topics of modern extragalactic astrophysical research. Indeed, it is widely accepted that, to understand the physics of a galaxy, the contribution of a possible central AGN must be taken into account. The aim of this thesis is the study of the physical processes of the nearby Seyfert galaxy NGC 34. This source was selected because of the wide collection of multiwavelength data available in the literature; in addition, it has recently been observed with the Atacama Large Millimeter/submillimeter Array (ALMA) in Band 9. This project is divided into two main parts: first, we reduced and analyzed the ALMA data, obtaining the continuum and CO(6-5) maps; then, we looked for a coherent explanation of the physical characteristics of NGC 34. In particular, we focused on the physics of the ISM, in order to understand its properties in terms of density, chemical composition and dominant radiation field (SF or accretion). This work was done through the analysis of the spectral distribution of several CO transitions as a function of the transition number (the CO Spectral Line Energy Distribution, or CO SLED), obtained by joining the CO(6-5) line with other transitions available in the literature. More precisely, the observed CO SLED has been compared with ISM models, including Photo-Dissociation Regions (PDRs) and X-ray-Dominated Regions (XDRs), computed with the state-of-the-art photoionization code CLOUDY. Along with the observed CO SLED, we have taken into account other physical properties of NGC 34, such as the Star Formation Rate (SFR), the gas mass and the X-ray luminosity.
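As a rough sketch of the model-comparison step, the snippet below scores candidate model SLEDs against an observed CO SLED with a chi-square that allows a free overall normalization. The flux values, uncertainties and model grids are placeholders for illustration only; they are not the NGC 34 measurements or actual CLOUDY outputs.

```python
# Compare an observed CO SLED with model SLEDs (e.g. PDR vs XDR grids),
# letting each model float by a best-fit scale factor before computing chi-square.
import numpy as np

def chi_square(observed, errors, model):
    """Chi-square after scaling the model to best fit the observations."""
    scale = np.sum(observed * model / errors**2) / np.sum(model**2 / errors**2)
    return np.sum(((observed - scale * model) / errors) ** 2)

j_up = np.array([1, 2, 3, 6])               # CO transitions with measured fluxes
observed = np.array([1.0, 3.2, 5.5, 9.0])   # placeholder line fluxes
errors = 0.1 * observed                     # placeholder uncertainties

models = {"PDR": np.array([1.0, 3.0, 5.0, 7.0]),
          "XDR": np.array([1.0, 3.5, 6.0, 12.0])}

for name, sled in models.items():
    print(name, round(chi_square(observed, errors, sled), 2))
```

In a real analysis the model SLEDs would come from a grid of CLOUDY runs spanning density and radiation-field strength, and the comparison would be combined with the other constraints mentioned above (SFR, gas mass, X-ray luminosity).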