4 results for transmission of data and images
in AMS Tesi di Laurea - Alm@DL - Università di Bologna
Abstract:
Ontology design and population, core aspects of semantic technologies, have recently become fields of great interest due to the increasing need for domain-specific knowledge bases that can boost the adoption of the Semantic Web. Building such knowledge resources with state-of-the-art ontology design tools requires a great deal of human work. Producing meaningful schemas and populating them with domain-specific data is in fact a very difficult and time-consuming task, even more so when the goal is to model knowledge at web scale. The primary aim of this work is to investigate a novel and flexible methodology for automatically learning ontologies from textual data, lightening the human workload required to conceptualize domain-specific knowledge and to populate an extracted schema with real data, thus speeding up the whole ontology production process. Computational linguistics plays a fundamental role here, from automatically identifying facts in natural language and extracting frames of relations among recognized entities, to producing linked data with which to extend existing knowledge bases or create new ones. In the state of the art, automatic ontology learning systems are mainly based on plain pipelined linguistic classifiers performing tasks such as named entity recognition, entity resolution, taxonomy extraction and relation extraction [11]. These approaches present some weaknesses, especially in capturing the structures through which the meaning of complex concepts is expressed [24]. Humans, in fact, tend to organize knowledge in well-defined patterns, which include participant entities and meaningful relations linking entities with each other. In the literature, these structures have been called Semantic Frames by Fillmore [20] or, more recently, Knowledge Patterns [23]. Some NLP studies have recently shown the possibility of performing more accurate deep parsing with the ability to logically understand the structure of discourse [7].
In this work, some of these technologies have been investigated and employed to produce accurate ontology schemas. The long-term goal is to collect large amounts of semantically structured information from the web of crowds, through an automated process, in order to identify and investigate the cognitive patterns used by humans to organize their knowledge.
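The extraction step the abstract describes (identifying facts in text and emitting linked data) can be illustrated with a deliberately tiny, pattern-based sketch. This is not the thesis's actual pipeline — real systems rely on full NLP parsing — and the relation verbs, the `ex:` predicate prefix and the example sentences are all hypothetical:

```python
import re

def extract_triples(sentence):
    """Toy fact extraction: match '<Subject> <verb> <Object>' for a
    tiny set of relation verbs and emit an RDF-like triple.
    (Illustrative only; real systems use full dependency parsing
    and named entity recognition, not a regex.)"""
    relation_verbs = {"founded": "ex:founded", "acquired": "ex:acquired"}
    for verb, predicate in relation_verbs.items():
        m = re.match(r"(.+?)\s+" + verb + r"\s+(.+?)\.?$", sentence)
        if m:
            return (m.group(1).strip(), predicate, m.group(2).strip())
    return None

# Populate a tiny knowledge base from two example sentences
kb = [extract_triples(s) for s in
      ["Google acquired DeepMind.", "Larry Page founded Google."]]
```

Each triple could then be serialized as RDF and linked to an existing knowledge base, which is the population step the abstract refers to.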
Abstract:
This thesis aims to advance the state of the art of the methods currently used to analyse heterogeneity in lung lesions, complementing morphological analysis with functional (haemodynamic) analysis through the development of new dedicated features. Thanks to the collaboration between the Computer Vision Group (CVG) of the Università di Bologna and the Radiology Unit of the IRCCS-IRST in Meldola (Istituto di Ricovero e Cura a Carattere Scientifico – Istituto Scientifico Romagnolo per lo Studio e la Cura dei Tumori), it was possible to analyse an adequate number of real cases of patients with primary lung lesions, performing a heterogeneity analysis on both baseline and contrast-enhanced CT image sequences, thus enabling a comparison between morphological and functional heterogeneity. Finally, the results are discussed in light of the clinical assessments made blindly by two expert radiologists of the IRCCS-IRST.
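One common way to quantify texture heterogeneity in a lesion ROI, of the kind this abstract alludes to, is the Shannon entropy of the intensity histogram. The sketch below is a minimal illustration with made-up intensity values, not the thesis's feature set (which uses calibrated CT data and many more descriptors):

```python
import math
from collections import Counter

def intensity_entropy(pixels, bins=8, lo=0, hi=256):
    """Shannon entropy (bits) of the intensity histogram inside a
    lesion ROI. Higher entropy -> more heterogeneous texture.
    (Illustrative sketch; bin count and intensity range are
    arbitrary assumptions.)"""
    width = (hi - lo) / bins
    counts = Counter(min(int((p - lo) / width), bins - 1) for p in pixels)
    n = len(pixels)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

homogeneous = [10] * 64             # all pixels identical -> entropy 0
spread = list(range(0, 256, 4))     # intensities spread over all 8 bins
```

For the uniform ROI the entropy is 0 bits; for intensities spread evenly over all 8 bins it reaches the maximum of 3 bits, matching the intuition that a heterogeneous lesion has a "flatter" histogram.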
Abstract:
The present work studies a km-scale data assimilation scheme based on a LETKF developed for the COSMO model. The aim is to evaluate the impact of the assimilation of two different types of data: temperature, humidity, pressure and wind data from conventional networks (SYNOP, TEMP, AIREP reports), and 3D reflectivity from radar volumes. A 3-hourly continuous assimilation cycle has been implemented over an Italian domain, based on a 20-member ensemble, with boundary conditions provided by ECMWF ENS. Three different experiments have been run to evaluate the performance of the assimilation over one week in October 2014, during which the Genova and Parma floods took place: a control run of the data assimilation cycle assimilating data from conventional networks only; a second run in which the SPPT scheme is activated in the COSMO model; and a third run in which reflectivity volumes from meteorological radar are also assimilated. Objective evaluation of the experiments has been carried out both on case studies and on the entire week: checks of the analysis increments; computation of the Desroziers statistics for SYNOP, TEMP, AIREP and RADAR over the Italian domain; verification of the analyses against data not assimilated (temperature at the lowest model level objectively verified against SYNOP data); and objective verification of the deterministic forecasts initialised with the KENDA analyses for each of the three experiments.
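The Desroziers statistics mentioned above exploit the consistency relation E[d_oa · d_ob] ≈ σ_o², where d_ob = y − Hx_b are background innovations and d_oa = y − Hx_a are analysis residuals: their sample covariance estimates the observation-error variance actually consistent with the assimilation. A scalar sketch (synthetic data, not the thesis's KENDA diagnostics) can verify this with the scalar Kalman relation d_oa = (1 − K) d_ob:

```python
import random

def desroziers_sigma2_o(d_ob, d_oa):
    """Desroziers diagnostic: the sample mean of the product of
    analysis residuals and background innovations estimates the
    observation-error variance sigma_o^2.
    (Scalar sketch; operational checks are done per obs type.)"""
    return sum(a * b for a, b in zip(d_oa, d_ob)) / len(d_ob)

# Synthetic check: with Kalman gain K = sb2/(sb2+so2), analysis
# residuals are (1-K) times the innovations, and innovations have
# variance sb2+so2, so the diagnostic should recover so2.
random.seed(0)
sb2, so2 = 4.0, 1.0                      # assumed error variances
K = sb2 / (sb2 + so2)
d_ob = [random.gauss(0.0, (sb2 + so2) ** 0.5) for _ in range(20000)]
d_oa = [(1.0 - K) * d for d in d_ob]
est = desroziers_sigma2_o(d_ob, d_oa)    # should be close to so2
```

In an operational setting the same product is averaged per observation type (SYNOP, TEMP, AIREP, RADAR) and compared with the σ_o² assigned in the assimilation, flagging over- or under-estimated observation errors.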
Abstract:
Over the past twenty years, new technologies have required an increasing use of mathematical models to better understand structural behaviour, the finite element method being the most widely used. However, the reliability of this method must be verified for each new application. Since it is not possible to model reality completely, various hypotheses must be made: these are the main problems of FE modelling. The following work deals with this problem and tries to identify some of the unknown main parameters of a structure. The research focuses on a particular case study, but the same concepts can be applied to other objects of research. The main purpose of this work is the identification of the unknown boundary conditions of a bridge pier, using data acquired experimentally through field tests and a FEM model updating process. This work does not claim to be new or innovative: much work has been done on this problem over the past years, and many solutions have been published. This thesis simply reworks some of the main aspects of the structural optimization process, using a real structure as a fitting model.
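The core idea of model updating — adjust an unknown model parameter until the computed modal properties match the measured ones — can be sketched with a deliberately simplified model. Here the pier is idealised as a single-degree-of-freedom oscillator whose pier stiffness acts in series with an unknown foundation spring; all numbers are invented for illustration and bear no relation to the thesis's FE model:

```python
import math

def first_frequency(k_found, k_pier=2.0e8, mass=5.0e5):
    """First natural frequency [Hz] of a pier idealised as a SDOF
    oscillator: pier stiffness in series with an unknown foundation
    spring k_found [N/m]. (Toy model with invented values.)"""
    k_eff = 1.0 / (1.0 / k_pier + 1.0 / k_found)
    return math.sqrt(k_eff / mass) / (2.0 * math.pi)

def update_boundary_stiffness(f_measured, lo=1.0e6, hi=1.0e12, tol=1e-6):
    """Model updating as a 1-D inverse problem: bisect on k_found
    until the model frequency matches the field-test frequency
    (frequency grows monotonically with k_found)."""
    while hi - lo > tol * hi:
        mid = 0.5 * (lo + hi)
        if first_frequency(mid) < f_measured:
            lo = mid   # model too flexible -> stiffen the spring
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Pretend a field test measured the frequency of a pier whose true
# foundation stiffness is 1e9 N/m, then recover that stiffness.
f_meas = first_frequency(1.0e9)
k_est = update_boundary_stiffness(f_meas)
```

A real updating process minimises the mismatch over several modes and several parameters at once with a generic optimiser, but the structure of the problem — a forward FE model inside an inverse-identification loop — is the same.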