122 results for Open Data, Bologna


Relevance:

100.00%

Publisher:

Abstract:

This dissertation proposes an analysis of the governance of European scientific research, focusing on the emergence of the Open Science paradigm: a new way of doing science, oriented towards the openness of every phase of the scientific research process and able to take full advantage of digital ICTs. The emergence of this paradigm is relatively recent, but in recent years it has become increasingly relevant. The European institutions have expressed a clear intention to embrace the Open Science paradigm (consider, e.g., the European Open Science Cloud, EOSC, or the establishment of the Horizon Europe programme). This dissertation provides a conceptual framework for the multiple interventions of the European institutions in the field of Open Science, addressing the major legal challenges of its implementation. The study investigates the notion of Open Science, proposing a definition that takes into account all its dimensions in relation to the human and fundamental rights framework in which Open Science is grounded. The inquiry addresses the legal challenges related to the openness of research data, in light of the European Open Data framework and the impact of the GDPR on the context of Open Science. The last part of the study is devoted to the infrastructural dimension of the Open Science paradigm, exploring e-infrastructures. The focus is on a specific type of computational infrastructure: the High Performance Computing (HPC) facility. The adoption of HPC for research is analysed from the European perspective, investigating the EuroHPC project, and from the local perspective, proposing the case study of the HPC facility of the University of Luxembourg, the ULHPC. This dissertation intends to underline the relevance of a legal coordination approach, between all actors and phases of the process, in order to develop and implement the Open Science paradigm in adherence to the underlying human and fundamental rights.

Relevance:

100.00%

Publisher:

Abstract:

In recent years, IoT technology has radically transformed many crucial industrial and service sectors, such as healthcare. The multi-faceted heterogeneity of the devices and of the collected information provides important opportunities to develop innovative systems and services. However, the ubiquitous presence of data silos and the poor semantic interoperability in the IoT landscape constitute a significant obstacle to the pursuit of this goal. Moreover, deriving actionable knowledge from the collected data requires IoT information sources to be analysed using appropriate artificial intelligence techniques, such as automated reasoning. In this thesis work, Semantic Web technologies have been investigated as an approach to address both the data integration and the reasoning aspects of modern IoT systems. In particular, the contributions presented in this thesis are the following: (1) the IoT Fitness Ontology, an OWL ontology developed to overcome the issue of data silos and enable semantic interoperability in the IoT fitness domain; (2) a Linked Open Data web portal for collecting and sharing IoT health datasets with the research community; (3) a novel methodology for embedding knowledge in rule-defined IoT smart-home scenarios; and (4) a knowledge-based IoT home automation system that supports a seamless integration of heterogeneous devices and data sources.
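As a minimal illustration of the semantic-interoperability idea behind contribution (1), the sketch below builds a tiny RDF graph with rdflib and queries it with SPARQL. The namespace, class and property names are invented for the example and are not the actual terms of the IoT Fitness Ontology.

```python
# Minimal sketch: annotating a heart-rate observation with a hypothetical
# IoT fitness vocabulary and querying it with SPARQL via rdflib.
from rdflib import Graph, Namespace, Literal, RDF, XSD

IOTF = Namespace("http://example.org/iot-fitness#")   # hypothetical namespace

g = Graph()
g.bind("iotf", IOTF)

# One observation produced by a wearable device
g.add((IOTF.obs1, RDF.type, IOTF.HeartRateObservation))
g.add((IOTF.obs1, IOTF.producedBy, IOTF.wristband42))
g.add((IOTF.obs1, IOTF.hasValue, Literal(128, datatype=XSD.integer)))

# Devices from different vendors become queryable through the shared vocabulary
query = """
PREFIX iotf: <http://example.org/iot-fitness#>
SELECT ?obs ?bpm WHERE {
    ?obs a iotf:HeartRateObservation ;
         iotf:hasValue ?bpm .
    FILTER (?bpm > 120)
}
"""
for row in g.query(query):
    print(row.obs, row.bpm)
```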

Relevance:

90.00%

Publisher:

Abstract:

The main objective of this research is to reconstruct the state of the art of eHealth and of the Electronic Health Record (Fascicolo Sanitario Elettronico), with particular attention to the issues of personal data protection and interoperability. To this end, binding and non-binding documents of the European Union have been examined, together with selected European and national projects (such as "Smart Open Services for European Patients" (EU); "Elektronische Gesundheitsakte" (Austria); "MedCom" (Denmark); and, for Italy, "Infrastruttura tecnologica del Fascicolo Sanitario Elettronico", "OpenInFSE: Realizzazione di un'infrastruttura operativa a supporto dell'interoperabilità delle soluzioni territoriali di fascicolo sanitario elettronico nel contesto del sistema pubblico di connettività", "Evoluzione e interoperabilità tecnologica del Fascicolo Sanitario Elettronico", and "IPSE - Sperimentazione di un sistema per l'interoperabilità europea e nazionale delle soluzioni di Fascicolo Sanitario Elettronico: componenti Patient Summary e ePrescription"). The legal and technical analyses show the urgent need to define models that encourage the use of health data and implement effective strategies for the secondary use of digital health data, such as Open Data and Linked Open Data. Legal and technological harmonisation is regarded as a strategic element to reduce the conflicts on personal data protection existing among Member States, as well as the lack of interoperability between European information systems on Electronic Health Records. To this end, three guidelines have been identified: (1) harmonisation of legislation, (2) harmonisation of rules, (3) harmonisation of information system design. The Privacy by Design principles ("proactive" and "win-win"), as well as Semantic Web standards, are considered key enablers of this change.

Relevance:

90.00%

Publisher:

Abstract:

The Belt and Road Initiative (BRI) is a project launched by the Chinese Government whose main goal is to connect more than 65 countries in Asia, Europe, Africa and Oceania by developing infrastructure and facilities. To support the prevention or mitigation of landslide hazards, which may affect the mainland infrastructure of the BRI, a landslide susceptibility analysis of the countries involved has been carried out. Due to the large study area, the analysis follows a multi-scale approach, which consists of mapping susceptibility first at continental scale and then at national scale. The study area selected for the continental assessment is South Asia, where a pixel-based landslide susceptibility map has been produced using the Weight of Evidence method and validated with Receiver Operating Characteristic (ROC) curves. We then selected the regions of western Tajikistan and north-east India to be investigated at national scale. Data scarcity is a common condition for many countries involved in the Initiative. Therefore, in addition to the landslide susceptibility assessment of western Tajikistan, which has been conducted using a Generalized Additive Model and validated with ROC curves, we have examined, in the same study area, the effect of an incomplete landslide dataset on the prediction capacity of statistical models. The entire PhD research activity has been conducted using only open data and open-source software. In this context, to support the analyses, an open-source plugin for QGIS has been implemented. The SZ-tool allows the user to carry out susceptibility assessments from data preprocessing and susceptibility mapping to the final classification. All the output data of the analyses conducted are freely available and downloadable. This text describes the research activity of the last three years; each chapter reports the text of an article published in an international scientific journal during the PhD.
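The following toy sketch illustrates the Weight of Evidence principle used for the continental-scale susceptibility map: class weights are derived from the co-occurrence of landslide pixels and the classes of a categorical predictor, and the resulting scores are checked with a ROC metric. The arrays, class counts and probabilities are illustrative assumptions, not the thesis data or the SZ-tool code.

```python
# Toy Weight of Evidence for pixel-based landslide susceptibility.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
lithology = rng.integers(0, 5, size=10_000)              # 5 hypothetical classes
landslide = rng.binomial(1, 0.05 + 0.06 * lithology)     # 1 = landslide pixel

def woe_contrast(evidence_class, presence):
    """Return W+ - W- (the 'contrast') for every class of the evidence layer."""
    present = presence.astype(bool)
    out = {}
    for c in np.unique(evidence_class):
        b = evidence_class == c
        p_b_l  = (b &  present).sum() / present.sum()     # P(class | landslide)
        p_b_nl = (b & ~present).sum() / (~present).sum()  # P(class | no landslide)
        w_plus  = np.log(p_b_l / p_b_nl)
        w_minus = np.log((1 - p_b_l) / (1 - p_b_nl))
        out[int(c)] = w_plus - w_minus
    return out

contrast = woe_contrast(lithology, landslide)
score = np.vectorize(contrast.get)(lithology)            # susceptibility score per pixel
print(contrast)
print("ROC AUC:", round(roc_auc_score(landslide, score), 3))
```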

Relevance:

90.00%

Publisher:

Abstract:

My doctoral research is about the modelling of symbolism in the cultural heritage domain, and about connecting artworks based on their symbolism through knowledge extraction and representation techniques. In particular, I participated in the design of two ontologies: one models the relationships between a symbol, its symbolic meaning, and the cultural context in which the symbol conveys that meaning; the second models artistic interpretations of a cultural heritage object from an iconographic and iconological (thus also symbolic) perspective. I also converted several sources of unstructured data (a dictionary of symbols and an encyclopaedia of symbolism) and semi-structured data (DBpedia and WordNet) to create HyperReal, the first knowledge graph dedicated to conventional cultural symbolism. By making use of HyperReal's content, I showed how linked open data about cultural symbolism can be used to initiate a series of quantitative studies that analyse (i) similarities between cultural contexts based on their symbologies, (ii) broad symbolic associations, and (iii) specific case studies of symbolism, such as the relationship between symbols, their colours, and their symbolic meanings. Moreover, I developed a system that can infer symbolic, cultural context-dependent interpretations from artworks according to what they depict, envisioning potential use cases for museum curation. I then re-engineered the iconographic and iconological statements of Wikidata, a widely used general-domain knowledge base, creating ICONdata, an iconographic and iconological knowledge graph. ICONdata was then enriched with automatic symbolic interpretations. Subsequently, I demonstrated the significance of enhancing artwork information through alignment with linked open data related to symbolism, which resulted in the discovery of novel connections between artworks. Finally, I contributed to the creation of a software application that leverages the established connections, allowing users to investigate the symbolic expression of a concept across different cultural contexts through the generation of a three-dimensional exhibition of artefacts symbolising the chosen concept.
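As an illustration of the kind of linked-open-data query this work builds on, the sketch below retrieves from Wikidata a few paintings that depict a given subject (property P180, "depicts") using SPARQLWrapper. It is not HyperReal or ICONdata themselves, only an example of the underlying query pattern; the chosen subject (tree, Q10884) is arbitrary.

```python
# Query Wikidata's public SPARQL endpoint for paintings depicting a subject.
from SPARQLWrapper import SPARQLWrapper, JSON

endpoint = SPARQLWrapper("https://query.wikidata.org/sparql",
                         agent="symbolism-sketch/0.1 (illustrative example)")
endpoint.setQuery("""
SELECT ?artwork ?artworkLabel WHERE {
  ?artwork wdt:P31  wd:Q3305213 ;   # instance of: painting
           wdt:P180 wd:Q10884 .     # depicts: tree
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
LIMIT 10
""")
endpoint.setReturnFormat(JSON)

for result in endpoint.query().convert()["results"]["bindings"]:
    print(result["artworkLabel"]["value"])
```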

Relevance:

50.00%

Publisher:

Abstract:

In the past decade, the advent of efficient genome sequencing tools and high-throughput experimental biotechnology has led to enormous progress in the life sciences. Among the most important innovations is microarray technology. It allows the expression of thousands of genes to be quantified simultaneously by measuring the hybridization from a tissue of interest to probes on a small glass or plastic slide. The characteristics of these data include a fair amount of random noise, a predictor dimension in the thousands, and a sample size in the dozens. One of the most exciting areas to which microarray technology has been applied is the challenge of deciphering complex diseases such as cancer. In these studies, samples are taken from two or more groups of individuals with heterogeneous phenotypes, pathologies, or clinical outcomes. These samples are hybridized to microarrays in an effort to find a small number of genes that are strongly correlated with the groups of individuals. Even though methods to analyse these data are today well developed and close to reaching a standard organization (through the effort of international projects such as the Microarray Gene Expression Data (MGED) Society [1]), it is not infrequent to encounter a clinician's question for which no compelling statistical method is available. The contribution of this dissertation to deciphering disease is the development of new approaches aimed at handling open problems posed by clinicians in specific experimental designs. In Chapter 1, starting from a necessary biological introduction, we review microarray technologies and all the important steps of an experiment, from the production of the array to quality controls, ending with the preprocessing steps used in the data analyses in the rest of the dissertation. In Chapter 2, a critical review of standard analysis methods is provided, stressing their main open problems. In Chapter 3, a method is introduced to address the issue of unbalanced designs in microarray experiments. In microarray experiments, experimental design is a crucial starting point for obtaining reasonable results. In a two-class problem, an equal or similar number of samples should be collected for the two classes. However, in some cases, e.g. rare pathologies, the approach to be taken is less evident. We propose to address this issue by applying a modified version of SAM [2]. MultiSAM consists of a reiterated application of a SAM analysis, comparing the less populated class (LPC) with 1,000 random samplings of the same size from the more populated class (MPC). A list of the differentially expressed genes is generated for each SAM application. After 1,000 reiterations, each probe is given a "score" ranging from 0 to 1,000 based on its recurrence in the 1,000 lists as differentially expressed. The performance of MultiSAM was compared to the performance of SAM and LIMMA [3] over two data sets simulated via beta and exponential distributions. The results of all three algorithms over low-noise data sets seem acceptable. However, on a real unbalanced two-channel data set regarding Chronic Lymphocytic Leukemia, LIMMA finds no significant probe, SAM finds 23 significantly changed probes but cannot separate the two classes, while MultiSAM finds 122 probes with score >300 and separates the data into two clusters by hierarchical clustering. We also report extra-assay validation in terms of differentially expressed genes. Although standard algorithms perform well over low-noise simulated data sets, MultiSAM seems to be the only one able to reveal subtle differences in gene expression profiles on real unbalanced data. In Chapter 4, a method to address similarity evaluation in a three-class problem by means of the Relevance Vector Machine [4] is described. In fact, looking at microarray data in a prognostic and diagnostic clinical framework, not only differences can play a crucial role: in some cases similarities can give useful and, sometimes, even more important information. The goal, given three classes, could be to establish, with a certain level of confidence, whether the third one is similar to the first or to the second. In this work we show that the Relevance Vector Machine (RVM) [2] could be a possible solution to the limitations of standard supervised classification. In fact, RVM offers many advantages compared, for example, with its well-known precursor, the Support Vector Machine (SVM) [3]. Among these advantages, the estimate of the posterior probability of class membership represents a key feature for addressing the similarity issue. This is a highly important, but often overlooked, option of any practical pattern recognition system. We focused on a three-class tumor-grade problem, with 67 samples of grade 1 (G1), 54 samples of grade 3 (G3) and 100 samples of grade 2 (G2). The goal is to find a model able to separate G1 from G3, and then evaluate the third class G2 as a test set to obtain, for each G2 sample, the probability of belonging to class G1 or class G3. The analysis showed that breast cancer samples of grade 2 have a molecular profile more similar to breast cancer samples of grade 1. In the literature this result had been conjectured, but no measure of significance had previously been given.
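A compact sketch of the MultiSAM resampling scheme described above is given below, with an ordinary two-sample t-test standing in for SAM; the data are simulated and the significance threshold is an arbitrary choice for the example, not the thesis settings.

```python
# MultiSAM-style resampling: compare the small class against repeated
# balanced subsamples of the large class and score probes by recurrence.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_probes, n_lpc, n_mpc, n_iter = 500, 8, 60, 1000

expr_lpc = rng.normal(size=(n_probes, n_lpc))      # less populated class
expr_mpc = rng.normal(size=(n_probes, n_mpc))      # more populated class
expr_mpc[:20] += 1.0                               # 20 truly changed probes

scores = np.zeros(n_probes, dtype=int)
for _ in range(n_iter):
    cols = rng.choice(n_mpc, size=n_lpc, replace=False)    # balanced subsample
    t, p = stats.ttest_ind(expr_lpc, expr_mpc[:, cols], axis=1)
    scores += (p < 0.01).astype(int)   # probe recurs as differentially expressed

# Probes with a high score (0..n_iter) are called differentially expressed
print("probes with score > 300:", int((scores > 300).sum()))
```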

Relevance:

50.00%

Publisher:

Abstract:

To understand a city and its urban structure it is necessary to study its history. This is feasible through GIS (Geographical Information Systems) and their by-products on the web. Starting from a cartographic view, they allow an initial understanding of, and a comparison between, present and past data, together with easy and intuitive access to database information. The research led to the creation of a Historical GIS for the city of Bologna, based on varied data such as historical maps and vector and alphanumeric historical data. After building the GIS, we addressed the spreading and sharing of the collected data on the Web, studying two families of solutions available on the market: Web Mapping and WebGIS. In this study we discuss the stages of the work, beginning with the development of the Historical GIS of Bologna, which led to the making of an open-source WebGIS (MapServer and Chameleon) and of Web Mapping services (Google Earth, Google Maps and OpenLayers), as sketched below.
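Purely as an illustration of the web-mapping idea, the sketch below overlays a hypothetical georeferenced historical map on a modern web map with the folium Python library; the thesis itself relies on MapServer/Chameleon and Google Earth/Maps/OpenLayers, and the file name and bounding box here are assumptions.

```python
# Overlay a scanned historical map on a modern web map (illustrative only).
import folium

# Modern base map centred on Bologna
m = folium.Map(location=[44.4949, 11.3426], zoom_start=14)

# Hypothetical scanned historical map, georeferenced by its corner coordinates
folium.raster_layers.ImageOverlay(
    image="bologna_historical_scan.png",
    bounds=[[44.48, 11.32], [44.51, 11.37]],   # [[south, west], [north, east]]
    opacity=0.6,
    name="Historical map",
).add_to(m)

folium.LayerControl().add_to(m)   # toggle between present and past layers
m.save("bologna_historical_webmap.html")
```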

Relevance:

40.00%

Publisher:

Abstract:

Subduction zones, where friction between the oceanic and continental plates causes strong seismicity, are the most favourable settings for tsunamigenic earthquakes. The topics and methodologies discussed in this thesis are focused on understanding the rupture process of the seismic sources of great earthquakes that generate tsunamis. Tsunamigenesis is controlled by several kinematic characteristics of the parent earthquake, such as the focal mechanism, the depth of the rupture and the slip distribution along the fault area, as well as by the mechanical properties of the source zone. Each of these factors plays a fundamental role in tsunami generation. Therefore, inferring the source parameters of tsunamigenic earthquakes is crucial to understand the generation of the consequent tsunami and thus to mitigate the risk along the coasts. The typical way to gather information on the source process is the inversion of the available geophysical data. Tsunami data, moreover, are useful to constrain the portion of the fault area that extends offshore, generally close to the trench, which other kinds of data are not able to constrain. In this thesis I discuss the rupture process of some recent tsunamigenic events, as inferred by means of an inverse method. I first present the 2003 Tokachi-Oki (Japan) earthquake (Mw 8.1). In this study the slip distribution on the fault has been inferred by inverting tsunami waveform, GPS, and bottom-pressure data. The joint inversion of tsunami and geodetic data has provided a much better constraint on the slip distribution than the separate inversions of the single datasets. We have then studied the earthquake that occurred in 2007 in southern Sumatra (Mw 8.4). By inverting several tsunami waveforms, both in the near and in the far field, we have determined the slip distribution and the mean rupture velocity along the causative fault. Since the largest patch of slip was concentrated on the deepest part of the fault, this is the likely reason for the small tsunami waves that followed the earthquake, pointing out how strongly the depth of the rupture controls tsunamigenesis. Finally, we have presented a new rupture model for the great 2004 Sumatra earthquake (Mw 9.2). We have performed a joint inversion of tsunami waveform, GPS and satellite altimetry data to infer the slip distribution, the slip direction, and the rupture velocity on the fault. Furthermore, in this work we have presented a novel method to estimate, in a self-consistent way, the average rigidity of the source zone. The estimation of the source-zone rigidity is important since it may play a significant role in tsunami generation; particularly for slow earthquakes, a low rigidity value is sometimes necessary to explain how an earthquake with a relatively low seismic moment may generate significant tsunamis. This latter point may be relevant for explaining the mechanics of tsunami earthquakes, one of the open issues in present-day seismology. The investigation of these tsunamigenic earthquakes has underlined the importance of using a joint inversion of different geophysical data to determine the rupture characteristics. The results shown here have important implications for the implementation of new tsunami warning systems, particularly in the near field, for the improvement of the current ones, and for the planning of inundation maps for tsunami-hazard assessment along coastal areas.
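The sketch below gives a schematic, toy version of the joint-inversion idea: synthetic tsunami and geodetic data are stacked with relative weights and inverted for fault slip with regularized least squares. The Green's functions are random stand-ins, so this is not the inversion method actually used in the thesis, only the general pattern.

```python
# Toy joint linear inversion for fault slip with Tikhonov regularization.
import numpy as np

rng = np.random.default_rng(2)
n_patches = 20                                   # subfaults along the rupture
slip_true = np.exp(-((np.arange(n_patches) - 12) ** 2) / 8.0)   # one slip patch

G_tsu = rng.normal(size=(120, n_patches))        # tsunami-waveform Green's functions
G_gps = rng.normal(size=(40, n_patches))         # geodetic Green's functions
d_tsu = G_tsu @ slip_true + rng.normal(scale=0.05, size=120)
d_gps = G_gps @ slip_true + rng.normal(scale=0.05, size=40)

w_tsu, w_gps, mu = 1.0, 2.0, 0.5                 # dataset weights, regularization
L = np.eye(n_patches)                            # damping (a Laplacian would smooth)

G = np.vstack([w_tsu * G_tsu, w_gps * G_gps, mu * L])
d = np.concatenate([w_tsu * d_tsu, w_gps * d_gps, np.zeros(n_patches)])

slip_est, *_ = np.linalg.lstsq(G, d, rcond=None)
print("max slip, true vs estimated:", slip_true.max(), round(slip_est.max(), 3))
```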

Relevance:

40.00%

Publisher:

Abstract:

The contemporary media landscape is characterized by the emergence of hybrid forms of digital communication that contribute to the ongoing redefinition of our societies' cultural context. An incontrovertible consequence of this phenomenon is the new public dimension that characterizes the transmission of historical knowledge in the twenty-first century. Awareness of this new epistemic scenario has led us to reflect on the following methodological questions: what strategies should be devised to establish a communication system, based on new technology, that is scientifically rigorous but at the same time engaging for museum visitors and Internet users? How can a comparative analysis of ancient documentary sources form a solid base of information for the virtual reconstruction of thirteenth-century Bologna in the Metaverse? What benefits can the phenomenon of cross-mediality bring to virtual heritage? The implementation of a new version of the Nu.M.E. project made it possible to answer many of these questions. The investigation carried out between 2008 and 2010 has shown that real-time 3D graphics and collaborative virtual environments can indeed be feasible tools for representing the medieval urban landscape philologically and for communicating properly validated historical data to the general public. This research is focused on the study and implementation of a pipeline that permits mass communication of historical information about an area of vital importance in late medieval Bologna: Piazza di Porta Ravegnana. The originality of the developed project is not limited to the methodological dimension of historical research: the adopted technological perspective is an excellent example of the innovation that digital technologies can bring to cultural heritage. The main result of this research is the creation of Nu.ME 2010, a cross-media system of real-time 3D visualization based on some of the most advanced free and open-source technologies available today.

Relevance:

40.00%

Publisher:

Abstract:

This thesis is a collection of essays related to the topic of innovation in the service sector. This structure is functional to the purpose of singling out some of the relevant issues and trying to tackle them, first revising the state of the literature and then proposing a way forward. Three relevant issues have therefore been selected: (i) the definition of innovation in the service sector and the connected question of the measurement of innovation; (ii) the issue of productivity in services; (iii) the classification of innovative firms in the service sector. Facing the first issue, Chapter II shows how the initial breadth of the original Schumpeterian definition of innovation has been narrowed and then passed from the manufacturing sector to the service sector in a reduced, technological form. Chapter III tackles the issue of productivity in services, discussing the difficulties of measuring productivity in a context where the output is often immaterial. We reconstruct the dispute on Baumol's cost disease argument and propose two different ways forward for research on productivity in services: redefining the output along the lines of a characteristics approach, and redefining the inputs, particularly analysing which kinds of input are worth saving. Chapter IV derives an integrated taxonomy of innovative service and manufacturing firms, using data from the 2008 CIS survey for Italy. This taxonomy is based on the enlarged definition of "innovative firm" deriving from the Schumpeterian definition of innovation and classifies firms using cluster analysis techniques. The result is a four-cluster solution, in which firms are differentiated by the breadth of the innovation activities in which they are involved. Chapter V reports the main conclusions of each of the previous chapters and the points worthy of further research.
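The sketch below illustrates the clustering step of Chapter IV in the simplest possible form: firms described by a few innovation-activity indicators are standardized and grouped into four clusters. The feature matrix is a random stand-in, since the CIS microdata and the actual set of indicators are not reproduced here.

```python
# Toy cluster analysis of "innovative firms" described by activity indicators.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
# columns: hypothetical intensities of product/process/organizational/marketing innovation
X = rng.random(size=(1000, 4))

X_std = StandardScaler().fit_transform(X)
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X_std)

# breadth of innovation activities per cluster (cluster means of each indicator)
for k in range(4):
    print(k, X[labels == k].mean(axis=0).round(2))
```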

Relevance:

40.00%

Publisher:

Abstract:

This work is part of a project promoted by the Emilia-Romagna region that aims at encouraging research activities in order to support the innovation strategies of the regional economic system through the exploitation of new data sources. To this end, a database containing administrative data is provided by the Municipality of Bologna, obtained by linking data from the Register Office of the Municipality with fiscal data coming from the tax returns submitted to the Revenue Agency and released by the Ministry of Economy and Finance for the period 2002-2017. The main purpose of the project is the analysis of the medium-term financial and distributional trends of the income of the citizens residing in the Municipality of Bologna. Exploiting this innovative source of data allows us to analyse the dynamics of income at the municipal level, overcoming the lack of information in official survey-based statistics. We investigate these trends by building inequality indicators and by examining the persistence of in-work poverty. Our results represent an important informative element to improve the effectiveness and equity of welfare policies at the local level, and to guide the distribution of economic and social support and urban redevelopment interventions across different areas of the Municipality.
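As an example of the inequality indicators mentioned above, the sketch below computes a Gini coefficient from a simulated income vector; the administrative microdata themselves are obviously not reproduced, and the lognormal parameters are arbitrary.

```python
# Gini coefficient of an income distribution (synthetic incomes).
import numpy as np

def gini(income):
    """Gini coefficient via the rank-weighted closed form on sorted incomes."""
    x = np.sort(np.asarray(income, dtype=float))
    n = x.size
    return (2 * np.arange(1, n + 1) - n - 1) @ x / (n * x.sum())

rng = np.random.default_rng(4)
incomes = rng.lognormal(mean=10, sigma=0.6, size=50_000)   # synthetic incomes
print(round(gini(incomes), 3))
```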

Relevance:

40.00%

Publisher:

Abstract:

To understand more deeply the problem that companies face in training people able to manage innovation processes, and in particular Open Innovation (OI) processes, a multiple case study was carried out in 2021 on a non-formal OI education programme organised by the consortium company ART-ER and addressed to PhD students of the universities of Emilia-Romagna. In the second phase of this training programme, four working tables were set up to respond to the OI challenges launched by the companies. Each working table involved three or four PhD students, two company representatives, one consultant and one ART-ER staff member. The overall sample consisted of 14 PhD students, 8 representatives of four companies, 4 members of a consulting firm and 4 staff members of ART-ER. The research was guided by the following question: does the interaction between the participants of each working table, considered as a single case, configure itself as a Community of Practice able to foster the development of individual learning that is functional to managing the OI processes activated in the companies? Data were collected through desk-based documentary research, focus groups, semi-structured interviews and an online semi-structured questionnaire. Data analysis was carried out through a multi-phase qualitative content analysis with the support of the MAXQDA software. The results show that in three out of four cases the working tables configured themselves as Communities of Practice. In these three tables, moreover, the development of some areas of competence functional to the management of OI processes emerged. In the conclusion, some proposals for the redesign of future editions of the training programme are presented.

Relevance:

40.00%

Publisher:

Abstract:

The aim of this study is to evaluate whether spinal cord ischemia (SCI), and especially its late presentation, can be correlated with the results of intraoperative evoked potential monitoring (IOM). Methods. This is a physician-initiated, retrospective, single-center, non-randomized study. Data from all patients undergoing thoracoabdominal aortic aneurysm surgical repair (TAAA SR) with IOM between January 2016 and March 2020 were collected and analyzed. Results. During the study period, 261 patients underwent TAAA SR with MEP/SSEP monitoring [190 males, 73%; median age 65 (57-71)]. Thirty-seven patients suffered from SCI, for an overall rate of 14% (permanent 9%). When stratifying patients according to SCI onset, 18 patients presented with early SCI (11 permanent) and 19 with late SCI (<24h) (11 permanent). Of the 261 patients undergoing TAAA SR with IOM, 15 were excluded due to changes in the upper-extremity motor evoked potentials. For the remaining 246, the association between SCI and IOM was investigated: only irreversible IOM loss without peripheral changes was found to be a risk factor for late-onset SCI (p=.006). Furthermore, given that no statistical difference was found between the two groups when no IOM changes were recorded (p=.679), this situation cannot reliably rule out SCI in our cohort. Independent risk factors for late spinal cord ischemia onset found at multivariate analysis were smoking history (p=.008), BMI>28 (p=.048) and TAAA extent II (p=.009). Irreversible MEP change without peripheral changes showed a trend toward significance (p=.052). Conclusions. Intraoperative evoked potential monitoring is an important adjunct during open thoracoabdominal aortic repair to predict and possibly prevent spinal cord ischemia. Irreversible IOM loss without peripheral changes was predictive of late SCI; therefore more attention should be paid to the postoperative management of this subgroup of patients.
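The abstract reports a multivariate analysis of risk factors without naming the model; a multivariable logistic regression is a common choice for this kind of analysis, and the sketch below shows what such a model looks like on simulated data. It is an assumption for illustration, not the study's actual code or cohort.

```python
# Illustrative multivariable logistic regression for binary risk factors
# of late SCI, fitted on simulated data (not the study cohort).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 246
df = pd.DataFrame({
    "smoking":   rng.integers(0, 2, n),
    "bmi_gt28":  rng.integers(0, 2, n),
    "extent_II": rng.integers(0, 2, n),
})
# simulate an outcome loosely driven by the three covariates
lin = 0.9 * df.smoking + 0.7 * df.bmi_gt28 + 1.0 * df.extent_II - 3.0
df["late_sci"] = rng.binomial(1, 1 / (1 + np.exp(-lin)))

X = sm.add_constant(df[["smoking", "bmi_gt28", "extent_II"]])
model = sm.Logit(df["late_sci"], X).fit(disp=0)
print(model.params.round(2))    # estimated coefficients
print(model.pvalues.round(3))   # p-values per covariate
```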

Relevance:

40.00%

Publisher:

Abstract:

The present dissertation shows how recent statistical analysis tools and open datasets can be exploited to improve modelling accuracy in two distinct yet interconnected domains of flood hazard (FH) assessment. In the first Part, unsupervised artificial neural networks are employed as regional models for sub-daily rainfall extremes. The models aim to learn a robust relation for estimating locally the parameters of the Gumbel distribution of extreme rainfall depths for any sub-daily duration (1-24h). The predictions depend on twenty morphoclimatic descriptors. A large study area in north-central Italy is adopted, where 2238 annual maximum series are available, and validation is performed over an independent set of 100 gauges. Our results show that multivariate ANNs may remarkably improve the estimation of percentiles relative to the benchmark approach from the literature, in which the Gumbel parameters depend on mean annual precipitation. Finally, we show that the very nature of the proposed ANN models makes them suitable for interpolating the predicted sub-daily rainfall quantiles across space and across time-aggregation intervals. In the second Part, decision trees are used to combine a selected blend of input geomorphic descriptors for predicting FH. Relative to existing DEM-based approaches, this method is innovative, as it relies on the combination of three characteristics: (1) simple multivariate models, (2) a set of exclusively DEM-based descriptors as input, and (3) an existing FH map as reference information. The methods are applied first to northern Italy, represented with the MERIT DEM (∼90 m resolution), and then to the whole of Italy, represented with the EU-DEM (25 m resolution). The results show that multivariate approaches may (a) significantly enhance the delineation of flood-prone areas relative to a selected univariate approach, (b) provide accurate predictions of expected inundation depths, (c) produce encouraging results in extrapolation, (d) complete the information of imperfect reference maps, and (e) conveniently convert binary maps into a continuous representation of FH.
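As a sketch of the distributional building block of the first Part, the code below fits a Gumbel distribution to a synthetic series of annual maximum rainfall depths and derives a 100-year quantile; the regional ANN of the thesis predicts these parameters from morphoclimatic descriptors rather than fitting them locally, and the numbers here are arbitrary.

```python
# Fit a Gumbel distribution to annual maxima and estimate a return-period quantile.
from scipy import stats

# synthetic series of 60 annual maximum 1-hour rainfall depths (mm)
annual_max_1h = stats.gumbel_r.rvs(loc=25.0, scale=8.0, size=60, random_state=42)

loc, scale = stats.gumbel_r.fit(annual_max_1h)           # local Gumbel parameters
T = 100                                                   # return period (years)
q100 = stats.gumbel_r.ppf(1 - 1 / T, loc=loc, scale=scale)
print(f"100-year 1-hour rainfall depth: {q100:.1f} mm")
```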