25 results for Innovative learning and tools

in AMS Tesi di Dottorato - Alm@DL - Università di Bologna


Relevance: 100.00%

Abstract:

Heavy Liquid Metal Cooled Reactors are among the concepts, fostered by the GIF, potentially able to comply with stringent safety, economic, sustainability, proliferation resistance and physical protection requirements. The increasing interest in these innovative systems has highlighted the lack of tools specifically dedicated to their core design stage. The present PhD thesis summarizes a three-year effort to partially close this gap, by rationally defining the role of codes in core design, by creating a development methodology for core design-oriented codes (DOCs), and by applying it to the design areas where it is most needed. The covered fields are, in particular, fuel assembly thermal-hydraulics and fuel pin thermo-mechanics. Regarding the former, following the established methodology, the sub-channel code ANTEO+ has been conceived. Initially restricted to the forced convection regime and subsequently extended to the mixed one, ANTEO+ has been demonstrated, via a thorough validation campaign, to be a reliable tool for design applications. Concerning the fuel pin thermo-mechanics, the intent to include safety-related considerations at the outset of the pin dimensioning process has given birth to the safety-informed DOC TEMIDE. The proposed DOC development methodology has also been applied to TEMIDE; given the complex interdependence patterns among the numerous phenomena involved in an irradiated fuel pin, a sensitivity analysis has been performed over the anticipated application domain to optimize the code's final structure. The development methodology has also been tested in the verification and validation phases; the latter, due to the low availability of experiments truly representative of TEMIDE's application domain, has only been a preliminary attempt to test TEMIDE's capabilities in fulfilling the DOC requirements upon which it has been built. In general, the capability of the proposed development methodology for DOCs to deliver tools that help the core designer in the preliminary definition of the system configuration has been proven.
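
For readers unfamiliar with sub-channel analysis, the sketch below shows the kind of single-channel coolant energy balance that codes of this class solve for every sub-channel; it is only an illustrative toy, not ANTEO+ itself, and the channel parameters and lead heat-capacity value are assumed for the example.

```python
# Illustrative single-channel coolant energy balance, of the kind a
# sub-channel thermal-hydraulics code evaluates for every sub-channel.
# All numbers below are assumed for the example, not taken from ANTEO+.

def outlet_temperature(t_in_c: float, linear_power_w_m: float,
                       heated_length_m: float, mass_flow_kg_s: float,
                       cp_j_kg_k: float) -> float:
    """Steady-state energy balance: T_out = T_in + Q / (m_dot * c_p)."""
    q_total_w = linear_power_w_m * heated_length_m
    return t_in_c + q_total_w / (mass_flow_kg_s * cp_j_kg_k)

if __name__ == "__main__":
    # Hypothetical lead-cooled sub-channel: 400 C inlet, 25 kW/m linear power,
    # 1 m heated length, 1.5 kg/s flow, c_p ~ 145 J/(kg K) assumed for lead.
    t_out = outlet_temperature(400.0, 25_000.0, 1.0, 1.5, 145.0)
    print(f"Outlet coolant temperature: {t_out:.1f} C")
```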

Relevance: 100.00%

Abstract:

The development of Next Generation Sequencing has propelled Biology into the Big Data era. The ever-increasing gap between proteins with known sequences and those with a complete functional annotation requires computational methods for automatic structural and functional annotation. My research has focused on proteins and has so far led to the development of three novel tools, DeepREx, E-SNPs&GO and ISPRED-SEQ, based on Machine and Deep Learning approaches. DeepREx computes the solvent exposure of residues in a protein chain. This problem is relevant to the definition of structural constraints on the possible folding of the protein. DeepREx exploits Long Short-Term Memory layers to capture residue-level interactions between positions distant in the sequence, achieving state-of-the-art performance. With DeepREx, I conducted a large-scale analysis investigating the relationship between the solvent exposure of a residue and its probability of being pathogenic upon mutation. E-SNPs&GO predicts the pathogenicity of a Single Residue Variation. Variations occurring in a protein sequence can have different effects, possibly leading to the onset of diseases. E-SNPs&GO exploits protein embeddings generated by two novel Protein Language Models (PLMs), as well as a new way of representing functional information coming from the Gene Ontology. The method achieves state-of-the-art performance and is extremely time-efficient when compared to traditional approaches. ISPRED-SEQ predicts the presence of Protein-Protein Interaction sites in a protein sequence. Knowing how a protein interacts with other molecules is crucial for accurate functional characterization. ISPRED-SEQ exploits a convolutional layer to parse local context after embedding the protein sequence with two novel PLMs, greatly surpassing the current state of the art. All methods are published in international journals and are available as user-friendly web servers. They have been developed keeping in mind standard guidelines for FAIRness (FAIR: Findable, Accessible, Interoperable, Reusable) and are integrated into the public collection of tools provided by ELIXIR, the European infrastructure for Bioinformatics.
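
As an illustration of the general approach, the sketch below shows a bidirectional LSTM residue-level classifier of the kind DeepREx builds on; the architecture, layer sizes, input encoding and two-class output (buried vs. exposed) are assumptions made for the example, not the published DeepREx model.

```python
# Illustrative residue-level BiLSTM classifier (buried vs. exposed residues).
# Architecture details are assumptions for the sketch, not DeepREx itself.
import torch
import torch.nn as nn

class ResidueExposureNet(nn.Module):
    def __init__(self, n_features: int = 20, hidden: int = 64, n_classes: int = 2):
        super().__init__()
        # A bidirectional LSTM captures interactions between sequence positions.
        self.lstm = nn.LSTM(n_features, hidden, num_layers=2,
                            batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_classes)  # per-residue prediction

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, sequence_length, n_features), e.g. one-hot residues or profiles
        out, _ = self.lstm(x)
        return self.head(out)  # (batch, sequence_length, n_classes)

if __name__ == "__main__":
    model = ResidueExposureNet()
    dummy = torch.randn(1, 150, 20)   # one protein of 150 residues
    logits = model(dummy)
    print(logits.shape)               # torch.Size([1, 150, 2])
```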

Relevance: 100.00%

Abstract:

The world currently faces a paradox in terms of accessibility for people with disabilities. While digital technologies hold immense potential to improve their quality of life, the majority of web content still exhibits critical accessibility issues. This PhD thesis addresses this challenge through two interconnected research branches. The first introduces a groundbreaking approach to improving web accessibility by rethinking how accessibility itself is approached, making it more accessible in its own right. It involves the development of: 1. AX, a declarative framework of web components that enforces the generation of accessible markup by means of static analysis. 2. An innovative accessibility testing and evaluation methodology, which communicates test results by exploiting concepts that developers are already familiar with (visual rendering and mouse operability) to convey the accessibility of a page; this methodology is implemented through the SAHARIAN browser extension. 3. A11A, a categorized and structured collection of curated accessibility resources aimed at helping their intended audiences discover and use them. The second branch focuses on unleashing the full potential of digital technologies to improve accessibility in the physical world. The thesis proposes the SCAMP methodology to make scientific artifacts accessible to blind and visually impaired individuals, as well as to the general public. It enhances the natural characteristics of objects, making them more accessible through interactive, multimodal, and multisensory experiences. Additionally, the prototype of A11YVT, a system supporting accessible virtual tours, is presented. It provides blind and visually impaired individuals with the features necessary to explore unfamiliar indoor environments, while maintaining universal design principles that make it suitable for use by the general public. The thesis extensively discusses the theoretical foundations, design, development, and unique characteristics of these innovative tools. Usability tests with the intended target audiences demonstrate the effectiveness of the proposed artifacts, suggesting their potential to significantly improve the current state of accessibility.

Relevance: 100.00%

Abstract:

My PhD research work focused on the Electrochemically Generated Luminescence (ECL) investigation of several different homogeneous and heterogeneous systems. ECL is a redox-induced emission, a process whereby species generated at electrodes undergo a high-energy electron transfer reaction to form excited states that emit light. Since its first application, the ECL technique has become a very powerful analytical tool and has been widely used in biosensor transduction. ECL presents intrinsically low noise and high sensitivity; moreover, the electrochemical generation of the excited state avoids the light scattering associated with an excitation source: for all these characteristics, it is a technique of choice for ultrasensitive immunoassay detection. The majority of ECL systems involve species in solution, where the emission occurs in the diffusion layer near the electrode surface. However, over the past few years, intense research has focused on ECL generated from species constrained on the electrode surface. The aim of my work is to study the behavior of ECL-generating molecular systems upon the progressive increase of their spatial constraints, that is, passing from isolated species in solution, to fluorophores embedded within a polymeric film and, finally, to patterned surfaces bearing "one-dimensional" emitting spots. In order to describe these trends, I use different "dimensions" to indicate the different classes of compounds. My thesis was mostly developed in the electrochemistry group of Bologna under the supervision of Prof. Francesco Paolucci and Dr. Massimo Marcaccio. With their help, and also thanks to their long experience in the molecular and supramolecular ECL fields and in surface investigations using scanning probe microscopy techniques, I was able to obtain the results described herein. Moreover, during my research work, I established a new collaboration with the Nanobiotechnology group of Prof. Robert Forster (Dublin City University), where I spent a research period. Prof. Forster has broad experience in the biomedical field; in particular, his research focuses on film-surface biosensors based on ECL transduction. This thesis can be divided into three sections, described as follows: (i) in the first section, homogeneous molecular and supramolecular ECL-active systems, either organic or inorganic species (i.e., corannulene, dendrimers and an iridium metal complex), are described. The driving force for this kind of study includes the search for new luminophores that display, on the one hand, higher ECL efficiencies and, on the other, simple mechanisms for modulating the intensity and energy of their emission, in view of their effective use in bioconjugation applications. (ii) in the second section, the investigation of some heterogeneous ECL systems is reported. Redox polymers comprising inorganic luminophores are described. In this context, a new conducting platform based on carbon nanotubes was developed, aimed at accomplishing both the binding of a biological molecule and its electronic wiring to the electrode. This is an essential step for the application of ECL in the field of biosensors. (iii) in the third section, different patterns were produced on the electrode surface using Scanning Electrochemical Microscopy. I developed a new method for locally functionalizing an inert surface and reacting this surface with a luminescent probe. In this way, I successfully obtained a locally ECL-active platform for multi-array applications.

Relevance: 100.00%

Abstract:

The aim of this thesis was to investigate the respective contributions of prior information and sensorimotor constraints to action understanding, and to estimate their consequences for the evolution of human social learning. Even though a huge amount of literature is dedicated to the study of action understanding and its role in social learning, these issues are still largely debated. Here, I critically describe two main perspectives. The first perspective interprets faithful social learning as an outcome of a fine-grained representation of others' actions and intentions that requires sophisticated socio-cognitive skills. In contrast, the second perspective highlights the role of simpler decision heuristics, the recruitment of which is determined by individual and ecological constraints. The present thesis aims to show, through four experimental works, that these two contributions are not mutually exclusive. The first study investigates the role of the inferior frontal cortex (IFC), the anterior intraparietal area (AIP) and the primary somatosensory cortex (S1) in the recognition of other people's actions, using a transcranial magnetic stimulation adaptation paradigm (TMSA). The second work studies whether, and how, higher-order and lower-order prior information (acquired from the probabilistic sampling of past events vs. derived from an estimation of the biomechanical constraints of observed actions) interact during the prediction of other people's intentions. Using a single-pulse TMS procedure, the third study investigates whether the interaction between these two classes of priors modulates motor system activity. The fourth study tests the extent to which behavioral and ecological constraints influence the emergence of faithful social learning strategies at the population level. The collected data contribute to elucidating how higher-order and lower-order prior expectations interact during action prediction, and clarify the neural mechanisms underlying this interaction. Finally, these works open promising perspectives for a better understanding of social learning, with possible extensions to animal models.

Relevance: 100.00%

Abstract:

The present PhD thesis builds on the design skills I have been developing since my master's thesis research. A brief description of the chapters' content follows. Chapter 1: the simulation of a complete front-end is a very complex problem and, in particular, is the basis upon which prediction of the overall performance of the system becomes possible. By means of a commercial EM simulation tool and a rigorous nonlinear/EM circuit co-simulation based on the Reciprocity Theorem, the above-mentioned prediction can be achieved and exploited for wireless link characterization. This represents the theoretical basis of the entire thesis and is supported by two RF applications. Chapter 2: an extensive discussion of Magneto-Dielectric (MD) materials is presented, together with their peculiar characteristics as substrates for antenna miniaturization purposes. A designed and tested device for RF on-body applications is described in detail. Finally, future research is discussed. Chapter 3: this chapter deals with the exploitation of renewable energy sources for low-energy-consumption devices. The problem of so-called energy harvesting is tackled, and a first attempt to exploit THz solar energy in an innovative way is presented and discussed. Future research is proposed as well. Chapter 4: graphene is a very promising material for devices to be exploited in the RF and THz frequency ranges for a wide range of engineering applications, including those marked as the main research goal of the present thesis. This chapter presents the results obtained during my research period at the National Institute for Research and Development in Microtechnologies (IMT) in Bucharest, Romania. It concerns the design and manufacturing of antennas and diodes in graphene-based technology for detection/rectification purposes.

Relevance: 100.00%

Abstract:

With this work I elucidated new and unexpected mechanisms of two strong and highly specific transcription inhibitors: Triptolide and Camptothecin. Triptolide (TPL) is a diterpene epoxide derived from the Chinese plant Tripterygium wilfordii Hook F. TPL inhibits the ATPase activity of XPB, a subunit of the general transcription factor TFIIH. In this thesis I found that the degradation of Rbp1 (the largest subunit of RNA Polymerase II) caused by TPL treatment is preceded by a hyperphosphorylation event at serine 5 of the carboxy-terminal domain (CTD) of Rbp1. This event is concomitant with a block of RNA Polymerase II at the promoters of active genes. The enzyme responsible for the Ser5 hyperphosphorylation event is CDK7. Notably, CDK7 downregulation rescued both the Ser5 hyperphosphorylation and the Rbp1 degradation triggered by TPL. Camptothecin (CPT), derived from the plant Camptotheca acuminata, specifically inhibits topoisomerase 1 (Top1). We first found that CPT induced antisense transcription at divergent CpG island promoters. Interestingly, in immunofluorescence experiments, CPT was found to induce a burst of R loop structures (DNA/RNA hybrids) at nucleoli and mitochondria. We then investigated the role of Top1 in R loop homeostasis through a short interfering RNA (RNAi) approach. Using DNA/RNA immunoprecipitation techniques coupled to NGS, I found that Top1 depletion induces an increase of R loops at the genome-wide level. We found that this increase occurs over the entire gene body. At a subset of loci, R loops were particularly affected after Top1 depletion: some of these genes showed the formation of new R loop structures, whereas other loci showed a reduction of R loops. Interestingly, we found that new peaks usually appear at tandem or divergent genes across the entire gene body, while losses of R loop peaks seem to be a feature specific to the 3' end regions of convergent genes.

Relevance: 100.00%

Abstract:

This dissertation looks at three widely accepted assumptions about how the patent system works: that patent documents disclose inventions, that this disclosure happens quickly, and that patent owners are able to enforce their patents. The first chapter estimates the effect of stronger trade secret protection on the number of patented innovations. When firms find it easier to protect business information, there is less need for patent protection, and accordingly less need for the disclosure of technical information that is required by patent law. The novel finding is that when it is easier to keep innovations secret, there is not only a reduction in the number of patents but also a sizeable reduction in disclosed knowledge per patent. The chapter then shows how this endogeneity of the amount of knowledge per patent can affect the measurement of innovation using patent data. The second chapter develops a game-theoretic model to study how the introduction of fee-shifting in US patent litigation would influence firms' patenting propensities. When the defeated party to a lawsuit has to bear not only its own costs but also the legal expenditure of the winning party, manufacturing firms in the model unambiguously reduce patenting, with small firms affected the most. For fee-shifting to have the same effect as in Europe, the US legal system would require the shifting of a much smaller share of fees. Lessons from European patent litigation may, therefore, have only limited applicability to the US case. The third chapter contains a theoretical analysis of the influence of the delayed disclosure of patent applications by the patent office. Such a delay is a feature of most patent systems around the world but has so far not attracted analytical scrutiny. This delay may give firms various kinds of strategic (non-)disclosure incentives when they are competing for more than a single innovation.
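
To make the fee-shifting mechanism concrete, the toy calculation below compares a litigant's expected legal cost under the American rule (each side bears its own fees) and under full or partial fee-shifting; the win probability and cost figures are illustrative assumptions, not parameters of the dissertation's game-theoretic model.

```python
# Toy expected-cost comparison for a patent lawsuit under two fee rules.
# All numbers are illustrative assumptions, not taken from the dissertation.

def expected_cost_american(own_cost: float) -> float:
    """American rule: each party always bears its own legal cost."""
    return own_cost

def expected_cost_fee_shifting(own_cost: float, opponent_cost: float,
                               p_win: float, share_shifted: float = 1.0) -> float:
    """Fee-shifting: the loser also pays `share_shifted` of the winner's cost."""
    p_lose = 1.0 - p_win
    return (p_win * own_cost * (1.0 - share_shifted)                  # win: part of own cost reimbursed
            + p_lose * (own_cost + share_shifted * opponent_cost))    # lose: pay own cost plus shifted share

if __name__ == "__main__":
    own, opp, p_win = 500_000.0, 500_000.0, 0.7   # a party with a fairly strong case
    print(expected_cost_american(own))                        # 500000.0
    print(expected_cost_fee_shifting(own, opp, p_win, 1.0))   # 300000.0 with full (English-style) shifting
    print(expected_cost_fee_shifting(own, opp, p_win, 0.25))  # 450000.0 with partial shifting
```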

Relevance: 100.00%

Abstract:

Changing or creating an organisation means creating a new process. Each process involves many risks that need to be identified and managed. The main risks considered here are procedural and legal risks. The former are related to the risk of errors that may occur during processes, while the latter are related to the compliance of processes with regulations. Managing these risks implies proposing changes to the processes that lead to the desired result: an optimised process. In order to manage a company and optimise it in the best possible way, not only should the organisational aspect, risk management and legal compliance be taken into account, but it is important that they are all analysed simultaneously, with the aim of finding the right balance that satisfies them all. This is the aim of this thesis: to provide methods and tools to balance these three characteristics, using ICT support to enable this type of optimisation. This is not a thesis in computer science or law, but rather an interdisciplinary one. Most of the work done so far is vertical and confined to a specific domain. The particularity and aim of this thesis is not to carry out an in-depth analysis of a particular aspect, but rather to combine several important aspects, normally analysed separately, which nevertheless have an impact on and influence each other. In order to carry out this kind of interdisciplinary analysis, the knowledge bases of both areas were involved, and the combination and collaboration of different experts in the various fields was necessary. Although the methodology described is generic and can be applied to all sectors, the case study considered is a new type of healthcare service that allows patients with acute diseases to be hospitalised at home. This provides the possibility of performing experiments using a real hospital database.

Relevance: 100.00%

Abstract:

In the framework of industrial problems, the application of Constrained Optimization is known to have very good modeling capability and performance overall, and stands as one of the most powerful, explored, and exploited tools to address prescriptive tasks. The number of applications is huge, ranging from logistics to transportation, packing, production, telecommunications, scheduling, and much more. The main reason behind this success is to be found in the remarkable effort put in over the last decades by the OR community to develop realistic models and devise exact or approximate methods to solve the largest variety of constrained or combinatorial optimization problems, together with the spread of computational power and easily accessible OR software and resources. On the other hand, technological advancements have led to a wealth of data never seen before and increasingly push towards methods able to extract useful knowledge from it; among data-driven methods, Machine Learning techniques appear to be among the most promising, thanks to their successes in domains like Image Recognition, Natural Language Processing and game playing, but also to the amount of research involved. The purpose of the present research is to study how Machine Learning and Constrained Optimization can be used together to achieve systems able to leverage the strengths of both methods: this would open the way to exploiting decades of research on resolution techniques for Constrained Optimization Problems (COPs) while constructing models able to adapt and learn from available data. In the first part of this work, we survey the existing techniques and classify them according to the type, method, or scope of the integration; subsequently, we introduce Moving Target, a novel and general algorithm devised to inject knowledge into learning models through constraints. In the last part of the thesis, two applications stemming from real-world projects, carried out in collaboration with Optit, are presented.
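
As a concrete illustration of the constraint-injection setting, the sketch below trains a small neural regressor with an added penalty for violating a domain constraint (predictions non-decreasing in one feature); it is a generic example under assumed data and an assumed constraint, not the Moving Target algorithm itself.

```python
# Illustrative constraint injection: train a regressor whose predictions
# should be non-decreasing in feature 0, by penalizing violations.
# This is a generic sketch, not the Moving Target algorithm from the thesis.
import torch
import torch.nn as nn

def constraint_violation(model: nn.Module, x: torch.Tensor) -> torch.Tensor:
    """Penalty for predictions that decrease when feature 0 increases."""
    x_shifted = x.clone()
    x_shifted[:, 0] += 0.1                   # small increase of feature 0
    diff = model(x) - model(x_shifted)       # positive diff = monotonicity violation
    return torch.relu(diff).mean()

model = nn.Sequential(nn.Linear(3, 16), nn.ReLU(), nn.Linear(16, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
x = torch.rand(256, 3)
y = 2.0 * x[:, :1] + 0.1 * torch.randn(256, 1)   # synthetic target, monotone in feature 0

for epoch in range(200):
    opt.zero_grad()
    # Data-fit loss plus a weighted penalty encoding the domain constraint.
    loss = nn.functional.mse_loss(model(x), y) + 10.0 * constraint_violation(model, x)
    loss.backward()
    opt.step()
```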

Relevance: 100.00%

Abstract:

This dissertation contributes to the scholarly debate on temporary teams by exploring team interactions and boundaries. The fundamental challenge in temporary teams originates from temporary participation in the teams. First, as participants join the team for a short period of time, there is not enough time to build trust, share understanding, and have effective interactions. Consequently, team outputs and practices built on team interactions become vulnerable. Secondly, as team participants move on and off the teams, team boundaries become blurred over time. This leads to uncertainty among team participants and leaders about who is, or is not, identified as a team member, causing collective disagreement within the team. Focusing on the above-mentioned challenges, we conducted this research in healthcare organisations, since the use of temporary teams in healthcare and hospital settings is prevalent. In particular, we focused on orthopaedic teams that provide personalised treatments for patients using 3D printing technology. Qualitative and quantitative data were collected using interviews, observations, questionnaires and archival data at the Rizzoli Orthopaedic Institute, Bologna, Italy. This study provides the following research outputs. The first is a conceptual study that explores the temporary teams literature using bibliometric analysis and a systematic literature review to highlight research gaps. The second paper qualitatively studies temporary relationships within the teams, collecting data through group interviews and observations. The results highlighted the role of short-term dyadic relationships as a ground for sharing and transferring knowledge at the team level. Moreover, the hierarchical structure of the teams facilitates knowledge sharing by supporting dyadic relationships within and beyond team meetings. The third paper investigates the impact of blurred boundaries on temporary teams' performance. Using quantitative data collected through questionnaires and archival data, we concluded that boundary blurring, in terms of fluidity, overlap and dispersion, impacts team performance differently at high and low levels of task complexity.

Relevance: 100.00%

Abstract:

The term Artificial Intelligence has acquired a lot of baggage since its introduction and, in its current incarnation, is synonymous with Deep Learning (DL). The sudden availability of data and computing resources has opened the gates to a myriad of applications. Not all of them are created equal, though, and problems can arise especially in fields not closely related to the tasks pursued by the tech companies that spearheaded DL. The perspective of practitioners seems to be changing, however. Human-Centric AI has emerged in the last few years as a new way of thinking about DL and AI applications from the ground up, with special attention to their relationship with humans. The goal is to design systems that can gracefully integrate into already established workflows, as in many real-world scenarios AI may not be good enough to completely replace humans; often this replacement may even be unneeded or undesirable. Another important perspective comes from Andrew Ng, a DL pioneer, who recently started shifting the focus of development from "better models" towards better, and smaller, data. He called this approach Data-Centric AI. Without downplaying the importance of pushing the state of the art in DL, we must recognize that if the goal is creating a tool for humans to use, more raw performance may not align with more utility for the final user. A Human-Centric approach is compatible with a Data-Centric one, and we find that the two overlap nicely when human expertise is used as the driving force behind data quality. This thesis documents a series of case studies where these approaches were employed, to different extents, to guide the design and implementation of intelligent systems. We found that human expertise proved crucial in improving datasets and models. The last chapter includes a slight deviation, with studies on the pandemic, still preserving the human- and data-centric perspective.

Relevance: 100.00%

Abstract:

The challenges of the current global food systems are often framed around feeding the world's growing population while meeting sustainable development goals for future generations. Globalization has brought about a fragmentation of food spaces, leading to a flexible and mutable supply chain. This poses a major challenge to food and nutrition security, also affecting rural-urban dynamics in territories. Furthermore, the recent crises have highlighted the vulnerability of food systems and the ecosystem to shocks and disruptions, due to the intensive management of natural, human and economic capital. Hence, a sustainable and resilient transition of food systems is required, through a multi-faceted approach that tackles the causes of unsustainability and promotes sustainable practices at all levels of the food system. In this respect, a territorial approach becomes a relevant entry point for analysing the food system's multifunctionality, and can support the evaluation of sustainability by quantifying impacts with quantitative methods and by understanding the territorial responsibility of different actors with qualitative ones. Against this background, the present research aims to i) investigate the environmental, costing and social indicators suitable for a scoring system able to measure the integrated sustainability performance of food initiatives within the City/Region territorial context; ii) develop a territorial assessment framework to measure the sustainability impacts of agricultural systems; and iii) define an integrated methodology to match production and consumption at the territorial level, to foster a long-term vision of short food supply chains. From a methodological perspective, the research proposes a mixed quantitative and qualitative research method. The outcomes provide an in-depth view into the environmental and socio-economic impacts of food systems at the territorial level, investigating possible indicators, frameworks, and business strategies to foster their future sustainable development.

Relevance: 100.00%

Abstract:

Creativity seems mysterious; when we experience a creative spark, it is difficult to explain how we got that idea, and we often invoke notions like "inspiration" and "intuition" when we try to explain the phenomenon. The fact that we are clueless about how a creative idea manifests itself does not necessarily imply that a scientific explanation cannot exist. We are unaware of how we perform certain tasks, such as biking or language understanding, yet we have more and more computational techniques that can replicate and hopefully explain such activities. We should understand that every creative act is the fruit of experience, society, and culture. Nothing comes from nothing. Novel ideas are never utterly new; they stem from representations that are already in the mind. Creativity involves establishing new relations between pieces of information we already had: hence, the greater the knowledge, the greater the possibility of finding uncommon connections, and the greater the potential to be creative. In this vein, a beneficial approach to a better understanding of creativity must include computational or mechanistic accounts of such inner procedures and of the formation of the knowledge that enables such connections. That is the aim of Computational Creativity: to develop computational systems for emulating and studying creativity. Hence, this dissertation focuses on two related research areas: discussing computational mechanisms to generate creative artifacts, and describing some implicit cognitive processes that can form the basis for creative thoughts.

Relevance: 100.00%

Abstract:

Red-fleshed fruit is a trait of increasing interest in several commercial species. Following a review of the research on the biosynthesis and accumulation of anthocyanin in pears (Chapter 1), the general aim of the project is reported in Chapter 2. Chapter 3 reports the results of a molecular analysis of 33 red-fleshed pear accessions, genotyped with 18 SSR markers, with the aim of improving germplasm conservation strategies to support ongoing breeding programs. The molecular profiles revealed cases of both synonymy and homonymy, and 6 unique genotypes were identified. The S-alleles were established to highlight the genetic relationships among these landraces. Four of the unique genotypes were clustered based on pomological data. In Chapter 4, the work was directed at identifying the putative genomic regions involved in the appearance of this trait in pear fruit. A crossing population ('Carmen' x 'Cocomerina Precoce') segregating for the trait was phenotyped for 2 consecutive years and used for QTL analysis. A strong QTL related to the red-flesh trait was identified in a small genomic region at 27 Mb from the start of LG5. Two candidate genes were detected in this genomic region: 'PcMYB114' and 'PcABCC2'. The SSR marker SSR114 proved able to detect red-flesh phenotype segregation in all the red-fleshed pear accessions and segregating progenies tested. Chapter 5 focuses on examining the trend of anthocyanin synthesis and accumulation during fruit development, from fruit set to ripening. Three different trials were planned: qPCR and HPLC analyses were performed to correlate gene expression with anthocyanin accumulation in 'Cocomerina Precoce' and six progenies, and total transcriptome sequencing was used to compare differential gene expression between red- and white-fleshed fruit. Chapter 6 reviews and analyses the findings of the earlier studies and outlines potential future perspectives.