743 results for Inter-project Learning


Relevance:

30.00%

Publisher:

Abstract:

Reinforcement learning is a particular paradigm of machine learning that has recently proven, time and time again, to be a very effective and powerful approach. Cryptography, on the other hand, usually takes the opposite direction: while machine learning aims at analyzing data, cryptography aims at preserving privacy by hiding that data. However, the two techniques can be jointly used to create privacy-preserving models, able to make inferences on data without leaking sensitive information. Despite the numerous studies combining machine learning and cryptography, reinforcement learning in particular had never been applied in such settings before. Successfully using reinforcement learning in an encrypted scenario would allow us to create an agent that efficiently controls a system without being given full knowledge of the environment it operates in, opening the way to many possible use cases. We therefore decided to apply the reinforcement learning paradigm to encrypted data. In this project we applied one of the best-known reinforcement learning algorithms, Deep Q-Learning, to simple simulated environments and studied how encryption affects the training performance of the agent, in order to see whether it can still learn how to behave even when the input data is no longer human-readable. The results of this work show that the agent still learns without issue in small state spaces under non-secure encryption, such as AES in ECB mode. For fixed environments, it also reaches a suboptimal solution even under secure modes, such as AES in CBC mode, showing a significant improvement over a random agent; however, its ability to generalize in stochastic environments or large state spaces suffers greatly.
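
As a rough illustration of this setup (not the thesis code), the sketch below AES-encrypts a state vector before handing it to the agent as its observation. It assumes PyCryptodome; names such as `encrypt_state` and the byte-to-float encoding are illustrative choices.

```python
# Sketch: turn an environment state into an encrypted observation for a DQN agent.
import numpy as np
from Crypto.Cipher import AES
from Crypto.Random import get_random_bytes

KEY = get_random_bytes(16)

def encrypt_state(state: np.ndarray, mode: str = "ECB") -> np.ndarray:
    raw = state.astype(np.uint8).tobytes()
    raw += b"\x00" * (-len(raw) % 16)  # zero-pad to the 16-byte AES block size
    if mode == "ECB":
        # Deterministic: the same state always yields the same ciphertext.
        ct = AES.new(KEY, AES.MODE_ECB).encrypt(raw)
    else:
        # CBC with a random IV: the same state yields different ciphertexts.
        iv = get_random_bytes(16)
        ct = iv + AES.new(KEY, AES.MODE_CBC, iv).encrypt(raw)
    # Expose the ciphertext bytes as a normalized float vector for the Q-network.
    return np.frombuffer(ct, dtype=np.uint8).astype(np.float32) / 255.0
```

Under ECB each state keeps a single fixed ciphertext, so the observation-to-value mapping stays learnable; under CBC with a fresh IV per step, one state maps to many observations, which is consistent with the degraded generalization reported above.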

Relevance:

30.00%

Publisher:

Abstract:

The aim of this thesis project is to automatically localize HCC tumors in the human liver and subsequently predict whether the tumor will undergo microvascular infiltration (MVI), the initial stage of metastasis development. The input data for the work were partially supplied by Sant'Orsola Hospital and partially downloaded from online medical databases. Two U-Net models were implemented for the automatic segmentation of the liver and of the HCC malignancies within it. The segmentation models were evaluated with the Intersection-over-Union (IoU) and Dice Coefficient metrics. The outcomes obtained for the automatic liver segmentation are quite good (IOU = 0.82; DC = 0.35); the outcomes obtained for the automatic tumor segmentation (IOU = 0.35; DC = 0.46) are, instead, affected by some limitations: it can be stated that the algorithm is almost always able to detect the location of the tumor, but it tends to underestimate its dimensions. The purpose of this step is to obtain the CT images of the HCC tumors needed for feature extraction. The 14 Haralick features calculated from the 3D GLCM, the 120 radiomic features and the patients' clinical information are collected to build a dataset of 153 features. The goal is then to build a model able to discriminate, based on the given features, the tumors that will undergo MVI from those that will not. This task can be cast as a classification problem: each tumor needs to be classified either as "MVI positive" or "MVI negative". Feature selection techniques are applied to identify the most descriptive features for the problem at hand, and then a set of classification models is trained and compared. The best-performing models (around 80-84% ± 8-15%) prove to be the XGBoost Classifier, the SGD Classifier and the Logistic Regression models (without penalization and with Lasso, Ridge or Elastic Net penalization).
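
For reference, the two evaluation metrics named above can be computed from binary segmentation masks as in the following NumPy sketch (ours, not the thesis code):

```python
import numpy as np

def iou(pred: np.ndarray, target: np.ndarray) -> float:
    """Intersection-over-Union between two boolean segmentation masks."""
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return float(inter / union) if union else 1.0

def dice(pred: np.ndarray, target: np.ndarray) -> float:
    """Dice coefficient: 2*|A∩B| / (|A| + |B|)."""
    inter = np.logical_and(pred, target).sum()
    total = pred.sum() + target.sum()
    return float(2 * inter / total) if total else 1.0
```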

Relevance:

30.00%

Publisher:

Abstract:

Recent years have seen a focus on responding to student expectations in higher education. As a result, a number of technology-enhanced learning (TEL) policies have stipulated a requirement for a minimum virtual learning environment (VLE) standard to provide a consistent student experience. This paper offers insight into the under-researched area of such VLE standard policy development using a case study of one university. With reference to the implementation staircase model, this study takes its cue from the view that an institutional VLE template can affect lower levels directly, sidestepping the chain in the implementation staircase. The activity of the Group whose remit is to design and develop a VLE template therefore becomes significant. The study, drawing on activity theory, explores the mediating role of such a Group. Factors of success and sources of tension are analysed to understand the interaction between the individual and the collective agency of Group members. The paper identifies implications for practice for similar TEL development projects. The success factors identified demonstrate the importance of good project management principles and of establishing clear rules and division of labour for TEL development groups. One key finding is that Group members need to draw on both different and shared mediating artefacts, supporting the conclusion that the nature of the group's composition and the situated expertise of its members are crucial for project success. The paper's theoretical contribution is an enhanced representation of a TEL policy implementation staircase.

Relevance:

30.00%

Publisher:

Abstract:

Interactive experiences are rapidly becoming popular via the surge of ‘escape rooms’; part game and part theatre, the ‘escape’ experience is exploding globally, having gone from zero offerings at the outset of 2010 to at least 2,800 different experiences available worldwide today. CrashEd is an interactive learning experience that parallels many of the attractions of an escape room: it incorporates a staged, realistic ‘crime scene’ and invites participants to work together to gather forensic evidence and question a witness in order to solve a crime, all while competing against a ticking clock. An animation can enhance reality and engage cognitive processes to support learning; in CrashEd, it is the last piece of the jigsaw that consolidates the students’ incremental acquisition of knowledge, tying together the pieces of evidence to identify a suspect and ultimately solve the crime. This article presents the background to CrashEd and an overview of how a well-timed animation at the end of an educational experience can enhance learning. The lessons learned from delivering bespoke versions of the experience to different demographic groups are discussed. The article considers the successes and challenges raised by the collaborative project, future developments, and the potential wider implications of the development of CrashEd.

Relevance:

30.00%

Publisher:

Abstract:

The research question for this study was: ‘Can the provision of online resources help to engage and motivate students to become self-directed learners?’ This study presents the results of an action research project conducted to answer this question for a postgraduate module at a research-intensive university in the United Kingdom. The analysis of results was conducted by dividing the students according to their degree programme – Master’s or PhD – and according to their language skills. The study indicated that the online resources embedded in the module were consistently used, and that the measures put in place to support self-directed learning (SDL) were both perceived and valued by the students, irrespective of their programme or native language. Nevertheless, a difference was observed in how students viewed SDL: doctoral students seemed to prefer the approach and were more receptive to it than students pursuing their Master’s degree. Some students reported that the SDL activity helped them to achieve more independence than traditional approaches to teaching did. Students who engaged with the online resources were rewarded with higher marks and reported being all the more motivated within the module. Despite the different learning experiences of the diverse cohort, the study found that the blended nature of the course and its resources in support of SDL created a learning environment which positively affected student learning.

Relevance:

30.00%

Publisher:

Abstract:

This article explores academics’ writing practices, focusing on the ways in which they use digital platforms in their processes of collaborative learning. It draws on interview data from a research project that involved working closely with academics across different disciplines and institutions to explore their writing practices, understanding academic literacies as situated social practices. The article outlines the characteristics of academics’ ongoing professional learning, demonstrating the importance of collaboration on specific projects in generating learning about digital platforms and in sharing and collaborating on scholarly writing. A very wide range of digital platforms was identified by these academics, enabling new kinds of collaboration across time and space on writing and research; but challenges around online learning were also identified, particularly the dangers of engaging in learning in public, the pressures of ‘always-on’-ness, and the differing value systems around publishing in different forums.

Relevance:

30.00%

Publisher:

Abstract:

Biology is now a “Big Data science” thanks to technological advancements that allow the characterization of the whole macromolecular content of a cell or a collection of cells. This opens interesting perspectives, but only a small portion of these data can be experimentally characterized. From this derives the demand for accurate and efficient computational tools for the automatic annotation of biological molecules. This is even more true for membrane proteins, on which my research project is focused, leading to the development of two machine learning-based methods: BetAware-Deep and SVMyr. BetAware-Deep is a tool for the detection and topology prediction of transmembrane beta-barrel proteins found in Gram-negative bacteria. These proteins are involved in many biological processes and are primary candidates as drug targets. BetAware-Deep exploits the combination of a deep learning framework (a bidirectional long short-term memory network) and a probabilistic graphical model (a grammatical-restrained hidden conditional random field). Moreover, it introduces a modified formulation of the hydrophobic moment, designed to include evolutionary information. BetAware-Deep outperformed all available methods in topology prediction and reported high scores in the detection task. Glycine myristoylation in eukaryotes is the binding of a myristic acid to an N-terminal glycine. SVMyr is a fast method based on support vector machines, designed to predict this modification in datasets of proteomic scale. It takes octapeptides as input and exploits computational scores derived from experimental examples together with mean physicochemical features. SVMyr outperformed all available methods for co-translational myristoylation prediction; in addition, as a unique feature, it allows the prediction of post-translational myristoylation. Both tools described here were designed with the best practices for machine learning-based tools outlined by the bioinformatics community in mind, and both are made available via user-friendly web servers. All this makes them valuable tools for filling the gap between sequence data and annotated data.
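
As a toy illustration of the SVMyr idea — classifying N-terminal octapeptides with a support vector machine — here is a scikit-learn sketch using a naive one-hot encoding; the actual method uses experimentally derived scores and physicochemical features instead, and the example sequences and labels below are invented.

```python
import numpy as np
from sklearn.svm import SVC

AA = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard amino acids

def encode(octapeptide: str) -> np.ndarray:
    """Naive one-hot encoding of an 8-residue N-terminal peptide."""
    vec = np.zeros((8, len(AA)))
    for i, aa in enumerate(octapeptide):
        vec[i, AA.index(aa)] = 1.0
    return vec.ravel()

# Toy training data: myristoylatable peptides start with an N-terminal glycine.
X = np.array([encode(p) for p in ["GNAASKKV", "GCLGNSKT", "MKTAYIAK", "AEGDDPAK"]])
y = np.array([1, 1, 0, 0])
clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict([encode("GQTLSRRQ")]))  # 1 = predicted myristoylated
```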

Relevance:

30.00%

Publisher:

Abstract:

With the CERN LHC program underway, there has been an acceleration of data growth in the High Energy Physics (HEP) field, and the use of Machine Learning (ML) in HEP will be critical during the HL-LHC program, when the data produced will reach the exascale. ML techniques have been used successfully in many areas of HEP; nevertheless, developing an ML project and implementing it for production use is a highly time-consuming task that requires specific skills. Complicating this scenario is the fact that HEP data are stored in the ROOT data format, which is mostly unknown outside the HEP community. The work presented in this thesis focuses on the development of a Machine Learning as a Service (MLaaS) solution for HEP, aiming to provide a cloud service that allows HEP users to run ML pipelines via HTTP calls. These pipelines are executed using the MLaaS4HEP framework, which allows reading data, processing data, and training ML models directly from ROOT files of arbitrary size held in local or distributed data sources. Such a solution provides HEP users who are not ML experts with a tool that lets them apply ML techniques in their analyses in a streamlined manner. Over the years the MLaaS4HEP framework has been developed, validated, and tested, and new features have been added. A first MLaaS solution was built by automating the deployment of a platform equipped with the MLaaS4HEP framework. A service with APIs was then developed, so that a user, after being authenticated and authorized, can submit MLaaS4HEP workflows producing trained ML models ready for the inference phase. A working prototype of this service is currently running on a virtual machine of INFN Cloud and meets the requirements to be added to the INFN Cloud portfolio of services.
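
A hypothetical client-side interaction with such a service might look as follows; the host, endpoint paths and payload fields are invented for illustration, since the actual MLaaS4HEP service API may differ.

```python
import requests

BASE = "https://mlaas.example.infn.it"   # hypothetical service host
headers = {"Authorization": "Bearer <token obtained after auth>"}

# Describe an ML pipeline over remote ROOT files (fields are illustrative).
workflow = {
    "input_files": ["root://eos.example.cern.ch//store/data/sample.root"],
    "labels": "is_signal",
    "model": {"type": "keras", "epochs": 10, "batch_size": 256},
}

r = requests.post(f"{BASE}/submit", json=workflow, headers=headers)
job_id = r.json()["job_id"]

# Poll for completion, then fetch the trained model for the inference phase.
status = requests.get(f"{BASE}/status/{job_id}", headers=headers).json()
```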

Relevance:

30.00%

Publisher:

Abstract:

Machine Learning makes computers capable of performing tasks that typically require human intelligence. A domain where it is having a considerable impact is the life sciences, where it makes it possible to devise new biological analysis protocols, to develop patient treatments faster and more efficiently, and to reduce healthcare costs. This thesis presents new Machine Learning methods and pipelines for the life sciences, focusing on the unsupervised field. At the methodological level, two methods are presented. The first is an “Ab Initio Local Principal Path”, a revised and improved version of a pre-existing algorithm in the manifold learning realm. The second contribution is an improvement of Import Vector Domain Description (one-class learning) through the Kullback-Leibler divergence. It hybridizes kernel methods with Deep Learning, obtaining a scalable solution, an improved probabilistic model, and state-of-the-art performance. Both methods are tested through several experiments, with a central focus on their relevance to the life sciences. Results show that they improve on the performance achieved by their previous versions. At the applicative level, two pipelines are presented. The first is for the analysis of RNA-Seq datasets, both transcriptomic and single-cell data, and aims at identifying genes that may be involved in biological processes (e.g., the transition of tissues from normal to cancer). Within this project, an R package has been released on CRAN to make the pipeline accessible to the bioinformatics community through high-level APIs. The second pipeline belongs to the drug discovery domain and is useful for identifying druggable pockets, namely regions of a protein with a high probability of accepting a small molecule (a drug). Both pipelines achieve remarkable results. Lastly, a side application is developed to identify the strengths and limitations of the “Principal Path” algorithm by analyzing the vector spaces induced by Convolutional Neural Networks. This application is conducted in the music and visual arts domains.
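
For context, the Kullback-Leibler divergence used in the one-class improvement measures how a distribution q diverges from a reference p; for discrete distributions it can be computed as in this small sketch (ours, not the thesis code):

```python
import numpy as np

def kl_divergence(p: np.ndarray, q: np.ndarray, eps: float = 1e-12) -> float:
    """D_KL(p || q) = sum_i p_i * log(p_i / q_i), for discrete distributions."""
    p = p / p.sum()  # normalize to proper probability distributions
    q = q / q.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

print(kl_divergence(np.array([0.5, 0.5]), np.array([0.9, 0.1])))  # > 0
```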

Relevance:

30.00%

Publisher:

Abstract:

The rapid progress of biomedical research, coupled with the explosion of scientific literature, has generated an urgent need for efficient and reliable systems of knowledge extraction. This dissertation addresses this challenge through a focused investigation of digital health and Artificial Intelligence, and specifically of the potential of Machine Learning and Natural Language Processing (NLP) to expedite systematic literature reviews and refine the knowledge extraction process. The surge of COVID-19 complicated the efforts of scientists, policymakers, and medical professionals in identifying pertinent articles and assessing their scientific validity. This thesis presents a substantial solution in the form of the COKE Project, an initiative that interlaces machine reading with the rigorous protocols of Evidence-Based Medicine to streamline knowledge extraction. In the framework of the COKE (“COVID-19 Knowledge Extraction framework for next-generation discovery science”) Project, this thesis aims to underscore the capacity of machine reading to create knowledge graphs from scientific texts. The project is notable for its use of NLP techniques such as a BERT + bi-LSTM language model, a combination employed to detect and categorize elements within medical abstracts, thereby enhancing the systematic literature review process. The COKE project's outcomes show that NLP, when used in a judiciously structured manner, can significantly reduce the time and effort required to produce medical guidelines. These findings are particularly salient during medical emergencies, like the COVID-19 pandemic, when quick and accurate research results are critical.
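
A schematic PyTorch/Transformers sketch of the BERT + bi-LSTM combination described above; the hidden sizes, classification head and label set are our assumptions, not the project's published configuration.

```python
import torch.nn as nn
from transformers import AutoModel

class BertBiLSTMTagger(nn.Module):
    """BERT encoder -> bi-LSTM -> per-token classifier over abstract elements."""
    def __init__(self, n_labels: int, bert_name: str = "bert-base-uncased"):
        super().__init__()
        self.bert = AutoModel.from_pretrained(bert_name)
        hidden = self.bert.config.hidden_size          # 768 for bert-base
        self.lstm = nn.LSTM(hidden, 256, batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * 256, n_labels)

    def forward(self, input_ids, attention_mask):
        out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        seq, _ = self.lstm(out.last_hidden_state)      # (batch, seq_len, 512)
        return self.classifier(seq)                    # token-level label logits
```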

Relevance:

30.00%

Publisher:

Abstract:

Many diseases affect the thyroid gland, and among them is carcinoma. Thyroid cancer is the most common endocrine neoplasm and the second most frequent cancer in the 0-49 age group. This thesis deals with two studies I conducted during my PhD. The first concerns the development of a Deep Learning model to assist the pathologist in the screening of thyroid cytology smears. This tool, created in collaboration with Prof. Diciotti, affiliated with the "Guglielmo Marconi" Department of Electrical, Electronic and Information Engineering (DEI-UNIBO), has an important clinical implication in that it allows patients to be stratified between those who should undergo surgery and those who should not. The second concerns the application of spatial transcriptomics to well-differentiated thyroid carcinomas to better understand their invasion mechanisms and thus which genes may be involved in the proliferation of these tumors. This project was made possible by a fruitful collaboration with the Gustave Roussy Institute in Paris. Studying thyroid carcinoma in depth is essential to improve patient care, increase survival rates, and enhance the overall understanding of this prevalent cancer; it can lead to more effective prevention, early detection, and treatment strategies that benefit both patients and the healthcare system.

Relevance:

30.00%

Publisher:

Abstract:

The scientific success of the LHC experiments at CERN depends strongly on the availability of computing resources that efficiently store, process, and analyse the amount of data collected every year. This is ensured by the Worldwide LHC Computing Grid infrastructure, which connects computing centres distributed all over the world with high-performance networks. The LHC has an ambitious experimental program for the coming years, which includes large investments and improvements both in the detector hardware and in the software and computing systems, in order to deal with the huge increase in the event rate expected in the High Luminosity LHC (HL-LHC) phase, and consequently with the huge amount of data that will be produced. In recent years the role of Artificial Intelligence has become relevant in the High Energy Physics (HEP) world. Machine Learning (ML) and Deep Learning algorithms have been used successfully in many areas of HEP, such as online and offline reconstruction programs, detector simulation, object reconstruction and identification, and Monte Carlo generation, and they will surely be crucial in the HL-LHC phase. This thesis contributes to a CMS R&D project concerning an ML "as a Service" solution for HEP needs (MLaaS4HEP). It consists of a data service able to perform an entire ML pipeline (reading data, processing data, training ML models, serving predictions) in a completely model-agnostic fashion, directly using ROOT files of arbitrary size from local or distributed data sources. This framework has been updated with new features in the data preprocessing phase, allowing the user more flexibility. Since the MLaaS4HEP framework is experiment-agnostic, the ATLAS Higgs Boson ML challenge was chosen as the physics use case, with the aim of testing MLaaS4HEP and the contributions made in this work.
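
Outside HEP-specific stacks, ROOT files can be read in Python with uproot; the sketch below shows one way a framework could stream an arbitrary-size (possibly remote) file into ML-friendly arrays. The file path, tree and branch names are illustrative, not taken from the thesis.

```python
import uproot

# Iterate over a remote ROOT file in fixed-size chunks instead of loading it whole.
with uproot.open("root://eos.example.cern.ch//store/data/sample.root") as f:
    tree = f["Events"]                                  # illustrative tree name
    for batch in tree.iterate(["pt", "eta", "phi"],     # illustrative branches
                              step_size="100 MB", library="np"):
        features = batch["pt"]                          # NumPy array per branch
        # ...feed each chunk to the preprocessing/training step...
```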

Relevance:

30.00%

Publisher:

Abstract:

This thesis focuses on the design of a flexible, dynamic and innovative telecommunications system for future 6G applications in vehicular communications. The system is based on drones acting as mobile base stations in an urban scenario, to cope with increasing traffic demand and avoid network congestion. In particular, Reinforcement Learning algorithms are exploited to let the drone learn autonomously how to behave in a scenario full of obstacles, with the goal of tracking and serving the maximum number of moving vehicles while minimizing the energy consumed to perform its tasks. This project is an extraordinary opportunity to open the door to a new way of applying and developing telecommunications in an urban scenario, by combining it with the rising field of Artificial Intelligence.
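
The stated trade-off — serve as many vehicles as possible while minimizing energy — is the kind of objective usually encoded in the reward signal; a hypothetical shaping follows (the coefficients and collision penalty are our assumptions, not the thesis's actual function).

```python
def reward(n_served: int, energy_used: float, collided: bool,
           alpha: float = 1.0, beta: float = 0.1) -> float:
    """Reward serving vehicles; penalize energy use and obstacle collisions."""
    if collided:
        return -100.0  # strong penalty discourages risky trajectories
    return alpha * n_served - beta * energy_used
```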

Relevance:

30.00%

Publisher:

Abstract:

Many real-world decision-making problems are defined on forecast parameters: for example, one may plan an urban route by relying on traffic predictions. In these cases, the conventional approach consists in training a predictor and then solving an optimization problem. This may be problematic, since mistakes made by the predictor may trick the optimizer into making dramatically wrong decisions. Recently, the field of Decision-Focused Learning has overcome this limitation by merging the two stages at training time, so that predictions are rewarded and penalized based on their outcome in the optimization problem. There are, however, still significant challenges to a widespread adoption of the method, mostly related to limitations in generality and scalability. One possible way to deal with the scalability problem is to introduce a caching-based approach that speeds up the training process. This project investigates such techniques with the aim of reducing solver calls even further. For each considered method, we designed a dedicated smart-sampling approach based on its characteristics. In the case of the SPO method, we discovered that it is sufficient to initialize the cache with just a few solutions: those needed to filter the elements that still have to be learned properly. For the Blackbox method, we designed a smart-sampling approach based on inferred solutions.
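
The caching idea can be sketched as follows: keep a pool of known-feasible solutions and, during training, evaluate predicted costs against the pool instead of calling the solver for every sample. This is an illustrative sketch assuming a linear minimization objective, not the project's implementation.

```python
import numpy as np

class SolutionCache:
    """Pool of known-feasible solutions queried in place of a solver call."""
    def __init__(self, initial_solutions):
        self.pool = list(initial_solutions)  # e.g. a few pre-solved instances

    def best(self, costs: np.ndarray) -> np.ndarray:
        """Return the cached solution minimizing the predicted cost c^T x."""
        return min(self.pool, key=lambda x: float(costs @ x))

    def add(self, solution: np.ndarray) -> None:
        """Grow the pool whenever the true solver is actually invoked."""
        self.pool.append(solution)
```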

Relevance:

30.00%

Publisher:

Abstract:

Emissions estimation, both during homologation and in standard driving, is one of the new challenges that the automotive industry has to face. The new European and American regulations will permit ever lower quantities of carbon monoxide emissions and will require that all vehicles be able to monitor their own pollutant production. Since numerical models are too computationally expensive and too approximate, new solutions based on Machine Learning are replacing standard techniques. In this project we considered a real V12 internal combustion engine and propose a novel approach that pushes Random Forests to generate meaningful predictions even in extreme cases (extrapolation, very high-frequency peaks, noisy instrumentation, etc.). The present work also proposes a data preprocessing pipeline for strongly unbalanced datasets and a reinterpretation of the regression problem as a classification problem in a logarithmically quantized domain. Results have been evaluated for two different models, representing a pure interpolation scenario (more standard) and an extrapolation scenario, to test the out-of-bounds robustness of the model. The employed metrics take into account the different aspects that can affect the homologation procedure, so the final analysis focuses on combining all the specific performances to reach overall conclusions.
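
The regression-as-classification reinterpretation can be sketched as follows: positive emission values are mapped to logarithmically spaced bins and a Random Forest classifier predicts the bin. The bin count, model settings and synthetic data below are illustrative assumptions, not the thesis configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def log_quantize(y: np.ndarray, n_bins: int = 32, eps: float = 1e-6):
    """Map positive target values to logarithmically spaced class indices."""
    edges = np.logspace(np.log10(y.min() + eps), np.log10(y.max() + eps), n_bins + 1)
    classes = np.clip(np.digitize(y, edges) - 1, 0, n_bins - 1)
    return classes, edges

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))     # stand-in engine operating-point features
y = np.exp(rng.normal(size=1000))  # stand-in positive CO emission values

y_cls, edges = log_quantize(y)
# class_weight="balanced" addresses the strong class imbalance mentioned above.
rf = RandomForestClassifier(n_estimators=300, class_weight="balanced").fit(X, y_cls)

# Decode a predicted class back to an emission value via the bin geometric mean.
centers = np.sqrt(edges[:-1] * edges[1:])
y_hat = centers[rf.predict(X[:5])]
```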