12 results for Studio Based Learning

in AMS Tesi di Laurea - Alm@DL - Università di Bologna


Relevance: 100.00%

Abstract:

Higher education in Europe has undergone a significant reform process: interest has grown in a student-centred, project-based model of learning that fosters the development of transversal skills – project-based learning (PBL). Introducing PBL into universities requires a process of teaching innovation: the curriculum of a PBL course and the skills required of the teacher differ from those of traditional instruction. Without adequate attention to support methods for teachers and students, these innovative approaches will not be widely adopted. The aim of this study is to determine how a PBL course can be implemented without the presence of PBL experts. The research questions are: can a PBL approach be implemented effectively without the involvement of design-method experts? How are the facilitation roles distributed in this configuration, and how is the role of the classroom tutor defined? How can support for the implementation of the course be strengthened? The AIM-R methodology was used to answer these questions. The first iteration of the implementation of such a course is presented, during which research and data-collection activities were carried out. Facilitation was entrusted to three different figures: the lecturer, the classroom tutor and professional coaches. On this basis, the elements of a kit of support material for the implementation of PBL courses were defined. In addition to a set of shared documents and tools, handbooks were produced to guide students, tutors and lecturers through the implementation of courses of this kind. Future research should aim to identify additional factors that make the support kit applicable to courses based on a model other than Tech to Market, or that use design tools different from those proposed during the first iteration.

Relevance: 100.00%

Abstract:

Recent experiments have revealed the fundamental importance of neuromodulatory action on the activity-dependent synaptic plasticity underlying behavioral learning and spatial memory formation. Neuromodulators affect synaptic plasticity through the modification of the dynamics of receptors on the synaptic membrane. However, chemical substances other than neuromodulators, such as receptor co-agonists, can influence receptor dynamics and thus participate in determining plasticity. Here we focus on D-serine, which has been observed to affect the activity thresholds of synaptic plasticity by co-activating NMDA receptors. We use a computational model for spatial value learning with plasticity between two place-cell layers. In the model, D-serine release is CB1R-mediated, and the model reproduces the impairment of spatial memory due to the astrocytic CB1R knockout for a mouse navigating in the Morris water maze. The addition of path-constraining obstacles shows how the performance impairment depends on the environment's topology. The model can explain the experimental evidence and produce useful testable predictions to increase our understanding of the complex mechanisms underlying learning.
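
A minimal sketch of the kind of mechanism described above, assuming a reward-modulated Hebbian rule whose plasticity threshold is shifted by the D-serine level; all names, shapes and parameters are illustrative, not the thesis model:

```python
# Hypothetical sketch: value learning between two place-cell layers, where a
# D-serine level (NMDAR co-activation) lowers the effective LTP threshold.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 100, 100                    # place cells in the two layers
W = rng.normal(0.0, 0.1, (n_out, n_in))   # plastic connections

def plasticity_update(W, pre, post, reward, d_serine, lr=0.01, theta0=0.5):
    """Reward-modulated Hebbian update with a sliding threshold.

    Higher D-serine (more NMDAR co-activation) lowers the threshold theta,
    making potentiation easier; below theta the update is depressing.
    """
    theta = theta0 * (1.0 - d_serine)     # co-agonist shifts the threshold
    hebb = np.outer(post - theta, pre)    # LTP above theta, LTD below
    return W + lr * reward * hebb

# One step: presynaptic activity at the animal's position, postsynaptic
# response, and a reward signal (e.g. reaching the hidden platform).
pre = rng.random(n_in)
post = np.clip(W @ pre, 0.0, None)
post = post / post.max()                  # scale so the peak response is 1
W = plasticity_update(W, pre, post, reward=1.0, d_serine=0.8)
```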

Relevance: 100.00%

Abstract:

This thesis, entitled "Sviluppo di una piattaforma per fornire contenuti formativi sfruttando la gamification: un caso di studio aziendale" (Development of a platform for delivering training content through gamification: a company case study), deals with e-learning and game-based learning and with how and when they can be applied, and also presents a prototype web application built for this purpose. Specifically, the first chapter consists of three main sections: the first introduces the concept of e-learning and the many forms it can take, together with some historical notes that place the phenomenon in time; the second covers the fields of application and the types of teaching that fall under the term "game-based learning"; the third, on builders for gamified experiences, presents and analyses two web applications that can be used to create a gamified training experience in school and/or workplace settings. The second and third chapters, entitled "Tecnologie" (Technologies) and "Applicazione web: BKM – Learning Game" respectively, are closely related: the former presents the technologies used to build the thesis project (specifically HTML, CSS, JavaScript, Node.js, Vue.js and JSON), while the latter describes the resulting web application as a whole. The project was implemented during the internship in preparation for the final examination, at the company Bookmark s.r.l.

Relevance: 90.00%

Abstract:

With the advent of the Internet, an extremely powerful technological tool for disseminating information and communicating at a distance, the way we learn has also changed: even schools tend to move away from traditional textbooks towards devices from which books, lecture notes, tests, videos and every other kind of learning material can be downloaded in electronic form. This has given rise to a genuinely new way of learning called e-learning, which is faster, more convenient and richer in alternatives than the old offline model distributed first on floppy disks and later on CD-ROMs. E-learning stands for electronic(-based) learning, and it is a teaching methodology that exploits, and is facilitated by, resources and services available and accessible online. Numerous e-learning platforms exist today; one of them, the authoring tool AContent, is the core of this thesis. This document describes the design and implementation of copyright-policy management for the AContent tool. The goal is to make it possible to assign a copyright to any type of teaching material that is created, uploaded and/or shared on the platform. The idea is therefore to let users choose among several preset copyright options, based on standard author-rights licences, while also leaving them the opportunity to enter their own policy.
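
A hypothetical sketch of the licensing model described above (preset licences plus a free-form custom policy); the names, fields and licence list are assumptions, not the actual AContent schema:

```python
# Illustrative data model: content can carry a preset licence or a custom policy.
from dataclasses import dataclass
from typing import Optional

PRESET_LICENSES = {
    "CC-BY-4.0": "Creative Commons Attribution 4.0",
    "CC-BY-SA-4.0": "Creative Commons Attribution-ShareAlike 4.0",
    "CC-BY-NC-4.0": "Creative Commons Attribution-NonCommercial 4.0",
    "ALL-RIGHTS-RESERVED": "All rights reserved",
}

@dataclass
class ContentItem:
    title: str
    license_id: Optional[str] = None     # key into PRESET_LICENSES
    custom_policy: Optional[str] = None  # user-supplied policy text

    def effective_license(self) -> str:
        if self.custom_policy:                      # custom policy wins
            return self.custom_policy
        if self.license_id in PRESET_LICENSES:      # otherwise a preset
            return PRESET_LICENSES[self.license_id]
        return "No license assigned"

item = ContentItem(title="Algebra lecture notes", license_id="CC-BY-SA-4.0")
print(item.effective_license())  # Creative Commons Attribution-ShareAlike 4.0
```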

Relevance: 50.00%

Abstract:

This thesis extends a software framework for detecting and tracking people in a scene filmed by a stereoscopic camera. First, the need for a manual offline calibration of the system is removed by exploiting algorithms that, starting from a frame acquired by the camera, identify the plane on which the tracked subjects move. In addition, a deep-learning-based software module is introduced to improve tracking accuracy. This component, which detects the heads present in a frame, makes it possible to restrict the analysed data to the neighbourhood of a person's actual position, excluding objects that the tracking algorithm would otherwise tend to mistake for people.
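
A minimal sketch of one plausible way to estimate the ground plane from a single stereo frame, assuming a RANSAC plane fit over the reconstructed 3D points; this is an illustration of the idea, not the thesis code:

```python
# RANSAC plane fit: find the plane n.x + d = 0 supported by the most points.
import numpy as np

def fit_ground_plane(points, n_iters=500, threshold=0.02, rng=None):
    """points: Nx3 array of 3D points from the stereo reconstruction."""
    rng = rng or np.random.default_rng(0)
    best_inliers, best_plane = 0, None
    for _ in range(n_iters):
        p1, p2, p3 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p2 - p1, p3 - p1)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:                 # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal.dot(p1)
        inliers = np.sum(np.abs(points @ normal + d) < threshold)
        if inliers > best_inliers:
            best_inliers, best_plane = inliers, (normal, d)
    return best_plane

# Synthetic example: a nearly flat cloud standing in for the floor points
points = np.random.default_rng(1).normal(size=(1000, 3)) * [5, 5, 0.01]
normal, d = fit_ground_plane(points)
```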

Relevance: 50.00%

Abstract:

The job of a historian is to understand what happened in the past, resorting in many cases to written documents as a firsthand source of information. Text, however, is not the only source of knowledge. Pictorial representations have also accompanied the main events of the historical timeline. In particular, the opportunity of visually representing circumstances has bloomed since the invention of photography, with the possibility of capturing the occurrence of specific events in real time. Thanks to the widespread use of digital technologies (e.g. smartphones and digital cameras), networking capabilities and the consequent availability of multimedia content, the academic and industrial research communities have developed artificial intelligence (AI) paradigms with the aim of inferring, transferring and creating new layers of information from images, videos, etc. While AI communities are devoting much of their attention to analyzing digital images, from a historical-research standpoint more interesting results may be obtained by analyzing analog images from the pre-digital era. Within this scenario, the aim of this work is to analyze a collection of analog documentary photographs, building upon state-of-the-art deep learning techniques. In particular, the analysis carried out in this thesis aims at producing two results: (a) estimating the date of an image, and (b) recognizing its background socio-cultural context, as defined by a group of historical-sociological researchers. Given these premises, the contributions of this work are: (i) the introduction of a historical dataset including "Family Album" images spanning the whole twentieth century, (ii) the introduction of a new classification task concerning the identification of the socio-cultural context of an image, and (iii) the exploitation of different deep learning architectures to perform image dating and image socio-cultural context classification.
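
A hedged sketch of how the image-dating task could be framed as decade classification with a pretrained backbone; the backbone choice, class count and hyperparameters are assumptions, not the thesis configuration:

```python
# Fine-tune a pretrained CNN to classify photographs into decades.
import torch
import torch.nn as nn
from torchvision import models, transforms

NUM_DECADES = 10  # e.g. 1900s ... 1990s (placeholder granularity)

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, NUM_DECADES)  # replace the head

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def train_step(images, decade_labels):
    """One optimization step over a batch of preprocessed images."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), decade_labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

The same backbone-plus-head recipe would apply to the socio-cultural context classification task, swapping the label set.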

Relevance: 50.00%

Abstract:

Most existing open-source search engines utilize keyword- or tf-idf-based techniques to find documents and web pages relevant to an input query. Although these methods, with the help of a page rank or knowledge graphs, have proved effective in some cases, they often fail to retrieve relevant instances for more complicated queries that require semantic understanding. In this thesis, a self-supervised information retrieval system based on transformers is employed to build a semantic search engine over the library of the Gruppo Maggioli company. Semantic search, or search with meaning, refers to an understanding of the query instead of simply finding word matches and, in general, represents knowledge in a way suitable for retrieval. We chose to investigate a new self-supervised strategy to handle the training on unlabeled data, based on the creation of pairs of 'artificial' queries and the respective positive passages. We claim that, by removing the reliance on labeled data, we can exploit the large volume of unlabeled material on the web without being limited to languages or domains where labeled data is abundant.
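
A hedged sketch of this self-supervised recipe: generate an 'artificial' query per unlabeled passage, then train a bi-encoder on the resulting (query, positive passage) pairs with in-batch negatives. The specific model names are assumptions, not necessarily those used in the thesis:

```python
# Synthetic query generation + bi-encoder training (sentence-transformers).
from transformers import pipeline
from sentence_transformers import SentenceTransformer, InputExample, losses
from torch.utils.data import DataLoader

# 1) Query generation from a passage (doc2query-style model)
query_gen = pipeline("text2text-generation",
                     model="doc2query/msmarco-t5-base-v1")

passages = ["Semantic search represents knowledge in a way suitable for retrieval."]
pairs = [
    InputExample(texts=[query_gen(p, max_length=32)[0]["generated_text"], p])
    for p in passages
]

# 2) Bi-encoder trained with in-batch negatives
model = SentenceTransformer("distilbert-base-multilingual-cased")
loader = DataLoader(pairs, batch_size=16, shuffle=True)
loss = losses.MultipleNegativesRankingLoss(model)  # treats other passages in
model.fit(train_objectives=[(loader, loss)], epochs=1)  # the batch as negatives
```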

Relevance: 50.00%

Abstract:

The final goal of the thesis is a real-world application in a production test-data environment. This includes pre-processing the data, building models and visualizing the results. To this end, different machine-learning models oriented towards outlier prediction are investigated on a real dataset. Finally, the outlier-prediction algorithms are compared and their performance is discussed.
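
A minimal sketch of such a comparison using common scikit-learn outlier detectors; the dataset, contamination rate and metric are placeholders, not the thesis data:

```python
# Compare outlier detectors on synthetic data with injected anomalies.
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.neighbors import LocalOutlierFactor
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (500, 4)),    # inliers
               rng.uniform(-6, 6, (25, 4))])  # injected outliers
y_true = np.r_[np.ones(500), -np.ones(25)]    # sklearn convention: -1 = outlier

detectors = {
    "IsolationForest": IsolationForest(contamination=0.05, random_state=0),
    "LocalOutlierFactor": LocalOutlierFactor(contamination=0.05),
    "OneClassSVM": OneClassSVM(nu=0.05),
}

for name, det in detectors.items():
    y_pred = det.fit_predict(X)               # -1 for outliers, 1 for inliers
    recall = np.mean(y_pred[y_true == -1] == -1)
    print(f"{name}: outlier recall = {recall:.2f}")
```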

Relevance: 50.00%

Abstract:

The main objective of my thesis work is to exploit Kubeflow, Google's native, open-source ML platform, and specifically Kubeflow Pipelines, to execute a scalable Federated Learning (FL) process in a simplified, 5G-like test architecture hosting a Kubernetes cluster. The widely adopted FedAvg algorithm and its optimization FedProx are applied, taking advantage of the platform's ability to ease the development and production cycle of this specific FL process. FL algorithms are increasingly promising and adopted both in cloud application development and in 5G communication enhancement, where data coming from the monitoring of the underlying telco infrastructure is used, and training and data aggregation are executed at edge nodes to optimize the algorithm's global model (which could be used, for example, for resource provisioning to reach an agreed QoS for the underlying network slice). After a study of the available papers and scientific articles on FL, and with the help of the CTTC, which suggested studying and using Kubeflow to run the algorithm, we found that this approach to the deployment of the whole FL cycle was not documented and might be worth investigating in more depth. This study may help prove the suitability of the Kubeflow platform itself for developing new FL algorithms that will support new applications, and in particular it tests the performance of the FedAvg algorithm in a simulated client-to-cloud communication, using the MNIST dataset as an FL benchmark.
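
A minimal sketch of the FedAvg aggregation step at the heart of such a pipeline: each round, clients train locally and the server averages their weights, weighted by local sample counts. This is illustrative, not the thesis pipeline code:

```python
# Weighted federated averaging of per-client model weights.
import numpy as np

def fed_avg(client_weights, client_sizes):
    """client_weights: one list of ndarrays (layers) per client."""
    total = sum(client_sizes)
    return [
        sum(w[layer] * (n / total) for w, n in zip(client_weights, client_sizes))
        for layer in range(len(client_weights[0]))
    ]

# Three clients with a 2-layer model each (weights as ndarrays)
rng = np.random.default_rng(0)
clients = [[rng.normal(size=(4, 4)), rng.normal(size=4)] for _ in range(3)]
sizes = [1200, 800, 2000]          # e.g. local MNIST shard sizes
global_model = fed_avg(clients, sizes)
```

In a Kubeflow Pipelines deployment, the local-training and aggregation steps would each run as pipeline components, with the server component applying an update like this one per round.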

Relevance: 50.00%

Abstract:

The scientific success of the LHC experiments at CERN highly depends on the availability of computing resources that efficiently store, process and analyse the amount of data collected every year. This is ensured by the Worldwide LHC Computing Grid infrastructure, which connects computing centres distributed all over the world through a high-performance network. The LHC has an ambitious experimental programme for the coming years, which includes large investments and improvements both in the detector hardware and in the software and computing systems, in order to deal with the huge increase in the event rate expected from the High Luminosity LHC (HL-LHC) phase and, consequently, with the huge amount of data that will be produced. In recent years the role of Artificial Intelligence has become relevant in the High Energy Physics (HEP) world. Machine Learning (ML) and Deep Learning algorithms have been successfully used in many areas of HEP, such as online and offline reconstruction programs, detector simulation, object reconstruction and identification, and Monte Carlo generation, and they will certainly be crucial in the HL-LHC phase. This thesis aims at contributing to a CMS R&D project regarding an ML "as a Service" solution for HEP needs (MLaaS4HEP). It consists of a data service able to perform an entire ML pipeline (reading data, processing data, training ML models, serving predictions) in a completely model-agnostic fashion, directly using ROOT files of arbitrary size from local or distributed data sources. This framework has been updated by adding new features in the data preprocessing phase, allowing more flexibility for the user. Since the MLaaS4HEP framework is experiment-agnostic, the ATLAS Higgs Boson ML challenge was chosen as the physics use case, with the aim of testing MLaaS4HEP and the contribution made with this work.
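
A hedged sketch of the first stage of such a model-agnostic pipeline: reading branches from a ROOT file into flat arrays ready for training, here using the uproot library. The file, tree and branch names are placeholders:

```python
# Read selected branches from a ROOT file into NumPy arrays.
import uproot
import numpy as np

with uproot.open("events.root") as f:         # local path or remote XRootD URL
    tree = f["Events"]
    arrays = tree.arrays(["lep_pt", "lep_eta", "missing_et"], library="np")

# Assemble a feature matrix in a model-agnostic way
X = np.column_stack([arrays[b] for b in ("lep_pt", "lep_eta", "missing_et")])
print(X.shape)                                # (n_events, n_features)
```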

Relevance: 50.00%

Abstract:

Vision systems are powerful tools playing an increasingly important role in modern industry, where they detect errors and maintain product standards. With the wider availability of affordable industrial cameras, computer vision algorithms have been increasingly applied to the monitoring of industrial manufacturing processes. Until a few years ago, industrial computer vision applications relied only on ad-hoc algorithms designed for the specific object and acquisition setup being monitored, with a strong focus on co-designing the acquisition and processing pipeline. Deep learning has overcome these limits, providing greater flexibility and faster reconfiguration. In this work, the process to be inspected is the formation of packs of vials entering a freeze-dryer, a common scenario in pharmaceutical active-ingredient packaging lines. To ensure that the machine produces proper packs, a vision system is installed at the entrance of the freeze-dryer to detect any anomalies, with execution times compatible with the production specifications. Other constraints come from the sterility and safety standards required in pharmaceutical manufacturing. This work presents an overview of the production line, with particular focus on the vision system designed, and of all the trials conducted to obtain the final performance. Transfer learning, which alleviates the need for large amounts of training data, combined with data augmentation methods consisting in the generation of synthetic images, was used to effectively increase performance while reducing the cost of data acquisition and annotation. The proposed vision algorithm is composed of two main subtasks, devoted respectively to vial counting and discrepancy detection. The first was trained on more than 23k vials (about 300 images) and tested on 5k more (about 75 images), whereas 60 training images and 52 testing images were used for the second.
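
A minimal sketch of the data-augmentation step described above, generating synthetic variants of the few annotated pack images with standard torchvision transforms; the specific transforms and magnitudes are assumptions:

```python
# Generate augmented variants of an annotated vial-pack image.
from torchvision import transforms
from PIL import Image

augment = transforms.Compose([
    transforms.RandomRotation(5),                          # slight pack misalignment
    transforms.ColorJitter(brightness=0.2, contrast=0.2),  # lighting variation
    transforms.RandomResizedCrop(512, scale=(0.9, 1.0)),   # small framing changes
])

image = Image.open("vial_pack.png").convert("RGB")         # placeholder path
synthetic = [augment(image) for _ in range(10)]            # 10 variants per image
```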

Relevance: 50.00%

Abstract:

Unmanned Aerial Vehicles (UAVs) equipped with cameras have rapidly been deployed in a wide range of applications, such as smart cities, agriculture, and search and rescue. Even though UAV datasets exist, the number of open, high-quality UAV datasets is limited. We aim to overcome this lack of high-quality annotated data by developing a simulation framework for the parametric generation of synthetic data. The framework accepts input via a serializable format. The input specifies which environment preset is used and the objects to be placed in the environment, along with their position and orientation as well as additional information such as object color and size. The result is an environment able to produce UAV-typical data: the RGB image from the UAV's camera, plus the altitude, roll, pitch and yaw of the UAV. Beyond the image generation process, we improve the photorealism of the resulting image data using synthetic-to-real transfer learning methods. Transfer learning focuses on storing the knowledge gained while solving one problem and applying it to a different, although related, problem. This approach has been widely researched in other related fields, and results show it to be an interesting area to investigate. Since simulated images are easy to create and synthetic-to-real translation has shown good-quality results, we are able to generate pseudo-realistic images. Furthermore, object labels are inherently given, so we can extend the already existing UAV datasets with images of realistic quality and high-resolution metadata. During the development of this thesis we were able to produce a result of 68.4% on UAVid, which can be considered a new state-of-the-art result on this dataset.
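
A hedged sketch of what the serializable scene specification described above could look like; the schema (field names, preset names, units) is a hypothetical illustration, not the framework's actual format:

```python
# Write a JSON scene specification for the simulator to consume.
import json

scene_spec = {
    "environment": "suburban_day",           # environment preset
    "objects": [
        {
            "type": "car",
            "position": [12.0, 3.5, 0.0],    # x, y, z in metres
            "orientation": [0.0, 0.0, 90.0], # roll, pitch, yaw in degrees
            "color": "red",
            "size": 1.0,
        }
    ],
    "uav": {"altitude": 50.0, "roll": 0.0, "pitch": -15.0, "yaw": 180.0},
}

with open("scene.json", "w") as f:
    json.dump(scene_spec, f, indent=2)       # consumed by the simulation framework
```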