719 results for Learning and teaching
Abstract:
Most existing open-source search engines use keyword or tf-idf based techniques to find documents and web pages relevant to an input query. Although these methods, aided by PageRank or knowledge graphs, have proved effective in some cases, they often fail to retrieve relevant results for more complicated queries that require semantic understanding. In this thesis, a self-supervised information retrieval system based on transformers is employed to build a semantic search engine over the library of the Gruppo Maggioli company. Semantic search, or search with meaning, refers to understanding the query instead of simply matching words and, in general, represents knowledge in a way suitable for retrieval. We investigate a new self-supervised strategy for training on unlabeled data, based on the creation of pairs of 'artificial' queries and their respective positive passages. We claim that by removing the reliance on labeled data, we can use the large volume of unlabeled material on the web without being limited to languages or domains where labeled data is abundant.
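The abstract does not spell out how the 'artificial' queries are built; one common self-supervised recipe (the Inverse Cloze Task) draws a sentence from a passage as a pseudo-query and keeps the remainder as its positive passage. A minimal sketch of that idea, where the function name and example passage are illustrative assumptions, not the thesis's actual pipeline:

```python
import random

def make_pseudo_pairs(passages, seed=0):
    """Create (query, positive_passage) training pairs without labels.

    Each passage is split into sentences; one sentence is drawn as an
    'artificial' query and the remaining text serves as its positive
    passage (the Inverse Cloze Task idea).
    """
    rng = random.Random(seed)
    pairs = []
    for passage in passages:
        sentences = [s.strip() for s in passage.split(".") if s.strip()]
        if len(sentences) < 2:
            continue  # need at least one sentence left over as the positive
        idx = rng.randrange(len(sentences))
        query = sentences[idx]
        positive = ". ".join(s for i, s in enumerate(sentences) if i != idx)
        pairs.append((query, positive))
    return pairs

pairs = make_pseudo_pairs([
    "Transformers encode text into dense vectors. Retrieval compares query "
    "and passage vectors. Nearest neighbours are returned as results."
])
```

Pairs produced this way can then feed a contrastive loss for a transformer dual encoder, with other passages in the batch acting as negatives.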
Abstract:
The rapidly changing digital landscape is having a significant influence on learning and teaching. Our study assesses the response of one higher education institution (HEI) to the changing digital landscape and its transition into enhanced blended learning, which seeks to go beyond the early implementation stage to make the most effective use of online learning technologies to enhance the student experience and student learning outcomes. Evidence from a qualitative study comprising 20 semi-structured interviews, informed by a literature review, has resulted in the development of a holistic framework to guide HEIs transitioning into enhanced blended learning. The proposed framework addresses questions relating to the why (change agents), what (institutional considerations), how (organisational preparedness) and who (stakeholders) of transitions into enhanced blended learning. The involvement of all stakeholder groups is essential to a successful institutional transition into enhanced blended learning.
Abstract:
This paper documents the development and findings of the Good Practice Report on Technology-Enhanced Learning and Teaching funded by the Australian Learning and Teaching Council (ALTC). Developing the Good Practice Report required a meta-analysis of 33 ALTC learning and teaching projects relating to technology funded between 2006 and 2010. This report forms one of 12 completed Good Practice Reports on a range of different topics commissioned by the ALTC and Australian Government Office for Learning and Teaching (OLT). The reports aim to reduce issues relating to dissemination that projects face within the sector by providing educators with an efficient and accessible way of engaging with and filtering through the resources and experiences of numerous learning and teaching projects funded by the ALTC and OLT. The Technology-Enhanced Learning and Teaching Report highlights examples of good practice and provides outcomes and recommendations based on the meta-analysis of the relevant learning and teaching projects. However, in order to ensure the value of these reports is realised, educators need to engage with the reports and integrate the information and findings into their practice. The paper concludes by detailing how educational networks can be utilised to support dissemination.
Abstract:
In industrial settings, Constrained Optimization is known for its strong modeling capability and performance, and it stands as one of the most powerful, explored, and exploited tools for prescriptive tasks. The number of applications is huge, ranging from logistics to transportation, packing, production, telecommunications, and scheduling. The main reason behind this success is the remarkable effort put in over the last decades by the OR community to develop realistic models and devise exact or approximate methods to solve the largest variety of constrained and combinatorial optimization problems, together with the spread of computational power and easily accessible OR software and resources. On the other hand, technological advancements have led to a wealth of data never seen before and increasingly push towards methods able to extract useful knowledge from it; among data-driven methods, Machine Learning techniques appear to be among the most promising, thanks to their successes in domains like image recognition, natural language processing, and game playing, as well as the amount of research involved. The purpose of the present research is to study how Machine Learning and Constrained Optimization can be used together to achieve systems able to leverage the strengths of both: this would open the way to exploiting decades of research on resolution techniques for COPs while constructing models able to adapt and learn from available data. In the first part of this work, we survey the existing techniques and classify them according to the type, method, or scope of the integration; subsequently, we introduce Moving Target, a novel and general algorithm devised to inject knowledge into learning models through constraints. In the last part of the thesis, two applications stemming from real-world projects carried out in collaboration with Optit are presented.
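The abstract does not describe the Moving Target algorithm itself; as a generic illustration of injecting constraint knowledge into a learning model (not the thesis's method), a constraint can be folded into the training loss as a penalty term. The function name, toy data, and bound below are hypothetical:

```python
import numpy as np

def fit_constrained(x, y, w_max=1.0, lam=10.0, lr=0.01, steps=2000):
    """Least-squares fit of y ~ w*x with the constraint w <= w_max
    injected as a quadratic penalty in the loss:
        L(w) = mean((w*x - y)^2) + lam * max(0, w - w_max)^2
    Minimized here by plain gradient descent.
    """
    w = 0.0
    for _ in range(steps):
        grad_fit = np.mean(2 * (w * x - y) * x)       # data-fitting gradient
        grad_pen = 2 * lam * max(0.0, w - w_max)      # constraint gradient
        w -= lr * (grad_fit + grad_pen)
    return w

x = np.array([1.0, 2.0, 3.0])
y = 2.0 * x                 # unconstrained optimum would be w = 2
w = fit_constrained(x, y)   # penalty pulls w back towards the bound w <= 1
```

With a finite penalty weight the constraint is soft: the solution settles between the unconstrained optimum and the bound, and raising `lam` pushes it closer to feasibility.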
Abstract:
This dissertation contributes to the scholarly debate on temporary teams by exploring team interactions and boundaries. The fundamental challenge in temporary teams originates from temporary participation. First, as participants join the team for a short period of time, there is not enough time to build trust, share understanding, and have effective interactions. Consequently, team outputs and practices built on team interactions become vulnerable. Secondly, as participants move on and off the teams, team boundaries become blurred over time. This leads to uncertainty among team participants and leaders about who is, or is not, identified as a team member, causing collective disagreement within the team. Focusing on the above-mentioned challenges, we conducted this research in healthcare organisations, since the use of temporary teams in healthcare and hospital settings is prevalent. In particular, we focused on orthopaedic teams that provide personalised treatments for patients using 3D printing technology. Qualitative and quantitative data were collected using interviews, observations, questionnaires and archival data at the Rizzoli Orthopaedic Institute, Bologna, Italy. This study provides the following research outputs. The first is a conceptual study that explores the temporary teams literature using bibliometric analysis and a systematic literature review to highlight research gaps. The second paper qualitatively studies temporary relationships within the teams, collecting data through group interviews and observations. The results highlight the role of short-term dyadic relationships as a ground for sharing and transferring knowledge at the team level. Moreover, the hierarchical structure of the teams facilitates knowledge sharing by supporting dyadic relationships within and beyond team meetings. The third paper investigates the impact of blurred boundaries on temporary teams' performance. Using quantitative data collected through questionnaires and archival data, we conclude that boundary blurring in terms of fluidity, overlap and dispersion impacts team performance differently at high and low levels of task complexity.
Abstract:
The term Artificial Intelligence has acquired a lot of baggage since its introduction, and in its current incarnation it is synonymous with Deep Learning. The sudden availability of data and computing resources has opened the gates to myriad applications. Not all are created equal, though, and problems can arise especially in fields not closely related to the tasks that concern the tech companies that spearheaded DL. The perspective of practitioners seems to be changing, however. Human-Centric AI has emerged in the last few years as a new way of thinking about DL and AI applications from the ground up, with special attention to their relationship with humans. The goal is designing a system that can gracefully integrate into already established workflows, as in many real-world scenarios AI may not be good enough to completely replace humans. Often such replacement may even be unneeded or undesirable. Another important perspective comes from Andrew Ng, a DL pioneer, who recently started shifting the focus of development from "better models" towards better, and smaller, data. He calls this approach Data-Centric AI. Without downplaying the importance of pushing the state of the art in DL, we must recognize that if the goal is creating a tool for humans to use, more raw performance may not translate into more utility for the final user. A Human-Centric approach is compatible with a Data-Centric one, and we find that the two overlap nicely when human expertise is used as the driving force behind data quality. This thesis documents a series of case studies where these approaches were employed, to different extents, to guide the design and implementation of intelligent systems. We found that human expertise proved crucial in improving datasets and models. The last chapter includes a slight deviation, with studies on the pandemic, still preserving the human- and data-centric perspective.
Abstract:
Creativity seems mysterious; when we experience a creative spark, it is difficult to explain how we got that idea, and we often recall notions like "inspiration" and "intuition" when we try to explain the phenomenon. The fact that we are clueless about how a creative idea manifests itself does not necessarily imply that a scientific explanation cannot exist. We are unaware of how we perform certain tasks, such as biking or understanding language, yet we have more and more computational techniques that can replicate and hopefully explain such activities. We should understand that every creative act is a fruit of experience, society, and culture. Nothing comes from nothing. Novel ideas are never utterly new; they stem from representations already in the mind. Creativity involves establishing new relations between pieces of information we already had: the greater the knowledge, the greater the possibility of finding uncommon connections, and the greater the potential to be creative. In this vein, a beneficial approach to a better understanding of creativity must include computational or mechanistic accounts of such inner procedures and of the formation of the knowledge that enables such connections. That is the aim of Computational Creativity: to develop computational systems for emulating and studying creativity. Hence, this dissertation focuses on two related research areas: discussing computational mechanisms to generate creative artifacts and describing some implicit cognitive processes that can form the basis for creative thoughts.
Abstract:
The development of Next Generation Sequencing has brought Biology into the Big Data era. The ever-increasing gap between proteins with known sequences and those with a complete functional annotation calls for computational methods for automatic structural and functional annotation. My research has focused on proteins and has led so far to the development of three novel tools, DeepREx, E-SNPs&GO and ISPRED-SEQ, based on Machine and Deep Learning approaches. DeepREx computes the solvent exposure of residues in a protein chain. This problem is relevant for the definition of structural constraints on the possible folding of the protein. DeepREx exploits Long Short-Term Memory layers to capture residue-level interactions between positions distant in the sequence, achieving state-of-the-art performance. With DeepREx, I conducted a large-scale analysis investigating the relationship between the solvent exposure of a residue and its probability of being pathogenic upon mutation. E-SNPs&GO predicts the pathogenicity of a Single Residue Variation. Variations occurring in a protein sequence can have different effects, possibly leading to the onset of diseases. E-SNPs&GO exploits protein embeddings generated by two novel Protein Language Models (PLMs), as well as a new way of representing functional information from the Gene Ontology. The method achieves state-of-the-art performance and is extremely time-efficient compared to traditional approaches. ISPRED-SEQ predicts the presence of Protein-Protein Interaction sites in a protein sequence. Knowing how a protein interacts with other molecules is crucial for accurate functional characterization. ISPRED-SEQ exploits a convolutional layer to parse local context after embedding the protein sequence with two novel PLMs, greatly surpassing the current state of the art. All methods are published in international journals and are available as user-friendly web servers. They have been developed keeping in mind the standard guidelines for FAIRness (FAIR: Findable, Accessible, Interoperable, Reusable) and are integrated into the public collection of tools provided by ELIXIR, the European infrastructure for Bioinformatics.
Abstract:
Recent scholarly works on the relationship between 'fashion' and 'sustainability' have identified a need for a systemic transition towards fashion media 'for sustainability'. Nevertheless, academic research on the topic is still limited and rather circumscribed to the analysis of marketing practices; only recently have more systemic and critical analyses of the symbolic production of sustainability through fashion media been undertaken. Responding to this need for an in-depth investigation of 'sustainability'-related media production, my research focuses on the 'fashion sustainability'-related discursive formations in the context of one of the most influential fashion magazines today, Vogue Italia. In order to investigate the ways in which the 'sustainability' discourse was formed and has evolved, the study considered the entire Vogue Italia archive from 1965 to 2021. The data collection was carried out in two phases, and the identified discursive units were then critically analysed in depth to allow for a grounded assessment of the media giant's position. The Discourse-Historical Approach provided a methodological base for the analysis, which took into consideration the various levels of context: the immediate textual and intertextual, but also the broader socio-cultural context of the predominant, over-production-oriented and capital-led fashion system. The findings led to a delineation of the evolution of the 'fashion sustainability' discourse, unveiling how, despite Vogue Italia's self-definition as attentive to 'sustainability'-related topics, the magazine systemically employs discursive strategies which significantly mitigate the meaning of the 'sustainable commitment' and thus the meaning of 'fashion sustainability'.
Abstract:
Many diseases affect the thyroid gland, and among them is carcinoma. Thyroid cancer is the most common endocrine neoplasm and the second most frequent cancer in the 0-49 age group. This thesis deals with two studies I conducted during my PhD. The first concerns the development of a Deep Learning model to assist the pathologist in the screening of thyroid cytology smears. This tool, created in collaboration with Prof. Diciotti of the DEI-UNIBO "Guglielmo Marconi" Department of Electrical Energy and Information Engineering, has an important clinical implication in that it allows patients to be stratified between those who should undergo surgery and those who should not. The second concerns the application of spatial transcriptomics to well-differentiated thyroid carcinomas to better understand their invasion mechanisms and thus to better comprehend which genes may be involved in the proliferation of these tumors. This project was made possible through a fruitful collaboration with the Gustave Roussy Institute in Paris. Studying thyroid carcinoma in depth is essential to improve patient care, increase survival rates, and enhance the overall understanding of this prevalent cancer. It can lead to more effective prevention, early detection, and treatment strategies that benefit both patients and the healthcare system.
Abstract:
Explaining to a robot what to do is a difficult undertaking, and so far only certain people have been able to do it, such as programmers or operators who have learned how to use controllers to communicate with a robot. My internship's goal was to create and develop a framework that would make this easier. The system uses deep learning techniques to recognize a set of hand gestures, both static and dynamic. Then, based on the gesture, it sends a command to a robot. To be as generic as feasible, the communication is implemented using the Robot Operating System (ROS). Furthermore, users can add new recognizable gestures and link them to new robot actions; a finite state automaton enforces the validation of user input and the correct action sequence. Finally, users can create and use a macro to describe a sequence of actions performable by a robot.
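As an illustration of how a finite state automaton can enforce a correct action sequence, here is a minimal sketch; the gesture names, states, and commands are hypothetical, not the framework's actual vocabulary:

```python
class GestureFSM:
    """Minimal finite state automaton mapping recognized gestures to
    robot commands while rejecting out-of-sequence input."""

    def __init__(self, transitions, start="idle"):
        # transitions: {(state, gesture): (next_state, command)}
        self.transitions = transitions
        self.state = start

    def on_gesture(self, gesture):
        key = (self.state, gesture)
        if key not in self.transitions:
            return None  # gesture not valid in the current state
        self.state, command = self.transitions[key]
        return command

fsm = GestureFSM({
    ("idle", "open_hand"): ("armed", "enable_motors"),
    ("armed", "point"):    ("moving", "move_forward"),
    ("moving", "fist"):    ("idle", "stop"),
})
```

In the real system the returned command would be published on a ROS topic, and a macro would simply replay a stored sequence of gestures through the same automaton.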
Abstract:
In recent years, machine learning has gained increasing popularity in scientific research and its applications. The aim of this thesis was to study machine learning in its general aspects and to apply it to computer vision problems. The thesis addresses the challenge of explaining, from a theoretical point of view, the algorithms underlying convolutional neural networks, and then tackles two concrete image recognition problems: the MNIST dataset (images of handwritten digits) and a dataset referred to as the "MELANOMA dataset" (images of melanomas and healthy nevi). Using the techniques explained in the theoretical section, satisfactory results were obtained for both datasets, with an accuracy of 98% on MNIST and 76.8% on the MELANOMA dataset.
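The core operation of the convolutional networks used for these image recognition tasks is a 2D convolution sliding a small kernel over the image; a minimal numpy sketch of that operation (illustrative, not the thesis's code):

```python
import numpy as np

def conv2d(image, kernel):
    """'Valid' 2D cross-correlation, the core operation of a CNN layer:
    slide the kernel over the image and sum elementwise products."""
    kh, kw = kernel.shape
    H, W = image.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Each 2x2 window over an all-ones image sums four ones
out = conv2d(np.ones((4, 4)), np.ones((2, 2)))  # -> 3x3 array of 4.0
```

A CNN stacks many such learned kernels with nonlinearities and pooling, which is what allows it to classify MNIST digits or skin lesion images.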
Abstract:
The main subject of this article is to show the parallelism between the Ellingham and Van't Hoff diagrams. The first is a graphical representation of the change in the standard Gibbs free energy (ΔrG°) as a function of T, introduced by Ellingham in 1944 in order to study metallurgical processes involving oxides and sulphides. The Van't Hoff diagram, on the other hand, is a representation of ln K versus 1/T. The equivalence between the two diagrams is easily demonstrated through simple mathematical manipulation. In order to show the parallelism between the diagrams, both are presented briefly and two examples are discussed. The comparison of the two diagrams will surely be helpful to students and teachers in their learning and teaching activities, and will certainly enrich important aspects of chemical thermodynamics.
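The equivalence rests on the standard relation between the standard Gibbs free energy and the equilibrium constant, assuming ΔrH° and ΔrS° are approximately temperature-independent:

```latex
\Delta_r G^{\circ} = -RT \ln K
\quad\Longrightarrow\quad
\ln K = -\frac{\Delta_r G^{\circ}}{RT}
      = -\frac{\Delta_r H^{\circ}}{R}\cdot\frac{1}{T}
        + \frac{\Delta_r S^{\circ}}{R}
```

Thus an Ellingham plot of ΔrG° = ΔrH° − T ΔrS° versus T has slope −ΔrS° and intercept ΔrH°, while a Van't Hoff plot of ln K versus 1/T has slope −ΔrH°/R and intercept ΔrS°/R: the two diagrams encode the same pair of thermodynamic parameters.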
Abstract:
“Closing the gap in curriculum development leadership” is a Carrick-funded University of Queensland project which is designed to address two related gaps in current knowledge and in existing professional development programs for academic staff. The first gap is in our knowledge of curriculum and pedagogical issues as they arise in relation to multi-year sequences of study, such as majors in generalist degrees, or core programs in more structured degrees. While there is considerable knowledge of curriculum and pedagogy at the course or individual unit of study level (e.g. Philosophy I), there is very little properly conceptualised, empirically informed knowledge about student learning (and teaching) over, say, a three-year major sequence in a traditional Arts or Sciences subject. The Carrick-funded project aims to (begin to) fill this gap through bottom-up curriculum development projects across the range of UQ’s offerings. The second gap is in our professional development programs and, indeed, in our recognition and support for the people who are in charge of such multi-year sequences of study. The major convener or program coordinator is not as well supported, in Australian and overseas professional development programs, as the lecturer in charge of a single course (or unit of study). Nor is her work likely to be taken account of in workload calculations or for the purposes of promotion and career advancement more generally. The Carrick-funded project aims to fill this gap by developing, in consultation with crucial stakeholders, amendments to existing university policies and practices. The attached documents provide a useful introduction to the project. For more information, please contact Fred D’Agostino at f.dagostino@uq.edu.au.
Abstract:
18th SPACE Annual Conference and EURASHE-SEPHE Seminar 21-24 March 2007 Thursday 22 March 2007