16 results for learning and digital media
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
The objective of the present research is to describe and explain populist actors and populism as a concept, and their representation on social and legacy media during the 2019 EU elections in Finland, Italy and the Netherlands. This research tackles the topic of European populism in the context of political communication and its relation to both legacy and digital media within the hybrid media system. Starting from the consideration that populism and populist rhetoric are challenging concepts to define, I suggest that they should be addressed and analyzed through a combination of methods and theoretical perspectives, namely Communication Studies, Corpus Linguistics, Political Theory, Rhetoric and Corpus-Assisted Discourse Studies. This thesis considers data of different provenance. On the one hand, for the legacy media part, newspaper articles were collected in the three countries under study from 1 to 31 May 2019. Each country’s legacy system is represented by three quality papers, and the articles were collected according to a selection of keywords (European Union Elections and Populism, in each of the three languages). On the other hand, the digital media data consist of tweets collected during the same timeframe, based on particular country-specific hashtags and on tweets by identified populist actors. To meet the objective of this study, three research questions are posed, and the analyses leading to the results are presented and discussed in depth. The results of this research provide valuable and novel insights into how populism as a theme and a concept is portrayed in the context of the European elections, both in legacy and digital media and in political communication in general.
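The two-pronged data-collection step described above (country-specific hashtags plus tweets by identified populist actors, within a fixed May 2019 window) can be sketched as a simple filter. This is an illustrative sketch only: the hashtags, account handles, and field names below are invented placeholders, not the study's actual selection.

```python
# Hypothetical sketch of the tweet-selection rule: keep a tweet if it falls in
# the 1-31 May 2019 window AND either carries a country-specific hashtag or
# was posted by a tracked populist actor. All names are placeholders.
from datetime import date

HASHTAGS = {"fi": {"#euvaalit"}, "it": {"#elezionieuropee"}, "nl": {"#euverkiezingen"}}
ACTORS = {"@actor_a", "@actor_b"}  # placeholder populist accounts
WINDOW = (date(2019, 5, 1), date(2019, 5, 31))

def keep(tweet: dict, country: str) -> bool:
    """Retain a tweet if it is in the May 2019 window and either carries a
    country-specific hashtag or was posted by a tracked actor."""
    if not (WINDOW[0] <= tweet["date"] <= WINDOW[1]):
        return False
    tags = {t.lower() for t in tweet["hashtags"]}
    return bool(tags & HASHTAGS[country]) or tweet["author"] in ACTORS

sample = {"date": date(2019, 5, 12), "hashtags": ["#EUvaalit"], "author": "@someone"}
print(keep(sample, "fi"))  # True: in window and matching Finnish hashtag
```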
Abstract:
Recent scholarly works on the relationship between ‘fashion’ and ‘sustainability’ have identified a need for a systemic transition towards fashion media ‘for sustainability’. Nevertheless, academic research on the topic is still limited and largely circumscribed to the analysis of marketing practices; only recently have more systemic and critical analyses of the symbolic production of sustainability through fashion media been undertaken. Responding to this need for an in-depth investigation of ‘sustainability’-related media production, my research focuses on the ‘fashion sustainability’-related discursive formations in the context of one of the most influential fashion magazines today, Vogue Italia. In order to investigate the ways in which the ‘sustainability’ discourse was formed and has evolved, the study considered the entire Vogue Italia archive from 1965 to 2021. The data collection was carried out in two phases, and the identified relevant discursive units were then critically analysed in depth to allow for a grounded assessment of the media giant’s position. The Discourse-Historical Approach provided a methodological base for the analysis, which took into consideration the various levels of context: the immediate textual and intertextual, but also the broader socio-cultural context of the predominant, over-production-oriented and capital-led fashion system. The findings led to a delineation of the evolution of the ‘fashion sustainability’ discourse, unveiling how, despite Vogue Italia’s self-presentation as attentive to ‘sustainability’-related topics, the magazine systematically employs discursive strategies which significantly mitigate the meaning of the ‘sustainable commitment’, and thus the meaning of ‘fashion sustainability’.
Abstract:
Graphene, a monolayer of carbon atoms arranged in a honeycomb lattice, has been isolated from graphite only recently. This material shows very attractive physical properties, such as superior carrier mobility, current-carrying capability and thermal conductivity. In consideration of that, graphene has been the object of intense investigation as a promising candidate for nanometer-scale devices in electronic applications. In this work, graphene nanoribbons (GNRs), narrow strips of graphene in which a band-gap is induced by the quantum confinement of carriers in the transverse direction, have been studied. As experimental GNR-FETs are still far from ideal, mainly due to their large width and edge roughness, an accurate description of the physical phenomena occurring in these devices is required to obtain reliable predictions about the performance of these novel structures. A code has been developed for this purpose and used to investigate the performance of 1- to 15-nm-wide GNR-FETs. Given the importance of an accurate description of quantum effects in the operation of graphene devices, a full-quantum transport model has been adopted: the electron dynamics is described by a tight-binding (TB) Hamiltonian, and transport is solved within the formalism of the non-equilibrium Green's functions (NEGF). Both ballistic and dissipative transport are considered; the electron-phonon interaction is included within the self-consistent Born approximation. In consideration of their different energy band-gaps, narrow GNRs are expected to be suitable for logic applications, while wider ones could be promising candidates as channel material for radio-frequency applications.
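The transport formalism named above can be sketched in its standard textbook form (generic symbols, not the thesis' exact implementation): a nearest-neighbour tight-binding Hamiltonian with hopping energy t, and the retarded Green's function on which the NEGF machinery is built, with contact self-energies for the leads and, in the dissipative case, a phonon self-energy treated in the self-consistent Born approximation.

```latex
% Nearest-neighbour tight-binding Hamiltonian of the carbon lattice
H = -t \sum_{\langle i,j \rangle} \left( c_i^{\dagger} c_j + c_j^{\dagger} c_i \right)

% Retarded Green's function of the channel, with lead self-energies
% \Sigma_{L,R}(E) and (dissipative case) a phonon self-energy \Sigma_{ph}(E)
G^{R}(E) = \left[ (E + i\eta) I - H - \Sigma_L(E) - \Sigma_R(E) - \Sigma_{ph}(E) \right]^{-1}
```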
Abstract:
The aim of this thesis was to investigate the respective contribution of prior information and sensorimotor constraints to action understanding, and to estimate their consequences on the evolution of human social learning. Even though a huge amount of literature is dedicated to the study of action understanding and its role in social learning, these issues are still largely debated. Here, I critically describe two main perspectives. The first perspective interprets faithful social learning as an outcome of a fine-grained representation of others’ actions and intentions that requires sophisticated socio-cognitive skills. In contrast, the second perspective highlights the role of simpler decision heuristics, the recruitment of which is determined by individual and ecological constraints. The present thesis aims to show, through four experimental works, that these two contributions are not mutually exclusive. A first study investigates the role of the inferior frontal cortex (IFC), the anterior intraparietal area (AIP) and the primary somatosensory cortex (S1) in the recognition of other people’s actions, using a transcranial magnetic stimulation adaptation paradigm (TMSA). The second work studies whether, and how, higher-order and lower-order prior information (acquired from the probabilistic sampling of past events vs. derived from an estimation of biomechanical constraints of observed actions) interacts during the prediction of other people’s intentions. Using a single-pulse TMS procedure, the third study investigates whether the interaction between these two classes of priors modulates the motor system activity. The fourth study tests the extent to which behavioral and ecological constraints influence the emergence of faithful social learning strategies at a population level. 
The collected data contribute to elucidating how higher-order and lower-order prior expectations interact during action prediction, and clarify the neural mechanisms underlying this interaction. Finally, these works open promising perspectives for a better understanding of social learning, with possible extensions to animal models.
Abstract:
As the growing number of news stories on the subject demonstrates, concern over the handling of images of death has become a central issue involving viewers, content producers and broadcasters, given its increasingly evident emergence in the media landscape in which we are immersed. While the socio-anthropological literature generally agrees that, compared with the past, death today manifests itself less visibly in everyday life, as people tend to remove the signs of its proximity and to experience mourning privately, it is nonetheless perceived pervasively because it is disseminated in (and by) the media. Focusing specifically on audiovisual productions, and thus on the capacity intrinsic to cinema and its derived forms to record an event live, this work attempts to map certain dynamics of production and consumption by considering a particular manifestation of death: what is commonly referred to as "death live on camera" ("morte in diretta"). After an initial survey devoted to the ongoing tension between the impulse to regard death as the last taboo and the forms it assumes within "necroculture", it becomes clear that the pornographic paradigm is by now inadequate to fully account for the emergence of death in the media, which is subject to variable opacities and interdictions, and therefore calls for more articulated analytical perspectives.
The core of the analysis is therefore the production and consumption of specific strands such as snuff, cannibal and mondo movies, and those inflections of gore that have hybridised the real and the fictional: the aim is to trace a path that, starting from silent cinema, reaches the contemporary landscape and the remix practices enabled by digital media, touching on controversial episodes such as the Video Nasties, the moral-panic dynamics triggered by snuff films, and the dynamics of contagion arising from the manipulation and circulation of images of death.
Abstract:
In the framework of industrial problems, Constrained Optimization is known for its very good modeling capability and performance, and stands as one of the most powerful, explored, and exploited tools for prescriptive tasks. The number of applications is huge, ranging from logistics to transportation, packing, production, telecommunications, scheduling, and much more. The main reason behind this success is the remarkable effort put in over the last decades by the OR community to develop realistic models and devise exact or approximate methods for the largest variety of constrained and combinatorial optimization problems, together with the spread of computational power and easily accessible OR software and resources. On the other hand, technological advancements have led to a wealth of data never seen before, and increasingly push towards methods able to extract useful knowledge from it; among data-driven methods, Machine Learning techniques appear to be among the most promising, thanks to their successes in domains such as Image Recognition, Natural Language Processing and game playing, as well as the amount of research involved. The purpose of the present research is to study how Machine Learning and Constrained Optimization can be used together to build systems able to leverage the strengths of both: this would open the way to exploiting decades of research on resolution techniques for COPs while constructing models able to adapt and learn from available data. In the first part of this work, we survey the existing techniques and classify them according to the type, method, or scope of the integration; subsequently, we introduce Moving Target, a novel and general algorithm devised to inject knowledge into learning models through constraints. In the last part of the thesis, two applications stemming from real-world projects, carried out in collaboration with Optit, are presented.
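One generic way to inject knowledge into a learning model through constraints, as discussed above, is to add a penalty term to the training loss that punishes constraint violations. The toy sketch below is only an illustration of this general idea, not the thesis' Moving Target algorithm: a least-squares fit is penalised whenever its predictions violate a hypothetical nonnegativity constraint, and all numbers are invented.

```python
# Illustrative penalty-based constraint injection (NOT the Moving Target
# algorithm): fit y = w*x + b by gradient descent on squared error plus a
# penalty lam * min(pred, 0)^2 that punishes negative predictions.
def fit(xs, ys, lam=10.0, lr=0.01, steps=2000):
    w, b = 0.0, 0.0
    for _ in range(steps):
        gw = gb = 0.0
        for x, y in zip(xs, ys):
            pred = w * x + b
            err = pred - y                 # data-fit residual
            viol = min(pred, 0.0)          # constraint violation (pred < 0)
            g = 2 * err + 2 * lam * viol   # d(loss + penalty)/d(pred)
            gw += g * x
            gb += g
        w -= lr * gw / len(xs)
        b -= lr * gb / len(xs)
    return w, b

w, b = fit([0.0, 1.0, 2.0], [-1.0, 1.0, 3.0])
# The unconstrained fit would predict -1 at x=0; the penalty pulls the
# prediction towards the feasible region pred >= 0.
print(round(w * 0.0 + b, 2))
```

The same pattern scales to richer models: any differentiable measure of constraint violation can be folded into the loss and traded off against data fit via the multiplier `lam`.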
Abstract:
This dissertation contributes to the scholarly debate on temporary teams by exploring team interactions and boundaries. The fundamental challenge in temporary teams originates from temporary participation in the teams. First, as participants join the team for a short period of time, there is not enough time to build trust, share understanding, and have effective interactions. Consequently, team outputs and practices built on team interactions become vulnerable. Secondly, as participants move on and off the teams, team boundaries become blurred over time. This leads to uncertainty among team participants and leaders about who is, or is not, identified as a team member, causing collective disagreement within the team. Focusing on the above-mentioned challenges, we conducted this research in healthcare organisations, since the use of temporary teams in healthcare and hospital settings is prevalent. In particular, we focused on orthopaedic teams that provide personalised treatments for patients using 3D printing technology. Qualitative and quantitative data were collected through interviews, observations, questionnaires and archival data at the Rizzoli Orthopaedic Institute, Bologna, Italy. This study provides the following research outputs. The first is a conceptual study that explores the temporary teams literature using bibliometric analysis and a systematic literature review to highlight research gaps. The second paper qualitatively studies temporary relationships within the teams, collecting data through group interviews and observations. The results highlight the role of short-term dyadic relationships as a ground for sharing and transferring knowledge at the team level. Moreover, the hierarchical structure of the teams facilitates knowledge sharing by supporting dyadic relationships within and beyond the team meetings. The third paper investigates the impact of blurred boundaries on temporary teams' performance. Using quantitative data collected through questionnaires and archival data, we conclude that boundary blurring in terms of fluidity, overlap and dispersion impacts team performance differently at high and low levels of task complexity.
Abstract:
The chapters of the thesis focus on a limited variety of selected themes in EU privacy and data protection law. Chapter 1 sets out the general introduction to the research topic. Chapter 2 touches upon the methodology used in the research. Chapter 3 conceptualises the basic notions from a legal standpoint. Chapter 4 examines the current regulatory regime applicable to digital health technologies, healthcare emergencies, privacy, and data protection. Chapter 5 provides case studies on applications deployed in the Covid-19 scenario, from the perspective of privacy and data protection. Chapter 6 addresses the post-Covid European regulatory initiatives on the subject matter and their potential effects on privacy and data protection. Chapter 7 is the outcome of a six-month internship with a company in Italy and focuses on the protection of fundamental rights through common standardisation and certification, demonstrating that such standards can serve as supporting tools to guarantee the right to privacy and data protection in digital health technologies. The thesis concludes with the observation that finding and transposing European privacy and data protection standards into scenarios such as public healthcare emergencies, where digital health technologies are deployed, requires rapid coordination between the European Data Protection Authorities and the Member States to guarantee that individual privacy and data protection rights are ensured.
Abstract:
The term Artificial Intelligence has acquired a lot of baggage since its introduction and, in its current incarnation, is synonymous with Deep Learning. The sudden availability of data and computing resources has opened the gates to myriad applications. Not all are created equal, though, and problems may arise especially in fields not closely related to the tasks pursued by the tech companies that spearheaded DL. The perspective of practitioners seems to be changing, however. Human-Centric AI has emerged in the last few years as a new way of thinking about DL and AI applications from the ground up, with special attention to their relationship with humans. The goal is to design systems that can gracefully integrate into already established workflows, as in many real-world scenarios AI may not be good enough to completely replace humans; often this replacement may even be unneeded or undesirable. Another important perspective comes from Andrew Ng, a DL pioneer, who recently started shifting the focus of development from “better models” towards better, and smaller, data; he calls this approach Data-Centric AI. Without downplaying the importance of pushing the state of the art in DL, we must recognize that if the goal is creating a tool for humans to use, more raw performance may not translate into more utility for the final user. A Human-Centric approach is compatible with a Data-Centric one, and we find that the two overlap nicely when human expertise is used as the driving force behind data quality. This thesis documents a series of case studies where these approaches were employed, to different extents, to guide the design and implementation of intelligent systems. We found that human expertise proved crucial in improving datasets and models. The last chapter includes a slight deviation, with studies on the pandemic, still preserving the human- and data-centric perspective.
Abstract:
Creativity seems mysterious: when we experience a creative spark, it is difficult to explain how we got that idea, and we often invoke notions like “inspiration” and “intuition” when we try to explain the phenomenon. The fact that we are clueless about how a creative idea manifests itself does not necessarily imply that a scientific explanation cannot exist. We are unaware of how we perform certain tasks, such as biking or language understanding, yet we have more and more computational techniques that can replicate and hopefully explain such activities. We should understand that every creative act is a fruit of experience, society, and culture. Nothing comes from nothing. Novel ideas are never utterly new; they stem from representations that are already in the mind. Creativity involves establishing new relations between pieces of information we already had: the greater the knowledge, the greater the possibility of finding uncommon connections, and the greater the potential to be creative. In this vein, a beneficial approach to a better understanding of creativity must include computational or mechanistic accounts of such inner procedures and of the formation of the knowledge that enables such connections. That is the aim of Computational Creativity: to develop computational systems for emulating and studying creativity. Hence, this dissertation focuses on two related research areas: discussing computational mechanisms to generate creative artifacts, and describing some implicit cognitive processes that can form the basis for creative thoughts.
Abstract:
The development of Next Generation Sequencing has propelled Biology into the Big Data era. The ever-increasing gap between proteins with known sequences and those with a complete functional annotation calls for computational methods for automatic structural and functional annotation. My research has focused on proteins and has led so far to the development of three novel tools, DeepREx, E-SNPs&GO and ISPRED-SEQ, based on Machine and Deep Learning approaches. DeepREx computes the solvent exposure of residues in a protein chain. This problem is relevant for the definition of structural constraints on the possible folding of the protein. DeepREx exploits Long Short-Term Memory layers to capture residue-level interactions between positions distant in the sequence, achieving state-of-the-art performance. With DeepREx, I conducted a large-scale analysis investigating the relationship between the solvent exposure of a residue and its probability of being pathogenic upon mutation. E-SNPs&GO predicts the pathogenicity of a Single Residue Variation. Variations occurring on a protein sequence can have different effects, possibly leading to the onset of diseases. E-SNPs&GO exploits protein embeddings generated by two novel Protein Language Models (PLMs), as well as a new way of representing functional information derived from the Gene Ontology. The method achieves state-of-the-art performance and is extremely time-efficient compared to traditional approaches. ISPRED-SEQ predicts the presence of Protein-Protein Interaction sites in a protein sequence. Knowing how a protein interacts with other molecules is crucial for accurate functional characterization. ISPRED-SEQ exploits a convolutional layer to parse local context after embedding the protein sequence with two novel PLMs, greatly surpassing the current state of the art. All methods are published in international journals and are available as user-friendly web servers.
They have been developed keeping in mind standard guidelines for FAIRness (FAIR: Findable, Accessible, Interoperable, Reusable) and are integrated into the public collection of tools provided by ELIXIR, the European infrastructure for Bioinformatics.
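Residue-level predictors of the kind described above share a common preprocessing idea: each residue is turned into one feature vector, typically built from a symmetric sequence window around it. The sketch below illustrates this generic step with a plain one-hot window encoding; it is not the actual DeepREx or ISPRED-SEQ pipeline (those use LSTM/convolutional layers over PLM embeddings), and the window size and alphabet are arbitrary choices.

```python
# Generic per-residue windowed encoding (illustrative, not the thesis' tools):
# one flattened one-hot vector per residue, window of 2*half+1 positions,
# with all-zero padding at the chain ends.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
AA_INDEX = {aa: i for i, aa in enumerate(AMINO_ACIDS)}

def encode_windows(seq: str, half: int = 2):
    """Return one flattened one-hot vector per residue of `seq`."""
    n_feat = len(AMINO_ACIDS)
    vectors = []
    for i in range(len(seq)):
        vec = []
        for j in range(i - half, i + half + 1):
            one_hot = [0.0] * n_feat
            if 0 <= j < len(seq):         # out-of-chain positions stay zero
                one_hot[AA_INDEX[seq[j]]] = 1.0
            vec.extend(one_hot)
        vectors.append(vec)
    return vectors

feats = encode_windows("MKTAYIAK")
print(len(feats), len(feats[0]))  # 8 residues, (2*2+1)*20 = 100 features each
```

A downstream classifier (e.g. exposed vs. buried, interaction site vs. not) then receives one such vector, or a learned embedding in its place, per residue.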
Abstract:
Many diseases affect the thyroid gland, and among them is carcinoma. Thyroid cancer is the most common endocrine neoplasm and the second most frequent cancer in the 0-49 age group. This thesis deals with two studies I conducted during my PhD. The first concerns the development of a Deep Learning model to assist the pathologist in the screening of thyroid cytology smears. This tool, created in collaboration with Prof. Diciotti of the DEI-UNIBO "Guglielmo Marconi" Department of Electrical Energy and Information Engineering, has an important clinical implication in that it allows patients to be stratified between those who should undergo surgery and those who should not. The second concerns the application of spatial transcriptomics to well-differentiated thyroid carcinomas to better understand their invasion mechanisms, and thus to better comprehend which genes may be involved in the proliferation of these tumors. This project was made possible through a fruitful collaboration with the Gustave Roussy Institute in Paris. Studying thyroid carcinoma in depth is essential to improve patient care, increase survival rates, and enhance the overall understanding of this prevalent cancer. It can lead to more effective prevention, early detection, and treatment strategies that benefit both patients and the healthcare system.
Abstract:
The rapid progression of biomedical research, coupled with the explosion of scientific literature, has generated an exigent need for efficient and reliable systems of knowledge extraction. This dissertation contends with this challenge through a concentrated investigation of digital health and Artificial Intelligence, and specifically of the potential of Machine Learning and Natural Language Processing (NLP) to expedite systematic literature reviews and refine the knowledge extraction process. The surge of COVID-19 complicated the efforts of scientists, policymakers, and medical professionals in identifying pertinent articles and assessing their scientific validity. This thesis presents a substantial solution in the form of the COKE Project, an initiative that interlaces machine reading with the rigorous protocols of Evidence-Based Medicine to streamline knowledge extraction. In the framework of the COKE (“COVID-19 Knowledge Extraction framework for next-generation discovery science”) Project, this thesis aims to underscore the capacity of machine reading to create knowledge graphs from scientific texts. The project is remarkable for its innovative use of NLP techniques, such as a BERT + bi-LSTM language model, employed to detect and categorize elements within medical abstracts, thereby enhancing the systematic literature review process. The COKE Project's outcomes show that NLP, when used in a judiciously structured manner, can significantly reduce the time and effort required to produce medical guidelines. These findings are particularly salient during times of medical emergency, like the COVID-19 pandemic, when quick and accurate research results are critical.
Abstract:
Digital forensics as a field has progressed alongside technological advancements over the years, just as digital devices have become more robust and sophisticated. However, criminals and attackers have devised means of exploiting the vulnerabilities or sophistication of these devices to carry out malicious activities in unprecedented ways, in the belief that electronic crimes can be committed without identities being revealed or trails being established. Several applications of artificial intelligence (AI) have demonstrated interesting and promising solutions to seemingly intractable societal challenges. This thesis aims to advance the application of AI techniques in digital forensic investigation. Our approach involves experimenting with a complex case scenario in which suspects corresponded by e-mail and suspiciously deleted certain communications, presumably to conceal evidence. The purpose is to demonstrate the efficacy of Artificial Neural Networks (ANN) in learning and detecting communication patterns over time, and then predicting the possibility of missing communication(s) along with potential topics of discussion. To do this, we developed a novel approach and included other existing models. The accuracy of our results is evaluated, and their performance on previously unseen data is measured. Second, we propose the term “Digital Forensics AI” (DFAI) to formalize the application of AI in digital forensics, highlighting the instruments that facilitate the best evidential outcomes and presentation mechanisms that are adaptable to the probabilistic output of AI models. Finally, we strengthen this notion by recommending methodologies and approaches for bridging trust gaps through the development of interpretable models that facilitate the admissibility of digital evidence in legal proceedings.
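The intuition behind the missing-communication scenario above can be illustrated with a much simpler statistical baseline than the ANN models the thesis develops: if two suspects exchange e-mail at a fairly regular rate, an unusually long silence stands out against the typical inter-message gap and can be flagged as a possible deleted communication. The sketch below is only this baseline idea; the data and the threshold rule are invented for illustration.

```python
# Illustrative gap detection in an e-mail timeline (NOT the thesis' ANN
# approach): flag intervals between consecutive messages that exceed the mean
# inter-message gap by more than k standard deviations.
from statistics import mean, stdev

def flag_gaps(send_days, k=2.0):
    """Return (start, end) day pairs whose gap is anomalously long."""
    gaps = [b - a for a, b in zip(send_days, send_days[1:])]
    mu, sigma = mean(gaps), stdev(gaps)
    return [(send_days[i], send_days[i + 1])
            for i, g in enumerate(gaps) if g > mu + k * sigma]

# Messages roughly every two days, then a twelve-day silence
days = [1, 3, 5, 7, 9, 21, 23, 25]
print(flag_gaps(days))  # [(9, 21)]
```

A learned model generalizes this by conditioning on richer features (sender, thread, topic), which is what enables it to also suggest the likely content of the missing messages.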