125 results for cog humanoid robot embodied learning phd thesis metaphor pancake reaching vision
Abstract:
The first mechanical automaton concept is found in a Chinese text written in the 3rd century BC, while Computer Vision was born in the late 1960s. Visual perception applied to machines (i.e. Machine Vision) is therefore a young and exciting alliance. When robots came in, the new field of Robotic Vision was born, and these terms began to be erroneously interchanged. In short, Machine Vision is an engineering domain concerned with the industrial use of vision, whereas Robotic Vision is a research field that tries to incorporate robotics aspects into computer vision algorithms. Visual Servoing, for example, is one of the problems that cannot be solved by computer vision alone. Accordingly, a large part of this work deals with boosting popular Computer Vision techniques by exploiting robotics: e.g. the use of kinematics to localize a vision sensor mounted as the robot end-effector. The remainder of this work is dedicated to the counterpart, i.e. the use of computer vision to solve real robotic problems such as grasping objects or navigating while avoiding obstacles. A brief survey of the mapping data structures most widely used in robotics is presented, along with SkiMap, a novel sparse data structure designed both for robotic mapping and as a general-purpose 3D spatial index. Several approaches to object detection and manipulation that exploit the aforementioned mapping strategies are then proposed, along with a completely new Machine Teaching facility intended to simplify the training procedure of modern Deep Learning networks.
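SkiMap is described above as a sparse data structure serving both robotic mapping and general-purpose 3D spatial indexing. As a minimal sketch of the underlying idea only — the real SkiMap is built on stacked skip lists, which this toy dictionary-based version does not reproduce — a sparse voxel map stores data solely for voxels that are actually observed, keyed by their integer grid coordinates:

```python
# Toy sparse 3D spatial index: occupancy data is stored only for voxels
# that are actually touched, keyed by integer grid coordinates.
from collections import defaultdict

class SparseVoxelMap:
    def __init__(self, resolution=0.05):
        self.resolution = resolution          # voxel edge length in meters
        self.voxels = defaultdict(int)        # (ix, iy, iz) -> hit counter

    def _key(self, x, y, z):
        # Map a metric point to its integer voxel coordinates.
        r = self.resolution
        return (int(x // r), int(y // r), int(z // r))

    def integrate_point(self, x, y, z):
        # Increment the occupancy evidence of the voxel containing the point.
        self.voxels[self._key(x, y, z)] += 1

    def is_occupied(self, x, y, z, min_hits=1):
        return self.voxels.get(self._key(x, y, z), 0) >= min_hits

vmap = SparseVoxelMap(resolution=0.05)
vmap.integrate_point(1.02, 0.33, 0.71)
print(vmap.is_occupied(1.03, 0.34, 0.72))   # True: same 5 cm voxel
```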
Abstract:
Machine Learning techniques are very useful because they make it possible to maximize the use of real-time information. Random Forests is among the most recent and best-performing Machine Learning methods. Exploiting the characteristics and potential of this method, this doctoral thesis addresses two different case studies, from which two different forecasting models were developed. The first case study focuses on the main rivers of the Emilia-Romagna region, which are characterized by very short response times. The choice of these rivers was not accidental: in recent years these basins have experienced several flood events, largely of the "flash flood" type. The second case study concerns the main sections of the Po river, where the propagation time of the flood wave is longer than in the watercourses of the first case study. Starting from a large amount of data, the first step was to select and define the input data according to the objectives to be achieved, for both case studies. For the model of the Emilia-Romagna rivers, only observed data were considered, whereas for the Po river basin the observed data were complemented with forecast data from the Mike11 NAM/HD modelling chain. Exploiting one of the main features of the Random Forests method, a probability of occurrence was estimated: this aspect is fundamental both in the technical phase and in the decision-making phase for any civil-protection intervention. Data processing and model development were carried out in the R environment. At the end of the validation phase, the encouraging results obtained allowed the model developed in the first case study to be integrated into the operational architecture of FEWS.
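The abstract highlights that Random Forests can output a probability of occurrence rather than a bare yes/no forecast. The thesis works in R; the following is an equivalent Python sketch of that core idea, with invented stand-in features (the actual predictors and data are the thesis's own):

```python
# A Random Forest votes across its trees; the vote fractions give the
# probability of exceeding a flood threshold. Features and data below
# are illustrative stand-ins, not the thesis's actual inputs.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Hypothetical predictors: rainfall over the last 6 h, soil moisture,
# upstream water level. Target: threshold exceedance within lead time.
X = rng.random((500, 3))
y = (0.8 * X[:, 0] + 0.3 * X[:, 2] + 0.1 * rng.random(500) > 0.7).astype(int)

model = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)
# predict_proba returns per-class vote fractions: column 1 is the estimated
# probability of occurrence used to support civil-protection decisions.
p_flood = model.predict_proba(X[:5])[:, 1]
print(np.round(p_flood, 2))
```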
Abstract:
In this thesis we discuss in what ways computational logic (CL) and data science (DS) can jointly contribute to the management of knowledge within the scope of modern and future artificial intelligence (AI), and how technically sound software technologies can be realised along the path. An agent-oriented mindset permeates the whole discussion, stressing the pivotal role of autonomous agents in exploiting both means to reach higher degrees of intelligence. Accordingly, the goals of this thesis are manifold. First, we elicit the analogies and differences between CL and DS, looking for possible synergies and complementarities along four major knowledge-related dimensions, namely representation, acquisition (a.k.a. learning), inference (a.k.a. reasoning), and explanation. In this regard, we propose a conceptual framework through which bridges between these disciplines can be described and designed. We then survey the current state of the art of AI technologies w.r.t. their capability to support bridging CL and DS in practice. After identifying gaps and opportunities, we propose the notion of logic ecosystem as a new conceptual, architectural, and technological solution supporting the incremental integration of symbolic and sub-symbolic AI. Finally, we discuss how our notion of logic ecosystem can be reified into actual software technology and extended in many DS-related directions.
Abstract:
Recent scholarly works on the relationship between ‘fashion’ and ‘sustainability’ have identified a need for a systemic transition towards fashion media ‘for sustainability’. Nevertheless, academic research on the topic is still limited and largely circumscribed to the analysis of marketing practices, while only recently have more systemic and critical analyses of the symbolic production of sustainability through fashion media been undertaken. Responding to this need for an in-depth investigation of ‘sustainability’-related media production, my research focuses on the ‘fashion sustainability’-related discursive formations in the context of one of the most influential fashion magazines today – Vogue Italia. In order to investigate the ways in which the ‘sustainability’ discourse was formed and has evolved, the study considered the entire Vogue Italia archive from 1965 to 2021. The data collection was carried out in two phases, and the relevant discursive units identified were then critically analysed in depth to allow for a grounded assessment of the media giant’s position. The Discourse-Historical Approach provided a methodological base for the analysis, which took into consideration various levels of context: the immediate textual and intertextual, but also the broader socio-cultural context of the predominant, over-production-oriented and capital-led fashion system. The findings led to a delineation of the evolution of the ‘fashion sustainability’ discourse, unveiling how, despite Vogue Italia’s self-presentation as attentive to ‘sustainability’-related topics, the magazine systemically employs discursive strategies which significantly dilute the meaning of the ‘sustainable commitment’ and thus the meaning of ‘fashion sustainability’.
Abstract:
In recent decades, two prominent trends have influenced the data modeling field, namely network analysis and machine learning. This thesis explores the practical applications of these techniques within the domain of drug research, unveiling their multifaceted potential for advancing our comprehension of complex biological systems. The research undertaken during this PhD program is situated at the intersection of network theory, computational methods, and drug research. Across six projects presented herein, there is a gradual increase in model complexity. These projects traverse a diverse range of topics, with a specific emphasis on drug repurposing and safety in the context of neurological diseases. The aim of these projects is to leverage existing biomedical knowledge to develop innovative approaches that bolster drug research. The investigations have produced practical solutions, not only providing insights into the intricacies of biological systems, but also allowing the creation of valuable tools for their analysis. In short, the achievements are:
• A novel computational algorithm to identify adverse events specific to fixed-dose drug combinations.
• A web application that tracks the clinical drug research response to SARS-CoV-2.
• A Python package for differential gene expression analysis and the identification of key regulatory "switch genes".
• The identification of pivotal events causing drug-induced impulse control disorders linked to specific medications.
• An automated pipeline for discovering potential drug repurposing opportunities.
• The creation of a comprehensive knowledge graph and development of a graph machine learning model for predictions.
Collectively, these projects illustrate diverse applications of data science and network-based methodologies, highlighting the profound impact they can have in supporting drug research activities.
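The first achievement listed above, identifying adverse events specific to drug combinations, is not described in algorithmic detail here. As a hedged illustration, the sketch below shows only the classical pharmacovigilance baseline such signal detection typically builds on: disproportionality measured with the reporting odds ratio (ROR), computed on invented counts:

```python
# Classical disproportionality baseline for adverse-event signal detection
# (not the thesis's novel combination-specific algorithm). Counts invented.
import math

def reporting_odds_ratio(a, b, c, d):
    """2x2 contingency table of spontaneous reports:
    a: drug (or combination) present, event present
    b: drug present, event absent
    c: drug absent, event present
    d: drug absent, event absent
    """
    ror = (a / b) / (c / d)
    # 95% confidence interval on the log scale, standard odds-ratio formula.
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lo = math.exp(math.log(ror) - 1.96 * se)
    hi = math.exp(math.log(ror) + 1.96 * se)
    return ror, lo, hi

# Hypothetical counts for a fixed-dose combination vs. all other reports.
print(reporting_odds_ratio(a=42, b=958, c=1200, d=98800))
```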
Abstract:
This PhD thesis is composed of three chapters, each discussing a specific type of risk that banks face. The first chapter deals with Systemic Risk and how banks become exposed to it through the Interbank Funding Market. Exposures in this market have Systemic Risk implications because the market creates linkages in which the failure of one party can affect the others. By showing that CDS Spreads, as bank risk indicators, are positively related to banks' Net Interbank Funding Market Exposures, this chapter establishes the above Systemic Risk implications of Interbank Funding. The second chapter discusses how banks may handle Illiquidity Risk, defined as the possibility of having sudden funding needs. Illiquidity Risk is embodied in this chapter through Loan Commitments, as they oblige a bank to lend to its clients up to a certain amount of funds at any time. This chapter points out that using Securitization as a funding facility could allow banks to manage this Illiquidity Risk. To make this case, the chapter demonstrates empirically that banks experiencing an increase in Loan Commitments may see an increase in their risk profile, but that this can be offset by an accompanying increase in Securitization Activity. Lastly, the third chapter focuses on how banks manage Credit Risk, also through Securitization. Securitization has a Credit Risk management property in that it allows the offloading of risk. This chapter investigates how banks use this property by looking at the effect of securitization on banks' loan portfolios and overall risk and returns. The findings are that securitization is positively related to loan portfolio size and to the portfolio share of risky loans, which translates to higher risk and returns. Thus, this chapter points out that Credit Risk management through Securitization may have been directed towards higher risk-taking for higher returns.
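The second chapter's empirical claim — that the risk increase from growing Loan Commitments can be offset by accompanying Securitization Activity — maps naturally onto a regression with an interaction term. The toy specification below, with invented variable names and simulated data, is only a schematic stand-in for the thesis's actual econometric design:

```python
# Toy regression mirroring the hypothesized offsetting effect: risk rises
# with commitments, but less so when securitization activity is high.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 400
df = pd.DataFrame({
    "commitments": rng.random(n),
    "securitization": rng.random(n),
})
# Simulate the offsetting effect via a negative interaction term.
df["risk"] = (0.6 * df.commitments
              - 0.5 * df.commitments * df.securitization
              + 0.1 * rng.standard_normal(n))

fit = smf.ols("risk ~ commitments * securitization", data=df).fit()
print(fit.params)   # the interaction coefficient should come out negative
```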
Abstract:
Fear conditioning is the learning process by which a stimulus, after repeated pairing with an aversive event, comes to evoke fear and becomes intrinsically aversive. This learning is essential to organisms throughout the animal kingdom and represents one of the most successful laboratory paradigms for revealing the psychological processes that govern the expression of emotional memory and for exploring its neurobiological underpinnings. Although a large amount of research has been conducted on the behavioural and neural correlates of fear conditioning, some key questions remain unanswered. Accordingly, this thesis aims to address some unsolved theoretical and methodological issues, thus furthering our understanding of the neurofunctional basis of human fear conditioning both in healthy and brain-damaged individuals. Specifically, behavioural, psychophysiological, lesion, and non-invasive brain stimulation studies are reported. Study 1 examined the influence of normal aging on context-dependent recall of the extinction of a fear-conditioned stimulus. Study 2 aimed to determine the causal role of the ventromedial PFC (vmPFC) in the acquisition of fear conditioning by systematically testing the effect of bilateral vmPFC brain lesions. Study 3 aimed to interfere with the reconsolidation of fear memory by means of non-invasive brain stimulation (i.e. TMS) disrupting PFC neural activity. Finally, Study 4 investigated whether the parasympathetic – vagal – modulation of heart rate might reflect the anticipation of fearful, as compared to neutral, events during a classical fear conditioning paradigm. The evidence reported in this PhD thesis may therefore provide key insights into, and a deeper understanding of, critical issues concerning the neurofunctional mechanisms underlying the acquisition, extinction, and reconsolidation of fear memories in humans.
Abstract:
Expandable prostheses are becoming increasingly popular in the reconstruction of children with bone sarcomas of the lower limb. Since the introduction of effective chemotherapy in the treatment of these pathologies in the 1970s, there has been a need for new limb salvage techniques. In children, limb salvage of the lower limbs is particularly challenging, not least because of the loss of growth potential; expandable prostheses were therefore developed. However, the first experiences with these implants were not very successful: high complication rates and unpredictable outcomes raised major concerns about this innovative type of reconstruction. The rarity of the indication is one of the main reasons why the learning curve and implant development for this type of prosthesis have been relatively slow. This PhD thesis gives an overview of the introduction, the development, the current standards, and the future perspectives of expandable prostheses for the reconstruction of the distal femur in children.
Abstract:
This PhD thesis investigates children’s peer practices in two primary schools in Italy, focusing on the ordinary classroom and the Italian L2 classroom. The study is informed by the paradigm of language socialization and considers peer interactions as a ‘double opportunity space’, allowing both the co-construction of children’s social organization and children’s sociolinguistic development. These two foci of attention are explored on the basis of children’s social interaction and of the verbal, embodied, and material resources that children agentively deploy during their mundane activities in the peer group. The study is based on a video ethnography that lasted nine months. Approximately 30 hours of classroom interactions were video-recorded, transcribed, and analyzed with an approach that combines the micro-analytic instruments of Conversation Analysis with the use of ethnographic information. Three main social phenomena were selected for analysis: (a) children’s enactment of the role of the teacher, (b) children’s reproduction of must-formatted rules, and (c) children’s argumentative strategies during peer conflict. The analysis highlights the centrality of the institutional frame for children’s peer interactions in the classroom. Moreover, the study illustrates that children socialize their classmates to the linguistic, social, and moral expectations of the context in and through various practices. Notably, these practices are also germane to the local negotiation of children’s social organization and hierarchy. Therefore, the thesis underlines that children’s peer interactions are both a resource for children’s sociolinguistic development and a potentially problematic locus where social exclusion is constructed and brought to bear. These insights are relevant for teachers’ professional practice. Children’s peer interactions are a resource that can be integrated into everyday didactics. Nevertheless, the role of the teacher in supervising and steering children’s peer practices appears crucial: an uncritical view of children’s autonomous work, often implied in teaching methods such as peer tutoring, needs to be problematized.
Abstract:
The advent of omic data production has opened many new perspectives in the quest to model complexity in biophysical systems. With the capability of characterizing a complex organism through the patterns of its molecular states, observed at different levels through various omics, a new paradigm of investigation is arising. In this thesis, we investigate the links between perturbations of the human organism, described as the ensemble of crosstalk among its molecular states, and health. Machine learning plays a key role within this picture, both in omic data analysis and in model building. We propose and discuss different frameworks developed by the author that use machine learning for data reduction, integration, projection onto latent features, pattern analysis, classification, and clustering of omic data, with a focus on 1H NMR metabolomic spectral data. The aim is to link different levels of omic observations of molecular states, from the nanoscale to the macroscale, to study perturbations such as diseases and diet, interpreted as changes in molecular patterns. The first part of this work focuses on the fingerprinting of diseases, linking cellular and systemic metabolomics with genomics to assess and predict the downstream effects of perturbations all the way down to the enzymatic network. The second part is a set of frameworks and models, developed with 1H NMR metabolomics at its core, to study the exposure of the human organism to diet and food intake in its full complexity, from epidemiological data analysis to the molecular characterization of food structure.
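One of the recurring steps described above is the projection of high-dimensional omic measurements onto latent features. A minimal sketch, assuming synthetic stand-in spectra rather than real 1H NMR data, of PCA-based data reduction before pattern analysis:

```python
# Project spectra onto latent features with PCA before pattern analysis
# or classification. Spectra here are synthetic stand-ins.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# 100 hypothetical spectra, 2000 chemical-shift bins each, with two
# latent metabolic patterns buried in noise.
basis = rng.standard_normal((2, 2000))
scores = rng.standard_normal((100, 2))
spectra = scores @ basis + 0.1 * rng.standard_normal((100, 2000))

X = StandardScaler().fit_transform(spectra)
pca = PCA(n_components=5).fit(X)
latent = pca.transform(X)                          # samples in latent space
print(np.round(pca.explained_variance_ratio_, 3))  # first two dominate
```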
Abstract:
The recent widespread use of social media platforms and web services has led to a vast amount of behavioral data that can be used to model socio-technical systems. A significant part of these data can be represented as graphs or networks, which have become the prevalent mathematical framework for studying the structure and dynamics of complex interacting systems. However, analyzing and understanding these data presents new challenges due to their increasing complexity and diversity. For instance, the characterization of real-world networks requires accounting for their temporal dimension, together with incorporating higher-order interactions beyond the traditional pairwise formalism. The ongoing growth of AI has led to the integration of traditional graph mining techniques with representation learning and low-dimensional embeddings of networks to address current challenges. These methods capture the underlying similarities and geometry of graph-shaped data, generating latent representations that enable the resolution of various tasks, such as link prediction, node classification, and graph clustering. As these techniques gain popularity, there is also growing concern about their responsible use. In particular, there has been increased emphasis on addressing the limitations of interpretability in graph representation learning. This thesis contributes to the advancement of knowledge in the field of graph representation learning and has potential applications in a wide range of complex systems domains. We initially focus on forecasting problems for face-to-face contact networks with time-varying graph embeddings. Then, we study hyperedge prediction and reconstruction with simplicial complex embeddings. Finally, we analyze the problem of interpreting latent dimensions in node embeddings for graphs. The proposed models are extensively evaluated in multiple experimental settings, and the results demonstrate their effectiveness and reliability, achieving state-of-the-art performance and providing valuable insights into the properties of the learned representations.
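The pipeline sketched below illustrates the embedding-based link prediction task mentioned above: embed nodes in a low-dimensional space, build edge features from endpoint embeddings, and train a classifier. A plain spectral embedding on a standard toy graph stands in for the thesis's own temporal and simplicial models:

```python
# Embedding-based link prediction on a toy graph (spectral embedding as a
# stand-in for learned representations).
import numpy as np
import networkx as nx
from sklearn.linear_model import LogisticRegression

G = nx.karate_club_graph()
A = nx.to_numpy_array(G)
# Spectral embedding: leading eigenvectors of the adjacency matrix.
vals, vecs = np.linalg.eigh(A)
emb = vecs[:, -8:]                      # 8-dimensional node embeddings

rng = np.random.default_rng(0)
pos = list(G.edges())
nodes = list(G.nodes())
neg = []
while len(neg) < len(pos):              # sample an equal number of non-edges
    u, v = rng.choice(nodes, 2, replace=False)
    if not G.has_edge(u, v):
        neg.append((u, v))

# Hadamard product of endpoint embeddings as the edge feature.
X = np.array([emb[u] * emb[v] for u, v in pos + neg])
y = np.array([1] * len(pos) + [0] * len(neg))
clf = LogisticRegression(max_iter=1000).fit(X, y)
print(f"train accuracy: {clf.score(X, y):.2f}")
```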
Abstract:
The continuous and swift progression of both wireless and wired communication technologies owes its success to the foundational systems established earlier: these systems serve as the building blocks that enable services to be enhanced to meet evolving requirements. Studying the vulnerabilities of previously designed systems and their current usage drives the development of new communication technologies that replace the old ones, such as GSM-R in the railway field. Current industrial research focuses on finding an appropriate telecommunication solution for railway communications to replace the GSM-R standard, which will be switched off in the coming years. Various standardization organizations are currently exploring and designing a standard radio-frequency technology solution to serve railway communications, in the form of FRMCS (Future Railway Mobile Communication System), as a substitute for the current GSM-R. On this topic, the primary strategic objective of the research is to assess the feasibility of leveraging current public network technologies, such as LTE, for mission- and safety-critical communication on low-density lines. The research aims to identify the constraints, define a service level agreement with telecom operators, and establish the implementations necessary to make the system as reliable as possible over an open and public network, while considering safety and cybersecurity aspects. The LTE infrastructure would be used to transmit the vital data for the communication of a railway system and to gather and transmit all the field measurements to the control room for maintenance purposes. Given the significance of maintenance activities in the railway sector, the ongoing research includes the implementation of a machine learning algorithm to detect railway equipment faults, reducing analysis time and the human error caused by the large volume of measurements from the field.
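The abstract does not specify which machine learning algorithm is used for fault detection, so the following is only one plausible sketch: an isolation forest trained on healthy measurements and used to flag anomalous field readings, with synthetic stand-in data:

```python
# One plausible fault-detection approach (assumed, not the thesis's own):
# an isolation forest flags anomalous field measurements streamed from
# trackside equipment to the control room.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Hypothetical healthy measurements: supply voltage and response time.
healthy = rng.normal(loc=[230.0, 40.0], scale=[2.0, 3.0], size=(1000, 2))
model = IsolationForest(contamination=0.01, random_state=0).fit(healthy)

# New readings from the field; the last one drifts far from normal.
readings = np.array([[229.5, 41.0], [231.2, 38.5], [210.0, 95.0]])
flags = model.predict(readings)        # +1 = normal, -1 = suspected fault
print(flags)
```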
Abstract:
The Cherenkov Telescope Array (CTA) will be the next-generation ground-based observatory for studying the universe in the very-high-energy domain. The observatory will rely on a Science Alert Generation (SAG) system to analyze the real-time data from the telescopes and generate science alerts. The SAG system will play a crucial role in the search for and follow-up of transients from external alerts, enabling multi-wavelength and multi-messenger collaborations, and it will maximize the potential for detecting the rarest phenomena, such as gamma-ray bursts (GRBs), which are the science case for this study. This study presents a deep-learning-based anomaly detection method for detecting gamma-ray burst events in real time. The performance of the proposed method is evaluated and compared against the standard Li&Ma technique in two use cases, serendipitous discoveries and follow-up observations, using short exposure times. The method shows promising results in detecting GRBs and is flexible enough to allow real-time searches for transient events on multiple time scales. The method assumes neither a background nor a source model and does not require a minimum number of photon counts to perform the analysis, making it well suited for real-time analysis. Future improvements involve further tests, relaxing some of the assumptions made in this study, and post-trials correction of the detection significance. Moreover, the ability to detect other transient classes in different scenarios must be investigated for completeness. The system can be integrated within the SAG system of CTA and deployed on the on-site computing clusters. This would provide valuable insights into the method's performance in a real-world setting and offer another valuable tool for discovering new transient events in real time. Overall, this study makes a significant contribution to the field of astrophysics by demonstrating the effectiveness of deep-learning-based anomaly detection techniques for real-time source detection in gamma-ray astronomy.
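The baseline named above is the Li&Ma technique, i.e. the standard detection significance for on/off counting measurements from Li & Ma (1983, Eq. 17). A short implementation with illustrative counts:

```python
# Li & Ma (1983, Eq. 17) significance for counting measurements, computed
# from the on-source and off-source counts and their exposure ratio alpha.
import numpy as np

def li_ma_significance(n_on, n_off, alpha):
    """Detection significance in standard deviations."""
    term_on = n_on * np.log((1 + alpha) / alpha * n_on / (n_on + n_off))
    term_off = n_off * np.log((1 + alpha) * n_off / (n_on + n_off))
    return np.sqrt(2.0 * (term_on + term_off))

# Example: 130 counts on-source, 400 off-source, alpha = 0.25 (off region
# four times larger), i.e. ~100 expected background counts on-source.
print(f"{li_ma_significance(130, 400, 0.25):.2f} sigma")
```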
Abstract:
The design process of any electric vehicle system has to be oriented towards the best energy efficiency, together with the constraint of maintaining comfort in the vehicle cabin. The main aim of this study is to find the best thermal management solution in terms of HVAC efficiency without compromising occupant comfort and internal air quality. An Arduino-controlled low-cost sensor system was developed, validated against reference instrumentation (average R-squared of 0.92), and then used to characterise the vehicle cabin in real parking and driving trials. Data on the energy use of the HVAC were retrieved from the car's On-Board Diagnostic port. Energy savings using recirculation can reach 30%, but pollutant concentrations in the cabin build up in this operating mode. Moreover, the temperature profile appeared strongly non-uniform, with air temperature differences of up to 10 °C. Optimisation methods often require a large number of runs to find the optimal configuration of the system. Fast models proved beneficial for this task, while CFD-1D models are usually slower despite the higher level of detail they provide. In this work, the collected dataset was used to train a fast ML model of both the cabin and the HVAC using linear regression. The average scaled RMSE over all trials is 0.4%, while the computation time is 0.0077 ms for each second of simulated time on a laptop computer. Finally, a reinforcement learning environment was built with OpenAI Gym and Stable-Baselines3, using the built-in Proximal Policy Optimisation algorithm to update the policy and seek the best compromise between the comfort, air quality, and energy reward terms. The learning curves show an overall oscillating behaviour, with only 2 experiments behaving as expected, albeit too slowly. This result leaves large room for improvement, ranging from reward function engineering to expansion of the ML model.
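The reinforcement learning setup described above can be sketched as a custom environment whose reward balances comfort, air quality, and energy, trained with Stable-Baselines3's PPO. The state, dynamics, and reward weights below are invented placeholders (and recent Stable-Baselines3 releases pair with Gymnasium, the successor of OpenAI Gym):

```python
# Skeletal comfort/air-quality/energy trade-off trained with PPO.
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import PPO

class CabinHVACEnv(gym.Env):
    # Observation: [cabin temperature (°C), CO2 (ppm)]; action: HVAC power in [0, 1].
    def __init__(self):
        self.observation_space = spaces.Box(
            low=np.array([-10.0, 400.0]), high=np.array([50.0, 5000.0]))
        self.action_space = spaces.Box(low=0.0, high=1.0, shape=(1,))
        self.state = np.array([30.0, 800.0], dtype=np.float32)
        self.t = 0

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.state = np.array([30.0, 800.0], dtype=np.float32)
        self.t = 0
        return self.state, {}

    def step(self, action):
        power = float(action[0])
        temp, co2 = self.state
        # Placeholder surrogate dynamics: cooling vs. heat load,
        # ventilation vs. CO2 build-up.
        temp = float(np.clip(temp - 0.5 * power + 0.05, -10.0, 50.0))
        co2 = float(np.clip(co2 + 5.0 - 20.0 * power, 400.0, 5000.0))
        self.state = np.array([temp, co2], dtype=np.float32)
        # Reward: comfort near 22 °C, low CO2, low energy use.
        reward = -abs(temp - 22.0) - 0.001 * max(co2 - 1000.0, 0.0) - 0.5 * power
        self.t += 1
        return self.state, reward, False, self.t >= 200, {}

model = PPO("MlpPolicy", CabinHVACEnv(), verbose=0)
model.learn(total_timesteps=10_000)
```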
Abstract:
Values are beliefs or principles that are deemed significant or desirable within a specific society or culture, serving as the fundamental underpinnings of ethical and socio-behavioral norms. The objective of this research is to explore the domain encompassing moral, cultural, and individual values. To achieve this, we employ an ontological approach to formally represent the semantic relations within the value domain. The theoretical framework adopts Fillmore’s frame semantics, treating values as semantic frames: a value situation is thus characterized by the co-occurrence of specific semantic roles fulfilled within a given event or circumstance. Given the intricate semantics of values as abstract entities with high social capital, our investigation extends to two interconnected domains. The first is embodied cognition, specifically image schemas: cognitive patterns derived from sensorimotor experiences that shape our conceptualization of entities in the world. The second pertains to emotions, which are inherently intertwined with the realm of values. Consequently, our approach endeavors to formalize the semantics of values within an embodied cognition framework, recognizing values as emotion-laden semantic frames. The primary ontologies proposed in this work are: (i) ValueNet, an ontology network dedicated to the domain of values; (ii) ISAAC, the Image Schema Abstraction And Cognition ontology; and (iii) EmoNet, an ontology for theories of emotions. The knowledge formalization adheres to established modeling practices, including the reuse of semantic web resources such as WordNet, VerbNet, FrameNet, and DBpedia, alignment to foundational ontologies like DOLCE, and the utilization of Ontology Design Patterns. These ontological resources are operationalized through the development of a fully explainable frame-based detector capable of identifying values, emotions, and image schemas, generating knowledge graphs from natural language by leveraging the semantic dependencies of a sentence, and allowing non-trivial higher-level knowledge inferences.
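As a toy illustration of values modelled as frames, the snippet below asserts a value situation with filled semantic roles as RDF triples. The namespace and all class and property names are hypothetical placeholders, not ValueNet's actual vocabulary:

```python
# A value situation as a frame: an event instance with filled semantic
# roles, evoking a value and triggering an emotion. All IRIs are invented.
from rdflib import Graph, Literal, Namespace, RDF

VN = Namespace("http://example.org/valuenet-sketch#")
g = Graph()
g.bind("vn", VN)

# "A journalist protects a source": a situation evoking the value Loyalty.
g.add((VN.situation1, RDF.type, VN.ValueSituation))
g.add((VN.situation1, VN.evokesValue, VN.Loyalty))
g.add((VN.situation1, VN.hasAgent, VN.journalist))
g.add((VN.situation1, VN.hasBeneficiary, VN.source))
g.add((VN.situation1, VN.triggersEmotion, VN.Trust))
g.add((VN.situation1, VN.fromText,
       Literal("The journalist refused to reveal her source.")))

print(g.serialize(format="turtle"))
```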