923 results for Graph matching
Abstract:
Water Distribution Networks (WDNs) play a vital role in communities, ensuring well-being and supporting economic growth and productivity. The need for greater investment requires design choices that will impact the efficiency of management in the coming decades. This thesis proposes an algorithmic approach to address two related problems: (i) identifying the fundamental assets of large WDNs in terms of main infrastructure; (ii) sectorizing large WDNs into isolated sectors while respecting the minimum service to be guaranteed to users. Two methodologies have been developed to meet these objectives, and they were subsequently integrated into an overall process that optimizes the sectorized configuration of a WDN while accounting for the need to address problems (i) and (ii) within a global vision. With regard to problem (i), the methodology introduces the concept of a primary network, answering with a dual approach: connecting the main nodes of the WDN in terms of hydraulic infrastructure (reservoirs, tanks, pump stations) and identifying hypothetical paths with minimal energy losses. The primary network thus identified can be used as an initial basis to design the sectors. The sectorization problem (ii) has been addressed with optimization techniques, through the development of a new dedicated Tabu Search algorithm able to deal with real case studies of WDNs. To this end, three new large WDN models have been developed in order to test the capabilities of the algorithm on different and complex real cases. The methodology also automatically identifies the deficient parts of the primary network and dynamically includes new edges to support a sectorized configuration of the WDN. The application of the overall algorithm to the new real case studies and to others from the literature has yielded applicable solutions even in specific complex situations.
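As a hedged illustration of the primary-network idea above, a minimal sketch in Python, assuming pipe head losses are available as edge weights; the network, node names, and loss values are invented for the example, and the thesis methodology itself is more elaborate:

```python
# Minimal sketch (not the thesis code): connect the main hydraulic nodes
# along minimal-energy-loss paths, assuming head losses as edge weights.
import networkx as nx

# Hypothetical WDN: nodes are junctions, edge weight = head loss (m) per pipe.
wdn = nx.Graph()
wdn.add_weighted_edges_from([
    ("reservoir", "j1", 0.8), ("j1", "j2", 1.5), ("j1", "tank", 2.1),
    ("j2", "tank", 0.6), ("j2", "pump", 1.2), ("tank", "pump", 0.9),
])

main_nodes = ["reservoir", "tank", "pump"]  # reservoirs, tanks, pump stations

# Union of minimal-head-loss paths between all pairs of main nodes.
primary = set()
for i, s in enumerate(main_nodes):
    for t in main_nodes[i + 1:]:
        path = nx.dijkstra_path(wdn, s, t, weight="weight")
        for e in zip(path, path[1:]):
            primary.add(tuple(sorted(e)))

print(sorted(primary))  # candidate edges of the primary network
```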
Abstract:
The recent widespread use of social media platforms and web services has led to a vast amount of behavioral data that can be used to model socio-technical systems. A significant part of this data can be represented as graphs or networks, which have become the prevalent mathematical framework for studying the structure and dynamics of complex interacting systems. However, analyzing and understanding these data presents new challenges due to their increasing complexity and diversity. For instance, characterizing real-world networks requires accounting for their temporal dimension, as well as incorporating higher-order interactions beyond the traditional pairwise formalism. The ongoing growth of AI has led to the integration of traditional graph mining techniques with representation learning and low-dimensional embeddings of networks to address current challenges. These methods capture the underlying similarities and geometry of graph-shaped data, generating latent representations that enable the resolution of various tasks, such as link prediction, node classification, and graph clustering. As these techniques gain popularity, there is also a growing concern about their responsible use. In particular, there has been an increased emphasis on addressing the limitations of interpretability in graph representation learning. This thesis contributes to the advancement of knowledge in the field of graph representation learning and has potential applications in a wide range of complex systems domains. We initially focus on forecasting problems related to face-to-face contact networks with time-varying graph embeddings. Then, we study hyperedge prediction and reconstruction with simplicial complex embeddings. Finally, we analyze the problem of interpreting latent dimensions in node embeddings for graphs. The proposed models are extensively evaluated in multiple experimental settings, and the results demonstrate their effectiveness and reliability, achieving state-of-the-art performance and providing valuable insights into the properties of the learned representations.
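To make the embedding-based tasks concrete, a minimal sketch of link prediction with low-dimensional node embeddings; a simple spectral embedding stands in for the learned representations, and the example graph and dimension d are arbitrary choices, not the thesis models:

```python
# Minimal sketch (assumed setup, not the thesis models): score candidate
# links with low-dimensional node embeddings, here a spectral embedding.
import numpy as np
import networkx as nx

G = nx.karate_club_graph()          # example graph
A = nx.to_numpy_array(G)

# Embed nodes with the top-d eigenvectors of the adjacency matrix.
d = 8
vals, vecs = np.linalg.eigh(A)      # eigenvalues in ascending order
emb = vecs[:, -d:] * np.sqrt(np.abs(vals[-d:]))

def link_score(u, v):
    """Higher dot product -> more likely link under this embedding."""
    return float(emb[u] @ emb[v])

print(link_score(0, 1), link_score(0, 33))
```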
Abstract:
Knowledge graphs and ontologies are closely related concepts in the field of knowledge representation. In recent years, knowledge graphs have gained increasing popularity and serve as essential components in many knowledge engineering projects that view them as crucial to their success. The conceptual foundation of a knowledge graph is provided by ontologies. Ontology modeling is an iterative engineering process consisting of steps such as the elicitation and formalization of requirements and the development, testing, refactoring, and release of the ontology. Testing the ontology is a crucial and occasionally overlooked step of the process, due to the lack of integrated tools to support it. As a result of this gap in the state of the art, ontology testing is carried out manually, which requires a considerable amount of time and effort from ontology engineers. The lack of tool support is also noticeable in the requirement elicitation process. Here, the rise in the adoption and accessibility of knowledge graphs allows for the development and use of automated tools to assist with the elicitation of requirements from such a complementary source of data. Therefore, this doctoral research focuses on developing methods and tools that support the requirement elicitation and testing steps of an ontology engineering process. To support ontology testing, we have developed XDTesting, a web application integrated with the GitHub platform that serves as an ontology testing manager. Concurrently, to support the elicitation and documentation of competency questions, we have defined and implemented RevOnt, a method to extract competency questions from knowledge graphs. Both methods are evaluated through their implementation, and the results are promising.
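A hedged sketch of how a competency question can be tested automatically against an ontology, here with rdflib and a SPARQL query; the file name and question are hypothetical, and this is not XDTesting or RevOnt themselves:

```python
# Minimal sketch (assumed workflow): check a competency question against an
# ontology by running it as a SPARQL query and asserting it returns answers.
from rdflib import Graph

g = Graph()
g.parse("ontology.ttl")   # hypothetical ontology file

# Competency question: "Which classes are defined in the ontology?"
cq = """
SELECT ?cls WHERE { ?cls a <http://www.w3.org/2002/07/owl#Class> . }
"""

results = list(g.query(cq))
assert results, "CQ returned no answers: ontology fails this test"
for (cls,) in results:
    print(cls)
```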
Abstract:
In this work, a prospective study conducted at the IRCCS Istituto delle Scienze Neurologiche di Bologna is presented. The aim was to investigate the brain functional connectivity of a cohort of patients (N=23) suffering from persistent olfactory dysfunction after SARS-CoV-2 infection (post-COVID-19 syndrome), compared with a matched group of healthy controls (N=26). In particular, starting from individual resting-state functional-MRI data, different analytical approaches were adopted in order to find potential alterations in the connectivity patterns of patients' brains. Analyses were conducted both at the whole-brain level and with a special focus on the brain regions involved in the processing of olfactory stimuli (the Olfactory Network). Statistical correlations between functional connectivity alterations and the results of olfactory and neuropsychological tests were investigated to explore associations with cognitive processes. The three approaches implemented for the analysis were seed-based correlation analysis, group-level Independent Component Analysis, and a graph-theoretical analysis of brain connectivity. Due to the relative novelty of these approaches, many implementation details and methodologies are not yet standardized and represent active research fields. The seed-based and group-ICA analyses showed no statistically significant differences between groups, while relevant alterations emerged from the graph-based analysis. In particular, the patients' olfactory sub-graph appeared to have a less pronounced modular structure compared to the control group; locally, hyper-connectivity of the right thalamus was observed in patients, with significant involvement of the right insula and hippocampus. An exploratory correlation analysis showed a positive correlation between the graphs' global modularity and the scores obtained in olfactory tests, and negative correlations between the thalamus hyper-connectivity and memory test scores.
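A minimal sketch of the graph-theoretical measure mentioned above, computing the global modularity of a connectivity graph with networkx; the example graph is a placeholder, not the study's fMRI-derived olfactory sub-graphs:

```python
# Minimal sketch (assumed pipeline, not the study code): quantify the
# modular structure of a connectivity graph via its modularity score.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

def global_modularity(G):
    """Partition G into communities and return the modularity score."""
    communities = greedy_modularity_communities(G, weight="weight")
    return modularity(G, communities, weight="weight")

# Placeholder weighted graph standing in for a thresholded fMRI
# correlation network; lower modularity = less pronounced modules.
patient_graph = nx.les_miserables_graph()
print(global_modularity(patient_graph))
```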
Abstract:
Poset associahedra are a family of convex polytopes recently introduced by Pavel Galashin in 2021. The associahedron A_n is an (n-2)-dimensional convex polytope whose facial structure encodes the ways of parenthesizing an n-letter word (among several equivalent combinatorial objects). Associahedra are deeply studied polytopes that appear naturally in many areas of mathematics: algebra, combinatorics, geometry, topology... They have many presentations and generalizations. One of their incarnations is as a compactification of the configuration space of n points on a line. Similarly, the P-associahedron of a poset P is a compactification of the configuration space of order-preserving maps from P to R. Galashin presents poset associahedra as combinatorial objects and shows that they can be realized as convex polytopes. However, his proof is not constructive, in the sense that no explicit coordinates are provided. The main goal of this thesis is to provide an explicit construction of poset associahedra as sections of graph associahedra, thus solving the open problem stated in Remark 1.5 of Galashin's paper.
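For orientation, a standard enumerative fact about associahedra (well known, not specific to Galashin's construction): the vertices of A_n correspond to the complete parenthesizations of an n-letter word, so they are counted by a Catalan number.

```latex
% Vertices of the associahedron A_n = complete parenthesizations of an
% n-letter word, counted by the Catalan number
\[
  C_{n-1} = \frac{1}{n}\binom{2(n-1)}{n-1},
\]
% e.g. n = 4: C_3 = 5 parenthesizations
% ((ab)c)d, (a(bc))d, (ab)(cd), a((bc)d), a(b(cd)).
```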
Abstract:
This thesis offers an introduction to geometric deep learning. The first part presents the main concepts of graph theory and introduces a diffusion dynamics on graphs, in analogy with the heat equation. Next, starting from the linear classifier, the architectures that led to the conception of graph convolutional networks are introduced. Finally, examples of some algorithms used in geometric deep learning are analyzed, and an implementation of them on the Cora dataset, a graph-structured dataset, is shown.
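As a sketch of the diffusion dynamics mentioned above, the standard graph heat equation, assuming L = D - A is the combinatorial graph Laplacian:

```latex
% Graph heat equation: node features x(t) diffuse along edges, with
% L = D - A the graph Laplacian (degree matrix minus adjacency matrix).
\[
  \frac{d\mathbf{x}(t)}{dt} = -L\,\mathbf{x}(t),
  \qquad
  \mathbf{x}(t) = e^{-tL}\,\mathbf{x}(0).
\]
```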
Abstract:
The neural networks customized and tested in this thesis (WaldoNet, FlowNet, and PatchNet) are a first exploration of and approach to the Template Matching task. The possibilities for extension are therefore many, and some are proposed in the thesis. During my thesis, I analyzed the functioning of the classical algorithms and adapted them with deep learning techniques. The features extracted from both the template and the query images resemble the keypoints of the SIFT algorithm. Then, instead of a similarity function or keypoint matching, WaldoNet and PatchNet use a convolutional layer to compare the features, while FlowNet uses a correlational layer. In addition, I identified the major challenges of the Template Matching task (affine/non-affine transformations, intensity changes...) and addressed them with a careful design of the dataset.
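For contrast with the learned comparisons above, a minimal sketch of the classical baseline, normalized cross-correlation via OpenCV; the image file names are hypothetical, and this is not the thesis code:

```python
# Minimal sketch of the classical baseline (normalized cross-correlation),
# which the thesis networks replace with learned convolutional and
# correlational comparisons. File names are hypothetical.
import cv2

query = cv2.imread("query.png", cv2.IMREAD_GRAYSCALE)
template = cv2.imread("template.png", cv2.IMREAD_GRAYSCALE)

# Slide the template over the query and score each location in [-1, 1].
scores = cv2.matchTemplate(query, template, cv2.TM_CCOEFF_NORMED)
_, best_score, _, best_loc = cv2.minMaxLoc(scores)

h, w = template.shape
print(f"best match at {best_loc} (score {best_score:.3f}), box {w}x{h}")
```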
Abstract:
Depth estimation from images has long been regarded as a preferable alternative to expensive and intrusive active sensors, such as LiDAR and ToF. The topic has attracted the attention of an increasingly wide audience thanks to the great number of application domains, such as autonomous driving, robotic navigation, and 3D reconstruction. Among the various techniques employed for depth estimation, stereo matching is one of the most widespread, owing to its robustness, speed, and simplicity of setup. Recent developments have been aided by the abundance of annotated stereo images, which granted deep learning the opportunity to thrive in a research area where deep networks can reach state-of-the-art sub-pixel precision in most cases. Despite these findings, stereo matching still presents many open challenges, two of them being finding pixel correspondences in the presence of objects that exhibit non-Lambertian behaviour, and processing high-resolution images. Recently, a novel dataset named Booster, which contains high-resolution stereo pairs featuring a large collection of labeled non-Lambertian objects, has been released. That work showed that training state-of-the-art deep neural networks on such data improves their generalization capabilities in the presence of non-Lambertian surfaces. While a further step towards tackling the aforementioned challenges, Booster includes a rather small number of annotated images and thus cannot satisfy the intensive training requirements of deep learning. This thesis investigates novel view synthesis techniques to augment the Booster dataset, with the ultimate goal of improving stereo matching reliability on high-resolution images that display non-Lambertian surfaces.
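For reference, the standard triangulation relation that lets stereo matching yield depth (textbook material, not specific to Booster): a pixel's disparity between the two rectified views determines its depth, given the focal length and camera baseline.

```latex
% Standard stereo triangulation: depth Z from disparity d, with focal
% length f (in pixels) and baseline B (distance between the two cameras).
\[
  d = x_{\text{left}} - x_{\text{right}},
  \qquad
  Z = \frac{f\,B}{d}.
\]
```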
Abstract:
This thesis contributes to the ArgMining 2021 shared task on Key Point Analysis. Key Point Analysis entails extracting, and calculating the prevalence of, a concise list of the most prominent talking points from an input corpus. These talking points are usually referred to as key points. Key Point Analysis is divided into two subtasks: Key Point Matching, which involves assigning a matching score to each key point/argument pair, and Key Point Generation, which consists of the generation of key points. The Key Point Matching task was approached with different models: a pretrained Sentence Transformers model and a tree-constrained Graph Neural Network were tested. The best model was the fine-tuned Sentence Transformers, which achieved a mean Average Precision score of 0.75, ranking 12th among the participating teams. This model was then used for the Key Point Generation subtask, with an extractive method selecting key point candidates and the model developed for the previous subtask evaluating them.
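A minimal sketch of Key Point Matching with a pretrained Sentence Transformers model; the model name and sentence pair are illustrative assumptions, not the exact competition pipeline:

```python
# Minimal sketch (assumed setup): score a key point/argument pair with a
# pretrained Sentence Transformers model via cosine similarity.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # hypothetical model choice

argument = "Homeschooled children miss out on social interaction with peers."
key_point = "Homeschooling hurts children's social development."

emb = model.encode([argument, key_point], convert_to_tensor=True)
score = util.cos_sim(emb[0], emb[1]).item()  # matching score in [-1, 1]
print(f"matching score: {score:.3f}")
```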
Abstract:
Stereo Vision is a popular research topic in the field of Computer Vision; it consists of using two images of the same scene, produced by two different cameras, to extract 3D information. The basic idea of Stereo Vision is to simulate human binocular vision: the two cameras are arranged horizontally to act as "eyes" looking at the 3D scene. By comparing the two resulting images, information about the positions of the objects in the scene can be obtained. In this report we present a Stereo Vision algorithm: a parallel algorithm whose goal is to trace the contour lines of a geographic area. The algorithm was originally implemented for the Connection Machine CM-2, a supercomputer developed in the 1980s, and was written in *Lisp, a language derived from Lisp and designed for that machine. This report also covers the translation and implementation of the algorithm in CUDA, a hardware architecture for parallel processing developed by NVIDIA that allows parallel code to be executed on GPUs. We also look at the difficulties encountered in the translation from *Lisp to CUDA.
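As a hedged illustration of the stereo correspondence step at the core of such algorithms, a generic sequential sketch (unrelated to the original *Lisp or CUDA code; in practice this per-pixel search is exactly what gets parallelized):

```python
# Minimal sketch (illustrative only): match a pixel block from the left
# image along the same row of the right image to estimate disparity.
import numpy as np

def block_disparity(left, right, y, x, block=5, max_disp=32):
    """Return the disparity minimizing sum-of-absolute-differences."""
    h = block // 2
    patch = left[y - h:y + h + 1, x - h:x + h + 1]
    best_d, best_cost = 0, np.inf
    for d in range(min(max_disp, x - h) + 1):
        cand = right[y - h:y + h + 1, x - d - h:x - d + h + 1]
        cost = np.abs(patch - cand).sum()
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d

left = np.random.rand(64, 64)
right = np.roll(left, -4, axis=1)   # synthetic 4-pixel horizontal shift
print(block_disparity(left, right, y=32, x=40))   # expected: 4
```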
Abstract:
The objective of this thesis is the study of the user scheduling problem in a Low Earth Orbit (LEO) Multi-User MIMO system. With the application of cutting-edge digital beamforming algorithms, a LEO satellite with an antenna array and a large number of antenna elements can provide service to many user terminals (UTs) in full frequency reuse (FFR) schemes. Since the number of UTs on the ground is much larger than the number of transmit antennas on the satellite, user scheduling is necessary. Scheduling can be accomplished by grouping users into different clusters: users within the same cluster are multiplexed and served together via Space Division Multiple Access (SDMA), i.e., digital beamforming or Multi-User MIMO techniques; the different clusters of users are then served in different time slots via Time Division Multiple Access (TDMA). The design of an optimal user grouping strategy is known to be an NP-complete problem, solvable only through exhaustive search. In this thesis, we provide a graph-based user scheduling and feed space beamforming architecture for the downlink, with the aim of reducing user inter-beam interference. The main idea is to cluster users whose pairwise great-circle distance is as large as possible. First, we create a graph where the users are the vertices, and an edge between two users exists if their great-circle distance is above a certain threshold. In the second step, we develop a low-complexity greedy user clustering technique and iteratively search for the maximum clique in the graph, i.e., the largest fully connected subgraph. Finally, using three power normalization techniques, a Minimum Mean Square Error (MMSE) beamforming matrix is deployed on a per-cluster basis. The suggested scheduling system is compared with a position-based scheduler, which generates a beam lattice on the ground and randomly selects one user per beam to form a cluster.
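A minimal sketch of the graph construction and greedy clique-based clustering described above; the user coordinates, threshold, and use of networkx are illustrative assumptions, not the thesis simulator:

```python
# Minimal sketch (assumed parameters): build the user graph from pairwise
# great-circle distances, then greedily extract cliques as SDMA clusters.
import math
import networkx as nx

def great_circle_km(p, q, radius_km=6371.0):
    """Haversine distance between (lat, lon) points given in degrees."""
    (la1, lo1), (la2, lo2) = (map(math.radians, p), map(math.radians, q))
    a = (math.sin((la2 - la1) / 2) ** 2
         + math.cos(la1) * math.cos(la2) * math.sin((lo2 - lo1) / 2) ** 2)
    return 2 * radius_km * math.asin(math.sqrt(a))

users = {0: (44.5, 11.3), 1: (44.6, 11.4), 2: (41.9, 12.5), 3: (45.4, 9.2)}
threshold_km = 100.0   # hypothetical minimum separation

G = nx.Graph()
G.add_nodes_from(users)
G.add_edges_from((u, v) for u in users for v in users
                 if u < v and great_circle_km(users[u], users[v]) > threshold_km)

# Greedy clustering: repeatedly remove a maximum clique (one SDMA cluster);
# each remaining cluster is served in its own TDMA time slot.
clusters = []
while G.number_of_nodes():
    clique = max(nx.find_cliques(G), key=len)   # largest maximal clique
    clusters.append(clique)
    G.remove_nodes_from(clique)
print(clusters)
```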
Abstract:
Alloimmunisation is a major complication in patients with sickle cell disease (SCD) receiving red blood cell (RBC) transfusions, and despite the provision of Rh-phenotyped RBC units, Rh antibodies still occur. These antibodies in patients positive for the corresponding Rh antigen are considered autoantibodies in many cases, but variant RH alleles found in SCD patients can also contribute to Rh alloimmunisation. In this study, we characterised variant RH alleles in 31 SCD patients who made antibodies to Rh antigens despite antigen-positive status, and evaluated the clinical significance of the antibodies produced. The RHD and RHCE BeadChip™ from BioArray Solutions and/or amplification and sequencing of exons were used to identify the RH variants. The serological features of all Rh antibodies in antigen-positive patients were analysed, and the clinical significance of the antibodies was evaluated by retrospective analysis of the haemoglobin (Hb) levels before and after transfusion; the change from the baseline pre-transfusion Hb and the percentage of HbS were also determined. We identified variant RH alleles in 31/48 (65%) of SCD patients with Rh antibodies. Molecular analyses revealed the presence of partial RHD alleles and variant RHCE alleles associated with altered C and e antigens. Five patients were compound heterozygotes for RHD and RHCE variants. Retrospective analysis showed that 42% of the antibodies produced by the patients with RH variants were involved in delayed haemolytic transfusion reactions or decreased survival of transfused RBCs. In this study, we found that Rh antibodies in SCD patients with RH variants can be clinically significant and, therefore, matching patients based on RH variants should be considered.
Abstract:
This paper argues in favor of concord feature valuing within the DP in terms of the Agree operation (Chomsky, 1999), with no recourse to any other mechanism. I show that Agree accounts for feature valuing both at the sentence level and in the DP, contrary to Chomsky's (1999) suggestion that concord in the DP should involve some other checking mechanism.
Abstract:
PURPOSE: To compare the 2% ibopamine provocative test with the water drinking test as a provocative test for glaucoma. METHODS: Primary open-angle glaucoma patients and normal individuals were selected from CEROF-Universidade Federal de Goiás (UFG) and underwent the 2% ibopamine provocative test and the water drinking test in randomized order, at least 1 week apart. Intraocular pressure (IOP) before and after both tests, Bland-Altman plots, and sensitivity and specificity (as measured by ROC curves) were obtained for both methods. RESULTS: Forty-seven eyes from 25 patients were included (27 eyes from 15 glaucoma patients and 20 eyes from 10 normal individuals), with a mean age of 54.2 ± 12.7 years. The mean MD of the glaucoma patients was -2.8 ± 2.11 dB. In glaucoma patients, there was no statistically significant difference between the two tests in baseline IOP (p=0.8), but there was a difference after the provocative tests (p=0.03) and in the IOP variation (4.4 ± 1.3 mmHg for ibopamine and 3.2 ± 2.2 mmHg for the water drinking test, p=0.01). There was no difference in any of the studied parameters for normal individuals. The Bland-Altman plot showed high dispersion between the two methods. The areas under the ROC curve were 0.987 for the ibopamine provocative test and 0.807 for the water drinking test. CONCLUSION: In this selected subgroup of glaucoma patients with early visual field defects, the ibopamine provocative test showed better sensitivity/specificity than the water drinking test.
Abstract:
Universidade Estadual de Campinas. Faculdade de Educação Física