927 results for Graph DBMS, BenchMarking, OLAP, NoSQL
Abstract:
Poset associahedra are a family of convex polytopes introduced by Pavel Galashin in 2021. The associahedron A_n is an (n-2)-dimensional convex polytope whose face structure encodes the ways of parenthesizing an n-letter word (among several equivalent combinatorial objects). Associahedra are extensively studied polytopes that appear naturally in many areas of mathematics: algebra, combinatorics, geometry, topology... They have many presentations and generalizations. One of their incarnations is as a compactification of the configuration space of n points on a line. Similarly, the P-associahedron of a poset P is a compactification of the configuration space of order-preserving maps from P to R. Galashin presents poset associahedra as combinatorial objects and shows that they can be realized as convex polytopes. However, his proof is not constructive, in the sense that no explicit coordinates are provided. The main goal of this thesis is to provide an explicit construction of poset associahedra as sections of graph associahedra, thus solving the open problem stated in Remark 1.5 of Galashin's paper.
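As context for the combinatorics (a standard fact about associahedra, not specific to this thesis): the vertices of A_n correspond to the complete parenthesizations of an n-letter word, which are counted by the Catalan numbers.

```latex
% Vertex count of the associahedron A_n via Catalan numbers.
\[
  \#\{\text{vertices of } A_n\} \;=\; C_{n-1} \;=\; \frac{1}{n}\binom{2(n-1)}{n-1},
  \qquad
  n = 4:\ C_3 = 5:\ ((ab)c)d,\ (a(bc))d,\ (ab)(cd),\ a((bc)d),\ a(b(cd)).
\]
```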
Abstract:
This thesis offers an introduction to geometric deep learning. The first part presents the main concepts of graph theory and introduces a diffusion dynamic on graphs, in analogy with the heat equation. Next, starting from the linear classifier, the architectures that led to the conception of graph convolutional networks are introduced. Finally, examples of some of the algorithms used in geometric deep learning are analyzed, and an implementation is shown on the Cora dataset, a dataset with graph structure.
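To make the convolution step concrete, here is a minimal numpy sketch (illustrative, not code from the thesis) of one graph-convolution propagation step in the standard form H' = ReLU(D^{-1/2}(A + I)D^{-1/2} H W); the toy graph, feature sizes, and weights are all invented.

```python
import numpy as np

# Toy 4-node undirected graph, illustrative only.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)

def gcn_layer(A, H, W):
    """One GCN propagation step: H' = ReLU(D^-1/2 (A+I) D^-1/2 H W)."""
    A_hat = A + np.eye(A.shape[0])          # add self-loops
    d = A_hat.sum(axis=1)                   # degrees of A_hat
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))  # D^-1/2
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt
    return np.maximum(A_norm @ H @ W, 0.0)  # ReLU nonlinearity

rng = np.random.default_rng(0)
H = rng.normal(size=(4, 3))   # input node features
W = rng.normal(size=(3, 2))   # weights (learnable in practice, random here)
print(gcn_layer(A, H, W))     # new 4x2 node representations
```

The normalized adjacency acting on H is the same averaging operator that drives diffusion on the graph, which is the link to the heat-equation analogy mentioned above.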
Abstract:
The goals of this thesis are a comparative study of some non-relational DBMSs and a comparison of different logical and physical modeling solutions for non-relational databases. Using two non-relational document-based DBMSs, MongoDB and CouchDB, and one relational DBMS, Oracle, we analyze different logical data-modeling solutions for document databases and study the choice of the attributes on which to build indexes. First, a simple case study is defined for the comparison, based on two entities in a 1:N relation, over which a suitable workload is built. Non-relational databases are schema-less, with no fixed schema, so there is greater modeling freedom. In this thesis the data are modeled with the Referencing and Embedding techniques, which consist, respectively, in inserting a key (reference) or an entire sub-document (embedding) inside a document in order to express the concept of a relation between different entities. To study whether indexing an attribute pays off, each entity is composed of two identical triplets of attributes defined with different selectivity levels, the difference being that an index is built on each attribute of the second triplet. The workload consists of queries defined so as to test the different modelings, including join predicates, which are not usually contemplated in document models. The queries are executed on each type of database and the execution times are recorded, so that the performance of the different DBMSs can be compared on the basis of CRUD operations.
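To illustrate the two modelings (a sketch with hypothetical entities, not the thesis's actual case study), here is how a 1:N relation can be expressed by Referencing versus Embedding in a document store; the commented pymongo call shows how an index would be built on a chosen attribute.

```python
# Referencing: the N-side documents carry the key of the 1-side document.
author = {"_id": "a1", "name": "Rossi"}
posts_referencing = [
    {"_id": "p1", "author_id": "a1", "title": "First post"},
    {"_id": "p2", "author_id": "a1", "title": "Second post"},
]

# Embedding: the N-side documents are nested inside the 1-side document.
author_embedding = {
    "_id": "a1",
    "name": "Rossi",
    "posts": [
        {"title": "First post"},
        {"title": "Second post"},
    ],
}

# A "join" under referencing is an application-side lookup:
posts_of_a1 = [p for p in posts_referencing if p["author_id"] == author["_id"]]
print(posts_of_a1)

# With pymongo, an index on a selective attribute would be created like:
#   db.posts.create_index("author_id")
```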
Abstract:
Artificial Intelligence is reshaping the fashion industry in different ways. E-commerce retailers exploit their data through AI to enhance their search engines, make outfit suggestions, and forecast the success of specific fashion products. However, this is a challenging endeavour, as the data they possess is huge, complex, and multi-modal. The most common way to search for fashion products online is by matching keywords with phrases in the product's description, which are often cluttered, inadequate, and differ across collections and sellers. A customer may also browse an online store's taxonomy, although this is time-consuming and doesn't guarantee relevant items. With the advent of Deep Learning architectures, particularly Vision-Language models, ad-hoc solutions have been proposed that model both the product image and description to solve these problems. However, the suggested solutions do not effectively exploit the semantic or syntactic information of these modalities, or the unique qualities and relations of clothing items. In this thesis, a novel approach is proposed to address these issues: it models and processes images and text descriptions as graphs, in order to exploit the relations within and between the modalities, and it employs specific techniques to extract syntactic and semantic information. The results obtained show promising performance on different tasks when compared to current state-of-the-art deep learning architectures.
Abstract:
The study of the user scheduling problem in a Low Earth Orbit (LEO) Multi-User MIMO system is the objective of this thesis. With the application of cutting-edge digital beamforming algorithms, a LEO satellite with an antenna array and a large number of antenna elements can serve many user terminals (UTs) in full frequency reuse (FFR) schemes. Since the UTs on the ground greatly outnumber the transmit antennas on the satellite, user scheduling is necessary. Scheduling can be accomplished by grouping users into different clusters: users within the same cluster are multiplexed and served together via Space Division Multiple Access (SDMA), i.e., digital beamforming or Multi-User MIMO techniques; the different clusters of users are then served in different time slots via Time Division Multiple Access (TDMA). The design of an optimal user grouping strategy is known to be an NP-complete problem, solvable only through exhaustive search. In this thesis, we provide a graph-based user scheduling and feed space beamforming architecture for the downlink, with the aim of reducing inter-beam interference among users. The main idea is to cluster users whose pairwise great-circle distance is as large as possible. First, we create a graph where the users are the vertices, and an edge between two users exists if their great-circle distance is above a certain threshold. In the second step, we develop a low-complexity greedy user clustering technique that iteratively searches for the maximum clique in the graph, i.e., the largest fully connected subgraph. Finally, using three power normalization techniques, a Minimum Mean Square Error (MMSE) beamforming matrix is applied on a per-cluster basis. The suggested scheduling system is compared with a position-based scheduler, which generates a beam lattice on the ground and randomly selects one user per beam to form a cluster.
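The following is a minimal sketch of the graph construction and clique-based clustering described above, using networkx with synthetic coordinates and an illustrative threshold; note that it finds the maximum clique by enumerating maximal cliques, whereas the thesis develops a low-complexity greedy search.

```python
import math
import networkx as nx

def great_circle_km(p, q, R=6371.0):
    """Haversine great-circle distance between (lat, lon) pairs in degrees."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * R * math.asin(math.sqrt(h))

# Synthetic UT positions (lat, lon) and separation threshold, illustrative.
users = {0: (45.1, 9.2), 1: (45.2, 9.3), 2: (48.9, 2.3), 3: (52.5, 13.4)}
THRESHOLD_KM = 300.0

# Edge between two users iff their great-circle distance exceeds the threshold.
G = nx.Graph()
G.add_nodes_from(users)
for u in users:
    for v in users:
        if u < v and great_circle_km(users[u], users[v]) > THRESHOLD_KM:
            G.add_edge(u, v)

# Clustering: repeatedly extract a maximum clique of well-separated users.
clusters = []
while G.number_of_nodes() > 0:
    clique = max(nx.find_cliques(G), key=len)  # largest maximal clique
    clusters.append(clique)
    G.remove_nodes_from(clique)

print(clusters)  # each cluster would be served in its own TDMA slot
```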
Abstract:
This study aims to develop and test a method for benchmarking two different Additive Manufacturing systems with a Renishaw Cyclone coordinate measuring machine. In particular, the form-precision performance of an FDM system and of a PolyJet system is evaluated, in order to obtain data on the potential of these two technologies for small parts. After a general introduction to Additive Manufacturing, the two techniques under study are examined in detail, and the structuring of the experimental plan is discussed in relation to the goals of the activity and the methods chosen for data acquisition and evaluation. The work starts with the fabrication of a benchmark model, whose geometries are then measured with a coordinate measuring machine to obtain form-precision values, presented as geometric tolerances in the GD&T system. Next, all the phases of the experimental activity are described, starting with the design of the benchmark model, continuing with the fabrication and measurement processes, and arriving at the deduction of the form-precision values through post-processing of the data. Finally, the tolerance values obtained are presented, and conclusions are drawn about the success of the experimental activity and the comparison between the two Additive Manufacturing technologies.
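As a generic illustration of how a form-precision value can be deduced from CMM points in post-processing (a sketch, not the procedure actually used in this work), flatness can be estimated as the spread of residuals around a least-squares plane; the ISO minimum-zone definition would give a slightly tighter value.

```python
import numpy as np

def flatness(points):
    """Flatness estimate: spread of residuals around a least-squares plane.

    points: (N, 3) array of CMM coordinates. Fits z = a*x + b*y + c and
    returns max(residual) - min(residual), a simple GD&T-style form error.
    """
    pts = np.asarray(points, dtype=float)
    A = np.c_[pts[:, 0], pts[:, 1], np.ones(len(pts))]
    coeffs, *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
    residuals = pts[:, 2] - A @ coeffs
    return residuals.max() - residuals.min()

# Synthetic measurement of a nominally flat face with a small form error.
rng = np.random.default_rng(1)
xy = rng.uniform(0, 10, size=(200, 2))
z = 0.02 * xy[:, 0] + 0.01 * xy[:, 1] + rng.normal(0, 0.005, 200)
print(f"flatness = {flatness(np.c_[xy, z]):.4f} mm")
```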
Abstract:
This thesis studies two algorithms for anomaly detection in random graphs. For both algorithms, generative models of dynamic graphs were created in order to run synthetic tests. The thesis consists of an initial theoretical part and a second experimental part. The second chapter introduces graph theory. The third chapter presents the community detection problem. The fourth chapter introduces possible definitions of the concept of dynamic anomalies and the problem of their detection. The fifth chapter proposes an outlierness score associated with each node, based on the comparison between its dynamics and that of the community it belongs to. The last chapter focuses on the problem of finding a description of the network in terms of groups or roles on which to base the search for dynamic anomalies.
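The abstract does not spell the score out; one plausible minimal sketch of the idea of comparing a node's dynamics with its community's is to z-score each node's degree change between two snapshots against the changes in its community. All names and numbers below are illustrative.

```python
import statistics

def outlierness(deg_t0, deg_t1, communities):
    """Hypothetical node outlierness: z-score of a node's degree change
    between two snapshots relative to the changes in its community."""
    delta = {v: deg_t1[v] - deg_t0[v] for v in deg_t0}
    scores = {}
    for community in communities:
        changes = [delta[v] for v in community]
        mu = statistics.mean(changes)
        sigma = statistics.stdev(changes) if len(changes) > 1 else 1.0
        for v in community:
            scores[v] = abs(delta[v] - mu) / (sigma or 1.0)
    return scores

# Toy example: node 3 changes much more than the rest of its community.
deg_t0 = {0: 4, 1: 5, 2: 4, 3: 4}
deg_t1 = {0: 5, 1: 6, 2: 5, 3: 15}
print(outlierness(deg_t0, deg_t1, [[0, 1, 2, 3]]))
```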
Abstract:
Semantic Web technologies provide the means to express knowledge in a formal and standardized manner, enabling machines to automatically derive meaning from the data. Often this knowledge is uncertain, or different degrees of certainty may be assigned to the same statements. This is the case in many fields of study, such as the Digital Humanities, Science, and the Arts. The challenge lies in the fact that our knowledge of the surrounding world is dynamic and may evolve based on new data coming from the latest discoveries. Furthermore, we should be able to express conflicting, debated, or disputed statements in an efficient, effective, and consistent way without the need to assert them. We call this approach 'Expressing Without Asserting' (EWA). In this work we identify the existing methods that are compatible with current Semantic Web standards and enable us to express EWA. In our research we were able to show that existing reification methods such as Named Graphs, Singleton Properties, Wikidata Statements, and RDF-Star are the most suitable methods to represent EWA reliably. We then compare these methods with our own method, namely Conjectures, from a quantitative perspective. Our main objective was to put Conjectures through stress tests, leveraging enormous datasets created ad hoc from art-related Wikidata dumps, and to measure performance in various triplestores against similar concurrent methods. Our experiments show that Conjectures are a formidable tool to express EWA efficiently and effectively. In some cases, Conjectures outperform state-of-the-art methods such as Singleton Properties and RDF-Star, exposing their great potential. It is our firm belief that Conjectures represent a suitable solution to EWA issues. Conjectures in their weak form are fully compatible with Semantic Web standards, especially RDF and SPARQL. Furthermore, Conjectures benefit from a comprehensive syntax and intuitive semantics that make them easy to learn and adopt.
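As a minimal sketch of the 'expressing without asserting' idea using one of the standards-compatible methods listed above (Named Graphs, via rdflib; the Conjectures syntax itself is not shown, and all URIs are illustrative): the disputed statement is placed in a named graph, so it can be referred to and queried without being asserted in the default graph.

```python
from rdflib import Dataset, Literal, Namespace, URIRef

EX = Namespace("http://example.org/")
ds = Dataset()

# The disputed statement lives only in a named graph: it is expressed,
# but not asserted in the default graph.
disputed = ds.graph(URIRef("http://example.org/graphs/conjecture-1"))
disputed.add((EX.MonaLisa, EX.attributedTo, EX.Leonardo))

# Metadata about the conjecture itself can be asserted normally
# in the default graph.
ds.add((URIRef("http://example.org/graphs/conjecture-1"),
        EX.claimedBy, Literal("scholar A")))

print(ds.serialize(format="trig"))
```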
Abstract:
To assess the quality of care of women with severe maternal morbidity and to identify associated factors. This is a national multicenter cross-sectional study performing surveillance for severe maternal morbidity, using the World Health Organization criteria. The expected number of maternal deaths was calculated with the maternal severity index (MSI), based on the severity of complications, and the standardized mortality ratio (SMR) was estimated for each center. Analyses of the adequacy of care were performed. Seventeen hospitals were classified as providing adequate care and 10 as providing nonadequate care. Besides an almost twofold increase in the maternal mortality ratio, the main factors associated with nonadequate performance were geographic difficulty in accessing health services (P < 0.001), delays related to quality of medical care (P = 0.012), absence of blood derivatives (P = 0.013), difficulties of communication between health services (P = 0.004), and any delay during the whole process (P = 0.039). This is an example of how the performance of health services can be evaluated using a benchmarking tool specific to obstetrics. In this study the MSI was a useful tool for identifying differences in maternal mortality ratios and factors associated with nonadequate performance of care.
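In code, the benchmarking logic sketched in the abstract reduces to an observed-over-expected ratio; a toy example with invented numbers, not study data:

```python
def smr(observed_deaths, expected_probabilities):
    """Standardized mortality ratio: observed deaths over the number of
    deaths expected from per-case severity-based (e.g. MSI) probabilities."""
    expected = sum(expected_probabilities)
    return observed_deaths / expected

# Toy center: 4 severe cases with MSI-style predicted death probabilities.
print(smr(observed_deaths=2, expected_probabilities=[0.1, 0.3, 0.5, 0.2]))
# -> 1.82: more deaths than expected, suggesting nonadequate performance.
```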
Abstract:
PURPOSE: To compare the 2% ibopamine provocative test with the water drinking test as a provocative test for glaucoma. METHODS: Primary open-angle glaucoma patients and normal individuals were selected from CEROF, Universidade Federal de Goiás (UFG), and underwent the 2% ibopamine provocative test and the water drinking test in randomized order, at least 1 week apart. Intraocular pressure (IOP) before and after both tests, Bland-Altman plots, and sensitivity and specificity (as measured by ROC curves) were obtained for both methods. RESULTS: Forty-seven eyes from 25 patients were included (27 eyes from 15 glaucoma patients and 20 eyes from 10 normal individuals), with a mean age of 54.2 ± 12.7 years. The mean MD of the glaucoma patients was -2.8 ± 2.11 dB. In glaucoma patients there was no statistically significant difference in baseline IOP between the two tests (p=0.8), but there was a difference after the provocative tests (p=0.03) and in the IOP variation (4.4 ± 1.3 mmHg for ibopamine and 3.2 ± 2.2 mmHg for the water drinking test, p=0.01). There was no difference in any of the studied parameters for normal individuals. The Bland-Altman plot showed high dispersion between the two methods. The areas under the ROC curve were 0.987 for the ibopamine provocative test and 0.807 for the water drinking test. CONCLUSION: In this selected subgroup of glaucoma patients with early visual field defects, the ibopamine provocative test showed better sensitivity/specificity than the water drinking test.
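As a sketch of how the area under the ROC curve for such a provocative test could be computed from labeled IOP variations (synthetic numbers, not the study's data), assuming scikit-learn is available:

```python
from sklearn.metrics import roc_auc_score

# Synthetic IOP variations (mmHg) after a provocative test; 1 = glaucoma.
labels   = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
iop_rise = [4.5, 5.0, 3.8, 4.2, 6.1, 1.2, 2.0, 1.8, 2.5, 1.0]

# The area under the ROC curve summarizes the sensitivity/specificity
# trade-off across all possible cutoffs.
print(roc_auc_score(labels, iop_rise))  # 1.0 on this well-separated toy data
```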
Abstract:
Universidade Estadual de Campinas. Faculdade de Educação Física
Abstract:
The Lattes platform is the major scientific information system maintained by the National Council for Scientific and Technological Development (CNPq). The platform manages the curricular information of researchers and institutions working in Brazil, based on the so-called Lattes Curriculum. However, the public information is available only individually for each researcher, with no automatic creation of reports covering the scientific production of research groups. It is thus difficult to extract and summarize useful knowledge for medium- to large-sized groups of researchers. This paper describes the design of, implementation of, and experiences with scriptLattes: an open-source system to create academic reports of groups based on curricula from the Lattes database. The scriptLattes system is composed of the following modules: (a) data selection, (b) data preprocessing, (c) redundancy treatment, (d) collaboration graph generation among group members, (e) research map generation based on geographical information, and (f) automatic report creation of bibliographical, technical and artistic production, and academic supervisions. The system has been extensively tested on a large variety of research groups from Brazilian institutions, and the generated reports have shown an easy alternative for extracting knowledge from data in the context of the Lattes platform. The source code, usage instructions and examples are available at http://scriptlattes.sourceforge.net/.
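A minimal sketch of the idea behind module (d), not scriptLattes' actual code: nodes are group members and edge weights count joint productions after redundancy treatment; the member names and records below are invented.

```python
from itertools import combinations
import networkx as nx

# Hypothetical production records after redundancy treatment (module (c)):
# each entry lists the group members who co-authored one production.
productions = [
    ["alice", "bob"],
    ["alice", "bob"],
    ["bob", "carol"],
]

# Module (d) idea: nodes are members, edge weights count joint productions.
G = nx.Graph()
for authors in productions:
    for u, v in combinations(sorted(set(authors)), 2):
        weight = G.get_edge_data(u, v, default={"weight": 0})["weight"]
        G.add_edge(u, v, weight=weight + 1)

for u, v, data in G.edges(data=True):
    print(f"{u} -- {v}: {data['weight']} joint production(s)")
```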
Abstract:
OBJECTIVE: To analyze the diagnostic accuracy of two indirect immunofluorescence protocols for canine visceral leishmaniasis. METHODS: Dogs from a seroepidemiological survey carried out in 2003 in an endemic area in the municipalities of Araçatuba and Andradina, in the northwest region of the state of São Paulo, and in a non-endemic area of the metropolitan region of São Paulo, were used to compare two protocols of the indirect immunofluorescence assay (IFAT) for leishmaniasis: one using the heterologous antigen Leishmania major (RIFI-BM) and the other using the homologous antigen Leishmania chagasi (RIFI-CH). Accuracy was estimated with two-graph receiver operating characteristic (TG-ROC) analysis. The TG-ROC analysis compared the readings of the 1:20 dilution of the homologous antigen (RIFI-CH), taken as the reference test, with the dilutions of RIFI-BM (heterologous antigen). RESULTS: The 1:20 dilution of the RIFI-CH test showed the best contingency coefficient (0.755) and the strongest association between the two variables studied (chi-square = 124.3), and was therefore considered the reference dilution in the comparisons with the different dilutions of the RIFI-BM test. The best RIFI-BM results were obtained at the 1:40 dilution, with the best contingency coefficient (0.680) and the strongest association (chi-square = 80.8). By changing the cutoff point suggested in this analysis for the 1:40 dilution of RIFI-BM, the specificity increased from 57.5% to 97.7%, although the 1:80 dilution showed the best sensitivity estimate (80.2%) with the new cutoff. CONCLUSIONS: TG-ROC analysis can provide important information about diagnostic tests, besides suggesting cutoff points that may improve the sensitivity and specificity estimates of the test, and it allows tests to be assessed in terms of cost-effectiveness.
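A minimal sketch of the TG-ROC idea (illustrative, not the study's data or code): sensitivity and specificity are computed as functions of the cutoff, and the two curves are then inspected together to choose a cutoff.

```python
import numpy as np

def tg_roc(titers, labels, cutoffs):
    """Two-graph ROC: sensitivity and specificity as functions of the cutoff."""
    titers, labels = np.asarray(titers), np.asarray(labels)
    rows = []
    for c in cutoffs:
        positive = titers >= c
        se = (positive & (labels == 1)).sum() / (labels == 1).sum()
        sp = (~positive & (labels == 0)).sum() / (labels == 0).sum()
        rows.append((c, se, sp))
    return rows

# Synthetic reciprocal titers: 1 = infected, 0 = uninfected (illustrative).
titers = [20, 40, 80, 160, 320, 20, 20, 40, 40, 80]
labels = [1,  1,  1,  1,   1,  0,  0,  0,  0,  0]
for cutoff, se, sp in tg_roc(titers, labels, [20, 40, 80, 160]):
    print(f"cutoff 1:{cutoff}  Se={se:.2f}  Sp={sp:.2f}")
```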
Abstract:
Background: Various neuroimaging studies, both structural and functional, have provided support for the proposal that a distributed brain network is likely to be the neural basis of intelligence. The theory of Distributed Intelligent Processing Systems (DIPS), first developed in the field of Artificial Intelligence, was proposed to adequately model distributed neural intelligent processing. In addition, the neural efficiency hypothesis suggests that individuals with higher intelligence display more focused cortical activation during cognitive performance, resulting in lower total brain activation than in individuals with lower intelligence. This may be understood as a property of the DIPS. Methodology and Principal Findings: In our study, a new EEG brain mapping technique, based on the neural efficiency hypothesis and the notion of the brain as a Distributed Intelligent Processing System, was used to investigate the correlations between IQ, evaluated with the WAIS (Wechsler Adult Intelligence Scale) and the WISC (Wechsler Intelligence Scale for Children), and the brain activity associated with visual and verbal processing, in order to test the validity of a distributed neural basis for intelligence. Conclusion: The present results support these claims and the neural efficiency hypothesis.