10 results for non-trivial data structures
at Universidade do Minho
Abstract:
Master's dissertation in Informatics Engineering
Abstract:
Dataflow programs are widely used. Each program is a directed graph where nodes are computations and edges indicate the flow of data. In prior work, we reverse-engineered legacy dataflow programs by deriving their optimized implementations from a simple specification graph using graph transformations called refinements and optimizations. In MDE-speak, our derivations were PIM-to-PSM mappings. In this paper, we show how extensions complement refinements, optimizations, and PIM-to-PSM derivations to make the process of reverse engineering complex legacy dataflow programs tractable. We explain how optional functionality in transformations can be encoded, thereby enabling us to encode product lines of transformations as well as product lines of dataflow programs. We describe the implementation of extensions in the ReFlO tool and present two non-trivial case studies as evidence of our work's generality.
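As a concrete illustration of the graph transformations involved, the sketch below (our own simplification in Python, not ReFlO's actual API; all names are hypothetical) refines an abstract node of a specification graph into an implementing subgraph, in the spirit of the PIM-to-PSM derivations described above.

    # Minimal sketch: a dataflow graph as nodes and edges, plus a
    # refinement step replacing an abstract node with a subgraph.
    class Graph:
        def __init__(self):
            self.nodes = set()   # computations
            self.edges = set()   # (src, dst) pairs: the flow of data

        def add_edge(self, src, dst):
            self.nodes.update([src, dst])
            self.edges.add((src, dst))

    def refine(g, abstract_node, subgraph, entry_node, exit_node):
        """Replace abstract_node with subgraph, rewiring incoming
        edges to entry_node and outgoing edges from exit_node."""
        g.nodes.remove(abstract_node)
        g.nodes.update(subgraph.nodes)
        rewired = set()
        for src, dst in g.edges:
            if dst == abstract_node:
                rewired.add((src, entry_node))
            elif src == abstract_node:
                rewired.add((exit_node, dst))
            else:
                rewired.add((src, dst))
        g.edges = rewired | subgraph.edges

    # PIM-style specification: SORT as one abstract computation.
    spec = Graph()
    spec.add_edge("input", "SORT")
    spec.add_edge("SORT", "output")

    # PSM-style implementation: SORT refined to a parallel merge sort.
    impl = Graph()
    for e in [("split", "sort_left"), ("split", "sort_right"),
              ("sort_left", "merge"), ("sort_right", "merge")]:
        impl.add_edge(*e)

    refine(spec, "SORT", impl, entry_node="split", exit_node="merge")
    print(sorted(spec.edges))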
Abstract:
Large-scale distributed data stores rely on optimistic replication to scale and remain highly available in the face of network partitions. Managing data without coordination results in eventually consistent data stores that allow for concurrent data updates. These systems often use anti-entropy mechanisms (like Merkle trees) to detect and repair divergent data versions across nodes. However, in practice hash-based data structures are too expensive for large amounts of data and create too many false conflicts. Another aspect of eventual consistency is detecting write conflicts. Logical clocks are often used to track data causality, which is necessary to detect causally concurrent writes on the same key. However, there is a non-negligible metadata overhead per key, which also keeps growing with time, proportionally to the node churn rate. Another challenge is deleting keys while respecting causality: while the values can be deleted, per-key metadata cannot be permanently removed without coordination. We introduce a new causality management framework for eventually consistent data stores that leverages node logical clocks (Bitmapped Version Vectors) and a new key logical clock (Dotted Causal Container) to provide advantages on multiple fronts: 1) a new efficient and lightweight anti-entropy mechanism; 2) greatly reduced per-key causality metadata size; 3) accurate key deletes without permanent metadata.
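For context on the causality tracking mentioned above, the sketch below shows the classic version-vector comparison that detects causally concurrent writes; it is a simplified stand-in for illustration only, not the paper's Bitmapped Version Vectors or Dotted Causal Containers.

    # Classic version vectors: one counter per replica node.
    def compare(vv_a, vv_b):
        """Return 'a<=b', 'b<=a', 'equal', or 'concurrent'."""
        nodes = set(vv_a) | set(vv_b)
        a_le_b = all(vv_a.get(n, 0) <= vv_b.get(n, 0) for n in nodes)
        b_le_a = all(vv_b.get(n, 0) <= vv_a.get(n, 0) for n in nodes)
        if a_le_b and b_le_a:
            return "equal"
        if a_le_b:
            return "a<=b"
        if b_le_a:
            return "b<=a"
        return "concurrent"   # causally concurrent: a write conflict

    # Two replicas update the same key without coordination:
    print(compare({"n1": 2, "n2": 1}, {"n1": 1, "n2": 3}))  # concurrent
    print(compare({"n1": 1}, {"n1": 2, "n2": 1}))           # a<=b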
Abstract:
Doctoral thesis in Civil Engineering.
Abstract:
The MAP-i doctoral program of the Universities of Minho, Aveiro and Porto
Abstract:
The Smart Drug Search is publicly accessible at http://sing.ei.uvigo.es/sds/. The BIOMedical Search Engine Framework is freely available for non-commercial use at https://github.com/agjacome/biomsef
Abstract:
This paper presents a methodology based on Bayesian data fusion techniques applied to non-destructive and destructive tests for the structural assessment of historical constructions. The aim of the methodology is to reduce the uncertainty of parameter estimation. The Young's modulus of granite stones was chosen as an example for the present paper. The methodology considers several levels of uncertainty, since the parameters of interest are treated as random variables with random moments. A new concept, the Trust Factor, was introduced to weight the uncertainty associated with each test result, expressed by its standard deviation, according to the reliability of each test in predicting a given parameter.
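A minimal sketch of this kind of fusion, under our own assumption (not necessarily the paper's exact formulation) that the Trust Factor scales each test's standard deviation before a standard precision-weighted Gaussian combination:

    # Gaussian fusion of test results with a per-test trust factor
    # (0 < tf <= 1, an assumed convention): lower trust inflates the
    # effective standard deviation before inverse-variance weighting.
    def fuse(tests):
        """tests: list of (mean, std, trust_factor) per test method.
        Returns the fused mean and std of the parameter estimate."""
        precision_sum = 0.0
        weighted_mean = 0.0
        for mean, std, tf in tests:
            eff_std = std / tf            # lower trust -> larger uncertainty
            w = 1.0 / eff_std**2          # precision weight
            precision_sum += w
            weighted_mean += w * mean
        fused_mean = weighted_mean / precision_sum
        fused_std = (1.0 / precision_sum) ** 0.5
        return fused_mean, fused_std

    # Hypothetical Young's modulus estimates for granite (GPa):
    # a destructive test (fully trusted) and a less reliable sonic test.
    print(fuse([(55.0, 4.0, 1.0), (48.0, 6.0, 0.7)]))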
Abstract:
Doctoral thesis in Civil Engineering
Abstract:
For any vacuum initial data set, we define a local, non-negative scalar quantity which vanishes at every point of the data hypersurface if and only if the data are Kerr initial data. Our scalar quantity depends only on the quantities used to construct the vacuum initial data set, namely the Riemannian metric defined on the initial data hypersurface and a symmetric tensor which plays the role of the second fundamental form of the embedded initial data hypersurface. The dependency is algorithmic in the sense that, given the initial data, one can compute the scalar quantity by algebraic and differential manipulations; it is thus suitable for implementation in a numerical code. The scalar could also be useful in studies of the non-linear stability of the Kerr solution, because it serves to measure the deviation of a vacuum initial data set from Kerr initial data in a local and algorithmic way.
Abstract:
In this work, hafnium aluminum oxide (HfAlO) thin films were deposited by the ion beam sputtering deposition technique on Si substrates. The presence of oxygen vacancies in the HfAlOx layer deposited in an oxygen-deficient environment is evidenced by the photoluminescence spectra. Furthermore, HfAlO (oxygen-rich)/HfAlOx (oxygen-poor) bilayer structures exhibit multilevel resistive switching (RS), and the switching ratio becomes more prominent with increasing HfAlO layer thickness. The bilayer structure with an HfAlO/HfAlOx thickness of 30/40 nm displays the most enhanced multilevel resistive switching characteristics, where the high resistance state/intermediate resistance state (IRS) and IRS/low resistance state resistance ratios are 10² and 5×10⁵, respectively. The switching mechanisms in the bilayer structures were investigated through the temperature dependence of the three resistance states. This study revealed that the multilevel RS is attributed to the coupling of ionic conduction and metallic conduction, the former associated with the formation and rupture of conductive filaments related to oxygen vacancies and the latter with the formation of a metallic filament. Moreover, the bilayer structures exhibit good endurance and stability over time.