5 results for log server normativa garante privacy
at Universidade do Minho
Abstract:
This paper presents the findings of an experimental campaign conducted to investigate the seismic behaviour of log houses. A two-storey log house designed by the Portuguese company Rusticasa® was subjected to a series of shaking table tests at LNEC, Lisbon, Portugal. The paper describes the geometry and construction of the house and all aspects of the testing procedure, namely the pre-design, the setup, the instrumentation and the testing process itself. The shaking table tests were carried out with a scaled spectrum of the Montenegro (1979) earthquake at increasing levels of PGA, starting from 0.07g, moving on to 0.28g and finally 0.5g. The log house did not suffer any major damage and remained in working condition throughout the entire process. A preliminary analysis of the overall behaviour of the log house is also discussed.
Abstract:
The dearth of knowledge on the load resistance mechanisms of log houses, and the need to develop numerical models capable of simulating the actual behaviour of these structures, have pushed efforts to research the relatively unexplored aspects of log house construction. The aim of the research presented in this paper is to build a working model of a log house that will contribute toward understanding the behaviour of these structures under seismic loading. The paper presents the results of a series of shaking table tests conducted on a log house and goes on to develop a numerical model of the tested house. The finite element model has been created in SAP2000 and validated against the experimental results. The modelling assumptions and the difficulties involved in the process are described and, finally, a discussion of the effects of varying different physical and material parameters on the results yielded by the model is drawn up.
Abstract:
Childhood protection is a subject of high value to society, but child abuse cases are difficult to identify. The process from suspicion to accusation is very difficult to complete, as it requires very strong evidence. Typically, health care services deal with these cases from the beginning, where there is evidence based on diagnosis, but that evidence is not enough to support an accusation. Moreover, this subject is highly sensitive because there are legal aspects to deal with, such as patient privacy, paternity issues and medical confidentiality, among others. We propose a child abuse critical-knowledge monitoring system model that addresses this problem. This decision support system draws on multiple scientific domains: the capture of tokens from clinical documents from multiple sources; a topic model approach to identify the topics of the documents; and knowledge management through ontologies that encode critical-knowledge concepts and relations, such as symptoms and behaviours, among other evidence, in order to match them with the topics inferred from the clinical documents and then alert and log when clinical evidence is present. Based on these alerts, clinical personnel can analyse the situation and take the appropriate action.
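The final matching-and-alerting step described in this abstract can be illustrated with a toy sketch. This is not the paper's system: the concept list, function names and threshold are invented for illustration, and a real implementation would match topic-model output against ontology relations rather than raw tokens.

```python
# Hypothetical sketch of the alert step: match tokens extracted from a
# clinical note against a (made-up) set of ontology evidence concepts
# and flag the note for clinical review when enough evidence co-occurs.

EVIDENCE_CONCEPTS = {            # invented stand-in for the ontology
    "bruising": "physical-symptom",
    "fracture": "physical-symptom",
    "withdrawn": "behaviour",
    "fearful": "behaviour",
}

def alert_on_evidence(document: str, threshold: int = 2):
    """Return ('ALERT', hits) when at least `threshold` distinct
    evidence concepts appear in the document, else ('OK', hits)."""
    tokens = document.lower().split()
    hits = {t: EVIDENCE_CONCEPTS[t] for t in tokens if t in EVIDENCE_CONCEPTS}
    if len(hits) >= threshold:
        return ("ALERT", hits)   # would be logged for clinical personnel
    return ("OK", hits)

status, evidence = alert_on_evidence(
    "child presents unexplained bruising and seems fearful")
print(status, evidence)
```

In the described system the alert would be logged together with the matched evidence so that clinical staff can review the case, rather than triggering any automatic accusation.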
Abstract:
Large-scale distributed data stores rely on optimistic replication to scale and remain highly available in the face of network partitions. Managing data without coordination results in eventually consistent data stores that allow for concurrent data updates. These systems often use anti-entropy mechanisms (like Merkle trees) to detect and repair divergent data versions across nodes. However, in practice hash-based data structures are too expensive for large amounts of data and create too many false conflicts. Another aspect of eventual consistency is detecting write conflicts. Logical clocks are often used to track data causality, which is necessary to detect causally concurrent writes on the same key. However, there is a non-negligible metadata overhead per key, which also keeps growing over time, proportionally to the node churn rate. Another challenge is deleting keys while respecting causality: while the values can be deleted, per-key metadata cannot be permanently removed without coordination. We introduce a new causality management framework for eventually consistent data stores that leverages node logical clocks (Bitmapped Version Vectors) and a new key logical clock (Dotted Causal Container) to provide advantages on multiple fronts: 1) a new efficient and lightweight anti-entropy mechanism; 2) greatly reduced per-key causality metadata size; 3) accurate key deletes without permanent metadata.
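The causality primitive this abstract builds on can be sketched briefly. The following is a minimal illustration of plain version-vector comparison, the mechanism the paper's node and key logical clocks refine and compress; it is not the paper's implementation, and the node names are invented.

```python
# Toy sketch: classify two versions of the same key, each tagged with a
# version vector {node_id: counter}, as causally ordered or concurrent.
# Concurrent versions are the write conflicts the abstract refers to.

def compare(vv_a, vv_b):
    """Return 'a<=b', 'b<=a', 'equal', or 'concurrent'."""
    nodes = set(vv_a) | set(vv_b)
    a_le_b = all(vv_a.get(n, 0) <= vv_b.get(n, 0) for n in nodes)
    b_le_a = all(vv_b.get(n, 0) <= vv_a.get(n, 0) for n in nodes)
    if a_le_b and b_le_a:
        return "equal"
    if a_le_b:
        return "a<=b"
    if b_le_a:
        return "b<=a"
    return "concurrent"  # neither dominates: a genuine write conflict

print(compare({"n1": 2, "n2": 1}, {"n1": 2, "n2": 3}))  # a<=b
print(compare({"n1": 3}, {"n2": 1}))                    # concurrent
```

Note that each key carries one counter per node that ever wrote it, which is exactly the per-key metadata growth the paper's Dotted Causal Containers aim to reduce.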
Abstract:
We study the problem of privacy-preserving proofs on authenticated data, where a party receives data from a trusted source and is requested to prove computations over the data to third parties in a correct and private way, i.e., the third party learns no information on the data but is still assured that the claimed proof is valid. Our work particularly focuses on the challenging requirement that the third party should be able to verify the validity with respect to the specific data authenticated by the source — even without having access to that source. This problem is motivated by various scenarios emerging from several application areas such as wearable computing, smart metering, or general business-to-business interactions. Furthermore, these applications also demand any meaningful solution to satisfy additional properties related to usability and scalability. In this paper, we formalize the above three-party model, discuss concrete application scenarios, and then we design, build, and evaluate ADSNARK, a nearly practical system for proving arbitrary computations over authenticated data in a privacy-preserving manner. ADSNARK improves significantly over state-of-the-art solutions for this model. For instance, compared to corresponding solutions based on Pinocchio (Oakland’13), ADSNARK achieves up to 25× improvement in proof-computation time and a 20× reduction in prover storage space.