3 results for Secure Authentication for Broadcast (DNP3-SAB)
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
Gossip protocols have proved to be a viable solution for setting up and managing large-scale P2P services and applications in a fully decentralised scenario. The gossip, or epidemic, communication scheme relies heavily on stochastic behaviour and is the fundamental idea behind many large-scale P2P protocols. It provides many remarkable features, such as scalability, robustness to failures, emergent load-balancing capabilities, fast spreading, and redundancy of information. In some sense, these services and protocols mimic the behaviour of natural systems in order to achieve their goals. The key observation of this work is that the remarkable properties of gossip hold only when all participants follow the rules dictated by the protocols. If one or more malicious nodes join the network and start cheating according to some strategy, the result can be catastrophic. To study how serious the threat posed by malicious nodes can be, and what can be done to prevent attackers from cheating, we focus on a general attack model aimed at defeating a key service in gossip overlay networks, the Peer Sampling Service [JGKvS04]. We also address the problem of protecting against forged information exchanged in gossip services. We propose a solution technique for each problem; both techniques are general enough to be applied to distinct service implementations. Like gossip protocols themselves, our solutions are based on stochastic behaviour and are fully decentralised. In addition, each technique's behaviour is abstracted by a general primitive function extending the basic gossip scheme; this approach allows our solutions to be adopted in different scenarios with minimal changes. We provide an extensive experimental evaluation to support the effectiveness of our techniques. In essence, these techniques are intended as building blocks, or P2P architecture guidelines, for building more resilient and more secure P2P services.
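To make the gossip scheme concrete, the following is a minimal sketch of one round of gossip-based peer sampling in Python: each node keeps a small partial view of the overlay, stochastically picks a peer from it, and the two merge their views in a push-pull exchange. The view size, merge rule, and all names are illustrative assumptions, not the actual protocol of [JGKvS04] nor the dissertation's defence techniques.

```python
import random

VIEW_SIZE = 4  # assumed maximum number of neighbours each node keeps

class Node:
    def __init__(self, node_id, initial_view):
        self.node_id = node_id
        self.view = list(initial_view)  # partial view: a small sample of peer IDs

    def select_peer(self):
        # Stochastic peer selection: pick a random neighbour to gossip with.
        return random.choice(self.view)

    def merge(self, received_ids):
        # Merge received descriptors with the local view, drop duplicates
        # and self, then truncate back to the fixed view size.
        candidates = [i for i in set(self.view + received_ids) if i != self.node_id]
        random.shuffle(candidates)
        self.view = candidates[:VIEW_SIZE]

def gossip_round(a, b):
    # Push-pull exchange: both nodes swap their views and merge them.
    sent_by_a, sent_by_b = list(a.view), list(b.view)
    a.merge(sent_by_b + [b.node_id])
    b.merge(sent_by_a + [a.node_id])

# Usage: repeated rounds continually reshuffle the partial views.
nodes = {1: Node(1, [2, 3]), 2: Node(2, [1, 3]), 3: Node(3, [1, 2])}
for _ in range(5):
    initiator = random.choice(list(nodes.values()))
    partner = nodes[initiator.select_peer()]
    gossip_round(initiator, partner)
print(nodes[1].view, nodes[2].view, nodes[3].view)
```

The attack model studied in the dissertation targets exactly this kind of exchange: a malicious node that answers with forged or biased views can poison the partial views of honest nodes over successive rounds.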
Abstract:
The idea of balancing the resources spent in the acquisition and encoding of natural signals against their intrinsic information content has driven nearly a decade of research under the name of compressed sensing. In this doctoral dissertation we develop some extensions and improvements upon the foundations of this technique, by modifying the random sensing matrices onto which the signals of interest are projected in order to achieve different objectives. Firstly, we propose two methods for adapting sensing matrix ensembles to the second-order moments of natural signals. These techniques leverage the maximisation of different proxies for the quantity of information acquired by compressed sensing, and are efficiently applied to the encoding of electrocardiographic traces with minimum-complexity digital hardware. Secondly, we focus on the possibility of using compressed sensing as a method to provide a partial, yet cryptanalysis-resistant, form of encryption; in this context, we show how a random matrix generation strategy with a controlled amount of perturbations can be used to distinguish between multiple user classes with different quality of access to the encrypted information content. Finally, we explore the application of compressed sensing to the design of a multispectral imager, by implementing an optical scheme that entails a coded aperture array and Fabry-Pérot spectral filters. The signal recoveries obtained by processing real-world measurements show promising results and leave room for improvement in the sensing matrix calibration problem of the devised imager.
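For readers unfamiliar with the baseline the dissertation builds on, here is a minimal sketch of standard compressed sensing in Python: a k-sparse signal is acquired as y = A x through a random Gaussian sensing matrix A with fewer rows than columns, and recovered with Orthogonal Matching Pursuit. The dimensions, the plain Gaussian ensemble, and the OMP stopping rule are illustrative assumptions, not the adapted or perturbed matrix designs proposed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 128, 48, 5          # signal length, measurements, sparsity

# Ground-truth k-sparse signal.
x = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x[support] = rng.standard_normal(k)

A = rng.standard_normal((m, n)) / np.sqrt(m)  # random sensing matrix
y = A @ x                                     # compressive measurements

def omp(A, y, k):
    # Greedily pick the column most correlated with the residual,
    # then re-fit by least squares on the selected support.
    residual, idx = y.copy(), []
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
        residual = y - A[:, idx] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[idx] = coef
    return x_hat

x_hat = omp(A, y, k)
print("relative error:", np.linalg.norm(x - x_hat) / np.linalg.norm(x))
```

The adaptations described in the abstract act on how A is drawn: shaping its ensemble to the signals' second-order statistics to acquire more information per measurement, or perturbing its generation in a controlled way so that different user classes recover the signal at different quality levels.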
Abstract:
The ever-growing interest in scientific techniques able to characterise the materials of a painting and retrace the steps of its execution has made them widely accepted in its investigation. This research discusses issues emerging from attribution and authentication studies and proposes best practice for the characterisation of materials and techniques, favouring the contextualisation of the results in an integrated approach; the work aims to systematically classify paintings into categories that aid the examination of objects. A first grouping of paintings is based on the information initially available about them, identifying four categories. A focus of this study is the examination of case studies, spanning from the 16th to the 20th century, to evaluate and validate the different protocols associated with each category, to highlight problems arising from the paintings, and to explain the advantages and limits of the approach. The research methodology incorporates a combined set of scientific techniques (non-invasive, such as technical imaging and XRF; micro-invasive, such as optical microscopy, SEM-EDS, FTIR, Raman microscopy and, in one case, radiocarbon dating) to answer these questions and, where necessary for the classification, to characterise the materials of the paintings exhaustively. The creation of, and contribution to, shared technical databases covering various artists and their evolution over time is an objective tool that benefits this kind of study. Close collaboration among different professionals is an essential aspect of this research for studying a painting comprehensively, as the integration of stylistic, documentary and provenance studies corroborates the scientific findings and helps in the successful contextualisation of the results and the reconstruction of the history of the object.