Resumo:
Background: An estimated 10–20 million individuals are infected with the retrovirus human T-cell leukemia virus type 1 (HTLV-1). While the majority of these individuals remain asymptomatic, 0.3–4% develop a neurodegenerative inflammatory disease, termed HTLV-1-associated myelopathy/tropical spastic paraparesis (HAM/TSP). HAM/TSP results in the progressive demyelination of the central nervous system and is a differential diagnosis of multiple sclerosis (MS). The etiology of HAM/TSP is unclear, but evidence points to a role for CNS-infiltrating T-cells in pathogenesis. Recently, the HTLV-1 Tax protein has been shown to induce transcription of the human endogenous retrovirus (HERV) families W, H and K. Intriguingly, numerous studies have implicated these same HERV families in MS, though this association remains controversial. Results: Here, we explore the hypothesis that HTLV-1 infection results in the induction of HERV antigen expression and the elicitation of HERV-specific T-cell responses which, in turn, may be reactive against neurons and other tissues. PBMC from 15 HTLV-1-infected subjects, 5 of whom presented with HAM/TSP, were comprehensively screened for T-cell responses to overlapping peptides spanning HERV-K(HML-2) Gag and Env. In addition, we screened for responses to peptides derived from diverse HERV families, selected on the basis of predicted binding of predicted optimal epitopes. We observed a lack of responses to each of these peptide sets. Conclusions: Thus, although the limited scope of our screening prevents us from conclusively disproving our hypothesis, the current study does not provide data supporting a role for HERV-specific T-cell responses in HTLV-1-associated immunopathology.
Resumo:
Agent Communication Languages (ACLs) have been developed to provide a way for agents to communicate with each other, supporting cooperation in Multi-Agent Systems (MASs). In the past few years many ACLs have been proposed for Multi-Agent Systems, such as KQML and FIPA-ACL. The goal of these languages is to support high-level, human-like communication among agents, exploiting Knowledge Level features rather than symbol-level ones. Adopting these ACLs, and mainly the FIPA-ACL specifications, many agent platforms and prototypes have been developed. Despite these efforts, an important issue in the research on ACLs is still open: how these languages should deal, at the Knowledge Level, with possible failures of agents. Indeed, the notion of Knowledge Level cannot be straightforwardly extended to a distributed framework such as MASs, because problems concerning communication and concurrency may arise when several Knowledge Level agents interact (for example, deadlock or starvation). The main contribution of this Thesis is the design and implementation of NOWHERE, a platform to support Knowledge Level agents on the Web. NOWHERE exploits an advanced Agent Communication Language, FT-ACL, which provides high-level fault-tolerant communication primitives and satisfies a set of well-defined Knowledge Level programming requirements. NOWHERE is well integrated with current technologies, for example providing full integration with Web services. Supporting different middleware for message delivery, it can be adapted to various scenarios. In this Thesis we present the design and implementation of the architecture, together with a discussion of the most interesting details and a comparison with other emerging agent platforms. We also present several case studies in which we discuss the benefits of programming agents using the NOWHERE architecture, comparing the results with other solutions. Finally, the complete source code of the basic examples can be found in the appendix.
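The abstract does not show FT-ACL's primitives themselves; as a rough illustration of the general idea of fault-tolerant agent communication, the following sketch (all names hypothetical, not the FT-ACL API) queries several equivalent agents and survives the failure of any one of them:

```python
import queue
import threading

def ft_ask(agents, question, timeout=0.5):
    """Fault-tolerant 'ask' sketch: query a set of equivalent agents and
    return the first answer that arrives, so one crashed agent cannot
    block the conversation."""
    answers = queue.Queue()
    def worker(agent):
        try:
            answers.put(agent(question))
        except Exception:
            pass                      # a failed agent simply never answers
    for agent in agents:
        threading.Thread(target=worker, args=(agent,), daemon=True).start()
    try:
        return answers.get(timeout=timeout)
    except queue.Empty:
        return None                   # every agent failed: report failure

crashed = lambda q: (_ for _ in ()).throw(RuntimeError("agent down"))
healthy = lambda q: q.upper()
print(ft_ask([crashed, healthy], "ping"))  # "PING" despite the crash
```

The key design point, shared with Knowledge Level approaches, is that the asker never observes the failure mode of an individual agent, only the success or failure of the conversation as a whole.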
Resumo:
The miniaturization race in the hardware industry, aiming at a continuous increase of transistor density on a die, no longer brings corresponding application performance improvements. One of the most promising alternatives is to exploit the heterogeneous nature of common applications in hardware. Supported by reconfigurable computation, which has already proved its efficiency in accelerating data-intensive applications, this concept promises a breakthrough in contemporary technology development. Memory organization in such heterogeneous reconfigurable architectures becomes critical. Two primary aspects introduce a sophisticated trade-off. On the one hand, a memory subsystem should provide a well-organized distributed data structure and guarantee the required data bandwidth. On the other hand, it should hide the heterogeneous hardware structure from the end user, in order to support feasible high-level programmability of the system. This thesis explores heterogeneous reconfigurable hardware architectures and presents possible solutions to the problem of memory organization and data structure. Using the example of the MORPHEUS heterogeneous platform, the discussion follows the complete design cycle, from decision making and justification to hardware realization. Particular emphasis is placed on methods to support high system performance, meet application requirements, and provide a user-friendly programmer interface. As a result, the research introduces a complete heterogeneous platform enhanced with a hierarchical memory organization, which addresses its task by separating computation from communication, providing the reconfigurable engines with computation and configuration data, and unifying the heterogeneous computational devices by means of local storage buffers.
It is distinguished from related solutions by its distributed data-flow organization, mechanisms specifically engineered to operate on data in local domains, a communication infrastructure based on a Network-on-Chip, and thorough methods to prevent computation and communication stalls. In addition, a novel technique to accelerate memory access was developed and implemented.
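One common way to realize the "separating computation from communication" principle through local storage buffers is double buffering: while a compute engine works on one buffer, the next block of data is staged into the other. The sketch below models that alternation in plain Python (illustrative only, not the MORPHEUS implementation; in hardware the fetch and the compute proceed in parallel):

```python
def process(buf):
    """Stand-in for a reconfigurable engine kernel."""
    return [x * 2 for x in buf]

def stream_compute(chunks):
    """Double-buffering sketch: alternate between two local buffers so the
    'communication' (fetching the next chunk) and the 'computation'
    (processing the current chunk) never touch the same buffer."""
    results = []
    buffers = [None, None]
    buffers[0] = next(chunks, None)                # prefetch the first chunk
    i = 0
    while buffers[i % 2] is not None:
        buffers[(i + 1) % 2] = next(chunks, None)  # stage the next chunk
        results.extend(process(buffers[i % 2]))    # process the current one
        i += 1
    return results

data = iter([[1, 2], [3, 4], [5]])
print(stream_compute(data))  # → [2, 4, 6, 8, 10]
```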
Resumo:
The scaling down of transistor technology allows microelectronics manufacturers such as Intel and IBM to build ever more sophisticated systems on a single microchip. Classical interconnection solutions based on shared buses or direct connections between the modules of the chip are becoming obsolete, as they struggle to sustain the increasingly tight bandwidth and latency constraints that these systems demand. The most promising solution for future chip interconnects are Networks on Chip (NoC). NoCs are networks composed of routers and channels used to interconnect the different components installed on a single microchip. Examples of advanced processors based on NoC interconnects are the IBM Cell processor, composed of eight CPUs and installed in the Sony PlayStation 3, and the Intel Teraflops project, composed of 80 independent (simple) microprocessors. On-chip integration is becoming popular not only in the Chip Multi Processor (CMP) research area but also in the wider and more heterogeneous world of Systems on Chip (SoC). SoCs comprise all the electronic devices that surround us, such as cell phones, smartphones, home embedded systems, automotive systems, set-top boxes, etc. SoC manufacturers such as ST Microelectronics, Samsung and Philips, as well as universities such as the University of Bologna, M.I.T. and Berkeley, are all proposing proprietary frameworks based on NoC interconnects. These frameworks help engineers in the switch of design methodology and speed up the development of new NoC-based systems on chip. In this Thesis we propose an introduction to CMP and SoC interconnection networks. Then, focusing on SoC systems, we propose: • a detailed simulation-based analysis of the Spidergon NoC, an ST Microelectronics solution for SoC interconnects. The Spidergon NoC differs from many classical solutions inherited from the parallel computing world. Here we propose a detailed analysis of this NoC topology and its routing algorithms.
Furthermore, we propose a new routing algorithm designed to optimize the use of the resources of the network while also increasing its performance; • a methodology flow, based on modified publicly available tools, that can be used to design, model and analyze any kind of System on Chip; • a detailed analysis of an ST Microelectronics proprietary transport-level protocol that the author of this Thesis helped to develop; • a comprehensive simulation-based comparison of different network interface designs proposed by the author and the researchers at the AST lab, in order to integrate shared-memory and message-passing based components on a single System on Chip; • a powerful and flexible solution to address the timing-closure exception issue in the design of synchronous Networks on Chip. Our solution is based on relay-station repeaters and makes it possible to reduce the power and area demands of NoC interconnects while also reducing their buffer needs; • a solution that simplifies the design of NoCs while also increasing their performance and reducing their power and area consumption. We propose to replace complex and slow virtual-channel-based routers with multiple, flexible, small multi-plane ones. This solution allows us to reduce the area and power dissipation of any NoC while also increasing its performance, especially when resources are reduced. This Thesis has been written in collaboration with the Advanced System Technology laboratory in Grenoble, France, and the Computer Science Department at Columbia University in the City of New York.
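The Spidergon topology discussed above, a ring of n nodes with an extra "across" link from each node to the diametrically opposite one, admits very simple deterministic routing. The sketch below illustrates an across-first rule (a simplified textbook illustration, not the routing algorithm proposed in the thesis):

```python
def spidergon_next_hop(src, dst, n):
    """One step of across-first routing on a Spidergon NoC of n nodes
    (n a multiple of 4): ring links to i±1, across link to i + n/2."""
    if src == dst:
        return src
    d = (dst - src) % n
    if d <= n // 4:
        return (src + 1) % n          # short clockwise ring hop
    if d >= 3 * n // 4:
        return (src - 1) % n          # short counterclockwise ring hop
    return (src + n // 2) % n         # far away: take the across link

def route(src, dst, n):
    path = [src]
    while path[-1] != dst:
        path.append(spidergon_next_hop(path[-1], dst, n))
    return path

print(route(0, 7, 16))  # → [0, 8, 7]: across link first, then one ring hop
```

Because the across link halves the worst-case ring distance, the hop count stays O(n/4) with only three ports per router, which is the property that makes the topology attractive for SoC interconnects.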
Resumo:
This dissertation examines the challenges and limits that graph-analysis algorithms encounter on distributed architectures built from personal computers. In particular, it analyses the behaviour of the PageRank algorithm as implemented in a popular C++ library for distributed graph analysis, the Parallel Boost Graph Library (Parallel BGL). The results presented here show that the Bulk Synchronous Parallel programming model is ill-suited to an efficient implementation of PageRank on clusters of personal computers. The analysed implementation in fact exhibited negative scalability: the algorithm's execution time grows linearly with the number of processors. These results were obtained by running the Parallel BGL PageRank on a cluster of 43 dual-core PCs with 2 GB of RAM each, using several graphs chosen to ease the identification of the variables that influence scalability. Graphs built from different models gave different results, showing a relationship between the clustering coefficient and the slope of the line representing time as a function of the number of processors. For example, Erdős–Rényi graphs, which have a low clustering coefficient, were the worst case in the PageRank tests, while Small-World graphs, which have a high clustering coefficient, were the best case. Graph size also had a particularly interesting influence on execution time: the ratio between the number of nodes and the number of edges was shown to determine the total time.
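The per-superstep structure that makes BSP-style PageRank communication-heavy is visible even in a single-machine sketch, where each iteration scatters rank along every edge and then synchronizes at a global barrier (illustrative only; the thesis measured the Parallel BGL's distributed implementation):

```python
def pagerank(graph, damping=0.85, iters=50):
    """BSP-style PageRank: each iteration is one superstep in which every
    node sends rank to its successors, then all nodes synchronize."""
    n = len(graph)
    rank = {v: 1.0 / n for v in graph}
    for _ in range(iters):
        contrib = {v: 0.0 for v in graph}
        for v, succs in graph.items():          # computation + messages
            for w in succs:
                contrib[w] += rank[v] / len(succs)
        # global barrier: ranks change only once all messages are in
        rank = {v: (1 - damping) / n + damping * contrib[v] for v in graph}
    return rank

g = {1: [2], 2: [1, 3], 3: [1]}
r = pagerank(g)
print(max(r, key=r.get))  # node 1 collects the most rank
```

In the distributed setting, the inner scatter loop turns into per-edge network messages, which is why graphs with many inter-machine edges (low clustering) represented the worst case in the measurements above.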
Resumo:
3D video-fluoroscopy is an accurate but cumbersome technique for estimating natural or prosthetic human joint kinematics. This dissertation proposes innovative methodologies to improve the reliability and usability of 3D fluoroscopic analysis. Being based on direct radiographic imaging of the joint, and avoiding the soft-tissue artefacts that limit the accuracy of skin-marker-based techniques, fluoroscopic analysis has a potential accuracy of the order of mm/deg or better. It can provide fundamental information for clinical and methodological applications; however, notwithstanding the number of methodological protocols proposed in the literature, time-consuming user interaction is required to obtain consistent results. This user-dependency has prevented a reliable quantification of the actual accuracy and precision of the methods and, consequently, slowed down translation to clinical practice. The objective of the present work was to speed up this process by introducing methodological improvements in the analysis. In the thesis, fluoroscopic analysis was characterized in depth, in order to evaluate its pros and cons and to provide reliable solutions to overcome its limitations. To this aim, an analytical approach was followed. The major sources of error were isolated with in-silico preliminary studies as: (a) geometric distortion and calibration errors, (b) 2D image and 3D model resolutions, (c) incorrect contour extraction, (d) bone model symmetries, (e) optimization algorithm limitations, (f) user errors. The effect of each criticality was quantified and verified with an in-vivo preliminary study on the elbow joint. The dominant source of error was identified as the limited extent of the convergence domain of the local optimization algorithms, which forced the user to manually specify the starting pose for the estimation process.
To solve this problem, two different approaches were followed: to increase the optimal-pose convergence basin, the local approach used sequential alignment of the 6 degrees of freedom in order of sensitivity, or a geometrical feature-based estimation of the initial conditions for the optimization; the global approach used an unsupervised memetic algorithm to optimally explore the search domain. The performance of the technique was evaluated with a series of in-silico studies and validated in-vitro with a phantom-based comparison against a radiostereometric gold standard. The accuracy of the method is joint-dependent; for the intact knee joint, the new unsupervised algorithm guaranteed a maximum error lower than 0.5 mm for in-plane translations, 10 mm for out-of-plane translation, and 3 deg for rotations in a mono-planar setup, and lower than 0.5 mm for translations and 1 deg for rotations in a bi-planar setup. The bi-planar setup is best suited when accurate results are needed, such as for methodological research studies. The mono-planar analysis may be sufficient for clinical applications where analysis time and cost are an issue. A further reduction of user interaction was obtained for prosthetic joint kinematics: a mixed region-growing and level-set segmentation method was proposed, which halved the analysis time by delegating the computational burden to the machine. In-silico and in-vivo studies demonstrated that the reliability of the new semiautomatic method was comparable to a user-defined manual gold standard. The improved fluoroscopic analysis was finally applied in a first in-vivo methodological study on foot kinematics. Preliminary evaluations showed that the presented methodology represents a feasible gold standard for the validation of skin-marker-based foot kinematics protocols.
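The sequential per-degree-of-freedom alignment used by the local approach can be illustrated with a generic coordinate-descent sketch: one parameter is adjusted at a time, most sensitive first, and the step shrinks when no move helps (a toy 2-DOF cost function stands in for the actual image-matching metric):

```python
def sequential_align(cost, pose, order, step=1.0, shrink=0.5, rounds=20):
    """Coordinate-wise alignment sketch: adjust one degree of freedom at a
    time, in the given sensitivity order, shrinking the step when stuck."""
    pose = list(pose)
    for _ in range(rounds):
        improved = False
        for i in order:                       # DOFs in order of sensitivity
            for delta in (step, -step):
                trial = list(pose)
                trial[i] += delta
                if cost(trial) < cost(pose):
                    pose, improved = trial, True
                    break
        if not improved:
            step *= shrink                    # refine the search locally
    return pose

# toy cost with its minimum at (3, -2); in a real setup this would be the
# dissimilarity between the projected 3D model and the fluoroscopic image
cost = lambda p: (p[0] - 3) ** 2 + (p[1] + 2) ** 2
print(sequential_align(cost, [0.0, 0.0], order=[0, 1]))  # → [3.0, -2.0]
```

Like the local optimizers discussed above, this scheme only converges when started inside the basin of the optimum, which is exactly the limitation the memetic global approach was introduced to remove.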
Resumo:
Beamforming entails the joint processing of multiple signals received or transmitted by an array of antennas. This thesis addresses the implementation of beamforming in two distinct systems, namely a distributed network of independent sensors and a broad-band multi-beam satellite network. With the rising popularity of wireless sensors, scientists are taking advantage of the flexibility of these devices, which come with very low implementation costs. Simplicity, however, is intertwined with scarce power resources, which must be carefully rationed to ensure successful measurement campaigns throughout the whole duration of the application. In this scenario, distributed beamforming is a cooperative communication technique that allows nodes in the network to emulate a virtual antenna array, seeking power gains in the order of the size of the network itself when required to deliver a common message signal to the receiver. To achieve a desired beamforming configuration, however, all nodes in the network must agree upon the same phase reference, which is challenging in a distributed set-up where all devices are independent. The first part of this thesis presents new algorithms for phase alignment, which prove to be more energy efficient than existing solutions. With the ever-growing demand for broad-band connectivity, satellite systems have great potential to guarantee service where terrestrial systems cannot reach. In order to satisfy the constantly increasing demand for throughput, satellites are equipped with multi-fed reflector antennas to resolve spatially separated signals. However, increasing the number of feeds on the payload burdens the link between the satellite and the gateway with an extensive amount of signaling, and may call for much more expensive multiple-gateway infrastructures.
This thesis focuses on an on-board non-adaptive signal processing scheme denoted as Coarse Beamforming, whose objective is to reduce the communication load on the link between the ground station and space segment.
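A well-known family of distributed phase-alignment schemes relies on one-bit feedback: every node randomly perturbs its carrier phase, and the receiver broadcasts a single bit saying whether the coherently combined signal improved, so nodes keep or discard their perturbations. The sketch below illustrates that general idea (it is not the thesis' proposed algorithm):

```python
import cmath
import random

def coherent_gain(phases):
    """Received amplitude of the combined signal, normalized to [0, 1]."""
    return abs(sum(cmath.exp(1j * p) for p in phases)) / len(phases)

def one_bit_phase_sync(n=20, steps=400, sigma=0.3, seed=1):
    """One-bit-feedback alignment sketch: nodes apply random phase
    perturbations and keep them only when the receiver reports an
    improved combined gain."""
    random.seed(seed)
    phases = [random.uniform(0, 2 * cmath.pi) for _ in range(n)]
    start = best = coherent_gain(phases)
    for _ in range(steps):
        trial = [p + random.gauss(0, sigma) for p in phases]
        g = coherent_gain(trial)
        if g > best:                  # the receiver's one-bit feedback
            phases, best = trial, g
    return start, best

start, final = one_bit_phase_sync()
print(round(start, 2), round(final, 2))  # the gain never decreases
```

The appeal for energy-constrained sensors is that the feedback channel carries a single bit per step, with no explicit phase measurement at any node; the cost is slow, stochastic convergence, which is what more energy-efficient alignment algorithms aim to improve.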
Resumo:
Mainstream hardware is becoming parallel, heterogeneous, and distributed on every desk, in every home and in every pocket. As a consequence, in recent years software has been taking an epochal turn toward concurrency, distribution and interaction, pushed by the evolution of hardware architectures and by growing network availability. This calls for introducing further abstraction layers on top of those provided by classical mainstream programming paradigms, to tackle more effectively the new complexities that developers face in everyday programming. A convergence is recognizable in the mainstream toward the adoption of the actor paradigm as a means of uniting object-oriented programming and concurrency. Nevertheless, we argue that the actor paradigm can only be considered a good starting point for a more comprehensive response to such a fundamental and radical change in software development. Accordingly, the main objective of this thesis is to propose Agent-Oriented Programming (AOP) as a high-level general-purpose programming paradigm, a natural evolution of actors and objects, introducing a further level of human-inspired concepts for programming software systems, meant to simplify the design and programming of concurrent, distributed, reactive/interactive programs. To this end, in the dissertation we first construct the required background by studying the state of the art of both actor-oriented and agent-oriented programming, and then focus on the engineering of integrated programming technologies for developing agent-based systems in their classical application domains: artificial intelligence and distributed artificial intelligence. We then shift the perspective from the development of intelligent software systems toward general-purpose software development.
Drawing on the expertise gained during this background work, we introduce a general-purpose programming language named simpAL, which is rooted in general principles and practices of software development and, at the same time, provides an agent-oriented level of abstraction for the engineering of general-purpose software systems.
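The actor model that the thesis takes as its starting point reduces to three ingredients: private state, a mailbox, and strictly sequential processing of asynchronous messages, with no shared memory and no locks. A minimal sketch:

```python
import queue
import threading

class CounterActor(threading.Thread):
    """Minimal actor sketch: state is private and changes only in response
    to messages drained one at a time from the mailbox."""
    def __init__(self):
        super().__init__(daemon=True)
        self.mailbox = queue.Queue()
        self.count = 0

    def send(self, msg):
        """Asynchronous send: enqueue and return immediately."""
        self.mailbox.put(msg)

    def run(self):
        while True:
            msg = self.mailbox.get()          # process messages sequentially
            if msg == "stop":
                break
            self.count += msg                 # no locks needed: one consumer

a = CounterActor()
a.start()
for i in range(1, 4):
    a.send(i)
a.send("stop")
a.join()
print(a.count)  # → 6
```

Agent-oriented programming as proposed above layers additional human-inspired concepts (goals, plans, observable state) on top of this messaging substrate, rather than replacing it.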
Resumo:
It is currently widely accepted that the understanding of complex cell functions depends on an integrated network-theoretical approach, not on an isolated view of the different molecular agents. The aim of this thesis was to examine topological properties that mirror known biological aspects by depicting the human protein network with methods from graph and network theory. The presented network is a partial human interactome of 9222 proteins and 36324 interactions, consisting of single interactions reliably extracted from peer-reviewed scientific publications. In general, one can focus on intra- or intermodular characteristics, where a functional module is defined as "a discrete entity whose function is separable from those of other modules". It is found that the presented human network is also scale-free and hierarchically organised, as shown for yeast networks before. The interactome also exhibits proteins with high betweenness and low connectivity, which are biologically analyzed and interpreted here, for the first time, as shuttling proteins between organelles (e.g. ER to Golgi, internal ER protein translocation, peroxisomal import, nuclear pore import/export). As an optimisation for finding proteins that connect modules, a new method is developed here based on proteins located between highly clustered regions, rather than on highly connected regions. As a proof of principle, the Mediator complex is found in first place, the prime example of a connector complex. Focusing on intramodular aspects, the measurement of k-clique communities discriminates overlapping modules very well. Twenty of the largest identified modules are analysed in detail and annotated to known biological structures (e.g. the proteasome, the NFκB and TGF-β complexes). Additionally, two large and highly interconnected modules of signal transducer and transcription factor proteins are revealed, separated by known shuttling proteins.
These proteins also yield the highest number of redundant shortcuts (calculated via the skeleton), exhibit the highest numbers of interactions, and might constitute highly interconnected but spatially separated rich-clubs, either for signal transduction or for transcription factors. This design principle allows manifold regulatory events for signal transduction and enables a high diversity of transcription events in the nucleus with a limited set of proteins. Altogether, biological aspects are mirrored by purely topological features, leading to a new view and to new methods that assist the annotation of proteins to biological functions, structures and subcellular localisations. As the human protein network is one of the most complex networks of all, these results will be fruitful for other fields of network theory and will help in understanding complex network functions in general.
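The "high betweenness, low connectivity" signature used above to flag shuttling proteins can be computed with Brandes' algorithm. The toy sketch below builds two small clusters joined by a single two-link connector node and shows that the connector ranks first by betweenness despite its minimal degree:

```python
from collections import deque

def betweenness(adj):
    """Brandes' betweenness centrality for an unweighted, undirected graph
    given as {node: [neighbors]} (unnormalized; pairs counted twice)."""
    bc = {v: 0.0 for v in adj}
    for s in adj:
        stack, pred = [], {v: [] for v in adj}
        sigma = {v: 0 for v in adj}; sigma[s] = 1
        dist = {v: -1 for v in adj}; dist[s] = 0
        q = deque([s])
        while q:                                  # BFS shortest-path counts
            v = q.popleft(); stack.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1; q.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]; pred[w].append(v)
        delta = {v: 0.0 for v in adj}
        while stack:                              # back-propagate dependencies
            w = stack.pop()
            for v in pred[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return bc

# two tightly knit 'modules' joined only by the low-degree shuttle node 'x'
adj = {'a': 'bcx', 'b': 'ac', 'c': 'ab', 'x': 'ad',
       'd': 'efx', 'e': 'df', 'f': 'de'}
adj = {v: list(ns) for v, ns in adj.items()}
bc = betweenness(adj)
shuttle = max(bc, key=bc.get)
print(shuttle, len(adj[shuttle]))  # 'x' tops betweenness with only 2 links
```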
Resumo:
In the last few years, the vision of our connected and intelligent information society has evolved to embrace novel technological and research trends. The diffusion of ubiquitous mobile connectivity and advanced handheld portable devices amplified the importance of the Internet as the communication backbone for the consumption of services and data. The diffusion of mobile and pervasive computing devices, featuring advanced sensing technologies and processing capabilities, triggered the adoption of innovative interaction paradigms: touch-responsive surfaces, tangible interfaces and gesture or voice recognition are finally entering our homes and workplaces. We are experiencing the proliferation of smart objects and sensor networks, embedded in our daily lives and interconnected through the Internet. This ubiquitous network of always-available interconnected devices is enabling new applications and services, ranging from enhancements to home and office environments to remote healthcare assistance and the birth of the smart environment. This work presents some evolutions in the hardware and software development of embedded systems and sensor networks. Different hardware solutions are introduced, ranging from smart objects for interaction to advanced inertial sensor nodes for motion tracking, with a focus on system-level design. They are accompanied by the study of innovative data processing algorithms developed and optimized to run on board the embedded devices. Gesture recognition, orientation estimation and data reconstruction techniques for sensor networks are introduced and implemented, with the goal of maximizing the trade-off between performance and energy efficiency. Experimental results provide an evaluation of the accuracy of the presented methods and validate the efficiency of the proposed embedded systems.
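Orientation estimation on inertial sensor nodes is commonly done with lightweight fusion filters that fit embedded compute budgets. A minimal complementary-filter sketch for a single tilt angle, trusting the integrated gyroscope rate in the short term and the gravity-referenced accelerometer angle in the long term (illustrative, not the thesis' algorithm):

```python
import math

def complementary_filter(gyro, accel, dt=0.01, alpha=0.98):
    """Complementary-filter sketch for one tilt angle: blend the gyro
    integral (low drift over short times) with the accelerometer angle
    (noisy but drift-free) using a fixed weight alpha."""
    angle = 0.0
    for omega, (ax, az) in zip(gyro, accel):
        acc_angle = math.atan2(ax, az)        # gravity-referenced tilt
        angle = alpha * (angle + omega * dt) + (1 - alpha) * acc_angle
    return angle

# stationary node tilted by 0.1 rad: ideal noise-free readings
n = 2000
gyro = [0.0] * n                              # no rotation measured
accel = [(math.sin(0.1), math.cos(0.1))] * n  # gravity components
print(round(complementary_filter(gyro, accel), 3))  # converges to ≈ 0.1
```

The single multiply-accumulate per sample is what makes this class of filter attractive for the performance/energy trade-off discussed above, compared with a full Kalman filter.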
Resumo:
The question addressed by this dissertation is how the human brain builds a coherent representation of the body, and how this representation is used to recognize one's own body. Recent neuroimaging and TMS approaches have revealed hints of a distinct brain representation of the human body, as compared with other stimulus categories. Neuropsychological studies have demonstrated that body-part and self-body-part recognition are separate processes sub-served by two different, though possibly overlapping, networks within the brain. Bodily self-recognition is one aspect of our ability to distinguish between self and others, and the self/other distinction is a crucial aspect of social behaviour. This is why I conducted a series of experiments on subjects with everyday difficulties in social and emotional behaviour, such as patients with autism spectrum disorders (ASD) and patients with Parkinson's disease (PD). More specifically, I studied implicit self body/face recognition (Chapter 6) and the influence of emotional body postures on bodily self-processing in TD children as well as in ASD children (Chapter 7). I found that bodily self-recognition is present in TD and in ASD children, and that emotional body postures modulate the processing of one's own and others' bodies. Subsequently, I compared implicit and explicit bodily self-recognition in a neurodegenerative pathology, in PD patients, and found a selective deficit in implicit but not in explicit self-recognition (Chapter 8). This finding suggests that implicit and explicit bodily self-recognition are separate processes subtended by different mechanisms that can be selectively impaired. If the bodily self is crucial for the self/other distinction, the space around the body (personal space) represents the space of interaction and communication with others. When I studied this space in autism, I found that personal space regulation is impaired in ASD children (Chapter 9).
Resumo:
This work investigated cytotoxic effects and inflammatory reactions of the distal respiratory tract after nanoparticle exposure. Particular attention was also paid to the different cellular uptake routes of nanoparticles, e.g. clathrin- or caveolae-mediated endocytosis, as well as clathrin- and caveolae-independent endocytosis (with possible involvement of flotillins). Three different nanoparticles were chosen: amorphous silica (aSNP), organosiloxane (AmorSil) and poly(ethyleneimine) (PEI). All of these materials are attracting growing interest in biomedical research (drug and gene delivery). aSNPs in particular are also increasingly used in industry, and therefore represent a health risk to be taken seriously. The distal lung thus becomes an attractive target for pharmaceutical drug delivery using nanoparticles as vehicles, but at the same time offers a point of attack for harmful nanomaterials. For this reason, the health risks, as well as the fate of cellularly internalized NPs, should be investigated carefully. In vivo studies of the alveolar–capillary barrier are rather impractical; therefore, this work used a coculture model that mimics the alveolar–capillary barrier in vivo. The model consists of a human lung epithelial cell type (e.g. NCI H441) and a human microvascular endothelial cell type (e.g. ISO-HAS-1), seeded on opposite sides of a Transwell filter, where they form a tight barrier. NP interaction with cells in coculture was compared with that in conventional monoculture, in which cells are seeded 24 h before the experiment.
This study shows that not only the polarized character of the cells in coculture but also the close proximity of epithelial and endothelial cells is decisive for the effects caused by aSNPs. With regard to inflammatory markers (sICAM, IL-6 and IL-8 release), the coculture responds more sensitively to aSNPs than the conventional monoculture, whereas at the cytotoxicity level (LDH release) the epithelial cells in coculture reacted less sensitively to aSNPs than cells in monoculture. Uptake studies showed that the epithelial cells in coculture take up markedly fewer NPs. The H441 cells in coculture thus display the epithelial properties of a protective barrier similar to those found in vivo. Although sufficient NP uptake into H441 cells in coculture was achieved, transport of NPs across the epithelial layer and uptake into the endothelial layer could not be demonstrated within the chosen incubation times. Clathrin- or caveolae-mediated endocytosis of NPs could not be detected by immunofluorescence in either mono- or coculture. However, NPs accumulated in flotillin-1- and flotillin-2-containing vesicles in epithelial cells from both culture systems. Experiments with flotillin-depleted (siRNA) epithelial cells showed markedly reduced aSNP uptake, together with reduced viability (MTS) of aSNP-treated cells. This points to an involvement of flotillins in as-yet-unknown (clathrin- and caveolae-independent) endocytosis mechanisms and/or endosomal storage. In summary, the uptake mechanisms for all investigated NPs were comparable in conventional monoculture and coculture, although the barrier properties differ markedly. This work clearly shows that cells in coculture behave differently.
The cells reach a higher degree of differentiation, and communication with other relevant cell types becomes possible. Introducing a third relevant cell type into the coculture, the alveolar macrophage (cell line THP-1), which forms the first line of defence in the alveolus, further supports this statement. Initial experiments showed that the triple culture responds more sensitively than the coculture, in terms of barrier properties and IL-8 release, to stimulation with e.g. TNF-α or LPS. Compared with conventional monocultures, well-established multicellular coculture models mimic the cellular interplay in the body far more precisely. Nanoparticle interaction studies with the in vitro triple-culture model therefore yield more informative results regarding environmental or pharmaceutical NP exposure in the distal lung than was previously possible.
Resumo:
This thesis focuses on energy efficiency in wireless networks from the points of view of transmission and information diffusion. In particular, on the one hand, communication efficiency is investigated, attempting to reduce consumption during transmissions, while on the other hand the energy efficiency of the procedures required to distribute information among wireless nodes in complex networks is taken into account. As regards energy-efficient communications, an innovative transmission scheme reusing signals of opportunity is introduced; this kind of signal had not previously been studied in the literature for communication purposes. The aim is to provide a way of transmitting information with energy consumption close to zero. On the theoretical side, starting from a general communication channel model subject to a limited input amplitude, the theme of low-power transmission signals is tackled from the perspective of stating sufficient conditions for the capacity-achieving input distribution to be discrete. Finally, the focus is shifted toward the design of energy-efficient algorithms for the diffusion of information. In particular, the efforts are aimed at solving an estimation problem distributed over a wireless sensor network. The proposed solutions are analyzed in depth, both to ensure their energy efficiency and to guarantee their robustness against losses during the diffusion of information (and, more generally, against information diffusion truncation).
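Distributed estimation over a wireless sensor network is often built on average-consensus iterations, in which each node repeatedly moves toward the values of its neighbors using only local links, so that all nodes converge to the network-wide average without any fusion center. A minimal sketch (illustrative; the thesis' algorithms and their loss-robustness mechanisms are not reproduced here):

```python
def consensus_step(values, neighbors, eps=0.2):
    """One consensus iteration: every node nudges its estimate toward its
    neighbors' estimates, using only local communication."""
    return [v + eps * sum(values[j] - v for j in neighbors[i])
            for i, v in enumerate(values)]

# ring of 5 nodes, each holding a noisy local measurement of the same quantity
neighbors = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
values = [9.8, 10.2, 10.0, 9.9, 10.1]
for _ in range(200):
    values = consensus_step(values, neighbors)
print([round(v, 3) for v in values])  # all nodes agree on the mean, 10.0
```

Because the symmetric updates conserve the sum of the estimates, the common limit equals the arithmetic mean of the initial measurements; the energy cost is driven by the number of iterations and per-link transmissions, which is where efficient diffusion algorithms make their gains.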
Resumo:
The cybernetics revolution of recent years has greatly improved our lives, giving us immediate access to services and a huge amount of information over the Internet. Nowadays users are increasingly asked to enter sensitive information on the Internet, leaving traces of themselves everywhere. But some categories of people cannot risk revealing their identities online. Though born to protect U.S. intelligence communications online, Tor is nowadays the most famous low-latency network guaranteeing both the anonymity and the privacy of its users. The aim of this thesis project is to understand thoroughly how the Tor protocol works, not only by studying its theory but also by putting those concepts into practice, with particular attention to security topics. In order to run a private Tor network that emulates the real one, a virtual testing environment was configured; this allows experiments to be conducted without putting the anonymity and privacy of real users at risk. We used a Tor patch that stores TLS and circuit keys, which are given as inputs to a Tor dissector for Wireshark in order to obtain decrypted and decoded traffic. Observing clear traffic allowed us to verify the protocol outline and to confirm the format of each cell. These tools also allowed us to identify a traffic pattern, used to conduct a traffic correlation attack that passively deanonymizes hidden service clients. The attacker, controlling two nodes of the Tor network, is able to link a request for a given hidden service to the client who made it, deanonymizing him. The robustness of the traffic pattern and the statistics of the attack, such as the true positive rate and the false positive rate, are the subject of potential future work.
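At its core, a passive traffic-correlation attack compares the cell-arrival patterns observed at two attacker-controlled relays: if the same flow passes through both, their per-interval cell counts are strongly correlated. The sketch below uses a Pearson correlation over synthetic counts (illustrative only; the thesis' actual pattern and statistics are not reproduced):

```python
def pearson(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# hypothetical cells-per-interval counts at the two attacker-controlled nodes
client_side  = [3, 0, 7, 1, 0, 5, 2, 0, 6, 1]
service_side = [3, 0, 7, 1, 0, 5, 2, 0, 6, 1]   # same flow, same pattern
unrelated    = [1, 4, 0, 0, 3, 1, 5, 2, 0, 4]   # a different circuit
print(round(pearson(client_side, service_side), 2),
      round(pearson(client_side, unrelated), 2))
```

In a real attack the two series are perturbed by network jitter and cover traffic, so the threshold on the correlation score directly trades the true positive rate against the false positive rate mentioned above.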
Resumo:
Among synthetic vaccines, virus-like particles (VLPs) are used for their ability to induce strong humoral responses. Very little is reported on VLP-based-vaccine-induced CD4(+) T-cell responses, despite the requirement of helper T cells for antibody isotype switching. Further knowledge on helper T cells is also needed for optimization of CD8(+) T-cell vaccination. Here, we analysed human CD4(+) T-cell responses to vaccination with MelQbG10, which is a Qβ-VLP covalently linked to a long peptide derived from the melanoma self-antigen Melan-A. In all analysed patients, we found strong antibody responses of mainly IgG1 and IgG3 isotypes, and concomitant Th1-biased CD4(+) T-cell responses specific for Qβ. Although less strong, comparable B- and CD4(+) T-cell responses were also found specific for the Melan-A cargo peptide. Further optimization is required to shift the response more towards the cargo peptide. Nevertheless, the data demonstrate the high potential of VLPs for inducing humoral and cellular immune responses by mounting powerful CD4(+) T-cell help.