939 results for synchronization protocols
Abstract:
An extended formulation of a polyhedron P is a linear description of a polyhedron Q together with a linear map π such that π(Q)=P. These objects are of fundamental importance in polyhedral combinatorics and optimization theory, and the subject of a number of studies. Yannakakis’ factorization theorem (Yannakakis in J Comput Syst Sci 43(3):441–466, 1991) provides a surprising connection between extended formulations and communication complexity, showing that the smallest size of an extended formulation of P equals the nonnegative rank of its slack matrix S. Moreover, Yannakakis also shows that the nonnegative rank of S is at most 2^c, where c is the complexity of any deterministic protocol computing S. In this paper, we show that the latter result can be strengthened when we allow protocols to be randomized. In particular, we prove that the base-2 logarithm of the nonnegative rank of any nonnegative matrix equals the minimum complexity of a randomized communication protocol computing the matrix in expectation. Using Yannakakis’ factorization theorem, this implies that the base-2 logarithm of the smallest size of an extended formulation of a polytope P equals the minimum complexity of a randomized communication protocol computing the slack matrix of P in expectation. We show that allowing randomization in the protocol can be crucial for obtaining small extended formulations. Specifically, we prove that for the spanning tree and perfect matching polytopes, small variance in the protocol forces large size in the extended formulation.
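The relations stated in this abstract can be written compactly. A sketch in LaTeX, using notation not fixed by the abstract itself (xc(P) for the smallest size of an extended formulation of P, rank_+ for the nonnegative rank):

```latex
% Yannakakis' factorization theorem:
\operatorname{xc}(P) = \operatorname{rank}_{+}(S), \qquad S = \text{slack matrix of } P.

% Deterministic bound: a protocol of complexity c computing S gives
\operatorname{rank}_{+}(S) \le 2^{c}.

% Randomized characterization proved in the paper:
\log_2 \operatorname{rank}_{+}(S)
  = \min \bigl\{\, c : \text{some randomized protocol of complexity } c
    \text{ computes } S \text{ in expectation} \,\bigr\}.
```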
Abstract:
Transmitting sensitive data over non-secret channels has always required encryption technologies to ensure that the data arrives without exposure to eavesdroppers. The Internet has made it possible to transmit vast volumes of data more rapidly and cheaply and to a wider audience than ever before. At the same time, strong encryption makes it possible to send data securely, to digitally sign it, to prove it was sent or received, and to guarantee its integrity. The Internet and encryption make bulk transmission of data a commercially viable proposition. However, there are implementation challenges to solve before commercial bulk transmission becomes mainstream. Powerful encryption tools have a performance cost, and may affect quality of service. Without encryption, intercepted data may be illicitly duplicated and re-sold, or its commercial value diminished because its secrecy is lost. Performance degradation and potential for commercial loss discourage the bulk transmission of data over the Internet in any commercial application. This paper outlines technical solutions to these problems. We develop new technologies and combine existing ones in new and powerful ways to minimise commercial loss without compromising performance or inflating overheads.
Abstract:
Secure transmission of bulk data is of interest to many content providers. A commercially-viable distribution of content requires technology to prevent unauthorised access. Encryption tools are powerful, but have a performance cost. Without encryption, intercepted data may be illicitly duplicated and re-sold, or its commercial value diminished because its secrecy is lost. Two technical solutions make it possible to perform bulk transmissions while retaining security without too high a performance overhead. These are:

a) Hierarchical encryption - the stronger the encryption, the harder it is to break, but also the more computationally expensive it is. A hierarchical approach to key exchange means that simple and relatively weak encryption and keys are used to encrypt small chunks of data, for example 10 seconds of video. Each chunk has its own key. New keys for this bottom-level encryption are exchanged using a slightly stronger encryption; for example, a whole-video key could govern the exchange of the 10-second chunk keys. At a higher level again, there could be daily or weekly keys securing the exchange of whole-video keys, and at a yet higher level, a subscriber key could govern the exchange of weekly keys. At higher levels the encryption becomes stronger but is used less frequently, so that the overall computational cost is minimal. The main observation is that the value of each encrypted item determines the strength of the key used to secure it.

b) Non-symbolic fragmentation with signal diversity - communications are usually assumed to be sent over a single communications medium, and the data to have been encrypted and/or partitioned in whole-symbol packets. Network and path diversity break up a file or data stream into fragments which are then sent over many different channels, either in the same network or in different networks. For example, a message could be transmitted partly over the phone network and partly via satellite. While TCP/IP does a similar thing in sending different packets over different paths, this is done for load-balancing purposes and is invisible to the end application. Network and path diversity deliberately introduce the same principle as a secure communications mechanism: an eavesdropper would need to intercept not just one transmission path but all paths used. Non-symbolic fragmentation of data is also introduced to further confuse any intercepted stream of data. This involves breaking up data into bit strings which are subsequently disordered prior to transmission. Even if all transmissions were intercepted, the cryptanalyst would still need to determine fragment boundaries and correctly order them (a sketch of this bit-level scheme follows below).

These two solutions depart from the usual idea of data encryption. Hierarchical encryption is an extension of the combined encryption of systems such as PGP, but with the distinction that the strength of encryption at each level is determined by the "value" of the data being transmitted. Non-symbolic fragmentation suppresses or destroys bit patterns in the transmitted data in what is essentially a bit-level transposition cipher, but with unpredictable, irregularly-sized fragments. Both technologies have applications outside the commercial sphere and can be used in conjunction with other forms of encryption, being functionally orthogonal.
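As an illustration of the non-symbolic fragmentation idea, here is a minimal Python sketch. All names and parameters are hypothetical, not from the paper: it breaks a byte stream into irregularly sized bit-string fragments, disorders them with a keyed permutation, and reassembles them at the receiver.

```python
import random

def fragment_bits(data: bytes, seed: int, min_len: int = 3, max_len: int = 11):
    """Split data into irregularly sized bit-string fragments and disorder
    them with a keyed permutation (a bit-level transposition cipher)."""
    bits = ''.join(f'{byte:08b}' for byte in data)
    rng = random.Random(seed)              # keyed PRNG shared by both parties
    fragments, i = [], 0
    while i < len(bits):
        n = rng.randint(min_len, max_len)  # unpredictable fragment length
        fragments.append(bits[i:i + n])
        i += n
    order = list(range(len(fragments)))
    rng.shuffle(order)                     # disorder fragments before sending
    # In practice the receiver would re-derive lengths and permutation from
    # the shared seed; order is returned here only to keep the sketch short.
    return [fragments[j] for j in order], order

def reassemble(shuffled, order):
    """Invert the permutation and concatenate fragments back into bytes."""
    fragments = [None] * len(order)
    for pos, j in enumerate(order):
        fragments[j] = shuffled[pos]
    bits = ''.join(fragments)
    return bytes(int(bits[k:k + 8], 2) for k in range(0, len(bits), 8))

# Round trip: an eavesdropper without the seed sees neither fragment
# boundaries nor their ordering.
shuffled, order = fragment_bits(b"secret payload", seed=0xBEEF)
assert reassemble(shuffled, order) == b"secret payload"
```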
Abstract:
The use of distributed embedded systems in areas such as robotics, industrial automation, and avionics has become widespread over recent years. Systems of this kind are composed of several nodes, generally referred to as embedded systems, interconnected through a communication infrastructure that enables the exchange of information among them in pursuit of a common goal. Distributed embedded systems typically have very demanding timing requirements. Ethernet technology and the real-time communication protocols developed for it cannot effectively match the timing requirements of real-time applications to the Quality of Service (QoS) requirements of the different traffic types. The Hard Real-Time Ethernet Switching (HaRTES) switch was developed and implemented to address these problems, thanks to capabilities such as the synchronization of different flows and the management of different traffic types. This dissertation presents the adaptation of a physical system to demonstrate the correct operation of the communication system that will be developed and implemented, using a HaRTES switch as the element responsible for exchanging information between the nodes of the network. The performance of the resulting network architecture will also be tested and evaluated.
Abstract:
A rapid rate and high percentage of macadamia nut germination, together with production of vigorous seedlings, are required by nurseries and breeding programs. Germination of nuts is typically protracted, however, and rarely reaches 100%. Many studies have been conducted into macadamia germination, but most have assessed percent germination only. This study investigated the effects of various treatments on percent germination, germination rate, and plant, shoot and root dry weights. The treatments tested were combinations of: (i) soaking or not soaking seeds in a dilute fungicide solution prior to planting; (ii) four different planting media; and (iii) leaving seed trays open or placing them inside clear plastic bags. For freshly harvested nuts, sowing in potting mix under clear plastic and without soaking produced the highest percent germination and germination rate, the largest shoots, and longest lateral roots.
Abstract:
The synchronization of oscillatory activity in networks of neural networks is usually implemented through coupling the state variables describing neuronal dynamics. In this study we discuss another, complementary mechanism based on a learning process with memory. A driver network motif, acting as a teacher, exhibits winner-less competition (WLC) dynamics, while a driven motif, a learner, tunes its internal couplings according to the oscillations observed in the teacher. We show that under appropriate training the learner motif can dynamically copy the coupling pattern of the teacher and thus synchronize oscillations with the teacher. Then, we demonstrate that the replication of the WLC dynamics occurs for intermediate memory lengths only. In a unidirectional chain of N motifs coupled through the teacher-learner paradigm, the time interval required for pattern replication grows linearly with the chain size; hence the learning process does not blow up, and at the end we observe phase-synchronized oscillations along the chain. We also show that in a learning chain closed into a ring, the network motifs come to a consensus, i.e. to a state with the same connectivity pattern corresponding to the mean initial pattern averaged over all network motifs.
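A minimal Python sketch of the teacher-learner idea described above. The update rule, parameter names, and values are illustrative assumptions, not taken from the paper: the learner drifts toward an estimate of the teacher's couplings that is filtered through a memory of length tau.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 3                                        # neurons per motif (WLC needs >= 3)
W_teacher = rng.uniform(1.0, 2.0, (N, N))    # teacher's coupling pattern
np.fill_diagonal(W_teacher, 0.0)
W_learner = rng.uniform(1.0, 2.0, (N, N))    # learner starts from a random pattern
np.fill_diagonal(W_learner, 0.0)

eta = 0.05        # learning rate (hypothetical)
tau = 50          # memory length in steps; replication works for intermediate tau
trace = np.zeros((N, N))                     # running memory of observations

for step in range(2000):
    # Stand-in for "observing the teacher": a noisy estimate of its couplings
    observed = W_teacher + 0.1 * rng.standard_normal((N, N))
    # Exponential memory: old observations decay on the timescale tau
    trace += (observed - trace) / tau
    # Learner tunes its internal couplings toward the remembered estimate
    W_learner += eta * (trace - W_learner)
    np.fill_diagonal(W_learner, 0.0)

# Small residual remains from the observation noise
print("coupling mismatch:", np.max(np.abs(W_learner - W_teacher)))
```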
Abstract:
The lack of analytical models that can accurately describe large-scale networked systems makes empirical experimentation indispensable for understanding complex behaviors. Research on network testbeds for testing network protocols and distributed services, including physical, emulated, and federated testbeds, has made steady progress. Although the success of these testbeds is undeniable, they fail to provide: 1) scalability, for handling large-scale networks with hundreds or thousands of hosts and routers organized in different scenarios, 2) flexibility, for testing new protocols or applications in diverse settings, and 3) inter-operability, for combining simulated and real network entities in experiments. This dissertation tackles these issues in three different dimensions. First, we present SVEET, a system that enables inter-operability between real and simulated hosts. In order to increase the scalability of networks under study, SVEET enables time-dilated synchronization between real hosts and the discrete-event simulator. Realistic TCP congestion control algorithms are implemented in the simulator to allow seamless interactions between real and simulated hosts. SVEET is validated via extensive experiments and its capabilities are assessed through case studies involving real applications. Second, we present PrimoGENI, a system that allows a distributed discrete-event simulator, running in real-time, to interact with real network entities in a federated environment. PrimoGENI greatly enhances the flexibility of network experiments, through which a great variety of network conditions can be reproduced to examine what-if questions. Furthermore, PrimoGENI performs resource management functions, on behalf of the user, for instantiating network experiments on shared infrastructures. Finally, to further increase the scalability of network testbeds to handle large-scale high-capacity networks, we present a novel symbiotic simulation approach. We present SymbioSim, a testbed for large-scale network experimentation where a high-performance simulation system closely cooperates with an emulation system in a mutually beneficial way. On the one hand, the simulation system benefits from incorporating the traffic metadata from real applications in the emulation system to reproduce the realistic traffic conditions. On the other hand, the emulation system benefits from receiving the continuous updates from the simulation system to calibrate the traffic between real applications. Specific techniques that support the symbiotic approach include: 1) a model downscaling scheme that can significantly reduce the complexity of the large-scale simulation model, resulting in an efficient emulation system for modulating the high-capacity network traffic between real applications; 2) a queuing network model for the downscaled emulation system to accurately represent the network effects of the simulated traffic; and 3) techniques for reducing the synchronization overhead between the simulation and emulation systems.
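The time-dilated synchronization mentioned for SVEET can be illustrated with a small Python sketch. The class name and interface below are assumptions, not SVEET's actual API: real wall-clock time is scaled by a time dilation factor (TDF) so that real hosts and a slower-than-real-time simulator observe a consistent virtual timeline.

```python
import time

class DilatedClock:
    """Maps real (wall-clock) time to virtual time via a time dilation
    factor (TDF): tdf = real seconds per virtual second. With tdf = 10,
    real hosts advance 10x slower in virtual terms, giving the simulator
    10 real seconds to model each virtual second."""

    def __init__(self, tdf: float):
        self.tdf = tdf
        self.t0 = time.monotonic()       # real-time epoch of the experiment

    def virtual_now(self) -> float:
        return (time.monotonic() - self.t0) / self.tdf

    def sleep_virtual(self, dt: float) -> None:
        time.sleep(dt * self.tdf)        # a virtual delay costs tdf x real time

clock = DilatedClock(tdf=10.0)
clock.sleep_virtual(0.1)                 # 0.1 virtual s == 1.0 real s
print(f"virtual time elapsed: {clock.virtual_now():.2f} s")
```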
Abstract:
To exploit the full potential of radio measurements of cosmic-ray air showers at MHz frequencies, a detector timing synchronization within 1 ns is needed. Large distributed radio detector arrays such as the Auger Engineering Radio Array (AERA) rely on timing via the Global Positioning System (GPS) for the synchronization of individual detector station clocks. Unfortunately, GPS timing is expected to have an accuracy no better than about 5 ns. In practice, in particular in AERA, the GPS clocks exhibit drifts on the order of tens of ns. We developed a technique to correct for the GPS drifts, and an independent method is used to cross-check that we indeed reach a nanosecond-scale timing accuracy by this correction. First, we operate a "beacon transmitter" which emits defined sine waves that are detected by AERA antennas and recorded within the physics data. The relative phasing of these sine waves can be used to correct for GPS clock drifts. In addition to this, we observe radio pulses emitted by commercial airplanes, whose positions we determine in real time from Automatic Dependent Surveillance-Broadcast (ADS-B) messages intercepted with a software-defined radio. From the known source location and the measured arrival times of the pulses we determine relative timing offsets between radio detector stations. We demonstrate with a combined analysis that the two methods give a consistent timing calibration with an accuracy of 2 ns or better. Consequently, the beacon method alone can be used in the future to continuously determine and correct for GPS clock drifts in each individual event measured by AERA.
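The beacon method reduces to comparing the measured phase of a known sine wave between stations: a clock offset dt shifts the beacon phase by 2*pi*f*dt, so the phase difference yields the relative offset modulo one beacon period 1/f. A minimal Python sketch, with all frequencies and values made up (AERA's actual beacon setup and analysis are more involved):

```python
import numpy as np

f_beacon = 58.887e6        # beacon frequency in Hz (hypothetical value)
fs = 180e6                 # ADC sampling rate in Hz (hypothetical)
true_offset = 3.2e-9       # unknown clock offset between two stations, in s

t = np.arange(2048) / fs
ref = np.exp(2j * np.pi * f_beacon * t)                      # reference station
shifted = np.exp(2j * np.pi * f_beacon * (t + true_offset))  # drifting station

# Phase of the beacon line in each trace, via projection onto the carrier
phi_ref = np.angle(np.vdot(ref, ref))      # ~0 by construction
phi_sta = np.angle(np.vdot(ref, shifted))  # phase advance caused by the offset

# A clock offset dt shifts the phase by 2*pi*f*dt, so (mod one beacon period):
dt = (phi_sta - phi_ref) / (2 * np.pi * f_beacon)
print(f"recovered offset: {dt*1e9:.2f} ns (true: {true_offset*1e9:.2f} ns)")
```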