880 results for broadcast encryption
Abstract:
Unlike the mathematical encryption and decryption adopted by classical cryptographic technology at the higher protocol layers, it is shown that characteristics intrinsic to the physical layer, such as wireless channel propagation, can be exploited to lock useful information, which can then be automatically unlocked using real-time analog RF means. In this paper, retrodirective array (RDA) technology for spatial encryption in the multipath environment is, for the first time, combined with the directional modulation (DM) method normally associated with free-space secure physical-layer communications. We show that the RDA can be made to operate more securely by borrowing DM concepts, and that the DM-enhanced RDA arrangement is suitable for use in a multipath environment.
Abstract:
commissioned by ORF (Austrian Radio) Wien (Heidi Grundmann) for project 'Entree/Sortie' Broadcast 10 January 1991
Abstract:
Electing a leader is a fundamental task in distributed computing. In its implicit version, only the elected leader must know that it is the leader. This article studies the message and time complexity of randomized implicit leader election in synchronous distributed networks. Surprisingly, the most "obvious" complexity bounds have not been proven for randomized algorithms. In particular, the seemingly obvious lower bounds of Ω(m) messages, where m is the number of edges in the network, and Ω(D) time, where D is the network diameter, are nontrivial to show for randomized (Monte Carlo) algorithms. (Recent results, showing that even Ω(n), where n is the number of nodes in the network, is not a lower bound on the messages in complete networks, make the above bounds somewhat less obvious.) To the best of our knowledge, these basic lower bounds have not been established even for deterministic algorithms, except for the restricted case of comparison algorithms, where it was also required that nodes may not wake up spontaneously and that D and n not be known. We establish these fundamental lower bounds in this article for the general case, even for randomized Monte Carlo algorithms. Our lower bounds are universal in the sense that they hold for all universal algorithms (namely, algorithms that work for all graphs), apply to every D, m, and n, and hold even if D, m, and n are known, all the nodes wake up simultaneously, and the algorithms can make any use of the nodes' identities. To show that these bounds are tight, we present an O(m)-message algorithm. An O(D)-time leader election algorithm is known. A slight adaptation of our lower bound technique gives rise to an Ω(m) message lower bound for randomized broadcast algorithms.
An interesting fundamental problem is whether both upper bounds (messages and time) can be achieved simultaneously in the randomized setting for all graphs. The answer is known to be negative in the deterministic setting. We partially answer this question by presenting a randomized algorithm that matches both complexities in some cases. This already separates (for some cases) randomized algorithms from deterministic ones. As first steps towards the general case, we present several universal leader election algorithms whose bounds trade off messages against time. We view our results as a step towards understanding the complexity of universal leader election in distributed networks.
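As a point of reference for the message bounds discussed above, the baseline flooding approach to implicit leader election can be sketched as a small synchronous simulation. This is a minimal illustration, not the article's O(m)-message algorithm; the ring graph, random ranks, and message counting are assumptions made for the example.

```python
import random

def elect_leader(adj, diameter, rng):
    """Synchronous max-rank flooding: every node draws a random rank and,
    for `diameter` rounds, sends the largest rank it has seen to all
    neighbours.  Afterwards only the maximum-rank node observes that its
    own rank is the global maximum, so the election is implicit.  This
    baseline costs O(m * D) messages, well above the O(m) bound."""
    ranks = {v: rng.random() for v in adj}
    best = dict(ranks)            # best rank seen so far, per node
    messages = 0
    for _ in range(diameter):
        outgoing = dict(best)     # snapshot: rounds are synchronous
        for v in adj:
            for u in adj[v]:      # v receives one message from each neighbour u
                messages += 1
                if outgoing[u] > best[v]:
                    best[v] = outgoing[u]
    return [v for v in adj if ranks[v] == best[v]], messages

# A 5-node ring (diameter 2): exactly one node should elect itself,
# at a cost of 2 rounds x 10 directed edges = 20 messages.
ring = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
leaders, msgs = elect_leader(ring, diameter=2, rng=random.Random(1))
```

The gap between this O(m·D) message count and the tight O(m) bound is exactly what the algorithms in the article close.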
Abstract:
The technological constraints of early British television encouraged drama productions which emphasised the immediate, the enclosed and the close-up, an approach which Jason Jacobs described in the title of his seminal study as 'the intimate screen'. While Jacobs' book showed that this conception of early British television drama was only part of the reality, he did not focus on the role that special effects played in expanding the scope of the early television screen. This article will focus upon this role, showing that special effects were not only of use in expanding the temporal and spatial scope of television, but were also considered to be of interest to the audience as a way of exploring the new medium, receiving coverage in the popular press. These effects included pre-recorded film inserts, pre-recorded narration, multiple sets, model work and animation, combined with the live studio performances. Drawing upon archival research into television production files and scripts as well as audience responses and periodical coverage of television at the time of broadcast, this article will focus on telefantasy. This genre offered particular opportunities for utilising effects in ways that seemed appropriate for the experimentation with the form of television and for the drama narratives. This period also saw a variety of shifts within television as the BBC sought to determine a specific identity and understand the possibilities for the new medium.
This research also incorporates the BBC's own research and internal dialogue concerning audiences and how their tastes should best be met, at a time when the television audience was not only growing in number but was also expanding geographically and socially beyond the moneyed Londoners who could afford the first television sets and were within range of the Alexandra Palace transmissions. The primary case study for this article will be the 1949 production of H. G. Wells’ The Time Machine, which incorporated pre-recorded audio and film inserts that expanded the narrative out of the live studio performance both temporally and spatially, with the effects work receiving coverage in the popular magazine Illustrated. Other productions considered will be the 1938 and 1948 productions of RUR, the 1948 production of Blithe Spirit, and the 1950 adaptation of The Strange Case of Dr Jekyll and Mr Hyde. Despite the focus on telefantasy, this article will also include examples from other genres, both dramatic and factual, showing how the BBC's response to the changing television audience was to restrict drama to a more 'realistic' aesthetic and to move experimentation with televisual form to non-drama productions such as variety performances.
Abstract:
This article explores the conformation in discourse of a verbal exchange and its subsequent mediatised and legal ramifications. The event concerns an allegedly racist insult directed by high profile English professional footballer John Terry towards another player, Anton Ferdinand, during a televised match in October 2011. The substance of Terry’s utterance, which included the noun phrase ‘fucking black cunt’, was found by a Chief Magistrate not to be a racist insult, although the fact that these actual words were framed within the utterance was not in dispute. The upshot of this ruling was that Terry was acquitted of a racially aggravated public order offence. A subsequent investigation by the regulatory commission of the English Football Association (FA) ruled, almost a year after the event, that Terry was guilty of racially abusing Ferdinand. Terry was banned for four matches and fined £220,000.
It is our contention that this event, played out in legal rulings, social media and print and broadcast media, constitutes a complex web of linguistic structures and strategies in discourse, and as such lends itself well to analysis with a broad range of tools from pragmatics, discourse analysis and cognitive linguistics. Amongst other things, such an analysis can help explain the seemingly anomalous - even contradictory - position adopted in the legal ruling with regard to the speech act status of ‘fucking black cunt’; namely, that the racist content of the utterance was not contested but that the speaker was found not to have issued a racist insult. Over its course, the article addresses this broader issue by making reference to the systemic-functional interpersonal function of language, particularly to the concepts of modality, polarity and modalisation. It also draws on models of verbal irony from linguistic pragmatics, notably from the theory of irony as echoic mention (cf. Sperber and Wilson, 1981; Wilson and Sperber, 1992). Furthermore, the article makes use of the cognitive-linguistic framework Text World Theory (cf. Gavins, 2007; Werth, 1999) to examine the discourse positions occupied by key actors and adapts, from cognitive poetics, the theory of mind-modelling (cf. Stockwell, 2009) to explore the conceptual means through which these actors discursively negotiate the event.
It is argued that the pragmatic and cognitive strategies that frame the entire incident go a long way towards mitigating the impact of so ostensibly stark an act of racial abuse. Moreover, it is suggested here that the reconciliation of Terry’s action was a result of the confluence of strategies of discourse with relations of power as embodied by the media, the law and perceptions of nationhood embraced by contemporary football culture. It is further proposed that the outcome of this episode, where the FA was put in the spotlight, and where both the conflict and its key antagonists were ‘intranational’, was strongly impelled by the institution of English football and its governing body both to reproduce and maintain social, cultural and ethnic cohesion and to avoid any sense that the event featured a discernible ‘out-group’.
Abstract:
While video surveillance systems have become ubiquitous in our daily lives, they have introduced concerns over privacy invasion. Recent research to address these privacy issues includes a focus on privacy region protection, whereby existing video scrambling techniques are applied to specific regions of interest (ROI) in a video while the background is left unchanged. Most previous work in this area has only focussed on encrypting the sign bits of nonzero coefficients in the privacy region, which produces a relatively weak scrambling effect. In this paper, to enhance the scrambling effect for privacy protection, it is proposed to encrypt the intra prediction modes (IPM) in addition to the sign bits of nonzero coefficients (SNC) within the privacy region. A major issue with utilising encryption of IPM is that drift error is introduced outside the region of interest. Therefore, a re-encoding method, which is integrated with the encryption of IPM, is also proposed to remove drift error. Compared with a previous technique that uses encryption of IPM, the proposed re-encoding method offers savings in the bitrate overhead while completely removing the drift error. Experimental results and analysis based on H.264/AVC were carried out to verify the effectiveness of the proposed methods. In addition, a spiral binary mask mechanism is proposed that can reduce the bitrate overhead incurred by flagging the position of the privacy region. A definition of the syntax structure for the spiral binary mask is given. As a result of the proposed techniques, the privacy regions in a video sequence can be effectively protected by the enhanced scrambling effect with no drift error and a lower bitrate overhead.
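The sign-bit (SNC) part of such scrambling is straightforward to sketch: flip the sign of each nonzero coefficient inside the privacy region under a key-driven bit stream. The flat coefficient list, ROI index set, and use of Python's seeded `random` generator as a stand-in keystream are illustrative assumptions, not the paper's actual H.264/AVC integration (which additionally encrypts the intra prediction modes and handles drift error).

```python
import random

def scramble_signs(coeffs, key, roi):
    """Flip the sign of each nonzero coefficient inside the ROI according
    to a key-derived pseudorandom bit stream.  Because sign flipping never
    changes which coefficients are nonzero, running the same function with
    the same key descrambles the data."""
    rng = random.Random(key)
    out = list(coeffs)
    for i in roi:
        if out[i] != 0 and rng.getrandbits(1):
            out[i] = -out[i]
    return out

# Round trip: scrambling then descrambling with the same key is identity.
coeffs = [3, 0, -2, 5, 0, 1, 4]
roi = [0, 2, 3, 5]
scrambled = scramble_signs(coeffs, key=42, roi=roi)
restored = scramble_signs(scrambled, key=42, roi=roi)
```

Only signs change, never magnitudes or the zero pattern, which is why the abstract describes sign-bit-only schemes as producing a relatively weak scrambling effect.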
Abstract:
In this reported work, the frequency diverse array concept is employed to construct an orthogonal frequency-division multiplexing (OFDM) transmitter that has the capability of securing wireless communication in free space directly in the physical layer, without the need for mathematical encryption. The characteristics of the proposed scheme in terms of its secrecy performance are validated via bit error rate simulation under both high and low signal-to-noise ratio scenarios using the IEEE 802.11 OFDM physical-layer specification.
Abstract:
WHIRLBOB, also known as STRIBOBr2, is an AEAD (Authenticated Encryption with Associated Data) algorithm derived from STRIBOBr1 and the Whirlpool hash algorithm. WHIRLBOB/STRIBOBr2 is a second round candidate in the CAESAR competition. As with STRIBOBr1, the reduced-size Sponge design has a strong provable security link with a standardized hash algorithm. The new design utilizes only the LPS or ρ component of Whirlpool in flexibly domain-separated BLNK Sponge mode. The number of rounds is increased from 10 to 12 as a countermeasure against Rebound Distinguishing attacks. The 8×8-bit S-Box used by Whirlpool and WHIRLBOB is constructed from 4×4-bit “MiniBoxes”. We report on fast constant-time Intel SSSE3 and ARM NEON SIMD WHIRLBOB implementations that keep full miniboxes in registers and access them via SIMD shuffles. This is an efficient countermeasure against AES-style cache timing side-channel attacks. Another main advantage of WHIRLBOB over STRIBOBr1 (and most other AEADs) is its greatly reduced implementation footprint on lightweight platforms. On many lower-end microcontrollers the total software footprint of π+BLNK = WHIRLBOB AEAD is less than half a kilobyte. We also report an FPGA implementation that requires 4,946 logic units for a single round of WHIRLBOB, which compares favorably to 7,972 required for Keccak / Keyak on the same target platform. The relatively small S-Box gate count also enables efficient 64-bit bitsliced straight-line implementations. We finally present some discussion and analysis on the relationships between WHIRLBOB, Whirlpool, the Russian GOST Streebog hash, and the recent draft Russian Encryption Standard Kuznyechik.
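The minibox idea can be illustrated with a toy construction: compose two 4-bit permutation tables into one bijective 8-bit S-box, so only 32 nibbles (rather than a 256-byte table) need to live in registers for shuffle-based lookup. The tables below are invented for illustration; they are not Whirlpool's actual miniboxes, and the real Whirlpool/WHIRLBOB construction differs.

```python
# Hypothetical 4-bit permutation tables (NOT Whirlpool's real miniboxes).
MINI_A = [0x1, 0xB, 0x9, 0xC, 0xD, 0x6, 0xF, 0x3,
          0xE, 0x8, 0x7, 0x4, 0xA, 0x2, 0x5, 0x0]
MINI_B = [0xF, 0x0, 0xD, 0x7, 0xB, 0xE, 0x5, 0xA,
          0x9, 0x2, 0xC, 0x1, 0x3, 0x4, 0x8, 0x6]

def sbox8(x):
    """Build an 8-bit S-box value from two 4-bit minibox lookups plus an
    XOR mix.  Recovering (hi, lo) from the output is always possible, so
    the composed map is bijective whenever both tables are permutations."""
    hi, lo = x >> 4, x & 0xF
    hi, lo = MINI_A[hi], MINI_B[lo]
    hi ^= lo              # mix the halves; invertible, so bijectivity holds
    return (lo << 4) | hi
```

A constant-time SIMD implementation would hold `MINI_A` and `MINI_B` in vector registers and perform the lookups with byte shuffles, which is the cache-timing countermeasure the abstract describes.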
Abstract:
The continued use of traditional lecturing across Higher Education as the main teaching and learning approach in many disciplines must be challenged. An increasing number of studies suggest that this approach, compared to more active learning methods, is the least effective. In counterargument, the use of traditional lectures is often justified as necessary given a large student population. By analysing the implementation of a web-based broadcasting approach which replaced the traditional lecture within a programming-based module, and thereby removed the student population rationale, it was hoped that the student learning experience would become more active and ultimately enhance learning on the module. The implemented model replaces the traditional approach of students attending an on-campus lecture theatre with a web-based live broadcast approach that focuses on students being active learners rather than passive recipients. Students ‘attend’ by viewing a live broadcast of the lecturer, presented as a talking head, and the lecturer’s desktop, via a web browser. Video and audio communication is primarily from tutor to students, with text-based comments used to provide communication from students to tutor. This approach promotes active learning by allowing students to perform activities on their own computer rather than the passive viewing and listening commonly encountered in large lecture classes. By analysing this approach over two years (n = 234 students), results indicate that 89.6% of students rated the approach as offering a highly positive learning experience. Comparing student performance across three academic years also indicates a positive change. A small data-analytic study was conducted into student participation levels and suggests that the student cohort's willingness to engage with the broadcast lecture material is high.
Abstract:
In physical layer security systems there is a clear need to exploit the radio link characteristics to automatically generate an encryption key between two end points. The success of the key generation depends on the channel reciprocity, which is impacted by non-simultaneous measurements and the white nature of the noise. In this paper, a key generation system based on the channel responses of OFDM subcarriers, with enhanced channel reciprocity, is proposed. By theoretically modelling the OFDM subcarriers' channel responses, the channel reciprocity is modelled and analyzed. A low-pass filter is accordingly designed to improve the channel reciprocity by suppressing the noise. This feature is essential in low SNR environments in order to reduce the risk of failure of the information reconciliation phase during key generation. The simulation results show that the low-pass filter improves the channel reciprocity, decreases the key disagreement rate, and effectively increases the success of the key generation.
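The reciprocity-enhancement step can be sketched end-to-end with a moving average standing in for the paper's low-pass filter. The channel values, the opposite-sign noise pattern, and the 1-bit mean-threshold quantiser below are all illustrative assumptions, not the paper's design.

```python
def moving_average(x, w=3):
    """Crude low-pass filter: w-point moving average (shorter at the edges)."""
    out = []
    for i in range(len(x)):
        lo, hi = max(0, i - w // 2), min(len(x), i + w // 2 + 1)
        out.append(sum(x[lo:hi]) / (hi - lo))
    return out

def quantize(x):
    """1-bit quantiser: compare each subcarrier gain with the mean gain."""
    m = sum(x) / len(x)
    return [1 if v > m else 0 for v in x]

def disagreement(a, b):
    """Fraction of key bits on which the two sides disagree."""
    return sum(p != q for p, q in zip(a, b)) / len(a)

# Illustrative reciprocal channel: a slow ramp across 8 subcarriers,
# observed by Alice and Bob with opposite-sign alternating noise.
true_gain = [0.2 * k for k in range(8)]
noise = [0.3 if k % 2 == 0 else -0.3 for k in range(8)]
alice = [g + n for g, n in zip(true_gain, noise)]
bob = [g - n for g, n in zip(true_gain, noise)]

dis_raw = disagreement(quantize(alice), quantize(bob))
dis_filt = disagreement(quantize(moving_average(alice)),
                        quantize(moving_average(bob)))
```

Because the fast-varying noise components are suppressed while the slowly varying channel ramp survives, the filtered measurements quantize to more closely matching key bits, mirroring the reduced key disagreement reported in the abstract.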
Abstract:
As cryptographic implementations are increasingly subsumed as functional blocks within larger systems on chip, it becomes more difficult to identify the power consumption signatures of cryptographic operations amongst other unrelated processing activities. In addition, at higher clock frequencies, the current decay between successive processing rounds is only partial, making it more difficult to apply existing pattern matching techniques in side-channel analysis. We show, however, that through the use of a phase-sensitive detector, power traces can be pre-processed to generate a filtered output which exhibits an enhanced round pattern, enabling the identification of locations on a device where encryption operations are occurring and also assisting with the re-alignment of power traces for side-channel attacks.
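A phase-sensitive (lock-in) detector of this general kind can be sketched as mixing the trace with quadrature references at the round frequency and low-pass filtering the products. The synthetic trace, the round frequency, and the averaging window below are illustrative assumptions, not the paper's measurement setup.

```python
import math

def lockin_envelope(trace, f, window):
    """Phase-sensitive detection: mix the trace with quadrature references
    at frequency f (cycles/sample) and low-pass the two products with a
    moving average; the magnitude of the result is large only where the
    trace contains a strong component at f (i.e. where rounds occur)."""
    n = len(trace)
    i_mix = [trace[t] * math.cos(2 * math.pi * f * t) for t in range(n)]
    q_mix = [trace[t] * math.sin(2 * math.pi * f * t) for t in range(n)]
    env = []
    for t in range(n):
        lo, hi = max(0, t - window), min(n, t + window)
        env.append(math.hypot(sum(i_mix[lo:hi]) / (hi - lo),
                              sum(q_mix[lo:hi]) / (hi - lo)))
    return env

# Synthetic power trace: a round-rate tone (f = 0.1) present only in
# samples 300..600, buried on top of a slow drift elsewhere.
f = 0.1
trace = [0.2 * math.sin(2 * math.pi * 0.001 * t)
         + (math.sin(2 * math.pi * f * t) if 300 <= t < 600 else 0.0)
         for t in range(1000)]
env = lockin_envelope(trace, f, window=50)
```

The envelope rises sharply over the region containing the round-rate component and stays near zero elsewhere, which is the behaviour used to localise encryption activity and re-align traces.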
Abstract:
Homomorphic encryption offers potential for secure cloud computing. However, due to the complexity of homomorphic encryption schemes, the performance of schemes implemented to date has been impractical. This work investigates the use of hardware, specifically Field Programmable Gate Array (FPGA) technology, for implementing the building blocks involved in somewhat and fully homomorphic encryption schemes in order to assess the practicality of such schemes. We concentrate on the selection of a suitable multiplication algorithm and hardware architecture for large integer multiplication, one of the main bottlenecks in many homomorphic encryption schemes. We focus on the encryption step of an integer-based fully homomorphic encryption (FHE) scheme. We target the DSP48E1 slices available on Xilinx Virtex-7 FPGAs to ascertain whether the large integer multiplier within the encryption step of an FHE scheme could fit on a single FPGA device. We find that, for toy-size parameters for the FHE encryption step, the large integer multiplier fits comfortably within the DSP48E1 slices, greatly improving the practicality of the encryption step compared to a software implementation. As multiplication is an important operation in other FHE schemes, a hardware implementation using this multiplier could also be used to improve the performance of those schemes.
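The idea of assembling a wide multiplier from fixed-width multiplier blocks can be modelled in software: split the operands into small limbs, form all small partial products, then propagate carries. The 17-bit limb width here is an illustrative assumption loosely motivated by the small fixed-size multipliers in DSP slices; it is not the paper's architecture.

```python
def to_limbs(x, w):
    """Split a nonnegative integer into base-2**w limbs (little-endian)."""
    limbs = []
    while x:
        limbs.append(x & ((1 << w) - 1))
        x >>= w
    return limbs or [0]

def limb_multiply(a, b, w=17):
    """Schoolbook multiplication built from small w-bit x w-bit partial
    products, mimicking how a large integer multiplier is assembled from
    fixed-size hardware multiplier blocks; carries are resolved at the end."""
    al, bl = to_limbs(a, w), to_limbs(b, w)
    acc = [0] * (len(al) + len(bl))
    for i, x in enumerate(al):
        for j, y in enumerate(bl):
            acc[i + j] += x * y          # one small-multiplier product
    result, carry = 0, 0
    for k, v in enumerate(acc):
        v += carry
        result |= (v & ((1 << w) - 1)) << (k * w)
        carry = v >> w
    return result | (carry << (len(acc) * w))
```

In hardware, each inner-loop product maps to one DSP multiplier and the accumulation to adder trees; the number of partial products grows quadratically with operand size, which is why multiplication dominates FHE encryption cost.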
Abstract:
Arguably, the title of American Horror Story sets out an agenda for the program: this is not just a horror story, but it is a particularly American one. This chapter examines the way that the program uses seasonal celebrations as a way of expressing that national identity, with special emphasis on the importance of family to those celebrations. The particular seasonal celebrations focused on are those of Halloween and Christmas, each of which has associations with the supernatural. However, the use of the supernatural at those seasons is one which is particularly associated with the US, presenting Halloween as a time of supernatural incursion and horror, and of disruption to society and the normal order of things, while Christmas is presented more as a time of unity for the family. Where the supernatural emerges in American Christmas television, it is typically as a force to encourage togetherness and reconciliation, rather than as a dark reminder of the past. While these interpretations of these festivals have been broadcast abroad by American cultural products, not least American television, they have different associations and implications elsewhere, as will be shown. So the particular uses of these festivals are part of what marks American Horror Story out as American, as is the way that the program's narratives have been structured to fit in with US television scheduling. This chapter, then, argues that the structures of the narratives combine with their use of the festivals of Halloween and Christmas in order to enhance the sense of this series as a particularly American horror story.
Abstract:
With the rapid development of internet-of-things (IoT), face scrambling has been proposed for privacy protection during IoT-targeted image/video distribution. Consequently, in these IoT applications, biometric verification needs to be carried out in the scrambled domain, presenting significant challenges in face recognition. Since face models become chaotic signals after scrambling/encryption, a typical solution is to utilize traditional data-driven face recognition algorithms. While chaotic pattern recognition is still a challenging task, in this paper we propose a new ensemble approach - Many-Kernel Random Discriminant Analysis (MK-RDA) - to discover discriminative patterns from chaotic signals. We also incorporate a salience-aware strategy into the proposed ensemble method to handle chaotic facial patterns in the scrambled domain, where random selections of features are made on semantic components via salience modelling. In our experiments, the proposed MK-RDA was tested rigorously on three human face datasets: the ORL face dataset, the PIE face dataset and the PUBFIG wild face dataset. The experimental results successfully demonstrate that the proposed scheme can effectively handle chaotic signals and significantly improve the recognition accuracy, making our method a promising candidate for secure biometric verification in emerging IoT applications.
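The random-subspace flavour of such an ensemble can be sketched generically: each member classifier sees only a random subset of feature dimensions, and the members vote. The nearest-centroid base learner below stands in for discriminant analysis, and the toy data, subset size, and majority vote are illustrative assumptions; the paper's kernels and salience-aware selection over semantic components are not modelled.

```python
import random

def subspace_ensemble_predict(train, labels, sample, n_models=15, k=4, seed=0):
    """Random-subspace ensemble: each member restricts the data to a random
    subset of k feature dimensions, classifies the sample by the nearest
    per-class centroid in that subspace, and the members vote."""
    rng = random.Random(seed)
    dims = len(sample)
    classes = sorted(set(labels))
    votes = {}
    for _ in range(n_models):
        feats = rng.sample(range(dims), k)   # this member's random subspace
        best_cls, best_d = None, float("inf")
        for c in classes:
            rows = [x for x, y in zip(train, labels) if y == c]
            cent = [sum(r[f] for r in rows) / len(rows) for f in feats]
            d = sum((sample[f] - cv) ** 2 for f, cv in zip(feats, cent))
            if d < best_d:
                best_cls, best_d = c, d
        votes[best_cls] = votes.get(best_cls, 0) + 1
    return max(votes, key=votes.get)

# Two well-separated toy classes in 8 dimensions.
train = [[0.0] * 8, [1.0] * 8, [10.0] * 8, [11.0] * 8]
labels = [0, 0, 1, 1]
```

Because every member sees different dimensions, the ensemble is less sensitive to any single scrambled (chaotic) feature, which is the intuition behind applying random subspaces in the scrambled domain.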
Abstract:
Cryptographic algorithms have been designed to be computationally secure; however, it has been shown that when they are implemented in hardware, these devices leak side-channel information that can be used to mount an attack that recovers the secret encryption key. In this paper, an overlapping window power spectral density (PSD) side-channel attack targeting an FPGA device running the Advanced Encryption Standard is proposed. This improves upon previous research into PSD attacks by reducing the amount of pre-processing (effort) required. It is shown that the proposed overlapping window method requires less processing effort than a sliding window approach, whilst overcoming the issues of sampling boundaries. The method is shown to be effective for both aligned and misaligned data sets and is therefore recommended as an improvement over existing time-domain correlation attacks.
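An overlapping-window averaged periodogram, the kind of PSD computation such attacks build on, can be sketched directly. The segment length, 50% overlap, Hann window, and brute-force DFT below are illustrative assumptions, not the paper's attack parameters.

```python
import cmath
import math

def welch_psd(x, seg_len, overlap):
    """Averaged periodogram over overlapping windows: slide a Hann-windowed
    segment across the trace with the given overlap and average the
    magnitude-squared DFTs.  Overlapping segments reduce the sensitivity to
    where sample-window boundaries fall relative to the signal."""
    step = seg_len - overlap
    hann = [0.5 - 0.5 * math.cos(2 * math.pi * n / (seg_len - 1))
            for n in range(seg_len)]
    psd = [0.0] * seg_len
    count = 0
    for start in range(0, len(x) - seg_len + 1, step):
        seg = [x[start + n] * hann[n] for n in range(seg_len)]
        for k in range(seg_len):          # brute-force DFT bin k
            s = sum(seg[n] * cmath.exp(-2j * math.pi * k * n / seg_len)
                    for n in range(seg_len))
            psd[k] += abs(s) ** 2
        count += 1
    return [p / count for p in psd]

# A tone at DFT bin 8 of a 64-sample segment should dominate the PSD.
tone = [math.sin(2 * math.pi * 8 * t / 64) for t in range(512)]
psd = welch_psd(tone, seg_len=64, overlap=32)
```

In an attack setting the "tone" is the leakage component that correlates with the key hypothesis; working in the frequency domain also gives the tolerance to trace misalignment noted in the abstract.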