993 results for Hardware IP Security


Relevance:

20.00%

Publisher:

Abstract:

Authenticated Encryption (AE) is the cryptographic process of providing simultaneous confidentiality and integrity protection to messages. This approach is more efficient than a two-step process that first encrypts the message to provide confidentiality and then, in a separate pass, generates a Message Authentication Code (MAC) to provide integrity protection. AE using symmetric ciphers can be provided either by stream ciphers with built-in authentication mechanisms or by block ciphers using appropriate modes of operation. Stream ciphers have the potential for higher performance and a smaller footprint in hardware and/or software than block ciphers. This property makes stream ciphers suitable for resource-constrained environments, where storage and computational power are limited. There have been several recent stream cipher proposals that claim to provide AE. These ciphers can be analysed using existing techniques that consider confidentiality or integrity separately; however, there is currently no framework for the analysis of AE stream ciphers that considers these two properties simultaneously. This thesis introduces a novel framework for the analysis of AE using stream cipher algorithms, and analyses the mechanisms for providing confidentiality and integrity in AE algorithms based on stream ciphers. There is a greater emphasis on the analysis of the integrity mechanisms, as there is little in the public literature on these in the context of authenticated encryption. The thesis has four main contributions, as follows. The first contribution is the design of a framework that can be used to classify AE stream ciphers based on three characteristics. The first classification applies Bellare and Namprempre's work on the order in which the encryption and authentication processes take place. The second classification is based on the method used for accumulating the input message (either directly or indirectly) into the internal states of the cipher to generate a MAC. The third classification is based on whether the sequence used to provide encryption and authentication is generated using a single key and initial vector, or two keys and two initial vectors. The second contribution is the application of an existing algebraic method to analyse the confidentiality algorithms of two AE stream ciphers, namely SSS and ZUC. The algebraic method is based on considering the nonlinear filter (NLF) of these ciphers as a combiner with memory. This method enables us to construct equations for the NLF that relate the inputs, outputs and memory of the combiner to the output keystream. We show that both of these ciphers are secure against this type of algebraic attack. We conclude that using a key-dependent S-box in the NLF twice, and using two different S-boxes in the NLF of ZUC, prevent this type of algebraic attack. The third contribution is a new general matrix-based model for MAC generation in which the input message is injected directly into the internal state. This model describes the accumulation process when the input message is injected directly into the internal state of a nonlinear filter generator. We show that three recently proposed AE stream ciphers can be considered as instances of this model, namely SSS, NLSv2 and SOBER-128. Our model is more general than previous investigations into direct injection. Possible forgery attacks against this model are investigated.
It is shown that using a nonlinear filter in the accumulation process of the input message, when either the input message or the initial state of the register is unknown, prevents forgery attacks based on collisions. The last contribution is a new general matrix-based model for MAC generation where the input message is injected indirectly into the internal state. This model uses the input message as a controller to accumulate a keystream sequence into an accumulation register. We show that three current AE stream ciphers can be considered as instances of this model, namely ZUC, Grain-128a and Sfinks. We establish the conditions under which the model is susceptible to forgery and side-channel attacks.
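A minimal sketch may help make the direct-injection accumulation model concrete. The register size, feedback taps and filter function below are illustrative inventions, not taken from SSS, NLSv2 or SOBER-128:

```python
# Toy sketch of the "direct injection" MAC accumulation model: message bits
# are XORed directly into the internal state of a filter generator, and the
# tag is squeezed out through a nonlinear filter. All parameters here are
# illustrative choices only.

def step(state: int) -> int:
    """One step of a 16-bit LFSR with an illustrative feedback polynomial."""
    fb = ((state >> 15) ^ (state >> 13) ^ (state >> 4) ^ state) & 1
    return ((state << 1) | fb) & 0xFFFF

def nonlinear_filter(state: int) -> int:
    """Toy nonlinear filter: a quadratic function of a few state bits."""
    x0, x3, x7, x11 = (state >> 0) & 1, (state >> 3) & 1, (state >> 7) & 1, (state >> 11) & 1
    return x0 ^ (x3 & x7) ^ (x7 & x11)

def mac_direct_injection(key_state: int, message_bits, tag_bits: int = 8) -> int:
    """Accumulate message bits directly into the state, then read out a tag."""
    state = key_state
    for bit in message_bits:
        state = step(state) ^ bit      # direct injection into the internal state
    tag = 0
    for _ in range(tag_bits):          # squeeze the tag through the filter
        state = step(state)
        tag = (tag << 1) | nonlinear_filter(state)
    return tag

# Collision-based forgeries against this model amount to finding two messages
# that leave the register in the same state under an unknown key.
print(mac_direct_injection(0xACE1, [1, 0, 1, 1, 0, 0, 1, 0]))
```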

Relevance:

20.00%

Publisher:

Abstract:

Most security models for authenticated key exchange (AKE) do not explicitly model the associated certification system, which includes the certification authority (CA) and its behaviour. However, there are several well-known and realistic attacks on AKE protocols which exploit various forms of malicious key registration and which therefore lie outside the scope of these models. We provide the first systematic analysis of AKE security incorporating certification systems (ASICS). We define a family of security models that, in addition to allowing different sets of standard AKE adversary queries, also permit the adversary to register arbitrary bitstrings as keys. For this model family we prove generic results that enable the design and verification of protocols that achieve security even if some keys have been produced maliciously. Our approach is applicable to a wide range of models and protocols; as a concrete illustration of its power, we apply it to the CMQV protocol in the natural strengthening of the eCK model to the ASICS setting.
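As a toy illustration of the extra adversarial power the ASICS models grant, the following skeleton (with our own invented class and method names, not the paper's game-based formalism) shows an AKE game extended with a query that registers an arbitrary bitstring as a public key:

```python
# Illustrative skeleton of an AKE security game with an ASICS-style
# key-registration query. Names are hypothetical; the actual models are
# defined game-theoretically in the paper, not as code.

import os

class ASICSGame:
    def __init__(self):
        self.honest_keys = {}      # party -> honestly generated public key
        self.registered_keys = {}  # party -> adversarially registered bitstring

    def honest_keygen(self, party: str) -> bytes:
        pk = os.urandom(32)        # stand-in for a real key generation algorithm
        self.honest_keys[party] = pk
        return pk

    def register(self, party: str, bitstring: bytes) -> None:
        """ASICS query: the CA accepts an arbitrary bitstring as a public key,
        modelling malicious registration outside standard AKE models."""
        self.registered_keys[party] = bitstring

    def is_honest(self, party: str) -> bool:
        return party in self.honest_keys and party not in self.registered_keys

game = ASICSGame()
game.honest_keygen("Alice")
game.register("Mallory", b"\x00" * 32)   # adversary registers a degenerate key
print(game.is_honest("Alice"), game.is_honest("Mallory"))
```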

Relevance:

20.00%

Publisher:

Abstract:

A one-time program is a hypothetical device by which a user may evaluate a circuit on exactly one input of his choice, before the device self-destructs. One-time programs cannot be achieved by software alone, as any software can be copied and re-run. However, it is known that every circuit can be compiled into a one-time program using a very basic hypothetical hardware device called a one-time memory. At first glance it may seem that quantum information, which cannot be copied, might also allow for one-time programs. But it is not hard to see that this intuition is false: one-time programs for classical or quantum circuits based solely on quantum information do not exist, even with computational assumptions. This observation raises the question, "what assumptions are required to achieve one-time programs for quantum circuits?" Our main result is that any quantum circuit can be compiled into a one-time program assuming only the same basic one-time memory devices used for classical circuits. Moreover, these quantum one-time programs achieve statistical universal composability (UC-security) against any malicious user. Our construction employs methods for computation on authenticated quantum data, and we present a new quantum authentication scheme called the trap scheme for this purpose. As a corollary, we establish UC-security of a recent protocol for delegated quantum computation.
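For illustration, here is a toy software rendering of the one-time memory interface assumed above. It is only symbolic: as the abstract notes, genuine one-time behaviour cannot be achieved by software alone, since software state can be copied and rerun.

```python
# A one-time memory (OTM) stores two secrets and reveals exactly one,
# chosen by the user, before destroying both. This class captures the
# interface only; the "self-destruct" is symbolic in software.

class OneTimeMemory:
    def __init__(self, secret0: bytes, secret1: bytes):
        self._secrets = [secret0, secret1]
        self._used = False

    def read(self, choice: int) -> bytes:
        if self._used:
            raise RuntimeError("OTM already consumed")
        self._used = True
        value = self._secrets[choice]
        self._secrets = None   # symbolic self-destruction
        return value

otm = OneTimeMemory(b"key-if-0", b"key-if-1")
print(otm.read(1))   # returns b"key-if-1"; any further read raises
```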

Relevance:

20.00%

Publisher:

Abstract:

The Transport Layer Security (TLS) protocol is the most widely used security protocol on the Internet. It supports negotiation of a wide variety of cryptographic primitives through different cipher suites, various modes of client authentication, and additional features such as renegotiation. Despite its widespread use, only recently has the full TLS protocol been proven secure, and only the core cryptographic protocol with no additional features. These additional features have been the cause of several practical attacks on TLS. In 2009, Ray and Dispensa demonstrated how TLS renegotiation allows an attacker to splice together its own session with that of a victim, resulting in a man-in-the-middle attack on TLS-reliant applications such as HTTP. TLS was subsequently patched with two defence mechanisms for protection against this attack. We present the first formal treatment of renegotiation in secure channel establishment protocols. We add optional renegotiation to the authenticated and confidential channel establishment model of Jager et al., an adaptation of the Bellare–Rogaway authenticated key exchange model. We describe the attack of Ray and Dispensa on TLS within our model. We show generically that the proposed fixes for TLS offer good protection against renegotiation attacks, and give a simple new countermeasure that provides renegotiation security for TLS even in the face of stronger adversaries.
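One of the deployed countermeasures (the RFC 5746 renegotiation_info binding) ties each renegotiating handshake to the verify_data of the handshake it renews. The simplified sketch below, with invented names and an HMAC stand-in for the TLS Finished computation rather than the real wire format, shows why a spliced session fails the check:

```python
# Sketch of the idea behind the RFC 5746 countermeasure: a renegotiation is
# accepted only if it carries the previous handshake's verify_data, which an
# attacker splicing its own session onto a victim's cannot supply.

import hmac, hashlib

def verify_data(master_secret: bytes, transcript: bytes) -> bytes:
    # Stand-in for the TLS Finished computation (a PRF over the transcript).
    return hmac.new(master_secret, transcript, hashlib.sha256).digest()[:12]

class Channel:
    def __init__(self):
        self.last_vd = b""   # empty on the initial handshake

    def handshake(self, master_secret: bytes, transcript: bytes,
                  claimed_previous_vd: bytes) -> bool:
        # Bind the new handshake to the one it renews.
        if claimed_previous_vd != self.last_vd:
            return False
        self.last_vd = verify_data(master_secret, transcript)
        return True

ch = Channel()
assert ch.handshake(b"ms1", b"transcript1", b"")              # initial handshake
assert ch.handshake(b"ms2", b"transcript2", ch.last_vd)       # honest renegotiation
assert not ch.handshake(b"ms3", b"transcript3", b"forged")    # spliced session rejected
```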

Relevance:

20.00%

Publisher:

Abstract:

Advances in Information and Communication Technologies have the potential to improve many facets of modern healthcare service delivery. The implementation of electronic health record systems is a critical part of an eHealth system. Despite the potential gains, there are several obstacles that limit the wider development of electronic health record systems. Among these are the perceived threats to the security and privacy of patients' health data, and a widely held belief that these cannot be adequately addressed. We hypothesise that the major concerns regarding eHealth security and privacy cannot be overcome through the implementation of technology alone. Human dimensions must be considered when analysing the provision of the three fundamental information security goals: confidentiality, integrity and availability. A sociotechnical analysis to establish the information security and privacy requirements when designing and developing a given eHealth system is therefore important and timely. A framework that accommodates the legislative requirements and human perspectives, in addition to the technological measures, is useful in developing a measurable and accountable eHealth system. Successful implementation of this approach would enable the possibility, practicality and sustainability of proposed eHealth systems to be realised.

Relevance:

20.00%

Publisher:

Abstract:

This paper presents a comprehensive formal security framework for key derivation functions (KDFs). The major security goal for a KDF is to produce cryptographic keys from a private seed value such that the derived keys are indistinguishable from random binary strings. We form a framework of five security models for KDFs. This consists of four security models that we propose: Known Public Inputs Attack (KPM, KPS), Adaptive Chosen Context Information Attack (CCM) and Adaptive Chosen Public Inputs Attack (CPM); and another security model, previously defined by Krawczyk [6], which we refer to as Adaptive Chosen Context Information Attack (CCS). These security models are formulated as indistinguishability games. In addition, we prove the relationships between these five security models and analyse KDFs using the framework (in the random oracle model).
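A minimal sketch of such an indistinguishability game may make the framework concrete. The HKDF-like KDF here is only a stand-in so the game runs, not one of the constructions analysed in the paper:

```python
# Minimal indistinguishability game for a KDF: the challenger flips a bit and
# answers the challenge with either a real KDF output or a uniformly random
# string of the same length; the adversary wins by guessing the bit.

import hmac, hashlib, secrets

def kdf(seed: bytes, context: bytes, length: int = 32) -> bytes:
    # Illustrative extract-then-expand KDF (HKDF-like), a stand-in only.
    prk = hmac.new(b"salt", seed, hashlib.sha256).digest()
    return hmac.new(prk, context + b"\x01", hashlib.sha256).digest()[:length]

def ind_game(adversary) -> bool:
    seed = secrets.token_bytes(32)        # private seed, hidden from adversary
    b = secrets.randbits(1)

    def challenge(context: bytes) -> bytes:
        real = kdf(seed, context)
        return real if b == 1 else secrets.token_bytes(len(real))

    guess = adversary(challenge)
    return guess == b                     # True iff the adversary wins

# A guessing adversary wins with probability 1/2; a secure KDF should not
# allow a noticeably better advantage.
print(ind_game(lambda challenge: secrets.randbits(1)))
```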

Relevance:

20.00%

Publisher:

Abstract:

A Distributed Wireless Smart Camera (DWSC) network is a special type of Wireless Sensor Network (WSN) that processes captured images in a distributed manner. While image processing on DWSCs has great potential for growth, with practical applications in domains such as security surveillance and health care, it suffers from severe constraints. In addition to the limitations of conventional WSNs, image processing on DWSCs requires more computational power, bandwidth and energy, which presents significant challenges for large-scale deployments. This dissertation has developed a number of algorithms that are highly scalable, portable, energy efficient and performance efficient, with consideration of the practical constraints imposed by the hardware and the nature of WSNs. More specifically, these algorithms tackle the problems of multi-object tracking and localisation in distributed wireless smart camera networks, and of optimal camera configuration determination. Addressing the first problem, multi-object tracking and localisation, requires solving a large array of sub-problems. The sub-problems discussed in this dissertation are calibration of internal parameters, multi-camera calibration for localisation, and object handover for tracking. These topics have been covered extensively in the computer vision literature; however, new algorithms must be invented to accommodate the various constraints introduced and required by the DWSC platform. A technique has been developed for the automatic calibration of low-cost cameras which are assumed to be restricted in their freedom of movement to either pan or tilt movements. Camera internal parameters, including focal length, principal point, lens distortion parameter and the angle and axis of rotation, can be recovered from a minimum of two images from the camera, provided that the axis of rotation between the two images goes through the camera's optical centre and is parallel to either the vertical (panning) or horizontal (tilting) axis of the image. For object localisation, a novel approach has been developed for the calibration of a network of non-overlapping DWSCs in terms of their ground plane homographies, which can then be used for localising objects. In the proposed approach, a robot travels through the camera network while updating its position in a global coordinate frame, which it broadcasts to the cameras. The cameras use this, along with the image plane location of the robot, to compute a mapping from their image planes to the global coordinate frame. This is combined with an occupancy map generated by the robot during the mapping process to localise objects moving within the network. In addition, to deal with the problem of object handover between DWSCs with non-overlapping fields of view, a highly scalable, distributed protocol has been designed. Cameras that follow the proposed protocol transmit object descriptions to a selected set of neighbours that are determined using a predictive forwarding strategy. The received descriptions are then matched at the subsequent camera on the object's path, using a probability maximisation process with locally generated descriptions. The second problem, camera placement, emerges naturally when these pervasive devices are put into real use. The locations, orientations, lens types and so on of the cameras must be chosen in such a way that the utility of the network is maximised (e.g. maximum coverage) while user requirements are met.
To deal with this, a statistical formulation of the problem of determining optimal camera configurations has been introduced and a Trans-Dimensional Simulated Annealing (TDSA) algorithm has been proposed to effectively solve the problem.
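As a concrete illustration of the robot-assisted calibration step, the sketch below estimates an image-to-ground homography from broadcast robot positions using the standard Direct Linear Transform (DLT). This is generic computer vision rather than the dissertation's exact algorithm, and the coordinates are invented:

```python
# A camera collects image-plane observations of the robot together with the
# robot's broadcast global positions, then estimates the image-to-ground
# homography by DLT and uses it to localise other objects.

import numpy as np

def estimate_homography(image_pts, ground_pts):
    """DLT estimate of H with ground ~ H @ image (homogeneous), >= 4 points."""
    A = []
    for (x, y), (X, Y) in zip(image_pts, ground_pts):
        A.append([-x, -y, -1, 0, 0, 0, x * X, y * X, X])
        A.append([0, 0, 0, -x, -y, -1, x * Y, y * Y, Y])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def localise(H, image_pt):
    """Map an image observation to global (ground-plane) coordinates."""
    p = H @ np.array([image_pt[0], image_pt[1], 1.0])
    return p[:2] / p[2]

# Robot seen at four image positions, with the matching broadcast positions:
img = [(100, 200), (400, 210), (390, 420), (120, 410)]
gnd = [(0.0, 0.0), (3.0, 0.0), (3.0, 2.0), (0.0, 2.0)]
H = estimate_homography(img, gnd)
print(localise(H, (250, 300)))   # estimated global position of a new object
```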

Relevance:

20.00%

Publisher:

Abstract:

Whether using electronic banking, using credit cards, or synchronising a mobile telephone to an in-car system via Bluetooth, humans are a critical part of many cryptographic protocols every day. We reduced the gap that exists between the theory and the reality of the security of these cryptographic protocols involving humans by creating tools and techniques for proofs and implementations of human-followable security. Following three human research studies, we present a model for capturing human recognition; we provide a tool for generating values called Computer-HUman Recognisable Nonces (CHURNs); and we provide a model for capturing human-perceptible freshness.

Relevance:

20.00%

Publisher:

Abstract:

The purpose of the current study was to develop a measurement model of information security culture in developing countries such as Saudi Arabia. In order to achieve this goal, the study commenced with a comprehensive review of the literature, the outcome being the development of a conceptual model as a reference base. The literature review revealed a lack of academic and professional research into information security culture in developing countries, and more specifically in Saudi Arabia. Given the increasing importance of, and the significant investment developing countries are making in, information technology, there is a clear need to investigate information security culture from the perspective of developing countries such as Saudi Arabia. Furthermore, our analysis indicated a lack of clear conceptualisation of, and distinction between, the factors that constitute information security culture and the factors that influence it. Our research aims to fill this gap by developing and validating a measurement model of information security culture, as well as developing an initial understanding of the factors that influence security culture. A sequential mixed method, consisting of a qualitative phase to explore the conceptualisation of information security culture and a quantitative phase to validate the model, is adopted for this research. In the qualitative phase, eight interviews with information security experts in eight different Saudi organisations were conducted, revealing that security culture can be constituted as a reflection of security awareness, security compliance and security ownership. Additionally, the qualitative interviews revealed that the factors that influence security culture are top management involvement, policy enforcement, policy maintenance, training, and ethical conduct policies. These factors were confirmed by the literature review as being critical and important for the creation of security culture, and formed the basis for our initial information security culture model, which was operationalised and tested in different Saudi Arabian organisations. Using data from two hundred and fifty-four valid responses, we demonstrated the validity and reliability of the information security culture model through Exploratory Factor Analysis (EFA), followed by Confirmatory Factor Analysis (CFA). In addition, using Structural Equation Modelling (SEM), we were further able to demonstrate the validity of the model in a nomological net, as well as provide some preliminary findings on the factors that influence information security culture. The current study contributes to the existing body of knowledge in two major ways: firstly, it develops an information security culture measurement model; secondly, it presents empirical evidence for the nomological validity of the security culture measurement model and the discovery of factors that influence information security culture. The current study also indicates possible future related research needs.
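For readers unfamiliar with the validation pipeline, the following sketch runs an EFA step of the kind described, on synthetic Likert-style data with three latent constructs standing in for awareness, compliance and ownership. The study's actual instrument and responses are not reproduced here:

```python
# Illustrative exploratory factor analysis (EFA): survey items are reduced to
# latent factors. Data below is synthetic and the item counts are invented.

import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_respondents, n_items = 254, 9                  # e.g. 3 items per construct
latent = rng.normal(size=(n_respondents, 3))     # 3 latent constructs
loadings = np.kron(np.eye(3), np.ones((1, 3)))   # each item loads on one factor
X = latent @ loadings + 0.5 * rng.normal(size=(n_respondents, n_items))

fa = FactorAnalysis(n_components=3).fit(X)
print(np.round(fa.components_, 2))               # recovered loading pattern
```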

Relevance:

20.00%

Publisher:

Abstract:

GO423 was initiated in 2012 as part of a community effort to ensure the vitality of the Queensland Games Sector. In common with other industrialised nations, the game industry in Australia is a reasonably significant contributor to Gross National Product (GNP). Games are played in 92% of Australian homes, and the average adult player has been playing them for at least twelve years, with 26% playing for more than thirty years (Brand, 2011). Like the games and interactive entertainment industries in other countries, the Australian industry has its roots in the small team model of the 1980s. So, for example, Beam Software, which was established in Melbourne in 1980, was started by two people, and Krome Studios was started in 1999 by three. Both these companies grew to employing over 100 people in their heydays (considered large by Antipodean standards), not by producing their own intellectual property (IP) but by content generation for offshore parent companies. Thus our bigger companies grew on a model of service provision and tended not to generate their own IP (Darchen, 2012). There are some notable exceptions where IP has originated locally and been acquired by international companies, but in the case of some of the works of which we are most proud, the Australian company took on the role of "Night Elf" – a convenience due to affordances of the time zone, which allowed our companies to work while the parent companies slept in a different time zone. In the post-GFC climate, the strong Australian dollar and the vulnerability of such service provision mean that job security is virtually non-existent, with employees invariably being on short-term contracts. These issues are exacerbated by the decline of middle-ground games (those which fall between the triple-A titles and the smaller games often produced for a casual audience). The response to this state of affairs has been a change in the Australian games industry: a new recognition of its identity as a wider cultural sector, and the rise (or return) of an increasing number of small independent game development companies. 'Indies' consist of small teams, often making games for mobile and casual platforms, that depend on producing at least one if not two games a year, and that often explore more radical definitions of games as designed cultural objects. The need for innovation and creativity in the Australian context is seen as a vital aspect of the current changing scene, where we see the emphasis on the large studio production model give way to an emerging cultural sector model in which small independent teams are engaged in shorter design and production schedules driven by digital distribution. In terms of Quality of Life (QoL), this new digital distribution brings with it the danger of 'digital isolation': a studio can work from home and deliver from home. Community events thus become increasingly important. The GO423 Symposium is a response to these perceived needs, and the event is based on the understanding that our new small creative teams depend on the local community of practice in no small way. GO423 thus offers local industry participants the opportunity to talk to each other about their work, to talk to potential new members about their work, and to show off their work in a small, intimate setting, encouraging both feedback and support.

Relevance:

20.00%

Publisher:

Abstract:

Even though web security protocols are designed to make computer communication secure, it is widely known that there is potential for security breakdowns at the human-machine interface. This paper examines findings from a qualitative study investigating how individuals identify security decisions on the web. The study was designed to uncover how security is perceived in an individual user's context. Study participants were tertiary-qualified individuals, with a focus on HCI designers, security professionals and the general population. The study identifies that security frameworks for the web are inadequate from an interaction perspective, with even tertiary-qualified users having a poor or partial understanding of security, of which they themselves are acutely aware. The result is that individuals feel they must protect themselves on the web. The findings contribute a significant mapping of the ways in which individuals reason and act to protect themselves on the web. We use these findings to highlight the need to design for trust at three levels, and the need to ensure that HCI design does not impact on the users' main identified protection mechanism: separation.

Relevance:

20.00%

Publisher:

Abstract:

A fundamental part of many authentication protocols which authenticate a party to a human involves the human recognizing or otherwise processing a message received from that party. Examples include typical implementations of Verified by Visa, in which a message previously stored by the human at a bank is sent by the bank to the human to authenticate the bank to the human; or the expectation that humans will recognize or verify an extended validation certificate in an HTTPS context. This paper presents general definitions and building blocks for the modelling and analysis of human recognition in authentication protocols, allowing the creation of proofs for protocols which include humans. We cover both generalized trawling attacks and human-specific targeted attacks. As examples of the range of uses of our construction, we use the model presented in this paper to prove the security of a mutual authentication login protocol and of a human-assisted device pairing protocol.
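The Verified by Visa example can be made concrete with a toy flow; the class, message and password below are invented for illustration, and the paper's actual model is game-based rather than code:

```python
# Toy flow of the recognise-then-authenticate pattern: the human first
# recognises a message previously stored at the bank (authenticating the
# bank to the human), and only then authenticates to the bank.

class Bank:
    def __init__(self):
        self.stored = {}     # user -> (personal message, password)

    def enrol(self, user, message, password):
        self.stored[user] = (message, password)

    def show_message(self, user):
        # Bank -> human: authenticates the bank to the human, assuming the
        # human actually recognises the stored message.
        return self.stored[user][0]

    def login(self, user, password):
        # Human -> bank: attempted only after the human accepted the message.
        return self.stored[user][1] == password

bank = Bank()
bank.enrol("alice", "purple giraffe at dawn", "s3cret")

shown = bank.show_message("alice")
human_recognises = (shown == "purple giraffe at dawn")   # the human's check
assert human_recognises and bank.login("alice", "s3cret")
```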

Relevance:

20.00%

Publisher:

Abstract:

This paper presents a new framework for distributed intrusion detection based on taint marking. Our system tracks information flows between applications on multiple hosts gathered in groups (i.e., sets of hosts sharing the same distributed information flow policy) by attaching taint labels to system objects such as files, sockets, Inter-Process Communication (IPC) abstractions, and memory mappings. Labels are carried over the network by tainting network packets. A distributed information flow policy is defined for each group at the host level by labeling information and defining how users and applications can legally access, alter or transfer information towards other trusted or untrusted hosts. As opposed to existing approaches, where information is most often represented by two security levels (low/high, public/private, etc.), our model identifies each piece of information within a distributed system and defines their legal interactions in a fine-grained manner. Hosts store and exchange security labels in a peer-to-peer fashion, and there is no central monitor. Our IDS is implemented in the Linux kernel as a Linux Security Module (LSM) and runs standard software on commodity hardware with no modification required. The only trusted code is our modified operating system kernel. Finally, we present a scenario of intrusion in a web service running on multiple hosts, and show how our distributed IDS is able to report security violations at the level of each host.
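A minimal sketch of the taint propagation and policy check described above may be useful; the names and the shape of the policy here are illustrative inventions, not the actual LSM implementation:

```python
# Toy taint-label propagation: every object carries a set of labels, reads
# taint the reader, writes taint the written object, and a group flow policy
# is checked before labelled information may cross to another host.

class TaintedObject:
    def __init__(self, name, labels=()):
        self.name = name
        self.labels = set(labels)

def read(subject: TaintedObject, obj: TaintedObject) -> None:
    subject.labels |= obj.labels          # information flows object -> subject

def write(subject: TaintedObject, obj: TaintedObject) -> None:
    obj.labels |= subject.labels          # information flows subject -> object

def may_send(packet_labels: set, host_clearance: set) -> bool:
    # Group policy check: a host may receive a packet only if it is trusted
    # for every label the packet carries.
    return packet_labels <= host_clearance

secret_file = TaintedObject("/etc/shadow", {"secret"})
process = TaintedObject("webapp")
socket_buf = TaintedObject("sock:1234")

read(process, secret_file)    # process is now tainted with {"secret"}
write(process, socket_buf)    # taint carried onto the outgoing packet
print(may_send(socket_buf.labels, host_clearance={"public"}))   # False: violation reported
```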