18 results for Hardware-based security
in Aston University Research Archive
Abstract:
The port industry is facing a dramatic wave of changes that have transformed the structure of the industry. Modern seaports are increasingly shifting from a “hardware-based” approach towards a “know-how intensive” configuration. In this context, knowledge resources, learning processes and training initiatives increasingly represent key elements in guaranteeing the quality of service supplied and hence the competitiveness of modern seaport communities. This paper describes the learning needs analysis conducted amongst key port community actors in three ports in the south-east of Ireland during 2005 in the context of the I-Sea.Net project. It goes on to describe the learning requirements report and the training design carried out on the basis of this analysis.
Abstract:
This thesis describes advances in the characterisation, calibration and data processing of optical coherence tomography (OCT) systems. Femtosecond (fs) laser inscription was used for producing OCT-phantoms. Transparent materials are generally inert to infra-red radiation, but with fs lasers, material modification occurs via non-linear processes when the highly focused light source interacts with the material. This modification is confined to the focal volume and is highly reproducible. In order to select the best inscription parameters, combinations of different parameters were tested, using three fs laser systems with different operating properties, on a variety of materials. This facilitated the understanding of the key characteristics of the produced structures, with the aim of producing viable OCT-phantoms. Finally, OCT-phantoms were successfully designed and fabricated in fused silica. The use of these phantoms to characterise many properties (resolution, distortion, sensitivity decay, scan linearity) of an OCT system was demonstrated. Quantitative methods were developed to support the characterisation of an OCT system collecting images from phantoms and also to improve the quality of the OCT images. Characterisation methods include the measurement of the spatially variant resolution (point spread function (PSF) and modulation transfer function (MTF)), sensitivity and distortion. Processing of OCT data is computationally intensive. Standard central processing unit (CPU) based processing might take several minutes to a few hours to process acquired data, so data processing is a significant bottleneck. An alternative choice is to use expensive hardware-based processing such as field programmable gate arrays (FPGAs). However, recently graphics processing unit (GPU) based data processing methods have been developed to minimise this data processing and rendering time.
These processing techniques include standard-processing methods, which comprise a set of algorithms to process the raw data (interference) obtained by the detector and generate A-scans. The work presented here describes accelerated data processing and post-processing techniques for OCT systems. The GPU-based processing developed during the PhD was later implemented in a custom-built Fourier domain optical coherence tomography (FD-OCT) system. This system currently processes and renders data in real time. Processing throughput of this system is currently limited by the camera capture rate. OCT-phantoms have been heavily used for the qualitative characterisation and adjustment/fine-tuning of the operating conditions of the OCT system. Currently, investigations are under way to characterise OCT systems using our phantoms. The work presented in this thesis demonstrates several novel techniques for fabricating OCT-phantoms and accelerating OCT data processing using GPUs. In the process of developing phantoms and quantitative methods, a thorough understanding and practical knowledge of OCT and fs laser processing systems was developed. This understanding led to several novel pieces of research that are not only relevant to OCT but have broader importance. For example, extensive understanding of the properties of fs-inscribed structures will be useful in other photonic applications such as the fabrication of phase masks, waveguides and microfluidic channels. Acceleration of data processing with GPUs is also useful in other fields.
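The standard FD-OCT processing chain the thesis accelerates — background subtraction, windowing, and an inverse FFT of the spectral interferogram to obtain an A-scan — can be sketched as follows. This is an illustrative NumPy version, not the thesis's GPU implementation; all numbers (2048 samples, a reflector at bin 200) are hypothetical, and a GPU port would simply swap the FFT call for a GPU FFT library.

```python
import numpy as np

# Simulate a spectral-domain OCT interferogram: a single reflector at a
# given depth produces a cosine modulation across the wavenumber axis.
n_samples = 2048
k = np.linspace(0, 1, n_samples)           # normalised wavenumber axis
depth_index = 200                          # hypothetical reflector position (bin)
spectrum = 1.0 + 0.5 * np.cos(2 * np.pi * depth_index * k)

# Standard processing: subtract the DC background, apply a window to
# suppress side lobes, then inverse-FFT to obtain the depth profile.
windowed = (spectrum - spectrum.mean()) * np.hanning(n_samples)
a_scan = np.abs(np.fft.ifft(windowed))[: n_samples // 2]

peak = int(np.argmax(a_scan))              # recovered reflector depth (≈ 200)
```

Each A-scan is independent, which is why this pipeline parallelises so well on a GPU: thousands of spectra can be windowed and transformed concurrently.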
Abstract:
Security and reliability of LDPC based public-key cryptosystems are discussed and analysed. We study attacks on the cryptosystem when partial knowledge of one or more of the private key components and/or of the plaintext have been acquired.
Abstract:
The focus of this study is the development of a parallelised version of severely sequential and iterative numerical algorithms on a multi-threaded parallel platform such as a graphics processing unit. This requires the design and development of a platform-specific numerical solution that can benefit from the parallel capabilities of the chosen platform. A graphics processing unit was chosen as the parallel platform for the design and development of a numerical solution for a specific physical model in non-linear optics. This problem appears in describing ultra-short pulse propagation in bulk transparent media, which has recently been the subject of several theoretical and numerical studies. The mathematical model describing this phenomenon is a challenging and complex problem, and its numerical modelling is limited on current workstations. Numerical modelling of this problem requires the parallelisation of essentially serial algorithms and the elimination of numerical bottlenecks. The main challenge to overcome is the parallelisation of the globally non-local mathematical model. This thesis presents a numerical solution that eliminates the numerical bottleneck associated with the non-local nature of the mathematical model. The accuracy and performance of the parallel code are verified by back-to-back testing against a similar serial version.
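The two ideas in this abstract — recasting a globally non-local term into a data-parallel form, and back-to-back testing of the parallel code against a serial reference — can be sketched generically. This is not the thesis's pulse-propagation model; it only illustrates the pattern, assuming the non-local term is a convolution-type sum (a common case), evaluated serially in O(N²) and in parallel-friendly O(N log N) form via the FFT.

```python
import numpy as np

def nonlocal_serial(field, kernel):
    # Severely sequential reference version: an O(N^2) double loop over
    # all pairs, computing a circular convolution term by term.
    n = len(field)
    out = np.zeros(n)
    for i in range(n):
        for j in range(n):
            out[i] += kernel[(i - j) % n] * field[j]
    return out

def nonlocal_parallel(field, kernel):
    # Same circular convolution via the FFT: every spectral mode is
    # independent, so this maps directly onto data-parallel hardware.
    return np.real(np.fft.ifft(np.fft.fft(kernel) * np.fft.fft(field)))

# Back-to-back test: the parallelised version must reproduce the serial one.
rng = np.random.default_rng(0)
f, g = rng.standard_normal(64), rng.standard_normal(64)
max_err = np.max(np.abs(nonlocal_serial(f, g) - nonlocal_parallel(f, g)))
```

The back-to-back comparison (`max_err` at round-off level) is exactly the validation strategy the abstract describes: the serial code defines correctness, the parallel code is accepted only when it matches.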
Abstract:
The security and reliability of a class of public-key cryptosystems against attacks by unauthorized parties, who had acquired partial knowledge of one or more of the private key components and/or of the message, were discussed. The standard statistical mechanical methods of dealing with diluted spin systems with replica symmetric considerations were analyzed. The dynamical transition which defined decryption success in practical situation was studied. The phase diagrams which showed the dynamical threshold as a function of the partial acquired knowledge of the private key were also presented.
Abstract:
For the last several years, mobile devices and platform security threats, including wireless networking technology, have been top security issues. A departure has occurred from automatic anti-virus software based on traditional PC defence: risk management (authentication and encryption), compliance, and disaster recovery following polymorphic viruses and malware have become the primary activities within many organizations and government services alike. This chapter covers research in Turkey, where e-government officially started in 2008, as a reflection of the current market. It presents the situation in an emerging country and the resistance encountered while engaging with mobile and e-government interfaces. The authors contend that research is needed to understand more precisely the security threats and, most of all, potential solutions for sustainable future intention to use m-government services. Finally, beyond the success or failure of m-government initiatives, the mechanisms related to public administration mobile technical capacity building and security issues are discussed.
Abstract:
The research is concerned with the terminological problems that computer users experience when they try to formulate their knowledge needs and attempt to access information contained in computer manuals or online help systems while building up their knowledge. This is the recognised but unresolved problem of communication between the specialist and the layman. The initial hypothesis was that computer users, through their knowledge of language, have some prior knowledge of the subdomain of computing they are trying to come to terms with, and that language can be a facilitating mechanism, or an obstacle, in the development of that knowledge. Related to this is the supposition that users have a conceptual apparatus based on both theoretical knowledge and experience of the world, and of several domains of special reference related to the environment in which they operate. The theoretical argument was developed by exploring the relationship between knowledge and language, and considering the efficacy of terms as agents of special subject knowledge representation. Having charted in a systematic way the territory of knowledge sources and types, we were able to establish that there are many aspects of knowledge which cannot be represented by terms. This submission is important, as it leads to the realisation that significant elements of knowledge are being disregarded in retrieval systems because they are normally expressed by language elements which do not enjoy the status of terms. Furthermore, we introduced the notion of ‘linguistic ease of retrieval’ as a challenge to more conventional thinking which focuses on retrieval results.
Abstract:
This paper addresses the security of a specific class of common watermarking methods based on dither modulation-quantisation index modulation (DM-QIM), focusing on watermark-only attacks (WOA). The vulnerabilities of, and probable attacks on, lattice-structure-based watermark embedding methods have been presented in the literature. DM-QIM is one of the best-known lattice-structure-based watermarking techniques. In this paper, the authors discuss a watermark-only attack scenario (the attacker has access to a single watermarked content only). In the literature it is assumed that DM-QIM methods are secure against WOA. However, the authors show that the DM-QIM based embedding method is vulnerable to a guided key-guessing attack that exploits subtle statistical regularities in the feature-space embeddings for time series and images. Using a distribution-free algorithm, this paper presents an analysis of the attack and numerical results for multiple examples of image and time series data.
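The DM-QIM embedding the paper attacks can be sketched in scalar form: the secret dither shifts a quantisation lattice, and each message bit selects one of two cosets offset by half a quantisation step. This is a minimal illustrative sketch, not the paper's specific feature-space embedding; the key value and step size below are hypothetical.

```python
import numpy as np

def qim_embed(features, bits, delta, key):
    # Dither-modulation QIM: quantise each feature onto a lattice shifted
    # by a key-dependent dither; the message bit selects one of two cosets
    # separated by delta/2.
    dither = key + bits * (delta / 2.0)
    return np.round((features - dither) / delta) * delta + dither

def qim_extract(features, delta, key):
    # Decode by choosing the coset (bit 0 or 1) whose lattice is nearest.
    d0 = np.abs(features - (np.round((features - key) / delta) * delta + key))
    k1 = key + delta / 2.0
    d1 = np.abs(features - (np.round((features - k1) / delta) * delta + k1))
    return (d1 < d0).astype(int)

rng = np.random.default_rng(1)
x = rng.standard_normal(8)                 # hypothetical feature vector
bits = rng.integers(0, 2, 8)               # message bits to embed
key = 0.13                                 # hypothetical secret dither value
wm = qim_embed(x, bits, delta=0.5, key=key)
recovered = qim_extract(wm, delta=0.5, key=key)
```

The security issue the paper exposes follows from the geometry: every watermarked value lies exactly on one of two key-shifted lattices, so statistical regularities in many observed values can guide an attacker's search for the dither.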
Abstract:
Increasingly, users are seen as the weak link in the chain when it comes to the security of corporate information. Should the users of computer systems act in any inappropriate or insecure manner, then they may put their employers in danger of financial losses, information degradation or litigation, and themselves in danger of dismissal or prosecution. This is a particularly important concern for knowledge-intensive organisations, such as universities, as the effective conduct of their core teaching and research activities is becoming ever more reliant on the availability, integrity and accuracy of computer-based information resources. One increasingly important mechanism for reducing the occurrence of inappropriate behaviours, and in so doing, protecting corporate information, is through the formulation and application of a formal ‘acceptable use policy’ (AUP). Whilst the AUP has attracted some academic interest, it has tended to be prescriptive and overly focussed on the role of the Internet, and there is relatively little empirical material that explicitly addresses the purpose, positioning or content of real acceptable use policies. The broad aim of the study, reported in this paper, is to fill this gap in the literature by critically examining the structure and composition of a sample of authentic policies – taken from the higher education sector – rather than simply making general prescriptions about what they ought to contain. There are two important conclusions to be drawn from this study: (1) the primary role of the AUP appears to be as a mechanism for dealing with unacceptable behaviour, rather than proactively promoting desirable and effective security behaviours, and (2) the wide variation found in the coverage and positioning of the reviewed policies is unlikely to be fostering a coherent approach to security management across the higher education sector.
Abstract:
Ensuring the security of corporate information, which is increasingly stored, processed and disseminated using information and communications technologies (ICTs), has become an extremely complex and challenging activity. This is a particularly important concern for knowledge-intensive organisations, such as universities, as the effective conduct of their core teaching and research activities is becoming ever more reliant on the availability, integrity and accuracy of computer-based information resources. One increasingly important mechanism for reducing the occurrence of security breaches, and in so doing, protecting corporate information, is through the formulation and application of a formal information security policy (InSPy). Whilst a great deal has now been written about the importance and role of the information security policy, and approaches to its formulation and dissemination, there is relatively little empirical material that explicitly addresses the structure or content of security policies. The broad aim of the study, reported in this paper, is to fill this gap in the literature by critically examining the structure and content of authentic information security policies, rather than simply making general prescriptions about what they ought to contain. Having established the structure and key features of the reviewed policies, the paper critically explores the underlying conceptualisation of information security embedded in the policies. There are two important conclusions to be drawn from this study: (1) the wide diversity of disparate policies and standards in use is unlikely to foster a coherent approach to security management; and (2) the range of specific issues explicitly covered in university policies is surprisingly low, and reflects a highly techno-centric view of information security management.
Abstract:
The potential benefits of implementing Component-Based Development (CBD) methodologies in a globally distributed environment are many. Lessons from the aeronautics, automotive, electronics and computer hardware industries, in which Component-Based (CB) architectures have been successfully employed for setting up globally distributed design and production activities, have consistently shown that firms have managed to increase the rate of reused components and sub-assemblies, and to speed up the design and production process of new products.
Abstract:
This paper proposes a methodological scheme for the photovoltaic (PV) simulator design. With the advantages of a digital controller system, linear interpolation is proposed for precise fitting with higher computational efficiency. A novel control strategy that directly tackles two different duty cycles is proposed and implemented to achieve a full-range operation including short circuit (SC) and open circuit (OC) conditions. Systematic design procedures for both hardware and algorithm are explained, and a prototype is built. Experimental results confirm an accurate steady state performance under different load conditions, including SC and OC. This low power apparatus can be adopted for PV education and research with a limited budget.
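The linear-interpolation fitting the paper proposes for the I-V reference curve can be sketched as follows. This is an illustrative version with made-up table values, not the paper's design: a digital controller stores a sampled I-V characteristic and interpolates between points, which is computationally cheap yet covers the full range from short circuit (V = 0, I = Isc) to open circuit (I = 0, V = Voc).

```python
import numpy as np

# Hypothetical sampled I-V curve of a PV panel (illustrative values only).
v_table = np.array([0.0, 5.0, 10.0, 15.0, 18.0, 20.0])   # terminal voltage, V
i_table = np.array([3.0, 2.98, 2.9, 2.5, 1.2, 0.0])      # output current, A

def pv_current(v):
    """Reference current for a commanded terminal voltage, by linear
    interpolation between the stored I-V samples."""
    return float(np.interp(v, v_table, i_table))

i_sc = pv_current(0.0)     # short-circuit point: full Isc
i_oc = pv_current(20.0)    # open-circuit point: zero current
i_mid = pv_current(12.5)   # interpolated between the 10 V and 15 V samples
```

The controller then drives the converter so that the measured operating point tracks this reference curve; a denser table trades memory for fitting accuracy.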
Abstract:
With the advent of GPS enabled smartphones, an increasing number of users is actively sharing their location through a variety of applications and services. Along with the continuing growth of Location-Based Social Networks (LBSNs), security experts have increasingly warned the public of the dangers of exposing sensitive information such as personal location data. Most importantly, in addition to the geographical coordinates of the user’s location, LBSNs allow easy access to an additional set of characteristics of that location, such as the venue type or popularity. In this paper, we investigate the role of location semantics in the identification of LBSN users. We simulate a scenario in which the attacker’s goal is to reveal the identity of a set of LBSN users by observing their check-in activity. We then propose to answer the following question: what are the types of venues that a malicious user has to monitor to maximize the probability of success? Conversely, when should a user decide whether to make his/her check-in to a location public or not? We perform our study on more than 1 million check-ins distributed over 17 urban regions of the United States. Our analysis shows that different types of venues display different discriminative power in terms of user identity, with most of the venues in the “Residence” category providing the highest re-identification success across the urban regions. Interestingly, we also find that users with a high entropy of their check-ins distribution are not necessarily the hardest to identify, suggesting that it is the collective behaviour of the users’ population that determines the complexity of the identification task, rather than the individual behaviour.
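The entropy measure discussed in the final finding — that a user with a high-entropy check-in distribution is not necessarily harder to identify — is the standard Shannon entropy over the user's venue histogram. A minimal sketch, with hypothetical check-in histories:

```python
import math
from collections import Counter

def checkin_entropy(venues):
    # Shannon entropy (in bits) of a user's check-in distribution:
    # higher values mean check-ins are spread more evenly across venues.
    counts = Counter(venues)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Hypothetical check-in histories:
focused = ["home"] * 9 + ["office"]             # concentrated: low entropy
diverse = ["home", "office", "gym", "cafe"]     # spread out: high entropy

e_focused = checkin_entropy(focused)
e_diverse = checkin_entropy(diverse)            # 2.0 bits (4 equally likely venues)
```

The paper's point is that this per-user quantity is a poor predictor of re-identification difficulty: what matters is how distinctive a user's venues are relative to the whole population's behaviour, not how spread out they are.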
Abstract:
Purpose—This article considers North Korea and the notion of crisis, by linking historical developments on the Korean peninsula to the conflict resolution literature, and investigates why, despite a large number of destabilizing events, a war involving Pyongyang has yet to erupt. Design/methodology—This article uses historical data and a framework developed by Aggarwal et al. in order to highlight patterns of interaction between states such as the United States, North Korea and South Korea, organizations such as the United Nations, as well as processes such as the Six-Party Talks and the Agreed Framework. The article then develops a crisis framework based on the conflict resolution and negotiation literature, and applies it to three North Korean administrations. Findings—Findings suggest that an open-ended understanding of time (for all parties involved on the peninsula) makes it impossible to reach a threshold at which full-scale war would be triggered, thus leaving parties in a stable state of crisis for which escalating moves and de-escalating techniques may become irrelevant. Practical implications—It is hoped that this article will help further endeavors linking conflict resolution theoretical frameworks to the Korean peninsula security situation. In the case of the Korean peninsula, time has been understood as open-ended, leading parties to a lingering state of heightened hostilities that oscillates toward war, but that is controlled enough not to reach it. In-depth analysis of particular security sectors such as nuclear energy, food security, or missile testing would prove particularly useful in understanding the complexity of the Korean peninsula situation to a greater extent.
Originality/value—This research suggests that, regarding the Korean peninsula, time has been understood as open-ended, leading parties to a lingering state of heightened hostilities.