940 results for security requirement engineering
Abstract:
Knowledge Systems Institute Graduate School
Abstract:
The exchange of information between the police and community partners forms a central aspect of effective community service provision. In the context of policing, a robust and timely communications mechanism is required between police agencies and community partner domains, including: primary healthcare (such as a Family Physician or General Practitioner); secondary healthcare (such as hospitals); Social Services; Education; and Fire and Rescue services. Investigations into high-profile cases such as the Victoria Climbié murder in 2000, the murders of Holly Wells and Jessica Chapman in 2002 and, more recently, the death of baby Peter Connelly through child abuse in 2007 highlight the requirement for a robust information-sharing framework. This paper presents a novel syntax that supports information-sharing requests within strict data-sharing policy definitions. Such requests may form the basis of any information-sharing agreement between the police and their community partners. The paper defines a role-based architecture across partner domains, with a syntax for effective and efficient information sharing, using SPoC (Single Point-of-Contact) agents to control information exchange. The application of policy definitions using rules within these SPoCs is inspired by network firewall rules; the rules define information-exchange permissions and can be implemented by software filtering agents that act as information gateways between partner domains. Roles are exposed from each domain to grant the rights to exchange information as defined within the policy definition. This work involves collaboration with the Scottish Police, as part of the Scottish Institute for Policing Research (SIPR), and aims to improve the safety of individuals by reducing risks to the community through enhanced information-sharing mechanisms.
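As a rough illustration of how firewall-inspired policy rules might drive a SPoC filtering agent, the following Python sketch matches information-sharing requests against an ordered rule table with a default-deny fallback. The rule fields, role names, and domain names are hypothetical, not taken from the paper.

```python
# Minimal sketch of a SPoC-style filtering agent, assuming hypothetical
# firewall-inspired rules of the form (source domain, destination domain,
# requesting role, information type) -> permit/deny.
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    src_domain: str      # e.g. "police"
    dst_domain: str      # e.g. "education"
    role: str            # role exposed by the partner domain
    info_type: str       # category of information requested
    permit: bool

@dataclass(frozen=True)
class Request:
    src_domain: str
    dst_domain: str
    role: str
    info_type: str

class SPoCAgent:
    """Information gateway applying policy rules in order; default deny."""
    def __init__(self, rules):
        self.rules = rules

    def filter(self, req: Request) -> bool:
        for r in self.rules:
            if (r.src_domain == req.src_domain and
                    r.dst_domain == req.dst_domain and
                    r.role == req.role and
                    r.info_type == req.info_type):
                return r.permit
        return False  # no matching rule: deny, as a firewall would

rules = [Rule("police", "education", "child_protection_officer",
              "attendance_record", True)]
agent = SPoCAgent(rules)
print(agent.filter(Request("police", "education",
                           "child_protection_officer", "attendance_record")))
```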
Abstract:
The proposed research will focus on developing a novel approach to solving software service evolution problems in computing clouds. The approach will support dynamic evolution of software services in clouds via a set of discovered evolution patterns. An initial survey indicated that no such approach yet exists and that one is urgently needed. Evolution requirements can be classified into evolution features; researchers can describe a whole requirement using an evolution feature typology, which defines the relations and dependencies between features. After the evolution feature typology has been constructed, an evolution model will be created to make the evolution more specific. An aspect-oriented approach can be used to enhance the modularity of the evolution feature model, and aspect template code generation will be used for model transformation. Product line engineering contains all the essential components for driving the whole evolution process.
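The abstract does not fix a concrete notation for the evolution feature typology. The sketch below is one hypothetical reading: features are nodes in a dependency graph, and a topological order gives a sequence in which evolution features could be applied; all feature names are invented.

```python
# Hypothetical sketch of an evolution feature typology as a dependency
# graph; feature names are invented for illustration.
from graphlib import TopologicalSorter  # Python 3.9+

# feature -> set of features it depends on
typology = {
    "scale_out_service":  {"add_load_balancer"},
    "add_load_balancer":  set(),
    "migrate_data_store": set(),
    "switch_api_version": {"migrate_data_store"},
}

# A valid order in which to apply the evolution features.
order = list(TopologicalSorter(typology).static_order())
print(order)
```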
Abstract:
Existing work in Computer Science and Electronic Engineering demonstrates that Digital Signal Processing techniques can effectively identify the presence of stress in the speech signal. These techniques use datasets containing real stress samples, i.e. real-life stress such as 911 calls. Studies that use simulated or laboratory-induced stress have been less successful and inconsistent. Pervasive, ubiquitous computing is increasingly moving towards voice-activated and voice-controlled systems and devices. Speech recognition and speaker identification algorithms will have to improve and take emotional speech into account. Modelling the influence of stress on speech and voice is of interest to researchers from many different disciplines, including security, telecommunications, psychology, speech science, forensics and Human-Computer Interaction (HCI). The aim of this work is to assess the impact of moderate stress on the speech signal. In order to do this, a dataset of laboratory-induced stress is required. While attempting to build this dataset, it became apparent that reliably inducing measurable stress in a controlled environment, when speech is a requirement, is a challenging task. This work focuses on the use of a variety of stressors to elicit a stress response during tasks that involve speech content. Biosignal analysis (commercial Brain-Computer Interfaces, eye tracking and skin resistance) is used to verify and quantify the stress response, if any. This thesis explains the basis of the author's hypotheses on the elicitation of affectively-toned speech and presents the results of several studies carried out throughout the PhD research period. These results show that the elicitation of stress, particularly the induction of affectively-toned speech, is not a simple matter and that many modulating factors influence the stress response process. A model is proposed to reflect the author's hypothesis on the emotional response pathways relating to the elicitation of stress with a required speech content. Finally, the author provides guidelines and recommendations for future research on speech under stress. Further research paths are identified and a roadmap for future research in this area is defined.
Abstract:
In this work we introduce a new mathematical tool for the optimization of routes, topology design, and energy efficiency in wireless sensor networks. We introduce a vector field formulation that models communication in the network, and routing is performed in the direction of this vector field at every location of the network. The magnitude of the vector field at each location represents the density of the data traffic being relayed through that location. We define the total communication cost in the network as the integral of a quadratic form of the vector field over the network area. With the above formulation, we introduce a mathematical machinery based on partial differential equations very similar to Maxwell's equations in electrostatic theory. We show that in order to minimize the cost, the routes should be found based on the solution of these partial differential equations. In our formulation, the sensors are sources of information, analogous to positive charges in electrostatics; the destinations are sinks of information, analogous to negative charges; and the network is analogous to a non-homogeneous dielectric medium with a variable dielectric constant (or permittivity coefficient). In one application of our vector-field model, we offer a scheme for energy-efficient routing. Our routing scheme is based on raising the permittivity coefficient in the places of the network where nodes have high residual energy, and lowering it where the nodes do not have much energy left. Our simulations show that our method gives a significant increase in network lifetime compared to the shortest-path and weighted shortest-path schemes. Our initial focus is on the case where there is only one destination in the network; later we extend our approach to the case of multiple destinations. With multiple destinations, we need to partition the network into several areas known as the regions of attraction of the destinations. Each destination is responsible for collecting all messages generated in its region of attraction. The difficulty of the optimization problem in this case is how to define the regions of attraction of the destinations and how much communication load to assign to each destination to optimize the performance of the network. We use our vector field model to solve the optimization problem for this case. We define a conservative vector field, which can therefore be written as the gradient of a scalar field (also known as a potential field). We then show that, in the optimal assignment of the communication load of the network to the destinations, the value of that potential field should be equal at the locations of all the destinations. Another application of our vector field model is finding the optimal locations of the destinations in the network. We show that the vector field gives the gradient of the cost function with respect to the locations of the destinations. Based on this fact, we suggest an algorithm to be applied during the design phase of a network to relocate the destinations so as to reduce the communication cost function. The performance of our proposed schemes is confirmed by several examples and simulation experiments.
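As a toy illustration of the electrostatic analogy (not the authors' code), the following Python sketch solves a discrete Poisson equation on a grid with one information source and one sink, and takes the negative gradient of the resulting potential as the routing direction field. The grid size, charge placement, and uniform permittivity are all simplifying assumptions.

```python
# Toy sketch of the electrostatics analogy: solve a discrete Poisson
# equation with a source (+) and a sink (-), then route along -grad(phi).
# Uniform permittivity and grid parameters are illustrative assumptions.
import numpy as np

N = 64
rho = np.zeros((N, N))
rho[16, 16] = +1.0   # sensor: source of information
rho[48, 48] = -1.0   # destination: sink of information

phi = np.zeros((N, N))
for _ in range(5000):  # Jacobi iterations for laplacian(phi) = -rho
    phi[1:-1, 1:-1] = 0.25 * (phi[:-2, 1:-1] + phi[2:, 1:-1] +
                              phi[1:-1, :-2] + phi[1:-1, 2:] +
                              rho[1:-1, 1:-1])

# Routing direction at each point: follow -grad(phi), the analogue of
# the electric field pointing from the source towards the sink.
gy, gx = np.gradient(phi)
field_x, field_y = -gx, -gy
print(field_x[17, 17], field_y[17, 17])  # direction near the source
```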
In another part of this work we focus on the notions of responsiveness and conformance of TCP traffic in communication networks. We introduce the notion of responsiveness for TCP aggregates and define it as the degree to which a TCP aggregate reduces its sending rate to the network as a response to packet drops. We define metrics that describe the responsiveness of TCP aggregates, and suggest two methods for determining the values of these quantities. The first method is based on a test in which we intentionally drop a few packets from the aggregate and measure the resulting rate decrease of that aggregate. This kind of test is not robust to multiple simultaneous tests performed at different routers. We make the test robust to multiple simultaneous tests by using ideas from the CDMA approach to multiple-access channels in communication theory. Based on this approach, we introduce a test of responsiveness for aggregates, which we call the CDMA-based Aggregate Perturbation Method (CAPM). We use CAPM to perform congestion control. A distinguishing feature of our congestion control scheme is that it maintains a degree of fairness among different aggregates. In the next step we modify CAPM to offer methods for estimating the proportion of an aggregate of TCP traffic that does not conform to protocol specifications, and hence may belong to a DDoS attack. Our methods work by intentionally perturbing the aggregate, dropping a very small number of packets from it and observing the response of the aggregate. We offer two methods for conformance testing. In the first method, we apply the perturbation tests to SYN packets sent at the start of the TCP 3-way handshake, and use the fact that the rate of ACK packets exchanged in the handshake should follow the rate of perturbations. In the second method, we apply the perturbation tests to the TCP data packets and use the fact that the rate of retransmitted data packets should follow the rate of perturbations. In both methods, we use signature-based perturbations, meaning that packet drops are performed at a rate given by a function of time. We exploit the analogy between our problem and multiple-access communication to find suitable signatures. Specifically, we assign orthogonal CDMA-based signatures to different routers in a distributed implementation of our methods. As a result of orthogonality, performance does not degrade due to cross-interference between simultaneously testing routers. We have shown the efficacy of our methods through mathematical analysis and extensive simulation experiments.
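A minimal sketch of the orthogonal-signature idea, under strong simplifying assumptions: a toy linear model of the aggregate's rate response, Walsh-Hadamard rows as signatures, and invented numbers throughout.

```python
# Toy sketch of CAPM-style orthogonal perturbation signatures.
# Assumes a linear toy model: the aggregate's rate drop at time t is
# sensitivity * (total drop-rate perturbation at t) plus noise.
import numpy as np
from scipy.linalg import hadamard

T = 64                       # length of the test window (time slots)
H = hadamard(T)              # rows are mutually orthogonal +/-1 signatures
sig_a, sig_b = H[1], H[2]    # signatures of two routers testing at once

rng = np.random.default_rng(0)
sensitivity = 3.0            # responsiveness of the aggregate (unknown)
# Observed rate change when both routers perturb simultaneously:
response = sensitivity * (sig_a + sig_b) + rng.normal(0, 0.5, T)

# Each router correlates the observed response with its own signature;
# orthogonality cancels the other router's contribution.
est_a = response @ sig_a / T
est_b = response @ sig_b / T
print(est_a, est_b)          # both close to the true sensitivity of 3.0
```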
Abstract:
A novel wireless local area network (WLAN) security processor is described in this paper. It is designed to offload security encapsulation processing from the host microprocessor in an IEEE 802.11i-compliant medium access control layer to a programmable hardware accelerator. The unique design, which comprises dedicated cryptographic instructions and hardware coprocessors, is capable of performing wired equivalent privacy (WEP), the temporal key integrity protocol (TKIP), the counter mode with cipher block chaining message authentication code protocol (CCMP), and the wireless robust authentication protocol (WRAP). Existing solutions to wireless security have been implemented on hardware devices and target specific WLAN protocols, whereas the programmable security processor proposed in this paper provides support for all WLAN protocols and thus can offer backwards compatibility as well as future upgradability as standards evolve. It provides this additional functionality while still achieving throughput rates equivalent to existing architectures. © 2006 IEEE.
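The paper describes a hardware accelerator, but the CCMP encapsulation it supports is built on AES in CCM mode, which is easy to illustrate in software. The sketch below uses Python's `cryptography` package; the key, nonce, and header bytes are invented, and this is not the paper's design.

```python
# Software illustration of CCMP-style encapsulation (AES in CCM mode);
# this is not the paper's hardware design, just the underlying primitive.
# Key, nonce, and header values are invented for the example.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESCCM

key = AESCCM.generate_key(bit_length=128)   # CCMP uses AES-128
aesccm = AESCCM(key, tag_length=8)          # CCMP uses an 8-byte MIC
nonce = os.urandom(13)                      # CCMP nonces are 13 bytes

plaintext = b"802.11 MSDU payload"
aad = b"MAC header fields covered by the MIC"  # additional auth data

ciphertext = aesccm.encrypt(nonce, plaintext, aad)
assert aesccm.decrypt(nonce, ciphertext, aad) == plaintext
```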
Abstract:
A system capable of deployment as a microwave security sensor which can automatically reject background clutter is presented. The principle of operation is based on analog homodyne detection using I/Q single-sideband down-conversion of an AM backscattered modulating signal envelope. A demonstrator is presented which operates with a carrier frequency of 2 GHz and a 500 kHz backscattered signal. When deployed in a multipath-rich open-plan office environment, the S/N ratio obtained at the detection output was better than 20 dB at 20 m range with 20 dBm EIRP in a 2 MHz detection bandwidth, despite the presence of time-varying and static clutter. © 2009 Wiley Periodicals, Inc. Microwave Opt Technol Lett 51: 2492-2495, 2009; published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/mop.24636
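As a rough numerical illustration of homodyne I/Q down-conversion and envelope recovery (not the published design; the frequencies are scaled well below the paper's 2 GHz carrier and 500 kHz backscatter so the simulation stays small):

```python
# Toy simulation of homodyne I/Q down-conversion of an AM backscattered
# tone. All parameter values are scaled-down assumptions.
import numpy as np

fs = 1_000_000          # sample rate (Hz)
fc = 100_000            # "carrier" (scaled stand-in for 2 GHz)
fm = 5_000              # backscatter modulation (stand-in for 500 kHz)
t = np.arange(0, 0.01, 1 / fs)

# AM backscattered signal with an arbitrary propagation phase of 0.7 rad.
rx = (1 + 0.5 * np.cos(2 * np.pi * fm * t)) * np.cos(2 * np.pi * fc * t + 0.7)

# Mix with quadrature local oscillators at the carrier frequency.
i = rx * np.cos(2 * np.pi * fc * t)
q = -rx * np.sin(2 * np.pi * fc * t)

# Crude low-pass filter (moving average) to remove the 2*fc terms.
k = 25
lp = np.ones(k) / k
i_bb = np.convolve(i, lp, mode="same")
q_bb = np.convolve(q, lp, mode="same")

# The I/Q magnitude recovers the AM envelope regardless of carrier phase.
env = 2 * np.sqrt(i_bb**2 + q_bb**2)
print(env[k:-k].min(), env[k:-k].max())   # approximately 0.5 .. 1.5
```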
Abstract:
The identification and classification of network traffic and protocols is a vital step in many quality-of-service and security systems. Traffic classification strategies must evolve, alongside the protocols utilising the Internet, to overcome the use of ephemeral or masquerading port numbers and transport-layer encryption. This research expands the concept of using machine learning on the initial statistics of a flow of packets to determine its underlying protocol. Recognising the need for efficient training/retraining of a classifier and the requirement for fast classification, the authors investigate a new application of k-means clustering referred to as 'two-way' classification. The 'two-way' classification uniquely analyses a bidirectional flow as two unidirectional flows and is shown, through experiments on real network traffic, to improve classification accuracy by as much as 18% when measured against similar proposals. It achieves this accuracy while generating fewer clusters, that is, fewer comparisons are needed to classify a flow. 'Two-way' classification offers a new way to improve the accuracy and efficiency of machine-learning statistical classifiers while still maintaining the fast training times associated with k-means.
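A minimal sketch of the 'two-way' idea, assuming synthetic flow statistics and scikit-learn's k-means; the features, protocols, and numbers are invented, not those of the study.

```python
# Sketch of 'two-way' classification: treat each bidirectional flow as
# two unidirectional flows, cluster them with k-means, and label clusters
# by majority protocol. Features and data are synthetic illustrations.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

def make_flows(n, mean_len, mean_iat, label):
    # initial-packet statistics per unidirectional flow:
    # [mean packet length, mean inter-arrival time]
    lens = rng.normal(mean_len, 30, (n, 1))
    iats = rng.normal(mean_iat, 0.5, (n, 1))
    return np.hstack([lens, iats]), [label] * n

fwd_http, y1 = make_flows(100, 900, 2.0, "http")   # e.g. server->client
rev_http, y2 = make_flows(100, 80, 2.0, "http")    # e.g. client->server
fwd_ssh, y3 = make_flows(100, 120, 8.0, "ssh")
rev_ssh, y4 = make_flows(100, 100, 8.0, "ssh")

X = np.vstack([fwd_http, rev_http, fwd_ssh, rev_ssh])
y = np.array(y1 + y2 + y3 + y4)

km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)

# Label each cluster by the majority protocol of its members.
cluster_label = {}
for c in range(4):
    vals, counts = np.unique(y[km.labels_ == c], return_counts=True)
    cluster_label[c] = vals[np.argmax(counts)]

# Classify a new bidirectional flow from its two unidirectional halves.
fwd, rev = np.array([[880.0, 2.1]]), np.array([[85.0, 1.9]])
votes = [cluster_label[km.predict(fwd)[0]], cluster_label[km.predict(rev)[0]]]
print(votes)  # agreement (or a tie-break rule) gives the flow's protocol
```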
Abstract:
This paper investigates the application of complex wavelet transforms to the field of digital data hiding. Complex wavelets offer improved directional selectivity and shift invariance over their discretely sampled counterparts, allowing for better adaptation of watermark distortions to the host media. Two methods of deriving visual models for the watermarking system are adapted to the complex wavelet transforms and their performances are compared. To improve capacity, a spread-transform embedding algorithm is devised; this combines the robustness of spread-spectrum methods with the high capacity of quantization-based methods. Using established information-theoretic methods, limits of watermark capacity are derived that demonstrate the superiority of complex wavelets over discretely sampled wavelets. Finally, results for the algorithm against commonly used attacks demonstrate its robustness and the improved performance offered by complex wavelet transforms.
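A minimal numpy sketch of spread-transform embedding via dither modulation on a generic coefficient vector; the step size, vector length, and spreading vector are illustrative assumptions rather than the paper's parameters.

```python
# Sketch of spread-transform dither modulation (ST-DM): project host
# coefficients onto a secret spreading direction, then quantize the
# projection to one of two dithered lattices to embed a single bit.
import numpy as np

rng = np.random.default_rng(42)
n, delta = 64, 4.0
p = rng.standard_normal(n)
p /= np.linalg.norm(p)                 # unit-norm spreading vector (key)
x = rng.standard_normal(n) * 10        # stand-in for host wavelet coeffs

def embed(x, bit):
    u = x @ p                          # spread transform: scalar projection
    d = 0.0 if bit == 0 else delta / 2 # dither selects the bit's lattice
    q = np.round((u - d) / delta) * delta + d
    return x + (q - u) * p             # distort the host only along p

def detect(y):
    u = y @ p
    d0 = np.abs(u - np.round(u / delta) * delta)
    d1 = np.abs(u - (np.round((u - delta / 2) / delta) * delta + delta / 2))
    return 0 if d0 <= d1 else 1

y = embed(x, 1)
print(detect(y))                       # -> 1
print(detect(y + rng.normal(0, 0.3, n)))  # still 1 under mild noise
```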
Abstract:
A dynamic, global, security-aware synthesis flow using the SystemC language is presented. SystemC security models are first specified at the system or behavioural level using a library of SystemC behavioural descriptions, which provides for the reuse and extension of security modules. At the core of the system is a global security-aware scheduling algorithm which allows scheduling to a mixture of components of varying security levels. The output from the scheduler is translated into annotated nets, which are subsequently passed to allocation, optimisation and mapping tools for mapping into circuits. The synthesised circuits incorporate asynchronous, secure, power-balanced and fault-protected components. Results show that the approach offers robust implementations and efficient security/area trade-offs, leading to significant improvements in turnover.
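The abstract does not detail the scheduling algorithm. As one hypothetical illustration of 'scheduling to a mixture of components of varying security level', this Python sketch greedily assigns each operation to the earliest-free component whose security level is sufficient; all levels, latencies, and names are invented.

```python
# Hypothetical sketch of security-aware list scheduling: each operation
# needs a component whose security level is at least its own, and is
# placed on the earliest-available qualifying component.

components = [                       # name, security level, free-at time
    {"name": "alu_plain",    "level": 0, "free_at": 0},
    {"name": "alu_balanced", "level": 2, "free_at": 0},  # power-balanced
    {"name": "alu_secure",   "level": 3, "free_at": 0},  # fault-protected
]

ops = [                              # (name, required level, latency)
    ("key_mix", 3, 4), ("round_fn", 2, 3), ("addr_inc", 0, 1),
    ("mac_tag", 2, 2), ("loop_ctr", 0, 1),
]

schedule = []
for name, level, latency in sorted(ops, key=lambda o: -o[1]):
    # candidates: components secure enough for this operation
    cands = [c for c in components if c["level"] >= level]
    comp = min(cands, key=lambda c: c["free_at"])  # earliest available
    start = comp["free_at"]
    comp["free_at"] = start + latency
    schedule.append((name, comp["name"], start, start + latency))

for row in sorted(schedule, key=lambda r: r[2]):
    print(row)
```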
Abstract:
The requirement for the use of Virtual Engineering, encompassing the construction of Virtual Prototypes using Multidisciplinary Design Optimisation, for the development of future aerospace platforms and systems is discussed. Some of the activities at the Virtual Engineering Centre, a University of Liverpool initiative, are described, and a number of case studies involving a range of applications of Virtual Engineering are illustrated.
Abstract:
Passive equipment operating in the 30-300 GHz (millimeter-wave) band is compared to that operating in the 300 GHz-3 THz (submillimeter) band. Equipment operating in the submillimeter band can measure distance as well as spectral information, and has been used to address new opportunities in security. Solid-state spectral information is available in the submillimeter region, making it possible to identify materials, whereas in the millimeter region bulk optical properties determine the image contrast. The optical properties in the region from 30 GHz to 3 THz are discussed for some typical inorganic and organic solids. In the millimeter-wave region of the spectrum, obscurants such as poor weather, dust, and smoke can be penetrated and useful imagery generated for surveillance. In the 30 GHz-3 THz region, dielectrics such as plastic and cloth are also transparent, and the detection of contraband hidden under clothing is possible. A passive millimeter-wave imaging concept based on a folded Schmidt camera has been developed and applied to poor-weather navigation and security. The optical design uses a rotating mirror and is folded using polarization techniques. The design is very well corrected over a wide field of view, making it ideal for surveillance and security. This produces a relatively compact imager which minimizes the receiver count.
Abstract:
It is well known that millimetre waves can pass through clothing. In short-range applications such as the scanning of people for security purposes, operating at W band can be an advantage: the size of the equipment is decreased compared to operation at Ka band, while the performance is similar.
In this paper a W band mechanically scanned imager designed for imaging weapons and contraband hidden under clothing is discussed. This imager is based on a modified folded conical-scan technology previously reported. In this design an additional optical element is added to give a Cassegrain configuration in image space. This increases the effective focal length, enables improved sampling of the image, and provides more space for the receivers. The imager is constructed from low-cost materials such as polystyrene, polythene and printed circuit board materials. The trade-off between image spatial resolution and thermal sensitivity is discussed.