924 results for hex meshing schemes
Abstract:
This research used the Queensland Police Service, Australia, as a major case study. Information on the principles, techniques and processes used, and the reasons for the recording, storing and release of audit information for evidentiary purposes, is reported. It is shown that Law Enforcement Agencies have a two-fold interest in, and legal obligation pertaining to, audit trails. The first interest relates to the situation where audit trails are actually used by criminals in the commission of crime, and the second to where audit trails are generated by the information systems used by the police themselves in support of the recording and investigation of crime. Eleven court cases involving Queensland Police Service audit trails used in evidence in Queensland courts were selected for further analysis. It is shown that, of the cases studied, none of the evidence presented was rejected or seriously challenged from a technical perspective. These results were further analysed and related to normal requirements for trusted maintenance of audit trail information in sensitive environments, with discussion on the ability and/or willingness of courts to fully challenge, assess or value audit evidence presented. Managerial and technical frameworks are proposed for, firstly, what constitutes an environment in which a computer system may be considered to be operating “properly” and, secondly, what education, training, qualifications, expertise and the like may be considered appropriate for the persons responsible within that environment. Analysis was undertaken to determine whether audit and control of information in a high security environment, such as law enforcement, could be judged to have improved, or not, in the transition from manual to electronic processes. Information collection, control of processing and audit in the manual processes used by the Queensland Police Service, Australia, in the period 1940 to 1980 were assessed against the current electronic systems introduced to policing essentially in the 1980s and 1990s. Results show that electronic systems do provide faster communications, with centrally controlled and updated information readily available to large numbers of users connected across significant geographical distances. However, it is clearly evident that the price paid for this is a lack of ability and/or a reluctance to provide improved audit and control processes. To compare the information systems audit and control arrangements of the Queensland Police Service with those of other government departments and agencies, an Australia-wide survey was conducted. Results of the survey were contrasted with those of a survey conducted by the Australian Commonwealth Privacy Commission four years previously, which showed that security in relation to the recording of activity against access to information held on Australian government computer systems had been poor and a cause for concern. However, within this four-year period there is evidence to suggest that government organisations have become increasingly inclined to generate audit trails. An attack on the overall security of audit trails in computer operating systems was initiated to further investigate findings reported in relation to the government systems survey. The survey showed that information systems audit trails in Microsoft Corporation's “Windows” operating system environments are relied on quite heavily.
An audit of the security of audit trails generated, stored and managed in the Microsoft “Windows 2000” operating system environment was undertaken and compared and contrasted with similar audit trail schemes in the “UNIX” and “Linux” operating systems. Strength of passwords and exploitation of any security problems in access control were targeted using software tools that are freely available in the public domain. Results showed that such security for the “Windows 2000” system is seriously flawed and that the integrity of audit trails stored within these environments cannot be relied upon. A framework and set of guidelines for use by expert witnesses in the information technology (IT) profession are then proposed. This is achieved by examining the current rules and guidelines related to the provision of expert evidence in a court environment, by analysing the rationale for the separation of distinct disciplines and corresponding bodies of knowledge used by the Medical Profession and Forensic Science, and then by analysing the bodies of knowledge within the discipline of IT itself. It is demonstrated that the accepted processes and procedures relevant to expert witnessing in a court environment are transferable to the IT sector. However, unlike some discipline areas, this analysis has clearly identified two distinct aspects of the matter which appear particularly relevant to IT. These two areas are: expertise gained through the application of IT to information needs in a particular public or private enterprise; and expertise gained through accepted and verifiable education, training and experience in fundamental IT products and systems.
Abstract:
The material presented in this thesis may be viewed as comprising two key parts: the first part concerns batch cryptography specifically, whilst the second deals with how this form of cryptography may be applied to security-related applications, such as electronic cash, to improve the efficiency of the protocols. The objective of batch cryptography is to devise more efficient primitive cryptographic protocols. In general, these primitives make use of some property such as homomorphism to perform a computationally expensive operation on a collective input set. The idea is to amortise an expensive operation, such as modular exponentiation, over the input. Most of the research work in this field has concentrated on its employment as a batch verifier of digital signatures. It is shown that several new attacks may be launched against these published schemes as some weaknesses are exposed. Another common use of batch cryptography is the simultaneous generation of digital signatures. There is significantly less previous work in this area, and the present schemes have limited use in practical applications. Several new batch signature schemes are introduced that improve upon the existing techniques, and some practical uses are illustrated. Electronic cash is a technology that demands complex protocols in order to furnish several security properties. These typically include anonymity, traceability of a double spender, and off-line payment features. Presently, the most efficient schemes make use of coin divisibility to withdraw one large financial amount that may be progressively spent with one or more merchants. Several new cash schemes are introduced here that make use of batch cryptography to improve the withdrawal, payment, and deposit of electronic coins. The devised schemes apply both the batch signature and the batch verification techniques introduced, demonstrating improved performance over the contemporary divisibility-based structures. The solutions also provide an alternative paradigm for the construction of electronic cash systems. Whilst electronic cash is used as the vehicle for demonstrating the relevance of batch cryptography to security-related applications, the applicability of the techniques introduced extends well beyond this.
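As an aside on the batch-verification idea summarised above, the following minimal sketch shows a small-exponents batch test for claimed modular exponentiations; the modulus, base, exponent lengths and security parameter are illustrative assumptions, not parameters from the thesis, whose schemes concern digital signatures specifically.

```python
import secrets

# Toy parameters for illustration only: p = 2^127 - 1 is a Mersenne prime, g an arbitrary base.
p = 2**127 - 1
g = 3

def verify_individually(claims):
    """Check each claimed pair (x_i, y_i) with y_i = g^x_i mod p: n full-size exponentiations."""
    return all(pow(g, x, p) == y % p for x, y in claims)

def verify_batch(claims, sec_bits=32):
    """Small-exponents batch test: one full-size exponentiation plus n short ones.
    A bad batch slips through with probability roughly 2^-sec_bits."""
    r = [secrets.randbits(sec_bits) | 1 for _ in claims]          # random small exponents
    lhs_exp = sum(ri * x for ri, (x, _) in zip(r, claims)) % (p - 1)
    lhs = pow(g, lhs_exp, p)                                      # g^(sum r_i * x_i)
    rhs = 1
    for ri, (_, y) in zip(r, claims):
        rhs = (rhs * pow(y, ri, p)) % p                           # prod y_i^{r_i}
    return lhs == rhs

# Usage: an honest batch passes; a single tampered claim is rejected with overwhelming probability.
claims = [(x, pow(g, x, p)) for x in (secrets.randbits(120) for _ in range(100))]
assert verify_individually(claims) and verify_batch(claims)
bad = [(claims[0][0] + 1, claims[0][1])] + claims[1:]
print("honest batch:", verify_batch(claims), " tampered batch:", verify_batch(bad))
```

The saving comes from replacing n full-size exponentiations with one full-size exponentiation plus n short-exponent ones, which is the amortisation the abstract refers to.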
Abstract:
Literally, the word compliance suggests conformity in fulfilling official requirements. The thesis presents the results of the analysis and design of a class of protocols called compliant cryptologic protocols (CCP). The thesis presents a notion of compliance in cryptosystems that is conducive as a cryptologic goal. CCP are employed in security systems used by at least two mutually mistrusting sets of entities. The individuals in the sets of entities only trust the design of the security system and any trusted third party the security system may include. Such a security system can be thought of as a broker between the mistrusting sets of entities. In order to provide confidence in operation for the mistrusting sets of entities, CCP must provide compliance verification mechanisms. These mechanisms are employed either by all the entities or by a set of authorised entities in the system to verify that the behaviour of the various participating entities complies with the rules of the system. It is often stated that confidentiality, integrity and authentication are the primary interests of cryptology. It is evident from the literature that authentication mechanisms employ confidentiality and integrity services to achieve their goal. Therefore, the fundamental services that any cryptographic algorithm may provide are confidentiality and integrity only. Since controlling the behaviour of the entities is not a feasible cryptologic goal, the verification of the confidentiality of any data is a futile cryptologic exercise. For example, there exists no cryptologic mechanism that would prevent an entity from willingly or unwillingly exposing its private key corresponding to a certified public key. The confidentiality of the data can only be assumed. Therefore, any verification in cryptologic protocols must take the form of integrity verification mechanisms. Thus, compliance verification must take the form of integrity verification in cryptologic protocols. A definition of compliance that is conducive as a cryptologic goal is presented as a guarantee on the confidentiality and integrity services. The definitions are employed to provide a classification mechanism for the various message formats in a cryptologic protocol. The classification assists in the characterisation of protocols, which in turn provides a focus for the goals of the research. The resulting concrete goal of the research is the study of those protocols that employ message formats to provide restricted confidentiality and universal integrity services to selected data. The thesis proposes an informal technique to understand, analyse and synthesise the integrity goals of a protocol system. The thesis contains a study of key recovery, electronic cash, peer-review, electronic auction, and electronic voting protocols. All these protocols contain message formats that provide restricted confidentiality and universal integrity services to selected data. The study of key recovery systems aims to achieve robust key recovery relying only on the certification procedure and without the need for tamper-resistant system modules. The result of this study is a new technique for the design of key recovery systems called hybrid key escrow. The thesis identifies a class of compliant cryptologic protocols called secure selection protocols (SSP). The uniqueness of this class of protocols is the similarity in the goals of the member protocols, namely peer-review, electronic auction and electronic voting.
The problem statement describing the goals of these protocols contains a tuple, (I, D), where I usually refers to the identity of a participant and D usually refers to the data selected by that participant. SSP are concerned with providing a confidentiality service to the tuple, to hide the relationship between I and D, and an integrity service to the tuple after its formation, to prevent its modification. The thesis provides a schema to solve instances of SSP by employing electronic cash technology. The thesis makes a distinction between electronic cash technology and electronic payment technology. It treats electronic cash technology as a certification mechanism that allows participants to obtain a certificate on their public key without revealing the certificate or the public key to the certifier. The thesis abstracts the certificate and the public key as a data structure called an anonymous token. It proposes design schemes for the peer-review, e-auction and e-voting protocols by employing the schema with the anonymous token abstraction. The thesis concludes by providing a variety of problem statements for future research that would further enrich the literature.
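To make the anonymous token abstraction concrete, here is a minimal sketch of a Chaum-style blind RSA certification, in which the certifier signs a value it cannot later link to the participant's token; the toy primes, hash and key size are illustrative assumptions, and the thesis's certification mechanism is not necessarily RSA-based.

```python
import hashlib
import secrets
from math import gcd

# Toy certifier RSA key (illustration only; real systems use large random primes).
p, q = 10007, 10009
n, phi = p * q, (p - 1) * (q - 1)
e = 65537
d = pow(e, -1, phi)

def h(data: bytes) -> int:
    """Hash the token into the RSA message space (toy reduction mod n)."""
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % n

# Participant: blind the token (e.g. a self-generated public key) before certification.
token = b"participant public key / anonymous token"
m = h(token)
while True:
    r = secrets.randbelow(n - 2) + 2
    if gcd(r, n) == 1:
        break
blinded = (m * pow(r, e, n)) % n          # the certifier sees only this value

# Certifier: signs the blinded value without learning the token.
blind_sig = pow(blinded, d, n)

# Participant: unblind to obtain an ordinary signature (the certificate) on the token.
sig = (blind_sig * pow(r, -1, n)) % n

# Anyone can verify the certificate, yet the certifier cannot link it to the signing session.
assert pow(sig, e, n) == m
```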
Abstract:
Various load compensation schemes proposed in the literature assume that the voltage source at the point of common coupling (PCC) is stiff. In practice, however, the load is remote from a distribution substation and is supplied by a feeder. In the presence of feeder impedance, the PWM inverter switchings distort both the PCC voltage and the source currents. In this paper, load compensation with such a non-stiff source is considered. A switching control of the voltage source inverter (VSI) based on state feedback is used for load compensation with a non-stiff source. The design of the state feedback controller requires careful consideration in the choice of gain matrix and in the generation of reference quantities. These aspects are considered in this paper. Detailed simulation and experimental results are given to support the control design.
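As a rough illustration of a state-feedback switching law for a VSI (not the paper's design; the state vector, gains, hysteresis band and dc-link voltage below are hypothetical placeholders):

```python
import numpy as np

# Hypothetical gain matrix and hysteresis band; in the paper these come from the controller
# design and the reference-generation scheme, not from this sketch.
K = np.array([2.5, 0.8, 1.2])   # gains for an assumed state [filter current, PCC voltage, injected current]
H = 0.05                        # hysteresis band around the switching surface
VDC = 400.0                     # dc-link voltage of the VSI

def switching_state(x, x_ref, prev_u):
    """Return the inverter leg voltage (+Vdc/2 or -Vdc/2) from the state-feedback switching
    function s = K (x_ref - x), with a hysteresis band to bound the switching frequency."""
    s = float(K @ (np.asarray(x_ref) - np.asarray(x)))
    if s > H:
        return +VDC / 2
    if s < -H:
        return -VDC / 2
    return prev_u               # inside the band: hold the previous switch state

# Usage: one control step with made-up measurements and reference quantities.
u = switching_state(x=[1.0, 228.0, 4.2], x_ref=[1.1, 230.0, 4.0], prev_u=+VDC / 2)
```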
Abstract:
The flying capacitor multilevel inverter (FCMLI) is a multiple voltage level inverter topology intended for high-power and high-voltage operation at low distortion. It uses capacitors, called flying capacitors, to clamp the voltage across the power semiconductor devices. A method for controlling the FCMLI is proposed which ensures that the flying capacitor voltages remain nearly constant through the preferential charging and discharging of these capacitors. A static synchronous compensator (STATCOM) and a static synchronous series compensator (SSSC) based on five-level flying capacitor inverters are proposed. Control schemes for both flexible alternating current transmission system (FACTS) controllers are developed and verified in terms of voltage control, power flow control, and power oscillation damping when installed in a single-machine infinite bus (SMIB) system. Simulation studies are performed using PSCAD/EMTDC to validate the efficacy of the control scheme and the FCMLI-based FACTS controllers.
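The preferential charging and discharging idea can be seen on a single three-level flying-capacitor leg, where the two redundant switch states producing the middle output level act on the capacitor in opposite ways for a given load-current direction. The sketch below uses this simplified three-level case with placeholder values; the five-level controllers proposed in the paper involve more states and capacitors.

```python
def middle_level_state(v_fc, v_fc_ref, i_out):
    """Pick between the two redundant states (s1, s2) that both give v_out = Vdc/2 in a
    three-level flying-capacitor leg. For positive output current, state (1, 0) charges the
    flying capacitor and (0, 1) discharges it; the roles swap when the current reverses."""
    need_charge = v_fc < v_fc_ref
    if (need_charge and i_out > 0) or (not need_charge and i_out <= 0):
        return (1, 0)
    return (0, 1)

def output_voltage(s1, s2, v_dc, v_fc):
    """Pole voltage of the leg for switch state (s1, s2): v = s1*Vdc - (s1 - s2)*v_fc."""
    return s1 * v_dc - (s1 - s2) * v_fc

# Usage: capacitor below its reference while the load current is positive -> pick (1, 0).
state = middle_level_state(v_fc=190.0, v_fc_ref=200.0, i_out=12.0)   # -> (1, 0)
v_out = output_voltage(*state, v_dc=400.0, v_fc=190.0)               # -> 210.0 V, roughly Vdc/2
```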
Abstract:
In this paper, the commonly used switching schemes for sliding mode control of power converters are analyzed and designed in the frequency domain. The particular application of a distribution static compensator (DSTATCOM) in voltage control mode is investigated in a power distribution system. Tsypkin's method and the describing function are used to obtain the switching conditions for the two-level and three-level voltage source inverters. Magnitude conditions on the carrier signals are developed for robust switching of the inverter under the carrier-based modulation scheme of sliding mode control. The existence of border collision bifurcation is identified in order to avoid the complex switching states of the inverter. The load bus voltage of an unbalanced three-phase non-stiff radial distribution system is controlled using the proposed carrier-based design. The results are validated through PSCAD/EMTDC simulation studies and through a scaled laboratory model of the DSTATCOM that was developed for experimental verification.
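The describing-function step mentioned above can be sketched numerically for an ideal relay driving a hypothetical plant; the filter values, delay and relay amplitude below are placeholders rather than the paper's DSTATCOM design, and Tsypkin's exact method would refine the first-harmonic estimate obtained here.

```python
import numpy as np

# Describing function of an ideal relay of amplitude U (two-level leg): N(A) = 4U/(pi*A).
# Harmonic balance predicts a limit cycle (the switching oscillation) where G(jw)*N(A) = -1,
# i.e. where the phase of G(jw) reaches -180 deg; then A = (4U/pi)*|G(jw)| at that frequency.
U = 200.0                                  # relay amplitude, e.g. +/- Vdc/2 (placeholder)
L, C, R = 2e-3, 50e-6, 5.0                 # hypothetical LC output filter with series resistance
tau = 20e-6                                # hypothetical lag modelling measurement/control delay

def G(w):
    """Plant seen by the relay: LC filter cascaded with a first-order lag (placeholder model)."""
    s = 1j * w
    return 1.0 / ((L * C * s**2 + R * C * s + 1.0) * (tau * s + 1.0))

w = np.logspace(2, 7, 200_000)             # frequency grid in rad/s
phase = np.unwrap(np.angle(G(w)))
k = np.argmin(np.abs(phase + np.pi))       # grid point where the phase crosses -180 deg
w_sw, A_sw = w[k], 4.0 * U / np.pi * np.abs(G(w[k]))
print(f"predicted switching frequency ~ {w_sw / (2 * np.pi):.0f} Hz, first-harmonic amplitude ~ {A_sw:.1f}")
```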
Abstract:
The use of artificial neural networks (ANNs) to identify and control induction machines is proposed. Two systems are presented: a system to adaptively control the stator currents via identification of the electrical dynamics, and a system to adaptively control the rotor speed via identification of the mechanical and current-fed system dynamics. Both systems are inherently adaptive as well as self-commissioning. The current controller is a completely general nonlinear controller which can be used together with any drive algorithm. Various advantages of these control schemes over conventional schemes are cited, and the combined speed and current control scheme is compared with the standard vector control scheme.
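To make the identify-then-control structure concrete, the following deliberately small sketch has a tiny neural network identify a stand-in first-order current model online and then inverts it numerically as a current controller; the plant coefficients, network size, learning rate and voltage range are all hypothetical, and the ANN structures and training used in the paper are considerably richer.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in plant: one stator-current axis approximated by i[k+1] = a*i[k] + b*v[k].
# The coefficients are hypothetical; the paper identifies the real machine's dynamics instead.
a_true, b_true = 0.9, 0.1
def plant(i, v):
    return a_true * i + b_true * v

# Tiny one-hidden-layer identifier: i_hat[k+1] = NN(i[k], v[k]).
W1, b1 = 0.1 * rng.standard_normal((8, 2)), np.zeros(8)
W2, b2 = 0.1 * rng.standard_normal(8), 0.0

def nn(i, v):
    h = np.tanh(W1 @ np.array([i, v]) + b1)
    return W2 @ h + b2, h

def train_step(i, v, i_next, lr=0.05):
    """One SGD step of the series-parallel identification model (squared prediction error)."""
    global W1, b1, W2, b2
    y, h = nn(i, v)
    err = y - i_next
    grad_pre = err * W2 * (1.0 - h**2)          # backprop through the tanh layer
    W2 = W2 - lr * err * h
    b2 = b2 - lr * err
    W1 = W1 - lr * np.outer(grad_pre, np.array([i, v]))
    b1 = b1 - lr * grad_pre

def control(i, i_ref, v_grid=np.linspace(-5.0, 5.0, 201)):
    """Certainty-equivalence current control: pick the voltage whose predicted next current
    is closest to the reference (crude numeric inversion of the identified model)."""
    preds = np.array([nn(i, v)[0] for v in v_grid])
    return v_grid[np.argmin(np.abs(preds - i_ref))]

# Identification phase: excite the plant with random voltages and adapt the network online.
i = 0.0
for _ in range(5000):
    v = rng.uniform(-5.0, 5.0)
    i_next = plant(i, v)
    train_step(i, v, i_next)
    i = i_next

# Control phase: track a current reference using the identified model in the loop.
i, i_ref = 0.0, 1.0
for _ in range(20):
    i = plant(i, control(i, i_ref))
print(f"tracked current = {i:.3f} (reference {i_ref})")
```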
Abstract:
This paper proposes the use of artificial neural networks (ANNs) to identify and control an induction machine. Two systems are presented: a system to adaptively control the stator currents via identification of the electrical dynamics; and a system to adaptively control the rotor speed via identification of the mechanical and current-fed system dynamics. Various advantages of these control schemes over other conventional schemes are cited, and the performance of the combined speed and current control scheme is compared with that of the standard vector control scheme.
Abstract:
In rural low-voltage networks, distribution lines are usually highly resistive. When many distributed generators are connected to such lines, power sharing among them is difficult using conventional droop control, as the real and reactive power are strongly coupled with each other. A high droop gain can alleviate this problem but may drive the system to instability. To overcome this, two droop control methods are proposed for accurate load sharing with a frequency droop controller. The first method requires no communication among the distributed generators and regulates the output voltage and frequency while ensuring acceptable load sharing. For this purpose, the droop equations are modified with a transformation matrix based on the line R/X ratio. The second proposed method, requiring minimal low-bandwidth communication, modifies the reference frequency of the distributed generators based on the active and reactive power flows in the lines connected to the points of common coupling. The performance of these two proposed controllers is compared, through time-domain simulation of a test system, with that of a controller that relies on an expensive high-bandwidth communication system. The magnitudes of the power-sharing errors for these three droop control schemes are evaluated and tabulated.
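One common way to realise the first method's transformation, sketched below, is to rotate the measured powers through the line impedance angle before applying conventional droop; the impedance, gains and setpoints are placeholders, and the paper's exact matrix and gain selection may differ.

```python
import numpy as np

# Hypothetical feeder impedance (highly resistive, as in rural LV networks) and droop settings.
R_line, X_line = 0.8, 0.2                      # ohm (placeholder lumped values)
f0, V0 = 50.0, 230.0                           # nominal frequency (Hz) and voltage (V)
m_p, n_q = 0.001, 0.002                        # droop gains (placeholder)

Z = np.hypot(R_line, X_line)
T = np.array([[X_line / Z, -R_line / Z],       # rotation by the line impedance angle:
              [R_line / Z,  X_line / Z]])      # decouples P/Q when R is not negligible

def droop_setpoints(P_meas, Q_meas):
    """Modified droop: rotate (P, Q) into the 'virtual' frame, then droop as usual."""
    P_v, Q_v = T @ np.array([P_meas, Q_meas])
    f_ref = f0 - m_p * P_v
    V_ref = V0 - n_q * Q_v
    return f_ref, V_ref

# Usage: on a resistive line the rotation routes active power mainly into the voltage droop
# channel and reactive power into the frequency channel, matching the actual P-V / Q-f coupling.
print(droop_setpoints(P_meas=3000.0, Q_meas=500.0))
```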
Abstract:
On-board mass (OBM) monitoring devices on heavy vehicles (HVs) have been tested in a national programme run jointly by Transport Certification Australia Limited and the National Transport Commission. The tests covered, amongst other parameters, accuracy and tamper-evidence, the latter assessed by deliberately tampering with the signals from OBM primary transducers during the tests. The OBM feasibility team is analysing dynamic data recorded at the primary transducers of OBM systems to determine whether it can be used to detect tamper events. The tamper-evidence of current OBM systems needs to be determined if jurisdictions are to have confidence in specifying OBM for HVs as part of regulatory schemes. An algorithm has been developed to detect tamper events, and the results of its application are detailed here.
Abstract:
Continuous biometric authentication schemes (CBAS) are built around biometrics supplied by users' behavioural characteristics and continuously check the identity of the user throughout the session. The current literature on CBAS primarily focuses on the accuracy of the system in order to reduce false alarms. However, these attempts do not consider various issues that affect practicality in real-world applications and continuous authentication scenarios. One of the main issues is that the CBAS presented are based on several samples of training data, either of both intruders and valid users or of the valid users' profiles only. This means that historical profiles for either the legitimate users or possible attackers must be available or collected before prediction time. However, in some cases it is impractical to obtain the biometric data of the user in advance (before detection time). Another issue is the variability of the user's behaviour between the registered profile obtained during enrolment and the profile observed during the testing phase. The aim of this paper is to identify the limitations of current CBAS in order to make them more practical for real-world applications. The paper also discusses a new application for CBAS that does not require any training data, either from intruders or from valid users.
Abstract:
The detached housing scheme is a unique and exclusive segment of the residential property market in Malaysia. Generally, the product is expensive, and for many Malaysians who can afford one, owning a detached house is a once-in-a-lifetime opportunity. In spite of this, most owners fail to fully comprehend the specific needs of this type of housing scheme, increasing the risk of it becoming a problematic project. Unlike other types of pre-designed ‘mass housing’ schemes, the detached housing scheme may be built specifically to cater for the needs and demands of its owner. Therefore, maximum owner participation as the development progresses is vital to guarantee the success of the project. In addition, due to its unique design, the house has to individually comply with the requirements and regulations of the relevant authorities. Failure of the owner to recognise this will result in delays, fines and penalties, disputes and, ultimately, cost overruns. These circumstances highlight the need for a model to guide the owner through the entire development process of a detached house. Therefore, this research aims to develop a model for successful detached housing development in Malaysia through maximising owner participation during its various development stages. To achieve this, questionnaire surveys and case study methods shall be employed to capture detached housing owners' experiences in developing their detached houses in Malaysia. Relevant statistical tools shall be applied to analyse the responses. The results gained from this study shall be synthesised into a model of successful detached housing development for the reference of future detached housing owners in Malaysia.
Abstract:
This paper introduces an event-based traffic model for railway systems adopting fixed-block signalling schemes. In this model, the events of trains' arrival at and departure from signalling blocks constitute the states of the traffic flow. A state transition corresponds to the progress of the trains by one signalling block and is realised by referring to past and present states, as well as to a number of pre-calculated look-up tables of run-times in the signalling block under various signalling conditions. Simulation results are compared with those from a time-based multi-train simulator to assess the improvement in processing time and accuracy.
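A small sketch of the event-driven update described above is given below, with an invented two-train, three-block layout and a made-up run-time look-up table; block names, aspects and times are illustrative, and red-aspect/holding behaviour is omitted for brevity.

```python
# Run-times (s) per block, keyed by the signal aspect seen on entry (hypothetical values).
RUN_TIME = {("B1", "green"): 60, ("B1", "yellow"): 80,
            ("B2", "green"): 55, ("B2", "yellow"): 75,
            ("B3", "green"): 65, ("B3", "yellow"): 90}
BLOCKS = ["B1", "B2", "B3"]

def aspect(block_idx, occupied):
    """Simplified fixed-block signalling: yellow if the next block is occupied, else green."""
    nxt = block_idx + 1
    return "yellow" if nxt < len(BLOCKS) and BLOCKS[nxt] in occupied else "green"

def step(state):
    """One state transition: every train advances by one signalling block.
    state maps train -> (block index, time of arrival at that block)."""
    occupied = {BLOCKS[idx] for idx, _ in state.values()}
    new_state = {}
    for train, (idx, t_arr) in state.items():
        if idx + 1 >= len(BLOCKS):
            new_state[train] = (idx, t_arr)            # train has left the modelled area
            continue
        t_dep = t_arr + RUN_TIME[(BLOCKS[idx], aspect(idx, occupied))]
        new_state[train] = (idx + 1, t_dep)            # departure = arrival at the next block
    return new_state

# Usage: T1 runs ahead of T2; T2 initially sees a yellow aspect because B2 is occupied.
state = {"T1": (1, 0.0), "T2": (0, 0.0)}
for _ in range(2):
    state = step(state)
print(state)
```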
Abstract:
The demand for high-quality rail services in the twenty-first century has put an ever-increasing demand on all rail operators. In order to meet the expectations of their patrons, the maintenance regime of railway systems has to be tightened up, the track condition has to be well looked after, and the rolling stock must be designed to withstand heavy duty. In short, in an ideal world where resources are unlimited, one needs to implement a very rigorous inspection regime in order to take care of the modern needs of a railway system [1]. If cost were not an issue, the maintenance engineers could inspect the train body with the most up-to-date techniques, such as ultrasound examination, x-ray inspection, magnetic particle inspection, etc., on a regular basis. However, it is inconceivable to have such a perfect maintenance regime in any commercial railway. Likewise, it is impossible to have perfect rolling stock which can weather all the heavy duties experienced in a modern railway. Hence it is essential that condition monitoring schemes are devised to pick up potential defects before they manifest as safety hazards. This paper introduces an innovative condition monitoring system for track profile which, together with an instrumented car to carry out surveillance of the track, will provide a comprehensive railway condition monitoring system that is free from the usual difficulty of electromagnetic compatibility issues in a typical railway environment.