978 results for level scheme
Abstract:
Crashes at rail level crossings represent a significant problem, both in Australia and worldwide. Advances in driving assessment methods, such as the provision of on-road instrumented test vehicles, now provide researchers with the opportunity to further understand driver behaviour at rail level crossings in ways not previously possible. This paper gives an overview of a recent on-road pilot study of driver behaviour at rail level crossings in which 25 participants drove a pre-determined route, incorporating four rail level crossings, using MUARC's instrumented On-Road Test Vehicle (ORTeV). Drivers provided verbal commentary whilst driving the route, and a range of other data were collected, including eye fixations; forward, cockpit and driver video; and vehicle data (speed, braking, steering wheel angle, lane tracking etc.). Participants also completed a post-trial cognitive task analysis interview. Extracts from the wider analyses are used to examine, in depth, driver behaviour at one of the rail level crossings encountered during the study. The analysis presented, along with the overall analysis undertaken, gives insight into the driver and wider systems factors that shape behaviour at rail level crossings, and highlights the utility of a multi-method, instrumented-vehicle approach for gathering data on driver behaviour in different contexts.
Abstract:
US state-based data breach notification laws have unveiled serious corporate and government failures regarding the security of personal information. These laws require organisations to notify persons who may be affected by an unauthorised acquisition of their personal information. Safe harbours to notification exist if personal information is encrypted. Three types of safe harbour have been identified in the literature: exemptions, rebuttable presumptions and factors. The underlying assumption of exemptions is that encrypted personal information is secure and therefore unauthorised access does not pose a risk. However, the viability of this assumption is questionable when examined against data breaches involving encrypted information and the demanding practical requirements of effective encryption management. Recent recommendations by the Australian Law Reform Commission (ALRC) would amend the Privacy Act 1988 (Cth) to implement a data breach scheme that includes a different type of safe harbour, factor-based analysis. The authors examine the potential capability of the ALRC's proposed encryption safe harbour in relation to the US experience at the state legislature level.
Abstract:
Ad hoc networks are vulnerable to attacks due to their distributed nature and lack of infrastructure. Intrusion detection systems (IDS) provide audit and monitoring capabilities that offer local security to a node and help to perceive the specific trust level of other nodes. Clustering protocols can be taken as an additional advantage in these processing-constrained networks to collaboratively detect intrusions with less power usage and minimal overhead. Existing clustering protocols are not suitable for intrusion detection purposes, because they are linked with the routes. Route establishment and route renewal affect the clusters and, as a consequence, the processing and traffic overhead increases due to the instability of clusters. Ad hoc networks are battery and power constrained, and therefore a trusted monitoring node should be available to detect and respond to intrusions in time. This can be achieved only if the clusters are stable for a long period of time. If the clusters are regularly changed due to routes, intrusion detection will not prove to be effective. Therefore, a generalized clustering algorithm is proposed that can run on top of any routing protocol and can monitor intrusions constantly, irrespective of the routes. The proposed simplified clustering scheme has been used to detect intrusions, resulting in high detection rates and low processing and memory overhead irrespective of the routes, connections, traffic types and mobility of nodes in the network. Clustering is also useful for detecting intrusions collaboratively, since an individual node can neither detect a malicious node alone nor take action against that node on its own.
Abstract:
Mobile ad-hoc networks (MANETs) are temporary wireless networks useful in emergency rescue services, battlefield operations, mobile conferencing and a variety of other applications. Due to their dynamic nature and lack of centralized monitoring points, these networks are highly vulnerable to attacks. Intrusion detection systems (IDS) provide audit and monitoring capabilities that offer local security to a node and help to perceive the specific trust level of other nodes. We take advantage of the clustering concept in MANETs for effective communication between nodes, where each cluster comprises a number of member nodes and is managed by a cluster-head. This can be exploited in these battery- and memory-constrained networks for the purpose of intrusion detection, by separating tasks between head and member nodes, while at the same time providing the opportunity for a collaborative detection approach. Clustering schemes are generally used for routing purposes to enhance route efficiency. However, a change of cluster tends to change the route, which degrades performance. This paper presents a low-overhead clustering algorithm designed for the benefit of intrusion detection rather than efficient routing. It also discusses intrusion detection techniques based on this simplified clustering scheme.
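A route-independent clustering pass of the kind described above can be sketched with a simple lowest-ID election, in which each node attaches to the smallest node ID in its one-hop neighbourhood. This is an illustrative stand-in, not the paper's algorithm; the topology and node IDs below are hypothetical.

```python
# Illustrative lowest-ID clustering pass, assuming each node knows only its
# one-hop neighbours. Because the election depends on topology alone, the
# resulting clusters are unaffected by route establishment or route renewal.

def elect_cluster_heads(adjacency):
    """Map each node to the lowest node ID in its closed neighbourhood.

    adjacency: dict mapping node id -> set of neighbouring node ids.
    Returns a dict mapping node id -> cluster-head id.
    """
    heads = {}
    for node, neighbours in adjacency.items():
        heads[node] = min(neighbours | {node})
    return heads

# Hypothetical five-node topology.
topology = {
    1: {2, 3},
    2: {1, 3, 4},
    3: {1, 2},
    4: {2, 5},
    5: {4},
}
print(elect_cluster_heads(topology))  # → {1: 1, 2: 1, 3: 1, 4: 2, 5: 4}
```

A node is its own head when it holds the lowest ID in its neighbourhood (nodes 1 and 2 above); all others become members, so the cluster-head/member task split described in the abstract falls out directly.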
Abstract:
Clinical experience plays an important role in the development of expertise, particularly when coupled with reflection on practice. There is debate, however, regarding the amount of clinical experience that is required to become an expert. Various lengths of practice have been suggested as suitable for determining expertise, ranging from five to 15 years. This study aimed to investigate the association between length of experience and therapists' level of expertise in the field of cerebral palsy with upper limb hypertonicity, using an empirical procedure named Cochrane–Weiss–Shanteau (CWS). The methodology involved re-analysis of quantitative data collected in two previous studies. In Study 1, 18 experienced occupational therapists made hypothetical clinical decisions related to 110 case vignettes, while in Study 2, 29 therapists considered 60 case vignettes drawn randomly from those used in Study 1. A CWS index was calculated for each participant's case decisions. Then, in each study, Spearman's rho was calculated to identify the correlation between duration of experience and level of expertise. There was no significant association between these two variables in either study. These analyses corroborated previous findings of no association between length of experience and judgemental performance. Therefore, length of experience may not be an appropriate criterion for determining level of expertise in relation to cerebral palsy practice.
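The CWS index mentioned above is, in essence, a ratio of how well a judge discriminates between cases to how inconsistently they rate repeated presentations of the same case. The sketch below is a minimal illustration of that ratio, assuming each case vignette is rated more than once with some variability; the toy ratings are invented, not the study's data.

```python
# Minimal sketch of the CWS expertise ratio: between-case discrimination
# divided by within-case inconsistency. The ratings below are invented and
# assume each case is judged more than once with nonzero variability.
from statistics import mean, pvariance

def cws_index(judgements):
    """judgements: dict mapping case id -> list of repeated ratings."""
    case_means = [mean(ratings) for ratings in judgements.values()]
    discrimination = pvariance(case_means)          # between-case spread
    inconsistency = mean(pvariance(ratings)         # within-case spread
                         for ratings in judgements.values())
    return discrimination / inconsistency

ratings = {
    "case_a": [7, 8, 7],
    "case_b": [2, 3, 2],
    "case_c": [5, 5, 6],
}
print(round(cws_index(ratings), 2))  # → 19.0
```

A high ratio means the judge separates distinct cases far more than they contradict themselves, which is the sense in which CWS quantifies expertise independently of years of practice.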
Abstract:
Osteoporotic spinal fractures are a major concern in ageing Western societies. This study develops a multi-scale finite element (FE) model of the osteoporotic lumbar vertebral body to study the mechanics of vertebral compression fracture at both the apparent (whole vertebral body) and micro-structural (internal trabecular bone core) levels. Model predictions were verified against experimental data, and found to provide a reasonably good representation of the mechanics of the osteoporotic vertebral body. This novel modelling methodology will allow detailed investigation of how trabecular bone loss in osteoporosis affects vertebral stiffness and strength in the lumbar spine.
Abstract:
RFID has been widely used in today's commercial and supply chain industry, due to the significant advantages it offers and its relatively low production cost. However, this ubiquitous technology has inherent problems in security and privacy. This calls for the development of simple, efficient and cost-effective mechanisms against a variety of security threats. This paper proposes a two-step authentication protocol based on the randomized hash-lock scheme proposed by S. Weis in 2003. By introducing additional measures during the authentication process, this new protocol significantly enhances the security of RFID, and protects passive tags from almost all major attacks, including tag cloning, replay, full disclosure, tracking, and eavesdropping. Furthermore, no significant changes to the tags are required to implement this protocol, and the low complexity of the randomized hash-lock algorithm is retained.
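For context, the base randomized hash-lock exchange on which the proposed protocol builds can be sketched as follows: the tag answers each query with a fresh nonce r and the hash h(ID || r), and the back-end identifies the tag by searching its ID list for a matching hash. The paper's additional second-step measures are not reproduced here, and SHA-256 stands in for the unspecified tag hash function; the tag IDs are invented.

```python
# Sketch of the randomized hash-lock exchange; SHA-256 is an assumption in
# place of the unspecified tag hash function, and the tag IDs are invented.
import hashlib
import os

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def tag_respond(tag_id: bytes):
    """Tag picks a fresh nonce r and answers (r, h(ID || r))."""
    r = os.urandom(16)
    return r, h(tag_id + r)

def backend_identify(response, known_ids):
    """Back-end searches its ID list for the ID that matches the hash."""
    r, digest = response
    for tag_id in known_ids:
        if h(tag_id + r) == digest:
            return tag_id
    return None  # unknown tag (or tampered response)

ids = [b"tag-001", b"tag-002", b"tag-003"]
response = tag_respond(b"tag-002")
print(backend_identify(response, ids))  # → b'tag-002'
```

Because the nonce changes on every query, a tag's answers are unlinkable across sessions, which is what frustrates the tracking and simple replay attacks mentioned above.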
Abstract:
Islanded operation, protection, reclosing and arc extinguishing are some of the challenging issues related to the connection of converter-interfaced distributed generators (DGs) to a distribution network. The isolation of upstream faults in grid-connected mode and fault detection in islanded mode using overcurrent devices are difficult. In the event of an arc fault, all DGs must be disconnected in order to extinguish the arc; otherwise, they will continue to feed the fault, thus sustaining the arc. However, system reliability can be increased by maximising DG connectivity to the system; therefore, the system protection scheme must ensure that only the faulted segment is removed from the feeder. This is true even in the case of a radial feeder, as DGs can be connected at various points along the feeder. In this paper, a new relay scheme is proposed which, along with a novel current control strategy for converter-interfaced DGs, can isolate permanent and temporary arc faults. The proposed protection and control scheme can even coordinate with reclosers. The results are validated through PSCAD/EMTDC simulation and MATLAB calculations.
Abstract:
Multi-level concrete buildings require substantial temporary formwork structures to support the slabs during construction. The primary function of this formwork is to safely disperse the applied loads so that the slab being constructed, or the portion of the permanent structure already constructed, is not overloaded. Multi-level formwork is a procedure in which a limited number of formwork and shoring sets are cycled up the building as construction progresses. In this process, each new slab is supported by a number of lower-level slabs. The new slab load is, essentially, distributed to these supporting slabs in direct proportion to their relative stiffness. When a slab is post-tensioned using draped tendons, slab lift occurs as a portion of the slab self-weight is balanced. The formwork and shores supporting that slab are unloaded by an amount equivalent to the load balanced by the post-tensioning. This produces a load distribution inherently different from that of a conventionally reinforced slab. Through theoretical modelling and extensive on-site shore load measurement, this research examines the effects of post-tensioning on multi-level formwork load distribution. The research demonstrates that the load distribution process for post-tensioned slabs allows for improvements to current construction practice. These enhancements include a shortening of the construction period; an improvement in the safety of multi-level formwork operations; and a reduction in the quantity of formwork materials required for a project. These enhancements are achieved through the general improvement in safety offered by post-tensioning during the various formwork operations. The research demonstrates that there is generally a significant improvement in the factors of safety over those for conventionally reinforced slabs. This improvement in the factor of safety occurs at all stages of the multi-level formwork operation.
The general improvement in the factors of safety with post-tensioned slabs allows for a shortening of the slab construction cycle time. Further, the low level of load redistribution that occurs during the stripping operations makes post-tensioned slabs ideally suited to reshoring procedures. Provided the overall number of interconnected levels remains unaltered, it is possible to increase the number of reshored levels while reducing the number of undisturbed shoring levels without altering the factors of safety, thereby, reducing the overall quantity of formwork and shoring materials.
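The stiffness-proportional load sharing described above can be illustrated with a toy calculation: a new slab's load is divided among the interconnected supporting slabs in proportion to their relative stiffness, and post-tensioning reduces the load passed into the shores by the balanced share of self-weight. All loads, stiffnesses and the balanced fraction below are hypothetical numbers, not measurements from the research.

```python
# Toy stiffness-proportional load sharing for multi-level formwork. All
# numbers are hypothetical; balanced_fraction is the share of slab
# self-weight balanced by the draped post-tensioning tendons.

def distribute_slab_load(new_slab_load, stiffnesses, balanced_fraction=0.0):
    """Return the load (kN) passed to each supporting slab.

    new_slab_load: total load of the slab being cast (kN).
    stiffnesses: relative stiffness of each interconnected supporting slab.
    balanced_fraction: portion of self-weight balanced by post-tensioning,
        which unloads the shores before the remainder is distributed.
    """
    shore_load = new_slab_load * (1.0 - balanced_fraction)
    total_stiffness = sum(stiffnesses)
    return [shore_load * k / total_stiffness for k in stiffnesses]

# Conventionally reinforced slab: full load shared over three lower slabs.
print(distribute_slab_load(300.0, [2.0, 1.0, 1.0]))        # → [150.0, 75.0, 75.0]
# Post-tensioned slab balancing 25% of self-weight: every shore carries less.
print(distribute_slab_load(300.0, [2.0, 1.0, 1.0], 0.25))  # → [112.5, 56.25, 56.25]
```

The second call shows the mechanism behind the improved factors of safety: every supporting slab sees a proportionally smaller share of the new slab's load once part of the self-weight is carried by the tendons.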
Abstract:
The main goal of this research is to design an efficient compression algorithm for fingerprint images. The wavelet transform technique is the principal tool used to reduce interpixel redundancies and to obtain a parsimonious representation for these images. A specific fixed decomposition structure is designed to be used by the wavelet packet in order to save on the computation, transmission, and storage costs. This decomposition structure is based on analysis of the information packing performance of several decompositions, two-dimensional power spectral density, the effect of each frequency band on the reconstructed image, and human visual sensitivities. This fixed structure is found to provide the "most" suitable representation for fingerprints, according to the chosen criteria. Different compression techniques are used for different subbands, based on their observed statistics. The decision is based on the effect of each subband on the reconstructed image according to the mean square criterion as well as the sensitivities of human vision. To design an efficient quantization algorithm, a precise model for the distribution of the wavelet coefficients is developed. The model is based on the generalized Gaussian distribution. A least squares algorithm on a nonlinear function of the distribution model shape parameter is formulated to estimate the model parameters. A noise shaping bit allocation procedure is then used to assign the bit rate among subbands. To obtain high compression ratios, vector quantization is used. In this work, lattice vector quantization (LVQ) is chosen because of its superior performance over other types of vector quantizers. The structure of a lattice quantizer is determined by its parameters, known as the truncation level and scaling factor. In lattice-based compression algorithms reported in the literature, the lattice structure is commonly predetermined, leading to a nonoptimized quantization approach.
In this research, a new technique for determining the lattice parameters is proposed. In the lattice structure design, no assumption about the lattice parameters is made and no training and multi-quantizing is required. The design is based on minimizing the quantization distortion by adapting to the statistical characteristics of the source in each subimage. Since LVQ is a multidimensional generalization of uniform quantizers, it produces minimum distortion for inputs with uniform distributions. In order to take advantage of the properties of LVQ and its fast implementation, while considering the i.i.d. nonuniform distribution of wavelet coefficients, the piecewise-uniform pyramid LVQ algorithm is proposed. The proposed algorithm quantizes almost all source vectors without the need to project these on the lattice outermost shell, while properly maintaining a small codebook size. It also resolves the wedge region problem commonly encountered with sharply distributed random sources. These represent some of the drawbacks of the algorithm proposed by Barlaud [26]. The proposed algorithm handles all types of lattices, not only cubic lattices, as opposed to the algorithms developed by Fischer [29] and Jeong [42]. Furthermore, no training and multi-quantizing (to determine lattice parameters) is required, as opposed to Powell's algorithm [78]. For coefficients with high-frequency content, the positive-negative mean algorithm is proposed to improve the resolution of reconstructed images. For coefficients with low-frequency content, a lossless predictive compression scheme is used to preserve the quality of reconstructed images. A method to reduce the bit requirements of the necessary side information is also introduced. Lossless entropy coding techniques are subsequently used to remove coding redundancy. The algorithms result in high quality reconstructed images with better compression ratios than other available algorithms.
To evaluate the proposed algorithms, objective and subjective performance comparisons with other available techniques are presented. The quality of the reconstructed images is important for reliable identification. Enhancement and feature extraction on the reconstructed images are also investigated in this research. A structural-based feature extraction algorithm is proposed in which the unique properties of fingerprint textures are used to enhance the images and improve the fidelity of their characteristic features. The ridges are extracted from enhanced grey-level foreground areas based on the local ridge dominant directions. The proposed ridge extraction algorithm properly preserves the natural shape of grey-level ridges as well as the precise locations of the features, as opposed to the ridge extraction algorithm in [81]. Furthermore, it is fast and operates only on foreground regions, as opposed to the adaptive floating average thresholding process in [68]. Spurious features are subsequently eliminated using the proposed post-processing scheme.