963 results for computer application
Abstract:
Unified enterprise application security is an emerging approach for providing protection against application-level attacks. The conventional approach of embedding security into each critical application leads to scattered security mechanisms that are not only difficult to manage but also create security loopholes. According to the CSI/FBI computer crime survey report, almost 80% of security breaches come from authorized users. In this paper, we work on the concept of a unified security model, which manages all security aspects from a single security window. The basic idea is to keep business functionality separate from the security components of the application. Our main focus is the design of a framework for the unified layer, which supports a single point of policy control, a centralized logging mechanism, granular context-aware access control, and independence from any underlying authentication technology and authorization policy.
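As an illustration of the separation the abstract describes, the following Python sketch routes every access decision through a single policy-enforcement point with centralized audit logging. The names (PolicyEngine, AccessRequest, secured) and the rule format are hypothetical, not the authors' framework.

import logging
from dataclasses import dataclass

audit_log = logging.getLogger("unified_security.audit")  # single, centralized log sink

@dataclass
class AccessRequest:
    user: str
    resource: str
    action: str
    context: dict  # e.g. client IP, time of day -> enables context-aware control

class PolicyEngine:
    """Single point of policy control; the authentication/authorization back-end
    behind the rules can change without touching business code."""
    def __init__(self, rules):
        self.rules = rules  # callables: AccessRequest -> bool

    def is_permitted(self, request: AccessRequest) -> bool:
        decision = all(rule(request) for rule in self.rules)
        audit_log.info("user=%s action=%s resource=%s allowed=%s",
                       request.user, request.action, request.resource, decision)
        return decision

def secured(engine: PolicyEngine, resource: str, action: str):
    """Decorator that keeps security checks out of the business function body."""
    def wrap(func):
        def inner(user, context, *args, **kwargs):
            request = AccessRequest(user, resource, action, context)
            if not engine.is_permitted(request):
                raise PermissionError(f"{user} may not {action} {resource}")
            return func(*args, **kwargs)
        return inner
    return wrap

engine = PolicyEngine(rules=[lambda r: r.user != "guest"])

@secured(engine, resource="payroll", action="read")
def read_payroll():
    return "payroll data"

print(read_payroll("alice", context={"ip": "10.0.0.5"}))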
Abstract:
In this paper, we extend the switching controller presented by Lee et al. for the pose control of a car-like vehicle to allow the use of an omnidirectional vision sensor. To this end we incorporate an extension to a hypothesis on the navigation behaviour of the desert ant, Cataglyphis bicolor, which leads to a correspondence-free, landmark-based vision technique. The method we present allows positioning to a learnt location based on feature bearing angle and range discrepancies between the robot's current view of the environment and that at the learnt location. We present simulations and experimental results, the latter obtained using our outdoor mobile platform.
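As a rough illustration of positioning from bearing and range discrepancies (not the controller of Lee et al. or the authors' method), the Python sketch below sums, over landmarks, the difference between each landmark vector seen from the current pose and the one recorded at the learnt location. The resulting vector points toward the learnt location, assuming the two views share the same orientation and the landmarks are listed in matching order.

import math

def homing_vector(current, learnt):
    """current, learnt: lists of (bearing_rad, range_m) pairs, one per landmark,
    assumed here to be given in the same (already matched) order."""
    hx = hy = 0.0
    for (b_cur, r_cur), (b_ref, r_ref) in zip(current, learnt):
        # landmark vector seen now minus landmark vector stored at the learnt location
        hx += r_cur * math.cos(b_cur) - r_ref * math.cos(b_ref)
        hy += r_cur * math.sin(b_cur) - r_ref * math.sin(b_ref)
    heading = math.atan2(hy, hx)    # direction toward the learnt location (robot frame)
    magnitude = math.hypot(hx, hy)  # grows with displacement from the learnt location
    return heading, magnitude

current_view = [(0.50, 3.2), (2.10, 5.0)]  # bearings (rad) and ranges (m) seen now
learnt_view  = [(0.45, 2.8), (2.20, 4.6)]  # the same landmarks at the learnt location
print(homing_vector(current_view, learnt_view))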
Abstract:
The objective of this paper is to provide an overview of mine automation applications, developed at the Queensland Centre for Advanced Technology (QCAT), which make use of IEEE 802.11b wireless local area networks (WLANs). The paper has been prepared for a 2002 conference entitled "Creating the Virtual Enterprise - Leveraging wireless technology within existing business models for corporate advantage". Descriptions of the WLAN components have been omitted here as such details are presented in the accompanying papers. The structure of the paper is as follows. Application overviews are provided in Sections 2 to 7. Some pertinent strengths and weaknesses are summarised in Section 8. Please refer to http://www.mining-automation.com/ or contact the authors for further information.
Abstract:
A teaching and learning development project is currently under way at Queensland University of Technology to develop advanced technology videotapes for use in the delivery of structural engineering courses. These tapes consist of integrated computer and laboratory simulations of important concepts and of the behaviour of structures and their components for a number of structural engineering subjects. They will be used as part of the regular lectures and thus will not only improve the quality of lectures and the learning environment, but will also be able to replace the ever-dwindling laboratory teaching in these subjects. The use of these videotapes, developed using advanced computer graphics, data visualization and video technologies, will enrich the learning process for the current diverse engineering student body. This paper presents the details of this new method, the methodology used, and the results and evaluation in relation to one of the structural engineering subjects, steel structures.
Abstract:
Computer profiling is the automated forensic examination of a computer system in order to provide a human investigator with a characterisation of the activities that have taken place on that system. As part of this process, the logical components of the computer system (components such as users, files and applications) are enumerated and the relationships between them discovered and reported. This information is enriched with traces of historical activity drawn from system logs and from evidence of events found in the computer file system. A potential problem with the use of such information is that some of it may be inconsistent and contradictory, thus compromising its value. This work examines the impact of temporal inconsistency in such information and discusses two types of temporal inconsistency that may arise: inconsistency arising out of the normal errant behaviour of a computer system, and inconsistency arising out of deliberate tampering by a suspect. It then presents techniques for dealing with inconsistencies of the latter kind. We examine the impact of deliberate tampering through experiments conducted with prototype computer profiling software. Based on the results of these experiments, we discuss techniques which can be employed in computer profiling to deal with such temporal inconsistencies.
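A minimal sketch of the kind of check involved (not the authors' prototype): events recovered during profiling are tested against ordering constraints that must hold between them, and violations are reported as temporal inconsistencies worth investigating. The event names and timestamps below are hypothetical.

from datetime import datetime

def find_inconsistencies(events, constraints):
    """events: dict mapping an event id to its recovered timestamp.
    constraints: (earlier_id, later_id) pairs that must hold if the record is consistent."""
    problems = []
    for earlier, later in constraints:
        if earlier in events and later in events and events[earlier] > events[later]:
            problems.append((earlier, later, events[earlier], events[later]))
    return problems

# Hypothetical example: a file that appears to have been modified before it was created.
events = {
    "file_created": datetime(2024, 5, 2, 10, 0),
    "file_modified": datetime(2024, 5, 1, 9, 0),
}
print(find_inconsistencies(events, [("file_created", "file_modified")]))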
Abstract:
Endoscopic anterior correction of idiopathic scoliosis is a relatively new surgical technique. This paper describes the development of patient-specific finite element modelling techniques to investigate the biomechanics of single-rod anterior scoliosis correction. Spinal geometry is obtained from pre-operative CT scans, and material properties for the osteo-ligamentous spinal tissues are based on the existing literature. The techniques being developed will allow pre-surgical prediction of stresses, forces and deformations in spinal tissues, rods and screws under post-operative physiological loads.
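To illustrate the finite element idea in the simplest possible terms (a toy sketch, not the authors' patient-specific model), the snippet below idealises a chain of spinal segments as axial springs, assembles the global stiffness matrix, and solves K u = f for displacements under an applied load. The stiffness and load values are invented.

import numpy as np

def assemble_stiffness(segment_stiffness):
    """Assemble the global stiffness matrix for a chain of 2-node spring elements."""
    n = len(segment_stiffness) + 1
    K = np.zeros((n, n))
    for e, k in enumerate(segment_stiffness):
        K[e:e + 2, e:e + 2] += k * np.array([[1.0, -1.0], [-1.0, 1.0]])
    return K

k_segments = [1200.0, 950.0, 1100.0]   # N/mm, hypothetical segment stiffnesses
K = assemble_stiffness(k_segments)
f = np.zeros(len(k_segments) + 1)
f[-1] = 500.0                          # 500 N applied at the free end

u = np.zeros_like(f)                   # fix node 0, solve the reduced system
u[1:] = np.linalg.solve(K[1:, 1:], f[1:])
print("nodal displacements (mm):", u)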
Abstract:
The main goal of this research is to design an efficient compression algorithm for fingerprint images. The wavelet transform technique is the principal tool used to reduce interpixel redundancies and to obtain a parsimonious representation for these images. A specific fixed decomposition structure is designed to be used by the wavelet packet in order to save on computation, transmission, and storage costs. This decomposition structure is based on analysis of the information packing performance of several decompositions, the two-dimensional power spectral density, the effect of each frequency band on the reconstructed image, and human visual sensitivities. This fixed structure is found to provide the "most" suitable representation for fingerprints, according to the chosen criteria. Different compression techniques are used for different subbands, based on their observed statistics. The decision is based on the effect of each subband on the reconstructed image according to the mean square criterion as well as the sensitivities of human vision.

To design an efficient quantization algorithm, a precise model for the distribution of the wavelet coefficients is developed. The model is based on the generalized Gaussian distribution. A least squares algorithm on a nonlinear function of the distribution model shape parameter is formulated to estimate the model parameters. A noise-shaping bit allocation procedure is then used to assign the bit rate among subbands. To obtain high compression ratios, vector quantization is used. In this work, lattice vector quantization (LVQ) is chosen because of its superior performance over other types of vector quantizers. The structure of a lattice quantizer is determined by its parameters, known as the truncation level and scaling factor. In the lattice-based compression algorithms reported in the literature, the lattice structure is commonly predetermined, leading to a non-optimized quantization approach. In this research, a new technique for determining the lattice parameters is proposed. In the lattice structure design, no assumption about the lattice parameters is made and no training or multi-quantizing is required. The design is based on minimizing the quantization distortion by adapting to the statistical characteristics of the source in each subimage.

Since LVQ is a multidimensional generalization of uniform quantizers, it produces minimum distortion for inputs with uniform distributions. In order to take advantage of the properties of LVQ and its fast implementation, while considering the i.i.d. nonuniform distribution of wavelet coefficients, the piecewise-uniform pyramid LVQ algorithm is proposed. The proposed algorithm quantizes almost all source vectors without the need to project them onto the lattice's outermost shell, while properly maintaining a small codebook size. It also resolves the wedge region problem commonly encountered with sharply distributed random sources. These represent some of the drawbacks of the algorithm proposed by Barlaud [26]. The proposed algorithm handles all types of lattices, not only cubic lattices, as opposed to the algorithms developed by Fischer [29] and Jeong [42]. Furthermore, no training or multi-quantizing (to determine lattice parameters) is required, as opposed to Powell's algorithm [78]. For coefficients with high-frequency content, the positive-negative mean algorithm is proposed to improve the resolution of reconstructed images.
For coefficients with low-frequency content, a lossless predictive compression scheme is used to preserve the quality of reconstructed images. A method to reduce the bit requirements of the necessary side information is also introduced. Lossless entropy coding techniques are subsequently used to remove coding redundancy. The algorithms result in high-quality reconstructed images with better compression ratios than other available algorithms. To evaluate the proposed algorithms, objective and subjective performance comparisons with other available techniques are presented. The quality of the reconstructed images is important for reliable identification. Enhancement and feature extraction on the reconstructed images are also investigated in this research. A structure-based feature extraction algorithm is proposed in which the unique properties of fingerprint textures are used to enhance the images and improve the fidelity of their characteristic features. The ridges are extracted from enhanced grey-level foreground areas based on the local ridge dominant directions. The proposed ridge extraction algorithm properly preserves the natural shape of grey-level ridges as well as the precise locations of the features, as opposed to the ridge extraction algorithm in [81]. Furthermore, it is fast and operates only on foreground regions, as opposed to the adaptive floating average thresholding process in [68]. Spurious features are subsequently eliminated using the proposed post-processing scheme.
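As one concrete step in this pipeline, the sketch below fits a generalised Gaussian model to a subband of wavelet coefficients. It uses the standard moment-ratio estimator for the shape parameter rather than the least-squares formulation described in the abstract, and the subband data is synthetic, so it is only an illustration of the modelling step.

import numpy as np
from scipy.special import gamma
from scipy.optimize import brentq

def ggd_shape(coeffs):
    """Estimate the generalised Gaussian shape parameter by matching E|x| / sqrt(E[x^2])."""
    x = np.asarray(coeffs, dtype=float)
    rho = np.mean(np.abs(x)) / np.sqrt(np.mean(x ** 2))

    def ratio(b):
        # theoretical moment ratio for a GGD with shape b, minus the observed ratio
        return gamma(2.0 / b) / np.sqrt(gamma(1.0 / b) * gamma(3.0 / b)) - rho

    return brentq(ratio, 0.1, 10.0)    # solve the nonlinear moment equation for the shape

subband = np.random.laplace(scale=2.0, size=10000)  # Laplacian data: GGD shape close to 1
print("estimated shape parameter:", round(ggd_shape(subband), 2))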
Abstract:
Whilst a variety of studies has appeared over the last decade addressing the gap between the potential promised by computers and the reality experienced in the classroom by teachers and students, few have specifically addressed the situation as it pertains to the visual arts classroom. The aim of this study was to explore the reality of classroom computer use for three high-school visual arts teachers and to determine how computer technology might enrich visual arts teaching and learning. An action research approach was employed to enable the researcher to understand the situation from the teachers' points of view while contributing to their professional practice. The wider social context surrounding this study is characterised by an increase in visual communication brought about by rapid advances in computer technology. The powerful combination of visual imagery and computer technology is illustrated by continuing developments in the print, film and television industries. In particular, the recent growth of interactive multimedia epitomises this combination and is significant to this study because it represents a new form of publishing of great interest to educators and artists alike. In this social context, visual arts education has a significant role to play. By cultivating a critical awareness of the implications of technology use and promoting a creative approach to the application of computer technology within the visual arts, visual arts education is in a position to provide an essential service to students who will leave high school to participate in a visual information age as both consumers and producers.