838 results for computer-based instrumentation
Abstract:
In this paper, we describe ongoing work on online banking customization with a particular focus on interaction. The scope of the study is confined to the Australian banking context where the lack of customization is evident. This paper puts forward the notion of using tags to facilitate personalized interactions in online banking. We argue that tags can afford simple and intuitive interactions unique to every individual in both online and mobile environments. Firstly, through a review of related literature, we frame our work in the customization domain. Secondly, we define a range of taggable resources in online banking. Thirdly, we describe our preliminary prototype implementation with respect to interaction customization types. Lastly, we conclude with a discussion on future work.
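As a purely hypothetical sketch of the idea (the resource names and structure below are illustrative, not the paper's prototype), user-defined tags can be modelled as a personal vocabulary that resolves back to taggable banking resources, so that a tag typed or tapped in an online or mobile interface drives a personalised action:

```python
# Hypothetical data model for tag-based interaction in online banking.
# Resource identifiers and tag names are illustrative assumptions only.
taggable_resources = {
    "account:savings-001": {"type": "account"},
    "payee:landlord":      {"type": "payee"},
    "payment:rent":        {"type": "scheduled_payment"},
}

user_tags = {                       # each user maintains their own tag vocabulary
    "rent":    ["payee:landlord", "payment:rent"],
    "savings": ["account:savings-001"],
}

def resources_for_tag(tag):
    """Resolve a user's tag to the banking resources it labels."""
    return [(rid, taggable_resources[rid]["type"]) for rid in user_tags.get(tag, [])]

print(resources_for_tag("rent"))    # -> [('payee:landlord', 'payee'), ('payment:rent', 'scheduled_payment')]
```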
Abstract:
Data preprocessing is widely recognized as an important stage in anomaly detection. This paper reviews the data preprocessing techniques used by anomaly-based network intrusion detection systems (NIDS), concentrating on which aspects of the network traffic are analyzed, and what feature construction and selection methods have been used. Motivation for the paper comes from the large impact data preprocessing has on the accuracy and capability of anomaly-based NIDS. The review finds that many NIDS limit their view of network traffic to the TCP/IP packet headers. Time-based statistics can be derived from these headers to detect network scans, network worm behavior, and denial of service attacks. A number of other NIDS perform deeper inspection of request packets to detect attacks against network services and network applications. More recent approaches analyze full service responses to detect attacks targeting clients. The review covers a wide range of NIDS, highlighting which classes of attack are detectable by each of these approaches. Data preprocessing is found to predominantly rely on expert domain knowledge for identifying the most relevant parts of network traffic and for constructing the initial candidate set of traffic features. On the other hand, automated methods have been widely used for feature extraction to reduce data dimensionality, and feature selection to find the most relevant subset of features from this candidate set. The review shows a trend toward deeper packet inspection to construct more relevant features through targeted content parsing. These context sensitive features are required to detect current attacks.
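As an illustration of the kind of header-derived, time-based statistic the review describes, the sketch below counts the distinct destination ports a source IP contacts within a short sliding window, a common candidate feature for network-scan detection. The field names, window length and threshold are assumptions for illustration, not values taken from any surveyed NIDS.

```python
# Illustrative sketch: time-based feature construction from TCP/IP headers only.
WINDOW_SECONDS = 2.0      # assumed window length
SCAN_PORT_THRESHOLD = 20  # assumed distinct-port count suggesting a scan

def scan_features(packets):
    """For each packet, count the distinct destination ports contacted by its
    source IP within the preceding time window; large counts are candidate
    network-scan features derived purely from packet headers."""
    packets = sorted(packets, key=lambda p: p["time"])
    window = []          # packets inside the current time window
    features = []
    for pkt in packets:
        window.append(pkt)
        while pkt["time"] - window[0]["time"] > WINDOW_SECONDS:
            window.pop(0)
        ports = {p["dst_port"] for p in window if p["src_ip"] == pkt["src_ip"]}
        features.append({"time": pkt["time"], "src_ip": pkt["src_ip"],
                         "distinct_ports": len(ports),
                         "scan_like": len(ports) >= SCAN_PORT_THRESHOLD})
    return features
```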
Abstract:
In dynamic and uncertain environments, where the needs of security and information availability are difficult to balance, an access control approach based on a static policy will be suboptimal regardless of how comprehensive it is. Risk-based approaches to access control attempt to address this problem by allocating a limited budget to users, through which they pay for the exceptions deemed necessary. So far the primary focus has been on how to incorporate the notion of budget into access control, rather than on whether there is an optimal amount of budget to allocate to users and, if so, what it is. In this paper we discuss the problems that arise from a suboptimal allocation of budget and introduce a generalised characterisation of an optimal budget allocation function that maximises an organisation's expected benefit in the presence of self-interested employees and costly audit.
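One way to read such a characterisation (the symbols below are illustrative assumptions, not the paper's notation): let $B \ge 0$ be the budget allocated to a user, $V(B)$ the expected benefit the organisation gains from the exceptions that budget enables, $L(B)$ the expected loss from misuse by a self-interested employee, and $C(B)$ the expected cost of auditing those exceptions. An optimal allocation then satisfies

$$B^{*} \in \arg\max_{B \ge 0} \; \mathbb{E}\big[\, V(B) - L(B) - C(B) \,\big],$$

so at an interior optimum the marginal benefit of additional budget just balances the marginal misuse and audit costs, $V'(B^{*}) = L'(B^{*}) + C'(B^{*})$.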
Abstract:
The construction of timelines of computer activity is a part of many digital investigations. These timelines of events are composed of traces of historical activity drawn from system logs and potentially from evidence of events found in the computer file system. A potential problem with the use of such information is that some of it may be inconsistent and contradictory, thus compromising its value. This work introduces a software tool (CAT Detect) for the detection of inconsistency within timelines of computer activity. We examine the impact of deliberate tampering through experiments conducted with our prototype software tool. Based on the results of these experiments, we discuss techniques which can be employed to deal with such temporal inconsistencies.
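As a minimal illustration of one kind of check such a tool can make (the event structure and happens-before rule below are hypothetical, not CAT Detect's actual rules), an inconsistency can be flagged whenever two events whose causal order is known have contradictory recorded timestamps:

```python
# Minimal sketch of a temporal-consistency check over a timeline of events.
from datetime import datetime

def find_inconsistencies(events, happens_before):
    """events: mapping from event id to a timestamp drawn from a log or the file system.
    happens_before: iterable of (a, b) pairs meaning event a must precede event b.
    Returns the pairs whose recorded timestamps contradict that ordering."""
    return [(a, b) for a, b in happens_before
            if a in events and b in events and events[a] > events[b]]

timeline = {
    "browser_download_log":    datetime(2012, 3, 1, 10, 15),
    "downloaded_file_created": datetime(2012, 3, 1, 9, 50),  # earlier than the log entry that produced it
}
print(find_inconsistencies(timeline, [("browser_download_log", "downloaded_file_created")]))
```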
Abstract:
A series of polymers with a comb architecture were prepared in which the poly(olefin sulfone) backbone was designed to be highly sensitive to extreme ultraviolet (EUV) radiation, while the well-defined poly(methyl methacrylate) (PMMA) arms were incorporated with the aim of increasing structural stability. It is hypothesized that upon EUV irradiation rapid degradation of the polysulfone backbone will occur, leaving behind the well-defined PMMA arms. The synthesized polymers were characterised and their performance as chain-scission EUV photoresists was evaluated. It was found that all materials possess high sensitivity towards degradation by EUV radiation (E0 in the range 4–6 mJ cm−2). Selective degradation of the poly(1-pentene sulfone) backbone relative to the PMMA arms was demonstrated by mass spectrometry headspace analysis during EUV irradiation and by grazing-angle ATR-FTIR. EUV interference patterning has shown that the materials are capable of resolving 30 nm 1:1 line:space features. The incorporation of PMMA was found to increase the structural integrity of the patterned features. Thus, it has been shown that terpolymer materials possessing a highly sensitive poly(olefin sulfone) backbone and PMMA arms are able to provide a tuneable materials platform for chain-scission EUV resists. These materials have the potential to benefit applications that require nanopatterning, such as computer chip manufacture and nano-MEMS.
Abstract:
Transmission smart grids will use a digital platform for the automation of high voltage substations. The IEC 61850 series of standards, released in parts over the last ten years, provides a specification for substation communications networks and systems. These standards, along with IEEE Std 1588-2008 Precision Time Protocol version 2 (PTPv2) for precision timing, are recommended by both the IEC Smart Grid Strategy Group and the NIST Framework and Roadmap for Smart Grid Interoperability Standards for substation automation. IEC 61850, PTPv2 and Ethernet are three complementary protocol families that together define the future of sampled value digital process connections for smart substation automation. A time synchronisation system is required for a sampled value process bus; however, the details are not defined in IEC 61850-9-2. PTPv2 provides the greatest accuracy of network-based time transfer systems, with timing errors of less than 100 ns achievable. The suitability of PTPv2 for synchronising sampling in a digital process bus is evaluated, with preliminary results indicating that the steady state performance of low cost clocks is an acceptable ±300 ns, but that corrections issued by grandmaster clocks can introduce significant transients. Extremely stable grandmaster oscillators are required to ensure any corrections are sufficiently small that time synchronising performance is not degraded.
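For context on how a PTP slave measures its offset from the grandmaster, the sketch below implements the standard IEEE 1588 delay request-response calculation from the four Sync/Delay_Req timestamps. It is a simplification that assumes a symmetric network path and ignores correction fields and the transient effects discussed above.

```python
def ptp_offset_and_delay(t1, t2, t3, t4):
    """Standard IEEE 1588 delay request-response calculation (simplified).
    t1: Sync sent by the master      t2: Sync received by the slave
    t3: Delay_Req sent by the slave  t4: Delay_Req received by the master
    All times in nanoseconds."""
    offset = ((t2 - t1) - (t4 - t3)) / 2.0          # slave clock minus master clock
    mean_path_delay = ((t2 - t1) + (t4 - t3)) / 2.0  # one-way network delay
    return offset, mean_path_delay

# Example: a slave running 250 ns ahead of the master over a 5000 ns path.
print(ptp_offset_and_delay(0, 5250, 10000, 14750))  # -> (250.0, 5000.0)
```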
Abstract:
In this paper, we seek to expand the use of direct methods in real-time applications by proposing a vision-based strategy for pose estimation of aerial vehicles. The vast majority of approaches make use of features to estimate motion. Conversely, the strategy we propose is based on an MR (Multi-Resolution) implementation of an image registration technique (Inverse Compositional Image Alignment, ICIA) using direct methods. An on-board camera in a downwards-looking configuration, and the assumption of planar scenes, are the bases of the algorithm. The motion between frames (rotation and translation) is recovered by decomposing the frame-to-frame homography obtained by the ICIA algorithm applied to a patch that covers around 80% of the image. When visual estimation is required (e.g. during a GPS drop-out), this motion is integrated with the last known estimate of the vehicle's state obtained from the on-board sensors (GPS/IMU), and subsequent estimates are based only on the vision-based motion estimation. The proposed strategy is tested with real flight data from representative stages of a flight (cruise, landing and take-off), two of which, take-off and landing, are considered critical. The performance of the pose estimation strategy is analyzed by comparing it with the GPS/IMU estimates. Results show correlation between the visual estimation obtained with MR-ICIA and the GPS/IMU data, demonstrating that the visual estimation can provide a good approximation of the vehicle's state when it is required (e.g. during GPS drop-outs). In terms of performance, the proposed strategy is able to maintain an estimate of the vehicle's state for more than one minute, at real-time frame rates, based only on visual information.
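To illustrate the decomposition step only (the paper recovers the homography with direct ICIA alignment; here the homography H and the camera matrix K are simply assumed to be available), a planar-scene homography can be decomposed into candidate rotation/translation pairs as in the sketch below. The intrinsics are illustrative values.

```python
# Sketch of the homography-decomposition step under the planar-scene assumption.
# Requires OpenCV (cv2) and NumPy; H and K are assumed given.
import numpy as np
import cv2

K = np.array([[800.0,   0.0, 320.0],   # assumed pinhole intrinsics
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

def motion_candidates(H):
    """Decompose a frame-to-frame homography into candidate (R, t, plane normal)
    triples. The physically valid candidate is normally selected with cheirality
    checks and the known downward-looking camera configuration."""
    n_solutions, rotations, translations, normals = cv2.decomposeHomographyMat(H, K)
    return list(zip(rotations, translations, normals))

# Example with an identity homography (no motion): candidates have R = I, t = 0.
for R, t, n in motion_candidates(np.eye(3)):
    print(np.round(R, 3), t.ravel(), n.ravel())
```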
Abstract:
To sustain the ongoing rapid growth of video information, there is an emerging demand for sophisticated content-based video indexing systems. However, current video indexing solutions are still immature and lack any standard. This doctoral research consists of work on an integrated multi-modal approach for sports video indexing and retrieval. By combining specific features extractable from multiple audio-visual modalities, generic structure and specific events can be detected and classified. During browsing and retrieval, users will benefit from the integration of high-level semantics and descriptive mid-level features such as whistles and close-up views of players.
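As a hypothetical sketch of this kind of multi-modal fusion (the feature names, weights and threshold are illustrative assumptions, not the thesis's actual method), per-segment scores from different modalities can be combined to flag candidate events:

```python
# Hypothetical fusion of mid-level features from audio (whistle detection) and
# video (close-up ratio) modalities; weights and threshold are assumptions.
def fuse_segment(whistle_score, closeup_ratio, w_audio=0.6, w_video=0.4, threshold=0.5):
    """Return (fused score, is_candidate_event) for one video segment."""
    fused = w_audio * whistle_score + w_video * closeup_ratio
    return fused, fused >= threshold

print(fuse_segment(whistle_score=0.9, closeup_ratio=0.7))   # strong evidence of an event
print(fuse_segment(whistle_score=0.1, closeup_ratio=0.2))   # unlikely to be an event
```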
Abstract:
With the emergence of Web 2.0, Web users can classify Web items of interest to them by using tags. Tags reflect users' understanding of the items collected under each tag. Exploring user tagging behavior provides a promising way to understand users' information needs. However, this free and relatively uncontrolled vocabulary has drawbacks in terms of lack of standardization and semantic ambiguity. Moreover, the relationships among tags have not been explored, even though rich relationships exist among tags which could provide valuable information for better understanding users. In this paper, we propose a novel approach to constructing a tag ontology based on the widely used general ontology WordNet to capture the semantics and the structural relationships of tags. Tag ambiguity is a challenging problem that must be dealt with in order to construct a high-quality tag ontology. We propose strategies to find the semantic meanings of tags and a strategy to disambiguate the semantics of tags based on the opinions of WordNet lexicographers. In order to evaluate the usefulness of the constructed tag ontology, we apply the extracted tag ontology in a tag recommendation experiment. We believe this is the first application of a tag ontology to recommendation making. The initial results show that by using the tag ontology to re-rank the recommended tags, the accuracy of the tag recommendation can be improved.
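As a small illustration of the raw material such a disambiguation strategy works with (this is not the paper's algorithm, only one way to obtain each sense's lexicographer category with NLTK), a tag's candidate WordNet senses and their lexicographer files can be listed as follows:

```python
# Illustrative only. Requires: pip install nltk, then nltk.download('wordnet').
from nltk.corpus import wordnet as wn

def candidate_senses(tag):
    """Return (synset name, lexicographer file, definition) for each sense of a tag.
    The lexicographer file is the coarse category assigned by WordNet lexicographers
    and is the kind of signal that can be used to disambiguate a tag's meaning."""
    return [(s.name(), s.lexname(), s.definition()) for s in wn.synsets(tag)]

for name, lexname, gloss in candidate_senses("apple"):
    print(name, lexname, gloss)
```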
Abstract:
Computer vision is an attractive solution for uninhabited aerial vehicle (UAV) collision avoidance, due to the low weight, size and power requirements of the hardware. A two-stage paradigm has emerged in the literature for the detection and tracking of dim targets in images, comprising spatial preprocessing followed by temporal filtering. In this paper, we investigate a hidden Markov model (HMM) based temporal filtering approach. Specifically, we propose an adaptive HMM filter in which the variance of the model parameters is refined as the quality of the target estimate improves. Filters with high variance (fat filters) are used for target acquisition, and filters with low variance (thin filters) are used for target tracking. The adaptive filter is tested in simulation and with real data (video of a collision-course aircraft). Our test results demonstrate that the adaptive filtering approach improves tracking performance and provides an estimate of target heading not available in previous HMM filtering approaches.
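The fat/thin idea can be sketched schematically as follows; the 1-D grid, Gaussian transition kernels, quality measure and switching threshold are illustrative assumptions rather than the parameters used in the paper.

```python
# Schematic sketch: an HMM forward filter over candidate target positions whose
# transition spread is switched between a fat (acquisition) and thin (tracking) kernel.
import numpy as np

N = 200
FAT_SIGMA, THIN_SIGMA = 8.0, 1.5   # wide kernel for acquisition, narrow for tracking
SWITCH_CONFIDENCE = 0.2            # posterior peak above which the thin filter is used

def transition_kernel(sigma):
    x = np.arange(-25, 26)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def hmm_forward_step(posterior, likelihood, sigma):
    """One HMM filter recursion: diffuse the posterior with a Gaussian transition
    kernel of the given spread, weight by the per-cell measurement likelihood,
    and renormalise."""
    predicted = np.convolve(posterior, transition_kernel(sigma), mode="same")
    updated = predicted * likelihood
    return updated / updated.sum()

posterior = np.full(N, 1.0 / N)                 # uniform prior during acquisition
for likelihood in np.random.rand(50, N) + 0.1:  # stand-in for per-frame detection scores
    sigma = THIN_SIGMA if posterior.max() > SWITCH_CONFIDENCE else FAT_SIGMA
    posterior = hmm_forward_step(posterior, likelihood, sigma)
```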
Abstract:
The University of Queensland has recently established a new design-focused, studio-based computer science degree. The Bachelor of Information Environments degree augments the core courses from the University's standard CS degree with a stream of design courses and integrative studio-based projects undertaken every semester. The studio projects integrate and reinforce learning by requiring students to apply the knowledge and skills gained in other courses to open-ended real-world design projects. The studio model is based on the architectural studio and involves teamwork, collaborative learning, interactive problem solving, presentations and peer review. This paper describes the degree program, its curriculum and rationale, and reports on experiences in the first year of delivery.