Abstract:
Supervisory Control and Data Acquisition (SCADA) systems are one of the key foundations of smart grids. The Distributed Network Protocol version 3 (DNP3) is a standard SCADA protocol designed to facilitate communications in substations and smart grid nodes. The protocol is embedded with a security mechanism called Secure Authentication (DNP3-SA). This mechanism ensures that end-to-end communication security is provided in substations. This paper presents a formal model for the behavioural analysis of DNP3-SA using Coloured Petri Nets (CPN). Our DNP3-SA CPN model is capable of testing and verifying various attack scenarios: modification, replay and spoofing attacks, a combined complex attack, and mitigation strategies. Using the model has revealed a previously unidentified flaw in the DNP3-SA protocol that can be exploited by an attacker with access to the network interconnecting DNP3 devices. An attacker can launch a successful attack on an outstation without possessing the pre-shared keys by replaying a previously authenticated command with arbitrary parameters. We propose an update to the DNP3-SA protocol that removes the flaw and prevents such attacks. The update is validated and verified using our CPN model, demonstrating the effectiveness of the model and the importance of formal protocol analysis.
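The kind of replay weakness described in this abstract can be illustrated with a minimal challenge-response toy model. The following Python sketch is hypothetical: the `Outstation` class, the key and challenge sizes, and the never-refreshed challenge are invented for illustration and do not reproduce the actual DNP3-SA message format or the exact flaw found in the paper. It shows only the general principle that an attacker who records a (command, MAC) pair can replay it without knowing the pre-shared key if the authentication state is not refreshed.

```python
import hashlib
import hmac
import os

# Toy pre-shared session key; the attacker never learns it
KEY = os.urandom(32)

def respond(key, challenge, command):
    """Master's response: a MAC over the challenge and the command."""
    return hmac.new(key, challenge + command, hashlib.sha256).digest()

class Outstation:
    def __init__(self, key):
        self.key = key
        self.challenge = os.urandom(4)  # toy nonce; never refreshed (the bug)

    def verify(self, command, mac):
        expected = hmac.new(self.key, self.challenge + command,
                            hashlib.sha256).digest()
        return hmac.compare_digest(expected, mac)

# Legitimate exchange, observed by an attacker on the network
out = Outstation(KEY)
cmd = b"OPERATE breaker=3"
mac = respond(KEY, out.challenge, cmd)
assert out.verify(cmd, mac)   # genuine command accepted

# Because the challenge is never refreshed, the captured pair
# can simply be replayed later, without any knowledge of KEY
assert out.verify(cmd, mac)   # replay also accepted
```

A fix in this toy model is to draw a fresh challenge after every verification, which is the general shape of the mitigation an updated protocol would enforce.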
Abstract:
The Australian Naturalistic Driving Study (ANDS), a ground-breaking study of Australian driver behaviour and performance, was officially launched on April 21st, 2015 at UNSW. The ANDS project will provide a realistic perspective on the causes of vehicle crashes and near-miss crash events, along with the roles speeding, distraction and other factors play in such events. A total of 360 volunteer drivers across NSW and Victoria - 180 in NSW and 180 in Victoria - will be monitored by a Data Acquisition System (DAS) that continuously records their driving behaviour for 4 months using a suite of cameras and sensors. Participants' driving behaviour (e.g. gaze), the behaviour of their vehicle (e.g. speed, lane position) and the behaviour of other road users with whom they interact in normal and safety-critical situations will be recorded. Planning of the ANDS commenced over two years earlier, in June 2013, when the Multi-Institutional Agreement for a grant supporting the equipment purchase and assembly phase was signed by the parties involved in this large-scale $4 million study (5 university accident research centres, 3 government regulators, 2 third-party insurers and 2 industry partners). The program's second development phase commenced a year later in June 2014 after a second grant was awarded. This paper presents an insider's view of the two-year process leading up to the launch, and outlines issues that arose in the set-up phase of the study and how these were addressed. This information will be useful to other organisations considering setting up an NDS.
Abstract:
This case study examines innovative experimentation with mobile and cloud-based technologies, utilising "Guerrilla Research Tactics" (GRT), as a means of covertly retrieving data from the urban fabric. Originally triggered by participatory action research (Kindon et al., 2008) and unobtrusive research methods (Kellehear, 1993), the potential for GRT lies in its innate ability to offer researchers an alternative, creative approach to data acquisition, while simultaneously allowing them to engage with the public, who are active co-creators of knowledge. Its key characteristics are a political agenda, the unexpected and the unconventional, which allow for an interactive, unique and thought-provoking experience for both researcher and participant.
Abstract:
Real-time locating systems (RTLSs) are considered an effective way to identify and track the location of an object in both indoor and outdoor environments. Various RTLSs have been developed and made commercially available in recent years. Research into RTLSs in the construction sector is ubiquitous, and results have been published in many construction-related academic journals over the past decade. A succinct and systematic review of current applications would help academics, researchers and industry practitioners identify existing research deficiencies and therefore future research directions. However, such a review is lacking to date. This paper provides a framework for understanding RTLS research and development in the construction literature over the last decade. Background information relating to construction RTLS trends, accuracy, deployment, cost, purposes, advantages and limitations is provided. Four major research gaps are identified, and research opportunities and directions for construction RTLS are highlighted.
Abstract:
This thesis evaluates the security of Supervisory Control and Data Acquisition (SCADA) systems, which are one of the key foundations of many critical infrastructures. Specifically, it examines one of the standardised SCADA protocols, the Distributed Network Protocol Version 3, which attempts to provide a security mechanism ensuring that messages transmitted between devices are adequately secured from rogue applications. To achieve this, the thesis applies formal methods from theoretical computer science to formally analyse the correctness of the protocol.
Abstract:
Transit passenger market segmentation enables transit operators to target different classes of transit users to provide customized information and services. Smart Card (SC) data, from an Automated Fare Collection system, facilitate the understanding of the multiday travel regularity of transit passengers, and can be used to segment them into identifiable classes of similar behaviors and needs. However, the use of SC data for market segmentation has attracted very limited attention in the literature. This paper proposes a novel methodology for mining spatial and temporal travel regularity from each individual passenger's historical SC transactions and segmenting passengers into four classes of transit users. After reconstructing the travel itineraries from historical SC transactions, the paper adopts the Density-Based Spatial Clustering of Applications with Noise (DBSCAN) algorithm to mine the travel regularity of each SC user. The travel regularity is then used to segment SC users by an a priori market segmentation approach. The methodology proposed in this paper assists transit operators in understanding their passengers and providing them with tailored information and services.
Abstract:
Transit passenger market segmentation enables transit operators to target different classes of transit users for targeted surveys and various operational and strategic planning improvements. However, the existing market segmentation studies in the literature have generally been done using passenger surveys, which have various limitations. The smart card (SC) data from an automated fare collection system facilitate the understanding of the multiday travel pattern of transit passengers and can be used to segment them into identifiable types of similar behaviors and needs. This paper proposes a comprehensive methodology for passenger segmentation solely using SC data. After reconstructing the travel itineraries from SC transactions, this paper adopts the density-based spatial clustering of applications with noise (DBSCAN) algorithm to mine the travel pattern of each SC user. An a priori market segmentation approach then segments transit passengers into four identifiable types. The methodology proposed in this paper assists transit operators in understanding their passengers and providing them with tailored information and services.
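To illustrate how DBSCAN can mine travel regularity from reconstructed itineraries, here is a minimal, self-contained Python sketch. The pure-Python DBSCAN below, the toy (hour-of-day, stop id) trip records, and the `eps`/`min_pts` values are illustrative assumptions rather than the paper's implementation; the point is that habitual trips fall into dense clusters while one-off trips are labelled noise.

```python
from math import dist

def dbscan(points, eps, min_pts):
    """Minimal DBSCAN: returns one label per point (-1 = noise)."""
    labels = [None] * len(points)
    cluster = -1

    def neighbours(i):
        # Brute-force range query; includes the point itself
        return [j for j in range(len(points)) if dist(points[i], points[j]) <= eps]

    for i in range(len(points)):
        if labels[i] is not None:
            continue
        nbrs = neighbours(i)
        if len(nbrs) < min_pts:
            labels[i] = -1          # provisionally noise
            continue
        cluster += 1                # i is a core point: start a new cluster
        labels[i] = cluster
        seeds = list(nbrs)
        while seeds:
            j = seeds.pop()
            if labels[j] == -1:
                labels[j] = cluster  # border point: claim it, don't expand
            if labels[j] is not None:
                continue
            labels[j] = cluster
            jn = neighbours(j)
            if len(jn) >= min_pts:   # j is also core: keep expanding
                seeds.extend(jn)
    return labels

# Toy smart-card boarding records: (hour of day, stop id) per trip
trips = [(7.9, 12), (8.0, 12), (8.1, 12),     # regular morning commute
         (17.5, 40), (17.6, 40), (17.4, 40),  # regular evening return
         (13.0, 99)]                          # one-off midday trip -> noise
labels = dbscan(trips, eps=0.5, min_pts=2)    # -> [0, 0, 0, 1, 1, 1, -1]
```

The two dense clusters correspond to the passenger's regular morning and evening trips; the single irregular trip is flagged as noise, which is exactly the regularity signal an a priori segmentation scheme can then classify on.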
Abstract:
Smart Card Automated Fare Collection (AFC) data have been extensively exploited to understand passenger behavior, passenger segmentation and trip purpose, and to improve transit planning through spatial travel pattern analysis. The literature has been evolving from simple to more sophisticated methods, such as from aggregated to individual travel pattern analysis, and from stop-to-stop to flexible stop aggregation. However, the issue of high computing complexity has limited these methods in practical applications. This paper proposes a new algorithm named Weighted Stop Density Based Scanning Algorithm with Noise (WS-DBSCAN), based on the classical density-based spatial clustering of applications with noise (DBSCAN) algorithm, to detect and update daily changes in travel patterns. WS-DBSCAN reduces the classical DBSCAN's quadratic computational complexity to a sub-quadratic one. A numerical experiment using real AFC data from South East Queensland, Australia, shows that the algorithm requires only 0.45% of the computation time of classical DBSCAN while providing the same clustering results.
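The abstract does not give the WS-DBSCAN algorithm itself, but the core weighting idea can be sketched. In this hypothetical Python fragment (the stop names, transaction counts and `min_weight` threshold are invented for illustration), repeated transactions at the same stop collapse into one weighted point, which shrinks the input any pairwise-distance clustering has to scan:

```python
from collections import Counter

# Raw smart-card boardings: transactions repeat the same few stops
boardings = ["S1"] * 50 + ["S2"] * 30 + ["S7"]

# Weighted representation: one point per distinct stop, weight = count
weighted = Counter(boardings)          # {'S1': 50, 'S2': 30, 'S7': 1}

# A stop is "dense" (core) when the summed weight of its neighbourhood
# reaches min_weight; here the neighbourhood is just the stop itself,
# for simplicity
min_weight = 5
core_stops = {s for s, w in weighted.items() if w >= min_weight}

# Pairwise distance work now scales with the number of distinct stops
# (m = 3) rather than the number of raw transactions (n = 81)
n, m = len(boardings), len(weighted)
```

With density computed from summed weights, the clustering result over the weighted points matches what the classical algorithm would produce over the raw transactions, which is consistent with the paper's report of identical clusters at a fraction of the computation time.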
Abstract:
Designers need to develop good observational skills in order to conduct user studies that reveal the subtleties of human interactions and adequately inform design activity. In this paper we describe a game format that we have used in concert with wiki-web technology to engage our IT and Information Environments students in developing much sharper observational skills. The Video Card Game is a method of video analysis that is suited to design practitioners as well as to researchers. It uses the familiar format of a card game similar to "Happy Families" to help students develop themes of interactions from watching video clips. Students then post their interaction themes on wiki-web pages, which allows the teaching team and other students to edit and comment on them. We found that the tangible (cards), game, role-playing and sharing aspects of this method led to much more interaction and discussion between student groups and between students and the teaching team than we have achieved using our traditional teaching methods, while taking no more time on the part of the teaching staff. The quality of the resulting interaction themes indicates that this method fosters the development of observational skills. In the paper we describe the motivations, method and results in full. We also describe the research context in which we collected the videotape data, and how this method relates to state-of-the-art research methods in interaction design for ubiquitous computing technology.
Abstract:
In daily activities people use a number of available means for the achievement of balance, such as the use of the hands and the co-ordination of balance. One of the approaches that explains this relationship between perception and action is the ecological theory, which is based on the work of a) Bernstein (1967), who posed the problem of 'the degrees of freedom'; b) Gibson (1979), who set out a theory of perception and the way in which information is received from the environment in order for a certain movement to be achieved; c) Newell (1986), who proposed that movement can derive from the interaction of the constraints imposed by the environment and the organism; and d) Kugler, Kelso and Turvey (1982), who showed the way in which 'the degrees of freedom' are connected and interact. According to the above-mentioned theories, the development of movement co-ordination can result from the different constraints imposed on the organism-environment system. The close relation between the environmental and organismic constraints, as well as their interaction, determines which movement system will be activated. These constraints, apart from shaping the co-ordination of specific movements, can be a rate-limiting factor, to a certain degree, in the acquisition and mastering of a new skill. This framework can be an essential tool for the study of catching an object (e.g., a ball). The importance of this study becomes obvious given that the movements involved in catching an object are representative of everyday actions and characteristic of the interaction between perception and action.
Abstract:
Aim: To investigate workplace cultures in the acquisition of computer usage skills by mature age workers. Methods: Data were gathered through focus groups conducted at job network centres in the Greater Brisbane metropolitan region. Participants were a mixture of workers and job-seekers. Results: The results suggest that mature age workers can be exposed to inappropriate computer training practices and age-insensitive attitudes towards those with low base computer skills. Conclusions: There is a need for managers to be observant of ageist attitudes in the workplace and to develop age-sensitive strategies to help mature age workers learn computer usage skills. Mature age workers also need to develop skills in ways which are practical and meaningful to their work.
Abstract:
Forensic analysis requires the acquisition and management of many different types of evidence, including individual disk drives, RAID sets, network packets, memory images, and extracted files. Often the same evidence is reviewed by several different tools or examiners in different locations. We propose a backwards-compatible redesign of the Advanced Forensic Format, an open, extensible file format for storing and sharing evidence, arbitrary case-related information and analysis results among different tools. The new specification, termed AFF4, is designed to be simple to implement, and is built upon the well-supported ZIP file format specification. Furthermore, the AFF4 implementation has backward compatibility with existing AFF files.
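Because AFF4 is built on the ZIP specification, any standard ZIP library can read an AFF4-style container. The Python sketch below is only illustrative: the member names, metadata layout and hash sidecar are invented for this example and do not follow the actual AFF4 naming scheme; it simply shows how a ZIP container can hold evidence, case information and analysis results side by side.

```python
import hashlib
import io
import zipfile

# Stand-in for a disk image segment (illustrative only)
evidence = b"\x00" * 512

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as z:
    # Evidence stream
    z.writestr("evidence/disk0.raw", evidence)
    # Case-related information and an analysis result live beside it
    z.writestr("metadata/case.txt", "examiner=alice\nsource=drive-42\n")
    z.writestr("metadata/disk0.sha256", hashlib.sha256(evidence).hexdigest())

# Any standard ZIP tool can list or extract the container
with zipfile.ZipFile(io.BytesIO(buf.getvalue())) as z:
    names = z.namelist()
    stored = z.read("evidence/disk0.raw")
```

Basing the container on ZIP is what gives the format its "simple to implement" property: existing, well-tested ZIP readers handle the outer structure, and only the member semantics are format-specific.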
Abstract:
Traditionally, the acquisition of skills and sport movements has been characterised by numerous repetitions of a presumed model movement pattern to be acquired by learners. This approach has been questioned by research identifying the presence of individualised movement patterns and the low probability of occurrence of two identical movements within and between individuals. In contrast, the differential learning approach claims advantages for incorporating variability in the learning process by adding stochastic perturbations during practice. These ideas are exemplified by data from a high jump experiment which compared the effectiveness of classical and differential training approaches with a pre-post test design. Results showed clear advantages for the group with additional stochastic perturbation during the acquisition phase in comparison to classically trained athletes. Analogies to similar phenomenological effects in the neurobiological literature are discussed.
Abstract:
Aims: To develop clinical protocols for acquiring PET images, performing CT-PET registration and tumour volume definition based on the PET image data, for radiotherapy for lung cancer patients and then to test these protocols with respect to levels of accuracy and reproducibility. Method: A phantom-based quality assurance study of the processes associated with using registered CT and PET scans for tumour volume definition was conducted to: (1) investigate image acquisition and manipulation techniques for registering and contouring CT and PET images in a radiotherapy treatment planning system, and (2) determine technology-based errors in the registration and contouring processes. The outcomes of the phantom image based quality assurance study were used to determine clinical protocols. Protocols were developed for (1) acquiring patient PET image data for incorporation into the 3DCRT process, particularly for ensuring that the patient is positioned in their treatment position; (2) CT-PET image registration techniques and (3) GTV definition using the PET image data. The developed clinical protocols were tested using retrospective clinical trials to assess levels of inter-user variability which may be attributed to the use of these protocols. A Siemens Somatom Open Sensation 20 slice CT scanner and a Philips Allegro stand-alone PET scanner were used to acquire the images for this research. The Philips Pinnacle3 treatment planning system was used to perform the image registration and contouring of the CT and PET images. Results: Both the attenuation-corrected and transmission images obtained from standard whole-body PET staging clinical scanning protocols were acquired and imported into the treatment planning system for the phantom-based quality assurance study. Protocols for manipulating the PET images in the treatment planning system, particularly for quantifying uptake in volumes of interest and window levels for accurate geometric visualisation were determined. 
The automatic registration algorithms were found to have sub-voxel levels of accuracy, with transmission scan-based CT-PET registration more accurate than emission scan-based registration of the phantom images. Respiration-induced image artifacts were not found to influence registration accuracy, while inadequate pre-registration overlap of the CT and PET images was found to result in large registration errors. A threshold value based on a percentage of the maximum uptake within a volume of interest was found to accurately contour the different features of the phantom despite the lower spatial resolution of the PET images. Appropriate selection of the threshold value is dependent on target-to-background ratios and the presence of respiratory motion. The results from the phantom-based study were used to design, implement and test clinical CT-PET fusion protocols. The patient PET image acquisition protocols enabled patients to be successfully identified and positioned in their radiotherapy treatment position during the acquisition of their whole-body PET staging scan. While automatic registration techniques were found to reduce inter-user variation compared to manual techniques, there was no significant difference in the registration outcomes for transmission or emission scan-based registration of the patient images using the protocol. Tumour volumes contoured on registered patient CT-PET images, using the tested threshold values and viewing windows determined from the phantom study, demonstrated less inter-user variation for the primary tumour volume contours than those contoured using only the patient's planning CT scans. Conclusions: The developed clinical protocols allow a patient's whole-body PET staging scan to be incorporated, manipulated and quantified in the treatment planning process to improve the accuracy of gross tumour volume localisation in 3D conformal radiotherapy for lung cancer.
Image registration protocols which factor in potential software-based errors, combined with adequate user training, are recommended to increase the accuracy and reproducibility of registration outcomes. A semi-automated adaptive threshold contouring technique incorporating a PET windowing protocol accurately defines the geometric edge of a tumour volume using PET image data from a stand-alone PET scanner, including 4D target volumes.
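The percentage-of-maximum threshold contouring described in this abstract can be sketched in a few lines. In this Python toy example, the uptake grid and the 40% fraction are invented values for illustration, not the clinically validated thresholds from the study; it only shows the mechanics of finding the peak uptake in a volume of interest and keeping every voxel at or above the chosen fraction of it.

```python
# Toy uptake grid (SUV-like values); real data would come from the PET volume
uptake = [
    [1.0, 1.2, 1.1, 1.0],
    [1.1, 8.0, 9.5, 1.2],
    [1.0, 7.8, 10.0, 1.1],
    [1.2, 1.0, 1.1, 1.0],
]

fraction = 0.4  # threshold as a fraction of maximum uptake in the VOI
peak = max(v for row in uptake for v in row)
threshold = fraction * peak  # 0.4 * 10.0 = 4.0 here

# Voxels at or above the threshold form the contoured tumour volume
contour = [[v >= threshold for v in row] for row in uptake]
n_voxels = sum(v for row in contour for v in row)  # 4 hot voxels
```

As the abstract notes, the appropriate fraction depends on the target-to-background ratio and respiratory motion, which is why a fixed value cannot simply be reused across scanners or patients.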
Abstract:
Data breach notification laws require organisations to notify affected persons or regulatory authorities when an unauthorised acquisition of personal data occurs. Most laws provide a safe harbour from this obligation if the acquired data has been encrypted. There are three types of safe harbour: an exemption, a rebuttable presumption, and a factor-based analysis. We demonstrate, using three condition-based scenarios, that the broad formulation of most encryption safe harbours is based on the flawed assumption that encryption is a silver bullet for personal information protection. We then contend that reliance upon an encryption safe harbour should be dependent upon a rigorous and competent risk-based review, required on a case-by-case basis. Finally, we recommend the use of both an encryption safe harbour and a notification trigger as our preferred choice for a data breach notification regulatory framework.