Abstract:
The IEC 61850 family of standards for substation communication systems was released in the early 2000s and includes IEC 61850-8-1 and IEC 61850-9-2, which enable Ethernet to be used for process-level connections between transmission substation switchyards and control rooms. This paper presents an investigation of process bus protection performance, as the in-service behavior of multi-function process buses is largely unknown. An experimental approach was adopted that used a Real Time Digital Simulator and 'live' substation automation devices. The effect of sampling synchronization error and network traffic on transformer differential protection performance was assessed and compared to conventional hard-wired connections. Ethernet was used for all sampled value measurements, circuit breaker tripping, transformer tap-changer position reports and Precision Time Protocol synchronization of sampled value merging unit sampling. Test results showed that the protection relay under investigation operated correctly with process bus network traffic approaching 100% capacity. The protection system was not adversely affected by synchronizing errors significantly larger than the standards permit, suggesting these requirements may be overly conservative. This 'closed loop' approach, using substation automation hardware, validated the operation of protection relays under extreme conditions. Digital connections using a single shared Ethernet network outperformed conventional hard-wired solutions.
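As a rough illustration of the traffic loading discussed above, the sketch below estimates how much of a 100 Mb/s process bus link a given number of sampled value streams would consume. The sample rate and frame size are assumptions typical of IEC 61850-9-2LE style streams, not figures taken from the paper.

```python
# Back-of-envelope estimate of process bus load from sampled value (SV) streams.
# The per-stream parameters below are assumptions (typical of 9-2LE style streams),
# not figures reported in the paper.

def sv_link_utilisation(n_streams,
                        samples_per_cycle=80,     # assumed protection-class sample rate
                        system_freq_hz=50,
                        frame_bytes=126,          # assumed SV Ethernet frame size
                        link_mbps=100):
    """Return the fraction of the link consumed by n_streams SV streams."""
    frames_per_second = samples_per_cycle * system_freq_hz          # per stream
    bits_per_second = n_streams * frames_per_second * frame_bytes * 8
    return bits_per_second / (link_mbps * 1e6)

if __name__ == "__main__":
    for n in (1, 4, 8, 16):
        print(f"{n:2d} streams -> {sv_link_utilisation(n):5.1%} of a 100 Mb/s link")
```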
Abstract:
Today, the majority of semiconductor fabrication plants (fabs) conduct equipment preventive maintenance based on statistically derived time- or wafer-count-based intervals. While these practices have had relative success in managing equipment availability and product yield, the cost, both in time and materials, remains high. Condition-based maintenance has been successfully adopted in several industries where the costs associated with equipment downtime range from potential loss of life to unacceptable effects on companies' bottom lines. In this paper, we present a method for monitoring complex systems in the presence of multiple operating regimes. In addition, the new representation of degradation processes is used to define an optimization procedure that facilitates concurrent maintenance and operational decision-making in a manufacturing system. This decision-making procedure metaheuristically maximizes a customizable cost function that reflects the benefits of production uptime and the losses incurred due to deficient quality and downtime. The new degradation monitoring method is illustrated through the monitoring of a deposition tool operating over a prolonged period of time in a major fab, while the operational decision-making is demonstrated using the simulated operation of a generic cluster tool.
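The sketch below illustrates the kind of uptime/quality/downtime trade-off such a cost function can encode, searched with a simple metaheuristic. The decision variables, cost terms and hill-climbing move are illustrative assumptions, not the paper's formulation.

```python
# Illustrative cost function over a maintenance schedule, maximised with a
# simple hill-climbing metaheuristic. All constants are assumed for illustration.
import random

HORIZON = 20                 # planning periods
UPTIME_BENEFIT = 100.0       # reward per productive period (assumed)
DOWNTIME_COST = 60.0         # cost of a maintenance period (assumed)
QUALITY_LOSS_RATE = 8.0      # loss per unit of accumulated degradation (assumed)

def objective(schedule):
    """schedule[t] == 1 means maintain in period t, 0 means keep producing."""
    degradation, total = 0.0, 0.0
    for maintain in schedule:
        if maintain:
            total -= DOWNTIME_COST
            degradation = 0.0                      # maintenance restores the tool
        else:
            total += UPTIME_BENEFIT - QUALITY_LOSS_RATE * degradation
            degradation += 1.0                     # condition worsens with use
    return total

def hill_climb(iterations=2000):
    schedule = [0] * HORIZON
    best = objective(schedule)
    for _ in range(iterations):
        cand = schedule[:]
        cand[random.randrange(HORIZON)] ^= 1       # flip one maintain/run decision
        if objective(cand) >= best:
            schedule, best = cand, objective(cand)
    return schedule, best

if __name__ == "__main__":
    sched, value = hill_climb()
    print("maintain in periods:", [t for t, m in enumerate(sched) if m], "value:", round(value, 1))
```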
Abstract:
This paper discusses findings made during a study of energy use feedback in the home (eco-feedback), well after the novelty has worn off. Addressing four important knowledge gaps in the research, we explore eco-feedback over longer time scales, focusing on instances where the feedback was not of lasting benefit to users rather than on those where it was. Drawing from 23 semi-structured interviews with Australian householders, we found that an initially high level of engagement gave way over time to disinterest, neglect and, in certain cases, technical malfunction. Additionally, preconceptions about the “purpose” of the feedback were found to affect use. We propose expanding the scope of enquiry for eco-feedback in several ways, and describe how eco-feedback that better supports decision-making in the “maintenance phase”, i.e. once the initial novelty has worn off, may be key to longer-term engagement.
Abstract:
With the implementation of the Personally Controlled eHealth Records system (PCEHR) in Australia, shared Electronic Health Records (EHRs) are now a reality. However, a defining characteristic of the PCEHR is that it puts the consumer (i.e. the patient) in control of managing his or her health information. This prevents healthcare professionals (HCPs) from utilising it as a one-stop shop for information at the point of care, as they cannot trust that a complete record of the consumer's health history is available to them through it. As a result, whilst it marks a major milestone in Australia's eHealth journey, the PCEHR does not deliver the full benefits that such a shared EHR system can offer.
Abstract:
The huge amount of CCTV footage available makes it very burdensome for human operators to process these videos manually, making automated processing of video footage through computer vision technologies necessary. Over the past several years, there has been a large effort to detect abnormal activities with computer vision techniques. Typically, the problem is formulated as a novelty detection task in which the system is trained on normal data and is required to detect events that do not fit the learned 'normal' model. There is no precise definition of an abnormal activity; it depends on the context of the scene, and hence different feature sets are required to detect different kinds of abnormal activities. In this work we evaluate the performance of different state-of-the-art features for detecting the presence of abnormal objects in the scene: optical flow vectors to detect motion-related anomalies, and textures of optical flow and image textures to detect the presence of abnormal objects. These extracted features, in different combinations, are modeled using state-of-the-art models such as the Gaussian mixture model (GMM) and the semi-2D hidden Markov model (HMM) to compare their performance. Further, we apply perspective normalization to the extracted features to compensate for perspective distortion due to the distance between the camera and the objects of interest. The proposed approach is evaluated using the publicly available UCSD datasets, and we demonstrate improved performance compared to other state-of-the-art methods.
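A minimal sketch of the novelty-detection formulation described above: fit a Gaussian mixture model to features extracted from normal footage and flag test features whose likelihood falls below a threshold learned from the training data. Feature extraction itself is omitted; the feature dimensionality, mixture size and threshold percentile are illustrative assumptions.

```python
# Novelty detection with a GMM: train on "normal" feature vectors (e.g. per-region
# optical flow statistics), then flag low-likelihood test vectors as abnormal.
import numpy as np
from sklearn.mixture import GaussianMixture

def train_normal_model(normal_features, n_components=5):
    gmm = GaussianMixture(n_components=n_components, covariance_type="full",
                          random_state=0).fit(normal_features)
    # Threshold at the 1st percentile of training log-likelihoods (assumed choice).
    threshold = np.percentile(gmm.score_samples(normal_features), 1.0)
    return gmm, threshold

def is_abnormal(gmm, threshold, feature_vector):
    return gmm.score_samples(feature_vector.reshape(1, -1))[0] < threshold

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    normal = rng.normal(0.0, 1.0, size=(2000, 8))            # stand-in flow features
    gmm, thr = train_normal_model(normal)
    print(is_abnormal(gmm, thr, rng.normal(0.0, 1.0, 8)))     # usually False
    print(is_abnormal(gmm, thr, rng.normal(6.0, 1.0, 8)))     # usually True
```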
Abstract:
Business Process Management (BPM) is accepted globally as an organizational approach to enhance productivity and drive cost efficiencies. Studies confirm a shortage of BPM-skilled professionals and limited opportunities to develop the required BPM expertise. This study investigates this gap, starting with a critical analysis of BPM courses offered by Australian universities and training institutions. These courses were analyzed and mapped against a leading BPM capability framework to determine how well current BPM education and training offerings in Australia address the core capabilities required of BPM professionals globally. To determine the BPM skill sets sought by industry, online recruitment advertisements were collated, analyzed, and mapped against the same BPM capability framework. The outcomes provide a detailed overview of the alignment between available BPM education/training and industry demand. These insights help BPM professionals and their employers build awareness of the BPM capabilities required for a BPM-mature organization. Universities and other training institutions will benefit from these results by understanding where demand lies, where the gaps are, and what other BPM education providers are supplying. This structured comparison method could continue to provide common ground for future discussion across university-industry boundaries and for the continuous alignment of their respective practices.
Abstract:
As support grows for greater access to information and data held by governments, so does awareness of the need for appropriate policy, technical and legal frameworks to achieve the desired economic and societal outcomes. Since the late 2000s numerous international organizations, inter-governmental bodies and governments have issued open government data policies, which set out key principles underpinning access to, and the release and reuse of data. These policies reiterate the value of government data and establish the default position that it should be openly accessible to the public under transparent and non-discriminatory conditions, which are conducive to innovative reuse of the data. A key principle stated in open government data policies is that legal rights in government information must be exercised in a manner that is consistent with and supports the open accessibility and reusability of the data. In particular, where government information and data is protected by copyright, access should be provided under licensing terms which clearly permit its reuse and dissemination. This principle has been further developed in the policies issued by Australian Governments into a specific requirement that Government agencies are to apply the Creative Commons Attribution licence (CC BY) as the default licensing position when releasing government information and data. A wide-ranging survey of the practices of Australian Government agencies in managing their information and data, commissioned by the Office of the Australian Information Commissioner in 2012, provides valuable insights into progress towards the achievement of open government policy objectives and the adoption of open licensing practices. The survey results indicate that Australian Government agencies are embracing open access and a proactive disclosure culture and that open licensing under Creative Commons licences is increasingly prevalent. However, the finding that ‘[t]he default position of open access licensing is not clearly or robustly stated, nor properly reflected in the practice of Government agencies’ points to the need to further develop the policy framework and the principles governing information access and reuse, and to provide practical guidance tools on open licensing if the broadest range of government information and data is to be made available for innovative reuse.
Abstract:
A major challenge for robot localization and mapping systems is maintaining reliable operation in a changing environment. Vision-based systems in particular are susceptible to changes in illumination and weather, and the same location at another time of day may appear radically different to a feature-based visual localization system. One approach for mapping changing environments is to create and maintain maps that contain multiple representations of each physical location in a topological framework or manifold. However, this requires the system to correctly link two or more appearance representations to the same spatial location, even though the representations may appear quite dissimilar. This paper proposes a method of linking visual representations from the same location without requiring a visual match, thereby allowing vision-based localization systems to create multiple appearance representations of physical locations. The most likely position on the robot's path is determined using particle filter methods based on dead reckoning data and recent visual loop closures. In order to avoid erroneous loop closures, the odometry-based inferences are only accepted when the inferred path's end point is confirmed as correct by the visual matching system. Algorithm performance is demonstrated using an indoor robot dataset and a large outdoor camera dataset.
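A sketch of the odometry-driven inference described above: particles are propagated with dead-reckoning increments and re-weighted whenever a visual loop closure provides a position fix. The noise levels and the closure likelihood model are illustrative assumptions, not the paper's parameters.

```python
# Toy particle filter: propagate (x, y) particles with noisy odometry, then
# resample against a visually confirmed loop-closure position.
import numpy as np

rng = np.random.default_rng(0)

def propagate(particles, dx, dy, odom_sigma=0.05):
    """Move each particle by the odometry increment plus Gaussian noise."""
    noise = rng.normal(0.0, odom_sigma, size=particles.shape)
    return particles + np.array([dx, dy]) + noise

def reweight_and_resample(particles, closure_xy, closure_sigma=0.5):
    """Weight particles by proximity to a visually confirmed loop closure."""
    d2 = np.sum((particles - np.array(closure_xy)) ** 2, axis=1)
    w = np.exp(-0.5 * d2 / closure_sigma ** 2)
    w /= w.sum()
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx]

if __name__ == "__main__":
    particles = np.zeros((500, 2))
    for _ in range(10):
        particles = propagate(particles, dx=1.0, dy=0.0)      # drive 1 m east per step
    particles = reweight_and_resample(particles, closure_xy=(9.7, 0.2))
    print("estimated position:", particles.mean(axis=0))
```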
Abstract:
In this paper we present a method for autonomously tuning the threshold between learning and recognizing a place in the world, based both on how the rodent brain is thought to process and calibrate multisensory data and on the pivoting movement behaviour that rodents perform while doing so. The approach makes no assumptions about the number and type of sensors, the robot platform, or the environment, relying only on the ability of a robot to perform two revolutions on the spot. In addition, it self-assesses the quality of the tuning process in order to identify situations in which tuning may have failed. We demonstrate the autonomous movement-driven threshold tuning on a Pioneer 3DX robot in eight locations spread over an office environment and a building car park, and then evaluate the mapping capability of the system on journeys through these environments. The system is able to pick a place recognition threshold that enables successful environment mapping in six of the eight locations, while autonomously flagging the tuning failure in the remaining two. We discuss how the method, in combination with parallel work on autonomous weighting of individual sensors, moves the parameter-dependent RatSLAM system significantly closer to sensor-, platform- and environment-agnostic operation.
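One plausible reading of the movement-driven tuning described above, in sketch form: during the on-the-spot revolutions the robot collects similarity scores between snapshots of the same place, places the recognition threshold just below the weakest of those within-place matches, and flags tuning failure if the scores are too inconsistent. The margin and the spread-based failure test are illustrative assumptions, not the paper's rule.

```python
# Pick a place-recognition threshold from within-place similarity scores gathered
# while the robot rotates on the spot, and self-assess whether tuning succeeded.
import numpy as np

def tune_threshold(within_place_scores, margin=0.9, max_spread=0.5):
    """Return (threshold, ok) from similarity scores gathered while rotating."""
    scores = np.asarray(within_place_scores, dtype=float)
    threshold = margin * scores.min()
    # Self-assessment: if the scores vary too much, tuning probably failed.
    ok = (scores.max() - scores.min()) <= max_spread * scores.mean()
    return threshold, ok

if __name__ == "__main__":
    print(tune_threshold([0.82, 0.78, 0.85, 0.80]))   # tight spread -> usable threshold
    print(tune_threshold([0.90, 0.35, 0.75, 0.20]))   # wide spread  -> flagged as failed
```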
Abstract:
Shadows are common in outdoor environments. These typically strong visual features cause considerable change in the appearance of a place, and therefore confound vision-based localisation approaches. In this paper we describe how to convert a colour image of the scene to a greyscale invariant image in which pixel values are a function of the underlying material properties, not the lighting. We summarise the theory of shadow invariant images and discuss the modelling and calibration issues that are important for non-ideal, off-the-shelf colour cameras. We evaluate the technique with a commonly used robotic camera and an autonomous car operating in an outdoor environment, and show that it can outperform ordinary greyscale images for the task of visual localisation.
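A minimal sketch of a shadow-invariant greyscale image of the kind summarised above: project log-chromaticity coordinates onto the direction orthogonal to the camera's illumination-variation axis. The calibration angle below is a placeholder; a real camera requires the calibration procedure the paper discusses.

```python
# Compute a 1-D illumination-invariant image from an RGB frame using
# log-chromaticity coordinates and a camera-specific invariant angle.
import numpy as np

def invariant_image(rgb, theta_rad):
    """rgb: float array (H, W, 3) with values > 0; theta_rad: calibrated angle."""
    eps = 1e-6
    r, g, b = rgb[..., 0] + eps, rgb[..., 1] + eps, rgb[..., 2] + eps
    chi1 = np.log(r / g)                 # log-chromaticity coordinates
    chi2 = np.log(b / g)
    # Project onto the axis along which lighting changes are (ideally) removed.
    return chi1 * np.cos(theta_rad) + chi2 * np.sin(theta_rad)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frame = rng.uniform(0.05, 1.0, size=(4, 4, 3))            # stand-in colour image
    print(invariant_image(frame, theta_rad=np.deg2rad(60.0)))  # hypothetical angle
```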
Abstract:
This paper describes the theory and practice of stable haptic teleoperation of a flying vehicle. It extends the passivity-based control framework for haptic teleoperation of aerial vehicles to the longest intercontinental setting, which presents great challenges. The practicality of the control architecture is demonstrated in maneuvering and obstacle-avoidance tasks over the internet in the presence of significant time-varying delays and packet losses. Experimental results are presented for teleoperation of a slave quadrotor in Australia from a master station in the Netherlands. The results show that the remote operator is able to safely maneuver the flying vehicle through a structure using haptic feedback of the state of the slave and the perceived obstacles.
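For context on why passivity matters with delays, the sketch below shows one common, textbook-style passivity mechanism: an observer tracks the energy exchanged over the communication channel and damping is injected whenever the channel would otherwise generate energy. This is a generic illustration, not necessarily the framework used in the paper; the signals and gains are placeholders.

```python
# Time-domain passivity bookkeeping: if the observed channel energy goes
# negative (the channel is generating energy), add just enough damping to
# dissipate the surplus on that step.

def passivity_step(energy, force, velocity, dt):
    """Update observed channel energy and return (energy, damping_force)."""
    energy += force * velocity * dt                    # energy delivered to the channel
    damping_force = 0.0
    if energy < 0.0 and abs(velocity) > 1e-9:
        alpha = -energy / (velocity * velocity * dt)   # just enough damping
        damping_force = -alpha * velocity              # opposes the motion
        energy += alpha * velocity * velocity * dt     # restores observed energy to zero
    return energy, damping_force

if __name__ == "__main__":
    energy = 0.0
    samples = [(1.0, 0.2), (-2.0, 0.2), (-2.0, 0.2)]   # placeholder (force, velocity) pairs
    for f, v in samples:
        energy, fd = passivity_step(energy, f, v, dt=0.01)
        print(f"energy={energy:6.3f}  extra damping force={fd:6.3f}")
```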
Abstract:
This paper explores the use of subarrays as array elements. Benefits of such a concept include improved gain in any direction, without significantly increasing the overall size of the array, and enhanced pattern control. The architecture for an array of subarrays is discussed via a systems approach, individual system designs are explored in further detail, and proof of principle is illustrated through a manufactured example.
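The pattern-multiplication view that underlies an array of subarrays can be sketched as follows: for identical, equally spaced subarrays, the overall pattern is the subarray pattern multiplied by the array factor of the subarray centres. The element counts, spacings and steering angle below are illustrative, not values from the paper.

```python
# Pattern multiplication for an array of subarrays (uniform linear geometry).
import numpy as np

def array_factor(n, spacing_wl, theta, steer_theta=0.0):
    """Uniform linear array factor; spacing in wavelengths, angles in radians."""
    k = 2.0 * np.pi
    phase = k * spacing_wl * (np.sin(theta) - np.sin(steer_theta))
    return np.abs(sum(np.exp(1j * i * phase) for i in range(n)))

theta = np.linspace(-np.pi / 2, np.pi / 2, 721)
subarray = array_factor(4, 0.5, theta)                            # 4-element subarray pattern
outer = array_factor(8, 2.0, theta, steer_theta=np.deg2rad(10))   # 8 subarray centres, steered
total = subarray * outer                                          # pattern multiplication
print("peak direction (deg):", np.degrees(theta[np.argmax(total)]))
```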
Abstract:
The selection of optimal camera configurations (camera locations, orientations, etc.) for multi-camera networks remains an unsolved problem. Previous approaches largely focus on proposing various objective functions to achieve different tasks; most of them, however, do not generalize well to large-scale networks. To tackle this, we propose a statistical formulation of the problem together with a trans-dimensional simulated annealing algorithm to solve it effectively. We compare our approach with a state-of-the-art method based on binary integer programming (BIP) and show that it offers similar performance on small-scale problems. However, we also demonstrate the capability of our approach to deal with large-scale problems, and show that it produces better results than two alternative heuristics designed to address the scalability issue of BIP. Finally, we show the versatility of our approach in a number of specific scenarios.
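To make the trans-dimensional aspect concrete, the sketch below anneals over camera configurations using moves that may add, remove, or relocate a camera, so the dimensionality of the solution changes during the search. The grid, coverage model and per-camera cost are toy assumptions, not the paper's formulation.

```python
# Trans-dimensional simulated annealing over a toy camera-placement problem.
import math
import random

SITES = [(x, y) for x in range(10) for y in range(10)]   # candidate camera locations
TARGETS = random.sample(SITES, 30)                        # points that should be covered
RADIUS2, CAMERA_COST = 3.0 ** 2, 2.0                      # assumed coverage radius and cost

def covered(cam, target):
    return (cam[0] - target[0]) ** 2 + (cam[1] - target[1]) ** 2 <= RADIUS2

def score(cams):
    coverage = sum(any(covered(c, t) for c in cams) for t in TARGETS)
    return coverage - CAMERA_COST * len(cams)

def anneal(iterations=20000, temp0=5.0):
    cams = [random.choice(SITES)]
    best, best_val = cams[:], score(cams)
    cur_val = best_val
    for i in range(iterations):
        temp = temp0 * (1.0 - i / iterations) + 1e-6
        cand = cams[:]
        move = random.choice(("add", "remove", "relocate"))   # dimension-changing moves
        if move == "add":
            cand.append(random.choice(SITES))
        elif move == "remove" and len(cand) > 1:
            cand.pop(random.randrange(len(cand)))
        else:
            cand[random.randrange(len(cand))] = random.choice(SITES)
        val = score(cand)
        if val > cur_val or random.random() < math.exp((val - cur_val) / temp):
            cams, cur_val = cand, val
            if cur_val > best_val:
                best, best_val = cams[:], cur_val
    return best, best_val

print(anneal())
```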
Abstract:
The ability to measure surface temperature and represent it on a metrically accurate 3D model has proven applications in many areas, such as medical imaging, building energy auditing, and search and rescue. A system is proposed that enables this task to be performed with a handheld sensor and, for the first time, with results that can be visualized and analyzed in real time. A device comprising a thermal-infrared camera and a range sensor is calibrated geometrically and used for data capture. The device is localized using a combination of ICP and video-based pose estimation from the thermal-infrared video footage, which is shown to reduce the occurrence of failure modes. Furthermore, the problem of misregistration, which can introduce severe distortions in assigned surface temperatures, is avoided through the use of a risk-averse neighborhood weighting mechanism. Results demonstrate that the system is more stable and accurate than previous approaches, and can be used to accurately model complex objects and environments for practical tasks.
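One plausible form of a risk-averse neighbourhood weighting, sketched below: thermal pixels near a surface point's projection contribute according to distance, but the whole contribution is down-weighted where the neighbourhood is inconsistent (high variance), so uncertain regions such as depth edges do not corrupt the model. This is an illustration of the idea, not the paper's exact mechanism.

```python
# Assign a temperature and a confidence to a surface point from its image
# neighbourhood, penalising inconsistent (high-variance) neighbourhoods.
import numpy as np

def assign_temperature(neigh_temps, neigh_dists, sigma_d=2.0, risk_gain=1.0):
    """neigh_temps/neigh_dists: temperatures and pixel distances of neighbours."""
    t = np.asarray(neigh_temps, dtype=float)
    d = np.asarray(neigh_dists, dtype=float)
    w = np.exp(-0.5 * (d / sigma_d) ** 2)              # nearer pixels count more
    mean = np.average(t, weights=w)
    var = np.average((t - mean) ** 2, weights=w)
    confidence = 1.0 / (1.0 + risk_gain * var)         # penalise inconsistent patches
    return mean, confidence

print(assign_temperature([36.2, 36.4, 36.1], [0.5, 1.0, 1.5]))   # consistent patch
print(assign_temperature([36.2, 22.0, 35.9], [0.5, 1.0, 1.5]))   # depth-edge patch
```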
Abstract:
Incorporating a learner’s level of cognitive processing into Learning Analytics presents opportunities for obtaining rich data on the learning process. We propose a framework called COPA that provides a basis for mapping levels of cognitive operation into a learning analytics system. We utilise Bloom’s taxonomy, a theoretically respected conceptualisation of cognitive processing, and apply it in a flexible structure that can be implemented incrementally and with varying degrees of complexity within an educational organisation. We outline how the framework is applied, and its key benefits and limitations. Finally, we apply COPA to a university undergraduate unit and demonstrate its utility in identifying key missing elements in the structure of the course.
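A toy sketch of the kind of mapping such a framework implies: observable learning events are tagged with a Bloom level and aggregated per learner, exposing which cognitive levels a course's activities actually exercise. The event-to-level mapping below is invented for illustration, not taken from the paper.

```python
# Tag logged learning events with Bloom levels and summarise them per learner.
from collections import Counter

BLOOM_LEVEL = {                       # hypothetical event -> Bloom level mapping
    "quiz_recall": "remember",
    "forum_explain": "understand",
    "lab_exercise": "apply",
    "case_critique": "evaluate",
    "project_design": "create",
}

def cognitive_profile(events):
    """events: iterable of event-type strings logged for one learner."""
    return Counter(BLOOM_LEVEL[e] for e in events if e in BLOOM_LEVEL)

print(cognitive_profile(["quiz_recall", "quiz_recall", "lab_exercise", "forum_explain"]))
```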