948 results for Computer Engineering|Computer science
Abstract:
Virtual Reality (VR) techniques are increasingly being used in education about, and in the treatment of, certain types of mental illness. Research indicates VR is delivering on its promised potential to provide enhanced training and treatment outcomes through the incorporation of this high-end technology. Schizophrenia is a mental disorder affecting 1-2% of the population. A significant research project being undertaken at the University of Queensland has constructed virtual environments that reproduce the phenomena experienced by patients who have psychosis. The VR environment will allow behavioral exposure therapies to be conducted with exactly controlled exposure stimuli and an expected reduction in risk of harm. This paper reports on the work of the project, previous stages of software development, and current and future educational and clinical applications of the virtual environments.
Abstract:
A fundamental problem faced by stereo matching algorithms is the matching or correspondence problem. A wide range of algorithms have been proposed for the correspondence problem. For any matching algorithm, it would be useful to be able to compute a measure of the probability of correctness, or reliability, of a match. This paper focuses in particular on one class of matching algorithms, those based on the rank transform. The interest in these algorithms for stereo matching stems from their invariance to radiometric distortion and their amenability to fast hardware implementation. This work differs from previous work in that it derives, from first principles, an expression for the probability of a correct match, based on an enumeration of all possible symbols for matching. The theoretical results for disparity error prediction obtained using this method were found to agree well with experimental results. However, the disadvantages of the technique are that it is not easily applicable to real images and that it is too computationally expensive for practical window sizes. Nevertheless, the exercise provides an interesting and novel analysis of match reliability.
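To make the rank transform concrete, the sketch below computes it and uses it for per-pixel matching along a scanline; the window size, disparity range and SAD matching cost are illustrative assumptions, not the paper's reliability derivation.

```python
import numpy as np

def rank_transform(img, w=5):
    """Replace each pixel by the number of neighbours in a w x w
    window whose intensity is lower than the centre pixel."""
    r = w // 2
    h, wd = img.shape
    out = np.zeros_like(img, dtype=np.int32)
    for y in range(r, h - r):
        for x in range(r, wd - r):
            win = img[y - r:y + r + 1, x - r:x + r + 1]
            out[y, x] = np.sum(win < img[y, x])
    return out

def match_pixel(left_rt, right_rt, y, x, max_disp=16, w=5):
    """Return the disparity minimising the SAD of rank values."""
    r = w // 2
    costs = []
    for d in range(min(max_disp, x - r) + 1):
        lw = left_rt[y - r:y + r + 1, x - r:x + r + 1]
        rw = right_rt[y - r:y + r + 1, x - d - r:x - d + r + 1]
        costs.append(np.abs(lw - rw).sum())
    return int(np.argmin(costs))
```

Because matching is performed on rank values rather than raw intensities, a monotonic change in brightness between the two images leaves the costs unchanged, which is the radiometric invariance the abstract refers to.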
Abstract:
In most visual mapping applications suited to Autonomous Underwater Vehicles (AUVs), stereo visual odometry (VO) is rarely utilised as a pose estimator because imagery is typically captured at a very low frame rate due to energy conservation and data storage requirements. This adversely affects the robustness of a vision-based pose estimator and its ability to generate a smooth trajectory. This paper presents a novel VO pipeline for low-overlap imagery from an AUV that utilises constrained motion and integrates magnetometer data in a bi-objective bundle adjustment stage to achieve low-drift pose estimates over large trajectories. We analyse the performance of a standard stereo VO algorithm and compare the results to the modified VO algorithm. Results are demonstrated in a virtual environment in addition to low-overlap imagery gathered from an AUV. The modified VO algorithm shows significantly improved pose accuracy and performance over trajectories of more than 300 m. In addition, dense 3D meshes generated from the visual odometry pipeline are presented as a qualitative output of the solution.
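As a rough illustration of the bi-objective idea, the planar toy residual below combines visual landmark residuals with weighted magnetometer heading residuals in a single least-squares problem; the state layout, residual forms and weight are assumptions for illustration, not the paper's formulation.

```python
import numpy as np

def residuals(params, observations, headings, weight=0.1):
    """observations: list of (pose_index, landmark_xy_world,
    measured_xy_in_pose_frame); headings: magnetometer yaw per pose."""
    poses = params.reshape(-1, 3)          # [x, y, yaw] per pose
    res = []
    # Visual objective: predicted vs. measured landmark position
    # expressed in each pose's local frame.
    for i, lm_world, lm_meas in observations:
        x, y, yaw = poses[i]
        c, s = np.cos(yaw), np.sin(yaw)
        dx, dy = lm_world[0] - x, lm_world[1] - y
        pred = np.array([c * dx + s * dy, -s * dx + c * dy])
        res.extend(pred - lm_meas)
    # Magnetometer objective: penalise yaw deviation from the
    # measured heading, wrapped to (-pi, pi].
    for i, h in enumerate(headings):
        d = poses[i, 2] - h
        res.append(weight * np.arctan2(np.sin(d), np.cos(d)))
    return np.asarray(res)

# Pass this to a nonlinear least-squares solver, e.g.
# scipy.optimize.least_squares(residuals, x0.ravel(),
#                              args=(obs, headings))
```

The heading term anchors the yaw of every pose to an absolute reference, which is what suppresses long-range drift when visual overlap between frames is low.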
Abstract:
Having a reliable understanding of the behaviours, problems, and performance of existing processes is important in enabling a targeted process improvement initiative. Recently, there has been an increase in the application of innovative process mining techniques to facilitate evidence-based understanding of organizations' business processes. Nevertheless, the application of these techniques in the finance domain in Australia is, at best, scarce. This paper details a six-month case study on the application of process mining in one of the largest insurance companies in Australia. In particular, the challenges encountered, the lessons learned, and the results obtained from this case study are detailed. Through this case study, we not only validated existing 'lessons learned' from other similar case studies, but also added new insights that can be beneficial to other practitioners applying process mining in their respective fields.
Abstract:
New substation technology, such as non-conventional instrument transformers, and the need to reduce design and construction costs are driving the adoption of Ethernet-based digital process bus networks for high-voltage substations. Protection and control applications can share a process bus, making more efficient use of the network infrastructure. This paper classifies and defines performance requirements for the protocols used in a process bus on the basis of application. These include GOOSE, SNMP and IEC 61850-9-2 sampled values. A method, based on the Multiple Spanning Tree Protocol (MSTP) and virtual local area networks, is presented that separates management and monitoring traffic from the rest of the process bus. A quantitative investigation of the interaction between the various protocols used in a process bus is described. These tests also validate the effectiveness of the MSTP-based traffic segregation method. While this paper focusses on a substation automation network, the results are applicable to other real-time industrial networks that implement multiple protocols. High-volume sampled value data and time-critical circuit breaker tripping commands do not interact on a full-duplex switched Ethernet network, even under very high network load conditions. This enables an efficient digital network to replace a large number of conventional analog connections between control rooms and high-voltage switchyards.
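For a sense of why sampled-value traffic dominates such a network, a back-of-envelope load estimate is sketched below; the sampling rate follows the IEC 61850-9-2 LE profile at 50 Hz, while the frame size and stream count are indicative assumptions, not measurements from the paper.

```python
# Rough sampled-value (SV) bandwidth estimate for a process bus.
SAMPLES_PER_SECOND = 4000   # 9-2 LE protection profile: 80 samples/cycle at 50 Hz
FRAME_BYTES = 130           # assumed Ethernet frame size per SV message
STREAMS = 8                 # assumed number of merging units on the bus

bits_per_second = STREAMS * SAMPLES_PER_SECOND * FRAME_BYTES * 8
print(f"SV load: {bits_per_second / 1e6:.1f} Mb/s on a 100 Mb/s link")
```

Even this modest configuration occupies roughly a third of a 100 Mb/s link, which is why segregating low-rate management traffic and relying on full-duplex switching matters for the time-critical tripping commands.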
Abstract:
This paper investigates the use of mel-frequency delta-phase (MFDP) features in comparison to, and in fusion with, traditional mel-frequency cepstral coefficient (MFCC) features within joint factor analysis (JFA) speaker verification. MFCC features, commonly used in speaker recognition systems, are derived purely from the magnitude spectrum, with the phase spectrum completely discarded. In this paper, we investigate whether features derived from the phase spectrum can provide additional speaker-discriminant information to the traditional MFCC approach in a JFA-based speaker verification system. Results are presented which provide a comparison of MFCC-only, MFDP-only and score fusion of the two approaches within a JFA speaker verification approach. Based upon the results presented using the NIST 2008 Speaker Recognition Evaluation (SRE) dataset, we believe that, while MFDP features alone cannot compete with MFCC features, MFDP can provide complementary information that results in improved speaker verification performance when both approaches are combined in score fusion, particularly in the case of shorter utterances.
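Score-level fusion of the kind described is simple to sketch; the normalisation and fixed weight below are illustrative choices (in practice the fusion weight would be tuned on a development set, for example with logistic regression), not the paper's calibration.

```python
import numpy as np

def fuse_scores(mfcc_scores, mfdp_scores, alpha=0.7):
    """Weighted sum of per-trial verification scores from the two
    subsystems (one score per trial in each array)."""
    mfcc = np.asarray(mfcc_scores, dtype=float)
    mfdp = np.asarray(mfdp_scores, dtype=float)

    def norm(s):
        # Zero-mean / unit-variance normalisation so the two
        # subsystems contribute on a comparable scale.
        return (s - s.mean()) / s.std()

    return alpha * norm(mfcc) + (1 - alpha) * norm(mfdp)
```

The fused score only helps when the second subsystem carries information the first lacks, which is exactly the complementarity between magnitude-derived and phase-derived features that the paper reports.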
Abstract:
This paper presents a model for generating a MAC tag by injecting the input message directly into the internal state of a nonlinear filter generator. This model generalises a similar model for unkeyed hash functions proposed by Nakano et al. We develop a matrix representation for the accumulation phase of our model and use it to analyse the security of the model against man-in-the-middle forgery attacks based on collisions in the final register contents. The results of this analysis show that some conclusions of Nakano et al. regarding the security of their model are incorrect. We also use our results to comment on several recent MAC proposals which can be considered as instances of our model, and specify choices of options within the model which should prevent the type of forgery discussed here. In particular, suitable initialisation of the register and active use of a secure nonlinear filter will prevent an attacker from finding a collision in the final register contents which could result in a forged MAC.
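A toy instance of the model is sketched below: message bits are XORed into the register during the accumulation phase, and the tag is read out through a nonlinear filter. The register size, feedback taps, injection position and filter function are arbitrary choices for illustration, not the paper's construction, and this toy offers no real security.

```python
MASK = 2**64 - 1  # 64-bit register

def step(state, taps=(0, 2, 3, 5)):
    """One LFSR clock over the 64-bit state (Fibonacci style)."""
    fb = 0
    for t in taps:
        fb ^= (state >> t) & 1
    return ((state >> 1) | (fb << 63)) & MASK

def nonlinear_filter(state):
    """Toy nonlinear output bit: AND of two state bits XOR a third."""
    return (((state >> 1) & (state >> 7)) ^ (state >> 13)) & 1

def mac(key, message_bits, tag_len=32):
    state = key & MASK           # initialisation: load register with key
    for b in message_bits:       # accumulation phase
        state = step(state) ^ b  # inject message bit into the state
    tag = 0
    for _ in range(tag_len):     # finalisation: filter the state
        state = step(state)
        tag = (tag << 1) | nonlinear_filter(state)
    return tag

# e.g. tag = mac(key=0x0123456789ABCDEF, message_bits=[1, 0, 1, 1])
```

The forgery the paper analyses targets collisions in the final register contents: two messages that leave the register in the same state yield the same tag, which is why the initialisation and filter choices highlighted in the abstract matter.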
Abstract:
An iterative strategy is proposed for finding the optimal rating and location of fixed and switched capacitors in distribution networks. The substation Load Tap Changer tap is also set during this procedure. A Modified Discrete Particle Swarm Optimization is employed in the proposed strategy. The objective function is composed of the distribution line loss cost and the capacitor investment cost. The line loss is calculated by approximating the load duration curve with multiple load levels. The constraints are the bus voltages and feeder currents, which must be maintained within their standard ranges. To validate the proposed method, two case studies are tested. The first is the semi-urban 37-bus distribution system connected at bus 2 of the Roy Billinton Test System, located on the secondary side of a 33/11 kV distribution substation. The second is a 33 kV distribution network based on a modification of the 18-bus IEEE distribution system. The results are compared with prior publications to illustrate the accuracy of the proposed strategy.
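The discrete PSO at the core of the strategy can be sketched as a standard binary PSO over candidate capacitor locations; the sigmoid velocity-to-bit rule and the placeholder cost below stand in for the paper's modified algorithm and its load-flow-based loss-plus-investment objective.

```python
import numpy as np

rng = np.random.default_rng(0)

def cost(bits):
    # Placeholder: a real objective would run a load flow per load
    # level and sum loss cost plus capacitor investment cost, with
    # penalties for voltage/current constraint violations.
    return bits.sum() + rng.normal(scale=0.01)

def binary_pso(n_bits=33, n_particles=20, iters=100,
               w=0.7, c1=1.5, c2=1.5):
    x = rng.integers(0, 2, size=(n_particles, n_bits))
    v = np.zeros((n_particles, n_bits))
    pbest = x.copy()
    pbest_cost = np.array([cost(p) for p in x])
    g = pbest[pbest_cost.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(v.shape), rng.random(v.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        # Sigmoid maps each velocity to a probability of the bit being 1.
        x = (rng.random(v.shape) < 1 / (1 + np.exp(-v))).astype(int)
        c = np.array([cost(p) for p in x])
        improved = c < pbest_cost
        pbest[improved], pbest_cost[improved] = x[improved], c[improved]
        g = pbest[pbest_cost.argmin()].copy()
    return g  # best capacitor placement vector found
```

Each bit marks whether a candidate bus receives a capacitor; ratings and the LTC tap setting would be additional discrete dimensions handled the same way.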
Abstract:
To ensure the small-signal stability of a power system, power system stabilizers (PSSs) are extensively applied to damp low-frequency power oscillations by modulating the excitation supplied to synchronous machines, and increasing interest has been focused on developing different PSS schemes to counter the threat that poorly damped oscillations pose to power system stability. This paper examines four different PSS models and investigates their performance in damping power system dynamics using both small-signal eigenvalue analysis and large-signal dynamic simulations. The four PSSs examined are the Conventional PSS (CPSS), the Single Neuron based PSS (SNPSS), the Adaptive PSS (APSS) and the Multi-band PSS (MBPSS). A steepest-descent parameter optimization algorithm is employed to seek the optimal PSS design parameters. To evaluate the effects of these PSSs on improving power system dynamic behaviour, case studies are carried out on an 8-unit, 24-bus power system through both small-signal eigenvalue analysis and large-signal time-domain simulations.
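The parameter search can be illustrated with a generic steepest-descent loop using numerical gradients; the objective is left as a placeholder (in a real study it might be, for instance, the negative of the minimum damping ratio over the closed-loop eigenvalues), and the step sizes are illustrative.

```python
import numpy as np

def steepest_descent(objective, x0, step=0.05, eps=1e-4, iters=200):
    """Minimise objective(x) by following the numerical gradient."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        # Central-difference estimate of the gradient.
        grad = np.array([
            (objective(x + eps * e) - objective(x - eps * e)) / (2 * eps)
            for e in np.eye(len(x))
        ])
        x -= step * grad
    return x

# e.g. tuning a hypothetical gain/time-constant pair against a
# placeholder quadratic objective:
# best = steepest_descent(lambda k: (k[0] - 20) ** 2 + (k[1] - 0.1) ** 2,
#                         x0=[10.0, 0.5])
```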
Abstract:
With the progressive exhaustion of fossil energy and enhanced awareness of environmental protection, more attention is being paid to electric vehicles (EVs). Inappropriate siting and sizing of EV charging stations could have negative effects on the development of EVs, the layout of the city traffic network, and the convenience of EV drivers, and could lead to increased network losses and degraded voltage profiles at some nodes. Given this background, the optimal sites of EV charging stations are first identified by a two-step screening method that takes into account environmental factors and the service radius of EV charging stations. Then, a mathematical model for the optimal sizing of EV charging stations is developed, with the minimization of the total cost associated with the EV charging stations to be planned as the objective function, and solved by a modified primal-dual interior point algorithm (MPDIPA). Finally, simulation results on the IEEE 123-node test feeder demonstrate that the developed model and method not only attain a reasonable planning scheme for EV charging stations, but also reduce network losses and improve the voltage profile.
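The sizing step can be sketched as a small constrained minimisation over station capacities; the demand figure, cost coefficients and bounds below are invented for illustration, and scipy's general-purpose constrained solver stands in for the paper's MPDIPA.

```python
import numpy as np
from scipy.optimize import minimize

demand = 12.0                         # assumed total charging demand (MW)
invest = np.array([1.0, 1.2, 0.9])    # assumed investment cost per MW per site
loss = np.array([0.05, 0.02, 0.08])   # assumed loss proxy per MW^2 per site

def total_cost(p):
    # Investment cost plus a quadratic network-loss proxy.
    return invest @ p + loss @ (p ** 2)

res = minimize(
    total_cost,
    x0=np.full(3, demand / 3),
    bounds=[(0.0, 8.0)] * 3,          # per-station size limits
    constraints=[{"type": "eq",
                  "fun": lambda p: p.sum() - demand}],
)
print(res.x)  # capacity assigned to each candidate station
```

The quadratic loss term pushes capacity toward electrically favourable sites, mirroring the trade-off between investment cost and network loss that the paper's objective captures.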
Abstract:
The development of an intelligent plug-in electric vehicle (PEV) network is an important research topic in the smart grid environment. An intelligent PEV network enables flexible control of PEV charging and discharging activities, and hence PEVs can be utilized as ancillary service providers in the power system concerned. Given this background, an intelligent PEV network architecture is first developed, followed by detailed designs of its application layers, including the charging and discharging control system, mobility and roaming management, and the associated communication mechanisms. The presented architecture leverages the design philosophy of mobile communication networks.
Abstract:
This paper proposes a new approach for estimating the angles and frequencies of equivalent areas in large power systems equipped with synchronized phasor measurement units. After coherent generators and their corresponding areas are defined, the generators are aggregated and system reduction is performed in each area of the interconnected power system. The structure of the reduced system is obtained from the characteristics of the reduced linear model and measurement data, which together form the non-linear model of the reduced system. A Kalman estimator is then designed for the reduced system to provide equivalent dynamic system state estimation using the synchronized phasor measurement data. The approach is simulated on two test systems to evaluate its feasibility.
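The estimator itself follows the standard linear Kalman recursion; the sketch below shows one predict/update cycle, with A, H, Q and R standing in as placeholders for the reduced-system dynamics and PMU measurement models rather than the paper's actual matrices.

```python
import numpy as np

def kalman_step(x, P, z, A, H, Q, R):
    """One predict/update cycle: x state, P covariance, z PMU sample."""
    # Predict with the reduced-system dynamics.
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # Update with the synchronized phasor measurement.
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```

Because the filter runs on the reduced model, its state vector holds one equivalent angle and frequency per coherent area rather than per generator, which is what keeps the estimator tractable for large systems.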
Abstract:
Modern applications comprise multiple components, such as browser plug-ins, often of unknown provenance and quality. Statistics show that failure of such components accounts for a high percentage of software faults. Enabling isolation of such fine-grained components is therefore necessary to increase the robustness and resilience of security-critical and safety-critical computer systems. In this paper, we evaluate whether such fine-grained components can be sandboxed through the use of the hardware virtualization support available in modern Intel and AMD processors. We compare the performance and functionality of this approach to two previous software-based approaches. The results demonstrate that hardware isolation minimizes the difficulties encountered with software-based approaches while also reducing the size of the trusted computing base, thus increasing confidence in the solution's correctness. We also show that our relatively simple implementation has equivalent run-time performance, with overheads of less than 34%, does not require custom toolchains, and provides enhanced functionality over software-only approaches, confirming that hardware virtualization technology is a viable mechanism for fine-grained component isolation.
Abstract:
The act of computer programming is generally considered to be temporally removed from a computer program's execution. In this paper we discuss the idea of programming as an activity that takes place within the temporal bounds of a real-time computational process and its interactions with the physical world. We ground these ideas within the context of livecoding -- a live audiovisual performance practice. We then describe how the development of the programming environment "Impromptu" has addressed our ideas of programming with time and the notion of the programmer as an agent in a cyber-physical system.
Abstract:
Security indicators in web browsers alert users to the presence of a secure connection between their computer and a web server; many studies have shown that such indicators are largely ignored by users in general. In other areas of computer security, research has shown that technical expertise can decrease user susceptibility to attacks. In this work, we examine whether computer or security expertise affects the use of web browser security indicators. Our study takes place in the context of web-based single sign-on, in which a user can use credentials from a single identity provider to log in to many relying websites; single sign-on is a more complex, and hence more difficult, security task for users. In our study, we used eye trackers and surveyed participants to examine the cues individuals actually use and those they report using, respectively. Our results show that users with security expertise are more likely to self-report looking at security indicators, and eye-tracking data shows they have longer gaze durations at security indicators than those without security expertise. However, computer expertise alone is not correlated with recorded use of security indicators. In survey responses, neither experts nor novices demonstrated a good understanding of the security consequences of web-based single sign-on.