385 results for Electrical engineering|Artificial intelligence
Abstract:
Modern power systems have become more complex due to the growth in load demand, the installation of Flexible AC Transmission Systems (FACTS) devices and the integration of new HVDC links into existing AC grids. Meanwhile, the introduction of deregulated, unbundled power market mechanisms, together with ongoing changes in generation sources, including the connection of large-scale renewable generation that is intermittent in nature, has further increased the complexity and uncertainty of power system operation and control. System operators and engineers have to confront a series of technical challenges arising from the operation of today's interconnected power systems. Among the many challenges, evaluating the steady-state and dynamic behaviors of existing interconnected power systems effectively and accurately, using more powerful computational analysis models and approaches, has become one of the key issues in power engineering. Traditional computing techniques have been widely used for power system analysis with varying degrees of success. The rapid development of computational intelligence, such as neural networks, fuzzy systems and evolutionary computation, provides tools and opportunities to solve complex technical problems in power system planning, operation and control.
Abstract:
The aim of this project was to develop a general theory of stigmergy and a software design pattern for building collaborative websites. Stigmergy is a biological term describing insect swarm behaviour in which 'food gathering' and 'nest building' activities give rise to self-organised societies without any apparent management structure. The results of the project are an abstract model of stigmergy and a software design pattern for building Web 2.0 components that exploit this self-organising phenomenon. A proof-of-concept implementation was also created, demonstrating potential commercial viability for future website projects.
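The abstract describes the stigmergic pattern only at a high level; below is a minimal Python sketch of how such a pattern might look, assuming a pheromone-style shared medium. The `Medium` class and its `deposit`/`sense`/`evaporate` methods are illustrative names, not the project's actual design pattern.

```python
import random

class Medium:
    """Shared environment that stores traces left by agents (the stigmergic medium)."""
    def __init__(self, evaporation=0.1):
        self.traces = {}              # option -> accumulated trace strength
        self.evaporation = evaporation

    def deposit(self, option, amount=1.0):
        self.traces[option] = self.traces.get(option, 0.0) + amount

    def sense(self, option):
        return self.traces.get(option, 0.0)

    def evaporate(self):
        # Decay all traces so stale signals fade, as pheromones do.
        for k in self.traces:
            self.traces[k] *= (1.0 - self.evaporation)

def choose(medium, options):
    """Pick an option with probability proportional to 1 + its trace strength."""
    weights = [1.0 + medium.sense(o) for o in options]
    return random.choices(options, weights=weights, k=1)[0]

# Agents repeatedly choose and reinforce: popular options accumulate traces,
# so coordination emerges without any central controller.
medium = Medium()
options = ["tag:python", "tag:ai", "tag:web"]
for step in range(100):
    picked = choose(medium, options)
    medium.deposit(picked)
    medium.evaporate()

print(sorted(medium.traces.items(), key=lambda kv: -kv[1]))
```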
Abstract:
Most previous work on artificial curiosity (AC) and intrinsic motivation focuses on basic concepts and theory, and experimental results are generally limited to toy scenarios such as navigation in a simulated maze or control of a simple mechanical system with one or two degrees of freedom. To study AC in a more realistic setting, we embody a curious agent in the complex iCub humanoid robot. Our novel reinforcement learning (RL) framework consists of a state-of-the-art, low-level, reactive control layer, which controls the iCub while respecting its constraints, and a high-level curious agent, which explores the iCub's state-action space through information-gain maximization, learning a world model from experience while controlling the actual iCub hardware in real time. To the best of our knowledge, this is the first embodied curious agent for real-time motion planning on a humanoid. We demonstrate that it can learn compact Markov models representing large regions of the iCub's configuration space, and that the iCub explores intelligently, showing interest in its physical constraints as well as in objects it finds in its environment.
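The abstract does not spell out the information-gain computation; the following is a generic sketch of expected-information-gain action selection for a discrete world model with Dirichlet-distributed transition probabilities, not the authors' iCub implementation.

```python
import numpy as np
from scipy.special import gammaln, digamma

def dirichlet_kl(alpha_new, alpha_old):
    """KL( Dir(alpha_new) || Dir(alpha_old) )."""
    a0n, a0o = alpha_new.sum(), alpha_old.sum()
    return (gammaln(a0n) - gammaln(a0o)
            - np.sum(gammaln(alpha_new) - gammaln(alpha_old))
            + np.sum((alpha_new - alpha_old) * (digamma(alpha_new) - digamma(a0n))))

def expected_info_gain(alpha):
    """Expected KL between posterior and prior after one more observation,
    for a categorical transition model with Dirichlet prior `alpha`."""
    p = alpha / alpha.sum()                 # predictive next-state distribution
    gain = 0.0
    for k in range(len(alpha)):
        alpha_new = alpha.copy()
        alpha_new[k] += 1.0                 # posterior if outcome k is observed
        gain += p[k] * dirichlet_kl(alpha_new, alpha)
    return gain

# Curious action selection: prefer the action whose outcome is most informative.
n_states, n_actions = 5, 3
counts = np.ones((n_states, n_actions, n_states))  # Dirichlet(1) prior
counts[0, 1, 2] += 5.0                              # pretend action 1 is already explored
state = 0
gains = [expected_info_gain(counts[state, a]) for a in range(n_actions)]
print("expected gains:", gains)
print("most informative action:", int(np.argmax(gains)))  # an unexplored action
```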
Abstract:
It is traditional to initialise Kalman filters and extended Kalman filters with state estimates calculated directly from the observed (raw) noisy inputs, but unfortunately their performance is extremely sensitive to the accuracy of state initialisation: good initial estimates ensure fast convergence, whereas poor estimates may cause slow convergence or even filter divergence. Divergence is generally due to excessive observation noise and leads to error magnitudes that quickly become unbounded (R.J. Fitzgerald, 1971). When a filter diverges it must be re-initialised, but because the observations are extremely poor, the re-initialised states will again be poorly estimated. The paper proposes that if neurofuzzy estimators produce more accurate state estimates than those calculated from the observed noisy inputs (using the known state model), then neurofuzzy estimates can be used to initialise the states of Kalman and extended Kalman filters. Filters whose states have been initialised with neurofuzzy estimates should give improved performance, by way of faster convergence, both when the filter is first initialised and when it is restarted after divergence.
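A minimal scalar example of the initialisation sensitivity described above, assuming a constant-value state model; the 'better' initial estimate here is a simple average of early observations standing in for the neurofuzzy estimate, which the abstract does not specify.

```python
import numpy as np

rng = np.random.default_rng(0)

# 1-D constant-value system: x_k = x_{k-1}, observed with heavy noise.
x_true, R, Q = 10.0, 25.0, 1e-4
z = x_true + rng.normal(0.0, np.sqrt(R), size=50)   # noisy observations

def kalman(z, x0, P0):
    """Scalar Kalman filter; returns the state estimate after each update."""
    x, P, out = x0, P0, []
    for zk in z:
        P += Q                       # predict (state is constant)
        K = P / (P + R)              # Kalman gain
        x += K * (zk - x)            # update with the innovation
        P *= (1.0 - K)
        out.append(x)
    return np.array(out)

# Traditional initialisation: the first raw (noisy) observation.
raw_init = kalman(z, x0=z[0], P0=R)
# Better initialisation: averaged estimate standing in for a neurofuzzy one.
good_init = kalman(z, x0=z[:5].mean(), P0=R / 5)

print("error after 5 steps (raw init):  %.3f" % abs(raw_init[4] - x_true))
print("error after 5 steps (good init): %.3f" % abs(good_init[4] - x_true))
```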
Abstract:
As a result of the more distributed nature of organisations and the inherently increasing complexity of their business processes, a significant effort is required for the specification and verification of those processes. Composing activities into a business process that accomplishes a specific organisational goal has primarily been a manual task. Automated planning is a branch of artificial intelligence (AI) in which activities are selected and organised by anticipating their expected outcomes, with the aim of achieving some goal. As such, automated planning would seem to be a natural fit for the business process management (BPM) domain as a way to automate the specification of control flow. A number of attempts have been made to apply automated planning to business process and service composition in different stages of the BPM lifecycle; however, a unified adoption of these techniques throughout the lifecycle is missing. We therefore propose a new intention-centric BPM paradigm, which aims at minimising the specification effort by exploiting automated planning techniques to achieve a pre-stated goal. This paper provides a vision of the future possibilities of enhancing BPM using automated planning, and presents a research agenda giving an overview of the opportunities and challenges for the exploitation of automated planning in BPM.
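As an illustration of the automated-planning idea described above, here is a minimal STRIPS-style forward-search planner in Python; the activity names and propositions are invented for a toy invoicing process and are not drawn from the paper.

```python
from collections import deque

# STRIPS-style actions: name, preconditions, add effects, delete effects.
ACTIONS = [
    ("draft_invoice", {"order_received"},  {"invoice_drafted"},  set()),
    ("approve",       {"invoice_drafted"}, {"invoice_approved"}, set()),
    ("send_invoice",  {"invoice_approved"}, {"invoice_sent"},    set()),
]

def plan(initial, goal):
    """Breadth-first search over sets of propositions; returns action names."""
    frontier = deque([(frozenset(initial), [])])
    seen = {frozenset(initial)}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:                      # every goal proposition holds
            return steps
        for name, pre, add, delete in ACTIONS:
            if pre <= state:                   # action is applicable
                nxt = frozenset((state - delete) | add)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, steps + [name]))
    return None

print(plan({"order_received"}, {"invoice_sent"}))
# -> ['draft_invoice', 'approve', 'send_invoice']
```

The control flow here is derived automatically from the goal and the activities' pre- and postconditions, which is the specification effort the paradigm aims to remove.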
Abstract:
With the extensive use of rating systems on the web, and their significance in users' decision-making processes, the need for more accurate aggregation methods has emerged. The naïve aggregation method, the simple mean, is no longer adequate for providing accurate reputation scores for items [6]; hence, several studies have been conducted to provide more accurate alternative aggregation methods. Most current reputation models do not consider the distribution of ratings across the different possible rating values. In this paper, we propose a novel reputation model which generates more accurate reputation scores for items by deploying the normal distribution over ratings. Experiments show promising results for our proposed model over state-of-the-art ones on both sparse and dense datasets.
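The abstract does not give the exact formulation, so the sketch below assumes one plausible reading: each rating is weighted by its likelihood under a normal distribution fitted to the item's ratings, which down-weights outlying votes relative to the plain mean.

```python
import math

def normal_weighted_score(ratings):
    """Weight each rating by its density under a normal fit to the ratings,
    so outlying votes influence the score less than with a plain mean."""
    n = len(ratings)
    mu = sum(ratings) / n
    var = sum((r - mu) ** 2 for r in ratings) / n
    if var == 0:                      # all ratings identical
        return mu
    weights = [math.exp(-((r - mu) ** 2) / (2 * var)) for r in ratings]
    return sum(w * r for w, r in zip(weights, ratings)) / sum(weights)

ratings = [5, 5, 4, 5, 1]             # one outlying 1-star vote
print("simple mean:     %.3f" % (sum(ratings) / len(ratings)))   # 4.000
print("normal-weighted: %.3f" % normal_weighted_score(ratings))  # ~4.55
```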
Abstract:
This work examined a new method of detecting small water-filled cracks in underground insulation ('water trees') using data from commercially available non-destructive testing equipment. A testing facility was constructed and a computer simulation of the insulation was designed in order to test the proposed ageing factor, the degree of non-linearity. This was a large industry-backed project involving an ARC Linkage grant, Ergon Energy and the University of Queensland, as well as the Queensland University of Technology.
Abstract:
In this paper we present a robust method for detecting handwritten text in unconstrained drawings on ordinary whiteboards. Unlike printed text in documents, free-form handwritten text has no regular size, orientation or font, and it is often mixed with other drawings such as lines and shapes. Unlike handwriting on paper, handwriting on a whiteboard cannot be scanned, so detection has to be based on photos. Our method traces straight edges in photos of the whiteboard and builds a graph representation of the connected components. We use geometric properties such as edge density, graph density, aspect ratio and neighborhood similarity to differentiate handwritten text from other drawings. The experimental results show that our method achieves satisfactory precision and recall. Furthermore, the method is robust and efficient enough to be deployed on a mobile device. This is an important enabler of business applications that support whiteboard-centric visual meetings in enterprise scenarios. © 2012 IEEE.
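A sketch of the kind of geometric features the method relies on, computed per connected component; the decision thresholds, and the omission of the graph-density and neighborhood-similarity features, are placeholders rather than the paper's tuned values.

```python
def component_features(points):
    """Geometric features for one connected component of traced edge pixels.
    `points` is a list of (x, y) pixel coordinates."""
    xs, ys = [p[0] for p in points], [p[1] for p in points]
    w = max(xs) - min(xs) + 1
    h = max(ys) - min(ys) + 1
    return {
        "aspect_ratio": w / h,
        "edge_density": len(points) / float(w * h),  # pixels per bounding-box area
        "size": (w, h),
    }

def looks_like_text(feat):
    """Placeholder decision rule: handwritten words tend to be small, dense
    components with moderate aspect ratios; lines and shapes are long and sparse."""
    return (0.2 <= feat["aspect_ratio"] <= 10.0
            and feat["edge_density"] >= 0.05
            and max(feat["size"]) <= 300)

component = [(x % 10, x // 10) for x in range(40)]   # toy blob of edge pixels
feat = component_features(component)
print(feat, "-> text" if looks_like_text(feat) else "-> drawing")
```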
Abstract:
Non-monotonic reasoning typically deals with three kinds of knowledge: facts, which describe immutable statements about the environment; rules, which define relationships among elements; and an ordering among the rules, in the form of a superiority relation, which establishes their relative strength. To revise a non-monotonic theory, we can change any one of these three elements. We prove that revising a non-monotonic theory by changing only the superiority relation is an NP-complete problem.
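A minimal illustration of the three kinds of knowledge, and of conflict resolution via the superiority relation, using the classic bird/penguin example rather than anything from the paper.

```python
# Facts, defeasible rules, and a superiority relation, as in defeasible logic.
facts = {"bird", "penguin"}
rules = {
    "r1": ({"bird"},    "flies"),      # r1: bird    => flies
    "r2": ({"penguin"}, "-flies"),     # r2: penguin => not flies
}
superiority = {("r2", "r1")}           # r2 is stronger than r1

def applicable(rules, facts):
    return {name for name, (body, _) in rules.items() if body <= facts}

def conclude(rules, facts, superiority):
    """Fire all applicable rules; when two rules support complementary
    literals, the superiority relation decides which conclusion survives."""
    fired = applicable(rules, facts)
    conclusions = set()
    for name in fired:
        head = rules[name][1]
        negation = head[1:] if head.startswith("-") else "-" + head
        attackers = {n for n in fired if rules[n][1] == negation}
        if all((name, a) in superiority for a in attackers):
            conclusions.add(head)
    return conclusions

print(conclude(rules, facts, superiority))   # {'-flies'}: r2 defeats r1
```

Revising the theory by flipping the single pair in `superiority` flips the conclusion, which is exactly the kind of change whose general computation the paper shows to be NP-complete.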
Abstract:
Past research has suggested that social engineering poses the most significant security risk, and recent studies have suggested that social networking sites (SNSs) are the most common source of social engineering attacks. The risk of social engineering attacks in SNSs is associated with the difficulty of making accurate judgements about source credibility in the virtual environment of SNSs. In this paper, we quantitatively investigate source credibility dimensions in terms of social engineering on Facebook, as well as the source characteristics that influence Facebook users to judge an attacker as credible, thereby making them susceptible to victimization. Moreover, in order to predict users' susceptibility to social engineering victimization based on their demographics, we investigate the effectiveness of source characteristics on different demographic groups by measuring users' consent intentions and behavioral responses to social engineering requests in a role-play experiment.