955 results for intelligent agents
Abstract:
At common law, a corporation may be vicariously liable for the conduct of its appointed agents, employees or directors. This generally requires the agent or employee to be acting in the course of his or her agency or employment and, in the case of representations, to have actual or implied authority to make the representations. The circumstances in which a corporation may be liable for the conduct of its agents, employees or directors are broadened under the Australian Consumer Law (ACL) to cases where one of these parties engages in conduct “on behalf of” the corporation. As the decision in Bennett v Elysium Noosa Pty Ltd (in liq) demonstrates, this may extend liability for the misleading conduct of a joint venture’s salesperson to parties who are not formal members of the joint venture, where the joint venture activities fall within the course of the entity’s “business, affairs or activities”.
Abstract:
Sophisticated models of human social behaviour are fast becoming highly desirable in an increasingly complex and interrelated world. Here, we propose that rather than taking established theories from the physical sciences and naively mapping them into the social world, the advanced concepts and theories of social psychology should be taken as a starting point, and used to develop a new modelling methodology. In order to illustrate how such an approach might be carried out, we attempt to model the low elaboration attitude changes of a society of agents in an evolving social context. We propose a geometric model of an agent in context, where individual agent attitudes are seen to self-organise to form ideologies, which then serve to guide further agent-based attitude changes. A computational implementation of the model is shown to exhibit a number of interesting phenomena, including a tendency for a measure of the entropy in the system to decrease, and a potential for externally guiding a population of agents towards a new desired ideology.
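As a rough illustration of the kind of agent-based attitude dynamics described above, the toy loop below nudges each agent's attitude toward a randomly sampled local context and tracks the Shannon entropy of the attitude distribution, which tends to fall as shared positions crystallise. The update rule, parameters and entropy measure are illustrative assumptions, not the geometric model proposed in the paper.

```python
# Illustrative sketch only: a toy low-elaboration attitude-change loop in the
# spirit described above. The update rule, parameters and entropy measure are
# assumptions for illustration, not the paper's geometric model.
import numpy as np

rng = np.random.default_rng(0)
n_agents, n_steps, rate = 200, 300, 0.05
attitudes = rng.uniform(-np.pi, np.pi, n_agents)   # 1-D attitude angles

def entropy(att, bins=24):
    """Shannon entropy of the binned attitude distribution."""
    p, _ = np.histogram(att, bins=bins, range=(-np.pi, np.pi))
    p = p / p.sum()
    p = p[p > 0]
    return -(p * np.log(p)).sum()

print("initial entropy:", round(entropy(attitudes), 3))
for _ in range(n_steps):
    # each agent nudges its attitude toward the circular mean of a sampled local context
    context = rng.choice(n_agents, size=(n_agents, 10))
    local_mean = np.angle(np.exp(1j * attitudes[context]).mean(axis=1))
    attitudes += rate * np.angle(np.exp(1j * (local_mean - attitudes)))

print("final entropy:", round(entropy(attitudes), 3))  # typically lower than the start
```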
Abstract:
The contextuality of changing attitudes makes them extremely difficult to model. This paper scales up Quantum Decision Theory (QDT) to a social setting, using it to model the manner in which social contexts can interact with the process of low elaboration attitude change. The elements of this extended theory are presented, along with a proof of concept computational implementation in a low dimensional subspace. This model suggests that a society's understanding of social issues will settle down into a static or frozen configuration unless that society consists of a range of individuals with varying personality types and norms.
Abstract:
Stereo-based visual odometry algorithms are heavily dependent on an accurate calibration of the rigidly fixed stereo pair. Even small shifts in the rigid transform between the cameras can impact on feature matching and 3D scene triangulation, adversely affecting pose estimates and applications dependent on long-term autonomy. In many field-based scenarios where vibration, knocks and pressure change affect a robotic vehicle, maintaining an accurate stereo calibration cannot be guaranteed over long periods. This paper presents a novel method of recalibrating overlapping stereo camera rigs from online visual data while simultaneously providing an up-to-date and up-to-scale pose estimate. The proposed technique implements a novel form of partitioned bundle adjustment that explicitly includes the homogeneous transform between a stereo camera pair to generate an optimal calibration. Pose estimates are computed in parallel to the calibration, providing online recalibration which seamlessly integrates into a stereo visual odometry framework. We present results demonstrating accurate performance of the algorithm on both simulated scenarios and real data gathered from a wide-baseline stereo pair on a ground vehicle traversing urban roads.
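A minimal sketch of the central idea, jointly optimising the camera poses and the left-to-right stereo transform against reprojection error, is given below. It uses a generic least-squares solver and holds the 3D points fixed for brevity; all names and values (intrinsics, baseline, perturbation) are assumptions rather than the paper's partitioned bundle adjustment.

```python
# Minimal sketch (not the paper's implementation): jointly refining camera poses
# and the left-to-right stereo extrinsic from reprojection error, the core idea
# behind including the stereo transform in the bundle adjustment. Point positions
# are held fixed to keep the example short; names and values are assumptions.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation as R

K = np.array([[700., 0., 320.], [0., 700., 240.], [0., 0., 1.]])       # assumed intrinsics
pts = np.random.default_rng(1).uniform([-2, -2, 8], [2, 2, 12], (30, 3))  # fixed world points

def project(pts_w, rvec, tvec):
    pts_c = R.from_rotvec(rvec).apply(pts_w) + tvec
    uv = (K @ pts_c.T).T
    return uv[:, :2] / uv[:, 2:3]

# ground truth: two left-camera poses and a 0.5 m baseline stereo transform
poses_gt = [np.zeros(6), np.r_[0.02, 0.0, 0.0, 0.3, 0.0, 0.0]]
stereo_gt = np.r_[0.0, 0.01, 0.0, 0.5, 0.0, 0.0]        # rvec, tvec (left -> right)

obs = []
for p in poses_gt:
    obs.append(project(pts, p[:3], p[3:]))                               # left image
    rr = R.from_rotvec(stereo_gt[:3]) * R.from_rotvec(p[:3])
    tt = R.from_rotvec(stereo_gt[:3]).apply(p[3:]) + stereo_gt[3:]
    obs.append(project(pts, rr.as_rotvec(), tt))                         # right image

def residuals(x):
    poses, stereo = x[:12].reshape(2, 6), x[12:]
    res = []
    for i, p in enumerate(poses):
        res.append(project(pts, p[:3], p[3:]) - obs[2 * i])
        rr = R.from_rotvec(stereo[:3]) * R.from_rotvec(p[:3])
        tt = R.from_rotvec(stereo[:3]).apply(p[3:]) + stereo[3:]
        res.append(project(pts, rr.as_rotvec(), tt) - obs[2 * i + 1])
    return np.concatenate(res).ravel()

x0 = np.concatenate([np.concatenate(poses_gt), stereo_gt + 0.03])   # drifted calibration
sol = least_squares(residuals, x0)
print("recovered stereo transform:", np.round(sol.x[12:], 3))
```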
Abstract:
At a time when global consumption and production levels are 25 percent higher than the Earth’s sustainable carrying capacity, there are worldwide calls to find ways to sustain the Earth for this and future generations. A central premise of this study is that education systems have an obligation to participate in this move towards sustainability and can respond by embedding education for sustainability into curricula. This study took early childhood education as its focus due to the teacher-researcher’s own concerns about the state of the planet, coupled with early childhood education’s established traditions of nature-based and child-centred pedagogy. The study explored the experiences of a class of kindergarten children as they undertook a Project Approach to learning about environmental sustainability. The Project Approach is an adaptation of Chard’s work, which is situated within a constructivist theoretical framework (Chard, 2011). The Project Approach involves in-depth investigations around an identified topic of interest. It has three phases: an introductory, a synthesising and a culminating phase. The study also investigated the learning journey of the classroom teacher/researcher, who broadened her long-held co-constructivist teaching approaches to include transformative practices in order to facilitate a curriculum which embedded education for sustainability. While co-constructivist approaches focus on the co-construction of knowledge, transformative practices are concerned with creating change. An action research case study was conducted. This involved twenty-two children who attended an Australian kindergarten. Data were collected and analysed over a seven-week period. The study found that young children can be change agents for sustainability when a Project Approach is broadened to include transformative practices. The study also found that the child participants were able to think critically about environmental and sustainability issues, were able to create change in their local contexts, and took on the role of educators to influence others’ environmental behaviours. Another finding was that the teacher-researcher’s participation in the study caused a transformation of both her teaching philosophy and the culture at the kindergarten. An important outcome of the study was the development of a new curriculum model that has applicability for curriculum development and teacher practice.
Abstract:
Teaching introductory programming has challenged educators through the years. Although Intelligent Tutoring Systems that teach programming have been developed to try to reduce the problem, none have been developed to teach web programming. This paper describes the design and evaluation of the PHP Intelligent Tutoring System (PHP ITS), which addresses this problem. The evaluation process showed that students who used the PHP ITS achieved a significant improvement in test scores.
Abstract:
Reliability analysis is crucial to reducing unexpected downtime, severe failures and the ever-tightening maintenance budgets of engineering assets. Hazard-based reliability methods are of particular interest because the hazard reflects the current health status of engineering assets and their imminent failure risks. Most existing hazard models were constructed using statistical methods. However, these methods rest largely on two assumptions: that the baseline failure distribution accurately describes the population concerned, and that the assumed form of the covariate effects on the hazard holds. These two assumptions may be difficult to satisfy and therefore compromise the effectiveness of hazard models in application. To address this issue, a non-linear hazard modelling approach is developed in this research using neural networks (NNs), resulting in neural network hazard models (NNHMs), to overcome the limitations that these two assumptions impose on statistical models. With the success of failure prevention efforts, less failure history becomes available for reliability analysis. Involving condition data or covariates is a natural solution to this challenge. A critical issue in involving covariates in reliability analysis is that complete and consistent covariate data are often unavailable in reality due to inconsistent measuring frequencies of multiple covariates, sensor failure, and sparse intrusive measurements. This problem has not been studied adequately in current reliability applications. This research therefore investigates the incomplete-covariates problem in reliability analysis. Typical approaches to handling incomplete covariates have been studied to investigate their performance and their effects on reliability analysis results. Since these existing approaches can underestimate the variance in regressions and introduce extra uncertainty into reliability analysis, the developed NNHMs are extended to handle incomplete covariates as an integral part of the model. The extended versions of the NNHMs have been validated using simulated bearing data and real data from a liquefied natural gas pump. The results demonstrate that the new approach outperforms the typical approaches to handling incomplete covariates. Another problem in reliability analysis is that future covariates of engineering assets are generally unavailable. In existing practice for multi-step reliability analysis, historical covariates are used to estimate future covariates. Covariates of engineering assets, however, are often subject to substantial fluctuation due to the influence of both engineering degradation and changes in environmental settings. Commonly used covariate extrapolation methods are therefore unsuitable because of error accumulation and uncertainty propagation. To overcome this difficulty, rather than directly extrapolating covariate values, this research projects covariate states. The estimated covariate states and the unknown covariate values at future running steps of assets constitute an incomplete covariate set, which is then analysed by the extended NNHMs. A new assessment function is also proposed to evaluate the risks of underestimated and overestimated reliability analysis results. A case study using field data from a paper and pulp mill demonstrates that this new multi-step reliability analysis procedure generates more accurate results.
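The sketch below illustrates the general shape of a neural-network hazard model with incomplete covariates: missing readings are mean-imputed and flagged with a mask, a small network maps time plus covariates to a non-negative hazard, and the hazard is integrated into a reliability curve. The architecture, imputation scheme and random weights are assumptions made for illustration; training and the thesis's specific NNHM formulation are omitted.

```python
# Minimal sketch of the idea of a neural-network hazard model with incomplete
# covariates. Architecture, weights and the imputation scheme are illustrative
# assumptions; training is omitted entirely.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 100.0, 50)                       # operating hours
cov = rng.normal(size=(50, 2))                        # e.g. vibration, temperature
cov[rng.random(cov.shape) < 0.2] = np.nan             # simulate missing readings

mask = np.isnan(cov).astype(float)                    # 1 where the covariate is missing
imputed = np.where(np.isnan(cov), np.nanmean(cov, axis=0), cov)
x = np.column_stack([t / t.max(), imputed, mask])     # network input

W1, b1 = rng.normal(size=(x.shape[1], 8)) * 0.5, np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)) * 0.5, np.zeros(1)

def hazard(x):
    h = np.tanh(x @ W1 + b1)
    return np.logaddexp(0.0, (h @ W2 + b2).ravel())   # softplus keeps the hazard >= 0

reliability = np.exp(-np.cumsum(hazard(x) * np.gradient(t)))
print("reliability at end of window:", round(reliability[-1], 3))
```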
Abstract:
Stereo visual odometry has received little investigation in high-altitude applications due to the generally poor performance of rigid stereo rigs at extremely small baseline-to-depth ratios. Without additional sensing, metric scale is considered lost and odometry is seen as effective only for monocular perspectives. This paper presents a novel modification to stereo-based visual odometry that allows accurate, metric pose estimation from high altitudes, even in the presence of poor calibration and without additional sensor inputs. By relaxing the (typically fixed) stereo transform during bundle adjustment and reducing the dependence on the fixed geometry for triangulation, metrically scaled visual odometry can be obtained in situations where high altitude and structural deformation from vibration would cause traditional algorithms to fail. This is achieved through the use of a novel constrained bundle adjustment routine and an accurately scaled pose initializer. We present visual odometry results demonstrating the technique on a short-baseline stereo pair inside a fixed-wing UAV flying at significant height (~30–100 m).
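One hedged way to picture the "relaxed" stereo transform is as a softly constrained variable in the least-squares problem: a prior term pulls it toward the nominal calibration while the image residuals are free to move it. The snippet below stubs out the reprojection terms and uses assumed values purely to show that structure; it is not the constrained bundle adjustment routine from the paper.

```python
# Sketch of the "relaxed" stereo transform idea: rather than being fixed, the
# left-to-right transform enters the optimisation with a soft prior pulling it
# toward the nominal calibration, so vibration-induced drift can be absorbed.
# The weight and parameterisation are assumptions; reprojection terms are stubbed.
import numpy as np
from scipy.optimize import least_squares

nominal = np.r_[0.0, 0.0, 0.0, 0.30, 0.0, 0.0]      # rvec, tvec of the rig as calibrated
prior_weight = 5.0                                   # how strongly we trust the calibration

def reprojection_residuals(poses, stereo):
    # stand-in for the real stereo bundle adjustment terms; here the image data
    # "prefers" a slightly longer 0.31 m baseline than the calibration
    return np.concatenate([poses - 0.1, stereo - np.r_[0.0, 0.0, 0.0, 0.31, 0.0, 0.0]])

def residuals(x):
    poses, stereo = x[:6], x[6:]
    prior = prior_weight * (stereo - nominal)        # soft constraint on the transform
    return np.concatenate([reprojection_residuals(poses, stereo), prior])

sol = least_squares(residuals, np.concatenate([np.zeros(6), nominal]))
print(np.round(sol.x[6:], 3))   # transform settles between the calibration and the data
```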
Abstract:
Achieving a robust, accurately scaled pose estimate in long-range stereo presents significant challenges. For large scene depths, triangulation from a single stereo pair is inadequate and noisy. Additionally, vibration and flexible rigs in airborne applications mean accurate calibrations are often compromised. This paper presents a technique for accurately initializing a long-range stereo VO algorithm at large scene depth, with accurate scale, without explicitly computing structure from rigidly fixed camera pairs. By performing a monocular pose estimate over a window of frames from a single camera, followed by adding the secondary camera frames in a modified bundle adjustment, an accurate, metrically scaled pose estimate can be found. To achieve this, the scale of the stereo pair is included in the optimization as an additional parameter. Results are presented on both simulated and field-gathered data from a fixed-wing UAV flying at significant altitude, where the epipolar geometry is inaccurate due to structural deformation and triangulation from a single pair is insufficient. Comparisons are made with more conventional VO techniques where the scale is not explicitly optimized, and robustness is demonstrated over repeated trials.
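The core of treating scale as an explicit optimisation variable can be shown with a toy example: a monocular reconstruction known only up to scale is aligned with the second camera's observations through the metric stereo baseline, and a single scalar is optimised. The intrinsics, baseline, and the simplification of a rotationally aligned second camera are illustrative assumptions, not the paper's modified bundle adjustment.

```python
# Toy sketch of recovering metric scale as an explicit optimisation variable.
# Values and names are illustrative assumptions; the second camera is assumed
# rotationally aligned with the first for brevity.
import numpy as np
from scipy.optimize import minimize_scalar

K = np.array([[800., 0., 512.], [0., 800., 384.], [0., 0., 1.]])
baseline = np.array([0.25, 0.0, 0.0])                        # metric left -> right offset
rng = np.random.default_rng(2)

pts_metric = rng.uniform([-5, -5, 40], [5, 5, 80], (40, 3))  # true structure (metres)
pts_mono = pts_metric / 7.3                                  # monocular result, unknown scale

def project(p):
    uv = (K @ p.T).T
    return uv[:, :2] / uv[:, 2:3]

obs_right = project(pts_metric - baseline)                   # what the second camera saw

def cost(s):
    # reprojection error of the up-to-scale structure, rescaled by s, in the right camera
    return np.sum((project(s * pts_mono - baseline) - obs_right) ** 2)

s_hat = minimize_scalar(cost, bounds=(0.1, 100.0), method="bounded").x
print("estimated scale:", round(s_hat, 2))                   # close to the true factor 7.3
```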
Abstract:
Robotic systems are increasingly being utilised as fundamental data-gathering tools by scientists, allowing new perspectives and a greater understanding of the planet and its environmental processes. Today's robots are already exploring our deep oceans, tracking harmful algal blooms and pollution spread, monitoring climate variables, and even studying remote volcanoes. This article collates and discusses the significant advancements and applications of marine, terrestrial, and airborne robotic systems developed for environmental monitoring during the last two decades. Emerging research trends for achieving large-scale environmental monitoring are also reviewed, including cooperative robotic teams, robot and wireless sensor network (WSN) interaction, adaptive sampling and model-aided path planning. These trends offer efficient and precise measurement of environmental processes at unprecedented scales that will push the frontiers of robotic and natural sciences.
Abstract:
This thesis investigates the possibility of using an adaptive tutoring system for beginning programming students. The work involved designing, developing and evaluating such a system and showing that it was effective in increasing the students’ test scores. In doing so, Artificial Intelligence techniques were used to analyse PHP programs written by students and to provide feedback based on the specific errors they made. Methods were also included to provide students with the next best exercise to suit their particular level of knowledge.
Abstract:
A Distributed Wireless Smart Camera (DWSC) network is a special type of Wireless Sensor Network (WSN) that processes captured images in a distributed manner. While image processing on DWSCs has great potential for growth, with applications spanning a vast practical domain such as security surveillance and health care, it suffers from tremendous constraints. In addition to the limitations of conventional WSNs, image processing on DWSCs requires more computational power, bandwidth and energy, which presents significant challenges for large-scale deployments. This dissertation has developed a number of algorithms that are highly scalable, portable, energy efficient and performance efficient, with consideration of the practical constraints imposed by the hardware and the nature of WSNs. More specifically, these algorithms tackle the problems of multi-object tracking and localisation in distributed wireless smart camera networks and of optimal camera configuration determination. Addressing the first problem of multi-object tracking and localisation requires solving a large array of sub-problems. The sub-problems discussed in this dissertation are calibration of internal parameters, multi-camera calibration for localisation, and object handover for tracking. These topics have been covered extensively in the computer vision literature; however, new algorithms must be invented to accommodate the various constraints introduced and required by the DWSC platform. A technique has been developed for the automatic calibration of low-cost cameras which are assumed to be restricted in their freedom of movement to either pan or tilt movements. Camera internal parameters, including focal length, principal point, lens distortion parameter and the angle and axis of rotation, can be recovered from a minimum of two images from the camera, provided that the axis of rotation between the two images goes through the camera's optical centre and is parallel to either the vertical (panning) or horizontal (tilting) axis of the image. For object localisation, a novel approach has been developed for the calibration of a network of non-overlapping DWSCs in terms of their ground plane homographies, which can then be used for localising objects. In the proposed approach, a robot travels through the camera network while updating its position in a global coordinate frame, which it broadcasts to the cameras. The cameras use this, along with the image plane location of the robot, to compute a mapping from their image planes to the global coordinate frame. This is combined with an occupancy map generated by the robot during the mapping process to localise objects moving within the network. In addition, to deal with the problem of object handover between DWSCs with non-overlapping fields of view, a highly scalable, distributed protocol has been designed. Cameras that follow the proposed protocol transmit object descriptions to a selected set of neighbours that are determined using a predictive forwarding strategy. The received descriptions are then matched at the subsequent camera on the object's path, using a probability maximisation process with locally generated descriptions. The second problem, of camera placement, emerges naturally when these pervasive devices are put into real use. The locations, orientations, lens types, etc. of the cameras must be chosen such that the utility of the network is maximised (e.g. maximum coverage) while user requirements are met.
To deal with this, a statistical formulation of the problem of determining optimal camera configurations has been introduced and a Trans-Dimensional Simulated Annealing (TDSA) algorithm has been proposed to effectively solve the problem.
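A small sketch of the ground-plane calibration idea follows: the robot broadcasts its global position while a camera records where it appears in the image, and a homography from image plane to ground plane is fitted from the correspondences, after which new detections can be localised. The direct linear transform and the synthetic data below are assumptions for illustration; the thesis's exact procedure and occupancy-map fusion are not reproduced.

```python
# Illustrative sketch of calibrating a camera's image-plane-to-ground-plane
# homography from robot broadcasts. A standard DLT is used here; the thesis's
# exact procedure may differ, and the data below are made up.
import numpy as np

def fit_homography(img_pts, world_pts):
    """Direct linear transform from at least four point correspondences."""
    A = []
    for (u, v), (x, y) in zip(img_pts, world_pts):
        A.append([u, v, 1, 0, 0, 0, -x * u, -x * v, -x])
        A.append([0, 0, 0, u, v, 1, -y * u, -y * v, -y])
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    return Vt[-1].reshape(3, 3)

def to_ground(H, uv):
    p = H @ np.r_[uv, 1.0]
    return p[:2] / p[2]

# where the robot appeared in the image, and the global ground-plane position it
# broadcast at that moment (generated here from an assumed true homography)
H_true = np.array([[0.01, 0.0, -0.2], [0.0, -0.012, 5.0], [0.0, 0.001, 1.0]])
img_pts = np.array([(120, 400), (300, 380), (500, 420), (350, 250), (200, 300)], float)
world_pts = np.array([to_ground(H_true, uv) for uv in img_pts])

H = fit_homography(img_pts, world_pts)
new_detection = np.array([410.0, 330.0])
print("object localised at:", np.round(to_ground(H, new_detection), 2))  # global (x, y)
```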
Abstract:
In this paper we describe cooperative control algorithms for robots and sensor nodes in an underwater environment. Cooperative navigation is defined as the ability of a coupled system of autonomous robots to pool their resources to achieve long-distance navigation and a larger controllability space. Other types of useful cooperation in underwater environments include: exchange of information such as data download and retasking; cooperative localization and tracking; and physical connection (docking) for tasks such as deployment of underwater sensor networks, collection of nodes and rescue of damaged robots. We present experimental results obtained with an underwater system that consists of two very different robots and a number of sensor network modules. We present the hardware and software architecture of this underwater system. We then describe various interactions between the robots and sensor nodes and between the two robots, including cooperative navigation. Finally, we describe our experiments with this underwater system and present data.
Abstract:
Railway bridges deteriorate over time due to different critical factors, including flood, wind, earthquake, collision, and environmental factors such as corrosion, wear and termite attack. In current practice, the contributions of these critical factors to the deterioration of railway bridges, which indicate their criticality, are not appropriately taken into account. In this paper, a new method for quantifying the criticality of these factors will be introduced. The available knowledge, as well as risk analyses conducted in different Australian standards and developed for bridge design, will be adopted. The analytic hierarchy process (AHP) is utilized for prioritising the factors. The method is used in the synthetic rating of railway bridges developed by the authors of this paper. Enhancing the reliability of predicting the vulnerability of railway bridges to the critical factors will be the significant achievement of this research.
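For readers unfamiliar with the analytic hierarchy process step, the sketch below derives criticality weights from the principal eigenvector of a pairwise comparison matrix and reports Saaty's consistency ratio as a sanity check. The comparison judgements are invented for illustration and do not come from the paper.

```python
# Small AHP sketch: criticality weights from the principal eigenvector of a
# pairwise comparison matrix, plus a consistency ratio. The comparison values
# below are made up for illustration, not taken from the paper.
import numpy as np

factors = ["flood", "wind", "earthquake", "collision", "corrosion"]
# A[i, j] = judged importance of factor i relative to factor j (Saaty 1-9 scale)
A = np.array([
    [1,   3,   5,   4,   2  ],
    [1/3, 1,   3,   2,   1/2],
    [1/5, 1/3, 1,   1/2, 1/4],
    [1/4, 1/2, 2,   1,   1/3],
    [1/2, 2,   4,   3,   1  ],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()

n = len(factors)
ci = (eigvals.real[k] - n) / (n - 1)     # consistency index
cr = ci / 1.12                           # Saaty's random index for n = 5 is about 1.12
print(dict(zip(factors, np.round(weights, 3))), "CR =", round(cr, 3))
```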
Abstract:
Intravitreal injections of GABA antagonists, dopamine agonists and brief periods of normal vision have been shown separately to inhibit form-deprivation myopia (FDM). Our study had three aims: (i) to establish whether GABAergic agents modify the myopia-protective effect of normal vision, (ii) to investigate the receptor sub-type specificity of any observed effect, and (iii) to consider an interaction with the dopamine (DA) system. Prior to the period of normal vision, GABAergic agents were applied either (i) individually, (ii) in combination with other GABAergic agents (an agonist with an antagonist), or (iii) in combination with DA agonists and antagonists. Water injections were given to groups not receiving drug treatments so that all experimental eyes received intravitreal injections. As shown previously, constant form-deprivation resulted in high myopia, and when diffusers were removed for 2 h per day the period of normal vision greatly reduced the FDM that developed. GABA agonists inhibited the protective effect of normal vision whereas antagonists had the opposite effect. GABAA/C agonists and D2 DA antagonists, when used in combination, were additive in suppressing the protective effect of normal vision. A D2 DA agonist restored some of the protective effect of normal vision that was inhibited by a GABA agonist (muscimol). The protective effect of normal vision against form-deprivation is modifiable by both the GABAergic and DAergic pathways.