287 results for Heterogeneous UAVs
Abstract:
In this report, an artificial neural network (ANN) based automated emergency landing site selection system for unmanned aerial vehicles (UAVs) and general aviation (GA) is described. The system aims to increase the safety of UAV operation by emulating pilot decision making in emergency landing scenarios, using an ANN to select a safe landing site from available candidates. The strength of an ANN in modeling complex input relationships makes it well suited to the multicriteria decision making (MCDM) process of emergency landing site selection. The ANN operates by identifying the more favorable of two landing sites when provided with an input vector derived from both landing sites' parameters, the aircraft's current state and wind measurements. The system consists of a feed-forward ANN, a pre-processor class that produces ANN input vectors, and a class in charge of creating a ranking of landing site candidates using the ANN. The system was successfully implemented in C++ using the FANN C++ library and ROS. Results obtained from ANN training and from simulations using landing sites randomly generated by a site detection simulator verify the feasibility of an ANN based automated emergency landing site selection system.
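The pairwise site-comparison idea can be sketched as follows. This is a minimal Python illustration, not the paper's C++/FANN implementation: the logistic scorer `compare_sites` with hand-picked weights stands in for the trained ANN, and the round-robin scoring is one plausible way to turn pairwise preferences into a full ranking.

```python
import math

def compare_sites(site_a, site_b, weights):
    """Hypothetical stand-in for the trained ANN comparator.
    Returns a value in (0, 1); > 0.5 means site_a is preferred."""
    diff = sum(w * (a - b) for w, a, b in zip(weights, site_a, site_b))
    return 1.0 / (1.0 + math.exp(-diff))  # logistic squashing of the score gap

def rank_sites(sites, weights):
    """Round-robin ranking: each site scores one point per pairwise win;
    returns site indices ordered from most to least favorable."""
    scores = [0] * len(sites)
    for i in range(len(sites)):
        for j in range(i + 1, len(sites)):
            if compare_sites(sites[i], sites[j], weights) > 0.5:
                scores[i] += 1
            else:
                scores[j] += 1
    return sorted(range(len(sites)), key=lambda k: -scores[k])
```

In the actual system the comparator would be the trained feed-forward ANN evaluated on the full input vector (site parameters, aircraft state, wind), but the ranking logic around it could look much like this.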
Abstract:
Tridiagonal diagonally dominant linear systems arise in many scientific and engineering applications. The standard Thomas algorithm for solving such systems is inherently serial, forming a bottleneck in computation. Algorithms such as cyclic reduction and SPIKE reduce a single large tridiagonal system to multiple small independent systems which can be solved in parallel. We have developed portable OpenCL implementations of the cyclic reduction and SPIKE algorithms with the intent to target a range of co-processors in a heterogeneous computing environment, including Field Programmable Gate Arrays (FPGAs), Graphics Processing Units (GPUs) and other multi-core processors. In this paper, we evaluate these designs in terms of solver performance, resource efficiency and numerical accuracy.
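For reference, the serial Thomas algorithm that the parallel schemes replace can be sketched in a few lines (a minimal Python version; array layout is illustrative):

```python
def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system with the serial Thomas algorithm.
    a: sub-diagonal (a[0] unused), b: main diagonal,
    c: super-diagonal (c[-1] unused), d: right-hand side.
    Assumes diagonal dominance, so no pivoting is needed."""
    n = len(b)
    cp = [0.0] * n  # modified super-diagonal
    dp = [0.0] * n  # modified right-hand side
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    # Forward elimination sweep (the inherently serial part).
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    # Back substitution sweep.
    x = [0.0] * n
    x[n - 1] = dp[n - 1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

Each loop iteration depends on the previous one, which is exactly the serial dependency that cyclic reduction and SPIKE break up into independent subsystems.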
Abstract:
There is increasing interest in the use of Unmanned Aerial Vehicles (UAVs) for wildlife and feral animal monitoring around the world. This paper describes a novel system in which a predictive dynamic application places the UAV ahead of a user; a low-cost thermal camera and a small onboard computer identify heat signatures of a target animal from a predetermined altitude and transmit the target's GPS coordinates. A map is generated, and various data sets and graphs are displayed using a GUI designed for ease of use. The paper describes the hardware and software architecture and the probabilistic model used with the downward-facing camera for the detection of an animal. Behavioral dynamics of target movement inform the design of a Kalman filter and Markov model based prediction algorithm used to place the UAV ahead of the user. Geometrical concepts and the Haversine formula are applied to the maximum likelihood case in order to predict a future state of the user, thus delivering a new waypoint for autonomous navigation. Results show that the system is capable of autonomously locating animals from a predetermined height and of generating a map showing the location of the animals ahead of the user.
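The Haversine formula mentioned above is standard; a minimal Python version (spherical-earth approximation with an assumed mean radius) looks like this:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2, radius_m=6_371_000.0):
    """Great-circle distance in metres between two latitude/longitude
    points (degrees), using the spherical-earth Haversine formula."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * radius_m * math.asin(math.sqrt(a))
```

In a waypoint-prediction pipeline such as the one described, this distance (together with a bearing) is what lets the predicted future state of the user be projected into a concrete GPS waypoint.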
Abstract:
- Provided a practical variable-stepsize implementation of the exponential Euler method (EEM).
- Introduced a new second-order variant of the scheme that enables the local error to be estimated at the cost of a single additional function evaluation.
- The new EEM implementation outperformed sophisticated implementations of the backward differentiation formulae (BDF) of order 2 and was competitive with BDF of order 5 for moderate to high tolerances.
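A fixed-stepsize sketch of the basic exponential Euler step for a scalar ODE y' = f(t, y), using the standard phi-function φ₁(z) = (eᶻ − 1)/z and the Jacobian J = ∂f/∂y. This is only an illustration of the underlying scheme, not the variable-stepsize implementation the abstract describes:

```python
import math

def phi1(z):
    """phi_1(z) = (exp(z) - 1) / z, with the z -> 0 limit handled."""
    return math.expm1(z) / z if abs(z) > 1e-12 else 1.0

def exponential_euler(f, dfdy, y0, t0, t1, n):
    """Fixed-step exponential Euler for a scalar ODE y' = f(t, y):
    y_{k+1} = y_k + h * phi_1(h * J) * f(t_k, y_k), J = df/dy(t_k, y_k).
    For a purely linear f the step is exact, which is the scheme's
    advantage on stiff linear-dominated problems."""
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        J = dfdy(t, y)
        y = y + h * phi1(h * J) * f(t, y)
        t += h
    return y
```

A variable-stepsize version would wrap this step with the second-order variant mentioned above to estimate the local error and adapt h, at the cost of one extra function evaluation per step.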
Abstract:
Surveying threatened and invasive species to obtain accurate population estimates is an important but challenging task that requires a considerable investment in time and resources. Estimates using existing ground-based monitoring techniques, such as camera traps and surveys performed on foot, are known to be resource intensive, potentially inaccurate and imprecise, and difficult to validate. Recent developments in unmanned aerial vehicles (UAVs), artificial intelligence and miniaturized thermal imaging systems represent a new opportunity for wildlife experts to inexpensively survey relatively large areas. The system presented in this paper includes thermal image acquisition as well as a video processing pipeline that performs object detection, classification and tracking of wildlife in forest or open areas. The system is tested on thermal video from ground-based recordings and test flight footage, and is found to detect all the target wildlife located in the surveyed area. The system is flexible in that the user can readily define the types of objects to classify and the object characteristics that should be considered during classification.
Abstract:
The use of UAVs for remote sensing tasks (e.g. agriculture, search and rescue) is increasing. The ability of UAVs to autonomously find a target and perform on-board decision making, such as descending to a new altitude or landing next to a target, is a desired capability. Computer-vision functionality allows the Unmanned Aerial Vehicle (UAV) to follow a designated flight plan, detect an object of interest, and change its planned path. In this paper we describe a low-cost, open-source system in which all image processing is performed on-board the UAV using a Raspberry Pi 2 interfaced with a camera. The Raspberry Pi and the autopilot are physically connected through a serial link and communicate via MAVProxy. The Raspberry Pi continuously monitors the flight path in real time through a USB camera module, and the algorithm checks whether the target has been captured. If the target is detected, the position of the object in the frame is represented in Cartesian coordinates and converted into estimated GPS coordinates. In parallel, the autopilot receives the target's approximate GPS location and decides how to guide the UAV to a new location. This system also has potential uses in precision agriculture, for detecting plant pests and disease outbreaks, which cause detrimental financial damage to crop yields if not detected early. Results show that the algorithm detects 99% of objects of interest and that the UAV is capable of autonomous navigation and on-board decision making.
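The pixel-to-GPS conversion step can be sketched as below. This is an illustrative approximation, not the paper's code: it assumes a nadir-pointing camera, flat ground, a north-aligned UAV, known horizontal and vertical fields of view, and a small-offset metres-to-degrees conversion.

```python
import math

def pixel_to_gps(px, py, img_w, img_h, fov_h_deg, fov_v_deg,
                 alt_m, uav_lat, uav_lon):
    """Rough pixel-to-GPS estimate for a nadir-pointing camera,
    assuming flat ground, zero gimbal tilt and a north-aligned UAV."""
    # Ground footprint of the full image at the current altitude.
    ground_w = 2 * alt_m * math.tan(math.radians(fov_h_deg / 2))
    ground_h = 2 * alt_m * math.tan(math.radians(fov_v_deg / 2))
    # Metre offsets of the detected pixel from the image centre.
    east_m = (px - img_w / 2) * ground_w / img_w
    north_m = (img_h / 2 - py) * ground_h / img_h  # image y grows downward
    # Convert metre offsets to degrees (small-offset approximation).
    dlat = north_m / 111_320.0
    dlon = east_m / (111_320.0 * math.cos(math.radians(uav_lat)))
    return uav_lat + dlat, uav_lon + dlon
```

A real system would additionally fold in the UAV's heading and gimbal attitude before handing the estimated coordinates to the autopilot as a guidance target.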
Abstract:
Stochastic volatility models are of fundamental importance to the pricing of derivatives. One of the most commonly used models of stochastic volatility is the Heston model, in which the price and volatility of an asset evolve as a pair of coupled stochastic differential equations. The computation of asset prices and volatilities involves the simulation of many sample trajectories with conditioning. The problem is treated using the method of particle filtering. While the simulation of a shower of particles is computationally expensive, each particle behaves independently, making such simulations ideal for massively parallel heterogeneous computing platforms. In this paper, we present our portable OpenCL implementation of the Heston model and discuss its performance and efficiency characteristics on a range of architectures including Intel CPUs, NVIDIA GPUs, and Intel Many Integrated Core (MIC) accelerators.
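A minimal serial sketch of the Heston path simulation underlying such an implementation, using a full-truncation Euler discretization (one common choice; the paper's discretization scheme is not specified here). In an OpenCL version each path would map naturally to an independent work-item:

```python
import math
import random

def heston_paths(s0, v0, kappa, theta, xi, rho, r, T, steps, n_paths, seed=0):
    """Full-truncation Euler simulation of the Heston model:
      dS = r*S dt + sqrt(v)*S dW1,   dv = kappa*(theta - v) dt + xi*sqrt(v) dW2,
    with corr(dW1, dW2) = rho. Returns the terminal price of each path."""
    rng = random.Random(seed)
    dt = T / steps
    out = []
    for _ in range(n_paths):
        s, v = s0, v0
        for _ in range(steps):
            z1 = rng.gauss(0.0, 1.0)
            z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)
            vp = max(v, 0.0)  # full truncation: clamp variance at zero
            s *= math.exp((r - 0.5 * vp) * dt + math.sqrt(vp * dt) * z1)
            v += kappa * (theta - vp) * dt + xi * math.sqrt(vp * dt) * z2
        out.append(s)
    return out
```

Since each path touches no shared state, the outer loop is embarrassingly parallel, which is exactly the property the abstract exploits on heterogeneous platforms.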
Abstract:
Solving large-scale all-to-all comparison problems using distributed computing is increasingly significant for various applications. Previous efforts to implement distributed all-to-all comparison frameworks have treated the two phases of data distribution and comparison task scheduling separately. This leads to high storage demands as well as poor data locality for the comparison tasks, thus creating a need to redistribute the data at runtime. Furthermore, most previous methods have been developed for homogeneous computing environments, so their overall performance is degraded even further when they are used in heterogeneous distributed systems. To tackle these challenges, this paper presents a data-aware task scheduling approach for solving all-to-all comparison problems in heterogeneous distributed systems. The approach formulates the requirements for data distribution and comparison task scheduling simultaneously as a constrained optimization problem. Then, metaheuristic data pre-scheduling and dynamic task scheduling strategies are developed along with an algorithmic implementation to solve the problem. The approach provides perfect data locality for all comparison tasks, avoiding rearrangement of data at runtime. It achieves load balancing among heterogeneous computing nodes, thus reducing the overall computation time. It also reduces data storage requirements across the network. The effectiveness of the approach is demonstrated through experimental studies.
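A toy sketch of the problem shape, assuming unit-cost comparison tasks and a per-node speed factor: each pair (i, j) is greedily assigned to the node with the earliest predicted finish time, and the items each node must store are recorded so that every task has local data and no runtime redistribution is needed. The paper's metaheuristic pre-scheduling is considerably more sophisticated; this only illustrates the coupled data-placement/scheduling idea.

```python
from itertools import combinations

def greedy_schedule(n_items, node_speeds):
    """Greedy load balancing for all-to-all comparison tasks on
    heterogeneous nodes. Returns (assignment, storage, loads), where
    storage[k] is the set of items node k must hold locally."""
    loads = [0.0] * len(node_speeds)
    assignment = {}
    storage = [set() for _ in node_speeds]
    for pair in combinations(range(n_items), 2):
        # Pick the node that would finish this unit task earliest.
        k = min(range(len(node_speeds)),
                key=lambda n: (loads[n] + 1.0) / node_speeds[n])
        assignment[pair] = k
        loads[k] += 1.0
        storage[k].update(pair)  # both items stored locally => data locality
    return assignment, storage, loads
```

Note how faster nodes naturally absorb proportionally more of the n(n−1)/2 comparison tasks, which is the load-balancing property the abstract highlights for heterogeneous environments.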
Abstract:
Lung cancer is the second most common type of cancer in the world and is the most common cause of cancer-related death in both men and women. Research into causes, prevention and treatment of lung cancer is ongoing and much progress has been made recently in these areas, however survival rates have not significantly improved. Therefore, it is essential to develop biomarkers for early diagnosis of lung cancer, prediction of metastasis and evaluation of treatment efficiency, as well as using these molecules to provide some understanding about tumour biology and translate highly promising findings in basic science research to clinical application. In this investigation, two-dimensional difference gel electrophoresis and mass spectrometry were initially used to analyse conditioned media from a panel of lung cancer and normal bronchial epithelial cell lines. Significant proteins were identified with heterogeneous nuclear ribonucleoprotein A2B1 (hnRNPA2B1), pyruvate kinase M2 isoform (PKM2), Hsc-70 interacting protein and lactate dehydrogenase A (LDHA) selected for analysis in serum from healthy individuals and lung cancer patients. hnRNPA2B1, PKM2 and LDHA were found to be statistically significant in all comparisons. Tissue analysis and knockdown of hnRNPA2B1 using siRNA subsequently demonstrated both the overexpression and potential role for this molecule in lung tumorigenesis. The data presented highlights a number of in vitro derived candidate biomarkers subsequently verified in patient samples and also provides some insight into their roles in the complex intracellular mechanisms associated with tumour progression.
Abstract:
The following paper considers the question: where to for office property? In doing so, it focuses, in the first instance, on identifying and describing a selection of key forces for change present within the contemporary operating environment in which office property functions. Given the increasingly complex, dynamic and multi-faceted character of this environment, the paper seeks to identify only the primary forces for change, within the context of the future of office property. These core drivers of change have, for the purposes of this discussion, been characterised as including a range of economic, demographic and socio-cultural factors, together with developments in information and communication technology. Having established this foundation, the paper proceeds to consider the manner in which these forces may, in the future, be manifested within the office property market. Comment is offered regarding the potential future implications of these forces for change together with their likely influence on the nature and management of the physical asset itself. Whilst no explicit time horizon has been envisioned in the preparation of this paper, particular attention has been accorded to short to medium term trends, that is, those likely to emerge in the office property marketplace over the coming two decades. Further, the paper considers the question posed, in respect of the future of office property, in the context of developed western nations. The degree of commonality seen in these mature markets is such that generalisations may more appropriately and robustly be applied. Whilst some of the comments offered with respect to the target market may find application in other arenas, it is beyond the scope of this paper to explicitly consider highly heterogeneous markets.
Given also the wide scope of this paper, key drivers for change and their likely implications for the commercial office property market are identified at a global level (within the parameters established above). Accordingly, the focus necessarily reflects overarching directions at a universal level, with the effect that direct applicability to individual markets, when viewed in isolation on a geographic or property-type-specific basis, may not be fitting in all instances.
Abstract:
Use of Unmanned Aerial Vehicles (UAVs) in support of government applications has already seen significant growth, and the potential for use of UAVs in commercial applications is expected to expand rapidly in the near future. However, the issue remains of how such automated or operator-controlled aircraft can be safely integrated into current airspace. If the goal of integration is to be realized, issues regarding safe separation in densely populated airspace must be investigated. This paper investigates automated separation management concepts in uncontrolled airspace that may help prepare for an expected growth of UAVs in Class G airspace. Not only are such investigations helpful for the UAV integration issue; the automated separation management concepts investigated by the authors can also be useful for the development of new or improved Air Traffic Control services in remote regions without any existing infrastructure. The paper also provides an overview of the Smart Skies program and discusses the corresponding Smart Skies research and development effort to evaluate aircraft separation management algorithms using simulations involving real-world data communication channels, verified against actual flight trials. This paper presents results from a unique flight test concept that uses real-time flight test data from Australia, sent over existing commercial communication channels to a control center in Seattle, for real-time separation management of actual and simulated aircraft. The paper also assesses the performance of an automated aircraft separation manager.
Abstract:
There are many studies that reveal the nature of design thinking and the nature of conceptual design as distinct from detailed or embodiment design. The results can assist our understanding of how the process of design can be supported and how new technologies can be introduced into the workplace. Existing studies provide limited information about the nature of collaborative design as it takes place on the ground and in the actual working context. How to provide appropriate and effective support for collaborative design information sharing across companies, countries and heterogeneous computer systems is a key issue. As data are passed between designers and the computer systems they employ, many exchanges are made. These exchanges may be used to establish measures of the benefits that new support systems can bring. Collaboration support tools represent a fast-growing section of the commercial software marketplace, and a reasonable range of products is available. Many of them offer significant application to design through support for distributed meetings via video and audio communications and the sharing of information, including collaborative sketching. Tools that specifically support 3D models and other very design-specific features are less common, and many of those are at prototype stages of development. A key question is to find viable ways of combining design information visualisation support with the collaboration support technologies available today. When collaborating, different views will need to be accessible at different times to all the collaborators. The architects may want to explain some ideas on their model, the structural engineers on theirs, and so on. However, there are issues of ownership when the structural engineer wants to manipulate the architect's model and vice versa. The mode of working, synchronous or asynchronous, may also have a bearing, as in a synchronous session there is control over what is happening.
Abstract:
In architectural design and the construction industry, there is insufficient evidence about the way designers collaborate in their normal working environments using both traditional and digital media. It is this gap in empirical evidence that the CRC project, “Team Collaboration in High Bandwidth Virtual Environments” addresses. The project is primarily, but not exclusively, concerned with the conceptual stages of design carried out by professional designers working in different offices. The aim is to increase opportunities for communication and interaction between people in geographically distant locations in order to improve the quality of collaboration. In order to understand the practical implications of introducing new digital tools on working practices, research into how designers work collaboratively using both traditional and digital media is being undertaken. This will involve a series of empirical studies in the work places of the industry partners in the project. The studies of collaboration processes will provide empirical results that will lead to more effective use of virtual environments in design and construction processes. The report describes the research approach, the industry study, the methods for data collection and analysis and the foundation research methodologies. A distinctive aspect is that the research has been devised to enable field studies to be undertaken in a live industrial environment where the participant designers carry out real projects alongside their colleagues and in familiar locations. There are two basic research objectives: one is to obtain evidence about design practice that will inform the architecture and construction industries about the impact and potential benefit of using digital collaboration technologies; the second is to add to long term research knowledge of human cognitive and behavioural processes based on real world data. 
In order to achieve this, the research methods must be able to acquire a rich and heterogeneous set of data from design activities as they are carried out in the normal working environment. This places different demands upon the data collection and analysis methods to those of laboratory studies where controlled conditions are required. In order to address this, the research approach that has been adopted is ethnographic in nature and case study-based. The plan is to carry out a series of in-depth studies in order to provide baseline results for future research across a wider community of user groups. An important objective has been to develop a methodology that will produce valid, significant and transferable results. The research will contribute to knowledge about how architectural design and the construction industry may benefit from the introduction of leading-edge collaboration technologies. The outcomes will provide a sound foundation for the production of guidelines for the assessment of high bandwidth tools and their future deployment. The knowledge will form the basis for the specification of future collaboration products and collaboration processes. This project directly addresses the industry-identified focus on cultural change, image, e-project management, and innovative methods.
Abstract:
The over-representation of novice drivers in crashes is alarming. Driver training is one of the interventions aimed at mitigating the number of crashes that involve young drivers. To our knowledge, Advanced Driver Assistance Systems (ADAS) have never been comprehensively used in designing an intelligent driver training system. Currently, there is a need to develop and evaluate ADAS that could assess driving competencies. The aim is to develop an unsupervised system called the Intelligent Driver Training System (IDTS) that analyzes crash risks in a given driving situation. In order to design a comprehensive IDTS, data is collected from the Driver, Vehicle and Environment (DVE), synchronized and analyzed. The first implementation phase of this intelligent driver training system deals with synchronizing the multiple variables acquired from the DVE. RTMaps is used to collect and synchronize data such as GPS, vehicle dynamics and driver head movement. After the data synchronization, maneuvers such as right turn, left turn and overtake are segmented out. Each maneuver is composed of several individual tasks that must be performed in a sequential manner. This paper focuses on turn maneuvers. Some of the tasks required in the analysis of a turn maneuver are: detecting the start and end of the turn, detecting the indicator status change, checking whether the indicator was turned on within a safe distance, and checking lane keeping during the turn maneuver. This paper proposes a fusion and analysis of the heterogeneous data involved in driving to determine the risk factor of particular maneuvers within the drive. It also explains the segmentation and risk analysis of the turn maneuver in a drive.
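The start/end-of-turn detection step can be illustrated with a simple heading-rate threshold on the synchronized heading trace. The thresholds and the minimum-total-change rule below are illustrative assumptions, not the IDTS parameters:

```python
def detect_turns(headings, dt=0.1, rate_thresh=10.0, min_total=60.0):
    """Segment turn maneuvers from a heading trace (degrees, sampled
    every dt seconds). A turn starts when |heading rate| exceeds
    rate_thresh (deg/s) and ends when it drops back below; segments
    accumulating less than min_total degrees of change are discarded
    as minor corrections. Returns (start_idx, end_idx, direction)."""
    turns = []
    start = None
    for i in range(1, len(headings)):
        # Wrap the heading difference into (-180, 180].
        d = (headings[i] - headings[i - 1] + 180.0) % 360.0 - 180.0
        rate = d / dt
        if abs(rate) >= rate_thresh and start is None:
            start = i - 1
        elif abs(rate) < rate_thresh and start is not None:
            total = (headings[i - 1] - headings[start] + 180.0) % 360.0 - 180.0
            if abs(total) >= min_total:
                # Heading increases clockwise, so positive change = right turn.
                turns.append((start, i - 1, "right" if total > 0 else "left"))
            start = None
    return turns
```

In the full system, the detected turn interval would then be cross-checked against the indicator status change, the indicator-on distance and the lane-keeping signal to score the maneuver's risk.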
Abstract:
It has been argued that intentional first year curriculum design has a critical role to play in enhancing first year student engagement, success and retention (Kift, 2008). A fundamental first year curriculum objective should be to assist students to make the successful transition to assessment in higher education. Scott (2006) has identified that ‘relevant, consistent and integrated assessment … [with] prompt and constructive feedback’ are particularly relevant to student retention generally; while Nicol (2007) suggests that ‘lack of clarity regarding expectations in the first year, low levels of teacher feedback and poor motivation’ are key issues in the first year. At the very minimum, if we expect first year students to become independent and self-managing learners, they need to be supported in their early development and acquisition of tertiary assessment literacies (Orrell, 2005). Critical to this attainment is the necessity to alleviate early anxieties around assessment information, instructions, guidance, and performance. This includes, for example: inducting students thoroughly into the academic languages and assessment genres they will encounter as the vehicles for evidencing learning success; and making expectations about the quality of this evidence clear. Most importantly, students should receive regular formative feedback of their work early in their program of study to aid their learning and to provide information to both students and teachers on progress and achievement. 
Leveraging research conducted under an ALTC Senior Fellowship that has sought to articulate a research-based 'transition pedagogy' (Kift & Nelson, 2005) – a guiding philosophy for intentional first year curriculum design and support that carefully scaffolds and mediates the first year learning experience for contemporary heterogeneous cohorts – this paper will discuss theoretical and practical strategies and examples that should be of assistance in implementing good assessment and feedback practices across a range of disciplines in the first year.