978 results for geospatial data
Abstract:
This paper investigates the use of the FAB-MAP appearance-only SLAM algorithm as a method for performing visual data association for RatSLAM, a semi-metric full SLAM system. While both systems have shown the ability to map large (60-70 km) outdoor environments of approximately the same scale, for either larger areas or longer time periods both algorithms encounter difficulties with false positive matches. By combining the two algorithms using a mapping between appearance and pose space, both false positives and false negatives generated by FAB-MAP are significantly reduced during outdoor mapping with a forward-facing camera. The hybrid FAB-MAP-RatSLAM system developed here demonstrates the potential for successful SLAM over long periods of time.
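As a rough illustration of how pose information can gate appearance-only matches (not the authors' implementation), the sketch below accepts a FAB-MAP-style loop-closure candidate only when its appearance probability is high and the pose implied by the match is consistent with the current pose estimate; all names and thresholds are hypothetical.

```python
import math

# Hypothetical thresholds; the paper does not specify values.
MATCH_PROB_MIN = 0.99      # appearance-only confidence required
POSE_GATE_RADIUS = 25.0    # metres: maximum plausible pose discrepancy

def accept_loop_closure(match_prob, candidate_pose, current_pose):
    """Gate an appearance-space match with a pose-space consistency check.

    match_prob     -- probability reported by the appearance matcher
    candidate_pose -- (x, y) previously mapped pose of the matched place
    current_pose   -- (x, y) pose estimate from odometry / the pose filter
    """
    if match_prob < MATCH_PROB_MIN:
        return False  # appearance evidence too weak
    dx = candidate_pose[0] - current_pose[0]
    dy = candidate_pose[1] - current_pose[1]
    # Reject appearance matches that imply an implausible jump in pose;
    # this is how pose-space information can suppress false positives.
    return math.hypot(dx, dy) <= POSE_GATE_RADIUS
```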
Abstract:
As network capacity has increased over the past decade, individuals and organisations have found it increasingly appealing to make use of remote services in the form of service-oriented architectures and cloud computing services. Data processed by remote services, however, is no longer under the direct control of the individual or organisation that provided the data, leaving data owners at risk of data theft or misuse. This paper describes a model by which data owners can control the distribution and use of their data throughout a dynamic coalition of service providers using digital rights management technology. Our model allows a data owner to establish the trustworthiness of every member of a coalition employed to process data, and to communicate a machine-enforceable usage policy to every such member.
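The notion of a machine-enforceable usage policy can be made concrete with a toy data structure. The sketch below is purely illustrative and not the paper's model: a policy object records the operations a data owner licenses and the attestations a provider must present, and a check function enforces both.

```python
from dataclasses import dataclass, field

@dataclass
class UsagePolicy:
    """Toy machine-enforceable usage policy (illustrative only)."""
    allowed_operations: set = field(default_factory=set)
    may_redistribute: bool = False
    required_attestations: set = field(default_factory=set)

def provider_may_process(policy, provider_attestations, operation):
    # A provider must hold every attestation the owner requires
    # (establishing trustworthiness) and the operation must be licensed.
    return (policy.required_attestations <= provider_attestations
            and operation in policy.allowed_operations)

policy = UsagePolicy({"aggregate"}, False, {"certified-platform"})
print(provider_may_process(policy, {"certified-platform"}, "aggregate"))  # True
```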
Abstract:
Purpose – The purpose of this paper is to examine buyer awareness and acceptance of environmental and energy efficiency measures in the New Zealand residential property market. The study aims to provide a greater understanding of consumer behaviour in the residential property market in relation to green housing issues.

Design/methodology/approach – The paper is based on an extensive survey of Christchurch real estate offices designed to gather data on the factors that buyers in the residential property market considered important. The survey was designed so that these factors could be analysed on a socio-economic basis and buyer behaviour compared across property values.

Findings – The results show that, regardless of income level, buyers still consider the most important factors in the house purchase decision to be the location of the property and its price. Although awareness of green housing issues and energy efficiency in housing is growing in the residential property market, it is a major consideration only for young and older buyers in the high income brackets, and of only some importance for all other buyer sectors of the residential property market. Many of the voluntary measures introduced by governments to improve the energy efficiency of residential housing are still not considered important by buyers, indicating that a more mandatory approach may have to be taken to improve energy efficiency in the established housing market, as these measures are not valued by buyers.

Originality/value – The paper confirms the variations in real estate buyer behaviour across the full range of residential property markets and the acceptance and awareness of green housing issues and measures. These results would be applicable to most established and transparent residential property markets.
Dynamic analysis of on-board mass data to determine tampering in heavy vehicle on-board mass systems
Abstract:
Transport Certification Australia Limited, jointly with the National Transport Commission, has undertaken a project to investigate the feasibility of on-board mass monitoring (OBM) devices for regulatory purposes. OBM increases jurisdictional confidence in operational heavy vehicle compliance. This paper covers technical issues regarding the potential use of dynamic data from OBM systems to indicate that tampering has occurred. The tamper-evidence and accuracy of current OBM systems needed to be determined before any regulatory scheme was put in place for their use. Tests were performed to determine the potential for, and ease of, tampering, and an algorithm was developed to detect tamper events; its results are detailed here.
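One simple way dynamic data can evidence tampering is worth illustrating, though the paper's actual algorithm is not reproduced here: a genuine on-board mass signal fluctuates with vehicle dynamics, so windows with implausibly low variation are suspicious. All names and thresholds in the sketch below are assumptions.

```python
import statistics

def flag_possible_tampering(mass_readings, window=50, min_std=0.5):
    """Flag windows whose dynamic variation is implausibly low.

    A genuine on-board mass signal varies with road and vehicle dynamics;
    a frozen or substituted signal does not. The window size and the
    standard-deviation floor here are illustrative values only.
    """
    suspect = []
    for start in range(0, len(mass_readings) - window + 1, window):
        chunk = mass_readings[start:start + window]
        if statistics.pstdev(chunk) < min_std:
            suspect.append((start, start + window))
    return suspect
```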
Abstract:
We propose a digital rights management approach for sharing electronic health records for research purposes and argue the advantages of this approach. We outline our implementation, discuss the challenges we faced, and identify future directions.
Abstract:
The study described in this paper developed a model of animal movement which explicitly recognised each individual as the central unit of measure. The model was developed by learning from a real dataset that measured and calculated, for individual cows in a herd, their linear and angular positions and their directional and angular speeds. Two learning algorithms were implemented: a Hidden Markov model (HMM) and a long-term prediction algorithm. It is shown that an HMM can be used to describe the animal's movement and state-transition behaviour within several “stay” areas where cows remained for long periods. Model parameters were estimated for hidden behaviour states such as relocating, foraging and bedding. For cows' movement between the “stay” areas, a long-term prediction algorithm was implemented. By combining these two algorithms it was possible to develop a successful model which achieved results closely matching the collected animal behaviour data. This modelling methodology could easily be applied to the interactions of other animal species.
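A minimal sketch of the HMM half of such a model, assuming the hmmlearn package and synthetic stand-in observations (the paper's dataset is not reproduced here):

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM  # pip install hmmlearn

# Each row: [linear_speed, angular_speed] for one cow at one time step.
# Real data would come from positioning collars; random values stand in here.
rng = np.random.default_rng(0)
observations = rng.random((1000, 2))

# Three hidden states, loosely analogous to the paper's
# relocating / foraging / bedding behaviours.
model = GaussianHMM(n_components=3, covariance_type="diag", n_iter=50)
model.fit(observations)

states = model.predict(observations)  # most likely hidden-state sequence
print(model.transmat_)                # learned state-transition matrix
```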
Abstract:
We present the design and deployment results for PosNet - a large-scale, long-duration sensor network that gathers summary position and status information from mobile nodes. The mobile nodes have a fixed-sized memory buffer to which position data is added at a constant rate, and from which data is downloaded at a non-constant rate. We have developed a novel algorithm that performs online summarization of position data within the buffer, where the algorithm naturally accommodates data input and output rate mismatch, and also provides a delay-tolerant approach to data transport. The algorithm has been extensively tested in a large-scale long-duration cattle monitoring and control application.
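The summarization idea lends itself to a compact illustration. The sketch below is one plausible realisation, not PosNet's published algorithm: a fixed-capacity buffer that, on overflow, evicts the interior fix whose removal least distorts the stored trajectory. The eviction criterion and all names are assumptions.

```python
def summarize_insert(buffer, point, capacity):
    """Insert a position fix into a fixed-size buffer, summarizing on overflow.

    buffer   -- list of (x, y) fixes, oldest first (capacity >= 2)
    point    -- new (x, y) fix arriving at the constant input rate
    capacity -- fixed buffer size; downloads may drain it at any rate
    """
    buffer.append(point)
    if len(buffer) <= capacity:
        return

    def deviation(i):
        (x0, y0), (x1, y1), (x2, y2) = buffer[i - 1], buffer[i], buffer[i + 1]
        # Twice the triangle area: how far point i strays from the chord
        # joining its neighbours; small values mean little information lost.
        return abs((x2 - x0) * (y1 - y0) - (x1 - x0) * (y2 - y0))

    # Evict the interior point whose removal least perturbs the trajectory,
    # so the buffer degrades gracefully when input outpaces download.
    victim = min(range(1, len(buffer) - 1), key=deviation)
    del buffer[victim]
```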
Abstract:
Emerging data streaming applications in Wireless Sensor Networks require reliable and energy-efficient transport protocols. Our recent Wireless Sensor Network deployment in the Burdekin delta, Australia, for water monitoring [T. Le Dinh, W. Hu, P. Sikka, P. Corke, L. Overs, S. Brosnan, Design and deployment of a remote robust sensor network: experiences from an outdoor water quality monitoring network, in: Second IEEE Workshop on Practical Issues in Building Sensor Network Applications (SenseApp 2007), Dublin, Ireland, 2007] is one such example. This application involves streaming sensed data such as pressure, water flow rate, and salinity periodically from many scattered sensors to the sink node, which in turn relays them via an IP network to a remote site for archiving, processing, and presentation. While latency is not a primary concern in this class of application (the sampling interval is usually minutes or hours), energy efficiency is. Continuous long-term operation and reliable delivery of the sensed data to the sink are also desirable. This paper proposes ERTP, an Energy-efficient and Reliable Transport Protocol for Wireless Sensor Networks. ERTP is designed for data streaming applications in which sensor readings are transmitted from one or more sensor sources to a base station (or sink). ERTP uses a statistical reliability metric which ensures that the number of data packets delivered to the sink exceeds a defined threshold. Our extensive discrete event simulations and experimental evaluations show that ERTP is significantly more energy-efficient than current approaches, reducing energy consumption by more than 45%. Consequently, sensor nodes are more energy-efficient and the lifespan of the unattended WSN is increased.
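The statistical reliability idea can be illustrated with a small calculation. The sketch below is a generic end-to-end reliability computation in the spirit of ERTP, not its published formulation: it derives the minimum transmissions per hop so that the delivery ratio at the sink meets a target; parameter names and values are illustrative.

```python
import math

def retransmissions_per_hop(hop_loss, hops, target_reliability):
    """Minimum transmissions per hop so end-to-end delivery meets a target.

    Each hop must succeed with probability at least
    target_reliability ** (1 / hops), and delivery after n tries
    succeeds with probability 1 - hop_loss ** n.
    """
    per_hop_target = target_reliability ** (1.0 / hops)
    # Solve 1 - hop_loss**n >= per_hop_target for n.
    n = math.log(1.0 - per_hop_target) / math.log(hop_loss)
    return math.ceil(n)

# E.g. 20% per-hop loss, 5 hops, 90% end-to-end target -> 3 tries per hop.
print(retransmissions_per_hop(hop_loss=0.2, hops=5, target_reliability=0.9))
```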
Abstract:
In the past few years, numerous data collection protocols have been developed for wireless sensor networks (WSNs). However, there has been no comparison of their relative performance in realistic environments. Here we report the results of an empirical study using a Fleck3 sensor network testbed for four different data collection protocols: one-phase pull Directed Diffusion (DD), Expected Number of Transmissions (ETX), ETX with explicit acknowledgment (ETX-eAck), and ETX with implicit acknowledgment (ETX-iAck). Our empirical study provides useful insights for future sensor network deployments. When the required application end-to-end reliability is not strict (e.g., 70%) and link quality is good, DD and ETX are the best options because of their simplicity and low routing overhead. Both ETX-eAck and ETX-iAck achieve more than 90% end-to-end reliability when the link quality is reasonable (less than 25% packet loss). When the link quality is good, ETX-iAck introduces significantly less routing overhead (up to 50%) than ETX-eAck. However, if the radio transceiver supports variable packet length, ETX-eAck can outperform ETX-iAck when the link quality is poor. The important message from this paper is that the choice of data collection protocol should come after the operating environment is understood. This understanding must include the characteristics of the radio transceiver and link-loss statistics from a long-term (across seasons and weather variation) radio survey of the site.
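For readers unfamiliar with the ETX family, the metric itself is simple to state. The sketch below computes the standard ETX link cost (expected transmission count, per De Couto et al.) and a route cost as the sum over links; the delivery-ratio values are illustrative.

```python
def etx(forward_delivery_ratio, reverse_delivery_ratio):
    """Expected transmission count for one link.

    Standard ETX definition: the expected number of transmissions,
    including retransmissions, needed to deliver a packet over the link
    and receive its acknowledgment back.
    """
    return 1.0 / (forward_delivery_ratio * reverse_delivery_ratio)

# Route cost is the sum of link ETX values; routing prefers lower totals.
path = [(0.9, 0.8), (0.95, 0.9)]            # (df, dr) per link, illustrative
print(sum(etx(df, dr) for df, dr in path))  # ~2.56
```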
Abstract:
In this paper we present a novel platform for underwater sensor networks to be used for long-term monitoring of coral reefs and fisheries. The sensor network consists of static and mobile underwater sensor nodes. The nodes communicate point-to-point using a novel high-speed optical communication system integrated into the TinyOS stack, and they broadcast using an acoustic protocol integrated in the TinyOS stack. The nodes have a variety of sensing capabilities, including cameras, water temperature, and pressure. The mobile nodes can locate and hover above the static nodes for data muling, and they can perform network maintenance functions such as deployment, relocation, and recovery. In this paper we describe the hardware and software architecture of this underwater sensor network. We then describe the optical and acoustic networking protocols and present experimental networking and data collected in a pool, in rivers, and in the ocean. Finally, we describe our experiments with mobility for data muling in this network.
Abstract:
Habitat models are widely used in ecology; however, there are relatively few studies of rare species, primarily because of a paucity of survey records and the lack of robust means of assessing the accuracy of modelled spatial predictions. We investigated the potential of compiled ecological data in developing habitat models for Macadamia integrifolia, a vulnerable mid-stratum tree endemic to lowland subtropical rainforests of southeast Queensland, Australia. We compared the performance of two binomial models, Classification and Regression Trees (CART) and Generalised Additive Models (GAM), with Maximum Entropy (MAXENT) models developed from (i) presence records and available absence data and (ii) presence records and background data. The GAM model was the best performer across the range of evaluation measures employed; however, all models were assessed as potentially useful for informing in situ conservation of M. integrifolia. A significant loss of M. integrifolia habitat has occurred (p < 0.05), with only 37% of former (pre-clearing) habitat remaining in 2003. Remnant patches are significantly smaller, have larger edge-to-area ratios and are more isolated from each other than pre-clearing configurations (p < 0.05). Whilst the network of suitable habitat patches is still largely intact, numerous smaller patches are more isolated in the contemporary landscape than they were before clearing. These results suggest that in situ conservation of M. integrifolia may be best achieved through a landscape approach that considers the relative contribution of small remnant habitat fragments to the species as a whole, thereby facilitating connectivity among the entire network of habitat patches.
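A comparison of this kind can be prototyped compactly. The sketch below is a loose analogue, not the authors' pipeline: it fits CART and, as a stand-in for the GAM/MAXENT models (which require dedicated packages such as pyGAM or the MaxEnt software), logistic regression, then compares them by AUC on synthetic presence/absence data.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Stand-in data: rows are survey sites, columns environmental covariates,
# y indicates recorded presence (1) or absence (0) of the species.
rng = np.random.default_rng(1)
X = rng.random((500, 6))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.3, 500) > 1.0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

# CART as in the paper; logistic regression is only a rough stand-in
# for the GAM and MAXENT models compared in the study.
for name, clf in [("CART", DecisionTreeClassifier(max_depth=4)),
                  ("LR (GAM/MAXENT stand-in)", LogisticRegression())]:
    clf.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
    print(name, round(auc, 3))
```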
Abstract:
A teaching and learning development project is currently under way at Queensland University of Technology to develop advanced technology videotapes for use in the delivery of structural engineering courses. These tapes consist of integrated computer and laboratory simulations of important concepts, and of the behaviour of structures and their components, for a number of structural engineering subjects. They will be used as part of the regular lectures and thus will not only improve the quality of lectures and the learning environment, but will also be able to replace the ever-dwindling laboratory teaching in these subjects. The use of these videotapes, developed using advanced computer graphics, data visualization and video technologies, will enrich the learning process of the current diverse engineering student body. This paper presents the details of this new method, the methodology used, and the results and evaluation in relation to one of the structural engineering subjects, steel structures.
Abstract:
Introduction – Planning for healthy cities faces significant challenges due to a lack of effective information, systems and a framework to organise that information. Such a framework is critical for making accessible and informed decisions when planning healthy cities. These challenges have been magnified by the rise of the healthy cities movement, which has brought more frequent calls for localised, collaborative and knowledge-based decisions. Some studies have suggested that a 'knowledge-based' approach to planning will enhance the accuracy and quality of decision-making by improving the availability of data and information for health service planners, and may also increase collaboration between stakeholders and the community. A knowledge-based or evidence-based approach to decision-making can provide 'out-of-the-box' thinking through the use of technology during decision-making processes. Minimal research has been conducted in this area to date, especially in evaluating the impact of a knowledge-based approach on stakeholders, policy-makers and decision-makers within health planning initiatives.

Purpose – The purpose of the paper is to present an integrated method developed to facilitate a knowledge-based decision-making process to assist health planning.

Methodology – Specifically, the paper describes the participatory process adopted to develop an online Geographic Information System (GIS)-based Decision Support System (DSS) for health planners.

Value – Conceptually, it is an application of the Healthy Cities and Knowledge Cities approaches, linked together. Specifically, it is a unique settings-based initiative designed to plan for and improve the health capacity of the Logan-Beaudesert area, Australia: the Logan-Beaudesert Health Coalition (LBHC).

Practical implications – The paper outlines the application of a knowledge-based approach to the development of a healthy city. It also highlights the need for widespread use of this approach as a tool for enhancing community-based health coalition decision-making processes.
Abstract:
The high morbidity and mortality associated with atherosclerotic coronary vascular disease (CVD) and its complications are being lessened by increased knowledge of risk factors, effective preventative measures and proven therapeutic interventions. However, significant CVD morbidity remains, and sudden cardiac death continues to be a presenting feature for some patients subsequently diagnosed with CVD. Coronary vascular disease is also the leading cause of anaesthesia-related complications. Stress electrocardiography/exercise testing is predictive of 10-year risk of CVD events, and the cardiovascular variables used to score this test are monitored peri-operatively. Similar physiological time-series datasets are being subjected to data mining methods for the prediction of medical diagnoses and outcomes. This study aims to find predictors of CVD using anaesthesia time-series data and patient risk factor data. Several pre-processing and predictive data mining methods are applied to these data. Physiological time-series data related to anaesthetic procedures are subjected to pre-processing methods for removal of outliers, calculation of moving averages, and data summarisation and abstraction. Feature selection methods of both wrapper and filter types are applied to derived physiological time-series variable sets alone and to the same variables combined with risk factor variables. The ability of these methods to identify subsets of highly correlated but non-redundant variables is assessed. The major dataset is derived from the entire anaesthesia population; subsets of this population are considered to be at increased anaesthesia risk based on their need for more intensive monitoring (invasive haemodynamic monitoring and additional ECG leads). Because of the unbalanced class distribution in the data, majority-class under-sampling and the Kappa statistic, together with misclassification rate and area under the ROC curve (AUC), are used to evaluate models generated using different prediction algorithms. The performance of models derived from feature-reduced datasets reveals the filter method, CFS subset evaluation, to be the most consistently effective, although Consistency-derived subsets tended to slightly increase accuracy at the cost of markedly increased complexity. The use of misclassification rate (MR) for model performance evaluation is influenced by class distribution; this could be eliminated by considering the AUC or Kappa statistic, as well as by evaluating subsets with an under-sampled majority class. The noise and outlier removal pre-processing methods produced models with MR ranging from 10.69 to 12.62, the lowest value being for data from which both outliers and noise were removed (MR 10.69); for the raw time-series dataset, MR is 12.34. Feature selection reduces MR to between 9.8 and 10.16, with time-segmented summary data (dataset F) at 9.8 and raw time-series summary data (dataset A) at 9.92. However, for all datasets based on time-series data alone, complexity is high. For most pre-processing methods, CFS could identify a subset of correlated and non-redundant variables from the time-series-alone datasets, but models derived from these subsets have only one leaf. MR values are consistent with the class distribution in the subset folds evaluated in the n-fold cross-validation method.
For models based on CFS-selected time-series-derived and risk factor (RF) variables, MR ranges from 8.83 to 10.36, with dataset RF_A (raw time-series data and RF) at 8.85 and dataset RF_F (time-segmented time-series variables and RF) at 9.09. The models based on counts of outliers and counts of data points outside the normal range (dataset RF_E), and on variables derived from time series transformed using Symbolic Aggregate Approximation (SAX) with associated time-series pattern cluster membership (dataset RF_G), perform least well, with MR of 10.25 and 10.36 respectively. For coronary vascular disease prediction, nearest neighbour (NNge) and the support vector machine based method, SMO, have the highest MR, of 10.1 and 10.28, while logistic regression (LR) and the decision tree (DT) method, J48, have MR of 8.85 and 9.0 respectively. DT rules are the most comprehensible and clinically relevant. The increase in predictive accuracy achieved by adding risk factor variables to time-series-based models is significant; adding time-series-derived variables to models based on risk factor variables alone is associated with a trend towards improved performance. Data mining of feature-reduced anaesthesia time-series variables together with risk factor variables can produce compact and moderately accurate models able to predict coronary vascular disease. Decision tree analysis of time-series data combined with risk factor variables yields rules that are more accurate than models based on time-series data alone. The limited additional value provided by electrocardiographic variables, compared with risk factors alone, parallels recent suggestions that exercise electrocardiography (exECG) under standardised conditions has limited additional diagnostic value over risk factor analysis and symptom pattern. The pre-processing used in this study had limited effect when time-series variables and risk factor variables were used together as model input; in the absence of risk factor input, using time-series variables after outlier removal, and time-series variables based on physiological values falling outside the accepted normal range, was associated with some improvement in model performance.
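The evaluation pipeline described above (majority-class under-sampling, then comparing classifiers by Kappa and AUC) can be sketched compactly. The example below uses scikit-learn analogues of the study's Weka methods (a decision tree for J48, logistic regression for LR) on synthetic stand-in data; it is illustrative only.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import cohen_kappa_score, roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
X = rng.random((2000, 10))                 # time-series summaries + risk factors
y = (rng.random(2000) < 0.15).astype(int)  # unbalanced classes, as in the study

# Under-sample the majority class to balance the training data.
pos = np.where(y == 1)[0]
neg = rng.choice(np.where(y == 0)[0], size=len(pos), replace=False)
idx = np.concatenate([pos, neg])
X_bal, y_bal = X[idx], y[idx]

X_tr, X_te, y_tr, y_te = train_test_split(X_bal, y_bal, random_state=2)
for name, clf in [("DT", DecisionTreeClassifier(max_depth=5)),
                  ("LR", LogisticRegression(max_iter=1000))]:
    clf.fit(X_tr, y_tr)
    pred = clf.predict(X_te)
    print(name,
          "kappa=", round(cohen_kappa_score(y_te, pred), 3),
          "AUC=", round(roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]), 3))
```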
Abstract:
Road agencies require comprehensive, relevant and high-quality data describing their road assets to support their investment decisions. An investment decision support system for road maintenance and rehabilitation mainly comprises three supporting elements: road asset data, decision support tools and criteria for decision-making. Probability-based methods have played a crucial role in helping decision makers understand the relationships among road-related data, asset performance and the uncertainties in estimating budgets/costs for road management investment. This paper presents applications of the probability-based method for road asset management.
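As a rough illustration of how probability-based methods expose uncertainty in budget estimates, the sketch below runs a small Monte Carlo simulation; the cost and deterioration distributions are assumptions for illustration, not figures from the paper.

```python
import random

def simulate_budget(n_trials=10000):
    """Monte Carlo sketch of probability-based budget estimation.

    Treatment unit cost and the extent of pavement needing work are drawn
    from assumed distributions, yielding a distribution of annual
    maintenance cost rather than a single point estimate.
    """
    random.seed(0)
    costs = []
    for _ in range(n_trials):
        unit_cost = random.triangular(40, 90, 60)      # $/m2, assumed
        area_needing_work = random.gauss(12000, 2500)  # m2, assumed
        costs.append(unit_cost * max(area_needing_work, 0.0))
    costs.sort()
    return costs[int(0.5 * n_trials)], costs[int(0.9 * n_trials)]

# Reporting a median and a 90th-percentile cost lets planners budget
# against uncertainty instead of a single deterministic figure.
median, p90 = simulate_budget()
print(f"median ${median:,.0f}, 90th percentile ${p90:,.0f}")
```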