894 results for data integration


Relevance:

20.00%

Publisher:

Abstract:

The aim of this paper is to demonstrate the validity of using Gaussian mixture models (GMMs) for representing probabilistic distributions in a decentralised data fusion (DDF) framework. GMMs are a powerful and compact stochastic representation that allows efficient communication of feature properties in large-scale decentralised sensor networks. It will be shown that GMMs provide a basis for analytical solutions to the update and prediction operations of general Bayesian filtering. Furthermore, a variant of the Covariance Intersect algorithm for Gaussian mixtures will be presented, ensuring a conservative update for the fusion of correlated information between two nodes in the network. In addition, purely visual sensory data will be used to show that decentralised data fusion and tracking of non-Gaussian states observed by multiple autonomous vehicles is feasible.
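As a hedged illustration of the conservative-fusion idea referred to above, the sketch below applies the standard covariance intersection rule to two single-Gaussian estimates; the example vectors and matrices are hypothetical, and the paper's extension to Gaussian mixtures is not reproduced.

```python
# Minimal sketch of the covariance intersection (CI) rule for fusing two
# correlated Gaussian estimates (mean, covariance) without knowing their
# cross-correlation. The paper extends this idea to Gaussian mixtures; the
# single-Gaussian form below is only illustrative.
import numpy as np
from scipy.optimize import minimize_scalar

def covariance_intersect(xa, Pa, xb, Pb):
    """Fuse estimates (xa, Pa) and (xb, Pb) conservatively via CI."""
    Pa_inv, Pb_inv = np.linalg.inv(Pa), np.linalg.inv(Pb)

    # Choose the weight omega that minimises the determinant ("volume")
    # of the fused covariance -- a common CI criterion.
    def fused_det(omega):
        return np.linalg.det(np.linalg.inv(omega * Pa_inv + (1 - omega) * Pb_inv))

    omega = minimize_scalar(fused_det, bounds=(0.0, 1.0), method="bounded").x

    P = np.linalg.inv(omega * Pa_inv + (1 - omega) * Pb_inv)
    x = P @ (omega * Pa_inv @ xa + (1 - omega) * Pb_inv @ xb)
    return x, P, omega

# Example: two 2-D feature estimates held by different sensor nodes.
xa, Pa = np.array([1.0, 2.0]), np.diag([0.5, 1.0])
xb, Pb = np.array([1.2, 1.8]), np.diag([1.0, 0.4])
x, P, omega = covariance_intersect(xa, Pa, xb, Pb)
print(omega, x, P)
```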

Relevance:

20.00%

Publisher:

Abstract:

In this paper, we apply the incremental EM method to Bayesian network classifiers to learn and interpret hyperspectral sensor data in robotic planetary missions. Hyperspectral image spectroscopy is an emerging technique for geological investigations from airborne or orbital sensors. Many spacecraft carry spectroscopic equipment because wavelengths outside the visible portion of the electromagnetic spectrum convey much more information about an object. The algorithm used is an extension of the standard Expectation Maximisation (EM) algorithm. The incremental method allows us to learn and interpret the data as they become available. Two Bayesian network classifiers were tested: the Naive Bayes and the Tree-Augmented Naive Bayes structures. Our preliminary experiments show that incremental learning with unlabelled data can improve the accuracy of the classifier.
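The sketch below is a hypothetical illustration of incremental, EM-style learning for a Gaussian Naive Bayes classifier that can absorb labelled and unlabelled batches as they arrive; the class and feature counts are placeholders, and it does not reproduce the paper's algorithm or its hyperspectral data.

```python
# Hypothetical sketch: incremental EM-style updates of a Gaussian Naive Bayes
# classifier, so that labelled and unlabelled batches can be absorbed as they
# arrive. A generic illustration of the idea only.
import numpy as np

class IncrementalGaussianNB:
    def __init__(self, n_classes, n_features, eps=1e-6):
        self.counts = np.full(n_classes, eps)              # soft class counts
        self.sums = np.zeros((n_classes, n_features))      # per-class feature sums
        self.sq_sums = np.zeros((n_classes, n_features))   # per-class squared sums
        self.eps = eps

    def _params(self):
        mean = self.sums / self.counts[:, None]
        var = self.sq_sums / self.counts[:, None] - mean**2 + self.eps
        prior = self.counts / self.counts.sum()
        return prior, mean, var

    def predict_proba(self, X):
        prior, mean, var = self._params()
        # log N(x | mean_c, var_c) summed over conditionally independent features
        log_lik = -0.5 * (np.log(2 * np.pi * var)[None, :, :]
                          + (X[:, None, :] - mean[None, :, :]) ** 2 / var[None, :, :]).sum(-1)
        log_post = log_lik + np.log(prior)[None, :]
        log_post -= log_post.max(axis=1, keepdims=True)
        post = np.exp(log_post)
        return post / post.sum(axis=1, keepdims=True)

    def update(self, X, y=None):
        """Absorb a batch. If y is None, use current posteriors (E-step)
        as soft labels; otherwise use hard labels."""
        if y is None:
            resp = self.predict_proba(X)                    # E-step
        else:
            resp = np.eye(self.counts.size)[y]              # one-hot labels
        # M-step: accumulate (soft) sufficient statistics
        self.counts += resp.sum(axis=0)
        self.sums += resp.T @ X
        self.sq_sums += resp.T @ X**2

# Usage: seed with a small labelled batch, then refine with unlabelled batches.
# model = IncrementalGaussianNB(n_classes=3, n_features=10)
# model.update(X_labelled, y_labelled)
# model.update(X_unlabelled)   # unlabelled data refines the estimates
```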

Relevance:

20.00%

Publisher:

Abstract:

The success rate of carrier phase ambiguity resolution (AR) is the probability that the ambiguities are fixed to their correct integer values. In existing work, an exact success rate formula for the integer bootstrapping estimator has been used as a sharp lower bound for the integer least squares (ILS) success rate. Rigorous computation of the success rate for the more general ILS solutions has been considered difficult because of the complexity of the ILS ambiguity pull-in region and the computational load of integrating the multivariate probability density function. The contributions of this work are twofold. First, the pull-in region, mathematically expressed by the vertices of a polyhedron, is represented by a multi-dimensional grid at which the cumulative probability can be integrated with the multivariate normal cumulative distribution function (mvncdf) available in Matlab. The bivariate case is studied, where the pull-in region is usually defined as a hexagon and the probability is easily obtained by evaluating mvncdf at all the grid points within the convex polygon. Second, the paper compares the computed integer rounding and integer bootstrapping success rates, and the lower and upper bounds of the ILS success rate, to the actual ILS AR success rates obtained from a 24 h GPS data set for a 21 km baseline. The results demonstrate that the ILS AR upper-bound probability given in the existing literature agrees well with the actual ILS success rate, while the success rate computed with the integer bootstrapping method is quite a sharp approximation to the actual ILS success rate. The results also show that epoch-to-epoch variations, or uncertainty, in the unit-weight variance estimates significantly affect the success rates computed by the different methods, and therefore deserve more attention if useful success probability predictions are to be obtained.
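The sketch below illustrates one quantity mentioned above: the standard closed-form success rate of the integer bootstrapping estimator, widely used as a sharp lower bound on the ILS success rate. The ambiguity covariance matrix Qa is hypothetical, and the paper's grid-based mvncdf integration over the pull-in region is not reproduced.

```python
# Sketch of the closed-form success rate of the integer bootstrapping
# estimator, P_s = prod_i [ 2*Phi(1/(2*sigma_i)) - 1 ], where the sigma_i are
# the conditional standard deviations of the sequentially conditioned
# ambiguities. Qa below is a hypothetical ambiguity covariance matrix.
import numpy as np
from scipy.stats import norm

def bootstrapping_success_rate(Qa):
    # The diagonal of the Cholesky factor of Qa contains the conditional
    # standard deviations; note the result depends on the ambiguity ordering,
    # so the formula is usually applied after decorrelation.
    cond_std = np.diag(np.linalg.cholesky(Qa))
    return np.prod(2.0 * norm.cdf(1.0 / (2.0 * cond_std)) - 1.0)

# Example with a hypothetical 2x2 ambiguity covariance matrix (cycles^2).
Qa = np.array([[0.090, 0.045],
               [0.045, 0.060]])
print(bootstrapping_success_rate(Qa))
```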

Relevance:

20.00%

Publisher:

Abstract:

We assess the performance of an exponential integrator for advancing stiff, semidiscrete formulations of the unsaturated Richards equation in time. The scheme is of second order and explicit in nature, but requires the action of the matrix function φ(A), where φ(z) = [exp(z) - 1]/z, on a suitably defined vector v at each time step. When the matrix A is large and sparse, φ(A)v can be approximated by Krylov subspace methods that require only matrix-vector products with A. We prove that, despite the use of this approximation, the scheme remains second order. Furthermore, we provide a practical variable-stepsize implementation of the integrator by deriving an estimate of the local error that requires only a single additional function evaluation. Numerical experiments performed on two-dimensional test problems demonstrate that this implementation outperforms second-order, variable-stepsize implementations of the backward differentiation formulae.
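A minimal sketch of the Krylov building block described above follows: it approximates φ(A)v with a few Arnoldi steps and evaluates φ on the small Hessenberg matrix via the standard augmented-matrix identity. The test matrix is an artificial stand-in, and the paper's full variable-stepsize integrator is not reproduced.

```python
# Sketch: Krylov-subspace approximation of phi(A) v with
# phi(z) = (exp(z) - 1)/z, using m Arnoldi steps and the augmented-matrix
# trick to evaluate phi on the small Hessenberg matrix.
import numpy as np
from scipy.linalg import expm
from scipy.sparse import random as sparse_random

def phi_action(A, v, m=20):
    n = len(v)
    beta = np.linalg.norm(v)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = v / beta

    # Arnoldi process: orthonormal basis of span{v, Av, ..., A^(m-1) v}
    # built with matrix-vector products only.
    k = m
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:       # happy breakdown: exact subspace found
            k = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]

    Hk = H[:k, :k]
    # phi(Hk) e1 from the top-right block of an augmented matrix exponential:
    # exp([[Hk, e1], [0, 0]]) = [[exp(Hk), phi(Hk) e1], [0, 1]]
    M = np.zeros((k + 1, k + 1))
    M[:k, :k] = Hk
    M[0, k] = 1.0
    phi_e1 = expm(M)[:k, k]
    return beta * V[:, :k] @ phi_e1

# Quick check on a small artificial matrix against the dense phi(A) v.
A = sparse_random(200, 200, density=0.01, random_state=0).toarray() - 4 * np.eye(200)
v = np.random.default_rng(0).standard_normal(200)
dense_phi = (expm(A) - np.eye(200)) @ np.linalg.inv(A) @ v
print(np.linalg.norm(phi_action(A, v, m=40) - dense_phi))
```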

Relevance:

20.00%

Publisher:

Abstract:

A number of instructors have recently adopted social network sites (SNSs) for learning. However, the learning design on SNSs often remains at a preliminary level, resembling a personal logbook, because it does not properly incorporate reflective learning elements such as individual reflection and collaboration. This article examines the reflective learning process and the public writing process as ways of improving the quality of reflective learning on SNSs. It proposes a reflective learning model for SNSs based on two key pedagogical concepts for social networking: individual expression and collaborative connection. The model is expected to help instructors design a reflective learning process on SNSs in an effective and flexible way.

Relevance:

20.00%

Publisher:

Abstract:

QUT Library and the High Performance Computing and Research Support (HPC) Team have been collaborating on developing and delivering a range of research support services, including those designed to assist researchers to manage their data. QUT's Management of Research Data policy has been available since 2010 and is complemented by the Data Management Guidelines and Checklist. QUT has partnered with the Australian National Data Service (ANDS) on a number of projects including Seeding the Commons, Metadata Hub (with Griffith University) and the Data Capture program. The HPC Team has also been developing the QUT Research Data Repository, based on Arcitecta's Mediaflux system, and has run several pilots with faculties. Library and HPC staff have been trained in the principles of research data management and are providing a range of research data management seminars and workshops for researchers and HDR students.

Relevance:

20.00%

Publisher:

Abstract:

The Queensland Department of Main Roads uses Weigh-in-Motion (WiM) devices to covertly monitor (at highway speed) the axle mass, axle configurations and speed of heavy vehicles on the road network. Such data is critical for the planning and design of the road network, yet some of the data appears excessively variable. The current work considers the nature, magnitude and possible causes of WiM data variability. Over fifty possible causes of variation in WiM data have been identified in the literature. Data exploration has highlighted five basic types of variability:
• cycling, both diurnal and annual;
• consistent but unreasonable data;
• data jumps;
• variations between data from opposite sides of the same road; and
• non-systematic variations.
This work is part of wider research into procedures to eliminate or mitigate the influence of WiM data variability.
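As a purely hypothetical illustration of how two of the listed variability types might be flagged, the sketch below summarises diurnal cycling and data jumps in a WiM mass series; the column names and thresholds are assumptions, not part of the study.

```python
# Hypothetical sketch: flagging two of the variability types listed above
# (diurnal cycling and data jumps) in a series of gross vehicle mass records
# from one WiM site. Column names and thresholds are illustrative only.
import pandas as pd

def summarise_wim(df):
    """df: columns 'timestamp' (datetime) and 'gross_mass_t' (tonnes)."""
    df = df.set_index("timestamp").sort_index()
    daily = df["gross_mass_t"].resample("D").mean()

    # Diurnal cycling: mean mass by hour of day.
    hourly_profile = df["gross_mass_t"].groupby(df.index.hour).mean()

    # Data jumps: shifts between consecutive 14-day rolling medians larger
    # than 10% of the typical level.
    rolling = daily.rolling(14, min_periods=7).median()
    jumps = rolling.diff().abs() > 0.1 * rolling.median()

    return hourly_profile, daily[jumps]
```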

Relevance:

20.00%

Publisher:

Abstract:

Gradual authentication is a principle proposed by Meadows as a way to tackle denial-of-service attacks on network protocols by gradually increasing the confidence in clients before the server commits resources. In this paper, we propose an efficient method that allows a defending server to authenticate its clients gradually with the help of some fast-to-verify measures. Our method integrates hash-based client puzzles with a special class of digital signatures that support fast verification. Our hash-based client puzzle provides finer granularity of difficulty and is proven secure in the puzzle difficulty model of Chen et al. (2009). We integrate this with the fast-verification digital signature scheme proposed by Bernstein (2000, 2008). These schemes can be up to 20 times faster for client authentication than RSA-based schemes. Our experimental results show that, in the Secure Sockets Layer (SSL) protocol, fast-verification digital signatures can provide a 7% increase in connections per second compared to RSA signatures, and that our integration of client puzzles with client authentication imposes no performance penalty on the server, since puzzle verification is part of signature verification.
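The sketch below is a generic hash-based client puzzle with adjustable difficulty, included only to illustrate the idea of cheap server-side verification; it is not the paper's construction and carries none of its security analysis.

```python
# Generic hash-based client puzzle sketch: the server issues a fresh nonce,
# the client must find x such that SHA-256(nonce || x) has at least
# `difficulty` leading zero bits; verification costs the server one hash.
import hashlib
import os
from itertools import count

def leading_zero_bits(digest: bytes) -> int:
    bits = int.from_bytes(digest, "big")
    return len(digest) * 8 - bits.bit_length()

def make_puzzle() -> bytes:
    return os.urandom(16)                           # server-chosen fresh nonce

def solve_puzzle(nonce: bytes, difficulty: int) -> int:
    for x in count():                               # client's brute-force work
        digest = hashlib.sha256(nonce + x.to_bytes(8, "big")).digest()
        if leading_zero_bits(digest) >= difficulty:
            return x

def verify_puzzle(nonce: bytes, x: int, difficulty: int) -> bool:
    digest = hashlib.sha256(nonce + x.to_bytes(8, "big")).digest()
    return leading_zero_bits(digest) >= difficulty  # one hash for the server

nonce = make_puzzle()
solution = solve_puzzle(nonce, difficulty=16)       # ~2^16 hashes on average
assert verify_puzzle(nonce, solution, difficulty=16)
```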

Relevance:

20.00%

Publisher:

Abstract:

Road safety is a major concern worldwide, and it will improve as road conditions and their effects on crashes are continually investigated. This paper proposes to use data mining to include a greater set of road variables for all available crashes with skid resistance values across the Queensland state main road network, in order to understand the relationships among crash, traffic and road variables. The paper presents a data-mining-based methodology applied to road asset management data to identify the road properties that contribute unduly to crashes. The models demonstrate high levels of accuracy in predicting crashes when various road properties are included. The paper presents the findings of these models to show the relationships among skid resistance, crashes, crash characteristics and other road characteristics such as seal type, seal age, road type, texture depth, lane count, pavement width, rutting, speed limit, traffic rates, intersections, traffic signage, road design and so on.

Relevance:

20.00%

Publisher:

Abstract:

Developing safe and sustainable road systems is a common goal in all countries. Applications to assist with road asset management and crash minimization are sought universally. This paper presents a data mining methodology using decision trees for modeling the crash proneness of road segments using available road and crash attributes. The models quantify the concept of crash proneness and demonstrate that road segments with only a few crashes have more in common with non-crash roads than roads with higher crash counts. This paper also examines ways of dealing with highly unbalanced data sets encountered in the study.
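As a hedged sketch of the general approach, the code below fits a decision tree classifier to hypothetical road-segment attributes and uses class weighting to counter the unbalanced crash / non-crash split; the feature names, data file and parameters are illustrative assumptions, not the paper's actual setup.

```python
# Hypothetical sketch: a decision tree over road-segment attributes with
# class weighting to handle the highly unbalanced crash / non-crash split.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import classification_report

segments = pd.read_csv("road_segments.csv")           # hypothetical extract
features = ["skid_resistance", "seal_age", "texture_depth",
            "lane_count", "speed_limit", "aadt"]
X, y = segments[features], segments["had_crash"]       # y: 0 = no crash, 1 = crash

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

# class_weight="balanced" re-weights the rare crash class so the tree does
# not simply predict "no crash" everywhere.
tree = DecisionTreeClassifier(max_depth=5, min_samples_leaf=50,
                              class_weight="balanced", random_state=0)
tree.fit(X_train, y_train)
print(classification_report(y_test, tree.predict(X_test)))
```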

Relevance:

20.00%

Publisher:

Abstract:

It is commonly accepted that wet roads have a higher risk of crashes than dry roads; however, providing evidence to support this assumption presents some difficulty. This paper presents a data mining case study in which predictive data mining is applied to model the relationship between skid resistance and crashes, searching for discernible differences in the probability of wet and dry road segments having crashes based on skid resistance. The models identify an increased probability of wet road segments having crashes for mid-range skid resistance values.

Relevance:

20.00%

Publisher:

Abstract:

The Comprehensive Australian Study of Entrepreneurial Emergence (CAUSEE) is a research programme that aims to uncover the factors that initiate, hinder and facilitate the process of emergence of new economic activities and organizations. It is widely acknowledged that entrepreneurship is one of the most important forces shaping changes in a country’s economic landscape (Baumol 1968; Birch 1987; Acs 1999). An understanding of the process by which new economic activity and business entities emerge is vital (Gartner 1993; Sarasvathy 2001). An important development in the study of ‘nascent entrepreneurs’ and ‘firms in gestation’ was the Panel Study of Entrepreneurial Dynamics (PSED) (Gartner et al. 2004) and its extensions in Argentina, Canada, Greece, the Netherlands, Norway and Sweden. Yet while PSED I is an important first step towards systematically studying new venture emergence, it represents just the beginning of a stream of nascent venture studies; most notably, PSED II is currently being undertaken in the US (2005–10) (Reynolds and Curtin 2008).

Relevance:

20.00%

Publisher:

Abstract:

Road crashes cost the world and Australian society a significant proportion of GDP, affecting productivity and causing significant suffering for communities and individuals. This paper presents a case study that generates data mining models which contribute to the understanding of road crashes by allowing examination of the role of skid resistance (F60) and other road attributes in road crashes. Predictive data mining algorithms, primarily regression trees, were used to produce road segment crash count models from the road and traffic attributes of crash scenarios. The rules derived from the regression trees provide evidence of the significance of road attributes in contributing to crashes, with a focus on the evaluation of skid resistance.
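The sketch below is a hypothetical illustration of a regression tree producing segment crash-count predictions and human-readable rules in which the role of skid resistance (F60) can be inspected; the column names and data source are assumptions, not the paper's data.

```python
# Hypothetical sketch: a regression tree predicting road-segment crash counts
# from road and traffic attributes, with the learned rules printed so the
# role of skid resistance (F60) can be inspected.
import pandas as pd
from sklearn.tree import DecisionTreeRegressor, export_text

segments = pd.read_csv("segment_crash_counts.csv")     # hypothetical extract
features = ["f60_skid_resistance", "aadt", "speed_limit",
            "seal_age", "rutting", "pavement_width"]
X, y = segments[features], segments["crash_count"]

reg = DecisionTreeRegressor(max_depth=4, min_samples_leaf=100, random_state=0)
reg.fit(X, y)

# export_text yields human-readable rules, e.g. branches where low
# f60_skid_resistance combines with high aadt to give higher predicted counts.
print(export_text(reg, feature_names=features))
```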

Relevance:

20.00%

Publisher:

Abstract:

Currently, well-established clinical therapeutic approaches for bone reconstruction are restricted to the transplantation of autografts and allografts, and the implantation of metal devices or ceramic-based implants to assist bone regeneration. Bone grafts possess osteoconductive and osteoinductive properties; however, they are limited in access and availability, and are associated with donor site morbidity, haemorrhage, risk of infection, insufficient transplant integration, graft devitalisation, and subsequent resorption resulting in decreased mechanical stability. As a result, recent research focuses on the development of alternative therapeutic concepts. Analysing the tissue engineering literature, it can be concluded that bone regeneration has become a focus area in the field. Hence, a considerable number of research groups and commercial entities work on the development of tissue-engineered constructs for bone regeneration. However, bench-to-bedside translations are still infrequent, as the process towards approval by regulatory bodies is protracted and costly, requiring both comprehensive in vitro and in vivo studies. In translational orthopaedic research, the utilisation of large preclinical animal models is a conditio sine qua non. Consequently, to allow comparison between different studies and their outcomes, it is essential that animal models, fixation devices, surgical procedures and methods of taking measurements are well standardised to produce reliable data pools as a base for further research directions. The following chapter reviews animal models of the weight-bearing lower extremity utilised in the field, which include representations of fracture healing, segmental bone defects, and fracture non-unions.

Relevance:

20.00%

Publisher:

Abstract:

Competitive markets are increasingly driving new initiatives for shorter cycle times, resulting in increased overlapping of project phases. This, in turn, necessitates improving the interfaces between the different phases to be overlapped (integrated), thus allowing the transfer of processes, information and knowledge from one individual or team to another. This transfer between phases, within and between projects, is one of the basic challenges to the philosophy of project management. To make the process transfer more transparent, with minimal loss of momentum and project knowledge, this paper draws upon Total Quality Management (TQM) and Business Process Re-engineering (BPR) philosophies to develop a Best Practice Model for managing project phase integration. The paper presents the rationale behind the model's development and outlines its two key parts: (1) the Strategic Framework and (2) the Implementation Plan. Key components of both the Strategic Framework and the Implementation Plan are presented and discussed.