945 results for Precision Xtra®


Relevance: 10.00%

Abstract:

Black et al. (2004) identified a systematic difference between LA–ICP–MS and TIMS measurements of 206Pb/238U in zircons, which they correlated with the incompatible trace element content of the zircon. We show that the offset between the LA–ICP–MS and TIMS measured 206Pb/238U correlates more strongly with the total radiogenic Pb than with any incompatible trace element. This suggests that the cause of the 206Pb/238U offset is related to differences in the radiation damage (alpha dose) between the reference and unknown zircons. We test this hypothesis in two ways. First, we show that there is a strong correlation between the difference in the LA–ICP–MS and TIMS measured 206Pb/238U and the difference in the alpha dose received by unknown and reference zircons. The LA–ICP–MS ages for the zircons we have dated range from 5.1% younger than their TIMS ages to 2.1% older, depending on whether the unknown or reference received the higher alpha dose. Second, we show that by annealing both reference and unknown zircons at 850 °C for 48 h in air we can eliminate the alpha-dose-induced differences in measured 206Pb/238U. This was achieved by analyzing six reference zircons a minimum of 16 times in two round-robin experiments: the first consisting of unannealed zircons and the second of annealed grains. The maximum offset between the LA–ICP–MS and TIMS measured 206Pb/238U for the unannealed zircons was 2.3%, which reduced to 0.5% for the annealed grains, as predicted by within-session precision based on counting statistics. Annealing unknown zircons and references to the same state prior to analysis holds the promise of reducing the 3% external error for the measurement of 206Pb/238U of zircon by LA–ICP–MS, indicated by Klötzli et al. (2009), to better than 1%, but more analyses of annealed zircons by other laboratories are required to evaluate the true potential of the annealing method.
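
As an illustration of the alpha-dose quantity discussed above, the sketch below computes the accumulated alpha dose of a zircon from its present-day U and Th concentrations and age, using the standard decay relations (8, 7 and 6 alpha particles per 238U, 235U and 232Th decay chain). The grain compositions and ages are hypothetical, and this is a generic textbook calculation, not the paper's own.

```python
import math

# Decay constants (1/yr), isotope masses (g/mol) and natural U abundances.
LAMBDA_238, LAMBDA_235, LAMBDA_232 = 1.55125e-10, 9.8485e-10, 4.9475e-11
M_238, M_235, M_232 = 238.0508, 235.0439, 232.0381
AVOGADRO = 6.02214e23
U238_FRACTION, U235_FRACTION = 0.99275, 0.00720


def alpha_dose(u_ppm: float, th_ppm: float, age_ma: float) -> float:
    """Accumulated alpha dose (alpha events per gram of zircon) for a grain of
    the given age with the given present-day U and Th concentrations (ppm by
    weight).  Each 238U -> 206Pb chain emits 8 alphas, 235U -> 207Pb emits 7,
    and 232Th -> 208Pb emits 6.
    """
    t = age_ma * 1e6                      # age in years
    u_g, th_g = u_ppm * 1e-6, th_ppm * 1e-6   # grams of U / Th per gram of zircon
    n238 = u_g * U238_FRACTION * AVOGADRO / M_238
    n235 = u_g * U235_FRACTION * AVOGADRO / M_235
    n232 = th_g * AVOGADRO / M_232
    return (8 * n238 * (math.exp(LAMBDA_238 * t) - 1)
            + 7 * n235 * (math.exp(LAMBDA_235 * t) - 1)
            + 6 * n232 * (math.exp(LAMBDA_232 * t) - 1))


# Hypothetical reference and unknown grains: the difference in dose is the
# quantity the abstract correlates with the 206Pb/238U offset.
reference = alpha_dose(u_ppm=300, th_ppm=150, age_ma=1065)
unknown = alpha_dose(u_ppm=900, th_ppm=400, age_ma=400)
print(f"dose difference: {unknown - reference:.3e} alphas/g")
```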

Relevance: 10.00%

Abstract:

Timely and comprehensive scene segmentation is often a critical step for many high-level mobile robotic tasks. This paper examines a projected-area-based neighbourhood lookup approach, motivated by faster unsupervised segmentation of dense 3D point clouds. The proposed algorithm exploits the projection geometry of a depth camera to find nearest neighbours in a time that is independent of the input data size. Points near depth discontinuities are also detected to reinforce object boundaries in the clustering process. The search method presented is evaluated using both indoor and outdoor dense depth images and demonstrates significant improvements in speed and precision compared to the commonly used Fast Library for Approximate Nearest Neighbours (FLANN) [Muja and Lowe, 2009].
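
A minimal sketch of the projection-geometry idea, assuming an organized depth image with pinhole intrinsics: the 3D neighbourhood of a point is found by scanning only the pixel window that a sphere of the search radius projects to, so the lookup cost does not grow with the number of points. The intrinsics, window logic and radius are illustrative, not the paper's implementation.

```python
import numpy as np

def projected_neighbours(depth, fx, fy, cx, cy, u, v, radius):
    """Find 3D neighbours of the point at pixel (u, v) within `radius` metres
    by scanning only the image window that a sphere of that radius projects
    to, assuming an organized depth image (illustrative sketch only).
    """
    z0 = depth[v, u]
    if z0 <= 0:
        return []
    # Back-project the query pixel to a 3D point in the camera frame.
    p0 = np.array([(u - cx) * z0 / fx, (v - cy) * z0 / fy, z0])
    # A sphere of `radius` at depth z0 projects to roughly this many pixels.
    ru = int(np.ceil(fx * radius / z0))
    rv = int(np.ceil(fy * radius / z0))
    h, w = depth.shape
    neighbours = []
    for vv in range(max(0, v - rv), min(h, v + rv + 1)):
        for uu in range(max(0, u - ru), min(w, u + ru + 1)):
            z = depth[vv, uu]
            if z <= 0:
                continue
            p = np.array([(uu - cx) * z / fx, (vv - cy) * z / fy, z])
            if np.linalg.norm(p - p0) <= radius:
                neighbours.append((uu, vv))
    return neighbours

# Toy example: a flat wall at 2 m with made-up intrinsics.
depth = np.full((120, 160), 2.0)
print(len(projected_neighbours(depth, fx=100, fy=100, cx=80, cy=60,
                               u=80, v=60, radius=0.1)))
```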

Relevance: 10.00%

Abstract:

This paper presents a pose estimation approach that is resilient to typical sensor failures and suitable for low-cost agricultural robots. Guiding large agricultural machinery with highly accurate GPS/INS systems has become standard practice; however, these systems are inappropriate for smaller, lower-cost robots. Our positioning system estimates pose by fusing data from a low-cost global positioning sensor, low-cost inertial sensors and a new technique for vision-based row tracking. The results first demonstrate that our positioning system accurately guides a robot performing a coverage task across a 6-hectare field. The results then demonstrate that our vision-based row tracking algorithm improves the performance of the positioning system despite long periods of precision-correction signal dropout and intermittent dropouts of the entire GPS sensor.
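
A minimal sketch of the fusion idea, assuming a planar constant-velocity model with crop rows aligned with the x axis: a Kalman filter ingests low-rate GPS position fixes and a vision-derived cross-track offset from the row, and keeps predicting through GPS dropouts. The state layout, noise values and row geometry are placeholders, not the paper's filter.

```python
import numpy as np

class RowTrackingFuser:
    """Minimal 2D constant-velocity Kalman filter fusing low-cost GPS fixes
    with a vision-based cross-track offset from the crop row (sketch of the
    sensor-fusion idea only; states and noise models are assumptions).
    """

    def __init__(self, dt=0.1):
        self.x = np.zeros(4)                       # state: [x, y, vx, vy]
        self.P = np.eye(4)
        self.F = np.eye(4)
        self.F[0, 2] = self.F[1, 3] = dt           # constant-velocity motion model
        self.Q = np.diag([0.01, 0.01, 0.1, 0.1]) * dt

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q

    def _update(self, z, H, R):
        y = z - H @ self.x
        S = H @ self.P @ H.T + R
        K = self.P @ H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ H) @ self.P

    def update_gps(self, pos_xy, sigma=2.0):
        H = np.zeros((2, 4))
        H[0, 0] = H[1, 1] = 1.0                    # GPS observes position only
        self._update(np.asarray(pos_xy), H, np.eye(2) * sigma**2)

    def update_row_offset(self, cross_track, sigma=0.05):
        # Vision gives the lateral offset from the row; with rows along x,
        # this observes only the y state (assumed geometry).
        H = np.array([[0.0, 1.0, 0.0, 0.0]])
        self._update(np.array([cross_track]), H, np.array([[sigma**2]]))

# Toy usage: one prediction step, a GPS fix, then a row observation.
f = RowTrackingFuser()
f.predict()
f.update_gps([1.0, 0.2])
f.update_row_offset(0.15)
print(f.x)
```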

Relevance: 10.00%

Abstract:

This paper presents a new multi-scale place recognition system inspired by the recent discovery of overlapping, multi-scale spatial maps stored in the rodent brain. By training a set of Support Vector Machines to recognize places at varying levels of spatial specificity, we are able to validate spatially specific place recognition hypotheses against broader place recognition hypotheses without sacrificing localization accuracy. We evaluate the system in a range of experiments using cameras mounted on a motorbike and on a human in two different environments. At 100% precision, the multi-scale approach yields a 56% average improvement in recall rate across both datasets. We analyse the results and then discuss future work that may lead to improvements in both robotic mapping and our understanding of sensory processing and encoding in the mammalian brain.
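
A sketch of the multi-scale validation idea using scikit-learn, assuming one classifier per spatial scale, where a coarser scale merges consecutive places into a single label; a fine-scale hypothesis is only accepted when it agrees with every broader-scale prediction. The features, scale widths and data below are hypothetical placeholders, not the paper's setup.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_places, dim = 40, 64
# Hypothetical image descriptors: 5 noisy views of each place prototype.
prototypes = rng.normal(size=(n_places, dim))
X_train = np.repeat(prototypes, 5, axis=0) + 0.1 * rng.normal(size=(n_places * 5, dim))
fine_labels = np.repeat(np.arange(n_places), 5)

scales = [1, 5, 10]                                   # places merged per label
classifiers = {}
for width in scales:
    coarse = fine_labels // width                     # coarser spatial labels
    classifiers[width] = LinearSVC(max_iter=5000).fit(X_train, coarse)

def recognise(x):
    """Return the fine-scale place id, or None if it disagrees with any
    broader-scale prediction (hypothesis validation across scales)."""
    fine = classifiers[1].predict([x])[0]
    for width in scales[1:]:
        if classifiers[width].predict([x])[0] != fine // width:
            return None                               # reject: scales disagree
    return fine

query = X_train[12] + 0.1 * rng.normal(size=dim)
print(recognise(query))
```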

Relevance: 10.00%

Abstract:

In this paper we present a novel place recognition algorithm inspired by recent discoveries in human visual neuroscience. The algorithm combines intolerant but fast low-resolution whole-image matching with a highly tolerant sub-image patch-matching process. The approach does not require prior training and works on single images (although we use a cohort normalization score to exploit temporal frame information), alleviating the need for either a velocity signal or an image sequence and differentiating it from current state-of-the-art methods. We demonstrate the algorithm on the challenging Alderley sunny day – rainy night dataset, which has previously been solved only by integrating over 320-frame-long image sequences. The system achieves 21.24% recall at 100% precision, matching drastically different day and night-time images of places while successfully rejecting match hypotheses between highly aliased images of different places. The results provide a new benchmark for single-image, condition-invariant place recognition.
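
A sketch of the fast whole-image stage and the cohort-normalised score, assuming tiny normalised thumbnails compared by sum of absolute differences; the tolerant patch-matching stage is omitted and the image data are synthetic placeholders, so this illustrates the idea rather than the paper's pipeline.

```python
import numpy as np

def lowres(img, size=(8, 8)):
    """Crude box-filter downsample to a tiny, intensity-normalised thumbnail
    (the fast but viewpoint-intolerant whole-image stage)."""
    h, w = img.shape
    img = img[:h - h % size[0], :w - w % size[1]]
    thumb = img.reshape(size[0], h // size[0], size[1], w // size[1]).mean(axis=(1, 3))
    return (thumb - thumb.mean()) / (thumb.std() + 1e-9)

def match(query, database):
    """Whole-image SAD matching with a cohort-normalised score: the best match
    is reported relative to the distribution over all other candidates, giving
    a single-image confidence without needing an image sequence.
    """
    q = lowres(query)
    sads = np.array([np.abs(q - lowres(d)).sum() for d in database])
    best = int(np.argmin(sads))
    cohort = np.delete(sads, best)
    score = (cohort.mean() - sads[best]) / (cohort.std() + 1e-9)  # z-like score
    return best, score

# Toy usage with random images; image 7 is re-observed with mild noise.
rng = np.random.default_rng(1)
db = [rng.random((64, 64)) for _ in range(20)]
query = db[7] + 0.05 * rng.random((64, 64))
print(match(query, db))
```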

Relevance: 10.00%

Abstract:

Textual document sets have become an important and rapidly growing information source on the web, and text classification is one of the crucial technologies for organising and managing them. Text classification has become increasingly important and has attracted wide attention from researchers in different fields. This paper first reviews feature selection methods, implementation algorithms and applications of text classification. However, the knowledge extracted by current data-mining techniques for text classification contains much noise, which introduces uncertainty into both knowledge extraction and knowledge usage; more innovative techniques and methods are therefore needed to improve classification performance. Further improving the knowledge extraction process and the effective utilization of the extracted knowledge remains a critical and challenging step. A Rough Set decision-making approach is proposed to more precisely classify textual documents that are difficult to separate with classic text classification methods. The purpose of this paper is to give an overview of existing text classification technologies; to demonstrate Rough Set concepts and a decision-making approach based on Rough Set theory for building a more reliable and effective text classification framework with higher precision; to set up an innovative evaluation metric named CEI, which is effective for assessing the performance of similar research; and to propose a promising research direction for addressing the challenging problems in text classification, text mining and other related fields.
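
Since the Rough Set machinery is central to the proposed approach, the sketch below shows the basic lower/upper approximation computation: documents falling in the boundary region are those the indiscernibility relation cannot cleanly assign, which is where a Rough Set decision step would apply. The documents, features and target class are toy placeholders, not material from the paper.

```python
from collections import defaultdict

def approximations(objects, features, target):
    """Lower and upper approximations of the set `target` with respect to the
    indiscernibility relation induced by `features`.  Objects in the boundary
    region (upper minus lower) are the 'hard to separate' cases.
    """
    # Group objects that are indiscernible on the chosen features.
    blocks = defaultdict(set)
    for name, attrs in objects.items():
        blocks[tuple(attrs[f] for f in features)].add(name)

    lower, upper = set(), set()
    for block in blocks.values():
        if block <= target:          # block entirely inside the target class
            lower |= block
        if block & target:           # block overlaps the target class
            upper |= block
    return lower, upper

# Hypothetical documents described by two boolean term features.
docs = {
    "d1": {"finance": 1, "sport": 0},
    "d2": {"finance": 1, "sport": 0},
    "d3": {"finance": 0, "sport": 1},
    "d4": {"finance": 1, "sport": 1},
}
relevant = {"d1", "d4"}
low, up = approximations(docs, ["finance", "sport"], relevant)
print("lower:", low, "boundary:", up - low)
```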

Relevance: 10.00%

Abstract:

This paper describes a new method of indexing and searching large binary signature collections to efficiently find similar signatures, addressing the scalability problem in signature search. Signatures offer efficient computation with an acceptable measure of similarity in numerous applications. However, performing a complete search with a given search argument (a signature) requires a Hamming distance calculation against every signature in the collection. This quickly becomes excessive when dealing with large collections, presenting issues of scalability that limit their applicability. Our method efficiently finds similar signatures in very large collections, trading memory use and precision for greatly improved search speed. Experimental results demonstrate that our approach is capable of finding a set of nearest signatures to a given search argument with a high degree of speed and fidelity.
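
A sketch of signature search by Hamming distance with a simple banded index, in the same spirit of trading memory and a little precision for speed: each signature is bucketed under the exact value of every fixed-width band, and only signatures sharing at least one band with the query are ranked. The band width and bucketing scheme are illustrative assumptions, not the paper's data structure.

```python
import random
from collections import defaultdict

SIG_BITS, BANDS = 64, 4
BAND_BITS = SIG_BITS // BANDS

def hamming(a: int, b: int) -> int:
    """Hamming distance between two signatures stored as integers."""
    return bin(a ^ b).count("1")

class BandedIndex:
    """Bucket every signature under each of its band values; candidate
    signatures for a query are those sharing at least one band.  An LSH-style
    sketch of the indexing idea only."""

    def __init__(self):
        self.buckets = defaultdict(set)
        self.signatures = {}

    def _bands(self, sig):
        mask = (1 << BAND_BITS) - 1
        return [(i, (sig >> (i * BAND_BITS)) & mask) for i in range(BANDS)]

    def add(self, sig_id, sig):
        self.signatures[sig_id] = sig
        for band in self._bands(sig):
            self.buckets[band].add(sig_id)

    def search(self, query, k=5):
        candidates = set()
        for band in self._bands(query):
            candidates |= self.buckets[band]
        ranked = sorted(candidates, key=lambda i: hamming(self.signatures[i], query))
        return ranked[:k]

# Toy usage with random 64-bit signatures.
random.seed(0)
index = BandedIndex()
sigs = [random.getrandbits(SIG_BITS) for _ in range(1000)]
for i, s in enumerate(sigs):
    index.add(i, s)
near = sigs[42] ^ 0b101                 # a signature two bits away from #42
print(index.search(near, k=3))
```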

Relevance: 10.00%

Abstract:

The detection and correction of defects remains among the most time-consuming and expensive aspects of software development. Extensive automated testing and code inspections may mitigate their effect, but some code fragments are necessarily more likely to be faulty than others, and automated identification of fault-prone modules helps to focus testing and inspections, thus limiting wasted effort and potentially improving detection rates. However, software metrics data is often extremely noisy, with enormous imbalances in the size of the positive and negative classes. In this work, we present a new approach to predictive modelling of fault proneness in software modules, introducing a new feature representation to overcome some of these issues. This rank sum representation offers improved, or at worst comparable, performance relative to earlier approaches on standard data sets, and readily allows the user to choose an appropriate trade-off between precision and recall to optimise inspection effort for different testing environments. The method is evaluated using the NASA Metrics Data Program (MDP) data sets, and performance is compared with existing studies based on the Support Vector Machine (SVM) and Naïve Bayes (NB) classifiers, and with our own comprehensive evaluation of these methods.
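
A sketch of a rank-sum style feature representation, assuming each metric is replaced by its rank across modules and the ranks are summed per module, with a threshold on the resulting score setting the precision/recall trade-off. The exact construction in the paper may differ, and the metric values below are made up.

```python
import numpy as np

def rank_sum_scores(metrics: np.ndarray) -> np.ndarray:
    """Replace each software metric with its rank across all modules and sum
    the ranks per module; modules with consistently high metric values get
    high scores, and sweeping a threshold trades precision for recall.
    """
    # argsort of argsort gives 0-based ranks per column (ties broken by order).
    ranks = metrics.argsort(axis=0).argsort(axis=0)
    return ranks.sum(axis=1)

# Hypothetical module-by-metric matrix (rows: modules; columns such as LOC,
# cyclomatic complexity, fan-out).
metrics = np.array([
    [120,  4, 3],
    [900, 25, 9],
    [300, 10, 4],
    [ 50,  2, 1],
])
scores = rank_sum_scores(metrics)
threshold = np.median(scores)        # move this to trade precision vs recall
print(scores, scores > threshold)
```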

Relevance: 10.00%

Abstract:

Reliable robotic perception and planning are critical to performing autonomous actions in uncertain, unstructured environments. In field robotic systems, automation is achieved by interpreting exteroceptive sensor information to infer something about the world. This is then mapped to provide a consistent spatial context, so that actions can be planned around the predicted future interaction of the robot and the world. The whole system is as reliable as the weakest link in this chain. In this paper, the term mapping is used broadly to describe the transformation of range-based exteroceptive sensor data (such as LIDAR or stereo vision) to a fixed navigation frame, so that it can be used to form an internal representation of the environment. The coordinate transformation from the sensor frame to the navigation frame is analyzed to produce a spatial error model that captures the dominant geometric and temporal sources of mapping error. This allows the mapping accuracy to be calculated at run time. A generic extrinsic calibration method for exteroceptive range-based sensors is then presented to determine the sensor location and orientation. This allows systematic errors in individual sensors to be minimized, and when multiple sensors are used, it minimizes the systematic contradiction between them to enable reliable multisensor data fusion. The mathematical derivations at the core of this model are not particularly novel or complicated, but the rigorous analysis and application to field robotics seems to be largely absent from the literature to date. The techniques in this paper are simple to implement, and they offer a significant improvement to the accuracy, precision, and integrity of mapped information. Consequently, they should be employed whenever maps are formed from range-based exteroceptive sensor data. © 2009 Wiley Periodicals, Inc.
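
A minimal sketch of the sensor-to-navigation-frame mapping described above, assuming a roll/pitch/yaw parameterisation: a range measurement is first transformed through the extrinsic calibration (sensor to body) and then through the vehicle pose (body to navigation). This is the chain through which calibration, pose and timing errors propagate; the mounting offsets and pose below are hypothetical.

```python
import numpy as np

def rot_zyx(roll, pitch, yaw):
    """Rotation matrix from roll/pitch/yaw (Z-Y-X convention, radians)."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return Rz @ Ry @ Rx

def sensor_to_nav(p_sensor, R_bs, t_bs, rpy_nb, t_nb):
    """Map a range measurement from the sensor frame to the navigation frame:
    extrinsic calibration (sensor -> body), then vehicle pose (body -> nav).
    Errors in R_bs/t_bs, in rpy_nb/t_nb and in their timestamps all propagate
    through this chain.
    """
    p_body = R_bs @ p_sensor + t_bs
    return rot_zyx(*rpy_nb) @ p_body + t_nb

# Hypothetical LIDAR point, mounting offset and vehicle pose.
p = np.array([10.0, 0.0, 0.0])                   # 10 m straight ahead of the sensor
R_bs = rot_zyx(0.0, np.deg2rad(2.0), 0.0)        # sensor pitched 2 degrees on the body
t_bs = np.array([1.2, 0.0, 1.5])                 # mounted forward and above the body origin
print(sensor_to_nav(p, R_bs, t_bs, (0.0, 0.0, np.deg2rad(30.0)),
                    np.array([100.0, 50.0, 0.0])))
```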

Relevance: 10.00%

Abstract:

Many large-scale GNSS CORS networks have been deployed around the world to support various commercial and scientific applications. To use these networks for real-time kinematic positioning services, one of the major challenges is ambiguity resolution (AR) over long inter-station baselines in the presence of considerable atmospheric biases. Usually, the widelane ambiguities are fixed first, followed by determination of the narrowlane ambiguity integers based on the ionosphere-free model, in which the widelane integers are introduced as known quantities. This paper seeks to improve AR performance over long baselines through efficient procedures for improved float solutions and ambiguity fixing. The contribution is threefold: (1) instead of using ionosphere-free measurements, absolute and/or relative ionospheric constraints are introduced in an ionosphere-constrained model to strengthen the model, resulting in better float solutions; (2) realistic widelane ambiguity precision is estimated by capturing the multipath effects due to the observation complexity, improving the reliability of widelane AR; (3) for narrowlane AR, partial AR is applied to a subset of ambiguities selected in order of increasing elevation. For fixing a scalar ambiguity, a rounding method with controllable error probability is proposed. The ionosphere-constrained model can be solved efficiently with a sequential Kalman filter; it can either be reduced to special cases simply by adjusting the variances of the ionospheric constraints, or extended with more parameters and constraints. The presented methodology is tested over seven baselines of around 100 km from the USA CORS network. The results show that the new widelane AR scheme achieves a 99.4% successful fixing rate with a 0.6% failure rate, while the new rounding method for narrowlane AR achieves a fixing rate of 89% with a failure rate of 0.8%. In summary, AR reliability can be efficiently improved with a rigorously controllable probability of incorrectly fixed ambiguities.
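
For the rounding step, the sketch below evaluates the standard probability that rounding a scalar float ambiguity to the nearest integer is correct, given its standard deviation and bias; requiring this probability to exceed a chosen threshold is one way to keep the rounding error probability under control. This illustrates the idea, not the paper's exact test, and the numbers are hypothetical.

```python
from math import erf, sqrt

def phi(x: float) -> float:
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def rounding_success_prob(sigma: float, bias: float = 0.0) -> float:
    """Probability that rounding a scalar float ambiguity to the nearest
    integer yields the correct integer, for a normally distributed estimate
    with standard deviation `sigma` and bias `bias` (both in cycles).
    Fixing only when this exceeds a chosen threshold bounds the probability
    of an incorrect fix.
    """
    return phi((1 - 2 * bias) / (2 * sigma)) + phi((1 + 2 * bias) / (2 * sigma)) - 1

# A widelane ambiguity with 0.15-cycle precision rounds with high confidence,
# while 0.4-cycle precision does not (hypothetical numbers).
for s in (0.15, 0.4):
    print(f"sigma = {s:.2f} cycles -> P(correct) = {rounding_success_prob(s):.3f}")
```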

Relevance: 10.00%

Abstract:

The reliability of carrier phase ambiguity resolution (AR) for an integer least-squares (ILS) problem depends on the ambiguity success rate (ASR), which in practice can be well approximated by the success probability of integer bootstrapping solutions. With the current GPS constellation, a sufficiently high ASR for the geometry-based model is only achievable for a certain percentage of the time, so high AR reliability cannot be assured by a single constellation. With a dual-constellation system (DCS), for example GPS and BeiDou, which provides more satellites in view, users can expect significant performance benefits such as improved AR reliability and high-precision positioning solutions. Simply using all the satellites in view for AR and positioning is a straightforward solution, but does not necessarily lead to the hoped-for reliability. This paper presents an alternative approach that selects a subset of the visible satellites to achieve higher reliability of the AR solutions in a multi-GNSS environment, instead of using all the satellites. Traditionally, satellite selection algorithms are mostly based on the position dilution of precision (PDOP) in order to meet accuracy requirements. In this contribution, reliability criteria are introduced for GNSS satellite selection, and a novel satellite selection algorithm for reliable ambiguity resolution (SARA) is developed. The SARA algorithm allows receivers to select a subset of satellites that achieves a high ASR, such as above 0.99. Numerical results from simulated dual-constellation cases show that with the SARA procedure, the percentage of ASR values in excess of 0.99 and the percentage of ratio-test values passing the threshold of 3 are both higher than when all satellites in view are used directly; in the dual-constellation case, the percentages of ASRs (>0.99) and ratio-test values (>3) reach 98.0% and 98.5% respectively, compared with 18.1% and 25.0% without the satellite selection process. It is also worth noting that SARA is simple to implement and has a low computation time, so it can be applied in most real-time data processing applications.
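
A sketch of the ambiguity success rate computation that such a selection scheme would threshold: the integer-bootstrapping success rate obtained from the conditional standard deviations of the float ambiguities, taken here from a Cholesky factorisation of their variance-covariance matrix. The example matrix is made up, and the subset search of SARA itself is not reproduced.

```python
import numpy as np
from math import erf, sqrt

def phi(x: float) -> float:
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bootstrap_success_rate(Q_ahat: np.ndarray) -> float:
    """Integer-bootstrapping success rate for a float ambiguity solution with
    variance-covariance matrix Q_ahat: the product of 2*Phi(1/(2*sigma)) - 1
    over the conditional standard deviations of the sequentially fixed
    ambiguities.  A selection scheme would evaluate this for candidate
    satellite subsets and keep one with ASR above a threshold such as 0.99.
    """
    G = np.linalg.cholesky(Q_ahat)      # Q = G G^T, G lower triangular
    cond_vars = np.diag(G) ** 2         # conditional variances sigma^2_{i|1..i-1}
    p = 1.0
    for var in cond_vars:
        p *= 2.0 * phi(1.0 / (2.0 * sqrt(var))) - 1.0
    return p

# Hypothetical 3-ambiguity variance-covariance matrix (cycles^2).
Q = np.array([[0.090, 0.020, 0.010],
              [0.020, 0.070, 0.015],
              [0.010, 0.015, 0.050]])
print(f"ASR ~ {bootstrap_success_rate(Q):.3f}")
```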

Relevance: 10.00%

Abstract:

There has been significant research in the field of database watermarking recently. However, insufficient attention has been given to the requirement of providing reversibility (the ability to revert from the watermarked relation back to the original relation) and blindness (not needing the original relation for detection) at the same time. Models lacking these properties have several disadvantages compared with reversible and blind watermarking (which requires only the watermarked relation and a secret key, from which the watermark is detected and the original relation is restored), including the inability to identify the rightful owner in case of successful secondary watermarking, the inability to revert the relation to the original data set (required in high-precision industries), and the requirement to store the unmarked relation in secure secondary storage. To overcome these problems, we propose a watermarking scheme that is both reversible and blind. We utilize difference expansion on integers to achieve reversibility. The major advantages provided by our scheme are reversibility to a high-quality original data set, rightful owner identification, resistance against secondary watermarking attacks, and no need to store the original database in secure secondary storage. We have implemented our scheme, and results show the success rate is limited to 11% even when 48% of the tuples are modified.
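
The core primitive of such a scheme, difference expansion on an integer pair, can be sketched as follows: the pair difference is doubled and the watermark bit stored in its new least significant bit, and the transform is exactly invertible, so the original values are restored on detection from the watermarked values alone. Tuple selection, keying and overflow handling from the scheme above are omitted, and the attribute values are hypothetical.

```python
def embed_bit(x: int, y: int, bit: int):
    """Embed one watermark bit into an integer pair using difference
    expansion: the difference is doubled and the bit placed in its new
    least significant bit.  Sketch of the core primitive only.
    """
    l, h = (x + y) // 2, x - y        # integer average and difference
    h2 = 2 * h + bit                  # expanded difference carrying the bit
    return l + (h2 + 1) // 2, l - h2 // 2

def extract_and_restore(x2: int, y2: int):
    """Recover the embedded bit and restore the original pair: blind (only
    the watermarked values are needed) and fully reversible."""
    l, h2 = (x2 + y2) // 2, x2 - y2
    bit, h = h2 % 2, h2 // 2
    return bit, (l + (h + 1) // 2, l - h // 2)

# Example on a numeric attribute pair from two tuples (hypothetical values).
marked = embed_bit(105, 98, 1)
bit, original = extract_and_restore(*marked)
print(marked, bit, original)          # -> (109, 94) 1 (105, 98)
```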

Relevance: 10.00%

Abstract:

Objective: To evaluate the effectiveness and robustness of Anonym, a tool for de-identifying free-text health records based on conditional random field classifiers informed by linguistic and lexical features, as well as features extracted by pattern matching techniques. De-identification of personal health information in electronic health records is essential for the sharing and secondary usage of clinical data. De-identification tools that adapt to different sources of clinical data are attractive, as they require minimal intervention to guarantee high effectiveness. Methods and materials: The effectiveness and robustness of Anonym are evaluated across multiple datasets, including the widely adopted Integrating Biology and the Bedside (i2b2) dataset, used for evaluation in a de-identification challenge. The datasets vary in the type of health records, the source of the data, and their quality, with one of the datasets containing optical character recognition errors. Results: Anonym identifies and removes up to 96.6% of personal health identifiers (recall) with a precision of up to 98.2% on the i2b2 dataset, outperforming the best system proposed in the i2b2 challenge. The effectiveness of Anonym across datasets is found to depend on the amount of information available for training. Conclusion: The findings show that Anonym compares to the best approach from the 2006 i2b2 shared task. It is easy to retrain Anonym with new datasets; if retrained, the system is robust to variations of training size, data type and quality in the presence of sufficient training data.
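
A sketch of the kind of token features, lexical plus pattern-matching, that a conditional random field de-identifier consumes: one feature dictionary per token, grouped per record, would be passed to a CRF toolkit for training and prediction. The feature set and regular expressions here are hypothetical illustrations, not Anonym's actual features.

```python
import re

# Illustrative pattern-matching features (hypothetical regular expressions).
DATE_RE = re.compile(r"\d{1,2}[/-]\d{1,2}[/-]\d{2,4}")
PHONE_RE = re.compile(r"\(?\d{3}\)?[ -]?\d{3}[ -]?\d{4}")

def token_features(tokens, i):
    """Feature dictionary for token i, combining lexical/orthographic cues
    with pattern-matching features (hypothetical feature set)."""
    tok = tokens[i]
    return {
        "lower": tok.lower(),
        "is_title": tok.istitle(),
        "is_upper": tok.isupper(),
        "is_digit": tok.isdigit(),
        "suffix3": tok[-3:],
        "looks_like_date": bool(DATE_RE.fullmatch(tok)),
        "looks_like_phone": bool(PHONE_RE.fullmatch(tok)),
        "prev": tokens[i - 1].lower() if i > 0 else "<BOS>",
        "next": tokens[i + 1].lower() if i < len(tokens) - 1 else "<EOS>",
    }

# Toy record (synthetic, not real patient data).
record = "Patient John Smith seen on 12/03/2010 , contact 555-867-5309 .".split()
print(token_features(record, 5))      # features for the date token
```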

Relevance: 10.00%

Abstract:

Database watermarking has received significant research attention in the current decade, yet almost all watermarking models have been either irreversible (the original relation cannot be restored from the watermarked relation) and/or non-blind (requiring the original relation to detect the watermark in the watermarked relation). Such models have several disadvantages compared with reversible and blind watermarking (which requires only the watermarked relation and a secret key, from which the watermark is detected and the original relation is restored), including the inability to identify the rightful owner in case of successful secondary watermarking, the inability to revert the relation to the original data set (required in high-precision industries), and the requirement to store the unmarked relation in secure secondary storage. To overcome these problems, we propose a watermarking scheme that is both reversible and blind. We utilize difference expansion on integers to achieve reversibility. The major advantages provided by our scheme are reversibility to a high-quality original data set, rightful owner identification, resistance against secondary watermarking attacks, and no need to store the original database in secure secondary storage.