237 results for Subpixel precision


Relevance:

10.00%

Abstract:

Piezoelectric composites comprising an active ferroelectric ceramic phase in a polymer matrix have recently attracted attention for numerous sensing applications. However, it remains a major challenge to further improve their electromechanical response for advanced applications such as precision control and monitoring systems. We investigated the incorporation of graphene platelets (GnPs) and multi-walled carbon nanotubes (MWNTs), each at various weight fractions, into PZT (lead zirconate titanate)/epoxy composites to produce three-phase nanocomposites. The nanocomposite films show markedly improved piezoelectric coefficients and electromechanical responses (50%), alongside an enhancement of ~200% in stiffness. The carbon nanomaterials strengthened the effect of the electric field on the PZT particles by appropriately raising the electrical conductivity of the epoxy. GnPs proved far more promising than MWNTs in improving poling behavior and dynamic response. The superior dynamic sensitivity of the GnP-reinforced composite may stem from the GnPs' high load-transfer efficiency, arising from their two-dimensional geometry and good compatibility with the matrix. Reduced acoustic impedance mismatch resulting from improved thermal conductance may also contribute to the higher sensitivity of the GnP-reinforced composite. This research points to the potential of employing GnPs to develop highly sensitive piezoelectric composites for sensing applications.

Relevance:

10.00%

Abstract:

Purpose - Contemporary offshore Information System Development (ISD) outsourcing is becoming ever more complex. Outsourcing partners have begun 're-outsourcing' components of their projects to other outsourcing companies to minimize cost and gain efficiencies. This paper aims to explore intra-organizational Information Asymmetry in re-outsourced offshore ISD outsourcing projects. Design/methodology/approach - An online survey was conducted to obtain an overall view of Information Asymmetry between Principal and Agents (as per Agency theory). Findings - Statistical analysis showed significant differences between the Principal and Agent on the clarity of requirements, common domain knowledge and communication effectiveness constructs, implying an unbalanced relationship between the parties. Moreover, the results showed that these three are significant measurement constructs of Information Asymmetry. Research limitations/implications - This study considered only three factors (common domain knowledge, clarity of requirements and communication effectiveness) as measurement constructs of Information Asymmetry. Researchers are therefore encouraged to test the proposed constructs further to increase their precision. Practical implications - The analysis indicates significant differences in all three measurement constructs, implying difficulties in ensuring that the Agent performs according to the requirements of the Principal. Using Agency theory as the theoretical lens, this study sheds light on contract governance methods that minimize Information Asymmetry between the multiple partners within ISD outsourcing organizations. Originality/value - To the best of our knowledge, no study has yet investigated intra-organizational Information Asymmetry in re-outsourced offshore ISD outsourcing projects.

Relevance:

10.00%

Abstract:

Background Despite the emerging use of treadmills integrated with pressure platforms as outcome tools in both clinical and research settings, published evidence regarding the measurement properties of these new systems is limited. This study evaluated the within- and between-day repeatability of spatial, temporal and vertical ground reaction force parameters measured by a treadmill system instrumented with a capacitance-based pressure platform. Methods Thirty-three healthy adults (mean age, 21.5 ± 2.8 years; height, 168.4 ± 9.9 cm; mass, 67.8 ± 18.6 kg) walked barefoot on a treadmill system (FDM-THM-S, Zebris Medical GmbH) on three separate occasions. For each testing session, participants set their preferred pace but were blinded to treadmill speed. Spatial (foot rotation, step width, stride and step length), temporal (stride and step times; duration of stance, swing, and single and double support) and peak vertical ground reaction force variables were collected over a 30-second capture period, equating to an average of 52 ± 5 steps of steady-state walking. Testing was repeated one week after the initial trial and again, for a third time, 20 minutes later. Repeated-measures ANOVAs within a generalized linear modelling framework were used to assess between-session differences in gait parameters. Agreement between gait parameters measured within the same day (sessions 2 and 3) and between days (sessions 1 and 2; 1 and 3) was evaluated using the 95% repeatability coefficient. Results There were statistically significant differences in the majority (14/16) of temporal, spatial and kinetic gait parameters over the three test sessions (P < .01). The minimum change that could be detected with 95% confidence ranged between 3% and 17% for temporal parameters, 14% and 33% for spatial parameters, and 4% and 20% for kinetic parameters between days. Within-day repeatability was similar to that observed between days. Temporal and kinetic gait parameters were typically more consistent than spatial parameters. The 95% repeatability coefficient for vertical force peaks ranged between ± 53 and ± 63 N. Conclusions The limits of agreement in spatial parameters and ground reaction forces for the treadmill system encompass previously reported changes with neuromuscular pathology and footwear interventions. These findings provide clinicians and researchers with an indication of the repeatability and sensitivity of the Zebris treadmill system in detecting changes in common spatiotemporal gait parameters and vertical ground reaction forces.
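The 95% repeatability coefficient used above is, in one common Bland-Altman style formulation, 1.96 times the standard deviation of the paired between-session differences. A minimal sketch in Python, using illustrative peak-force values rather than the study's data:

```python
import statistics

def repeatability_coefficient(session_a, session_b):
    # Bland-Altman style 95% repeatability coefficient:
    # 1.96 x the standard deviation of the paired differences.
    diffs = [a - b for a, b in zip(session_a, session_b)]
    return 1.96 * statistics.stdev(diffs)

# Illustrative peak vertical forces (N) for five participants, two sessions
day1 = [812.0, 745.5, 903.2, 688.1, 770.4]
day2 = [830.5, 741.0, 881.7, 701.3, 762.8]
rc = repeatability_coefficient(day1, day2)  # smallest change detectable with 95% confidence
```

The returned value has the same units as the input, which is why the abstract can report force repeatability directly in newtons.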

Relevance:

10.00%

Abstract:

Objectives This study introduces and assesses the precision of a standardized protocol for anthropometric measurement of the juvenile cranium using three-dimensional surface-rendered models, for implementation in forensic investigation or paleodemographic research. Materials and methods A subset of multi-slice computed tomography (MSCT) DICOM datasets (n=10) of modern Australian subadults (birth to 10 years) was accessed from the "Skeletal Biology and Forensic Anthropology Virtual Osteological Database" (n>1200), obtained from retrospective clinical scans taken at Brisbane children's hospitals (2009–2013). The capabilities of Geomagic Design X™ form the basis of this study, which introduces standardized protocols using triangle surface mesh models to (i) ascertain linear dimensions using reference plane networks and (ii) calculate the area of complex regions of interest on the cranium. Results The protocols described in this paper demonstrate high levels of repeatability between five observers of varying anatomical expertise and software experience. Intra- and inter-observer error was indiscernible, with total technical error of measurement (TEM) values ≤0.56 mm, constituting <0.33% relative error (rTEM) for linear measurements; and a TEM value of ≤12.89 mm2, equating to <1.18% (rTEM) of the total area of the anterior fontanelle and contiguous sutures. Conclusions Exploiting the advances of MSCT in routine clinical assessment, this paper assesses the application of this virtual approach to acquire highly reproducible morphometric data in a non-invasive manner for human identification and population studies in growth and development. The protocols and precision testing presented are imperative for the advancement of "virtual anthropology" into routine Australian medico-legal death investigation.
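The TEM and rTEM figures quoted are standard anthropometric error statistics: for paired repeat measurements, TEM is the square root of the summed squared differences divided by 2n, and rTEM expresses it as a percentage of the grand mean. A minimal sketch with made-up cranial measurements (the values below are illustrative, not from the study):

```python
import math

def tem(obs1, obs2):
    # Technical error of measurement for paired repeat measurements:
    # sqrt(sum of squared differences / 2n).
    n = len(obs1)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(obs1, obs2)) / (2 * n))

def rtem(obs1, obs2):
    # Relative TEM: TEM as a percentage of the grand mean of all measurements.
    grand_mean = (sum(obs1) + sum(obs2)) / (2 * len(obs1))
    return 100 * tem(obs1, obs2) / grand_mean

# Illustrative cranial linear dimensions (mm) recorded by two observers
o1 = [71.2, 84.5, 92.3, 78.8]
o2 = [71.4, 84.1, 92.6, 78.5]
```

Reporting both statistics, as the abstract does, separates absolute error (mm) from error scaled to measurement size (%).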

Relevance:

10.00%

Abstract:

In this paper, we present SMART (Sequence Matching Across Route Traversals): a vision-based place recognition system that uses whole-image matching techniques and odometry information to improve the precision-recall performance, latency and general applicability of the SeqSLAM algorithm. We evaluate the system's performance on challenging day and night journeys over several kilometres at widely varying vehicle velocities from 0 to 60 km/h, compare performance to the current state-of-the-art SeqSLAM algorithm, and provide parameter studies that evaluate the effectiveness of each system component. Using 30-metre sequences, SMART achieves place recognition performance of 81% recall at 100% precision, outperforming SeqSLAM, and is robust to significant degradations in odometry.
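"Recall at 100% precision", the headline metric here, can be computed by sweeping a match-score threshold and taking the highest recall reached before the first false positive appears. A minimal sketch with hypothetical scores and ground-truth labels (not the SMART system itself):

```python
def recall_at_full_precision(scores, labels):
    # Largest recall achievable at 100% precision: accept matches in
    # descending score order and stop at the first incorrect one.
    # scores: higher = more confident match; labels: True if the match is correct.
    pairs = sorted(zip(scores, labels), reverse=True)
    total_pos = sum(labels)
    tp = 0
    best = 0.0
    for _, correct in pairs:
        if not correct:
            break  # first false positive ends the 100%-precision regime
        tp += 1
        best = tp / total_pos
    return best
```

A single confidently wrong match caps this metric, which is why place recognition papers treat it as a demanding operating point.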

Relevance:

10.00%

Abstract:

Multiple reaction monitoring (MRM) mass spectrometry coupled with stable isotope dilution (SID) and liquid chromatography (LC) is increasingly used in biological and clinical studies for precise and reproducible quantification of peptides and proteins in complex sample matrices. Robust LC-SID-MRM-MS-based assays that can be replicated across laboratories, and ultimately in clinical laboratory settings, require standardized protocols to demonstrate that the analysis platforms are performing adequately. We developed a system suitability protocol (SSP), which employs a predigested mixture of six proteins, to facilitate performance evaluation of LC-SID-MRM-MS instrument platforms configured with nanoflow-LC systems interfaced to triple quadrupole mass spectrometers. The SSP was designed for use with low-multiplex analyses as well as high-multiplex approaches when software-driven scheduling of data acquisition is required. Performance was assessed by monitoring a range of chromatographic and mass spectrometric metrics, including peak width, chromatographic resolution, peak capacity, and the variability in peak area and analyte retention time (RT) stability. The SSP, which was evaluated in 11 laboratories on a total of 15 different instruments, enabled early diagnosis of LC and MS anomalies that indicated suboptimal LC-MRM-MS performance. The observed range in variation of each of the metrics scrutinized serves to define the criteria for optimized LC-SID-MRM-MS platforms for routine use, with pass/fail criteria for system suitability performance measures defined as a peak area coefficient of variation <0.15, a peak width coefficient of variation <0.15, a standard deviation of RT <0.15 min (9 s), and an RT drift <0.5 min (30 s). The deleterious effect of a marginally performing LC-SID-MRM-MS system on the limit of quantification (LOQ) in targeted quantitative assays illustrates the use of, and need for, an SSP to establish robust and reliable system performance. Use of an SSP helps to ensure that analyte quantification measurements can be replicated with good precision within and across multiple laboratories, and should facilitate more widespread use of MRM-MS technology by the basic biomedical and clinical laboratory research communities.
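The pass/fail thresholds stated in the abstract (peak area CV <0.15, peak width CV <0.15, RT standard deviation <0.15 min, RT drift <0.5 min) lend themselves to a simple automated check over replicate injections. A sketch under those thresholds; the replicate values and function names are hypothetical:

```python
import statistics

# Pass/fail thresholds as stated in the abstract
THRESHOLDS = {
    "peak_area_cv": 0.15,
    "peak_width_cv": 0.15,
    "rt_sd_min": 0.15,   # minutes
    "rt_drift_min": 0.5, # minutes
}

def cv(values):
    # Coefficient of variation: standard deviation over the mean.
    return statistics.stdev(values) / statistics.mean(values)

def system_suitability(peak_areas, peak_widths, retention_times):
    # Returns a pass/fail verdict per metric for one analyte's replicates.
    metrics = {
        "peak_area_cv": cv(peak_areas),
        "peak_width_cv": cv(peak_widths),
        "rt_sd_min": statistics.stdev(retention_times),
        "rt_drift_min": max(retention_times) - min(retention_times),
    }
    return {name: value <= THRESHOLDS[name] for name, value in metrics.items()}
```

A per-metric verdict, rather than a single boolean, mirrors the protocol's aim of diagnosing whether an anomaly is chromatographic or mass spectrometric.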

Relevance:

10.00%

Abstract:

The ability to automate forced landings in an emergency such as engine failure is essential to improving the safety of Unmanned Aerial Vehicles operating in General Aviation airspace. By using active vision to detect safe landing zones below the aircraft, the reliability and safety of such systems are vastly improved by gathering up-to-the-minute information about the ground environment. This paper presents the Site Detection System, a methodology utilising a downward-facing camera to analyse the ground environment in both 2D and 3D, detect safe landing sites and characterise them according to size, shape, slope and nearby obstacles. The methodology fuses landing-site detection from 2D imagery with a coarse Digital Elevation Map and dense 3D reconstructions using INS-aided Structure-from-Motion to improve accuracy. Results are presented from an experimental flight showing the precision/recall of detected landing sites against a hand-classified ground truth, and improved performance with the integration of 3D analysis from visual Structure-from-Motion.

Relevance:

10.00%

Abstract:

Black et al. (2004) identified a systematic difference between LA–ICP–MS and TIMS measurements of 206Pb/238U in zircons, which they correlated with the incompatible trace element content of the zircon. We show that the offset between the LA–ICP–MS and TIMS measured 206Pb/238U correlates more strongly with the total radiogenic Pb than with any incompatible trace element. This suggests that the cause of the 206Pb/238U offset is related to differences in the radiation damage (alpha dose) between the reference and unknowns. We test this hypothesis in two ways. First, we show that there is a strong correlation between the difference in the LA–ICP–MS and TIMS measured 206Pb/238U and the difference in the alpha dose received by unknown and reference zircons. The LA–ICP–MS ages for the zircons we have dated can range from 5.1% younger than their TIMS age to 2.1% older, depending on whether the unknown or reference received the higher alpha dose. Second, we show that by annealing both reference and unknown zircons at 850 °C for 48 h in air we can eliminate the alpha-dose-induced differences in measured 206Pb/238U. This was achieved by analyzing six reference zircons a minimum of 16 times in two round robin experiments: the first consisting of unannealed zircons and the second of annealed grains. The maximum offset between the LA–ICP–MS and TIMS measured 206Pb/238U for the unannealed zircons was 2.3%, which reduced to 0.5% for the annealed grains, as predicted by within-session precision based on counting statistics. Annealing unknown zircons and references to the same state prior to analysis holds the promise of reducing the 3% external error for the measurement of 206Pb/238U of zircon by LA–ICP–MS, indicated by Klötzli et al. (2009), to better than 1%, but more analyses of annealed zircons by other laboratories are required to evaluate the true potential of the annealing method.

Relevance:

10.00%

Abstract:

Timely and comprehensive scene segmentation is often a critical step for many high-level mobile robotic tasks. This paper examines a projected-area-based neighbourhood lookup approach, motivated by the need for faster unsupervised segmentation of dense 3D point clouds. The proposed algorithm exploits the projection geometry of a depth camera to find nearest neighbours in time independent of the input data size. Points near depth discontinuities are also detected to reinforce object boundaries in the clustering process. The search method presented is evaluated using both indoor and outdoor dense depth images and demonstrates significant improvements in speed and precision compared to the commonly used Fast Library for Approximate Nearest Neighbours (FLANN) [Muja and Lowe, 2009].
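The core idea, that a depth camera's projection geometry makes neighbour lookup cost independent of point-cloud size, can be illustrated on an organized depth image: a point's candidate neighbours are simply its adjacent pixels, and pixels across a large depth jump are rejected as discontinuities. A simplified sketch (4-connected neighbours and the jump threshold are my assumptions, not the paper's exact scheme):

```python
def grid_neighbours(depth, r, c, max_jump=0.05):
    # Candidate neighbours of pixel (r, c) in an organized depth image
    # (row-major list of rows of depth values in metres).
    # Constant-time lookup: only the four adjacent pixels are inspected,
    # and pixels across a depth discontinuity (> max_jump) are rejected.
    h, w = len(depth), len(depth[0])
    out = []
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < h and 0 <= nc < w and abs(depth[nr][nc] - depth[r][c]) <= max_jump:
            out.append((nr, nc))
    return out
```

Compare this with a kd-tree or FLANN query, whose cost grows with the number of stored points; here the work per lookup is fixed regardless of image size.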

Relevance:

10.00%

Abstract:

This paper presents a pose estimation approach that is resilient to typical sensor failure and suitable for low-cost agricultural robots. Guiding large agricultural machinery with highly accurate GPS/INS systems has become standard practice; however, these systems are inappropriate for smaller, lower-cost robots. Our positioning system estimates pose by fusing data from a low-cost global positioning sensor, low-cost inertial sensors and a new technique for vision-based row tracking. The results first demonstrate that our positioning system will accurately guide a robot to perform a coverage task across a 6 hectare field. The results then demonstrate that our vision-based row tracking algorithm improves the performance of the positioning system despite long periods of precision correction signal dropout and intermittent dropouts of the entire GPS sensor.

Relevance:

10.00%

Abstract:

This paper presents a new multi-scale place recognition system inspired by the recent discovery of overlapping, multi-scale spatial maps stored in the rodent brain. By training a set of Support Vector Machines to recognize places at varying levels of spatial specificity, we are able to validate spatially specific place recognition hypotheses against broader place recognition hypotheses without sacrificing localization accuracy. We evaluate the system in a range of experiments using cameras mounted on a motorbike and a human in two different environments. At 100% precision, the multi-scale approach results in a 56% average improvement in recall rate across both datasets. We analyse the results and then discuss future work that may lead to improvements in both robotic mapping and our understanding of sensory processing and encoding in the mammalian brain.

Relevance:

10.00%

Abstract:

In this paper we present a novel place recognition algorithm inspired by recent discoveries in human visual neuroscience. The algorithm combines intolerant but fast low-resolution whole-image matching with highly tolerant sub-image patch matching processes. The approach does not require prior training and works on single images (although we use a cohort normalization score to exploit temporal frame information), alleviating the need for either a velocity signal or an image sequence and differentiating it from current state-of-the-art methods. We demonstrate the algorithm on the challenging Alderley sunny day – rainy night dataset, which had previously been solved only by integrating over image sequences 320 frames long. The system is able to achieve 21.24% recall at 100% precision, matching drastically different day and night-time images of places while successfully rejecting match hypotheses between highly aliased images of different places. The results provide a new benchmark for single-image, condition-invariant place recognition.

Relevance:

10.00%

Abstract:

Textual document sets have become an important and rapidly growing information source on the web, and text classification is one of the crucial technologies for their organisation and management. Text classification has become increasingly important and has attracted wide attention from researchers in different fields. This paper first introduces feature selection methods, implementation algorithms and applications of text classification. However, because the knowledge extracted by current data-mining techniques for text classification contains much noise, considerable uncertainty arises in the classification process, originating in both knowledge extraction and knowledge usage; more innovative techniques and methods are therefore needed to improve performance. Further improving the process of knowledge extraction and the effective utilization of the extracted knowledge remains a critical and challenging step. A Rough Set decision-making approach is proposed, applying Rough Set decision techniques to classify more precisely those textual documents that are difficult to separate with classic text classification methods. The purpose of this paper is to give an overview of existing text classification technologies; to demonstrate Rough Set concepts and the decision-making approach based on Rough Set theory for building a more reliable and effective text classification framework with higher precision; to set up an innovative evaluation metric named CEI, which is effective for performance assessment of similar research; and to propose a promising research direction for addressing the challenging problems in text classification, text mining and related fields.
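The Rough Set notion of documents that are "difficult to separate" corresponds to the boundary region between the lower and upper approximations of a target class under an indiscernibility partition. A minimal sketch with a toy partition (the document IDs and class are invented for illustration):

```python
def rough_set_approximations(equivalence_classes, target):
    # Lower approximation: equivalence classes wholly inside the target
    # (certain members). Upper approximation: classes overlapping the
    # target (possible members). Their difference is the boundary region.
    lower, upper = set(), set()
    for cls in equivalence_classes:
        if cls <= target:
            lower |= cls
        if cls & target:
            upper |= cls
    return lower, upper

# Toy example: documents 1-5 partitioned by indiscernible feature vectors
classes = [{1, 2}, {3, 4}, {5}]
relevant = {1, 2, 3}
lower, upper = rough_set_approximations(classes, relevant)
boundary = upper - lower  # documents the partition cannot classify decisively
```

Documents in the boundary region are exactly those a classic classifier built on these features cannot decide, which is where the proposed decision-making approach is aimed.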

Relevance:

10.00%

Abstract:

This paper describes a new method of indexing and searching large binary signature collections to efficiently find similar signatures, addressing the scalability problem in signature search. Signatures offer efficient computation with acceptable measure of similarity in numerous applications. However, performing a complete search with a given search argument (a signature) requires a Hamming distance calculation against every signature in the collection. This quickly becomes excessive when dealing with large collections, presenting issues of scalability that limit their applicability. Our method efficiently finds similar signatures in very large collections, trading memory use and precision for greatly improved search speed. Experimental results demonstrate that our approach is capable of finding a set of nearest signatures to a given search argument with a high degree of speed and fidelity.
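The exhaustive baseline the paper improves on, a Hamming-distance comparison against every signature in the collection, reduces to an XOR and a popcount per pair. A minimal sketch using integer-encoded signatures (this shows the brute-force search the index is designed to avoid, not the paper's method):

```python
def hamming(a, b):
    # Hamming distance between equal-length binary signatures stored as
    # ints: XOR leaves a 1 bit wherever the signatures differ, then count.
    return bin(a ^ b).count("1")

def nearest_signatures(query, collection, k=3):
    # Brute-force search: one Hamming computation per stored signature,
    # so cost grows linearly with collection size.
    return sorted(collection, key=lambda s: hamming(query, s))[:k]
```

The linear growth of this search with collection size is precisely the scalability problem motivating the indexing method described above.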

Relevance:

10.00%

Abstract:

The detection and correction of defects remains among the most time-consuming and expensive aspects of software development. Extensive automated testing and code inspections may mitigate their effect, but some code fragments are necessarily more likely to be faulty than others, and automated identification of fault-prone modules helps to focus testing and inspections, thus limiting wasted effort and potentially improving detection rates. However, software metrics data is often extremely noisy, with enormous imbalances in the size of the positive and negative classes. In this work, we present a new approach to predictive modelling of fault proneness in software modules, introducing a new feature representation to overcome some of these issues. This rank sum representation offers improved, or at worst comparable, performance relative to earlier approaches on standard data sets, and readily allows the user to choose an appropriate trade-off between precision and recall to optimise inspection effort for different testing environments. The method is evaluated using the NASA Metrics Data Program (MDP) data sets, and performance is compared with existing studies based on the Support Vector Machine (SVM) and Naïve Bayes (NB) classifiers, and with our own comprehensive evaluation of these methods.