843 results for Tracking and trailing.
Abstract:
The mean shift tracker has achieved great success in visual object tracking thanks to its efficiency as a nonparametric method. However, it is still difficult for the tracker to handle scale changes of the object. In this paper, we combine a scale-adaptive approach with the mean shift tracker. First, the target in the current frame is located by the mean shift tracker. Then, a feature-point matching procedure is employed to obtain matched pairs of feature points between the target regions in the current and previous frames. We employ the FAST-9 corner detector and the HOG descriptor for the feature matching. Finally, using the matched pairs, the affine transformation between the target regions in the two frames is solved to obtain the current scale of the target. Experimental results show that the proposed tracker gives satisfying results when the scale of the target changes, while maintaining good efficiency.
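The scale-recovery step this abstract describes can be sketched as a least-squares fit over the matched feature-point pairs. The snippet below is a minimal illustration that assumes a similarity transform (the paper solves a full affine transform); the point coordinates are hypothetical.

```python
import numpy as np

def estimate_scale(prev_pts, curr_pts):
    """Estimate the relative scale of the target between two frames
    from matched feature-point pairs. This fits only the scale of a
    similarity transform, a simplification of the paper's affine fit."""
    prev_pts = np.asarray(prev_pts, dtype=float)
    curr_pts = np.asarray(curr_pts, dtype=float)
    # Centre each point set on its centroid.
    p = prev_pts - prev_pts.mean(axis=0)
    q = curr_pts - curr_pts.mean(axis=0)
    # Scale = ratio of RMS spreads about the centroids.
    return np.sqrt((q ** 2).sum() / (p ** 2).sum())
```

For a target that doubled in size between frames, the function returns 2.0 regardless of translation, since both point sets are centred first.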
Abstract:
The DeLone and McLean (D&M) model (2003) has been broadly used and generally recognised as a useful model for gauging the success of IS implementations. However, it is not without limitations. In this study, we evaluate a model that extends the D&M model and attempts to address some of its limitations by providing a more complete measurement model of systems success. To that end, we augment the D&M (2003) model with three variables: business value, institutional trust, and future readiness. We propose that the addition of these variables allows systems success to be assessed at both the systems level and the business level. Consequently, we develop a measurement model rather than a structural or predictive model of systems success.
Abstract:
This research used software developed for managing the Australian sugar industry's cane rail transport operations, together with GPS data collected to track locomotives for safe operation of the railway system, to improve transport operations. As a result, time usage on the sugarcane railway can now be summarised, and locomotive arrival times at sidings and mills can be predicted. This information will help the development of more efficient run schedules and enable mill staff and harvesters to better plan their shifts ahead, enabling cost reductions through better use of available time.
Abstract:
The aim of this study was to develop a new method for quantifying intersegmental motion of the spine in an instrumented motion segment L4–L5 model using ultrasound image post-processing combined with an electromagnetic device. A prospective test–retest design was employed, combined with an evaluation of stability and within- and between-day intra-tester reliability during forward bending by 15 healthy male participants. The accuracy of the measurement system using the model was calculated to be ± 0.9° (standard deviation = 0.43) over a 40° range and ± 0.4 cm (standard deviation = 0.28) over 1.5 cm. The mean composite range of forward bending was 15.5 ± 2.04° during a single trial (standard error of the mean = 0.54, coefficient of variation = 4.18). Reliability (intra-class correlation coefficient, model (2,1)) was found to be excellent for both within-day measures (0.995–0.999) and between-day measures (0.996–0.999). Further work is necessary to explore the use of this approach in the evaluation of biomechanics, clinical assessments and interventions.
Abstract:
As connectivity analyses become more popular, claims are often made about how the brain's anatomical networks depend on age, sex, or disease. It is unclear how results depend on the tractography methods used to compute fiber networks. We applied 11 tractography methods to high angular resolution diffusion images of the brain (4-Tesla 105-gradient HARDI) from 536 healthy young adults. We parcellated the cortex into 70 regions, yielding 70×70 connectivity matrices encoding fiber density. We computed popular graph-theory metrics, including network efficiency and characteristic path length. Both metrics were robust to the number of spherical harmonics used to model diffusion (4th-8th order). Age effects were detected only for networks computed with the probabilistic Hough-transform method, which excludes smaller fibers. Sex and total brain volume affected networks measured with deterministic, tensor-based fiber tracking but not with the Hough method. Each tractography method includes different fibers, which affects inferences made about the reconstructed networks.
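The two graph metrics this abstract names can be computed directly from a connectivity matrix. The sketch below assumes a binary, undirected adjacency matrix for brevity; fiber-density matrices like the ones described are weighted and are usually thresholded first.

```python
import numpy as np

def graph_metrics(adj):
    """Characteristic path length and global efficiency of a binary,
    undirected connectivity matrix. Assumes the graph is connected;
    a disconnected graph yields an infinite path length."""
    n = len(adj)
    # Edge distances: 1 where connected, infinity otherwise.
    dist = np.where(np.asarray(adj) > 0, 1.0, np.inf)
    np.fill_diagonal(dist, 0.0)
    # Floyd-Warshall all-pairs shortest paths.
    for k in range(n):
        dist = np.minimum(dist, dist[:, [k]] + dist[[k], :])
    off = ~np.eye(n, dtype=bool)
    path_length = dist[off].mean()         # characteristic path length
    efficiency = (1.0 / dist[off]).mean()  # global efficiency
    return path_length, efficiency
```

On a fully connected triangle both metrics are 1.0; on a three-node chain the path length rises to 4/3 and the efficiency drops to 5/6, illustrating how the two metrics move in opposite directions as the network becomes sparser.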
Abstract:
This paper presents a novel vision-based underwater robotic system for the identification and control of Crown-Of-Thorns starfish (COTS) in coral reef environments. COTS have been identified as one of the most significant threats to Australia's Great Barrier Reef. These starfish literally eat coral, impacting large areas of reef and the marine ecosystem that depends on it. Evidence has suggested that land-based nutrient runoff has accelerated recent outbreaks of COTS, requiring extensive use of divers who manually inject biological agents into the starfish in an attempt to control population numbers. Facilitating this control program using robotics is the goal of our research. In this paper we introduce a vision-based COTS detection and tracking system based on a Random Forest Classifier (RFC) trained on images from underwater footage. To track COTS with a moving camera, we embed the RFC in a particle filter detector and tracker, where the predicted class probability of the RFC is used as an observation probability to weight the particles, and we use a sparse optical flow estimation for the prediction step of the filter. The system is experimentally evaluated in a realistic laboratory setup using a robotic arm that moves a camera at different speeds and heights over a range of real-sized images of COTS in a reef environment.
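The filter structure this abstract describes — optical flow for prediction, classifier probability for the observation weight — can be sketched in a few lines. The `class_prob` callable below is a hypothetical stand-in for the trained Random Forest; the noise level and resampling scheme are illustrative assumptions, not the paper's values.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, flow, class_prob):
    """One predict/update/resample cycle of a particle-filter tracker
    in the spirit of the system described above. `particles` is an
    (N, 2) array of image positions; `flow` is the sparse optical-flow
    estimate; `class_prob(x, y)` returns the classifier's predicted
    class probability at that position."""
    # Predict: shift particles by the optical-flow estimate plus noise.
    particles = particles + flow + rng.normal(0.0, 1.0, particles.shape)
    # Update: the classifier probability acts as the observation likelihood.
    weights = np.array([class_prob(x, y) for x, y in particles])
    weights = weights / weights.sum()
    # Resample in proportion to the weights (multinomial resampling
    # keeps the sketch short; systematic resampling is also common).
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))
```

Because the flow estimate carries the particles toward the detection and the classifier reweights them there, the particle cloud concentrates on the target even as the camera moves.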
Abstract:
The paper presents a new approach to improve the detection and tracking performance of a track-while-scan (TWS) radar. The contribution consists of three parts. In Part 1 the scope of various papers in this field is reviewed. In Part 2, a new approach for integrating the detection and tracking functions is presented. It shows how a priori information from the TWS computer can be used to improve detection. A new multitarget tracking algorithm has also been developed. It is specifically oriented towards solving the combinatorial problems in multitarget tracking. In Part 3, analytical derivations are presented for quantitatively assessing, a priori, the performance of a track-while-scan radar system (true track initiation, false track initiation, true track continuation and false track deletion characteristics). Simulation results are also shown.
Abstract:
The paper presents, in three parts, a new approach to improve the detection and tracking performance of a track-while-scan radar. Part 1 presents a review of the current status of the subject. Part 2 details the new approach. It shows how a priori information provided by the tracker can be used to improve detection. It also presents a new multitarget tracking algorithm. In the present Part, analytical derivations are presented for assessing, a priori, the performance of the TWS radar system. True track initiation, false track initiation, true track continuation and false track deletion characteristics have been studied. It indicates how the various thresholds can be chosen by the designer to optimise performance. Simulation results are also presented.
Abstract:
The paper presents, in three parts, a new approach to improve the detection and tracking performance of a track-while-scan (TWS) radar. Part 1 presents a review of the current status. In this part, Part 2, it is shown how detection can be improved by utilising information from the tracker. A new multitarget tracking algorithm, capable of tracking manoeuvring targets in clutter, is then presented. The algorithm is specifically tailored so that the solution to the combinatorial problem presented in a companion paper can be applied. The implementation aspects are discussed and a multiprocessor architecture is identified to realise the full potential of the algorithm. Part 3 presents analytical derivations for quantitative assessment of the performance of the TWS radar system. It also shows how the performance can be optimised.
Abstract:
There is increased interest in the use of UAVs for environmental research, such as tracking bush fires, volcanic eruptions, chemical accidents or pollution sources. The aim of this paper is to describe the theory and results of a bio-inspired plume-tracking algorithm. A method for generating sparse plumes in a virtual environment was also developed. Results indicated the ability of the algorithms to track plumes in 2D and 3D. The system has been tested with hardware-in-the-loop (HIL) simulations and in flight using a CO2 gas sensor mounted on a multi-rotor UAV. The UAV is controlled by the plume-tracking algorithm running on the ground control station (GCS).
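The abstract does not detail the paper's algorithm, but a common bio-inspired plume-tracking heuristic is surge-and-cast (surge upwind while the sensor detects the plume; cast crosswind, alternating sides, when it is lost). The sketch below is illustrative only, with an assumed wind direction and velocity convention.

```python
def surge_cast_step(in_plume, cast_speed, cast_sign, upwind=(0.0, 1.0)):
    """One decision step of a surge-and-cast plume-tracking heuristic
    (a hypothetical sketch, not the paper's algorithm). Returns a
    (vx, vy) velocity command and the crosswind side to try next."""
    if in_plume:
        # Plume detected: surge upwind toward the suspected source.
        return upwind, cast_sign
    # Plume lost: cast crosswind, alternating sides to reacquire it.
    return (cast_sign * cast_speed, 0.0), -cast_sign
```

In practice the step would run at the sensor rate on the ground control station, with the CO2 reading thresholded to produce `in_plume` and the resulting velocity command sent to the UAV.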
Abstract:
Rarely is it possible to obtain absolute numbers in free-ranging populations, and although various direct and indirect methods are used to estimate abundance, few are validated against populations of known size. In this paper, we apply grounding, calibration and verification methods, used to validate mathematical models, to methods of estimating relative abundance. To illustrate how this might be done, we consider and evaluate the widely applied passive tracking index (PTI) methodology. Using published data, we examine the rationale of PTI methodology, how animal activity and abundance are conceptually related, and whether alternative methods are subject to similar biases or produce similar abundance estimates and trends. We then calibrate the method against populations representing a range of densities likely to be encountered in the field. Finally, we compare PTI trends against the prediction that adjacent populations of the same species will have similar abundance values and trends in activity. We show that while PTI abundance estimates are subject to environmental and behavioural stochasticity peculiar to each species, the PTI method and associated variance estimate show a high probability of detection, high precision of abundance values and, generally, low variability between surveys. We suggest that the PTI method, applied using this procedure and for these species, provides a sensitive and credible index of abundance. The same or a similar validation approach can and should be applied to alternative relative-abundance methods in order to demonstrate their credibility and justify their use.
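A passive tracking index is typically computed as a mean intrusion rate over tracking plots and survey nights, together with a variance for the index. The sketch below is a minimal, generic form of such an index; the exact estimator in the published methodology may differ.

```python
import numpy as np

def passive_tracking_index(counts):
    """Passive tracking index (PTI) from a plots x nights array of
    track counts for one species: the mean nightly activity across
    plots, averaged over nights, with the variance of that mean.
    A generic sketch of an activity index, not the published formula."""
    counts = np.asarray(counts, dtype=float)
    nightly_means = counts.mean(axis=0)   # mean activity per night across plots
    pti = nightly_means.mean()            # index over the whole survey
    # Variance of the index, treating nights as independent replicates.
    var = nightly_means.var(ddof=1) / len(nightly_means)
    return pti, var
```

Because the index is a relative measure, the calibration step the abstract describes is what ties PTI values back to populations of known density.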
Abstract:
Topic detection and tracking (TDT) is an area of information retrieval research the focus of which revolves around news events. The problems TDT deals with relate to segmenting news text into cohesive stories, detecting something new, previously unreported, tracking the development of a previously reported event, and grouping together news that discuss the same event. The performance of the traditional information retrieval techniques based on full-text similarity has remained inadequate for online production systems. It has been difficult to make the distinction between same and similar events. In this work, we explore ways of representing and comparing news documents in order to detect new events and track their development. First, however, we put forward a conceptual analysis of the notions of topic and event. The purpose is to clarify the terminology and align it with the process of news-making and the tradition of story-telling. Second, we present a framework for document similarity that is based on semantic classes, i.e., groups of words with similar meaning. We adopt people, organizations, and locations as semantic classes in addition to general terms. As each semantic class can be assigned its own similarity measure, document similarity can make use of ontologies, e.g., geographical taxonomies. The documents are compared class-wise, and the outcome is a weighted combination of class-wise similarities. Third, we incorporate temporal information into document similarity. We formalize the natural language temporal expressions occurring in the text, and use them to anchor the rest of the terms onto the time-line. Upon comparing documents for event-based similarity, we look not only at matching terms, but also how near their anchors are on the time-line. Fourth, we experiment with an adaptive variant of the semantic class similarity system. 
The news reflects changes in the real world, and in order to keep up, the system has to change its behavior based on the contents of the news stream. We put forward two strategies for rebuilding the topic representations and report experimental results. We run experiments with three annotated TDT corpora. The use of semantic classes increased the effectiveness of topic tracking by 10-30% depending on the experimental setup. The gain in spotting new events remained lower, around 3-4%. Anchoring the text to a time-line based on the temporal expressions gave a further 10% increase in the effectiveness of topic tracking. The gains in detecting new events, again, remained smaller. The adaptive systems did not improve the tracking results.
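The class-wise document similarity this abstract describes — compare each semantic class separately, then take a weighted combination — can be sketched compactly. The class names, weights, and example documents below are assumptions for illustration; the thesis assigns its own similarity measure per class.

```python
from collections import Counter
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two term-frequency Counters."""
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def class_wise_similarity(doc1, doc2, weights):
    """Weighted combination of class-wise similarities: each semantic
    class (e.g. people, organizations, locations, general terms) is
    compared separately, and the per-class scores are combined with
    the given weights. Docs map class name -> list of terms."""
    return sum(w * cosine(Counter(doc1.get(c, [])), Counter(doc2.get(c, [])))
               for c, w in weights.items())
```

Two stories that share a location but describe different incidents score high on the `locations` class and low on `terms`, which is exactly the separation that helps distinguish same from merely similar events.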
Abstract:
Free and Open Source Software (FOSS) has gained increased interest in the computer software industry, but assessing its quality remains a challenge. FOSS development is frequently carried out by globally distributed development teams, and all stages of development are publicly visible. Several product and process-level quality factors can be measured using the public data. This thesis presents a theoretical background for software quality and metrics and their application in a FOSS environment. Information available from FOSS projects in three information spaces are presented, and a quality model suitable for use in a FOSS context is constructed. The model includes both process and product quality metrics, and takes into account the tools and working methods commonly used in FOSS projects. A subset of the constructed quality model is applied to three FOSS projects, highlighting both theoretical and practical concerns in implementing automatic metric collection and analysis. The experiment shows that useful quality information can be extracted from the vast amount of data available. In particular, projects vary in their growth rate, complexity, modularity and team structure.
Abstract:
This paper asks a new question: how can RFID technology be used to market products in supermarkets, and how can its performance or ROI (return on investment) be measured? We try to answer the question by proposing a simulation model whereby customers become aware of other customers' real-time shopping behavior and may hence be influenced by their purchases and the levels of purchases. The proposed model is orthogonal to the sales model and can have similar effects: an increase in the overall shopping volume. Managers often struggle to predict the ROI of purchasing such a technology; this simulation sets out to answer questions such as the percentage increase in sales given real-time purchase information shared with other customers. The simulation is also flexible enough to incorporate any given model of customer behavior tailored to a particular supermarket, its settings, events or promotions. The results, although preliminary, are promising for the use of RFID technology to market products in supermarkets, and suggest several directions for influencing customers via feedback, real-time marketing, targeted advertising and on-demand promotions. Several other parameters are discussed, including herd behavior, fake customers, privacy, the optimality of the sales-price margin and the ROI of investing in RFID technology for marketing purposes. © 2010 Springer Science+Business Media B.V.
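The core feedback loop of such a simulation — customers' purchase probability rising with the real-time purchase information broadcast to them — can be sketched in a toy form. All parameters and the influence rule below are illustrative assumptions, not the paper's model.

```python
import random

def simulate_sales(n_customers, base_prob, influence, seed=0):
    """Toy version of the simulation: each arriving customer buys with
    a base probability, raised by a term proportional to the fraction
    of earlier customers who bought (the real-time RFID feedback).
    Returns total sales; a hypothetical sketch, not the paper's model."""
    rng = random.Random(seed)
    sales = 0
    for i in range(n_customers):
        # Fraction of prior buyers, as broadcast via RFID feedback.
        seen = sales / i if i else 0.0
        p = min(1.0, base_prob + influence * seen)
        if rng.random() < p:
            sales += 1
    return sales
```

Comparing a run with `influence = 0` against one with a positive influence term gives exactly the kind of ROI-style answer the abstract mentions: the percentage increase in sales attributable to sharing real-time purchase information.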