24 results for multiple object tracking
Abstract:
We present a practical approach to Natural Language Generation (NLG) for spoken dialogue systems. The approach is based on small template fragments (mini-templates). The system’s object architecture facilitates generation of phrases across pre-defined business domains and registers, as well as into different languages. The architecture simplifies NLG in well-understood application contexts, while providing the flexibility for both the developer and the system to vary linguistic output according to dialogue context, including any intended affective impact. Mini-templates are used with a suite of domain term objects, resulting in an NLG system (MINTGEN – MINi-Template GENerator) whose extensibility and ease of maintenance are enhanced by the sparsity of information devoted to individual domains. The system also avoids the need for specialist linguistic competence on the part of the system maintainer.
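By way of illustration, here is a minimal Python sketch of how small template fragments might be combined with domain term objects to vary register; the `DomainTerm` and `MiniTemplate` classes are invented for this example and do not reflect MINTGEN's actual design.

```python
# Illustrative sketch only: a minimal mini-template generator in the spirit
# of MINTGEN. Class and method names are invented for this example and do
# not reflect the system's actual API.

class DomainTerm:
    """A domain object carrying per-language, per-register surface forms."""
    def __init__(self, forms):
        # forms: {(language, register): surface string}
        self.forms = forms

    def render(self, language, register):
        return self.forms[(language, register)]

class MiniTemplate:
    """A small template fragment with slots filled by DomainTerm objects."""
    def __init__(self, pattern):
        # A real system would also select the pattern per language/register.
        self.pattern = pattern

    def render(self, language, register, **slots):
        filled = {k: v.render(language, register) for k, v in slots.items()}
        return self.pattern.format(**filled)

# Example: the same dialogue act realized in two registers.
item = DomainTerm({("en", "formal"): "order", ("en", "casual"): "stuff"})
when = DomainTerm({("en", "formal"): "within two business days",
                   ("en", "casual"): "in a couple of days"})
t = MiniTemplate("Your {item} will arrive {when}.")
print(t.render("en", "formal", item=item, when=when))
print(t.render("en", "casual", item=item, when=when))
```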
Abstract:
Magnetic bright points (MBPs) in the internetwork are among the smallest objects in the solar photosphere and appear bright against the ambient environment. An algorithm is presented that can be used for the automated detection of MBPs in the spatial and temporal domains. The algorithm works by mapping the lanes through intensity thresholding. A compass search, combined with a study of the intensity gradient across the detected objects, allows the disentanglement of MBPs from bright pixels within the granules. Object growing is implemented to account for any pixels that might have been removed when mapping the lanes. The images are stabilized by locating long-lived objects that may have been missed due to variable light levels and seeing quality. Tests of the algorithm, employing data taken with the Swedish Solar Telescope, reveal that approximately 90 per cent of MBPs within a 75 × 75 arcsec² field of view are detected.
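As a rough illustration of the thresholding and object-growing steps described above, the following Python sketch marks dark lanes and bright pixels and grows candidate objects; the threshold fractions and dilation sizes are placeholders, not the paper's values.

```python
# Illustrative sketch: intensity thresholding and object growing for bright-
# point candidates, loosely following the steps described in the abstract.
# Thresholds and structuring elements are placeholders, not the paper's values.
import numpy as np
from scipy import ndimage

def detect_candidates(image, lane_frac=0.9, bright_frac=1.1):
    mean = image.mean()
    lanes = image < lane_frac * mean          # dark intergranular lanes
    bright = image > bright_frac * mean       # bright pixels
    # Keep bright pixels embedded in the lanes (candidate MBPs) rather than
    # bright pixels inside granules.
    lane_neighbourhood = ndimage.binary_dilation(lanes, iterations=2)
    candidates = bright & lane_neighbourhood
    # "Object growing": dilate candidates and re-intersect with bright pixels
    # to recover pixels lost when mapping the lanes.
    grown = ndimage.binary_dilation(candidates, iterations=1) & bright
    labels, n = ndimage.label(grown)
    return labels, n
```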
Abstract:
To date, the processing of wildlife location data has relied on a diversity of software and file formats. Data management and the subsequent spatial and statistical analyses were undertaken in multiple steps, involving many time-consuming import/export phases. Recent technological advancements in tracking systems have made large, continuous, high-frequency datasets of wildlife behavioral data available, such as those derived from the global positioning system (GPS) and other animal-attached sensor devices. These data can be further complemented by a wide range of other information about the animals’ environment. Management of these large and diverse datasets for modelling animal behaviour and ecology can prove challenging, slowing down analysis and increasing the probability of mistakes in data handling. We address these issues by critically evaluating the requirements for good management of GPS data for wildlife biology. We highlight that dedicated data management tools and expertise are needed. We explore current research in wildlife data management. We suggest a general direction of development, based on a modular software architecture with a spatial database at its core, where interoperability, data model design and integration with remote-sensing data sources play an important role in successful GPS data handling.
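The modular, database-centred idea can be illustrated with a minimal Python sketch; a real deployment would use a spatial database such as PostGIS, and the table and column names here are invented for the example.

```python
# Illustrative sketch: a shared table of GPS fixes at the core of a modular
# workflow. A production system would likely use a spatial database such as
# PostGIS; SQLite here just keeps the example self-contained.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE gps_fix (
        animal_id   TEXT,
        acquired_at TEXT,       -- ISO 8601 UTC timestamp
        lon REAL, lat REAL,     -- WGS84 coordinates
        dop REAL                -- dilution of precision, for quality filtering
    )""")
conn.executemany(
    "INSERT INTO gps_fix VALUES (?, ?, ?, ?, ?)",
    [("deer01", "2013-06-01T04:00:00Z", 11.04, 46.02, 2.1),
     ("deer01", "2013-06-01T05:00:00Z", 11.05, 46.03, 7.9)])

# One shared, queryable store: analysis modules filter by fix quality and a
# bounding box instead of re-importing files for each separate tool.
rows = conn.execute(
    "SELECT animal_id, acquired_at, lon, lat FROM gps_fix "
    "WHERE dop < 5 AND lon BETWEEN ? AND ? AND lat BETWEEN ? AND ?",
    (10.0, 12.0, 45.0, 47.0)).fetchall()
print(rows)
```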
Abstract:
We propose a complete application capable of tracking multiple objects in an environment monitored by multiple cameras. The system has been specially developed to be applied to sport games, and it has been evaluated in a real association-football stadium. Each target is tracked using a local importance-sampling particle filter in each camera, but the final estimation is made by combining information from the other cameras using a modified unscented Kalman filter algorithm. Multicamera integration enables us to compensate for bad measurements or occlusions in some cameras thanks to the views provided by the others. The final algorithm results in a more accurate system with a lower failure rate. © 2009 Society of Photo-Optical Instrumentation Engineers. [DOI: 10.1117/1.3114605]
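To illustrate why multicamera integration compensates for occlusions, the sketch below fuses per-camera Gaussian position estimates by inverse-covariance weighting; this simple information-fusion step stands in for the paper's modified unscented Kalman filter.

```python
# Illustrative sketch: combining per-camera position estimates into a single
# track. The paper fuses particle-filter outputs with a modified unscented
# Kalman filter; simple inverse-covariance (information) fusion stands in
# for that step here to show the principle.
import numpy as np

def fuse_estimates(means, covariances):
    """Fuse independent Gaussian estimates of the same 2D ground position."""
    info = np.zeros((2, 2))
    info_mean = np.zeros(2)
    for m, P in zip(means, covariances):
        P_inv = np.linalg.inv(P)
        info += P_inv
        info_mean += P_inv @ m
    P_fused = np.linalg.inv(info)
    return P_fused @ info_mean, P_fused

# Camera 2 is occluded (large covariance), so camera 1 dominates the fusion.
m1, P1 = np.array([10.0, 5.0]), np.diag([0.2, 0.2])
m2, P2 = np.array([12.0, 6.0]), np.diag([4.0, 4.0])
mean, cov = fuse_estimates([m1, m2], [P1, P2])
print(mean)  # close to camera 1's estimate
```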
Abstract:
In this paper, we introduce an efficient method for particle selection when tracking objects in complex scenes. First, we improve the proposal distribution of the tracking algorithm by including the current observation, reducing the cost of evaluating particles with very low likelihood. In addition, we use a partitioned sampling approach to decompose the dynamic state into several stages, which makes it possible to handle high-dimensional states without excessive computational cost. To represent the color distribution, the appearance of the tracked object is modelled by sampled pixels. Based on this representation, the probability of any observation is estimated using non-parametric techniques in color space. As a result, we obtain a Probability Density Image (PDI) in which each pixel value indicates its membership of the target color model. In this way, the evaluation of all particles is accelerated by computing the likelihood p(z|x) using the integral image of the PDI.
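The integral-image trick can be illustrated directly: once each pixel holds a probability from the PDI, the cumulative sums below let any particle's rectangular region be scored in constant time. The image size and particle rectangle are arbitrary stand-ins.

```python
# Illustrative sketch: once a probability density image (PDI) assigns each
# pixel a likelihood of belonging to the target's color model, an integral
# image lets every particle's region score be evaluated in O(1).
import numpy as np

def integral_image(pdi):
    # Zero-padded cumulative sums so window sums need no boundary checks.
    ii = np.zeros((pdi.shape[0] + 1, pdi.shape[1] + 1))
    ii[1:, 1:] = pdi.cumsum(axis=0).cumsum(axis=1)
    return ii

def region_score(ii, x, y, w, h):
    """Sum of PDI values inside a particle's rectangle, in O(1)."""
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

pdi = np.random.rand(240, 320)          # stand-in for a real PDI
ii = integral_image(pdi)
# Score many particles cheaply; the likelihood p(z|x) can then be made
# proportional to the normalized region score.
print(region_score(ii, x=100, y=50, w=32, h=48))
```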
Abstract:
Processor architectures have taken a turn towards many-core processors, which integrate multiple processing cores on a single chip to increase overall performance, and there are no signs that this trend will stop in the near future. Many-core processors are harder to program than multi-core and single-core processors due to the need to write parallel or concurrent programs with high degrees of parallelism. Moreover, many-cores have to operate in a mode of strong scaling because of memory bandwidth constraints. In strong scaling, increasingly fine-grained parallelism must be extracted in order to keep all processing cores busy.
Task dataflow programming models have a high potential to simplify parallel programming because they relieve the programmer of precisely identifying all inter-task dependences when writing programs. Instead, the task dataflow runtime system detects and enforces inter-task dependences during execution, based on a description of the memory each task accesses. The runtime constructs a task dataflow graph that captures all tasks and their dependences. Tasks are scheduled to execute in parallel taking into account the dependences specified in the task graph.
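The following Python sketch illustrates how such a runtime might derive edges from in/out/inout annotations; the `TaskGraph` API is invented for the example and ignores commutative and reduction arguments.

```python
# Illustrative sketch: deriving inter-task dependences from per-argument
# access annotations. The runtime API is invented for this example.
from collections import defaultdict

class TaskGraph:
    def __init__(self):
        self.last_writer = {}             # object -> task that last wrote it
        self.readers = defaultdict(list)  # object -> readers since that write
        self.edges = []                   # (predecessor, successor) pairs

    def add_task(self, name, args):
        for obj, mode in args:
            if mode in ("in", "inout"):
                # A read depends on the last writer of the object.
                if obj in self.last_writer:
                    self.edges.append((self.last_writer[obj], name))
            if mode in ("out", "inout"):
                # A write must wait for all readers of the previous version.
                for r in self.readers[obj]:
                    self.edges.append((r, name))
                self.readers[obj] = []
                self.last_writer[obj] = name
            if mode == "in":
                self.readers[obj].append(name)

g = TaskGraph()
g.add_task("t1", [("a", "out")])
g.add_task("t2", [("a", "in")])
g.add_task("t3", [("a", "inout")])
print(g.edges)   # [('t1', 't2'), ('t1', 't3'), ('t2', 't3')]
```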
Several papers report significant overheads for task dataflow systems, which severely limit the scalability and usability of such systems. In this paper we study efficient schemes to manage task graphs and analyze their scalability. We assume a programming model that supports input, output and in/out annotations on task arguments, as well as commutative in/out and reductions. We analyze the structure of task graphs and identify versions and generations as key concepts for efficient management of task graphs. Then, we present three schemes to manage task graphs building on graph representations, hypergraphs and lists. We also consider a fourth, edge-less scheme that synchronizes tasks using integers. Analysis using micro-benchmarks shows that the graph representation is not always scalable and that the edge-less scheme introduces the least overhead in nearly all situations.
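The flavor of the edge-less scheme can be sketched as follows: each object carries integer version counters, each task is stamped with the versions it must wait for, and no edges are stored. The published scheme is more general (handling readers, commutativity, and reductions); this sketch covers writers only.

```python
# Illustrative sketch of the edge-less idea: version counters instead of
# stored edges. Details of the published scheme differ; this only conveys
# the flavor, and handles write-after-write dependences only.
class Object:
    def __init__(self):
        self.next_version = 0   # versions handed out to writers
        self.done_version = 0   # versions whose writes have completed

class Task:
    def __init__(self, writes):
        # Stamp: for each object written, the version this task must wait for.
        self.stamps = [(obj, obj.next_version) for obj in writes]
        for obj in writes:
            obj.next_version += 1

    def ready(self):
        return all(obj.done_version >= v for obj, v in self.stamps)

    def finish(self):
        for obj, v in self.stamps:
            obj.done_version = v + 1

a = Object()
t1, t2 = Task([a]), Task([a])
print(t1.ready(), t2.ready())   # True False: t2 waits on a's version counter
t1.finish()
print(t2.ready())               # True
```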
Abstract:
There is a perception amongst some of those learning computer programming that the principles of object-oriented programming (where behaviour is often encapsulated across multiple class files) can be difficult to grasp, especially when taught through a traditional, didactic ‘talk-and-chalk’ method or in a lecture-based environment.
We propose a non-traditional teaching method, developed for a government-funded teacher-training project delivered by Queen’s University, which we call bigCode. In this scenario, learners are provided with many printed, poster-sized fragments of code (in this case either Java or C#). The learners sit on the floor in groups and assemble these fragments into the many classes which make up an object-oriented program.
Early trials indicate that bigCode is an effective method for teaching object-orientation. The requirement to physically organise the code fragments imitates closely the thought processes of a good software developer when developing object-oriented code.
Furthermore, beyond teaching the principles involved in object-orientation, bigCode is also an extremely useful technique for teaching learners the organisation and structure of individual classes in Java or C# (as well as the organisation of procedural code). The mechanics of organising fragments of code into complete, correct computer programs give the users first-hand practice of this important skill, and as a result they subsequently find it much easier to develop well-structured code on a computer.
Yet, open questions remain. Is bigCode successful only because we have unknowingly predominantly targeted kinesthetic learners? Is bigCode also an effective teaching approach for other forms of learners, such as visual learners? How scalable is bigCode: in its current form can it be used with large class sizes, or outside the classroom?
Abstract:
We present the Coordinated Synoptic Investigation of NGC 2264, a continuous 30 day multi-wavelength photometric monitoring campaign on more than 1000 young cluster members using 16 telescopes. The unprecedented combination of multi-wavelength, high-precision, high-cadence, and long-duration data opens a new window into the time domain behavior of young stellar objects. Here we provide an overview of the observations, focusing on results from Spitzer and CoRoT. The highlight of this work is detailed analysis of 162 classical T Tauri stars for which we can probe optical and mid-infrared flux variations to 1% amplitudes and sub-hour timescales. We present a morphological variability census and then use metrics of periodicity, stochasticity, and symmetry to statistically separate the light curves into seven distinct classes, which we suggest represent different physical processes and geometric effects. We provide distributions of the characteristic timescales and amplitudes and assess the fractional representation within each class. The largest category (>20%) are optical "dippers" with discrete fading events lasting ~1-5 days. The degree of correlation between the optical and infrared light curves is positive but weak; notably, the independently assigned optical and infrared morphology classes tend to be different for the same object. Assessment of flux variation behavior with respect to (circum)stellar properties reveals correlations of variability parameters with Hα emission and with effective temperature. Overall, our results point to multiple origins of young star variability, including circumstellar obscuration events, hot spots on the star and/or disk, accretion bursts, and rapid structural changes in the inner disk. Based on data from the Spitzer and CoRoT missions. The CoRoT space mission was developed and is operated by the French space agency CNES, with participation of ESA's RSSD and Science Programmes, Austria, Belgium, Brazil, Germany, and Spain.
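As an illustration of the kind of symmetry metric such a census can use, the sketch below computes a simple flux-asymmetry statistic that is positive for fading-dominated ("dipper") light curves; the definition here is illustrative and not necessarily the one adopted in the campaign.

```python
# Illustrative sketch: a simple flux-asymmetry statistic of the kind used to
# separate fading-dominated light curves ("dippers") from burst-dominated
# ones. The exact definition is illustrative, not the campaign's metric.
import numpy as np

def asymmetry(flux, frac=0.1):
    """Positive: faint-biased (dips); negative: bright-biased (bursts)."""
    flux = np.sort(np.asarray(flux, dtype=float))
    k = max(1, int(frac * len(flux)))
    extremes = np.concatenate([flux[:k], flux[-k:]])
    return (np.median(flux) - extremes.mean()) / flux.std()

rng = np.random.default_rng(0)
quiet = rng.normal(1.0, 0.01, 500)          # symmetric noise: metric ~ 0
dips = quiet.copy()
dips[::50] -= 0.2                           # discrete fading events
print(asymmetry(quiet), asymmetry(dips))    # ~0 versus clearly positive
```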
Abstract:
In this work, we propose a biologically inspired appearance model for robust visual tracking. Motivated in part by the success of the hierarchical organization of the primary visual cortex (area V1), we establish an architecture consisting of five layers: whitening, rectification, normalization, coding and pooling. The first three layers stem from the models developed for object recognition. In this paper, our attention focuses on the coding and pooling layers. In particular, we use a discriminative sparse coding method in the coding layer along with spatial pyramid representation in the pooling layer, which makes it easier to distinguish the target to be tracked from its background in the presence of appearance variations. An extensive experimental study shows that the proposed method has higher tracking accuracy than several state-of-the-art trackers.
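The coding and pooling stages can be sketched in a few lines; here a soft-thresholded projection stands in for the paper's discriminative sparse coding, and a two-level pyramid stands in for its spatial pyramid representation.

```python
# Illustrative sketch of the coding + pooling stages: patches are encoded
# against a dictionary and the codes are max-pooled over a spatial pyramid.
# A soft-thresholded projection stands in for true (discriminative) sparse
# coding to keep the example short.
import numpy as np

def encode(patches, dictionary, sparsity=0.1):
    # patches: (n, d); dictionary: (k, d) with unit-norm atoms.
    codes = patches @ dictionary.T                     # correlation with atoms
    return np.sign(codes) * np.maximum(np.abs(codes) - sparsity, 0.0)

def spatial_pyramid_pool(codes, positions, grid_levels=(1, 2)):
    # Max-pool the codes over 1x1 and 2x2 spatial grids, then concatenate.
    feats = []
    for g in grid_levels:
        cells = np.clip((positions * g).astype(int), 0, g - 1)
        for cy in range(g):
            for cx in range(g):
                mask = (cells[:, 0] == cy) & (cells[:, 1] == cx)
                pooled = (codes[mask].max(axis=0) if mask.any()
                          else np.zeros(codes.shape[1]))
                feats.append(pooled)
    return np.concatenate(feats)

rng = np.random.default_rng(1)
D = rng.normal(size=(64, 36))
D /= np.linalg.norm(D, axis=1, keepdims=True)
patches = rng.normal(size=(100, 36))     # 6x6 patches from a candidate region
positions = rng.uniform(size=(100, 2))   # normalized (y, x) patch locations
feature = spatial_pyramid_pool(encode(patches, D), positions)
print(feature.shape)                     # (1 + 4) cells * 64 atoms = (320,)
```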