989 results for automatic virtual camera


Relevance: 20.00%

Abstract:

Prawns are a substantial Australian resource but presently are processed in a very labour-intensive manner. A prototype system has been developed for automatically grading and packing prawns into single-layer 'consumer packs' in which each prawn is approximately straight and has the same orientation. The novel technology includes a machine vision system that has been specially programmed to calculate relevant parameters at high speed and a gripper mechanism that can acquire, straighten and place prawns of various sizes. The system can be implemented on board a trawler or in an onshore processing facility. © 1993.
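
The abstract does not spell out the vision computations; purely as an illustration of the kind of high-speed parameter extraction it refers to, the position and orientation of a segmented silhouette (a hypothetical binary prawn mask here) can be obtained from second-order image moments:

```python
import numpy as np

def silhouette_pose(mask: np.ndarray):
    """Return (row, col, angle) of a non-empty binary silhouette.

    The angle (radians, measured from the image x-axis) is the principal-axis
    orientation derived from second-order central image moments.
    """
    ys, xs = np.nonzero(mask)              # pixel coordinates of the object
    cy, cx = ys.mean(), xs.mean()          # centroid
    dy, dx = ys - cy, xs - cx
    mu20 = (dx * dx).mean()
    mu02 = (dy * dy).mean()
    mu11 = (dx * dy).mean()
    angle = 0.5 * np.arctan2(2.0 * mu11, mu20 - mu02)
    return cy, cx, angle
```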

Relevance: 20.00%

Abstract:

In this paper we describe the recent development of a low-bandwidth wireless camera sensor network. We propose a simple, yet effective, network architecture which allows multiple cameras to be connected to the network and synchronize their communication schedules. Image compression of greater than 90% is performed at each node running on a local DSP coprocessor, resulting in nodes using 1/8th the energy compared to streaming uncompressed images. We briefly introduce the Fleck wireless node and the DSP/camera sensor, and then outline the network architecture and compression algorithm. The system is able to stream color QVGA images over the network to a base station at up to 2 frames per second. © 2007 IEEE.
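
A back-of-envelope check of the reported bandwidth saving, assuming a 16-bit-per-pixel colour QVGA frame and radio energy roughly proportional to bytes transmitted (the gap between a 10% radio payload and the reported 1/8 total node energy is plausibly the DSP's own compression cost; both assumptions are ours, not the paper's):

```python
# Rough arithmetic only; frame format and energy model are assumptions.
QVGA_PIXELS = 320 * 240
BYTES_PER_PIXEL = 2                       # assumed RGB565 framing
raw_bytes = QVGA_PIXELS * BYTES_PER_PIXEL

compression = 0.90                        # ">90%" from the abstract
sent_bytes = raw_bytes * (1 - compression)

print(f"raw frame:        {raw_bytes / 1024:.0f} KiB")
print(f"compressed frame: {sent_bytes / 1024:.1f} KiB")
print(f"radio payload reduced to {sent_bytes / raw_bytes:.0%} of the raw stream")
```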

Relevance: 20.00%

Abstract:

We describe a moving virtual fence algorithm for herding cows. Each animal in the herd is given a smart collar consisting of a GPS receiver, a PDA, wireless networking and a sound amplifier. Using the GPS, the animal's location is determined relative to the fence boundary. When the animal approaches the perimeter, it is presented with a sound stimulus that prompts it to move away. We have developed a virtual fence control algorithm for moving a herd, and present simulation results and data from experiments with eight cows equipped with smart collars.
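
As a minimal sketch of the perimeter test implied above, assuming for simplicity a north-south fence line at a fixed longitude and an equirectangular distance approximation (the paper's moving-fence geometry is richer than this):

```python
import math

EARTH_RADIUS_M = 6_371_000.0

def distance_to_fence_m(lon_deg: float, lat_deg: float, fence_lon_deg: float) -> float:
    """Signed east-west distance (m) from a GPS fix to a north-south virtual
    fence line at fence_lon_deg (illustrative geometry only)."""
    dlon = math.radians(lon_deg - fence_lon_deg)
    return dlon * EARTH_RADIUS_M * math.cos(math.radians(lat_deg))

def stimulus_required(lon_deg: float, lat_deg: float,
                      fence_lon_deg: float, buffer_m: float = 10.0) -> bool:
    """Play the sound cue once the animal is within buffer_m of the fence."""
    return abs(distance_to_fence_m(lon_deg, lat_deg, fence_lon_deg)) < buffer_m
```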

Relevance: 20.00%

Abstract:

This paper describes a biologically inspired approach to vision-only simultaneous localization and mapping (SLAM) on ground-based platforms. The core SLAM system, dubbed RatSLAM, is based on computational models of the rodent hippocampus, and is coupled with a lightweight vision system that provides odometry and appearance information. RatSLAM builds a map in an online manner, driving loop closure and relocalization through sequences of familiar visual scenes. Visual ambiguity is managed by maintaining multiple competing vehicle pose estimates, while cumulative errors in odometry are corrected after loop closure by a map correction algorithm. We demonstrate the mapping performance of the system on a 66 km car journey through a complex suburban road network. Using only a web camera operating at 10 Hz, RatSLAM generates a coherent map of the entire environment at real-time speed, correctly closing more than 51 loops of up to 5 km in length.
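
The abstract does not detail the map correction algorithm; the sketch below shows the general kind of iterative graph relaxation such a step can use, where linked map nodes are nudged toward consistency with their stored odometric offsets. It is illustrative only, not the published RatSLAM experience-map update.

```python
import numpy as np

def relax_map(positions, links, alpha=0.5, iterations=100):
    """Generic graph-relaxation sketch of a post-loop-closure correction.

    positions : (N, 2) array-like of x, y estimates
    links     : iterable of (i, j, offset_xy) with offset measured from node i to node j
    """
    positions = np.asarray(positions, dtype=float).copy()
    for _ in range(iterations):
        for i, j, offset in links:
            # How far the current layout disagrees with the stored odometry.
            error = (positions[j] - positions[i]) - np.asarray(offset, dtype=float)
            positions[i] += 0.5 * alpha * error
            positions[j] -= 0.5 * alpha * error
    return positions
```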

Relevance: 20.00%

Abstract:

The Simultaneous Localisation And Mapping (SLAM) problem is one of the major challenges in mobile robotics. Probabilistic techniques using high-end range-finding devices are well established in the field, but recent work has investigated vision-only approaches. We present an alternative to the leading existing techniques, which extracts approximate rotational and translational velocity information from a vehicle-mounted consumer camera without tracking landmarks. When coupled with an existing SLAM system, the vision module is able to map a 45 m indoor loop and a 1.6 km outdoor road loop without any parameter or system adjustment between tests. The work serves as a promising pilot study into ground-based vision-only SLAM with minimal geometric interpretation of the environment.
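
As an illustration of landmark-free visual odometry of this flavour (an assumption about the approach, not a description of the paper's method), a yaw change can be estimated by finding the horizontal shift that best aligns one-dimensional column-intensity profiles from consecutive frames:

```python
import numpy as np

def rotation_from_profiles(prev_profile, curr_profile, deg_per_pixel):
    """Estimate yaw change as the horizontal shift that minimises the mean
    absolute difference between two equal-length 1-D intensity profiles."""
    best_shift, best_err = 0, np.inf
    max_shift = len(prev_profile) // 4
    for s in range(-max_shift, max_shift + 1):
        a = prev_profile[max(0, s):len(prev_profile) + min(0, s)]
        b = curr_profile[max(0, -s):len(curr_profile) + min(0, -s)]
        err = np.mean(np.abs(a.astype(float) - b.astype(float)))
        if err < best_err:
            best_shift, best_err = s, err
    return best_shift * deg_per_pixel
```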

Relevance: 20.00%

Abstract:

Calibration of movement tracking systems is a difficult problem faced by both animals and robots. The ability to continuously calibrate changing systems is essential for animals as they grow or are injured, and highly desirable for robot control or mapping systems due to the possibility of component wear, modification, damage and their deployment on varied robotic platforms. In this paper we use inspiration from the animal head direction tracking system to implement a self-calibrating, neurally based robot orientation tracking system. Using real robot data we demonstrate how the system can remove tracking drift and learn to consistently track rotation over a large range of velocities. The neural tracking system provides the first steps towards a fully neural SLAM system with improved practical applicability through self-tuning and adaptation.
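
A minimal sketch of the self-calibration idea, assuming a single scalar gain between the raw angular-rate input and the tracked heading that is adapted online against an external reference such as visual odometry (the paper's neural implementation is far richer than this):

```python
def update_gain(gain: float, raw_rate: float, reference_rate: float,
                lr: float = 0.01) -> float:
    """One LMS-style calibration step: nudge the rate-to-heading gain so that
    integrated rotation matches the external reference."""
    if abs(raw_rate) > 1e-6:                       # only adapt while turning
        error = reference_rate - gain * raw_rate
        gain += lr * error * raw_rate
    return gain
```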

Relevance: 20.00%

Abstract:

Simultaneous Localization And Mapping (SLAM) is one of the major challenges in mobile robotics. Probabilistic techniques using high-end range-finding devices are well established in the field, but recent work has investigated vision-only approaches. This paper presents a method for generating approximate rotational and translational velocity information from a single vehicle-mounted consumer camera, without the computationally expensive process of tracking landmarks. The method is tested by employing it to provide the odometric and visual information for the RatSLAM system while mapping a complex suburban road network. RatSLAM generates a coherent map of the environment during an 18 km trip through suburban traffic at speeds of up to 60 km/h. This result demonstrates the potential of ground-based vision-only SLAM using low-cost sensing and computational hardware.
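
Complementing the rotation estimate, a crude translational speed cue of the kind such landmark-free methods use can be taken from the residual frame-to-frame profile change once the rotation shift is removed; the gain below is an assumed, empirically tuned constant, not a value from the paper:

```python
import numpy as np

def translational_speed(prev_profile, curr_profile, gain, rotation_shift=0):
    """Residual image change scaled into a speed estimate. np.roll's
    wrap-around is a simplification of removing the rotation component."""
    shifted = np.roll(curr_profile, -rotation_shift)
    change = np.mean(np.abs(shifted.astype(float) - prev_profile.astype(float)))
    return gain * change
```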

Relevance: 20.00%

Abstract:

Acoustically, car cabins are extremely noisy and, as a consequence, existing audio-only speech recognition systems perform poorly for voice-based control of vehicle functions such as GPS-based navigation. Audio-only speech recognition systems fail to make use of the visual modality of speech (e.g. lip movements). As the visual modality is immune to acoustic noise, using this visual information in conjunction with an audio-only speech recognition system has the potential to improve recognition accuracy. The field of recognising speech using both auditory and visual inputs is known as Audio-Visual Speech Recognition (AVSR). Research in AVSR has been ongoing for the past twenty-five years with notable progress, yet practical deployment of AVSR systems in real-world applications has not emerged, mainly because most research to date has neglected variabilities in the visual domain, such as illumination and viewpoint, in the design of the visual front end. In this paper we present an AVSR system for a real-world car environment using the AVICAR database [1], a publicly available in-car database, and we show that using visual speech in conjunction with the audio modality improves the robustness and effectiveness of voice-only recognition systems in car cabin environments.
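
One common way to exploit the noise-immune visual stream is late (decision-level) fusion of per-class scores, with the audio weight tied to an estimate of acoustic SNR. The weighting rule below is an illustrative assumption, not the scheme used in the paper:

```python
import numpy as np

def fuse_scores(audio_log_probs, visual_log_probs, snr_db):
    """Weighted late fusion: trust audio at high SNR, shift towards the
    visual classifier as the cabin gets noisier. Returns (best_class, fused)."""
    alpha = np.clip((snr_db + 5.0) / 25.0, 0.0, 1.0)   # 1.0 = audio only
    fused = alpha * np.asarray(audio_log_probs) + (1.0 - alpha) * np.asarray(visual_log_probs)
    return int(np.argmax(fused)), fused
```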

Relevance: 20.00%

Abstract:

This paper presents an automated system for 3D assembly of tissue engineering (TE) scaffolds made from biocompatible microscopic building blocks with relatively large fabrication error. It focuses on the pin-into-hole force control developed for this demanding microassembly task. A beam-like gripper with integrated force sensing at 3 mN resolution over a 500 mN measuring range is designed and used to implement an admittance force-controlled insertion on commercial precision stages. Vision-based alignment followed by insertion is complemented by a haptic exploration strategy using force and position information. The system demonstrates fully automated construction of TE scaffolds from 50 microparts whose dimensional error is larger than 5%.
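
A minimal sketch of one admittance-control update for such a force-guided insertion, mapping the measured axial force to a small stage step; all numeric values are illustrative placeholders, not the paper's parameters:

```python
def admittance_insertion_step(measured_force_mN: float,
                              target_force_mN: float = 50.0,
                              damping_mN_s_per_um: float = 5.0,
                              dt_s: float = 0.01,
                              max_step_um: float = 2.0) -> float:
    """Turn the axial force error into a velocity command, integrate it over
    one control period into a stage step, and clamp the step size."""
    velocity_um_per_s = (target_force_mN - measured_force_mN) / damping_mN_s_per_um
    step_um = velocity_um_per_s * dt_s
    return max(-max_step_um, min(max_step_um, step_um))
```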

Relevance: 20.00%

Abstract:

A pragmatic method is proposed for assessing the accuracy and precision of a given processing pipeline for converting computed tomography (CT) image data of bones into representative three-dimensional (3D) models of bone shapes. The method is based on coprocessing a control object with known geometry, which enables assessment of the quality of the resulting 3D models. At three stages of the conversion process, distance measurements were obtained and statistically evaluated. For this study, 31 CT datasets were processed. The final 3D model of the control object showed an average deviation from reference values of −1.07 ± 0.52 mm standard deviation (SD) for edge distances and −0.647 ± 0.43 mm SD for parallel side distances of the control object. Coprocessing a reference object enables assessment of the accuracy and precision of a given processing pipeline for creating CT-based 3D bone models and is suitable for detecting most systematic or human errors when processing a CT scan. Typical errors are about the same size as the scan resolution.
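
The reported figures are straightforward deviation statistics; as a hypothetical helper (not the study's actual pipeline), a mean deviation and its standard deviation would be computed from measured versus reference distances like this:

```python
import numpy as np

def deviation_stats(measured_mm, reference_mm):
    """Mean deviation and sample standard deviation of measured control-object
    distances against their known reference values (e.g. -1.07 +/- 0.52 mm)."""
    dev = np.asarray(measured_mm, dtype=float) - np.asarray(reference_mm, dtype=float)
    return dev.mean(), dev.std(ddof=1)
```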

Relevance: 20.00%

Abstract:

The Java programming language has potentially significant advantages for wireless sensor nodes, but there is currently no feature-rich, open source virtual machine available. In this paper we present Darjeeling, a system comprising offline tools and a memory-efficient runtime. The offline post-compiler tool analyzes, links and consolidates Java class files into loadable modules. The runtime implements a modified Java VM that supports inheritance, threads, garbage collection and loadable modules, and is designed specifically to operate in constrained execution environments such as wireless sensor network nodes. We have demonstrated Java running on AVR128 and MSP430 microcontrollers at speeds of up to 70,000 JVM instructions per second.

Relevance: 20.00%

Abstract:

This paper proposes the use of optical flow from a moving robot to provide force feedback to an operator's joystick, facilitating collision-free teleoperation. Optical flow is measured by a pair of wide-angle cameras on board the vehicle and used to generate a virtual environmental force that is reflected to the user through the joystick, as well as feeding back into the control of the vehicle. We show that the proposed control is dissipative and prevents the vehicle from colliding with the environment, while providing the operator with a natural feel for the remote environment. Experimental results are provided on the InsectBot holonomic vehicle platform.
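
As a toy illustration of the flow-to-force mapping described above (not the paper's controller, which additionally carries a dissipativity argument), a lateral virtual force can push the joystick away from the side with the larger average optic flow:

```python
def virtual_force_from_flow(left_flow_mag: float, right_flow_mag: float,
                            gain: float = 0.8) -> float:
    """Lateral virtual force reflected to the joystick: positive pushes the
    stick to the right, away from the side with the larger average optic flow
    (the nearer obstacle at a given speed). Illustrative mapping only."""
    return gain * (left_flow_mag - right_flow_mag)
```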

Relevance: 20.00%

Abstract:

The Java programming language enjoys widespread popularity on platforms ranging from servers to mobile phones. While efforts have been made to run Java on microcontroller platforms, there is currently no feature-rich, open source virtual machine available. In this paper we present Darjeeling, a system comprising offline tools and a memory-efficient runtime. The offline post-compiler tool analyzes, links and consolidates Java class files into loadable modules. The runtime implements a modified Java VM that supports multithreading and is designed specifically to operate in constrained execution environments such as wireless sensor network nodes. Darjeeling improves upon existing work by supporting inheritance, threads, garbage collection, and loadable modules while keeping memory usage to a minimum. We have demonstrated Java running on AVR128 and MSP430 microcontrollers at speeds of up to 70,000 JVM instructions per second.
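
To make the role of such a constrained runtime concrete, the toy stack-machine loop below illustrates the kind of interpreter dispatch a memory-limited VM performs; the opcodes and the operand-stack limit are invented for this sketch and are not Darjeeling's instruction set:

```python
def run_bytecode(code, stack_limit=16):
    """Toy stack-machine loop: each opcode is a (name, operand) pair and the
    operand stack is capped to mimic a tight memory budget."""
    stack, pc = [], 0
    while pc < len(code):
        op, arg = code[pc]
        if op == "push":
            if len(stack) >= stack_limit:          # enforce the tiny fixed stack
                raise MemoryError("operand stack overflow")
            stack.append(arg)
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "ret":
            return stack.pop()
        pc += 1
    return None

# Example: run_bytecode([("push", 2), ("push", 3), ("add", None), ("ret", None)]) -> 5
```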

Relevance: 20.00%

Abstract:

Managing livestock movement in extensive systems has environmental and production benefits. Currently, permanent wire fencing is used to control cattle; this is both expensive and inflexible. Cattle are known to respond to auditory and visual cues, and we investigated whether these can be used to manipulate their behaviour. Twenty-five Belmont Red steers with a mean live weight of 270 kg were each randomly assigned to one of five treatments. Treatments consisted of a combination of cues (audio, tactile and visual stimuli) and a consequence (electrical stimulation): electrical stimulation alone, audio plus electrical stimulation, vibration plus electrical stimulation, light plus electrical stimulation, and an electrified electric fence (6 kV) plus electrical stimulation. Cue stimuli were administered for 3 s, followed immediately by electrical stimulation (the consequence) of 1 kV for 1 s. The experiment tested the operational efficacy of an on-animal control, or virtual fencing, system. A collar-halter device was designed to carry the electronics, batteries and equipment providing the audio, vibration, light and electrical stimuli of a prototype virtual fencing device. Cattle were allowed to travel along a 40 m alley towards a group of peers and feed while their rate of travel and response to the stimuli were recorded. The prototype virtual fencing system was successful in modifying the behaviour of the cattle. The rate of travel along the alley showed large variability in behavioural response to the tactile, visual and audible cues. The experiment demonstrated that virtual fencing has potential for controlling cattle in extensive grazing systems; however, larger numbers of cattle need to be tested to better understand the behavioural variance, and further controlled experiments are needed to quantify the interaction between cues, consequences and cattle learning.
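
A minimal sketch of the cue-then-consequence sequencing used in each trial (a 3 s cue followed immediately by 1 s of electrical stimulation); the callables stand in for the collar hardware interfaces, which the abstract does not specify:

```python
import time

def cue_then_consequence(start_cue, stop_cue, start_stimulation, stop_stimulation,
                         cue_s: float = 3.0, stimulation_s: float = 1.0) -> None:
    """Run one trial: cue (audio, vibration or light), then the consequence."""
    start_cue()
    time.sleep(cue_s)
    stop_cue()
    start_stimulation()
    time.sleep(stimulation_s)
    stop_stimulation()
```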

Relevance: 20.00%

Abstract:

This paper considers the question of designing a fully image-based visual servo control for a class of dynamic systems. The work is motivated by the ongoing development of image-based visual servo control of small aerial robotic vehicles. The kinematics and dynamics of a rigid-body dynamical system (such as a vehicle airframe) maneuvering over a flat target plane with observable features are expressed in terms of an unnormalized spherical centroid and an optic flow measurement. The image-plane dynamics with respect to force input are dependent on the height of the camera above the target plane. This dependence is compensated by introducing virtual height dynamics and adaptive estimation in the proposed control. A fully nonlinear adaptive control design is provided that ensures asymptotic stability of the closed-loop system for all feasible initial conditions. The choice of control gains is based on an analysis of the asymptotic dynamics of the system. Results from a realistic simulation are presented that demonstrate the performance of the closed-loop system. To the author's knowledge, this paper documents the first time that an image-based visual servo control has been proposed for a dynamic system using vision measurement for both position and velocity.
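
For context, the sketch below records the unnormalized spherical centroid feature and its kinematics as they are commonly written in this line of work; the notation is assumed and the paper's exact adaptive control law may differ:

```latex
% Assumed notation: p_i are observed image points, P_i the corresponding 3-D
% target points, \Omega and V the body-frame angular and linear velocities.
\begin{align}
  q &= \sum_{i=1}^{n} \eta_i ,
  \qquad
  \eta_i = \frac{p_i}{\lVert p_i \rVert} , \\
  \dot{q} &= -\,\Omega \times q \;-\; Q\,V ,
  \qquad
  Q = \sum_{i=1}^{n} \frac{1}{\lVert P_i \rVert}
      \bigl( I_3 - \eta_i \eta_i^{\top} \bigr) .
\end{align}
```

Because the matrix Q scales inversely with the camera's range to the flat target, the image-plane dynamics depend on height, which is why the control described above introduces virtual height dynamics and adaptive estimation to compensate.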