970 results for Projector-Camera system
Abstract:
In this paper we present an approach to building a prototype model of a first-responder localization system intended for disaster relief operations. The system monitors and tracks the positions of first responders in indoor environments, where GPS is not available. Each member of the first-responder team is equipped with two zero-velocity-update-aided inertial navigation systems, one on each foot, a camera mounted on a helmet, and a processing platform strapped around the waist, which fuses the data from the different sensors. The fusion algorithm runs in real time on the processing platform. The video is also processed using the DSP core of the computing machine. The processed data, consisting of position, velocity, and heading information along with video streams, is transmitted to the command-and-control system via a local infrastructure WiFi network. A centralized cooperative localization algorithm, which combines range measurements from Ultra-Wideband inter-agent ranging devices with the position estimates and uncertainties of each first responder, has also been implemented.
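The zero-velocity updates mentioned above hinge on detecting when a foot is momentarily at rest. Below is a minimal stance-phase detector sketched in Python; the window length and variance threshold are illustrative assumptions, not values from the paper.

```python
from statistics import pvariance

G = 9.81  # nominal gravity magnitude, m/s^2

def stance_phase_flags(accel_norms, window=5, var_threshold=0.05):
    """Flag samples where the accelerometer magnitude is steady and close
    to gravity over a sliding window -- the foot is assumed at rest there,
    so the INS velocity can be reset to zero (a zero-velocity update)."""
    flags = [False] * len(accel_norms)
    for i in range(len(accel_norms) - window + 1):
        w = accel_norms[i:i + window]
        if pvariance(w) < var_threshold and abs(sum(w) / window - G) < 0.2:
            for j in range(i, i + window):
                flags[j] = True
    return flags
```

During flagged samples the navigation filter would feed a zero-velocity pseudo-measurement to its Kalman update; the swing phase between flags is integrated normally.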
Abstract:
This thesis describes investigations of two classes of laboratory plasmas with rather different properties: partially ionized low pressure radiofrequency (RF) discharges, and fully ionized high density magnetohydrodynamically (MHD)-driven jets. An RF pre-ionization system was developed to enable neutral gas breakdown at lower pressures and create hotter, faster jets in the Caltech MHD-Driven Jet Experiment. The RF plasma source used a custom pulsed 3 kW 13.56 MHz RF power amplifier that was powered by AA batteries, allowing it to safely float at 4-6 kV with the cathode of the jet experiment. The argon RF discharge equilibrium and transport properties were analyzed, and novel jet dynamics were observed.
Although the RF plasma source was conceived as a wave-heated helicon source, scaling measurements and numerical modeling showed that inductive coupling was the dominant energy input mechanism. A one-dimensional time-dependent fluid model was developed to quantitatively explain the expansion of the pre-ionized plasma into the jet experiment chamber. The plasma transitioned from an ionizing phase with depressed neutral emission to a recombining phase with enhanced emission during the course of the experiment, causing fast camera images to be a poor indicator of the density distribution. Under certain conditions, the total visible and infrared brightness and the downstream ion density both increased after the RF power was turned off. The time-dependent emission patterns were used for an indirect measurement of the neutral gas pressure.
The low-mass jets formed with the aid of the pre-ionization system were extremely narrow and collimated near the electrodes, with peak density exceeding that of jets created without pre-ionization. The initial neutral gas distribution prior to plasma breakdown was found to be critical in determining the ultimate jet structure. The visible radius of the dense central jet column was several times narrower than the axial current channel radius, suggesting that the outer portion of the jet must have been force free, with the current parallel to the magnetic field. The studies of non-equilibrium flows and plasma self-organization being carried out at Caltech are relevant to astrophysical jets and fusion energy research.
Abstract:
We describe the application of two types of stereo camera systems in fisheries research, including the design, calibration, analysis techniques, and precision of the data obtained with these systems. The first is a stereo video system deployed with a quick-responding winch and a live feed, providing species- and size-composition data adequate to produce acoustically based biomass estimates of rockfish. This system was tested on the eastern Bering Sea slope, where rockfish were measured. Rockfish sizes were similar to those sampled with a bottom trawl, and the relative error in multiple measurements of the same rockfish across multiple still-frame images was small. Measurement errors of up to 5.5% were found on a calibration target of known size. The second system consisted of a pair of still-image digital cameras mounted inside a midwater trawl. Processing of the stereo images allowed fish length, fish orientation in relation to the camera platform, and relative distance of the fish to the trawl netting to be determined. The video system was useful for surveying fish in Alaska, but it could also be used broadly in other situations where species- or size-composition information is difficult to obtain. Likewise, the still-image system could be used in fisheries research to obtain data on the size, position, and orientation of fish.
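Fish lengths from a calibrated stereo pair are typically obtained by triangulating the snout and tail points and taking their Euclidean distance. The sketch below assumes an idealized rectified pinhole pair (focal length in pixels, known baseline); the actual systems in the paper use full calibration models including lens distortion.

```python
def triangulate(f, baseline, xl, xr, y):
    """Rectified pinhole stereo: disparity (xl - xr) gives depth Z = f*B/d,
    then X and Y follow from the pinhole projection equations."""
    d = xl - xr
    Z = f * baseline / d
    return xl * Z / f, y * Z / f, Z

def fish_length(f, baseline, snout, tail):
    """snout/tail are (xl, xr, y) image measurements of the two endpoints."""
    p = triangulate(f, baseline, *snout)
    q = triangulate(f, baseline, *tail)
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
```

Repeating the two endpoint measurements across several still frames and comparing the resulting lengths is one way to estimate the measurement error quoted above.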
A holographic projection system with an electrically tuning and continuously adjustable optical zoom
Abstract:
A holographic projection system with optical zoom is demonstrated. By using a combination of an LC lens and an encoded Fresnel lens on the LCoS panel, we can control the zoom of a holographic projector. The magnification can be electrically adjusted by tuning the focal length of the combination of the two lenses. The zoom ratio of the holographic projection system reaches 3.7:1 with a continuous zoom function. The optical zoom function decreases the complexity of the holographic projection system.
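The magnification control described above follows from thin-lens combination: the powers of the LC lens and the encoded Fresnel lens add (minus a separation term), so tuning the LC focal length shifts the combined focal length. A minimal sketch under the thin-lens approximation; the specific focal lengths below are illustrative, not values from the paper.

```python
def combined_focal_length(f1, f2, d=0.0):
    """Two thin lenses separated by distance d (all in meters):
    1/f = 1/f1 + 1/f2 - d/(f1*f2)."""
    power = 1.0 / f1 + 1.0 / f2 - d / (f1 * f2)
    return 1.0 / power

def zoom_ratio(f_shortest, f_longest):
    """Optical zoom ratio between the extremes of the tunable range."""
    return f_longest / f_shortest
```

For a zoom ratio of 3.7:1 as reported, the tunable combination must span focal lengths differing by that same factor.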
Abstract:
We present a system for augmenting depth camera output using multispectral photometric stereo. The technique is demonstrated using a Kinect sensor and is able to produce geometry independently for each frame. Improved reconstruction is demonstrated using the Kinect's inbuilt RGB camera and further improvements are achieved by introducing an additional high resolution camera. As well as qualitative improvements in reconstruction a quantitative reduction in temporal noise is shown. As part of the system an approach is presented for relaxing the assumption of multispectral photometric stereo that scenes are of constant chromaticity to the assumption that scenes contain multiple piecewise constant chromaticities.
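Multispectral photometric stereo recovers a per-pixel normal from a single frame: with three differently colored lights from known directions, the three channel intensities give three linear equations L n = I in the scaled normal. A minimal per-pixel sketch via Cramer's rule, assuming calibrated light directions and the constant-chromaticity case.

```python
def det3(m):
    """Determinant of a 3x3 matrix given as nested lists."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def surface_normal(L, I):
    """Solve L @ b = I for b = albedo * normal (Cramer's rule), where the
    rows of L are the three light directions and I holds the three channel
    intensities; split b into a unit normal and a scalar albedo."""
    d = det3(L)
    b = []
    for col in range(3):
        M = [row[:] for row in L]
        for r in range(3):
            M[r][col] = I[r]
        b.append(det3(M) / d)
    albedo = sum(x * x for x in b) ** 0.5
    return tuple(x / albedo for x in b), albedo
```

Because each frame is solved independently, geometry is produced per frame, which is what allows the temporal-noise comparison against the raw depth output.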
Abstract:
A tunable liquid crystal (LC) lens designed for a holographic projection system is demonstrated. By using a single patterned-electrode LC lens, a solid lens, and an encoded Fresnel lens on the LCoS panel, we can maintain the image size of the holographic projector across different wavelengths (λ = 674 nm, 532 nm, and 445 nm). The zoom ratio of the holographic projection system depends on the lens power of the solid lens and the tunable lens power of the LC lens. The optical zoom function helps to solve the image-size mismatch problem of the holographic projection system. © 2013 SPIE.
Abstract:
BACKGROUND: Despite the widespread use of sensors in engineering systems such as robots and automation systems, the common paradigm is a fixed sensor morphology tailored to a specific application. On the other hand, robotic systems are expected to operate in ever more uncertain environments. In coping with this challenge, it is worth noting that biological systems show the importance of suitable sensor morphology and active sensing capability for handling different kinds of sensing tasks with particular requirements. METHODOLOGY: This paper presents a robotic active sensing system that can adjust its sensor morphology in situ in order to sense different physical quantities with desirable sensing characteristics. The approach taken is to use a thermoplastic adhesive material, i.e., Hot Melt Adhesive (HMA). It will be shown that the thermoplastic and thermoadhesive nature of HMA enables the system to repeatedly fabricate, attach, and detach mechanical structures of various shapes and sizes to the robot end effector for sensing purposes. Via its active sensing capability, the robotic system uses the structure to physically probe an unknown target object with suitable motion and transduce the arising physical stimuli into information usable by a camera, its only built-in sensor. CONCLUSIONS/SIGNIFICANCE: The efficacy of the proposed system is verified on the basis of two results. First, it is confirmed that suitable sensor morphology and active sensing capability enable the system to sense different physical quantities, i.e., softness and temperature, with desirable sensing characteristics. Second, given the tasks of discriminating two visually indistinguishable objects with respect to softness and temperature, it is confirmed that the proposed robotic system is able to accomplish them autonomously. How the results motivate new research directions focusing on in situ adjustment of sensor morphology is also discussed.
Abstract:
The characteristics of several night imaging and display technologies for cars are introduced. Compared with current automotive night vision technologies, range-gated imaging can eliminate backscattered light and increase the SNR of the system. The theory of range-gated imaging is described, and a design for an automotive range-gated system is presented: the divergence angle of the laser is designed to change automatically, allowing overfilling of the camera field of view to effectively attenuate the laser when necessary. A safe range for the driver is calculated from the theoretical analysis. The observation distance of the designed system is about 500 m, which satisfies the requirement for a safe driving range.
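Range gating works by opening the camera shutter only during the round-trip delay window of the laser pulse, so backscatter from fog or rain in front of the target never reaches the sensor. A minimal timing helper, assuming nothing beyond the speed of light; the range values in the test are illustrative.

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def gate_timing(r_min, r_max):
    """Shutter open/close delays (seconds after the laser pulse) that
    image only the range slice between r_min and r_max, in meters."""
    return 2.0 * r_min / C, 2.0 * r_max / C
```

At 500 m the shutter must open roughly 3.3 µs after the pulse, which is why nanosecond-scale gating electronics are needed to select a thin range slice.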
Abstract:
A system for visual recognition is described, with implications for the general problem of representation of knowledge to assist control. The immediate objective is a computer system that will recognize objects in a visual scene, specifically hammers. The computer receives an array of light intensities from a device like a television camera. It is to locate and identify the hammer if one is present. The computer must produce from the numerical "sensory data" a symbolic description that constitutes its perception of the scene. Of primary concern is the control of the recognition process. Control decisions should be guided by the partial results obtained on the scene. If a hammer handle is observed this should suggest that the handle is part of a hammer and advise where to look for the hammer head. The particular knowledge that a handle has been found combines with general knowledge about hammers to influence the recognition process. This use of knowledge to direct control is denoted here by the term "active knowledge". A descriptive formalism is presented for visual knowledge which identifies the relationships relevant to the active use of the knowledge. A control structure is provided which can apply knowledge organized in this fashion actively to the processing of a given scene.
Abstract:
Poster is based on the following paper: C. Kwan and M. Betke. Camera Canvas: Image editing software for people with disabilities. In Proceedings of the 14th International Conference on Human Computer Interaction (HCI International 2011), Orlando, Florida, July 2011.
Abstract:
In many multi-camera vision systems the effect of camera locations on the task-specific quality of service is ignored. Researchers in Computational Geometry have proposed elegant solutions for some sensor location problem classes. Unfortunately, these solutions utilize unrealistic assumptions about the cameras' capabilities that make these algorithms unsuitable for many real-world computer vision applications: unlimited field of view, infinite depth of field, and/or infinite servo precision and speed. In this paper, the general camera placement problem is first defined with assumptions that are more consistent with the capabilities of real-world cameras. The region to be observed by cameras may be volumetric, static or dynamic, and may include holes that are caused, for instance, by columns or furniture in a room that can occlude potential camera views. A subclass of this general problem can be formulated in terms of planar regions that are typical of building floorplans. Given a floorplan to be observed, the problem is then to efficiently compute a camera layout such that certain task-specific constraints are met. A solution to this problem is obtained via binary optimization over a discrete problem space. In experiments the performance of the resulting system is demonstrated with different real floorplans.
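The binary-optimization formulation can be pictured as a coverage problem: each candidate camera pose observes a set of floorplan cells, and the layout must cover every cell of interest despite occluding holes. The paper solves this via binary optimization over a discrete space; the sketch below substitutes a simple greedy heuristic (a common stand-in for set cover), with made-up pose names and cell IDs.

```python
def greedy_camera_layout(coverage, targets):
    """coverage: candidate pose -> set of floorplan cells it observes.
    Repeatedly pick the pose covering the most still-uncovered cells."""
    uncovered = set(targets)
    chosen = []
    while uncovered:
        best = max(coverage, key=lambda c: len(coverage[c] & uncovered))
        if not coverage[best] & uncovered:
            break  # remaining cells are occluded from every candidate pose
        chosen.append(best)
        uncovered -= coverage[best]
    return chosen, uncovered
```

Task-specific constraints (resolution, viewing angle, limited field of view) would enter by shrinking each pose's coverage set before the optimization runs.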
Abstract:
An appearance-based framework for 3D hand shape classification and simultaneous camera viewpoint estimation is presented. Given an input image of a segmented hand, the most similar matches from a large database of synthetic hand images are retrieved. The ground truth labels of those matches, containing hand shape and camera viewpoint information, are returned by the system as estimates for the input image. Database retrieval is done hierarchically, by first quickly rejecting the vast majority of all database views, and then ranking the remaining candidates in order of similarity to the input. Four different similarity measures are employed, based on edge location, edge orientation, finger location and geometric moments.
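The hierarchical retrieval described above, cheap rejection first and expensive ranking second, can be sketched generically. The distance functions and the `keep` budget here are placeholders standing in for the paper's four similarity measures.

```python
def hierarchical_retrieve(query, database, coarse_dist, fine_dist, keep=10):
    """Two-stage retrieval: a cheap coarse measure prunes the database to
    `keep` survivors, then an expensive fine measure ranks those."""
    survivors = sorted(database, key=lambda v: coarse_dist(query, v))[:keep]
    return sorted(survivors, key=lambda v: fine_dist(query, v))
```

The payoff is that the expensive measure runs on only `keep` candidates rather than the full database of synthetic views.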
Abstract:
Camera Canvas is an image editing software package for users with severe disabilities that limit their mobility. It is specially designed for Camera Mouse, a camera-based mouse-substitute input system. Users can manipulate images through various head movements, tracked by Camera Mouse. The system is also fully usable with traditional mouse or touch-pad input. Designing the system, we studied the requirements and solutions for image editing and content creation using Camera Mouse. Experiments with 20 subjects, each testing Camera Canvas with Camera Mouse as the input mechanism, showed that users found the software easy to understand and operate. User feedback was taken into account to make the software more usable and the interface more intuitive. We suggest that the Camera Canvas software makes important progress in providing a new medium of utility and creativity in computing for users with severe disabilities.
Abstract:
snBench is a platform on which novice users compose and deploy distributed Sense and Respond programs for simultaneous execution on a shared, distributed infrastructure. It is a natural imperative that we have the ability to (1) verify the safety/correctness of newly submitted tasks and (2) derive the resource requirements for these tasks such that correct allocation may occur. To achieve these goals we have established a multi-dimensional sized type system for our functional-style Domain Specific Language (DSL) called Sensor Task Execution Plan (STEP). In such a type system data types are annotated with a vector of size attributes (e.g., upper and lower size bounds). Tracking multiple size aspects proves essential in a system in which Images are manipulated as a first class data type, as image manipulation functions may have specific minimum and/or maximum resolution restrictions on the input they can correctly process. Through static analysis of STEP instances we not only verify basic type safety and establish upper computational resource bounds (i.e., time and space), but we also derive and solve data and resource sizing constraints (e.g., Image resolution, camera capabilities) from the implicit constraints embedded in program instances. In fact, the static methods presented here have benefit beyond their application to Image data, and may be extended to other data types that require tracking multiple dimensions (e.g., image "quality", video frame-rate or aspect ratio, audio sampling rate). In this paper we present the syntax and semantics of our functional language, our type system that builds costs and resource/data constraints, and (through both formalism and specific details of our implementation) provide concrete examples of how the constraints and sizing information are used in practice.
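The sizing constraints can be illustrated with a tiny interval-intersection check: when two image operators with declared resolution bounds are chained, some resolution must satisfy both, otherwise the program is rejected at analysis time. The function name and bound shapes below are illustrative, not the actual STEP implementation.

```python
def compose_resolution_bounds(b1, b2):
    """Each bound is (lower, upper) on one size dimension (e.g. width).
    Chaining two operators intersects their bounds; an empty interval
    means no input resolution can satisfy both -- a static type error."""
    lo, hi = max(b1[0], b2[0]), min(b1[1], b2[1])
    if lo > hi:
        raise TypeError("no resolution satisfies both operators")
    return lo, hi
```

The same intersection, run per dimension of the size vector, is how camera-capability constraints (maximum sensor resolution) can be checked against a pipeline's requirements before allocation.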
Abstract:
Intelligent assistive technology can greatly improve the daily lives of people with severe paralysis, who have limited communication abilities. People with motion impairments often prefer camera-based communication interfaces, because these are customizable, comfortable, and do not require user-borne accessories that could draw attention to their disability. We present an overview of assistive software that we specifically designed for camera-based interfaces such as the Camera Mouse, which serves as a mouse-replacement input system. The applications include software for text-entry, web browsing, image editing, animation, and music therapy. Using this software, people with severe motion impairments can communicate with friends and family and have a medium to explore their creativity.