972 results for AVT Prosilica GC2450C camera system
Abstract:
A system for visual recognition is described, with implications for the general problem of representation of knowledge to assist control. The immediate objective is a computer system that will recognize objects in a visual scene, specifically hammers. The computer receives an array of light intensities from a device like a television camera. It is to locate and identify the hammer if one is present. The computer must produce from the numerical "sensory data" a symbolic description that constitutes its perception of the scene. Of primary concern is the control of the recognition process. Control decisions should be guided by the partial results obtained on the scene. If a hammer handle is observed this should suggest that the handle is part of a hammer and advise where to look for the hammer head. The particular knowledge that a handle has been found combines with general knowledge about hammers to influence the recognition process. This use of knowledge to direct control is denoted here by the term "active knowledge". A descriptive formalism is presented for visual knowledge which identifies the relationships relevant to the active use of the knowledge. A control structure is provided which can apply knowledge organized in this fashion actively to the processing of a given scene.
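The abstract's own example of "active knowledge" — a detected handle both suggesting a hammer and advising where to look next — can be sketched as a tiny rule-driven control step. The rule contents and function names below are invented for illustration, not the paper's system:

```python
# Toy sketch of "active knowledge": a partial result (a detected handle)
# activates general knowledge about hammers, which both proposes a
# hypothesis and directs where to look next.
knowledge = {
    "handle": {"suggests": "hammer",
               "look_for": "head",
               "where": "at one end of the handle"},
}

def advise(observation):
    """Combine a particular observation with general knowledge to
    produce a hypothesis and a directive for the recognition process."""
    rule = knowledge.get(observation)
    if rule is None:
        return None
    return (f"{observation} suggests a {rule['suggests']}; "
            f"look for the {rule['look_for']} {rule['where']}")

advice = advise("handle")
```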
Abstract:
This poster is based on the following paper: C. Kwan and M. Betke. Camera Canvas: Image editing software for people with disabilities. In Proceedings of the 14th International Conference on Human Computer Interaction (HCI International 2011), Orlando, Florida, July 2011.
Abstract:
In many multi-camera vision systems the effect of camera locations on the task-specific quality of service is ignored. Researchers in Computational Geometry have proposed elegant solutions for some sensor location problem classes. Unfortunately, these solutions utilize unrealistic assumptions about the cameras' capabilities that make these algorithms unsuitable for many real-world computer vision applications: unlimited field of view, infinite depth of field, and/or infinite servo precision and speed. In this paper, the general camera placement problem is first defined with assumptions that are more consistent with the capabilities of real-world cameras. The region to be observed by cameras may be volumetric, static or dynamic, and may include holes that are caused, for instance, by columns or furniture in a room that can occlude potential camera views. A subclass of this general problem can be formulated in terms of planar regions that are typical of building floorplans. Given a floorplan to be observed, the problem is then to efficiently compute a camera layout such that certain task-specific constraints are met. A solution to this problem is obtained via binary optimization over a discrete problem space. In experiments the performance of the resulting system is demonstrated with different real floorplans.
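The binary-optimization formulation described above can be illustrated in miniature: each candidate camera pose is a 0/1 decision variable with a limited-field-of-view coverage set over the discretized floorplan, and a layout is feasible when the chosen sets jointly cover every cell. The cells and coverage sets below are toy data, and exhaustive search stands in for the paper's optimizer:

```python
from itertools import combinations

# Hypothetical discretization: floorplan cells to observe and, for each
# candidate camera pose, the cells it covers given its limited field of
# view (coverage sets chosen here for illustration only).
cells = set(range(8))
candidates = {
    "cam_A": {0, 1, 2, 3},
    "cam_B": {2, 3, 4, 5},
    "cam_C": {5, 6, 7},
    "cam_D": {0, 4, 6},
}

def min_camera_layout(cells, candidates):
    """Search for the smallest camera subset covering all cells.

    Each candidate pose is a binary decision variable; a layout is
    feasible when the union of chosen coverage sets contains every cell.
    """
    names = sorted(candidates)
    for k in range(1, len(names) + 1):
        for subset in combinations(names, k):
            covered = set().union(*(candidates[n] for n in subset))
            if cells <= covered:
                return subset
    return None

layout = min_camera_layout(cells, candidates)
```

Real instances would replace the exhaustive loop with the binary optimization the paper describes, but the feasibility test is the same.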
Abstract:
An appearance-based framework for 3D hand shape classification and simultaneous camera viewpoint estimation is presented. Given an input image of a segmented hand, the most similar matches from a large database of synthetic hand images are retrieved. The ground truth labels of those matches, containing hand shape and camera viewpoint information, are returned by the system as estimates for the input image. Database retrieval is done hierarchically, by first quickly rejecting the vast majority of all database views, and then ranking the remaining candidates in order of similarity to the input. Four different similarity measures are employed, based on edge location, edge orientation, finger location and geometric moments.
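One of the four similarity measures named above, edge location, is commonly realized as a chamfer-style distance between the edge-pixel sets of the query image and a database view. A minimal sketch with toy point sets (not the paper's database or exact measure):

```python
# Hedged sketch of an edge-location similarity: a symmetric chamfer-style
# distance between two sets of edge pixels, where a lower score means the
# query is more similar to the database view. Point sets are toy data.
def chamfer(a, b):
    """Symmetric average nearest-neighbour distance between point sets."""
    def directed(src, dst):
        return sum(min(((x - u) ** 2 + (y - v) ** 2) ** 0.5
                       for (u, v) in dst) for (x, y) in src) / len(src)
    return 0.5 * (directed(a, b) + directed(b, a))

query_edges = [(0, 0), (1, 0), (2, 1)]
db_view_edges = [(0, 0), (1, 1), (2, 1)]
score = chamfer(query_edges, db_view_edges)  # lower = more similar
```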
Abstract:
Camera Canvas is an image editing software package for users with severe disabilities that limit their mobility. It is specially designed for Camera Mouse, a camera-based mouse-substitute input system. Users can manipulate images through various head movements, tracked by Camera Mouse. The system is also fully usable with traditional mouse or touch-pad input. In designing the system, we studied the requirements and solutions for image editing and content creation using Camera Mouse. Experiments with 20 subjects, each testing Camera Canvas with Camera Mouse as the input mechanism, showed that users found the software easy to understand and operate. User feedback was taken into account to make the software more usable and the interface more intuitive. We suggest that the Camera Canvas software makes important progress in providing a new medium of utility and creativity in computing for users with severe disabilities.
Abstract:
snBench is a platform on which novice users compose and deploy distributed Sense and Respond programs for simultaneous execution on a shared, distributed infrastructure. It is a natural imperative that we have the ability to (1) verify the safety/correctness of newly submitted tasks and (2) derive the resource requirements for these tasks such that correct allocation may occur. To achieve these goals we have established a multi-dimensional sized type system for our functional-style Domain Specific Language (DSL) called Sensor Task Execution Plan (STEP). In such a type system, data types are annotated with a vector of size attributes (e.g., upper and lower size bounds). Tracking multiple size aspects proves essential in a system in which Images are manipulated as a first-class data type, as image manipulation functions may have specific minimum and/or maximum resolution restrictions on the input they can correctly process. Through static analysis of STEP instances we not only verify basic type safety and establish upper computational resource bounds (i.e., time and space), but we also derive and solve data and resource sizing constraints (e.g., Image resolution, camera capabilities) from the implicit constraints embedded in program instances. In fact, the static methods presented here have benefits beyond their application to Image data, and may be extended to other data types that require tracking multiple dimensions (e.g., image "quality", video frame-rate or aspect ratio, audio sampling rate). In this paper we present the syntax and semantics of our functional language, our type system that derives costs and resource/data constraints, and (through both formalism and specific details of our implementation) provide concrete examples of how the constraints and sizing information are used in practice.
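The core check behind such a sized type system can be sketched directly: an Image type annotated with lower and upper resolution bounds, and a composition rule that verifies every image a producer can emit satisfies a consumer's resolution restrictions. The class and function names below are illustrative, not the snBench/STEP API:

```python
from dataclasses import dataclass

# Illustrative sketch of multi-dimensional sized types: an Image type
# carrying a vector of size bounds, and a constraint check mirroring
# what a static analysis over STEP programs would derive.
@dataclass(frozen=True)
class SizedImage:
    min_w: int; min_h: int   # lower size bounds
    max_w: int; max_h: int   # upper size bounds

def compose(producer: SizedImage, consumer: SizedImage) -> SizedImage:
    """Check that every image the producer can emit satisfies the
    consumer's resolution restrictions; reject unsatisfiable pipelines."""
    ok = (producer.min_w >= consumer.min_w and producer.min_h >= consumer.min_h
          and producer.max_w <= consumer.max_w and producer.max_h <= consumer.max_h)
    if not ok:
        raise TypeError("resolution constraints unsatisfiable")
    return producer

camera_out = SizedImage(640, 480, 1920, 1080)      # what a camera can deliver
face_detect_in = SizedImage(320, 240, 4096, 4096)  # what a filter accepts
checked = compose(camera_out, face_detect_in)
```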
Abstract:
Intelligent assistive technology can greatly improve the daily lives of people with severe paralysis, who have limited communication abilities. People with motion impairments often prefer camera-based communication interfaces, because these are customizable, comfortable, and do not require user-borne accessories that could draw attention to their disability. We present an overview of assistive software that we specifically designed for camera-based interfaces such as the Camera Mouse, which serves as a mouse-replacement input system. The applications include software for text-entry, web browsing, image editing, animation, and music therapy. Using this software, people with severe motion impairments can communicate with friends and family and have a medium to explore their creativity.
Abstract:
In this project we design and implement a centralized hashing table in the snBench sensor network environment. We discuss the feasibility of this approach and compare and contrast it with the distributed hashing architecture, with particular discussion of the conditions under which a centralized architecture makes sense. There are numerous computational tasks that require persistence of data in a sensor network environment. To help motivate the need for data storage in snBench we demonstrate a practical application of the technology whereby a video camera can monitor a room to detect the presence of a person and send an alert to the appropriate authorities.
Abstract:
Both animals and mobile robots, or animats, need adaptive control systems to guide their movements through a novel environment. Such control systems need reactive mechanisms for exploration, and learned plans to efficiently reach goal objects once the environment is familiar. How reactive and planned behaviors interact together in real time, and are released at the appropriate times, during autonomous navigation remains a major unsolved problem. This work presents an end-to-end model to address this problem, named SOVEREIGN: A Self-Organizing, Vision, Expectation, Recognition, Emotion, Intelligent, Goal-oriented Navigation system. The model comprises several interacting subsystems, governed by systems of nonlinear differential equations. As the animat explores the environment, a vision module processes visual inputs using networks that are sensitive to visual form and motion. Targets processed within the visual form system are categorized by real-time incremental learning. Simultaneously, visual target position is computed with respect to the animat's body. Estimates of target position activate a motor system to initiate approach movements toward the target. Motion cues from animat locomotion can elicit orienting head or camera movements to bring a new target into view. Approach and orienting movements are alternately performed during animat navigation. Cumulative estimates of each movement, based on both visual and proprioceptive cues, are stored within a motor working memory. Sensory cues are stored in a parallel sensory working memory. These working memories trigger learning of sensory and motor sequence chunks, which together control planned movements. Effective chunk combinations are selectively enhanced via reinforcement learning when the animat is rewarded. The planning chunks effect a gradual transition from reactive to planned behavior.
The model can read out different motor sequences under different motivational states and learns more efficient paths to rewarded goals as exploration proceeds. Several volitional signals automatically gate the interactions between model subsystems at appropriate times. A 3-D visual simulation environment reproduces the animat's sensory experiences as it moves through a simplified spatial environment. The SOVEREIGN model exhibits robust goal-oriented learning of sequential motor behaviors. Its biomimetic structure explicates a number of brain processes which are involved in spatial navigation.
Abstract:
Wireless Inertial Measurement Units (WIMUs) combine motion sensing, processing & communications functions in a single device. Data gathered using these sensors has the potential to be converted into high-quality motion data. By outfitting a subject with multiple WIMUs, full motion data can be gathered. With a potential cost of ownership several orders of magnitude less than traditional camera-based motion capture, WIMU systems have the potential to be crucially important in supplementing or replacing traditional motion capture and opening up entirely new application areas and potential markets, particularly in the rehabilitative, sports & at-home healthcare spaces. Currently WIMUs are underutilized in these areas. A major barrier to adoption is perceived complexity. Sample rates, sensor types & dynamic sensor ranges may need to be adjusted on multiple axes for each device depending on the scenario. As such, we present an advanced WIMU in conjunction with a Smart WIMU system to simplify this aspect with 3 usage modes: Manual, Intelligent and Autonomous. Attendees will be able to compare the 3 different modes and see the effects of good and bad set-ups on the quality of data gathered in real time.
Abstract:
On-board image guidance, such as cone-beam CT (CBCT) and kV/MV 2D imaging, is essential in many radiation therapy procedures, such as intensity modulated radiotherapy (IMRT) and stereotactic body radiation therapy (SBRT). These imaging techniques provide predominantly anatomical information for treatment planning and target localization. Recently, studies have shown that treatment planning based on functional and molecular information about the tumor and surrounding tissue could potentially improve the effectiveness of radiation therapy. However, current on-board imaging systems are limited in their functional and molecular imaging capability. Single Photon Emission Computed Tomography (SPECT) is a candidate to achieve on-board functional and molecular imaging. Traditional SPECT systems typically take 20 minutes or more for a scan, which is too long for on-board imaging. A robotic multi-pinhole SPECT system was proposed in this dissertation to provide shorter imaging time by using a robotic arm to maneuver the multi-pinhole SPECT system around the patient in position for radiation therapy.
A 49-pinhole collimated SPECT detector and its shielding were designed and simulated in this work using the computer-aided design (CAD) software. The trajectories of robotic arm about the patient, treatment table and gantry in the radiation therapy room and several detector assemblies such as parallel holes, single pinhole and 49 pinholes collimated detector were investigated. The rail mounted system was designed to enable a full range of detector positions and orientations to various crucial treatment sites including head and torso, while avoiding collision with linear accelerator (LINAC), patient table and patient.
An alignment method was developed in this work to calibrate the on-board robotic SPECT to the LINAC coordinate frame and to the coordinate frames of other on-board imaging systems such as CBCT. This alignment method utilizes line sources and one pinhole projection of these line sources. The model consists of multiple alignment parameters which map line sources in 3-dimensional (3D) space to their 2-dimensional (2D) projections on the SPECT detector. Computer-simulation studies and experimental evaluations were performed as a function of the number of line sources, Radon transform accuracy, finite line-source width, intrinsic camera resolution, Poisson noise and acquisition geometry. In computer-simulation studies, when there was no error in determining angles (α) and offsets (ρ) of the measured projections, the six alignment parameters (3 translational and 3 rotational) were estimated perfectly using three line sources. When angles (α) and offsets (ρ) were provided by Radon transform, the estimation accuracy was reduced. The estimation error was associated with rounding errors of the Radon transform, finite line-source width, Poisson noise, number of line sources, intrinsic camera resolution and detector acquisition geometry. The estimation accuracy was significantly improved by using 4 line sources rather than 3 and also by using thinner line-source projections (obtained by better intrinsic detector resolution). With 5 line sources, median errors were 0.2 mm for the detector translations, 0.7 mm for the detector radius of rotation, and less than 0.5° for detector rotation, tilt and twist. In experimental evaluations, average errors relative to a different, independent registration technique were about 1.8 mm for detector translations, 1.1 mm for the detector radius of rotation (ROR), 0.5° and 0.4° for detector rotation and tilt, respectively, and 1.2° for detector twist.
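The projection geometry underlying this alignment model can be sketched in miniature: points on a 3D line source map through the pinhole onto the detector plane, and the projected line is summarized by the angle (α) and offset (ρ) that the Radon transform extracts. A minimal Python sketch assuming an ideal pinhole with an illustrative focal length; the names and geometry are not the dissertation's calibration model:

```python
import math

FOCAL = 100.0  # pinhole-to-detector distance (mm), assumed for illustration

def project(p):
    """Ideal pinhole projection of a 3D point (x, y, z) with z > 0."""
    x, y, z = p
    return (FOCAL * x / z, FOCAL * y / z)

def line_angle_offset(p0, p1):
    """Angle alpha and signed offset rho (Hesse normal form) of the
    projected line through the images of two points on a line source."""
    (u0, v0), (u1, v1) = project(p0), project(p1)
    alpha = math.atan2(v1 - v0, u1 - u0)      # direction of projected line
    # normal is (-sin(alpha), cos(alpha)); rho = distance of line to origin
    rho = -u0 * math.sin(alpha) + v0 * math.cos(alpha)
    return alpha, rho
```

The calibration then searches for the six alignment parameters that best reproduce the measured (α, ρ) pairs of all line-source projections.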
Simulation studies were performed to investigate the improvement of imaging sensitivity and accuracy of hot sphere localization for breast imaging of patients in prone position. A 3D XCAT phantom was simulated in the prone position with nine hot spheres of 10 mm diameter added in the left breast. A no-treatment-table case and two commercial prone breast boards, 7 and 24 cm thick, were simulated. Different pinhole focal lengths were assessed for root-mean-square-error (RMSE). The pinhole focal lengths resulting in the lowest RMSE values were 12 cm, 18 cm and 21 cm for no table, thin board, and thick board, respectively. In both no table and thin board cases, all 9 hot spheres were easily visualized above background with 4-minute scans utilizing the 49-pinhole SPECT system while seven of nine hot spheres were visible with the thick board. In comparison with parallel-hole system, our 49-pinhole system shows reduction in noise and bias under these simulation cases. These results correspond to smaller radii of rotation for no-table case and thinner prone board. Similarly, localization accuracy with the 49-pinhole system was significantly better than with the parallel-hole system for both the thin and thick prone boards. Median localization errors for the 49-pinhole system with the thin board were less than 3 mm for 5 of 9 hot spheres, and less than 6 mm for the other 4 hot spheres. Median localization errors of 49-pinhole system with the thick board were less than 4 mm for 5 of 9 hot spheres, and less than 8 mm for the other 4 hot spheres.
Besides prone breast imaging, respiratory-gated region-of-interest (ROI) imaging of lung tumor was also investigated. A simulation study was conducted on the potential of multi-pinhole, region-of-interest (ROI) SPECT to alleviate noise effects associated with respiratory-gated SPECT imaging of the thorax. Two 4D XCAT digital phantoms were constructed, with either a 10 mm or 20 mm diameter tumor added in the right lung. The maximum diaphragm motion was 2 cm (for 10 mm tumor) or 4 cm (for 20 mm tumor) in superior-inferior direction and 1.2 cm in anterior-posterior direction. Projections were simulated with a 4-minute acquisition time (40 seconds per each of 6 gates) using either the ROI SPECT system (49-pinhole) or reference single and dual conventional broad cross-section, parallel-hole collimated SPECT. The SPECT images were reconstructed using OSEM with up to 6 iterations. Images were evaluated as a function of gate by profiles, noise versus bias curves, and a numerical observer performing a forced-choice localization task. Even for the 20 mm tumor, the 49-pinhole imaging ROI was found sufficient to encompass fully usual clinical ranges of diaphragm motion. Averaged over the 6 gates, noise at iteration 6 of 49-pinhole ROI imaging (10.9 µCi/ml) was approximately comparable to noise at iteration 2 of the two dual and single parallel-hole, broad cross-section systems (12.4 µCi/ml and 13.8 µCi/ml, respectively). Corresponding biases were much lower for the 49-pinhole ROI system (3.8 µCi/ml), versus 6.2 µCi/ml and 6.5 µCi/ml for the dual and single parallel-hole systems, respectively. Median localization errors averaged over 6 gates, for the 10 mm and 20 mm tumors respectively, were 1.6 mm and 0.5 mm using the ROI imaging system and 6.6 mm and 2.3 mm using the dual parallel-hole, broad cross-section system. The results demonstrate substantially improved imaging via ROI methods. One important application may be gated imaging of patients in position for radiation therapy.
A robotic SPECT imaging system was constructed utilizing a gamma camera detector (Digirad 2020tc) and a robot (KUKA KR150-L110 robot). An imaging study was performed with a phantom (PET CT Phantom
In conclusion, the proposed on-board robotic SPECT can be aligned to LINAC/CBCT with a single pinhole projection of the line-source phantom. Alignment parameters can be estimated using one pinhole projection of line sources. This alignment method may be important for multi-pinhole SPECT, where relative pinhole alignment may vary during rotation. For single pinhole and multi-pinhole SPECT imaging onboard radiation therapy machines, the method could provide alignment of SPECT coordinates with those of CBCT and the LINAC. In simulation studies of prone breast imaging and respiratory-gated lung imaging, the 49-pinhole detector showed better tumor contrast recovery and localization in a 4-minute scan compared to parallel-hole detector. On-board SPECT could be achieved by a robot maneuvering a SPECT detector about patients in position for radiation therapy on a flat-top couch. The robot inherent coordinate frames could be an effective means to estimate detector pose for use in SPECT image reconstruction.
Abstract:
The WASP project and infrastructure supporting the SuperWASP Facility are described. As the instrument, reduction pipeline and archive system are now fully operative, we expect the system to have a major impact in the discovery of bright exo-planet candidates as well as in more general variable star projects.
Abstract:
A Thomson scattering system has been installed at the Tokyo electron beam ion trap for probing characteristics of the electron beam. A YVO4 green laser beam was injected antiparallel to the electron beam. The image of the Thomson scattering light from the electron beam has been observed using a charge-coupled device camera. By using a combination of interference filters, the spectral distribution of the Thomson scattering light has been measured. The Doppler shift observed for the scattered light is consistent with the beam energy. The beam radius dependence was investigated as a function of the beam energy, the beam current, and the magnetic field at the trap region. The variations of the measured beam radius with the beam current and the magnetic field were similar to those in Herrmann's prediction. The beam radius as a function of the beam energy was also similar to Herrmann's prediction but seemed to become larger at low energy. (C) 2002 American Institute of Physics.
Abstract:
Utilising cameras as a means to survey the surrounding environment is becoming increasingly popular in a number of different research areas and applications. Central to using camera sensors as input to a vision system is the need to be able to manipulate and process the information captured in these images. One such application is the use of cameras to monitor the quality of airport landing lighting at aerodromes, where a camera is placed inside an aircraft and used to record images of the lighting pattern during the landing phase of a flight. The images are processed to determine a performance metric. This requires the development of custom software for the localisation and identification of luminaires within the image data. However, because of the necessity to keep airport operations functioning as efficiently as possible, it is difficult to collect enough image data to develop, test and validate any developed software. In this paper, we present a technique to model a virtual landing lighting pattern. A mathematical model is postulated which represents the glide path of the aircraft including random deviations from the expected path. A morphological method has been developed to localise and track the luminaires under different operating conditions. © 2011 IEEE.
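Luminaire localisation of the kind described above is often built from thresholding followed by connected-component labelling, reporting blob centroids as candidate luminaires. A self-contained sketch on a tiny synthetic frame; the threshold, connectivity and data are illustrative, not the paper's operators or parameters:

```python
# Hedged sketch of luminaire localisation: threshold a grey-level image,
# label 4-connected bright components, and report each blob's centroid
# (row, column) as a candidate luminaire position.
def localise(image, thresh=128):
    h, w = len(image), len(image[0])
    seen, blobs = set(), []
    for sy in range(h):
        for sx in range(w):
            if image[sy][sx] >= thresh and (sy, sx) not in seen:
                stack, pixels = [(sy, sx)], []
                seen.add((sy, sx))
                while stack:  # flood-fill one bright component
                    y, x = stack.pop()
                    pixels.append((y, x))
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if (0 <= ny < h and 0 <= nx < w and (ny, nx) not in seen
                                and image[ny][nx] >= thresh):
                            seen.add((ny, nx))
                            stack.append((ny, nx))
                cy = sum(p[0] for p in pixels) / len(pixels)
                cx = sum(p[1] for p in pixels) / len(pixels)
                blobs.append((cy, cx))
    return blobs

frame = [
    [0,   0, 200, 0,   0],
    [0,   0, 220, 0,   0],
    [0,   0,   0, 0,   0],
    [0, 180,   0, 0, 255],
]
luminaires = localise(frame)
```

Tracking across frames would then associate centroids between successive images along the modelled glide path.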
Abstract:
In this paper, we present a unique cross-layer design framework that allows systematic exploration of the energy-delay-quality trade-offs at the algorithm, architecture and circuit levels of design abstraction for each block of a system. In addition, taking into consideration the interactions between different sub-blocks of a system, it identifies the design solutions that can ensure the least energy at the "right amount of quality" for each sub-block/system under user quality/delay constraints. This is achieved by deriving sensitivity-based design criteria, the balancing of which forms the quantitative relations that can be used early in the system design process to evaluate the energy efficiency of various design options. The proposed framework, when applied to the exploration of the energy-quality design space of the main blocks of a digital camera and a wireless receiver, achieves 58% and 33% energy savings under 41% and 20% error increases, respectively. © 2010 ACM.