970 results for Projector-Camera system
Abstract:
In this project we design and implement a centralized hash table in the snBench sensor network environment. We discuss the feasibility of this approach and compare and contrast it with a distributed hashing architecture, with particular discussion of the conditions under which a centralized architecture makes sense. Numerous computational tasks require persistence of data in a sensor network environment. To help motivate the need for data storage in snBench, we demonstrate a practical application of the technology whereby a video camera monitors a room to detect the presence of a person and sends an alert to the appropriate authorities.
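As a hedged sketch of what such a centralized table looks like from the point of view of a sensing task (the class and method names below are illustrative, not the snBench API):

```python
# Minimal sketch of a centralized hash-table service (names are illustrative,
# not the snBench API): one node holds the authoritative table and answers
# put/get requests from sensing tasks elsewhere in the network.
class CentralStore:
    def __init__(self):
        self._table = {}          # single authoritative table

    def put(self, key, value):
        self._table[key] = value  # last-writer-wins semantics

    def get(self, key, default=None):
        return self._table.get(key, default)

# A camera-monitoring task could persist detections like this:
store = CentralStore()
store.put("room-101/person-detected", {"time": 1699999999, "confidence": 0.93})
alert = store.get("room-101/person-detected")
print(alert)
```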
Abstract:
Both animals and mobile robots, or animats, need adaptive control systems to guide their movements through a novel environment. Such control systems need reactive mechanisms for exploration, and learned plans to efficiently reach goal objects once the environment is familiar. How reactive and planned behaviors interact together in real time, and are released at the appropriate times, during autonomous navigation remains a major unsolved problem. This work presents an end-to-end model to address this problem, named SOVEREIGN: a Self-Organizing, Vision, Expectation, Recognition, Emotion, Intelligent, Goal-oriented Navigation system. The model comprises several interacting subsystems, governed by systems of nonlinear differential equations. As the animat explores the environment, a vision module processes visual inputs using networks that are sensitive to visual form and motion. Targets processed within the visual form system are categorized by real-time incremental learning. Simultaneously, visual target position is computed with respect to the animat's body. Estimates of target position activate a motor system to initiate approach movements toward the target. Motion cues from animat locomotion can elicit orienting head or camera movements to bring a new target into view. Approach and orienting movements are alternately performed during animat navigation. Cumulative estimates of each movement, based on both visual and proprioceptive cues, are stored within a motor working memory. Sensory cues are stored in a parallel sensory working memory. These working memories trigger learning of sensory and motor sequence chunks, which together control planned movements. Effective chunk combinations are selectively enhanced via reinforcement learning when the animat is rewarded. The planning chunks effect a gradual transition from reactive to planned behavior. The model can read out different motor sequences under different motivational states and learns more efficient paths to rewarded goals as exploration proceeds. Several volitional signals automatically gate the interactions between model subsystems at appropriate times. A 3-D visual simulation environment reproduces the animat's sensory experiences as it moves through a simplified spatial environment. The SOVEREIGN model exhibits robust goal-oriented learning of sequential motor behaviors. Its biomimetic structure explicates a number of brain processes which are involved in spatial navigation.
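As a purely schematic illustration of the gated alternation between orienting and approach movements described above (this toy loop is not the SOVEREIGN model and does not reflect its differential equations; thresholds and step sizes are invented):

```python
# Toy illustration only (not SOVEREIGN): a volitional gate alternates between
# an orienting movement that centers a target and an approach movement that
# reduces the remaining distance to it.
def navigate(target_bearing_deg, target_distance, step=0.5, turn=10.0):
    trajectory = []
    while target_distance > 0.0:
        if abs(target_bearing_deg) > 5.0:      # gate opens the orienting subsystem
            target_bearing_deg -= turn * (1 if target_bearing_deg > 0 else -1)
            trajectory.append(("orient", target_bearing_deg))
        else:                                  # gate opens the approach subsystem
            target_distance = max(0.0, target_distance - step)
            trajectory.append(("approach", target_distance))
    return trajectory

print(navigate(target_bearing_deg=35.0, target_distance=2.0))
```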
Abstract:
Wireless Inertial Measurement Units (WIMUs) combine motion sensing, processing and communications functions in a single device. Data gathered using these sensors has the potential to be converted into high-quality motion data. By outfitting a subject with multiple WIMUs, full motion data can be gathered. With a potential cost of ownership several orders of magnitude less than traditional camera-based motion capture, WIMU systems have the potential to be crucially important in supplementing or replacing traditional motion capture and in opening up entirely new application areas and potential markets, particularly in the rehabilitative, sports and at-home healthcare spaces. Currently, WIMUs are underutilized in these areas. A major barrier to adoption is perceived complexity: sample rates, sensor types and dynamic sensor ranges may need to be adjusted on multiple axes for each device, depending on the scenario. We therefore present an advanced WIMU in conjunction with a Smart WIMU system that simplifies this aspect with three usage modes: Manual, Intelligent and Autonomous. Attendees will be able to compare the three modes and see the effects of good and bad set-ups on the quality of data gathered in real time.
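A hedged sketch of the kind of per-axis configuration the three modes manipulate (the names, fields and values below are illustrative, not the actual device interface):

```python
# Illustrative sketch (not the actual device API): per-axis WIMU configuration
# that the Manual / Intelligent / Autonomous modes fill in by hand, by
# suggestion, or entirely on their own.
from dataclasses import dataclass
from enum import Enum

class Mode(Enum):
    MANUAL = "manual"            # user sets every parameter
    INTELLIGENT = "intelligent"  # system suggests, user confirms
    AUTONOMOUS = "autonomous"    # system configures itself from observed motion

@dataclass
class AxisConfig:
    sensor: str           # e.g. "accelerometer", "gyroscope", "magnetometer"
    sample_rate_hz: int
    dynamic_range: float  # e.g. +/-16 g or +/-2000 deg/s

@dataclass
class WimuConfig:
    mode: Mode
    axes: dict            # axis name ("x", "y", "z") -> AxisConfig

cfg = WimuConfig(mode=Mode.MANUAL,
                 axes={"x": AxisConfig("accelerometer", 256, 16.0)})
print(cfg)
```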
Abstract:
On-board image guidance, such as cone-beam CT (CBCT) and kV/MV 2D imaging, is essential in many radiation therapy procedures, such as intensity-modulated radiotherapy (IMRT) and stereotactic body radiation therapy (SBRT). These imaging techniques provide predominantly anatomical information for treatment planning and target localization. Recently, studies have shown that treatment planning based on functional and molecular information about the tumor and surrounding tissue could potentially improve the effectiveness of radiation therapy. However, current on-board imaging systems are limited in their functional and molecular imaging capability. Single Photon Emission Computed Tomography (SPECT) is a candidate to achieve on-board functional and molecular imaging. Traditional SPECT systems typically take 20 minutes or more for a scan, which is too long for on-board imaging. In this dissertation, a robotic multi-pinhole SPECT system is proposed to provide shorter imaging times by using a robotic arm to maneuver the SPECT detector around the patient in position for radiation therapy.
A 49-pinhole collimated SPECT detector and its shielding were designed and simulated in this work using computer-aided design (CAD) software. The trajectories of the robotic arm about the patient, treatment table and gantry in the radiation therapy room were investigated, along with several detector assemblies, including parallel-hole, single-pinhole and 49-pinhole collimated detectors. The rail-mounted system was designed to enable a full range of detector positions and orientations for various crucial treatment sites, including the head and torso, while avoiding collision with the linear accelerator (LINAC), the patient table and the patient.
An alignment method was developed in this work to calibrate the on-board robotic SPECT to the LINAC coordinate frame and to the coordinate frames of other on-board imaging systems such as CBCT. This alignment method utilizes line sources and a single pinhole projection of these line sources. The model consists of multiple alignment parameters which map line sources in three-dimensional (3D) space to their two-dimensional (2D) projections on the SPECT detector. Computer-simulation studies and experimental evaluations were performed as a function of the number of line sources, Radon transform accuracy, finite line-source width, intrinsic camera resolution, Poisson noise and acquisition geometry. In computer-simulation studies, when there was no error in determining the angles (α) and offsets (ρ) of the measured projections, the six alignment parameters (3 translational and 3 rotational) were estimated perfectly using three line sources. When the angles (α) and offsets (ρ) were provided by the Radon transform, the estimation accuracy was reduced. The estimation error was associated with rounding errors of the Radon transform, finite line-source width, Poisson noise, the number of line sources, intrinsic camera resolution and detector acquisition geometry. The estimation accuracy was significantly improved by using 4 line sources rather than 3, and also by using thinner line-source projections (obtained by better intrinsic detector resolution). With 5 line sources, median errors were 0.2 mm for the detector translations, 0.7 mm for the detector radius of rotation, and less than 0.5° for detector rotation, tilt and twist. In experimental evaluations, average errors relative to a different, independent registration technique were about 1.8 mm for detector translations, 1.1 mm for the detector radius of rotation (ROR), 0.5° and 0.4° for detector rotation and tilt, respectively, and 1.2° for detector twist.
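A hedged numerical sketch of the underlying idea (with an assumed focal length and sampled line-source points rather than the dissertation's Radon-based formulation): six pose parameters map known 3D points on line sources to their 2D pinhole projections, and nonlinear least squares recovers the parameters from the projections alone.

```python
# Hedged sketch of the alignment idea (not the dissertation's exact model):
# six parameters (3 rotations, 3 translations) map points on known 3D line
# sources to 2D pinhole projections; they are recovered by least squares.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

FOCAL_LENGTH_MM = 210.0  # assumed pinhole focal length, for illustration only

def project(points_xyz, params):
    """Project 3D points (N x 3, mm) through a pinhole at the origin."""
    rx, ry, rz, tx, ty, tz = params
    R = Rotation.from_euler("xyz", [rx, ry, rz], degrees=True).as_matrix()
    cam = points_xyz @ R.T + np.array([tx, ty, tz])
    return FOCAL_LENGTH_MM * cam[:, :2] / cam[:, 2:3]   # perspective division

def estimate_alignment(points_xyz, observed_uv, initial_guess):
    residual = lambda p: (project(points_xyz, p) - observed_uv).ravel()
    return least_squares(residual, initial_guess).x

# Synthetic check: sample points along 4 line sources, project with known
# parameters, then recover the parameters from the projections.
rng = np.random.default_rng(0)
lines = [(rng.uniform(-50, 50, 3), rng.uniform(-1, 1, 3)) for _ in range(4)]
pts = np.vstack([o + np.outer(np.linspace(0, 100, 20), d) for o, d in lines])
pts[:, 2] += 400.0                       # keep points in front of the pinhole
true_params = np.array([2.0, -1.0, 0.5, 5.0, -3.0, 10.0])
recovered = estimate_alignment(pts, project(pts, true_params), np.zeros(6))
print(np.round(recovered, 3))
```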
Simulation studies were performed to investigate the improvement in imaging sensitivity and the accuracy of hot-sphere localization for breast imaging of patients in the prone position. A 3D XCAT phantom was simulated in the prone position with nine hot spheres of 10 mm diameter added in the left breast. A no-treatment-table case and two commercial prone breast boards, 7 and 24 cm thick, were simulated. Different pinhole focal lengths were assessed for root-mean-square error (RMSE). The pinhole focal lengths resulting in the lowest RMSE values were 12 cm, 18 cm and 21 cm for no table, the thin board, and the thick board, respectively. In both the no-table and thin-board cases, all 9 hot spheres were easily visualized above background with 4-minute scans utilizing the 49-pinhole SPECT system, while seven of nine hot spheres were visible with the thick board. In comparison with a parallel-hole system, the 49-pinhole system shows reduced noise and bias in these simulation cases; these results correspond to the smaller radii of rotation achievable with no table and with the thinner prone board. Similarly, localization accuracy with the 49-pinhole system was significantly better than with the parallel-hole system for both the thin and thick prone boards. Median localization errors for the 49-pinhole system with the thin board were less than 3 mm for 5 of 9 hot spheres, and less than 6 mm for the other 4 hot spheres. Median localization errors of the 49-pinhole system with the thick board were less than 4 mm for 5 of 9 hot spheres, and less than 8 mm for the other 4 hot spheres.
Besides prone breast imaging, respiratory-gated region-of-interest (ROI) imaging of lung tumors was also investigated. A simulation study was conducted on the potential of multi-pinhole, region-of-interest (ROI) SPECT to alleviate the noise effects associated with respiratory-gated SPECT imaging of the thorax. Two 4D XCAT digital phantoms were constructed, with either a 10 mm or 20 mm diameter tumor added in the right lung. The maximum diaphragm motion was 2 cm (for the 10 mm tumor) or 4 cm (for the 20 mm tumor) in the superior-inferior direction and 1.2 cm in the anterior-posterior direction. Projections were simulated with a 4-minute acquisition time (40 seconds per each of 6 gates) using either the ROI SPECT system (49-pinhole) or reference single and dual conventional broad cross-section, parallel-hole collimated SPECT systems. The SPECT images were reconstructed using OSEM with up to 6 iterations. Images were evaluated as a function of gate by profiles, noise-versus-bias curves, and a numerical observer performing a forced-choice localization task. Even for the 20 mm tumor, the 49-pinhole imaging ROI was found sufficient to fully encompass usual clinical ranges of diaphragm motion. Averaged over the 6 gates, noise at iteration 6 of 49-pinhole ROI imaging (10.9 µCi/ml) was approximately comparable to noise at iteration 2 of the dual and single parallel-hole, broad cross-section systems (12.4 µCi/ml and 13.8 µCi/ml, respectively). Corresponding biases were much lower for the 49-pinhole ROI system (3.8 µCi/ml), versus 6.2 µCi/ml and 6.5 µCi/ml for the dual and single parallel-hole systems, respectively. Median localization errors averaged over 6 gates, for the 10 mm and 20 mm tumors respectively, were 1.6 mm and 0.5 mm using the ROI imaging system and 6.6 mm and 2.3 mm using the dual parallel-hole, broad cross-section system. The results demonstrate substantially improved imaging via ROI methods. One important application may be gated imaging of patients in position for radiation therapy.
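For reference, the OSEM update used in such reconstructions follows the standard ordered-subsets expectation-maximization algorithm; the sketch below is a generic textbook implementation on a toy system matrix, not the dissertation's reconstruction code.

```python
# Generic OSEM update (standard algorithm; not the dissertation's code).
# A is the system matrix (detector bins x voxels), y the measured counts,
# split into ordered subsets of detector bins.
import numpy as np

def osem(A, y, n_iters=6, n_subsets=2, eps=1e-12):
    n_bins, n_vox = A.shape
    x = np.ones(n_vox)                              # uniform initial estimate
    subsets = np.array_split(np.arange(n_bins), n_subsets)
    for _ in range(n_iters):
        for s in subsets:
            As = A[s]                               # rows for this subset
            ratio = y[s] / np.maximum(As @ x, eps)  # measured / predicted counts
            x *= (As.T @ ratio) / np.maximum(As.T @ np.ones(len(s)), eps)
    return x

# Tiny synthetic check: recover a 2-voxel "image" from noiseless counts.
A = np.array([[1.0, 0.2], [0.3, 1.0], [0.5, 0.5], [0.8, 0.1]])
x_true = np.array([4.0, 2.0])
print(np.round(osem(A, A @ x_true, n_iters=20, n_subsets=2), 2))
```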
A robotic SPECT imaging system was constructed utilizing a gamma camera detector (Digirad 2020tc) and a robot (KUKA KR150-L110 robot). An imaging study was performed with a phantom (PET CT Phantom
In conclusion, the proposed on-board robotic SPECT system can be aligned to the LINAC/CBCT coordinate frames using a single pinhole projection of a line-source phantom. This alignment method may be important for multi-pinhole SPECT, where relative pinhole alignment may vary during rotation. For single-pinhole and multi-pinhole SPECT imaging on board radiation therapy machines, the method could provide alignment of SPECT coordinates with those of CBCT and the LINAC. In simulation studies of prone breast imaging and respiratory-gated lung imaging, the 49-pinhole detector showed better tumor contrast recovery and localization in a 4-minute scan than a parallel-hole detector. On-board SPECT could be achieved by a robot maneuvering a SPECT detector about patients in position for radiation therapy on a flat-top couch. The robot's inherent coordinate frames could be an effective means of estimating detector pose for use in SPECT image reconstruction.
Abstract:
The WASP project and the infrastructure supporting the SuperWASP Facility are described. As the instrument, reduction pipeline and archive system are now fully operative, we expect the system to have a major impact on the discovery of bright exo-planet candidates as well as in more general variable-star projects.
Abstract:
A Thomson scattering system has been installed at the Tokyo electron beam ion trap for probing characteristics of the electron beam. A YVO4 green laser beam was injected antiparallel to the electron beam. The image of the Thomson scattering light from the electron beam has been observed using a charge-coupled device (CCD) camera. Using a combination of interference filters, the spectral distribution of the Thomson scattering light has been measured. The Doppler shift observed for the scattered light is consistent with the beam energy. The beam radius was investigated as a function of the beam energy, the beam current, and the magnetic field in the trap region. The variations of the measured beam radius with the beam current and with the magnetic field were similar to those of Herrmann's prediction. The beam radius as a function of the beam energy was also similar to Herrmann's prediction, but seemed to become larger at low energy. (C) 2002 American Institute of Physics.
Abstract:
Utilising cameras as a means to survey the surrounding environment is becoming increasingly popular in a number of different research areas and applications. Central to using camera sensors as input to a vision system is the need to be able to manipulate and process the information captured in these images. One such application is the use of cameras to monitor the quality of airport landing lighting at aerodromes, where a camera is placed inside an aircraft and used to record images of the lighting pattern during the landing phase of a flight. The images are processed to determine a performance metric. This requires the development of custom software for the localisation and identification of luminaires within the image data. However, because of the necessity to keep airport operations functioning as efficiently as possible, it is difficult to collect enough image data to develop, test and validate any developed software. In this paper, we present a technique to model a virtual landing lighting pattern. A mathematical model is postulated which represents the glide path of the aircraft, including random deviations from the expected path. A morphological method has been developed to localise and track the luminaires under different operating conditions. © 2011 IEEE.
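A hedged sketch of morphological luminaire localisation (illustrative only; the paper's specific operators and parameters are not given in the abstract): threshold the bright spots, clean them up with a morphological opening, then label connected components.

```python
# Illustrative morphological localisation of bright luminaires (not the
# paper's algorithm): Otsu threshold, opening, connected-component centroids.
import cv2
import numpy as np

def localise_luminaires(gray_image, min_area=4):
    _, binary = cv2.threshold(gray_image, 0, 255,
                              cv2.THRESH_BINARY | cv2.THRESH_OTSU)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
    cleaned = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(cleaned)
    # Component 0 is the background; keep blobs above a minimum area.
    return [tuple(centroids[i]) for i in range(1, n)
            if stats[i, cv2.CC_STAT_AREA] >= min_area]

frame = np.zeros((120, 160), np.uint8)
cv2.circle(frame, (40, 60), 3, 255, -1)      # two synthetic luminaires
cv2.circle(frame, (100, 30), 3, 255, -1)
print(localise_luminaires(frame))
```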
Abstract:
In this paper, we present a unique cross-layer design framework that allows systematic exploration of the energy-delay-quality trade-offs at the algorithm, architecture and circuit levels of design abstraction for each block of a system. In addition, by taking into consideration the interactions between different sub-blocks of a system, it identifies the design solutions that ensure the least energy at the "right amount of quality" for each sub-block/system under user quality/delay constraints. This is achieved by deriving sensitivity-based design criteria, the balancing of which forms the quantitative relations that can be used early in the system design process to evaluate the energy efficiency of various design options. When applied to the exploration of the energy-quality design space of the main blocks of a digital camera and a wireless receiver, the proposed framework achieves 58% and 33% energy savings under 41% and 20% error increases, respectively. © 2010 ACM.
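As a hedged illustration of what balancing sensitivity-based criteria can look like in a constrained-optimization reading (a generic formulation, not necessarily the paper's exact derivation): minimizing total energy over per-block quality knobs subject to an overall quality constraint equalizes the energy/quality sensitivity ratios across sub-blocks at the optimum.

```latex
% Generic constrained-optimization reading of sensitivity balancing
% (illustrative; not necessarily the paper's exact criterion).
\min_{q_1,\dots,q_n} \sum_i E_i(q_i)
\quad \text{s.t.} \quad Q(q_1,\dots,q_n) \ge Q_{\min},
\qquad
\text{optimality:} \quad
\frac{\partial E_i/\partial q_i}{\partial Q/\partial q_i} = \lambda
\;\; \text{for all sub-blocks } i.
```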
Abstract:
Oceans - San Diego, 2013
Abstract:
This work presents an automatic calibration method for a vision-based external underwater ground-truth positioning system. Such systems are a relevant tool in benchmarking and assessing the quality of research in underwater robotics applications. A stereo vision system can, in suitable environments such as test tanks or in clear-water conditions, provide accurate position with low cost and flexible operation. In this work we present a two-step extrinsic camera parameter calibration procedure in order to reduce the setup time and provide accurate results. The proposed method uses a planar homography decomposition to determine the relative camera poses, and the determination of vanishing points of detected lines in the image to obtain the global pose of the stereo rig in the reference frame. This method was applied to our external vision-based ground-truth system at the INESC TEC/Robotics test tank. Results are presented in comparison with a precise calibration performed using points obtained from an accurate 3D LIDAR model of the environment.
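A hedged sketch of the two ingredients named above, using standard OpenCV calls; the point sets, camera matrix and line parameters are placeholders rather than the authors' data, and the actual two-step procedure is not reproduced.

```python
# Illustrative sketch (not the authors' code): relative pose from a planar
# homography, and a vanishing point as the intersection of detected lines.
import cv2
import numpy as np

def relative_pose_from_homography(pts_cam1, pts_cam2, K):
    H, _ = cv2.findHomography(pts_cam1, pts_cam2, cv2.RANSAC)
    n_solutions, Rs, Ts, normals = cv2.decomposeHomographyMat(H, K)
    return Rs, Ts          # up to four (R, t) candidates to disambiguate

def vanishing_point(lines_abc):
    """Least-squares intersection of image lines (a, b, c) with ax+by+c=0."""
    A = np.asarray(lines_abc, float)
    # The null-space direction of A is the common intersection in homogeneous
    # coordinates; take it from the SVD and dehomogenize.
    _, _, Vt = np.linalg.svd(A)
    vp = Vt[-1]
    return vp[:2] / vp[2]

# Three nearly parallel image lines meeting at (500, 100):
lines = [(0.0, 1.0, -100.0), (0.01, 1.0, -105.0), (-0.01, 1.0, -95.0)]
print(vanishing_point(lines))
```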
Abstract:
In this work we propose the development of a stereo SLS system for underwater inspection operations. We demonstrate how to perform SLS calibration in both dry and underwater environments using two different methods. The proposed methodology achieves quite accurate results, with errors lower than 1 mm in dry environments. We also present a 3D underwater scan of an object of known size, a sea scallop, for which the system achieves a global error lower than 2% of the object size.
Abstract:
We propose a new method, based on inertial sensors, to automatically measure at high frequency the durations of the main phases of ski jumping (i.e. take-off release, take-off, and early flight). The kinematics of the ski jumping movement were recorded by four inertial sensors, attached to the thigh and shank of junior athletes, for 40 jumps performed in indoor conditions and 36 jumps in field conditions. An algorithm was designed to detect temporal events from the recorded signals and to estimate the duration of each phase. These durations were evaluated against a reference camera-based motion capture system and against trainers conducting video observations. The precision for the take-off release and take-off durations (indoor < 39 ms, outdoor = 27 ms) can be considered technically valid for performance assessment. The errors for early flight duration (indoor = 22 ms, outdoor = 119 ms) were comparable to the trainers' variability and should be interpreted with caution. No significant changes in the error were noted between indoor and outdoor conditions, and individual jumping technique did not influence the error of take-off release and take-off. Therefore, the proposed system can provide valuable information for performance evaluation of ski jumpers during training sessions.
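A hedged sketch of threshold-based temporal event detection (the abstract does not specify the actual detection rules; the sample rate, threshold and synthetic signal below are illustrative only):

```python
# Illustrative sketch only (not the paper's algorithm): find phase-boundary
# events as the first samples where an angular-velocity signal crosses a
# threshold, then report event times for duration estimation.
import numpy as np

def detect_events(signal, fs_hz, threshold):
    above = signal > threshold
    onsets = np.flatnonzero(~above[:-1] & above[1:]) + 1   # rising crossings
    return onsets / fs_hz                                    # event times in seconds

fs = 500.0                                   # assumed sample rate
t = np.arange(0, 2.0, 1.0 / fs)
gyro = np.where((t > 0.8) & (t < 1.1), 300.0, 0.0)           # synthetic burst
events = detect_events(gyro, fs, threshold=150.0)
print("take-off onset at %.3f s" % events[0])
```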
Abstract:
This thesis addresses aspects of the filming, projection and perception of panoramic stereo cinema, also called omnistereo cinema. It falls largely within the field of computer vision, but it also touches on computer graphics and human visual perception. Omnistereo cinema projects, onto immersive screens, videos that provide depth information about the scene all around the spectators. This type of cinema involves challenges related notably to the capture of omnistereo videos of dynamic scenes, to polarized projection onto highly reflective screens, which makes estimating their shape by active reconstruction difficult, and to the distortions introduced by omnistereo, which can bias the perception of scene depth. This thesis attempts to address these challenges through three major contributions. First, we developed the very first method for creating omnistereo videos by image stitching for stochastic and localized motions. We designed a psychophysical experiment that shows the effectiveness of the method for scenes without isolated structure, such as water currents. We also propose a capture method that adds less constrained motions, such as those of actors, to these videos. Second, we introduced new light patterns that allow a camera and a projector to recover the shape of objects likely to produce interreflections. These patterns are general enough to reconstruct not only omnistereo screens but also very complex objects that exhibit depth discontinuities from the camera's point of view. Third, we showed that omnistereo distortions are negligible for a spectator located at the center of a cylindrical screen, since they lie in the periphery of the visual field, where acuity is less precise.
Abstract:
Timely detection of sudden changes in dynamics that adversely affect the performance of systems and the quality of products has great scientific relevance. This work focuses on effective detection of dynamical changes in real-time signals from mechanical as well as biological systems using a fast and robust technique, permutation entropy (PE). The results are used in detecting chatter onset in machine turning and in identifying vocal disorders from speech signals. Permutation entropy is a nonlinear complexity measure which can efficiently distinguish regular and complex behaviour of a signal and extract information about a change in the dynamics of the underlying process by indicating a sudden change in its value. Here we propose the use of permutation entropy to detect dynamical changes in two nonlinear processes: turning, a mechanical system, and speech, a biological system. The effectiveness of PE in detecting the change in dynamics of the turning process is studied from time series generated with samples of audio and current signals. Experiments are carried out on a lathe for a sudden increase in depth of cut and for a continuous increase in depth of cut on mild steel work pieces, keeping the speed and feed rate constant. The results are applied to detect chatter onset in machining and are verified using frequency spectra of the signals and a nonlinear measure, the normalized coarse-grained information rate (NCIR). PE analysis is also carried out to investigate the variation in surface texture caused by chatter on the machined work piece. A statistical parameter from the optical grey-level intensity histogram of the laser speckle pattern, recorded using a charge-coupled device (CCD) camera, is used to generate the time series required for PE analysis. A standard optical roughness parameter is used to confirm the results. The application of PE in identifying vocal disorders is studied from speech signals recorded using a microphone. Here the analysis is carried out using speech signals of subjects with different pathological conditions and of normal subjects, and the results are used for identifying vocal disorders. The standard linear technique of FFT is used to substantiate the results. The results of PE analysis in all three cases clearly indicate that this complexity measure is sensitive to changes in the regularity of a signal and hence can suitably be used for detection of dynamical changes in real-world systems. This work establishes the application of the simple, inexpensive and fast algorithm of PE for the benefit of advanced manufacturing processes as well as clinical diagnosis of vocal disorders.
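For reference, permutation entropy itself is a standard measure (Bandt-Pompe); a minimal implementation is sketched below, with an illustrative embedding order and delay rather than the parameters used in this work.

```python
# Standard permutation entropy (Bandt & Pompe); the order and delay below are
# illustrative, not those used in the thesis.
import math
import numpy as np
from collections import Counter

def permutation_entropy(x, order=5, delay=1, normalize=True):
    x = np.asarray(x, float)
    n_vectors = len(x) - (order - 1) * delay
    # Count ordinal patterns (ranks of each embedded window).
    patterns = Counter(
        tuple(np.argsort(x[i : i + order * delay : delay]))
        for i in range(n_vectors)
    )
    probs = np.array(list(patterns.values()), float) / n_vectors
    pe = -np.sum(probs * np.log2(probs))
    return pe / math.log2(math.factorial(order)) if normalize else pe

rng = np.random.default_rng(1)
t = np.linspace(0, 20 * np.pi, 4000)
print("regular signal:", round(permutation_entropy(np.sin(t)), 3))   # low PE
print("noisy signal  :", round(permutation_entropy(rng.normal(size=4000)), 3))
```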
Abstract:
The thermal processing of food affects its quality and nutritional properties. In the home, monitoring the temperature inside the food is very difficult. Moreover, knowledge of the optimal temperature and time parameters for different dishes is often insufficient. Optimal control of thermal preparation depends largely on the type of food and on the external and internal temperature exposure during cooking. The goal of this work was the development of an automatic oven that is able to recognize the type of food and to calculate the temperature inside the food during baking. The data required for the temperature calculation were acquired with several sensors. Inside the oven, an infrared thermometer, an infrared distance sensor, a camera, a temperature sensor and a lambda probe were used. In addition, a load cell, a current sensor, a voltage sensor and a temperature sensor outside the oven were used. The data sets recorded during the heat-up phase enabled the training of several artificial neural networks (ANNs) that could classify the various foods into the corresponding categories in order to select the optimal baking program. Several ANNs were also trained to estimate the thermal diffusivity of the food, which depends on its composition (carbohydrates, fat, protein, water). With the exception of the fat content, all components could be estimated sufficiently accurately by different ANNs with a maximum of 8 hidden neurons to serve as the basis for calculating the temperature inside the food. The work shows that, with the help of a variety of sensors for direct or indirect measurement of the external properties of the food, together with ANNs for categorization and estimation of the food composition, automatic recognition and calculation of the internal temperature of a wide range of foods is possible.
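A hedged sketch of an estimator with 8 hidden neurons of the kind described above (the features, data and target below are synthetic placeholders, not the thesis setup):

```python
# Hedged sketch (hypothetical features and data, not the thesis setup): a small
# neural network with 8 hidden neurons mapping sensor readings taken during
# heat-up to an estimated water fraction of the food.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
# Columns: surface temperature (IR), weight, humidity proxy (lambda probe),
# camera brightness -- all illustrative stand-ins for the real sensor features.
X = rng.normal(size=(200, 4))
water_fraction = 0.5 + 0.1 * X[:, 0] - 0.05 * X[:, 2] + rng.normal(0, 0.01, 200)

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0),
)
model.fit(X[:150], water_fraction[:150])
print("held-out R^2:", round(model.score(X[150:], water_fraction[150:]), 2))
```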