796 results for motion sensor


Relevance: 30.00%

Abstract:

This work presents a method of information fusion involving data captured by both a standard charge-coupled device (CCD) camera and a time-of-flight (ToF) camera, to be used in detecting proximity between a manipulator robot and a human. Both cameras are assumed to be located above the work area of an industrial robot. The fusion of colour images and time-of-flight information makes it possible to determine the 3D localization of objects with respect to a world coordinate system while simultaneously recovering their colour information. Because the ToF information given by the range camera contains inaccuracies, including distance error, border error, and pixel saturation, corrections to the ToF information are proposed and developed to improve the results. The proposed fusion method uses the calibration parameters of both cameras to reproject 3D ToF points, expressed in a coordinate system common to both cameras and a robot arm, into 2D colour images. In addition, the 3D information is used to detect motion in the industrial robot environment, and the fusion of information is applied to the foreground objects detected in this way. This combination of information yields a matrix that links colour and 3D information, making it possible to characterise an object by its colour in addition to its 3D localization. Further development of these methods will make it possible to identify objects and their positions in the real world, and to use this information to prevent collisions between the robot and such objects.
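
To make the reprojection step concrete, the sketch below projects ToF points, already expressed in the common world frame, into the colour image through a pinhole model. The intrinsic matrix K and the extrinsics R, t are illustrative placeholders, not the calibration of the actual setup.

```python
import numpy as np

def reproject_tof_points(points_3d, K, R, t):
    """Project 3D ToF points (world frame) into the colour image plane.

    points_3d : (N, 3) array of points in the common world frame
    K         : (3, 3) colour-camera intrinsic matrix (assumed values)
    R, t      : rotation (3, 3) and translation (3,) from world to camera
    Returns (N, 2) pixel coordinates.
    """
    cam = points_3d @ R.T + t            # world frame -> camera frame
    uvw = cam @ K.T                      # camera frame -> homogeneous pixels
    return uvw[:, :2] / uvw[:, 2:3]      # perspective division

# Example: one point 1.5 m below a ceiling-mounted camera
K = np.array([[525.0, 0.0, 320.0],
              [0.0, 525.0, 240.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.zeros(3)
print(reproject_tof_points(np.array([[0.2, 0.1, 1.5]]), K, R, t))  # [[390. 275.]]
```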

Relevance: 30.00%

Abstract:

Background: Previous assessment methods for recognition of Parkinson gait (PG) relied on wearable sensor mechanisms that may cause discomfort. To avoid the stress of applying wearable sensors, computer vision (CV) based diagnostic systems for PG recognition have been proposed. The main constraints in these methods are the laboratory setup procedures: specially designed colored garments are required to segment the subject's body from a specific colored background. Objective: To develop an image processing tool for home assessment of PG by analyzing motion cues extracted during the gait cycles. Methods: The system is based on the idea that a normal body attains equilibrium during gait by aligning the body posture with the axis of gravity. Due to rigidity in muscular tone, persons with Parkinson's disease (PD) fail to align their bodies with the axis of gravity; their leaning posture appears to fall forward, whereas a normal subject maintains an erect posture throughout the gait. Patients with PD walk with a shortened stride angle (less than 15 degrees on average) and high variability in stride frequency, whereas a normal gait exhibits a constant stride frequency with an average stride angle of 45 degrees. To analyze PG, levodopa-responsive patients and normal controls were videotaped over several gait cycles. First, the subject's body is segmented in each frame of the gait video, based on pixel contrast from the background, to form a silhouette. Next, the center of gravity of this silhouette is calculated. The silhouette is then skeletonized to extract the motion cues. Two motion cues were used: stride frequency, based on the cyclic leg motion, and lean frequency, based on the angle between the leaning torso tangent and the axis of gravity. The differences in the peaks of the stride and lean frequencies between PG and normal gait are calculated using cosine similarity measurements. Results: High cosine dissimilarity was observed in the stride and lean frequencies between PG and normal gait. High variation was found in the stride intervals of PG, whereas constant stride intervals were found in the normal gait. Conclusions: We propose this algorithm as a means of eliminating laboratory constraints and discomfort during PG analysis. Installing the tool on a home computer with a webcam allows assessment of gait in the home environment.
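
As an illustration of the comparison step, the following sketch computes the cosine similarity between two frequency-magnitude spectra, one from a steady stride signal and one from a variable, PD-like stride signal. The signals and the 30 Hz frame rate are synthetic assumptions, not the study's data.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two frequency-magnitude vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

fs = 30.0                                   # assumed video frame rate (Hz)
t = np.arange(0, 10, 1 / fs)
normal_stride = np.sin(2 * np.pi * 1.0 * t)                        # steady ~1 Hz cycle
pd_stride = np.sin(2 * np.pi * (1.3 + 0.3 * np.sin(0.5 * t)) * t)  # variable frequency

spec_normal = np.abs(np.fft.rfft(normal_stride))    # stride-frequency spectra
spec_pd = np.abs(np.fft.rfft(pd_stride))
print(f"cosine similarity: {cosine_similarity(spec_normal, spec_pd):.3f}")
```

A value near 1 indicates matching spectra; the lower the value, the greater the dissimilarity between the PD-like and normal gait cues.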

Relevance: 30.00%

Abstract:

Visual odometry is the process of estimating camera position and orientation based solely on images and on features (projections of visual landmarks present in the scene) extracted from them. With the increasing advance of computer vision algorithms and computer processing power, the subarea known as Structure from Motion (SFM) began to supply mathematical tools for localization systems in robotics and augmented reality applications, in contrast with its initial purpose of serving inherently offline solutions aimed at 3D reconstruction and image-based modelling. Accordingly, this work proposes a pipeline for obtaining relative position, using a previously calibrated camera as a positional sensor and based entirely on models and algorithms from SFM. Techniques usually applied in camera localization systems, such as Kalman filters and particle filters, are not used, making additional information such as probabilistic models for camera state transition unnecessary. Experiments assessing both the 3D reconstruction quality and the camera position estimated by the system were performed, in which image sequences captured in realistic scenarios were processed and compared to localization data gathered from a mobile robotic platform.
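
A minimal two-view relative-pose step in the spirit of such an SFM-based pipeline might look as follows, using OpenCV's feature matching and essential-matrix routines. This is a generic sketch, not the author's exact pipeline; the intrinsic matrix K is a placeholder for the pre-calibrated camera parameters.

```python
import cv2
import numpy as np

# Hypothetical intrinsics of the pre-calibrated camera
K = np.array([[700.0, 0.0, 320.0],
              [0.0, 700.0, 240.0],
              [0.0, 0.0, 1.0]])

def relative_pose(img1, img2):
    """Estimate relative camera rotation and translation (up to scale)
    from feature matches between two frames, a basic SFM building block."""
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(img1, None)
    k2, d2 = orb.detectAndCompute(img2, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    p1 = np.float32([k1[m.queryIdx].pt for m in matches])
    p2 = np.float32([k2[m.trainIdx].pt for m in matches])
    # Robustly fit the essential matrix, then decompose it into R, t
    E, mask = cv2.findEssentialMat(p1, p2, K, cv2.RANSAC, 0.999, 1.0)
    _, R, t, _ = cv2.recoverPose(E, p1, p2, K, mask=mask)
    return R, t   # t is a unit vector: monocular scale is unobservable
```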

Relevance: 30.00%

Abstract:

A semi-autonomous unmanned underwater vehicle (UUV), named LAURS, is being developed at the Laboratory of Sensors and Actuators at the University of Sao Paulo. The vehicle has been designed to provide inspection and intervention capabilities in specific missions of deep-water oil fields. In this work, a method for modeling and identifying the yaw motion dynamics of an open-frame underwater vehicle is presented. Using an on-board low-cost magnetic compass sensor, the method is based on an uncoupled 1-DOF (degree-of-freedom) dynamic system equation and on the integral method, i.e., the classical least-squares algorithm applied to the integral form of the dynamic system equation. Experimental trials with the actual vehicle have been performed in a test tank and a diving pool. During these experiments, the thrusters responsible for yaw motion were driven by sinusoidal voltage signal profiles. An assessment of the feasibility of the method reveals that the estimated dynamic system models are more reliable when slow and small sinusoidal voltage signal profiles are considered, i.e., profiles with larger periods and relatively small amplitude and offset.
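
A sketch of the integral method follows, assuming an uncoupled 1-DOF yaw model of the form I*r_dot + d*r = tau: integrating the equation over time and solving the resulting linear system by least squares. The model structure, parameter values, and excitation below are illustrative assumptions, not LAURS data.

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

# Assumed uncoupled 1-DOF yaw model:  I*r_dot + d*r = tau
# Integral form over [0, t]:  I*(r(t) - r(0)) + d*int(r) = int(tau)
def identify_yaw_params(t, r, tau):
    """Least-squares estimate of the inertia-like coefficient I and damping d
    from yaw rate r(t) and applied yaw moment tau(t), via the integral method."""
    int_r = cumulative_trapezoid(r, t, initial=0.0)
    int_tau = cumulative_trapezoid(tau, t, initial=0.0)
    A = np.column_stack([r - r[0], int_r])          # regressor matrix
    theta, *_ = np.linalg.lstsq(A, int_tau, rcond=None)
    return theta                                    # [I, d]

# Synthetic check with I = 50, d = 20 and a slow sinusoidal excitation
t = np.linspace(0, 60, 3000)
tau = 5.0 * np.sin(2 * np.pi * 0.05 * t)
r = np.zeros_like(t)
for k in range(1, len(t)):                          # simple Euler simulation
    dt = t[k] - t[k - 1]
    r[k] = r[k - 1] + dt * (tau[k - 1] - 20.0 * r[k - 1]) / 50.0
print(identify_yaw_params(t, r, tau))               # approximately [50, 20]
```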

Relevance: 30.00%

Abstract:

In the last few years, the vision of our connected and intelligent information society has evolved to embrace novel technological and research trends. The diffusion of ubiquitous mobile connectivity and advanced handheld portable devices has amplified the importance of the Internet as the communication backbone for accessing services and data. The spread of mobile and pervasive computing devices, featuring advanced sensing technologies and processing capabilities, has triggered the adoption of innovative interaction paradigms: touch-responsive surfaces, tangible interfaces, and gesture or voice recognition are finally entering our homes and workplaces. We are experiencing the proliferation of smart objects and sensor networks, embedded in our daily lives and interconnected through the Internet. This ubiquitous network of always-available interconnected devices is enabling new applications and services, ranging from enhancements to home and office environments, to remote healthcare assistance and the birth of the smart environment. This work presents developments in the hardware and software of embedded systems and sensor networks. Different hardware solutions are introduced, ranging from smart objects for interaction to advanced inertial sensor nodes for motion tracking, with a focus on system-level design. They are accompanied by the study of innovative data processing algorithms developed and optimized to run on board the embedded devices. Gesture recognition, orientation estimation, and data reconstruction techniques for sensor networks are introduced and implemented, with the goal of maximizing the trade-off between performance and energy efficiency. Experimental results provide an evaluation of the accuracy of the presented methods and validate the efficiency of the proposed embedded systems.
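
As one example of the kind of lightweight on-board orientation-estimation algorithm such inertial sensor nodes can run, here is a minimal complementary-filter sketch for pitch from gyroscope and accelerometer data. It is a generic textbook technique, not necessarily the algorithm developed in this work; all values are invented.

```python
import math

def complementary_filter(pitch_prev, gyro_rate, accel_x, accel_z, dt, alpha=0.98):
    """One update step of a complementary filter for pitch estimation.

    gyro_rate        : pitch rate from the gyroscope (rad/s)
    accel_x, accel_z : accelerometer readings (m/s^2)
    alpha            : blend factor (gyro trust vs. accelerometer trust)
    """
    pitch_gyro = pitch_prev + gyro_rate * dt      # fast but drifting estimate
    pitch_accel = math.atan2(accel_x, accel_z)    # noisy but drift-free estimate
    return alpha * pitch_gyro + (1.0 - alpha) * pitch_accel

# Example: stationary node tilted ~5.7 degrees; estimate converges over time
pitch = 0.0
for _ in range(500):
    pitch = complementary_filter(pitch, 0.0, 0.98, 9.76, 0.01)
print(math.degrees(pitch))                        # close to 5.7 degrees
```

The single blend multiply-add per step is what makes this family of filters attractive for the performance-versus-energy trade-off discussed above.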

Relevance: 30.00%

Abstract:

Introduction: Spinal fusion is a widely and successfully performed strategy for the treatment of spinal deformities and degenerative diseases. The general approach has been to stabilize the spine with implants so that a solid bony fusion between the vertebrae can develop. However, new implant designs have emerged that aim at preserving or restoring the motion of the spinal segment. In addition to static load-sharing principles, these designs also require a profound knowledge of kinematic and dynamic properties to properly characterise the in vivo performance of the implants. Methods: To address this, an apparatus was developed that enables the intraoperative determination of the load-displacement behavior of spinal motion segments. The apparatus consists of a sensor-equipped distractor to measure the applied force between the transverse processes, and an optoelectronic camera to track the motion of the vertebrae and the distractor. In this intraoperative trial, measurements were made at four motion segments in each of two patients with adolescent idiopathic scoliosis with right thoracic curves. Results: At a lateral bending moment of 5 N·m, the mean flexibility of all eight motion segments was 0.18 ± 0.08°/(N·m) on the convex side and 0.24 ± 0.11°/(N·m) on the concave side. Discussion: The results agree with published data obtained from cadaver studies with and without axial preload. Intraoperatively acquired data from this method may serve as input for mathematical models and contribute to the development of new implants and treatment strategies.
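
For orientation, a small worked example of the flexibility calculation (angular displacement per unit applied moment) is given below; all numbers are illustrative assumptions, not the study's measurements.

```python
# Flexibility of a motion segment: angular displacement per unit applied moment.
applied_force_n = 25.0        # force measured between the transverse processes
lever_arm_m = 0.20            # assumed effective lever arm of the distractor
moment_nm = applied_force_n * lever_arm_m            # 5 N*m bending moment
segment_rotation_deg = 1.0    # vertebral rotation tracked by the camera
flexibility = segment_rotation_deg / moment_nm
print(f"flexibility: {flexibility:.2f} deg/(N*m)")   # 0.20 deg/(N*m)
```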

Relevance: 30.00%

Abstract:

Here, an inertial sensor-based monitoring system for measuring and analyzing upper limb movements is presented. The final goal is the integration of this motion-tracking device within a portable rehabilitation system for brain injury patients. A set of four inertial sensors mounted on a special garment worn by the patient provides the quaternions representing the orientation of the patient's upper limb in space. A kinematic model is built to estimate 3D upper limb motion for accurate therapeutic evaluation. The human upper limb is represented as a kinematic chain of rigid bodies with three joints and six degrees of freedom. The system was validated by co-registration of movements with a commercial optoelectronic tracking system in experiments at the Institut Guttmann Neurorehabilitation Hospital; the results show a high correlation between the signals provided by the two devices.
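
A minimal sketch of how quaternions from segment-mounted sensors can drive a kinematic chain is shown below. The reduction to two segments, the segment lengths, and the reference directions are simplifying assumptions; the actual model uses four sensors, three joints, and six degrees of freedom.

```python
import numpy as np

def quat_to_rot(q):
    """Rotation matrix from a unit quaternion q = (w, x, y, z)."""
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def wrist_position(q_upper, q_fore, l_upper=0.30, l_fore=0.25):
    """Forward kinematics of a two-segment arm chain: the shoulder-to-elbow
    and elbow-to-wrist vectors are rotated by each segment's quaternion.
    Segment lengths (m) are illustrative."""
    elbow = quat_to_rot(q_upper) @ np.array([0.0, 0.0, -l_upper])
    return elbow + quat_to_rot(q_fore) @ np.array([0.0, 0.0, -l_fore])

# Arm hanging straight down, forearm flexed 90 degrees about the x-axis
q_down = np.array([1.0, 0.0, 0.0, 0.0])
q_flex = np.array([np.cos(np.pi/4), np.sin(np.pi/4), 0.0, 0.0])
print(wrist_position(q_down, q_flex))   # approximately (0, 0.25, -0.30)
```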

Relevance: 30.00%

Abstract:

Sensing systems in living bodies offer a large variety of configurations and philosophies that can be emulated in artificial sensing systems. Motion detection is one area where different animals adopt different solutions and, in most cases, these solutions take a very sophisticated form. One of them, the mammalian visual system, presents several advantages with respect to artificial systems. The main objective of this paper is to present a system, based on this biological structure, able to detect motion, its direction, and its characteristics. The adopted configuration mirrors the internal structure of the mammalian retina, where just five types of cells arranged in five layers are able to differentiate a large number of characteristics of the image impinging on it. Its main advantage is that the detection of these properties is based purely on hardware. A simple unit, based on an optical logic cell previously employed in optical computing, is the basis for emulating the different behaviors of the biological neurons. No software is present, so no outside interference can affect the final behavior. This type of structure is able to work, once the internal configuration is implemented, without any further attention. The architecture presented offers several possibilities: detection of motion, and of its direction and intensity. Moreover, some other characteristics, such as symmetry, may be obtained.
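
The paper's detector is purely hardware, built from optical logic cells; as a behavioural illustration only, the sketch below emulates a correlation-type elementary motion detector (a Reichardt-style scheme, a related but different mechanism) over a one-dimensional row of receptors.

```python
import numpy as np

def emd_output(frames, delay=1):
    """Correlation-type elementary motion detector over a 1-D row of
    receptors: each unit multiplies the delayed signal of one receptor
    with the current signal of its right neighbour, and vice versa.
    A positive sum indicates rightward motion; negative, leftward;
    the magnitude reflects motion intensity."""
    f = np.asarray(frames, dtype=float)
    a, b = f[:, :-1], f[:, 1:]                  # neighbouring receptor pairs
    return float(np.sum(a[:-delay] * b[delay:] - a[delay:] * b[:-delay]))

frames = np.zeros((6, 8))
for t in range(6):
    frames[t, t] = 1.0                          # a bright spot drifting rightward
print(emd_output(frames))                       # > 0 -> rightward motion detected
```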

Relevance: 30.00%

Abstract:

Upper limb function impairment is one of the most common sequelae of central nervous system injury, especially in stroke patients and when spinal cord injury produces tetraplegia. Conventional assessment methods cannot provide objective evaluation of patient performance and the effectiveness of therapies. The most common assessment tools are based on rating scales, which are inefficient when measuring small changes and can yield subjective bias. In this study, we designed an inertial sensor-based monitoring system composed of five sensors to measure and analyze the complex movements of the upper limbs, which are common in activities of daily living. We developed a kinematic model with nine degrees of freedom to analyze upper limb and head movements in three dimensions. This system was then validated against a commercial optoelectronic system. These findings suggest that an inertial sensor-based motion tracking system can be used in patients who have upper limb impairment through data integration with a virtual reality-based neurorehabilitation system.
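
Validation against an optoelectronic reference typically reduces to comparing synchronized joint-angle traces; a minimal sketch of such a comparison using Pearson correlation is given below, with synthetic signals standing in for real recordings.

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation between two synchronized joint-angle traces,
    e.g. one from the inertial system, one from the optoelectronic system."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return float(np.corrcoef(x, y)[0, 1])

t = np.linspace(0, 5, 500)
optical = 40 * np.sin(2 * np.pi * 0.4 * t)              # reference elbow angle (deg)
inertial = optical + np.random.normal(0, 1.5, t.size)   # same motion plus sensor noise
print(f"r = {pearson_r(optical, inertial):.3f}")        # close to 1.0
```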

Relevance: 30.00%

Abstract:

Bayesian nonparametric models, such as the Gaussian process and the Dirichlet process, have been extensively applied to target kinematics modeling in various applications, including environmental monitoring, traffic planning, endangered species tracking, dynamic scene analysis, autonomous robot navigation, and human motion modeling. As shown by these successful applications, Bayesian nonparametric models are able to adjust their complexity adaptively from data as necessary, and are resistant to overfitting or underfitting. However, most existing works assume that the sensor measurements used to learn the Bayesian nonparametric target kinematics models are obtained a priori, or that the target kinematics can be measured by the sensor at any given time throughout the task. Little work has been done on controlling a sensor with a bounded field of view to obtain the measurements of mobile targets that are most informative for reducing the uncertainty of the Bayesian nonparametric models. To present the systematic sensor planning approach to learning Bayesian nonparametric models, the Gaussian process target kinematics model is introduced first; it is capable of describing time-invariant spatial phenomena, such as ocean currents, temperature distributions, and wind velocity fields. The Dirichlet process-Gaussian process target kinematics model is subsequently discussed for modeling mixtures of mobile targets, such as pedestrian motion patterns.
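
A minimal Gaussian process regression sketch of the kind underlying the GP target kinematics model is shown below, predicting a scalar field (e.g., one ocean-current component) from a few buoy-like measurements. The kernel choice, hyperparameters, and data are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(X1, X2, length=1.0, var=1.0):
    """Squared-exponential covariance between two sets of 2-D locations."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return var * np.exp(-0.5 * d2 / length**2)

def gp_posterior(X_train, y_train, X_test, noise=0.01):
    """Posterior mean and variance of Gaussian process regression."""
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    K_s = rbf_kernel(X_test, X_train)
    K_ss = rbf_kernel(X_test, X_test)
    L = np.linalg.cholesky(K)                       # stable solve via Cholesky
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = K_s @ alpha
    v = np.linalg.solve(L, K_s.T)
    cov = K_ss - v.T @ v
    return mean, np.diag(cov)

# Hypothetical buoy measurements of an eastward current component (m/s)
X = np.array([[0.0, 0.0], [1.0, 0.5], [2.0, 1.0]])
y = np.array([0.3, 0.5, 0.2])
mean, var = gp_posterior(X, y, np.array([[1.5, 0.75]]))
print(mean, var)    # predictive mean and its remaining uncertainty
```

The predictive variance returned here is exactly the uncertainty that informative sensor placement seeks to reduce.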

Novel information-theoretic functions are developed for these Bayesian nonparametric target kinematics models to represent the expected utility of measurements as a function of sensor control inputs and random environmental variables. A Gaussian process expected Kullback-Leibler (KL) divergence is developed as the expectation of the KL divergence between the current (prior) and posterior Gaussian process target kinematics models with respect to the future measurements. This approach is then extended to develop a new information value function that can be used to estimate target kinematics described by a Dirichlet process-Gaussian process mixture model. A theorem is proposed showing that the novel information-theoretic functions are bounded. Based on this theorem, efficient estimators of the new information-theoretic functions are designed, which are proved to be unbiased, with the variance of the resultant approximation error decreasing linearly as the number of samples increases. The computational complexity of optimizing the novel information-theoretic functions under sensor dynamics constraints is studied and proved to be NP-hard. A cumulative lower bound is then proposed to reduce the computational complexity to polynomial time.
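
At a single test point with Gaussian prior and posterior predictions, the KL divergence has a closed form; the sketch below shows this scalar building block. It illustrates the quantity being scored, not the dissertation's full expected-KL estimator.

```python
import numpy as np

def kl_gaussian(mu0, var0, mu1, var1):
    """KL( N(mu0, var0) || N(mu1, var1) ) for scalar Gaussians: the building
    block for scoring how much a future measurement would move the prior
    GP prediction to the posterior one."""
    return 0.5 * (var0 / var1 + (mu1 - mu0) ** 2 / var1
                  - 1.0 + np.log(var1 / var0))

# A measurement that halves the predictive variance and shifts the mean
print(kl_gaussian(mu0=0.0, var0=1.0, mu1=0.2, var1=0.5))
```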

Three sensor planning algorithms are developed according to the assumptions on the target kinematics and the sensor dynamics. For problems where the control space of the sensor is discrete, a greedy algorithm is proposed. The efficiency of the greedy algorithm is demonstrated by a numerical experiment with ocean current data obtained from moored buoys. A sweep-line algorithm is developed for applications where the sensor control space is continuous and unconstrained. Synthetic simulations as well as physical experiments with ground robots and a surveillance camera are conducted to evaluate the performance of the sweep-line algorithm. Moreover, a lexicographic algorithm is designed, based on the cumulative lower bound of the novel information-theoretic functions, for the scenario where the sensor dynamics are constrained. Numerical experiments with real data collected from indoor pedestrians by a commercial pan-tilt camera are performed to examine the lexicographic algorithm. Results from both the numerical simulations and the physical experiments show that the three sensor planning algorithms proposed in this dissertation, based on the novel information-theoretic functions, are superior at learning the target kinematics with little or no prior knowledge.
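
For the discrete-control-space case, a greedy planner can be sketched as below; the info_value callback stands in for an estimator of the expected-KL objective, and the diminishing-returns toy values are invented for the example.

```python
def greedy_sensor_plan(candidate_controls, info_value, budget):
    """Greedy planner for a discrete sensor control space: at each step,
    pick the control with the highest estimated information value given
    the controls already chosen."""
    chosen = []
    remaining = list(candidate_controls)
    for _ in range(budget):
        best = max(remaining, key=lambda u: info_value(chosen, u))
        chosen.append(best)
        remaining.remove(best)
    return chosen

# Toy example: diminishing-returns value for pointing the sensor at 5 sectors
gains = {0: 1.0, 1: 3.0, 2: 2.0, 3: 0.5, 4: 2.5}
value = lambda chosen, u: gains[u] / (1 + len(chosen))
print(greedy_sensor_plan(range(5), value, budget=3))   # [1, 4, 2]
```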

Relevance: 30.00%

Abstract:

A sensing device for a touchless, hand-gesture user interface, based on an inexpensive passive infrared pyroelectric detector array, is presented. The 2 × 2 element sensor responds to the changing infrared radiation generated by hand movement over the array. The sensing range is from a few millimetres to tens of centimetres. The low power consumption (< 50 μW) enables the sensor's use in mobile devices and in low-energy applications. Detection rates of 77% have been demonstrated using a prototype system that differentiates the four main hand motion trajectories: up, down, left, and right. This device allows greater non-contact control capability without an increase in size, cost, or power consumption over existing on/off devices.
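
One plausible way to differentiate the four trajectories from a 2 × 2 array is to compare the arrival times of the signal peaks across columns and rows; the sketch below illustrates this idea. The element layout and the timing-based rule are assumptions for illustration, not the paper's detection algorithm.

```python
import numpy as np

def classify_swipe(peak_times):
    """Classify a hand swipe over a 2x2 pyroelectric array from the times
    at which each element's signal peaks. Assumed layout:
        [0] [1]
        [2] [3]
    A left-to-right swipe triggers the left column before the right one."""
    t = np.asarray(peak_times, dtype=float)
    lr = (t[1] + t[3]) - (t[0] + t[2])    # right column minus left column
    ud = (t[2] + t[3]) - (t[0] + t[1])    # bottom row minus top row
    if abs(lr) >= abs(ud):
        return "right" if lr > 0 else "left"
    return "down" if ud > 0 else "up"

print(classify_swipe([0.10, 0.25, 0.11, 0.26]))   # -> "right"
```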

Relevance: 30.00%

Abstract:

Simultaneous Localization and Mapping (SLAM) is a procedure used to determine the location of a mobile vehicle in an unknown environment while constructing a map of that environment at the same time. Mobile platforms that make use of SLAM algorithms have industrial applications in autonomous maintenance, such as the inspection of flaws and defects in oil pipelines and storage tanks. A typical SLAM system consists of four main components: experimental setup (data gathering), vehicle pose estimation, feature extraction, and filtering. Feature extraction is the process of identifying significant features of the unknown environment, such as corners, edges, walls, and interior features. In this work, an original feature extraction algorithm specific to distance measurements obtained from SONAR sensor data is presented. The algorithm combines the SONAR Salient Feature Extraction Algorithm and the Triangulation Hough Based Fusion with point-in-polygon detection. The maps reconstructed from simulations and experimental data with the fusion algorithm are compared to maps obtained with existing feature extraction algorithms. Based on the results, it is suggested that the proposed algorithm can be employed for data obtained from SONAR sensors in environments where other forms of sensing are not viable. The fusion algorithm for feature extraction requires the vehicle pose estimate as an input, which is obtained from a vehicle pose estimation model. For vehicle pose estimation, the author uses sensor integration to estimate the pose of the mobile vehicle. Different combinations of sensors are studied (e.g., encoder, gyroscope, or encoder and gyroscope), and the different sensor fusion techniques for pose estimation are experimentally studied and compared. The vehicle pose estimation model that produces the least error is used to generate inputs for the feature extraction fusion algorithm. In the experimental studies, two environmental configurations are used: one without interior features and one with two interior features. Numerical and experimental findings are discussed. Finally, the SLAM algorithm is implemented along with the algorithms for feature extraction and vehicle pose estimation. Three different cases are experimentally studied, with the floor of the environment intentionally altered to induce slipping. Results obtained for implementations with and without SLAM are compared and discussed. The present work represents a step towards the realization of autonomous inspection platforms for performing concurrent localization and mapping in harsh environments.
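
Of the components named above, point-in-polygon detection is the simplest to illustrate; below is a standard ray-casting test of the kind the fusion algorithm could use to decide whether a candidate feature lies inside the mapped region. The rectangular "room" is an invented example.

```python
def point_in_polygon(x, y, poly):
    """Ray-casting (even-odd) point-in-polygon test.
    poly is a list of (x, y) vertices in order; a horizontal ray is cast
    from the query point and boundary crossings are counted."""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):                      # edge spans the ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

room = [(0, 0), (4, 0), (4, 3), (0, 3)]               # simple rectangular map
print(point_in_polygon(2.0, 1.5, room))               # True
print(point_in_polygon(5.0, 1.5, room))               # False
```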

Relevance: 30.00%

Abstract:

The structure of an animal's eye is determined by the tasks it must perform. While vertebrates rely on their two eyes for all visual functions, insects have evolved a wide range of specialized visual organs to support behaviors such as prey capture, predator evasion, mate pursuit, flight stabilization, and navigation. Compound eyes and ocelli constitute the vision-forming and sensing mechanisms of some flying insects, providing signals useful for flight stabilization and navigation. In contrast to the well-studied compound eye, the ocelli, regarded as a second visual system, sense fast luminance changes and allow for fast visual processing. Using a luminance-based sensor that mimics the insect ocelli and a camera-based motion detection system, a frequency-domain characterization of an ocellar sensor and of optic flow (due to rotational motion) is presented. Inspired by the insect neurons that combine signals from both vision-sensing mechanisms, the advantages, disadvantages, and complementary properties of ocellar and optic flow estimates are discussed.
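
A frequency-domain characterization can be sketched as the ratio of output to input spectra for a known luminance stimulus; the code below illustrates this with a synthetic attenuated, phase-lagged response standing in for the ocellar sensor output.

```python
import numpy as np

def frequency_response(stimulus, response, fs):
    """Empirical frequency-response magnitude: ratio of output to input
    spectra for a sensor driven by a known stimulus sampled at fs Hz."""
    S = np.fft.rfft(stimulus)
    R = np.fft.rfft(response)
    freqs = np.fft.rfftfreq(len(stimulus), 1.0 / fs)
    return freqs, np.abs(R) / (np.abs(S) + 1e-12)     # guard empty bins

fs = 200.0
t = np.arange(0, 4, 1 / fs)
stim = np.sin(2 * np.pi * 5 * t)                  # 5 Hz luminance stimulus
resp = 0.8 * np.sin(2 * np.pi * 5 * t - 0.3)      # attenuated, phase-lagged
freqs, gain = frequency_response(stim, resp, fs)
print(gain[np.argmin(np.abs(freqs - 5.0))])       # ~0.8 gain at 5 Hz
```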

Relevance: 30.00%

Abstract:

A camera maps three-dimensional (3D) world space to a two-dimensional (2D) image space. In the process, it loses depth information, i.e., the distance from the camera's focal point to the imaged objects; it is impossible to recover this information from a single image. However, by using two or more images from different viewing angles, this information can be recovered, which in turn can be used to obtain the pose (position and orientation) of the camera. Using this pose, a 3D reconstruction of the imaged objects can be computed. Numerous algorithms have been proposed and implemented to solve this problem; they are commonly called Structure from Motion (SfM). State-of-the-art SfM techniques have been shown to give promising results. However, unlike a Global Positioning System (GPS) or an Inertial Measurement Unit (IMU), which directly give position and orientation respectively, a camera system estimates them only after running SfM as described above. This makes the pose obtained from a camera highly sensitive to the captured images and to effects such as low lighting, poor focus, or improper viewing angles. In some applications, for example an Unmanned Aerial Vehicle (UAV) inspecting a bridge, or a robot mapping an environment using Simultaneous Localization and Mapping (SLAM), it is often difficult to capture images under ideal conditions. This report examines the use of SfM methods in such applications and the role of combining multiple sensors, viz. sensor fusion, to achieve more accurate and usable position and reconstruction information. The project investigates the role of sensor fusion in accurately estimating the pose of a camera for the application of 3D reconstruction of a scene. The first set of experiments is conducted in a motion capture room; these results are taken as ground truth in order to evaluate the strengths and weaknesses of each sensor and to map their coordinate systems. A number of scenarios in which SfM fails are then targeted: the pose estimates obtained from SfM are replaced by those obtained from other sensors, and the 3D reconstruction is completed. Quantitative and qualitative comparisons are made between the 3D reconstruction obtained using only a camera and that obtained using the camera along with a LIDAR and/or an IMU. Additionally, the project addresses the performance issues faced when handling large data sets of high-resolution images by implementing the system on the Superior high-performance computing cluster at Michigan Technological University.
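
Once camera poses are available, whether from SfM or from a substitute sensor, reconstruction reduces to triangulation; below is a standard linear (DLT) triangulation sketch for one point seen in two views. The camera matrices and the test point are invented for the example.

```python
import numpy as np

def triangulate_point(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation of one 3D point from two views.
    P1, P2 are 3x4 projection matrices; whichever sensor supplied the
    pose (SfM, IMU, or LIDAR-aided), reconstruction proceeds the same way.
    uv1, uv2 are the point's pixel coordinates in each image."""
    A = np.vstack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)        # null vector of A in homogeneous form
    X = Vt[-1]
    return X[:3] / X[3]

# Two hypothetical cameras 1 m apart along x, both looking down +z
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])
X_true = np.array([0.5, 0.2, 4.0])
uv = lambda P: (P @ np.append(X_true, 1))[:2] / (P @ np.append(X_true, 1))[2]
print(triangulate_point(P1, P2, uv(P1), uv(P2)))   # ~[0.5, 0.2, 4.0]
```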