5 results for Projector-Camera system
in Digital Commons - Michigan Tech
Abstract:
During locomotion, turning is a common and recurring event that is largely neglected in current state-of-the-art ankle-foot prostheses, forcing amputees to use steering mechanisms for turning that differ from those of non-amputees. A better understanding of the complexities surrounding lower limb prostheses will lead to increased health and well-being of amputees. The aim of this research is to develop a steerable ankle-foot prosthesis that mimics the mechanical properties of the human ankle. Experiments were developed to estimate the mechanical impedance of the ankle and the ankle angles during straight walking and step turns. This information was then used in the design of a prototype powered steerable ankle-foot prosthesis with two controllable degrees of freedom.

One possible approach to the design of prosthetic robots is to use the parameters of the human joints, especially their impedance. A series of experiments was conducted to estimate the stochastic mechanical impedance of the human ankle when the muscles were fully relaxed and when they were co-contracting antagonistically. A rehabilitation robot for the ankle, Anklebot, was employed to provide torque perturbations to the ankle. The experiments were performed in two configurations: one with relaxed muscles, and one at 10% of maximum voluntary contraction (MVC). Surface electromyography (sEMG) was used to monitor muscle activation levels, and these sEMG signals were displayed to subjects, who attempted to hold them constant. Time histories of ankle torques and angles in the medial/lateral (ML), inversion/eversion (IE), and dorsiflexion/plantarflexion (DP) directions were recorded. Linear time-invariant transfer functions between the measured torques and angles were estimated, providing an estimate of ankle mechanical impedance. High coherence was observed over a frequency range up to 30 Hz. The main effect of muscle activation was to increase the magnitude of ankle mechanical impedance in all degrees of freedom of the ankle.

Another experiment compared the three-dimensional angles of the ankle during step turns and straight walking. These angles were measured for use in developing the control strategy of the ankle-foot prosthesis. An infrared camera system was used to track the trajectories and angles of the foot and leg. The combined phases of heel strike and loading response, mid stance, and terminal stance and pre-swing were determined and used to measure the average angles in each combined phase. The range of motion (ROM) in IE increased during turning, while ML rotation decreased and DP changed the least. During the turning step, ankle displacement in DP started at angles similar to the straight walk and progressively showed less plantarflexion. In IE, the ankle showed increased inversion, leaning the body toward the inside of the turn. ML rotation initiated with increased medial rotation during the step turn relative to the straight walk, transitioning to increased lateral rotation at toe-off.

A prototype ankle-foot prosthesis capable of controlling both DP and IE using a cable-driven mechanism was developed and assessed as part of a feasibility study. The design is capable of reproducing the angles required for straight walking and step turns, generates 712 N of lifting force in plantarflexion, and shows passive stiffness comparable to non-load-bearing ankle impedance. To evaluate the performance of the ankle-foot prosthesis, a circular treadmill was developed to mimic human gait during steering. Preliminary results show that the device can appropriately simulate human gait, loading and unloading the ankle joint during gait along circular paths.
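The abstract describes estimating linear time-invariant transfer functions between measured torques and angles. Below is a minimal sketch of one standard nonparametric way to do this with Welch cross-spectral estimates; the sample rate, window length, and function names are illustrative assumptions, not taken from the thesis.

```python
# A minimal sketch (not the thesis code) of nonparametric ankle-impedance
# estimation from torque perturbations and measured angles.
import numpy as np
from scipy.signal import csd, welch, coherence

FS = 200.0       # assumed sample rate, Hz
NPERSEG = 1024   # assumed Welch window length

def estimate_impedance(torque, angle, fs=FS, nperseg=NPERSEG):
    """Return frequency, impedance magnitude |Z(f)|, and coherence.

    torque : perturbation torque time history (N*m)
    angle  : measured ankle angle time history (rad)
    """
    f, S_tt = welch(torque, fs=fs, nperseg=nperseg)       # torque auto-spectrum
    _, S_ta = csd(torque, angle, fs=fs, nperseg=nperseg)  # torque-angle cross-spectrum
    compliance = S_ta / S_tt      # angle-per-torque transfer function
    impedance = 1.0 / compliance  # impedance is the inverse of compliance
    _, coh = coherence(torque, angle, fs=fs, nperseg=nperseg)
    return f, np.abs(impedance), coh

# In practice the estimate is trusted only in the band with high coherence;
# the abstract reports high coherence up to about 30 Hz.
```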
Abstract:
A camera maps three-dimensional (3D) world space to a two-dimensional (2D) image space. In the process, it loses depth information, i.e., the distance from the camera's focal point to the imaged objects. It is impossible to recover this information from a single image. However, by using two or more images from different viewing angles, this information can be recovered, which in turn can be used to obtain the pose (position and orientation) of the camera. Using this pose, a 3D reconstruction of the imaged objects in the world can be computed. Numerous algorithms have been proposed and implemented to solve this problem; they are commonly called Structure from Motion (SfM). State-of-the-art SfM techniques have been shown to give promising results. However, unlike a Global Positioning System (GPS) or an Inertial Measurement Unit (IMU), which directly provide position and orientation respectively, a camera system estimates the pose only after running SfM as described above. This makes the pose obtained from a camera highly sensitive to the images captured and to other effects such as low lighting, poor focus, or improper viewing angles. In some applications, for example an Unmanned Aerial Vehicle (UAV) inspecting a bridge or a robot mapping an environment using Simultaneous Localization and Mapping (SLAM), it is often difficult to capture images under ideal conditions. This report examines the use of SfM methods in such applications and the role of combining multiple sensors, viz. sensor fusion, to achieve more accurate and usable position and reconstruction information.

The project investigates the role of sensor fusion in accurately estimating the pose of a camera for the application of 3D reconstruction of a scene. The first set of experiments is conducted in a motion capture room; these results are taken as ground truth in order to evaluate the strengths and weaknesses of each sensor and to map their coordinate systems. A number of scenarios where SfM fails are then targeted: the pose estimates obtained from SfM are replaced by those obtained from other sensors, and the 3D reconstruction is completed. Quantitative and qualitative comparisons are made between the 3D reconstruction obtained using only a camera and that obtained using the camera together with a LIDAR and/or an IMU. Additionally, the project addresses the performance issues faced when handling large datasets of high-resolution images by implementing the system on the Superior high-performance computing cluster at Michigan Technological University.
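To make the two-view pose step of SfM concrete, here is a minimal sketch using OpenCV; the feature detector, matcher settings, and intrinsic matrix K are assumptions for illustration, not the report's actual pipeline.

```python
# A minimal two-view SfM sketch: match features, estimate the essential
# matrix with RANSAC, and recover the relative camera pose.
import cv2
import numpy as np

def relative_pose(img1, img2, K):
    """Estimate rotation R and unit-scale translation t between two views."""
    orb = cv2.ORB_create(4000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    # RANSAC rejects bad matches -- exactly the failure mode the report
    # targets under low light, poor focus, or weak viewpoints.
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                   threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t  # t is recovered only up to scale -- one motivation for
                 # fusing IMU or LIDAR measurements
```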
Abstract:
The exsolution of volatiles from magma exerts an important control on volcanic eruption styles. The nucleation, growth, and connectivity of bubbles during magma ascent provide the driving force behind eruptions, and the rate, volume, and ease of gas exsolution can affect eruptive activity. Volcanic plumes are the observable consequence of this magmatic degassing, and remote sensing techniques allow us to quantify changes in gas exsolution. Until recently, however, the methods used to measure volcanic plumes could not detect rapid changes in degassing on the scale of standard geophysical observations. The advent of the UV camera now makes high-sample-rate gas measurements possible. This type of dataset can then be compared with other volcanic observations to provide an in-depth picture of degassing mechanisms in the shallow conduit. The goals of this research are to develop a robust methodology for UV camera field measurements of volcanic plumes, and to use these data in conjunction with seismoacoustic records to illuminate degassing processes. Field and laboratory experiments were conducted to determine the effects of imaging conditions, vignetting, exposure time, calibration technique, and filter usage on UV camera sulfur dioxide measurements. Using the best practices determined from these studies, a field campaign was undertaken at Volcán de Pacaya, Guatemala. Coincident plume sulfur dioxide measurements, acoustic recordings, and seismic observations were collected and analyzed jointly. The results provide insight into the small explosive features, variations in degassing rate, and plumbing system of this complex volcanic system. This research provides useful information for determining volcanic hazard at Pacaya and demonstrates the potential of the UV camera in multiparameter studies.
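UV camera sulfur dioxide retrievals of the kind described here typically convert plume images to apparent absorbance via the Beer-Lambert law and calibrate against cells of known SO2 column. The sketch below illustrates that common reduction; the function names, clipping threshold, and linear calibration are assumptions, not the thesis workflow.

```python
# A minimal, illustrative UV-camera reduction: dark-frame correction,
# apparent absorbance, and a linear calibration from SO2 cells.
import numpy as np

def apparent_absorbance(plume, clear_sky, dark):
    """Pixelwise absorbance A = -ln(I / I0) after dark subtraction.

    Ratioing against a clear-sky image of the same scene also cancels
    vignetting, since the lens falloff divides out of I / I0.
    """
    I = plume.astype(float) - dark
    I0 = np.maximum(clear_sky.astype(float) - dark, 1e-6)
    return -np.log(np.clip(I / I0, 1e-6, None))

def calibrate(cell_absorbances, cell_columns):
    """Fit column density (e.g. ppm*m) against absorbance from cells
    of known SO2 column placed in front of the camera."""
    slope, intercept = np.polyfit(cell_absorbances, cell_columns, 1)
    return slope, intercept

# so2_column = slope * apparent_absorbance(plume, clear_sky, dark) + intercept
```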
Abstract:
The purpose of this research was to develop a working physical model of the focused plenoptic camera and software that can process the measured image intensity, reconstruct it into a full-resolution image, and generate a depth map from the corresponding rendered image. The plenoptic camera is a specialized imaging system designed to acquire spatial, angular, and depth information in a single intensity measurement. This camera can also computationally refocus an image by adjusting the patch size used to reconstruct the image. The published methods have been vague and conflicting, so the motivation behind this research is to decipher the work that has been done in order to develop a working proof-of-concept model. This thesis outlines the theory behind plenoptic camera operation and shows how the measured intensity from the image sensor can be turned into a full-resolution rendered image with its corresponding depth map. The depth map can be created by cross-correlating adjacent sub-images created by the microlenslet array (MLA). The full-resolution image can be reconstructed by taking a patch from each MLA sub-image and piecing the patches together like a puzzle; the patch size determines which object plane will be in focus. The thesis also gives a rigorous explanation of the design constraints involved in building a plenoptic camera. Plenoptic camera data from Adobe was used to help develop the algorithms written to create a rendered image and its depth map. Finally, using the algorithms developed from these tests and the knowledge gained in developing the plenoptic camera, a working experimental system was built, which successfully generated a rendered image and its corresponding depth map.
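The patch-based rendering described above is easy to sketch in code: take a centered patch from each microlens sub-image and tile the patches into the output. The sketch below assumes a square, gap-free lenslet grid and grayscale data for simplicity; it is an illustration of the technique, not the thesis software.

```python
# A minimal sketch of patch-based full-resolution rendering from a
# focused plenoptic camera's raw sensor image.
import numpy as np

def render(raw, sub_size, patch_size):
    """raw: sensor image whose sub-images form a regular grid of
    sub_size x sub_size pixel tiles; patch_size selects the object
    plane that is brought into focus."""
    rows = raw.shape[0] // sub_size
    cols = raw.shape[1] // sub_size
    off = (sub_size - patch_size) // 2  # center the patch in each sub-image
    out = np.empty((rows * patch_size, cols * patch_size), dtype=raw.dtype)
    for i in range(rows):
        for j in range(cols):
            sub = raw[i*sub_size:(i+1)*sub_size, j*sub_size:(j+1)*sub_size]
            out[i*patch_size:(i+1)*patch_size,
                j*patch_size:(j+1)*patch_size] = sub[off:off+patch_size,
                                                     off:off+patch_size]
    return out

# Re-rendering with a different patch_size computationally refocuses the
# image, as described in the abstract.
```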
Abstract:
Obesity is becoming an epidemic in most developed countries. The fundamental cause of obesity and overweight is an energy imbalance between calories consumed and calories expended. It is essential to monitor everyday food intake for obesity prevention and management. Existing dietary assessment methods usually require manual recording and recall of food types and portions, so the accuracy of the results relies on many uncertain factors such as the user's memory, food knowledge, and portion estimates; as a result, accuracy is often compromised. Accurate and convenient dietary assessment methods are still lacking and are needed by both the general population and the research community. In this thesis, an automatic food intake assessment method using the cameras and inertial measurement units (IMUs) on smartphones was developed to help people foster a healthy lifestyle. With this method, users use their smartphones before and after a meal to capture images or videos of the meal. The smartphone recognizes the food items, calculates the volume of the food consumed, and provides the results to the user. The technical objective is to explore the feasibility of image-based food recognition and image-based volume estimation.

This thesis comprises five publications that address four specific goals: (1) to develop a prototype system from existing methods in order to review the methods in the literature, find their drawbacks, and explore the feasibility of developing novel methods; (2) based on the prototype system, to investigate new food classification methods that improve recognition accuracy to a field-application level; (3) to design indexing methods for large-scale image databases to facilitate the development of new food image recognition and retrieval algorithms; and (4) to develop novel, convenient, and accurate food volume estimation methods using only smartphones with cameras and IMUs.

A prototype system was implemented to review existing methods. An image feature detector and descriptor were developed, and a nearest neighbor classifier was implemented to classify food items. A credit card marker method was introduced for metric-scale 3D reconstruction and volume calculation. To increase recognition accuracy, novel multi-view food recognition algorithms were developed to recognize regularly shaped food items. To further increase accuracy and make the algorithm applicable to arbitrary food items, new food features and new classifiers were designed. The efficiency of the algorithm was increased by developing a novel image indexing method for large-scale image databases. Finally, the volume calculation was enhanced by reducing reliance on the marker and introducing IMUs; sensor fusion techniques combining measurements from the cameras and IMUs were explored to infer the metric scale of the 3D model and to reduce noise from these sensors.
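One simple way camera-IMU fusion can recover the metric scale that image-only reconstruction lacks is to compare the IMU-derived displacement between two captures with the unit-scale translation from the visual reconstruction. The sketch below illustrates that idea under strong simplifying assumptions (gravity-compensated, bias-corrected accelerations already in the world frame, and a short interval); it is not the thesis's fusion algorithm, which may handle noise quite differently.

```python
# A minimal sketch of metric-scale recovery from an IMU: double-integrate
# acceleration between two photo captures and ratio against the SfM baseline.
import numpy as np

def metric_scale(accel, dt, t_sfm):
    """accel : Nx3 gravity-compensated accelerations (m/s^2), world frame
    dt    : sample interval (s)
    t_sfm : unit-norm translation from the visual reconstruction"""
    vel = np.cumsum(accel, axis=0) * dt   # integrate to velocity
    disp = np.cumsum(vel, axis=0) * dt    # integrate to displacement
    baseline = np.linalg.norm(disp[-1])   # metric camera baseline (m)
    return baseline / np.linalg.norm(t_sfm)  # meters per SfM unit

# Multiplying the 3D model by this scale yields metric food volumes without
# a physical reference such as the credit card marker used in the prototype.
```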