9 results for choreography for the camera

in Digital Commons - Michigan Tech


Relevance:

80.00%

Publisher:

Abstract:

Magmatic volatiles play a crucial role in volcanism, from magma production at depth to generation of seismic phenomena to control of eruption style. Accordingly, many models of volcano dynamics rely heavily on behavior of such volatiles. Yet measurements of emission rates of volcanic gases have historically been limited, which has restricted model verification to processes on the order of days or longer. UV cameras are a recent advancement in the field of remote sensing of volcanic SO2 emissions. They offer enhanced temporal and spatial resolution over previous measurement techniques, but need development before they can be widely adopted and achieve the promise of integration with other geophysical datasets. Large datasets require a means by which to quickly and efficiently use imagery to calculate emission rates. We present a suite of programs designed to semi-automatically determine emission rates of SO2 from series of UV images. Extraction of high temporal resolution SO2 emission rates via this software facilitates comparison of gas data to geophysical data for the purposes of evaluating models of volcanic activity and has already proven useful at several volcanoes. Integrated UV camera and seismic measurements recorded in January 2009 at Fuego volcano, Guatemala, provide new insight into the system’s shallow conduit processes. High temporal resolution SO2 data reveal patterns of SO2 emission rate relative to explosions and seismic tremor that indicate tremor and degassing share a common source process. Progressive decreases in emission rate appear to represent inhibition of gas loss from magma as a result of rheological stiffening in the upper conduit. Measurements of emission rate from two closely-spaced vents, made possible by the high spatial resolution of the camera, help constrain this model. UV camera measurements at Kilauea volcano, Hawaii, in May of 2010 captured two occurrences of lava filling and draining within the summit vent. Accompanying high lava stands were diminished SO2 emission rates, decreased seismic and infrasonic tremor, minor deflation, and slowed lava lake surface velocity. Incorporation of UV camera data into the multi-parameter dataset gives credence to the likelihood of shallow gas accumulation as the cause of such events.
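As a rough illustration of how such imagery is turned into an emission-rate time series (this is not the thesis's software, which is not reproduced here), the sketch below assumes each UV image has already been converted to SO2 column densities and that a plume transect and transport speed are available; the function and variable names are hypothetical.

```python
import numpy as np

def so2_emission_rate(transect_cd, pixel_size_m, plume_speed_ms):
    """Emission rate (kg/s) from one calibrated UV image.

    transect_cd    : 1D array of SO2 column densities (kg/m^2) sampled along
                     a transect drawn across the plume in the image.
    pixel_size_m   : distance at the plume represented by one transect pixel (m).
    plume_speed_ms : plume transport speed perpendicular to the transect (m/s).
    """
    # Integrate column density across the plume (kg/m), then multiply by the
    # transport speed to obtain a mass flux (kg/s).
    return np.sum(transect_cd) * pixel_size_m * plume_speed_ms

# Synthetic example: three frames, each with a 200-pixel transect.
rng = np.random.default_rng(0)
frames = rng.uniform(0.0, 5e-3, size=(3, 200))            # kg/m^2, made-up values
rates = [so2_emission_rate(f, 1.5, 8.0) for f in frames]  # one rate per frame
```

Repeating this per frame at the camera's sample rate is what yields gas time series dense enough to compare against seismic and infrasonic records.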

Relevance:

80.00%

Publisher:

Abstract:

The aging population has recently become a pressing issue for modern societies around the world. Two important problems need to be solved. The first is how to continuously monitor, in natural living environments, the movements of people who have suffered a stroke, so as to provide more valuable feedback to guide clinical interventions. The second is how to effectively guide older people when they are at home or inside other buildings, making their lives easier and more convenient. Human motion tracking and navigation have therefore become active research fields as the number of elderly people grows. However, motion capture beyond laboratory environments remains extremely challenging, especially obtaining accurate measurements of human physical activity in free-living environments, and navigation in free-living environments poses its own problems, such as denied GPS signals and the moving objects commonly present in such environments. This thesis seeks to develop new technologies to enable accurate motion tracking and positioning in free-living environments. Using our developed IMU board and a camera from The Imaging Source company, the thesis pursues three specific goals: (1) to develop a robust, real-time orientation algorithm using only IMU measurements; (2) to develop robust distance estimation in static free-living environments in order to estimate a person's position and navigate them, while solving the scale ambiguity problem that usually arises in monocular camera tracking by integrating data from the visual and inertial sensors; and (3) when moving objects viewed by the camera are present in free-living environments, to first design a robust scene segmentation algorithm and then estimate, separately, the motion of the vIMU system and of the moving objects. To achieve real-time orientation tracking, an Adaptive-Gain Orientation Filter (AGOF) is proposed in this thesis, based on a deterministic approach and a frequency-based approach, using only measurements from the newly developed MARG (Magnetic, Angular Rate, and Gravity) sensors. To further obtain robust positioning, an adaptive frame-rate vision-aided IMU (vIMU) system is proposed to develop and implement fast ego-motion estimation algorithms, in which the orientation is first estimated in real time from the MARG sensors and then used to estimate position from the visual and inertial data. When moving objects viewed by the camera are present in free-living environments, a robust scene segmentation algorithm is first applied to obtain the position estimate and, simultaneously, the 3D motion of the moving objects. Finally, the corresponding simulations and experiments were carried out.
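The AGOF itself is not reproduced here; as a simplified stand-in, the sketch below shows the general idea of fusing gyroscope integration with an accelerometer tilt reference in a fixed-gain complementary filter (the AGOF additionally adapts its gain and incorporates the magnetometer for heading). All names are illustrative.

```python
import numpy as np

def complementary_filter(gyro, accel, dt, gain=0.02):
    """Fuse gyroscope and accelerometer data into roll/pitch estimates.

    gyro  : (N, 2) angular rates about the x (roll) and y (pitch) axes, rad/s.
    accel : (N, 3) accelerometer samples, any consistent unit.
    dt    : sample period in seconds.
    gain  : weight given to the accelerometer tilt reference at each step.
    """
    angles = np.zeros((len(gyro), 2))
    est = np.zeros(2)
    for i, (w, a) in enumerate(zip(gyro, accel)):
        # Predict by integrating the gyroscope.
        est = est + w * dt
        # Tilt reference from gravity measured by the accelerometer.
        roll_acc = np.arctan2(a[1], a[2])
        pitch_acc = np.arctan2(-a[0], np.hypot(a[1], a[2]))
        # Blend: mostly gyro (smooth), corrected slowly toward the accelerometer.
        est = (1.0 - gain) * est + gain * np.array([roll_acc, pitch_acc])
        angles[i] = est
    return angles
```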

Relevance:

80.00%

Publisher:

Abstract:

A camera maps 3-dimensional (3D) world space to a 2-dimensional (2D) image space. In the process it loses the depth information, i.e., the distance from the camera focal point to the imaged objects. It is impossible to recover this information from a single image. However, by using two or more images taken from different viewing angles this information can be recovered, which in turn can be used to obtain the pose (position and orientation) of the camera. Using this pose, a 3D reconstruction of the imaged objects in the world can be computed. Numerous algorithms have been proposed and implemented to solve this problem; they are commonly called Structure from Motion (SfM). State-of-the-art SfM techniques have been shown to give promising results. However, unlike a Global Positioning System (GPS) or an Inertial Measurement Unit (IMU), which directly provide position and orientation respectively, a camera system must estimate the pose by running SfM as described above. This makes the pose obtained from a camera highly sensitive to the images captured and to imaging conditions such as low lighting, poor focus, or improper viewing angles. In some applications, for example an Unmanned Aerial Vehicle (UAV) inspecting a bridge or a robot mapping an environment using Simultaneous Localization and Mapping (SLAM), it is often difficult to capture images under ideal conditions. This report examines the use of SfM methods in such applications and the role of combining multiple sensors, viz., sensor fusion, to achieve more accurate and usable position and reconstruction information. The project investigates the role of sensor fusion in accurately estimating the pose of a camera for the application of 3D reconstruction of a scene. The first set of experiments is conducted in a motion capture room; these results are taken as ground truth in order to evaluate the strengths and weaknesses of each sensor and to map their coordinate systems. A number of scenarios where SfM fails are then targeted: the pose estimates obtained from SfM are replaced by those obtained from other sensors and the 3D reconstruction is completed. Quantitative and qualitative comparisons are made between the 3D reconstruction obtained using only a camera and that obtained using the camera along with a LIDAR and/or an IMU. Additionally, the project addresses the performance issues encountered when handling large data sets of high-resolution images by implementing the system on the Superior high-performance computing cluster at Michigan Technological University.
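To make the depth-recovery argument concrete, the following minimal sketch triangulates a single 3D point from two views with known projection matrices using the standard linear (DLT) method; it is a textbook building block of SfM, not the report's pipeline, and the names are illustrative.

```python
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.

    P1, P2 : 3x4 camera projection matrices for the two views.
    x1, x2 : (u, v) pixel coordinates of the same point in each image.
    """
    # Each view contributes two rows of the homogeneous system A X = 0.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # de-homogenize to get (X, Y, Z)
```

When the projection matrices come from noisy SfM pose estimates, those errors propagate directly into the reconstructed points, which is the sensitivity the sensor fusion approach is meant to reduce.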

Relevance:

80.00%

Publisher:

Abstract:

Retaining walls are important assets in the transportation infrastructure, and assessing their condition is important to prolong their performance and ultimately their design life. Retaining walls are often overlooked, and only a few transportation asset management programs consider them in their inventories. Because these programs are few, the techniques used to assess wall condition rely on qualitative assessment rather than a quantitative approach. The work presented in this thesis focuses on using photogrammetry to quantitatively assess the condition of retaining walls. Multitemporal photogrammetry is used to develop 3D models of the retaining walls, from which offset displacements are measured to assess their condition. This study presents a case study from a site along the M-10 highway in Detroit, MI, where several sections of retaining wall have experienced horizontal displacement toward the highway. The results are validated by comparison with field observations and measurements. The limitations of photogrammetry were also studied using a small-scale model in the laboratory. The analysis found that the accuracy of the offset displacement measurements depends on the distance between the retaining wall and the sensor, the location of the reference points in 3D space, and the focal length of the camera lens. These parameters were not ideal for the case study at the M-10 highway site, but the results still provided consistent trends in the movement of the retaining wall that could not be validated from the offset measurements alone. The findings of this study confirm that, within its limitations, photogrammetry shows promise in generating 3D models that provide a quantitative condition assessment for retaining walls.
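As a simple illustration of how offset displacements might be extracted from multitemporal models (a hypothetical sketch, not the thesis workflow), assume two photogrammetric point clouds of the wall have already been co-registered in a common coordinate system:

```python
import numpy as np
from scipy.spatial import cKDTree

def wall_offsets(cloud_t0, cloud_t1):
    """Per-point offsets between two co-registered point clouds.

    cloud_t0 : (N, 3) points from the earlier survey epoch.
    cloud_t1 : (M, 3) points from the later survey epoch.

    Returns the distance from each later-epoch point to its nearest neighbour
    in the earlier epoch, a simple proxy for wall displacement.
    """
    tree = cKDTree(cloud_t0)
    distances, _ = tree.query(cloud_t1)  # nearest-neighbour distance per point
    return distances
```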

Relevance:

30.00%

Publisher:

Abstract:

The bridge inspection industry has yet to utilize a rapidly growing technology that shows promise to help improve the inspection process. This thesis investigates what 3D photogrammetry can provide to the bridge inspector for a number of deterioration mechanisms. The technology can provide information about the surface condition of some bridge components, with the focus here on the surface defects of a concrete bridge, which include cracking, spalling, and scaling. Testing was completed using a Canon EOS 7D camera; the photos were then processed with AgiSoft PhotoScan to align them and develop models. Further processing of the models was done using ArcMap in the ArcGIS 10 program to view the digital elevation models of the concrete surface. Several experiments were completed to determine the ability of the technique to detect the different defects. The smallest crack that could be resolved in this study was a 1/8 inch crack imaged from a distance of two feet above the surface. 3D photogrammetry was able to detect a depression 1 inch wide and 3/16 inch deep, which would be sufficient to measure any scaling or spalling that an inspector would be required to document. The percentage of the surface that was scaled or spalled could also be calculated from the digital elevation models in ArcMap. Different camera factors, including the distance from the defects, the number of photos, and the viewing angle, were also investigated to see how each factor affected these capabilities. 3D photogrammetry showed great promise for detecting scaling or spalling of a concrete bridge surface.
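A minimal sketch of the kind of DEM-based calculation described above, assuming the digital elevation model has been exported as a grid of elevations relative to the intact surface (the names and threshold handling are illustrative):

```python
import numpy as np

def percent_spalled(dem, depth_threshold=3.0 / 16.0):
    """Percentage of a concrete surface counted as spalled or scaled.

    dem             : 2D array of surface elevations (inches) relative to the
                      intact surface plane; negative values are depressions.
    depth_threshold : minimum depression depth counted as a defect (inches).
    """
    defective = dem <= -depth_threshold
    return 100.0 * np.count_nonzero(defective) / dem.size
```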

Relevance:

30.00%

Publisher:

Abstract:

The exsolution of volatiles from magma maintains an important control on volcanic eruption styles. The nucleation, growth, and connectivity of bubbles during magma ascent provide the driving force behind eruptions, and the rate, volume, and ease of gas exsolution can affect eruptive activity. Volcanic plumes are the observable consequence of this magmatic degassing, and remote sensing techniques allow us to quantify changes in gas exsolution. However, until recently the methods used to measure volcanic plumes did not have the capability of detecting rapid changes in degassing on the scale of standard geophysical observations. The advent of the UV camera now makes high sample rate gas measurements possible. This type of dataset can then be compared to other volcanic observations to provide an in depth picture of degassing mechanisms in the shallow conduit. The goals of this research are to develop a robust methodology for UV camera field measurements of volcanic plumes, and utilize this data in conjunction with seismoacoustic records to illuminate degassing processes. Field and laboratory experiments were conducted to determine the effects of imaging conditions, vignetting, exposure time, calibration technique, and filter usage on the UV camera sulfur dioxide measurements. Using the best practices determined from these studies, a field campaign was undertaken at Volcán de Pacaya, Guatemala. Coincident plume sulfur dioxide measurements, acoustic recordings, and seismic observations were collected and analyzed jointly. The results provide insight into the small explosive features, variations in degassing rate, and plumbing system of this complex volcanic system. This research provides useful information for determining volcanic hazard at Pacaya, and demonstrates the potential of the UV camera in multiparameter studies.
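One common element of such a methodology is converting apparent absorbance to SO2 column density using calibration cells of known column density; the sketch below shows a simple linear calibration under that assumption. It is illustrative only, not the calibration procedure adopted in this work, and the names are hypothetical.

```python
import numpy as np

def calibrate(absorbance_image, cell_absorbances, cell_column_densities):
    """Convert an apparent-absorbance image to SO2 column density.

    absorbance_image      : 2D array of apparent absorbances for the plume.
    cell_absorbances      : mean absorbances measured through calibration cells.
    cell_column_densities : known column densities of those cells (e.g. ppm·m).
    """
    # Fit a straight line through the calibration-cell measurements and
    # apply it pixel by pixel to the plume image.
    slope, intercept = np.polyfit(cell_absorbances, cell_column_densities, 1)
    return slope * absorbance_image + intercept
```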

Relevance:

30.00%

Publisher:

Abstract:

The purpose of this research was to develop a working physical model of the focused plenoptic camera and to develop software that can process the measured image intensity, reconstruct it into a full-resolution image, and produce a depth map from the corresponding rendered image. The plenoptic camera is a specialized imaging system designed to acquire spatial, angular, and depth information in a single intensity measurement. This camera can also computationally refocus an image by adjusting the patch size used to reconstruct the image. The published methods have been vague and conflicting, so the motivation behind this research is to decipher the work that has been done in order to develop a working proof-of-concept model. This thesis outlines the theory behind plenoptic camera operation and shows how the measured intensity from the image sensor can be turned into a full-resolution rendered image with its corresponding depth map. The depth map can be created by cross-correlating adjacent sub-images created by the microlenslet array (MLA). The full-resolution image reconstruction can be done by taking a patch from each MLA sub-image and piecing the patches together like a puzzle; the patch size determines which object plane will be in focus. This thesis also provides a rigorous explanation of the design constraints involved in building a plenoptic camera. Plenoptic camera data from Adobe were used to help develop the algorithms written to create a rendered image and its depth map. Finally, using the algorithms developed from these tests and the knowledge gained in developing the plenoptic camera, a working experimental system was built, which successfully generated a rendered image and its corresponding depth map.
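The patch-based rendering step can be summarized in a few lines. The sketch below tiles the central patch of each microlens sub-image into a rendered image, with the patch size selecting the in-focus object plane; it ignores details such as patch inversion and overlap blending, and all names are illustrative.

```python
import numpy as np

def render_full_resolution(subimages, patch_size):
    """Assemble a rendered image from a grid of microlens sub-images.

    subimages  : array of shape (rows, cols, h, w), one sub-image per microlens.
    patch_size : side length of the central patch taken from each sub-image;
                 changing it shifts which object plane appears in focus.
    """
    rows, cols, h, w = subimages.shape
    r0 = (h - patch_size) // 2
    c0 = (w - patch_size) // 2
    # Take the central patch of each sub-image and tile the patches.
    patches = subimages[:, :, r0:r0 + patch_size, c0:c0 + patch_size]
    # Rearrange (rows, cols, p, p) -> (rows*p, cols*p).
    return patches.transpose(0, 2, 1, 3).reshape(rows * patch_size,
                                                 cols * patch_size)
```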

Relevance:

30.00%

Publisher:

Abstract:

During locomotion, turning is a common and recurring event, yet it is largely neglected in current state-of-the-art ankle-foot prostheses, forcing amputees to use steering mechanisms for turning that differ from those of non-amputees. A better understanding of the complexities surrounding lower limb prostheses will lead to increased health and well-being of amputees. The aim of this research is to develop a steerable ankle-foot prosthesis that mimics the mechanical properties of the human ankle. Experiments were developed to estimate the mechanical impedance of the ankle and the ankle's angles during straight walking and step turns. This information was then used in the design of a prototype powered, steerable ankle-foot prosthesis with two controllable degrees of freedom. One possible approach to the design of prosthetic robots is to use the parameters of the human joints, especially their impedance. A series of experiments was conducted to estimate the stochastic mechanical impedance of the human ankle when the muscles were fully relaxed and when they were co-contracting antagonistically. A rehabilitation robot for the ankle, Anklebot, was employed to provide torque perturbations to the ankle. The experiments were performed in two configurations: one with relaxed muscles, and one at 10% of maximum voluntary contraction (MVC). Surface electromyography (sEMG) was used to monitor muscle activation levels, and these sEMG signals were displayed to subjects, who attempted to keep them constant. Time histories of ankle torques and angles in the medial/lateral (ML) direction, inversion-eversion (IE), and dorsiflexion-plantarflexion (DP) were recorded. Linear time-invariant transfer functions between the measured torques and angles were estimated, providing an estimate of ankle mechanical impedance. High coherence was observed over a frequency range up to 30 Hz. The main effect of muscle activation was to increase the magnitude of ankle mechanical impedance in all degrees of freedom of the ankle. Another experiment compared the three-dimensional angles of the ankle during step turns and straight walking; these angles were measured to be used in developing the control strategy of the ankle-foot prosthesis. An infrared camera system was used to track the trajectories and angles of the foot and leg. The combined phases of heel strike and loading response, mid stance, and terminal stance and pre-swing were determined and used to measure the average angles in each combined phase. The range of motion (ROM) in IE increased during turning, while ML rotation decreased and DP changed the least. During the turning step, ankle displacement in DP started at angles similar to the straight walk and progressively showed less plantarflexion. In IE, the ankle showed increased inversion, leaning the body toward the inside of the turn. ML rotation initiated with increased medial rotation during the step turn relative to the straight walk, transitioning to increased lateral rotation at toe-off. A prototype ankle-foot prosthesis capable of controlling both DP and IE using a cable-driven mechanism was developed and assessed as part of a feasibility study. The design is capable of reproducing the angles required for straight walking and step turns, generates 712 N of lifting force in plantarflexion, and shows passive stiffness comparable to a non-load-bearing ankle impedance. To evaluate the performance of the ankle-foot prosthesis, a circular treadmill was developed to mimic human gait during steering. Preliminary results show that the device can appropriately simulate human gait, with loading and unloading of the ankle joint, during gait on circular paths.
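A nonparametric version of the impedance estimate described above can be sketched with standard spectral tools, assuming recorded angle and torque time series for one degree of freedom. This is a generic H1-style estimator for illustration, not the identification procedure used in this work, and the names are hypothetical.

```python
import numpy as np
from scipy import signal

def impedance_estimate(torque, angle, fs):
    """Nonparametric estimate of ankle mechanical impedance in one DOF.

    torque : applied torque perturbation time series (N·m).
    angle  : resulting ankle angle time series (rad).
    fs     : sampling rate (Hz).

    Returns frequencies, the impedance magnitude |torque/angle|, and the
    coherence between the two signals.
    """
    f, P_at = signal.csd(angle, torque, fs=fs, nperseg=1024)   # cross-spectrum
    _, P_aa = signal.welch(angle, fs=fs, nperseg=1024)         # angle auto-spectrum
    H = P_at / P_aa                                            # angle -> torque response
    _, coh = signal.coherence(angle, torque, fs=fs, nperseg=1024)
    return f, np.abs(H), coh
```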

Relevance:

30.00%

Publisher:

Abstract:

Obesity is becoming an epidemic phenomenon in most developed countries. The fundamental cause of obesity and overweight is an energy imbalance between calories consumed and calories expended. It is essential to monitor everyday food intake for obesity prevention and management. Existing dietary assessment methods usually require manual recording and recall of food types and portions, and the accuracy of the results relies heavily on many uncertain factors such as the user's memory, food knowledge, and portion estimation; as a result, accuracy is often compromised. Accurate and convenient dietary assessment methods are still lacking and are needed in both public health and research settings. In this thesis, an automatic food intake assessment method using the cameras and inertial measurement units (IMUs) on smart phones was developed to help people foster a healthy lifestyle. With this method, users use their smart phones before and after a meal to capture images or videos around the meal. The smart phone recognizes the food items, calculates the volume of the food consumed, and provides the results to the user. The technical objective is to explore the feasibility of image-based food recognition and image-based volume estimation. This thesis comprises five publications that address four specific goals: (1) to develop a prototype system using existing methods in order to review the methods in the literature, find their drawbacks, and explore the feasibility of developing novel methods; (2) based on the prototype system, to investigate new food classification methods that improve the recognition accuracy to a field-application level; (3) to design indexing methods for large-scale image databases to facilitate the development of new food image recognition and retrieval algorithms; and (4) to develop novel, convenient, and accurate food volume estimation methods using only smart phones with cameras and IMUs. A prototype system was implemented to review existing methods: an image feature detector and descriptor were developed, a nearest neighbor classifier was implemented to classify food items, and a credit card marker method was introduced for metric-scale 3D reconstruction and volume calculation. To increase recognition accuracy, novel multi-view food recognition algorithms were developed to recognize regular-shaped food items. To further increase the accuracy and make the algorithm applicable to arbitrary food items, new food features and new classifiers were designed. The efficiency of the algorithm was increased by developing a novel image indexing method for large-scale image databases. Finally, the volume calculation was enhanced by reducing the reliance on the marker and introducing IMUs; sensor fusion techniques combining measurements from cameras and IMUs were explored to infer the metric scale of the 3D model as well as to reduce noise from these sensors.
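The reference-marker idea for metric scale can be illustrated with a short sketch: an up-to-scale 3D reconstruction is rescaled using the known width of a credit card visible in the scene (85.60 mm for an ISO/IEC 7810 ID-1 card). The function and its inputs are hypothetical, not the thesis's implementation.

```python
import numpy as np

def metric_scale(model_points, card_corners_model, card_width_mm=85.6):
    """Recover metric scale for an up-to-scale 3D food reconstruction.

    model_points       : (N, 3) reconstructed points in arbitrary model units.
    card_corners_model : (2, 3) reconstructed positions of two adjacent corners
                         along the width of the reference credit card.
    card_width_mm      : true card width (ISO/IEC 7810 ID-1 standard).
    """
    measured = np.linalg.norm(card_corners_model[1] - card_corners_model[0])
    scale = card_width_mm / measured          # mm per model unit
    return model_points * scale               # points now in millimetres

# Volumes scale with the cube of the linear scale factor:
# volume_mm3 = volume_in_model_units * scale ** 3
```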