199 results for camera motion


Relevance: 20.00%

Abstract:
Organ motion as a result of respiration is an important field of research for medical physics. Knowledge of the magnitude and direction of this motion is necessary for more accurate radiotherapy treatment planning, allowing higher doses to the tumour whilst sparing healthy tissue. This project involved human trials in which radiotherapy patients' kidneys were CT scanned under three conditions: free breathing (FB), breath-hold at normal tidal inspiration (BHIN), and breath-hold at normal tidal expiration (BHEX). The magnitude of motion was measured by recording the outline of the kidney from a Beam's Eye View (BEV). The centre of mass of this 2D shape was calculated for each set using "ImageJ" software, and the magnitude of movement was determined from the change in the centroid's coordinates between the BHIN and BHEX scans. For the left and right kidneys respectively, movement ranged from 4-46 mm and 2-44 mm in the superior/inferior (axial) plane, 1-21 mm and 2-16 mm in the anterior/posterior (coronal) plane, and 0-6 mm and 0-8 mm in the lateral/medial (sagittal) plane. From exhale to inhale, the kidneys tended to move inferiorly, anteriorly and laterally. A standard radiotherapy plan, designed to treat the para-aortics with opposed lateral fields, was produced on the free-breathing (planning) CT set. The field size and arrangement were set up using the same parameters for each subject, with a prescription of 45 Gray in 25 fractions. This field arrangement and prescription were then copied to the breath-hold CT sets, and the dosimetric differences were compared using Dose Volume Histograms (DVH). The point of comparison for the three sets was the percentage volume of kidney receiving less than or equal to 10 Gray. The QUASAR respiratory motion phantom was then used with the range of motion determined from the human study: the phantom was imaged, planned and treated with a linear accelerator, with dose determined by film. The effect of the motion was measured by the change in the penumbra on the film and compared to the penumbra from the treatment planning system.
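The centroid-shift measurement described above can be sketched in a few lines; a minimal example assuming each kidney outline has already been digitised from the Beam's Eye View as a list of 2D coordinates (the coordinates below are hypothetical, not study data):

```python
def centroid(points):
    """Centre of mass of a 2D outline given as (x, y) pairs."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def centroid_shift(outline_bhin, outline_bhex):
    """Component-wise centroid displacement (mm) between the
    breath-hold inspiration and expiration scans."""
    cx_in, cy_in = centroid(outline_bhin)
    cx_ex, cy_ex = centroid(outline_bhex)
    return (cx_in - cx_ex, cy_in - cy_ex)

# Hypothetical outlines (mm) digitised from the BEV
bhin = [(10.0, 20.0), (14.0, 20.0), (12.0, 26.0)]
bhex = [(10.0, 24.0), (14.0, 24.0), (12.0, 30.0)]
print(centroid_shift(bhin, bhex))  # -> (0.0, -4.0)
```

In the study the same subtraction is done per anatomical axis, with the third axis obtained from a second BEV projection.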


To describe barefoot, shod and in-shoe kinematics during the stance phase of walking gait in a normal-arched adult population. An equal sample of males and females (n = 24) was recruited. In order to quantify the effect of footwear independent of technical design features, an ASICS shoe (Onitsuka Tiger-Mexico 66, Japan) was used in this study. Markers were applied in three conditions: barefoot, shod, and in-shoe. Calibration markers were used to define the static pose, the order of testing was randomised, and participants completed five trials in each condition. Kinematic data were captured using a 12-camera VICON MX40 motion capture system at 100 Hz and processed in Visual3D, with a previously developed model used to describe joint angles [1]. A univariate two-way ANOVA was used to identify differences between pairs of conditions, and post-hoc Scheffé tests were used to further interrogate the data. At peak hallux dorsiflexion (Figure 1), during propulsion, the metatarsophalangeal joint (MTPJ) was significantly more dorsiflexed in the barefoot condition than in the shod condition (p = 0.004). At the same gait event, the tibiocalcaneal joint (TCJ) was significantly more plantarflexed in the barefoot condition than in both the shod and in-shoe conditions (p < 0.001), and the tarsometatarsal joint (TMTJ) was significantly less dorsiflexed in the barefoot condition than in the shod and in-shoe conditions (p < 0.001). These findings demonstrate that footwear has significant effects on sagittal-plane MTPJ dorsiflexion at peak hallux dorsiflexion, which results in compensations at proximal foot joints.


“Made by Motion” is a collaboration between digital artist Paul Van Opdenbosch and performer and choreographer Elise May: a series of studies in which captured motion data is used to generate experimental visual forms that reverberate in space and time. The project investigates the invisible forces generated by, and influencing, the movement of a dancer, along with how those forces can be captured and applied to generate visual outcomes that surpass simple data visualisation, projecting the intent of the performer’s movements. The source or ‘seed’ comes from using an Xsens MVN inertial motion capture system to capture spontaneous dance movements, with the visual generation conducted through a customised dynamics simulation. In this first series, the visual investigation focused on manipulating the movement data at the instant of capture, capture being the recording of three-dimensional movement as ‘seen’ by the hardware and ‘understood’ through the calibration of the software. By repositioning the capture hardware on the body, we can change how the same sequence of movements is ‘seen’ by the motion capture system, thus generating a different visual result from effectively identical movement. The outcomes of the experiments clearly demonstrate the effectiveness of motion capture hardware as a creative tool for manipulating the perception of the capture subject, in this case a sequence of dance movements. The creative work exhibited is a cross-section of the experiments conducted in practice, with the first animated work (Movement A - Control) using the motion capture hardware in its default ‘normal’ configuration. Following this are: the lower body moved to the upper body (Lb-Ub), the right arm moved onto the left arm (Ra-La), the right leg moved onto the left leg (Rl-Ll), and finally the left leg moved onto an object held in the left hand (Ll-Pf (Lh)).


My practice-led research explores and maps workflows for generating experimental creative work with inertia-based motion capture technology. Motion capture has often been used as a way to bridge animation and dance, resulting in abstracted visual outcomes. In early works this process was largely achieved through rotoscoping, reference footage and mechanical forms of motion capture; with the evolution of the technology, optical and inertial forms of motion capture are now more accessible and able to accurately capture a larger range of complex movements. Made by Motion is a collaboration between digital artist Paul Van Opdenbosch and performer and choreographer Elise May: a series of studies in which captured motion data is used to generate experimental visual forms that reverberate in space and time. The project investigates the invisible forces generated by, and influencing, the movement of a dancer, along with how those forces can be captured and applied to generate visual outcomes that surpass simple data visualisation, projecting the intent of the performer’s movements. The source or ‘seed’ comes from using an Xsens MVN inertial motion capture system to capture spontaneous dance movements, with the visual generation conducted through a customised dynamics simulation. In my presentation I will display and discuss selected creative works from the project, along with the process and considerations behind them.


Cell-to-cell adhesion is an important aspect of malignant spreading that is often observed in images from the experimental cell biology literature. Since cell-to-cell adhesion plays an important role in controlling the movement of individual malignant cells, it is likely that cell-to-cell adhesion also influences the spatial spreading of populations of such cells. Therefore, it is important for us to develop biologically realistic simulation tools that can mimic the key features of such collective spreading processes to improve our understanding of how cell-to-cell adhesion influences the spreading of cell populations. Previous models of collective cell spreading with adhesion have used lattice-based random walk frameworks which may lead to unrealistic results, since the agents in the random walk simulations always move across an artificial underlying lattice structure. This is particularly problematic in high-density regions where it is clear that agents in the random walk align along the underlying lattice, whereas no such regular alignment is ever observed experimentally. To address these limitations, we present a lattice-free model of collective cell migration that explicitly incorporates crowding and adhesion. We derive a partial differential equation description of the discrete process and show that averaged simulation results compare very well with numerical solutions of the partial differential equation.
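The lattice-free process described above can be sketched as an off-lattice random walk in which crowding rejects overlapping moves and adhesion makes agents with more neighbours less likely to move. This is an illustrative sketch, not the authors' implementation; the step length, exclusion diameter and adhesion parameter below are assumptions:

```python
import math, random

def neighbours(agents, i, radius):
    """Indices of agents within `radius` of agent i (excluding i)."""
    x, y = agents[i]
    return [j for j, (u, v) in enumerate(agents)
            if j != i and math.hypot(u - x, v - y) <= radius]

def step(agents, delta, sigma, q, rng):
    """One attempted move per agent, in random order.

    delta: step length; sigma: exclusion (crowding) diameter;
    q: adhesion strength in [0, 1) -- each adjacent neighbour
    independently blocks the move with probability q.
    """
    order = list(range(len(agents)))
    rng.shuffle(order)
    for i in order:
        n = len(neighbours(agents, i, sigma))
        if rng.random() > (1.0 - q) ** n:
            continue  # adhesion held the agent in place
        theta = rng.uniform(0.0, 2.0 * math.pi)
        x = agents[i][0] + delta * math.cos(theta)
        y = agents[i][1] + delta * math.sin(theta)
        # crowding: reject moves that would overlap another agent
        if all(math.hypot(u - x, v - y) >= sigma
               for j, (u, v) in enumerate(agents) if j != i):
            agents[i] = (x, y)
```

Because positions are continuous, no artificial lattice alignment can emerge in high-density regions; averaging many realisations of `step` is what the continuum (partial differential equation) description is compared against.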


The ability to automate forced landings in an emergency such as engine failure is essential to improving the safety of Unmanned Aerial Vehicles operating in General Aviation airspace. By using active vision to detect safe landing zones below the aircraft, the reliability and safety of such systems are vastly improved through up-to-the-minute information about the ground environment. This paper presents the Site Detection System, a methodology utilising a downward-facing camera to analyse the ground environment in both 2D and 3D, detect safe landing sites, and characterise them according to size, shape, slope and nearby obstacles. A methodology is presented showing the fusion of landing-site detection from 2D imagery with a coarse Digital Elevation Map and dense 3D reconstructions from INS-aided Structure-from-Motion to improve accuracy. Results from an experimental flight show the precision/recall of detected landing sites against a hand-classified ground truth, and improved performance with the integration of 3D analysis from visual Structure-from-Motion.
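Precision/recall against a hand-classified ground truth, as used in the evaluation above, can be computed with a simple matching rule. The greedy nearest-site matching and the distance tolerance below are assumptions for illustration; the paper's actual matching criterion may differ:

```python
import math

def precision_recall(detected, ground_truth, tol=5.0):
    """Score detected landing sites against hand-labelled ones.

    A detection is a true positive if it lies within `tol` metres
    of a not-yet-matched ground-truth site (greedy matching).
    """
    unmatched = list(ground_truth)
    tp = 0
    for (x, y) in detected:
        for site in unmatched:
            if math.hypot(site[0] - x, site[1] - y) <= tol:
                unmatched.remove(site)
                tp += 1
                break
    precision = tp / len(detected) if detected else 0.0
    recall = tp / len(ground_truth) if ground_truth else 0.0
    return precision, recall
```

High recall matters most here: a missed safe site costs an option, while a false positive can still be rejected by the 3D analysis stage.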


The huge amount of CCTV footage available makes manual processing of these videos by human operators very burdensome, so automated processing through computer vision technologies has become necessary. Over the past several years there has been a large effort to detect abnormal activities with computer vision techniques. Typically, the problem is formulated as a novelty detection task: the system is trained on normal data and is required to detect events which do not fit the learned ‘normal’ model. There is no precise definition of an abnormal activity; it depends on the context of the scene, hence different feature sets are required to detect different kinds of abnormal activity. In this work we evaluate the performance of different state-of-the-art features for detecting abnormal objects in the scene: optical flow vectors to detect motion-related anomalies, and textures of optical flow and image textures to detect the presence of abnormal objects. These features, in different combinations, are modelled using state-of-the-art models such as the Gaussian mixture model (GMM) and the semi-2D hidden Markov model (HMM) to compare their performance. Further, we apply perspective normalisation to the extracted features to compensate for perspective distortion due to the distance between the camera and the objects under consideration. The proposed approach is evaluated using the publicly available UCSD datasets, and we demonstrate improved performance compared to other state-of-the-art methods.
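The novelty-detection formulation above reduces to: fit a density to features from normal footage, then flag frames the model explains poorly. A minimal single-Gaussian sketch (a deliberate simplification of the GMM/HMM models named in the abstract; the feature values and threshold are hypothetical):

```python
import math

def fit_gaussian(samples):
    """Fit a 1-D Gaussian to feature values from 'normal' footage."""
    n = len(samples)
    mu = sum(samples) / n
    var = sum((s - mu) ** 2 for s in samples) / n
    return mu, var

def log_likelihood(x, mu, var):
    """Log of the Gaussian density at x."""
    return -0.5 * (math.log(2 * math.pi * var) + (x - mu) ** 2 / var)

def is_abnormal(x, model, threshold):
    """Novelty test: flag events the 'normal' model explains poorly."""
    mu, var = model
    return log_likelihood(x, mu, var) < threshold

# Hypothetical optical-flow magnitudes from normal scenes
normal = [1.0, 1.1, 0.9, 1.05, 0.95]
model = fit_gaussian(normal)
print(is_abnormal(1.0, model, -5.0))  # -> False
print(is_abnormal(5.0, model, -5.0))  # -> True
```

A GMM replaces the single Gaussian with a weighted mixture, and the semi-2D HMM additionally models temporal and spatial structure; the thresholding idea is the same.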


The selection of optimal camera configurations (camera locations, orientations, etc.) for multi-camera networks remains an unsolved problem. Previous approaches largely focus on proposing various objective functions to achieve different tasks; most of them, however, do not generalize well to large-scale networks. To tackle this, we propose a statistical formulation of the problem together with a trans-dimensional simulated annealing algorithm to solve it effectively. We compare our approach with a state-of-the-art method based on binary integer programming (BIP) and show that it offers similar performance on small-scale problems. We also demonstrate the capability of our approach on large-scale problems, where it produces better results than two alternative heuristics designed to address the scalability issue of BIP. Finally, we show the versatility of our approach in a number of specific scenarios.
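The annealing idea can be sketched on a toy version of the problem: place cameras to cover a set of target points, accepting worse configurations with a temperature-controlled probability. This is a fixed-dimension sketch with an assumed coverage objective; the paper's trans-dimensional variant also proposes adding or removing cameras:

```python
import math, random

def coverage(cameras, targets, radius):
    """Number of targets within `radius` of at least one camera."""
    return sum(1 for t in targets
               if any(math.hypot(t[0] - c[0], t[1] - c[1]) <= radius
                      for c in cameras))

def anneal(targets, n_cams, radius, rng, iters=2000, t0=1.0):
    """Simulated annealing over camera positions in a 10x10 region."""
    cams = [(rng.uniform(0, 10), rng.uniform(0, 10)) for _ in range(n_cams)]
    cov = coverage(cams, targets, radius)
    best, best_cov = list(cams), cov
    for k in range(iters):
        temp = t0 * (1 - k / iters) + 1e-9
        i = rng.randrange(n_cams)
        old = cams[i]
        cams[i] = (old[0] + rng.gauss(0, 1), old[1] + rng.gauss(0, 1))
        new_cov = coverage(cams, targets, radius)
        # accept improvements; accept worse moves with Boltzmann probability
        if new_cov >= cov or rng.random() < math.exp((new_cov - cov) / temp):
            cov = new_cov
            if cov > best_cov:
                best, best_cov = list(cams), cov
        else:
            cams[i] = old  # reject: restore previous position
    return best, best_cov
```

Unlike BIP, which needs the search space discretised up front, this sampler scales by only ever evaluating the objective at the current configuration.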


At the highest level of competitive sport, nearly all performances of athletes (both training and competitive) are chronicled on video, which is then viewed by expert coaches/analysts who manually label important performance indicators. Stroke rate and pacing are important performance measures in swimming, and these have previously been digitised manually. This is problematic, as annotating large volumes of video is costly and time-consuming. Further, since it is difficult to accurately estimate the position of the swimmer at each frame, measures such as stroke rate are generally aggregated over an entire swimming lap. Vision-based techniques which can automatically, objectively and reliably track the swimmer and their location can potentially solve these issues and allow for large-scale analysis of a swimmer across many videos. However, the aquatic environment is challenging due to scene fluctuations from splashes and reflections, and because swimmers are frequently submerged at different points in a race. In this paper, we temporally segment races into distinct, sequential states and propose a multimodal approach which employs individual detectors tuned to each race state. Our approach allows the swimmer to be located and tracked smoothly in each frame despite this diverse range of conditions. We test our approach on a video dataset compiled at the 2012 Australian Short Course Swimming Championships.


This paper introduces an improved line tracker using IMU and vision data for visual servoing tasks. We utilise an Image Jacobian which relates the motion of a line feature to the corresponding camera movements; these camera motions are estimated using an IMU. We demonstrate the impact of the proposed method in challenging environments (maximum angular rate ~160°/s, acceleration ~6 m/s²) and in cluttered outdoor scenes. Simulation results and a quantitative tracking performance comparison with the Visual Servoing Platform (ViSP) are also presented.


This work aims to contribute to the reliability and integrity of the perceptual systems of unmanned ground vehicles (UGVs). A method is proposed to evaluate the quality of sensor data prior to its use in a perception system, by applying a quality metric to heterogeneous sensor data such as visual and infrared camera images. The concept is illustrated with sensor data that is evaluated prior to its use in a standard SIFT feature extraction and matching technique. The method is then evaluated using experimental data sets collected from a UGV in challenging environmental conditions, namely the presence of airborne dust and smoke. In the first series of experiments a stationary vehicle observes a 'reference' scene; the method is then extended to a moving vehicle by compensating for its motion. This paper shows that it is possible to anticipate the degradation of a perception algorithm by evaluating the input data prior to any actual execution of the algorithm.
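As a concrete stand-in for the (unspecified) quality metric, histogram entropy is one simple pre-check: airborne dust and smoke tend to flatten image contrast, lowering entropy before SIFT matching would visibly degrade. This is an illustrative assumption, not the paper's actual metric:

```python
import math

def histogram_entropy(pixels, bins=16):
    """Shannon entropy (bits) of an intensity histogram.

    `pixels` is a flat sequence of 8-bit greyscale values (0-255).
    Low entropy suggests washed-out, low-contrast input that a
    feature extractor is likely to handle poorly.
    """
    counts = [0] * bins
    for p in pixels:
        counts[min(p * bins // 256, bins - 1)] += 1
    n = len(pixels)
    return -sum((c / n) * math.log2(c / n) for c in counts if c)
```

A quality gate would compare this score against a threshold learned from the 'reference' scene before running the full perception pipeline.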


Camera-laser calibration is necessary for many robotics and computer vision applications, yet existing calibration toolboxes still require laborious effort from the operator to achieve reliable and accurate results. This paper proposes algorithms that augment two existing, trusted calibration methods with automatic extraction of the calibration object from the sensor data, yielding a complete procedure for automatic camera-laser calibration. The first stage is automatic camera calibration, which is useful in its own right for many applications; the chessboard extraction algorithm it provides is shown to outperform openly available techniques. The second stage completes the procedure by providing automatic camera-laser calibration. The procedure has been verified by extensive experimental tests, with the proposed algorithms providing a major reduction in the time required of an operator compared to manual methods.


This work aims to promote integrity in autonomous perceptual systems, with a focus on outdoor unmanned ground vehicles equipped with a camera and a 2D laser range finder. A method to check for inconsistencies between the data provided by these two heterogeneous sensors is proposed and discussed. First, uncertainties in the estimated transformation between the laser and camera frames are evaluated and propagated up to the projection of the laser points onto the image. Then, for each laser scan-camera image pair acquired, the information at the corners of the laser scan is compared with the content of the image, yielding a likelihood of correspondence. The result of this process is used to validate segments of the laser scan that are found to be consistent with the image, while inconsistent segments are rejected. Experimental results illustrate how this technique can improve the reliability of perception in challenging environmental conditions, such as in the presence of airborne dust.
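The projection step above, before any uncertainty propagation, is a standard pinhole projection of each laser point through the estimated laser-to-camera transformation. A minimal sketch, with the rotation, translation and intrinsics as assumed inputs and the uncertainty propagation omitted:

```python
def project_laser_point(p_laser, R, t, fx, fy, cx, cy):
    """Project a 3-D laser point into the image via a pinhole model.

    R (3x3, row-major nested lists) and t (3-vector) transform the
    laser frame to the camera frame; fx, fy, cx, cy are the camera
    intrinsics. Returns pixel (u, v), or None if the point lies
    behind the camera.
    """
    x = sum(R[0][k] * p_laser[k] for k in range(3)) + t[0]
    y = sum(R[1][k] * p_laser[k] for k in range(3)) + t[1]
    z = sum(R[2][k] * p_laser[k] for k in range(3)) + t[2]
    if z <= 0:
        return None
    return (fx * x / z + cx, fy * y / z + cy)
```

In the full method, covariance in (R, t) is propagated through this mapping so that each projected point carries an image-plane uncertainty region used in the correspondence likelihood.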


Many applications can benefit from the accurate surface temperature estimates that can be made using a passive thermal-infrared camera. However, the process of radiometric calibration, which enables this, can be both expensive and time-consuming. An ad hoc approach to radiometric calibration is proposed which does not require specialized equipment and can be completed in a fraction of the time of the conventional method. The proposed approach utilizes the mechanical properties of the camera to estimate scene temperatures automatically, and uses these target temperatures to model the effect of sensor temperature on the digital output. A comparison with a conventional approach using a blackbody radiation source shows that the accuracy of the method is sufficient for many tasks requiring temperature estimation. Furthermore, a novel visualization method is proposed for displaying the radiometrically calibrated images to human operators. The representation employs an intuitive coloring scheme and allows the viewer to perceive a large variety of temperatures accurately.
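Modelling "the effect of sensor temperature on the digital output" amounts, in its simplest form, to fitting a drift term against sensor temperature and subtracting it. The linear model and reference temperature below are illustrative assumptions; the paper's model is only described at a high level:

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a*x + b; returns (a, b)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    return a, my - a * mx

def compensate(raw, sensor_temp, drift_slope, ref_temp):
    """Remove the fitted sensor-temperature drift from the raw
    digital output, referenced to a nominal sensor temperature."""
    return raw - drift_slope * (sensor_temp - ref_temp)
```

The compensated counts can then be mapped to scene temperature with a second fit against the automatically estimated target temperatures, replacing the blackbody source of the conventional method.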


Emotions are inherently social, and are central to learning, online interaction and literacy practices (Shen, Wang, & Shen, 2009). Demonstrating the dynamic sociality of literacy practice, we used e-motion diaries, or web logs, to explore the emotional states of pre-service high school teachers during online learning activities. The methods of communication used by university educators in online learning and writing environments play an important role in fulfilling students' need for social interaction and inclusion (McInnerney & Roberts, 2004), and feelings of isolation and frustration are common in many online learning environments, where they are associated with the success or failure of online interactions and learning (Su, et al., 2005). The purpose of the study was to answer the research question: what are the trajectories of pre-service teachers' emotional states during online learning experiences? This is important because emotions are central to learning, and the current trend toward Massive Open Online Courses (MOOCs) calls for research on students' emotional connections in online learning environments (Kop, 2011). The project was conducted with a graduate class of 64 high school science pre-service teachers in Science Education Curriculum Studies at a large Australian university, including males and females from a variety of cultural backgrounds, aged 22-55 years. Online activities involved the students watching a series of streamed live lectures for the first five weeks, providing a varied set of learning experiences such as viewing science demonstrations (e.g., modeling the use of discrepant events). Each week, students provided feedback on their learning by writing and posting an e-motion diary or web log about their emotional response, answering the question: what emotions did you experience during this learning experience?
The descriptive data set comprised 284 online posts, with students contributing multiple entries. Linguistic appraisal theory, following Martin and White (2005), was used to regroup the 22 discrete emotions reported by students into six main affect groups, three positive and three negative: un/happiness, in/security, and dis/satisfaction. The findings showed that the pre-service teachers' emotional responses to the streamed lectures tended towards happiness, security and satisfaction within this typology. Fewer students reported that the streamed lectures triggered negative feelings of frustration, powerlessness and inadequacy, and when this occurred it often pertained to expectations of themselves in the forthcoming field experience in classrooms. Exceptions to this pattern occurred in relation to the fifth streamed lecture, presented in a non-interactive slideshow format that compressed a large amount of content: many students responded to the content of the lecture rather than giving their emotional responses, and one student felt "completely disengaged". The social practice of online writing as blogs enabled the students to articulate their emotions. The findings primarily contribute new understanding of students' wide range of emotional states, both positive and negative, experienced in response to streamed live lectures and other learning activities in higher-education external coursework. This is important because the majority of previous studies have focused on particular negative emotions, such as anxiety in test taking. The research also highlights the potential of appraisal theory for studying human emotions in online learning and writing.