986 results for Camera parameters
Abstract:
Background Instrumented treadmill systems that provide continuous measures of temporospatial gait parameters have recently become commercially available for clinical gait analysis. This study evaluated the level of agreement between temporospatial gait parameters derived from a new instrumented treadmill, which incorporated a capacitance-based pressure array, and those measured by a conventional instrumented walkway (criterion standard). Methods Temporospatial gait parameters were estimated from 39 healthy adults while walking over an instrumented walkway (GAITRite®) and instrumented treadmill system (Zebris) at matched speed. Differences in temporospatial parameters derived from the two systems were evaluated using repeated measures ANOVA models. Pearson product-moment correlations were used to investigate relationships between variables measured by each system. Agreement was assessed by calculating the bias and 95% limits of agreement. Results All temporospatial parameters measured via the instrumented walkway were significantly different from those obtained from the instrumented treadmill (P < .01). Temporospatial parameters derived from the two systems were highly correlated (r, 0.79–0.95). The 95% limits of agreement for temporal parameters were typically less than ±2% of gait cycle duration. However, 95% limits of agreement for spatial measures were as much as ±5 cm. Conclusions Differences in temporospatial parameters between systems were small but statistically significant and of similar magnitude to changes reported between shod and unshod gait in healthy young adults. Temporospatial parameters derived from an instrumented treadmill, therefore, are not representative of those obtained from an instrumented walkway and should not be interpreted with reference to literature on overground walking.
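The agreement statistics reported above follow the standard Bland-Altman procedure: the bias is the mean of the paired differences, and the 95% limits of agreement lie 1.96 standard deviations either side of it. A minimal sketch (the series names `walkway` and `treadmill` are illustrative, not the study's data):

```python
import numpy as np

def bland_altman(walkway, treadmill):
    """Bias and 95% limits of agreement between two paired measurement series."""
    walkway = np.asarray(walkway, dtype=float)
    treadmill = np.asarray(treadmill, dtype=float)
    diff = treadmill - walkway           # per-participant differences
    bias = diff.mean()                   # mean difference (systematic offset)
    half_width = 1.96 * diff.std(ddof=1) # half-width of the 95% limits
    return bias, bias - half_width, bias + half_width
```

In a study such as this, `diff` would hold one value per participant for a given gait parameter; the limits then express the range within which 95% of between-system differences are expected to fall.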
Abstract:
Background Despite the emerging use of treadmills integrated with pressure platforms as outcome tools in both clinical and research settings, published evidence regarding the measurement properties of these new systems is limited. This study evaluated the within- and between-day repeatability of spatial, temporal and vertical ground reaction force parameters measured by a treadmill system instrumented with a capacitance-based pressure platform. Methods Thirty-three healthy adults (mean age, 21.5 ± 2.8 years; height, 168.4 ± 9.9 cm; and mass, 67.8 ± 18.6 kg) walked barefoot on a treadmill system (FDM-THM-S, Zebris Medical GmbH) on three separate occasions. For each testing session, participants set their preferred pace but were blinded to treadmill speed. Spatial (foot rotation, step width, stride and step length), temporal (stride and step times, duration of stance, swing and single and double support) and peak vertical ground reaction force variables were collected over a 30-second capture period, equating to an average of 52 ± 5 steps of steady-state walking. Testing was repeated one week following the initial trial and again, for a third time, 20 minutes later. Repeated measures ANOVAs within a generalized linear modelling framework were used to assess between-session differences in gait parameters. Agreement between gait parameters measured within the same day (sessions 2 and 3) and between days (sessions 1 and 2; 1 and 3) was evaluated using the 95% repeatability coefficient. Results There were statistically significant differences in the majority (14/16) of temporal, spatial and kinetic gait parameters over the three test sessions (P < .01). The minimum change that could be detected with 95% confidence ranged between 3% and 17% for temporal parameters, 14% and 33% for spatial parameters, and 4% and 20% for kinetic parameters between days. Within-day repeatability was similar to that observed between days. Temporal and kinetic gait parameters were typically more consistent than spatial parameters. The 95% repeatability coefficient for vertical force peaks ranged between ±53 and ±63 N. Conclusions The limits of agreement in spatial parameters and ground reaction forces for the treadmill system encompass previously reported changes with neuromuscular pathology and footwear interventions. These findings provide clinicians and researchers with an indication of the repeatability and sensitivity of the Zebris treadmill system to detect changes in common spatiotemporal gait parameters and vertical ground reaction forces.
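The 95% repeatability coefficient used above is conventionally computed as 1.96 × √2 (≈ 2.77) times the within-subject standard deviation, which for a test-retest design can be estimated from the paired differences. A minimal sketch under that standard definition (variable names illustrative):

```python
import numpy as np

def repeatability_coefficient(session_a, session_b):
    """95% repeatability coefficient from paired test-retest measurements.

    Computed as 1.96 * sqrt(2) * within-subject SD, where the within-subject
    variance is estimated from the paired differences: var_w = mean(d^2) / 2.
    """
    d = np.asarray(session_a, dtype=float) - np.asarray(session_b, dtype=float)
    sw = np.sqrt(np.mean(d ** 2) / 2.0)   # within-subject standard deviation
    return 1.96 * np.sqrt(2.0) * sw       # smallest change detectable with 95% confidence
```

Two repeated measurements on the same subject are expected to differ by less than this coefficient 95% of the time, which is why it doubles as the minimum detectable change reported in the abstract.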
Abstract:
Carrying capacity assessments model a population’s potential self-sufficiency. A crucial first step in the development of such modelling is to examine the basic resource-based parameters defining the population’s production and consumption habits. These parameters include basic human needs such as food, water, shelter and energy, together with climatic, environmental and behavioural characteristics. Each of these parameters imparts land-usage requirements in different ways and to varied degrees, so their incorporation into carrying capacity modelling also differs. Given that the availability and values of production parameters may differ between locations, no two carrying capacity models are likely to be exactly alike. However, the essential parameters themselves can remain consistent, so one example, the Carrying Capacity Dashboard, is offered as a case study to highlight one way in which these parameters are utilised. While examples exist of findings made from carrying capacity assessment modelling, to date, guidelines for replication of such studies in other regions and at other scales have largely been overlooked. This paper addresses such shortcomings by describing a process for the inclusion and calibration of the most important resource-based parameters in a way that could be repeated elsewhere.
Abstract:
The selection of optimal camera configurations (camera locations, orientations, etc.) for multi-camera networks remains an unsolved problem. Previous approaches largely focus on proposing various objective functions to achieve different tasks; most of them, however, do not generalize well to large-scale networks. To tackle this, we formulate the problem within a statistical framework and propose a trans-dimensional simulated annealing algorithm to solve it effectively. We compare our approach with a state-of-the-art method based on binary integer programming (BIP) and show that our approach offers similar performance on small-scale problems. However, we also demonstrate the capability of our approach in dealing with large-scale problems and show that it produces better results than two alternative heuristics designed to deal with the scalability issue of BIP. Last, we show the versatility of our approach using a number of specific scenarios.
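The paper's trans-dimensional sampler is not reproduced here, but the core idea of optimising camera placement by simulated annealing over a coverage objective can be sketched as follows. The candidate sites, targets, isotropic sensing-radius model, and fixed camera count are all illustrative assumptions, not details from the paper:

```python
import math
import random

def coverage(cams, targets, radius):
    """Number of targets within sensing radius of at least one camera."""
    return sum(any(math.dist(t, c) <= radius for c in cams) for t in targets)

def anneal_placement(candidates, targets, k, radius, steps=2000, seed=0):
    """Simulated annealing over k-camera subsets of the candidate locations."""
    rng = random.Random(seed)
    current = rng.sample(candidates, k)
    best, best_cov = list(current), coverage(current, targets, radius)
    for step in range(steps):
        temp = max(1e-3, 1.0 - step / steps)                 # linear cooling schedule
        proposal = list(current)
        proposal[rng.randrange(k)] = rng.choice(candidates)  # perturb one camera
        delta = coverage(proposal, targets, radius) - coverage(current, targets, radius)
        if delta >= 0 or rng.random() < math.exp(delta / temp):
            current = proposal                               # Metropolis acceptance
            cov = coverage(current, targets, radius)
            if cov > best_cov:
                best, best_cov = list(current), cov
    return best, best_cov
```

A trans-dimensional variant would additionally propose moves that add or remove cameras, trading coverage against network size; the acceptance rule stays the same.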
Abstract:
At the highest level of competitive sport, nearly all performances of athletes (both training and competitive) are chronicled using video. The video is then often viewed by expert coaches or analysts, who manually label important performance indicators to gauge performance. Stroke rate and pacing are important performance measures in swimming, and these have previously been digitised manually by a human. This is problematic, as annotating large volumes of video can be costly and time-consuming. Further, since it is difficult to accurately estimate the position of the swimmer at each frame, measures such as stroke rate are generally aggregated over an entire swimming lap. Vision-based techniques that can automatically, objectively and reliably track the swimmer and their location can potentially solve these issues and allow for large-scale analysis of a swimmer across many videos. However, the aquatic environment is challenging due to fluctuations in the scene caused by splashes and reflections, and because swimmers are frequently submerged at different points in a race. In this paper, we temporally segment races into distinct and sequential states, and propose a multimodal approach which employs individual detectors tuned to each race state. Our approach allows the swimmer to be located and tracked smoothly in each frame despite a diverse range of constraints. We test our approach on a video dataset compiled at the 2012 Australian Short Course Swimming Championships.
Abstract:
A new approach is proposed for obtaining a non-linear area-based equivalent model of power systems to express the inter-area oscillations using synchronised phasor measurements. The generators that remain coherent for inter-area disturbances over a wide range of operating conditions define the areas, and the reduced model is obtained by representing each area by an equivalent machine. The parameters of the reduced system are identified by processing the obtained measurements, and a non-linear Kalman estimator is then designed for the estimation of equivalent area angles and frequencies. The simulation of the approach on a two-area system shows substantial reduction of non-inter-area modes in the estimated angles. The proposed methods are also applied to a ten-machine system to illustrate the feasibility of the approach on larger and meshed networks.
Abstract:
BACKGROUND Research on engineering design is a core area of concern within engineering education, and a fundamental understanding of how engineering students approach and undertake design is necessary in order to develop effective design models and pedagogies. Understanding the factors related to design experiences in education and how they affect student practice can help educators as well as designers to leverage these factors as part of the design process. PURPOSE This study investigated the design practices of first-year engineering students and their experiences with a first-year engineering course design project. The research questions that guided the investigation were: 1. From a student perspective, what design parameters or criteria are most important? 2. How does this perspective impact subsequent student design practice throughout the design process? DESIGN/METHOD The authors employed qualitative multi-case study methods (Miles & Huberman, 1994) in order to answer the research questions. Participant teams were observed and video recorded during team design meetings in which they researched the background for the design problem, brainstormed and sketched possible solutions, and built prototypes and final models of their design solutions as part of a course design project. Analysis focused on explanation building (Yin, 2009) and utilized within-case and cross-case analysis (Miles & Huberman, 1994). RESULTS We found that students focused disproportionately on the functional parameter, i.e. the physical implementation of their solution, and the possible/applicable parameter, i.e. a possible and applicable solution that benefited the user, in comparison to other given parameters such as safety and innovativeness. In addition, we found that individual teams focused on the functional and possible/applicable parameters in early design phases such as brainstorming/ideation and sketching. When prompted to discuss these non-salient parameters (from the student perspective) in the final design report, student design teams often used post-hoc justification to support how the final designs fit the parameters that they did not initially consider. CONCLUSIONS This study suggests that student design teams become fixated on (and consequently prioritize) certain parameters they interpret as important because they feel these parameters were described more explicitly in terms of how they were met and assessed. Students fail to consider other parameters, perceived to be less directly assessable, unless prompted to do so. Failure to consider other parameters in the early design phases subsequently affects their approach in later design phases as well. Case studies examining students’ study strategies within three Australian universities illustrate similarities with some student approaches to design.
Abstract:
Camera-laser calibration is necessary for many robotics and computer vision applications. However, existing calibration toolboxes still require laborious effort from the operator in order to achieve reliable and accurate results. This paper proposes algorithms that augment two existing trusted calibration methods with automatic extraction of the calibration object from the sensor data. The result is a complete procedure that allows for automatic camera-laser calibration. The first stage of the procedure is automatic camera calibration, which is useful in its own right for many applications. The chessboard extraction algorithm it provides is shown to outperform openly available techniques. The second stage completes the procedure by providing automatic camera-laser calibration. The procedure has been verified by extensive experimental tests, with the proposed algorithms providing a major reduction in the time required from an operator in comparison to manual methods.
Abstract:
This document describes large, accurately calibrated and time-synchronised datasets, gathered in controlled environmental conditions, using an unmanned ground vehicle equipped with a wide variety of sensors. These sensors include multiple laser scanners, a millimetre-wave radar scanner, a colour camera and an infra-red camera. Full details of the sensors are given, as well as the calibration parameters needed to locate them with respect to each other and to the platform. This report also specifies the format and content of the data, and the conditions in which the data have been gathered. The data collection was made with the vehicle in two different situations: static and dynamic. The static tests consisted of sensing a fixed 'reference' terrain, containing simple known objects, from a motionless vehicle. For the dynamic tests, data were acquired from a moving vehicle in various environments, mainly rural, including an open area, a semi-urban zone and a natural area with different types of vegetation. For both categories, data have been gathered in controlled environmental conditions, which included the presence of dust, smoke and rain. Most of the environments involved were static, except for a few specific datasets which involve the presence of a walking pedestrian. Finally, this document presents illustrations of the effects of adverse environmental conditions on sensor data, as a first step towards reliability and integrity in autonomous perceptual systems.
Abstract:
This work aims to promote integrity in autonomous perceptual systems, with a focus on outdoor unmanned ground vehicles equipped with a camera and a 2D laser range finder. A method to check for inconsistencies between the data provided by these two heterogeneous sensors is proposed and discussed. First, uncertainties in the estimated transformation between the laser and camera frames are evaluated and propagated up to the projection of the laser points onto the image. Then, for each acquired pair of laser scan and camera image, the information at corners of the laser scan is compared with the content of the image, yielding a likelihood of correspondence. The result of this process is then used to validate segments of the laser scan that are found to be consistent with the image, while inconsistent segments are rejected. Experimental results illustrate how this technique can improve the reliability of perception in challenging environmental conditions, such as in the presence of airborne dust.
Abstract:
Many applications can benefit from the accurate surface temperature estimates that can be made using a passive thermal-infrared camera. However, the process of radiometric calibration which enables this can be both expensive and time consuming. An ad hoc approach for performing radiometric calibration is proposed which does not require specialized equipment and can be completed in a fraction of the time of the conventional method. The proposed approach utilizes the mechanical properties of the camera to estimate scene temperatures automatically, and uses these target temperatures to model the effect of sensor temperature on the digital output. A comparison with a conventional approach using a blackbody radiation source shows that the accuracy of the method is sufficient for many tasks requiring temperature estimation. Furthermore, a novel visualization method is proposed for displaying the radiometrically calibrated images to human operators. The representation employs an intuitive coloring scheme and allows the viewer to perceive a large variety of temperatures accurately.
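The abstract describes modelling the effect of sensor temperature on the camera's digital output, given automatically estimated target temperatures. The exact model form is not stated, so the following is a hedged sketch assuming a simple linear relationship fitted by least squares; the function names and the synthetic calibration pairs are illustrative only:

```python
import numpy as np

def fit_radiometric_model(counts, sensor_temp, scene_temp):
    """Least-squares fit of scene temperature as a linear function of the
    camera's digital output and its own sensor temperature:
        T_scene ~ a * counts + b * T_sensor + c
    Returns the coefficients (a, b, c)."""
    A = np.column_stack([counts, sensor_temp, np.ones(len(counts))])
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(scene_temp, dtype=float), rcond=None)
    return coeffs

def predict_scene_temp(coeffs, counts, sensor_temp):
    """Apply the fitted model to new readings."""
    a, b, c = coeffs
    return a * np.asarray(counts, float) + b * np.asarray(sensor_temp, float) + c
```

Real radiometric response is typically non-linear, so a production calibration would use a richer model; the sketch only illustrates how sensor-temperature compensation enters the fit as an extra regressor.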
Abstract:
This paper presents a new algorithm based on a Modified Particle Swarm Optimization (MPSO) to estimate the harmonic state variables in a distribution network. The proposed algorithm estimates both the amplitude and phase of each injected harmonic current by minimizing the error between the values measured by Phasor Measurement Units (PMUs) and the values computed from the estimated parameters during the estimation process. The proposed algorithm can take into account the uncertainty of the harmonic pseudo-measurements and the tolerance in the line impedances of the network, as well as the uncertainty of Distributed Generators (DGs) such as Wind Turbines (WTs). The main features of the proposed MPSO algorithm are the use of primary and secondary PSO loops and the application of a mutation function. Simulation results on the 34-bus IEEE radial and a 70-bus realistic radial test network are presented. The results demonstrate that the proposed Distribution Harmonic State Estimation (DHSE) algorithm is considerably faster and more accurate than algorithms such as Weighted Least Squares (WLS), the Genetic Algorithm (GA), original PSO, and Honey Bees Mating Optimization (HBMO).
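The estimator described above minimises the mismatch between PMU measurements and the values computed from a candidate harmonic state. The PSO core of such a scheme, stripped of the paper's mutation step and secondary loop, can be sketched as follows; the inertia and acceleration constants are conventional illustrative values, and the residual function stands in for the network-specific measurement model:

```python
import numpy as np

def pso_minimise(residual, dim, bounds, n_particles=30, iters=200, seed=0):
    """Minimal particle swarm optimiser for a scalar residual function.
    Each particle tracks its personal best; the swarm tracks a global best."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))       # positions (candidate states)
    v = np.zeros_like(x)                              # velocities
    pbest = x.copy()
    pbest_f = np.array([residual(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()                # global best position
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)                    # keep particles inside bounds
        f = np.array([residual(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()
```

In a DHSE setting, `dim` would cover the amplitude and phase of every injected harmonic current, and `residual` would return the squared error between the PMU readings and the measurements predicted from the candidate state.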
Abstract:
A simple but accurate method for measuring the Earth’s radius using a video camera is described. A video camera was used to capture a shadow rising up the wall of a tall building at sunset. A free program called ImageJ was used to measure the time it took the shadow to rise a known distance up the building. The time, distance and length of the sidereal day were used to calculate the radius of the Earth. The radius was measured as 6394.3 ± 118 km, which is within 1.8% of the accepted average value of 6371 km and well within the experimental error. The experiment is suitable as a high school or university project and should produce a value for Earth’s radius within a few per cent at latitudes towards the equator, where at some times of the year the ecliptic is approximately normal to the horizon.
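The geometry behind this measurement can be made explicit. Assuming the ecliptic is approximately normal to the horizon (as the abstract notes), the sun sits an angle alpha = 2*pi*t/T below the horizon t seconds after ground-level sunset, where T is the sidereal day, and sunset occurs at building height h = R*(sec(alpha) - 1), which is approximately R*alpha**2/2 for small angles. A sketch with purely illustrative numbers (not the paper's data):

```python
import math

def earth_radius(shadow_height_m, rise_time_s, sidereal_day_s=86164.1):
    """Earth radius from the time a sunset shadow takes to climb a building.

    alpha = 2*pi*t/T is the sun's depression angle t seconds after
    ground-level sunset; sunset at height h satisfies h ~ R*alpha**2/2,
    so R = 2*h / alpha**2."""
    alpha = 2.0 * math.pi * rise_time_s / sidereal_day_s
    return 2.0 * shadow_height_m / alpha ** 2
```

With an illustrative 50 m shadow rise over 54.3 s, the formula returns roughly 6.4 × 10^6 m, consistent with the accepted radius; in practice the timing would start from a known lower reference height rather than from ground-level sunset.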
Abstract:
This paper presents a new algorithm based on a hybrid of Particle Swarm Optimization (PSO) and Simulated Annealing (SA), called PSO-SA, to estimate harmonic state variables in distribution networks. The proposed algorithm estimates both the amplitude and phase of each injected harmonic current by minimizing the error between the values measured by Phasor Measurement Units (PMUs) and the values computed from the estimated parameters during the estimation process. The proposed algorithm can take into account the uncertainty of the harmonic pseudo-measurements and the tolerance in the line impedances of the network, as well as the uncertainty of Distributed Generators (DGs) such as Wind Turbines (WTs). The main feature of the proposed PSO-SA algorithm is that PSO, with a mutation function enabled, quickly reaches the neighbourhood of the global optimum, which the SA search then locates precisely. Simulation results on the IEEE 34-bus radial and a realistic 70-bus radial test network are presented to demonstrate that the proposed Distribution Harmonic State Estimation (DHSE) algorithm is considerably more effective and efficient than conventional algorithms such as Weighted Least Squares (WLS), the Genetic Algorithm (GA), original PSO and the Honey Bees Mating Optimization (HBMO) algorithm.
Abstract:
To obtain accurate Monte Carlo simulations of small radiation fields, it is important to model the initial source parameters (electron energy and spot size) accurately. However, recent studies have shown that small field dosimetry correction factors are insensitive to these parameters. The aim of this work is to extend this concept to test whether these parameters affect dose perturbations in general, which is important for detector design and for calculating perturbation correction factors. The EGSnrc C++ user code cavity was used for all simulations. Varying amounts of air between 0 and 2 mm were deliberately introduced upstream of a diode and the dose perturbation caused by the air was quantified. These simulations were then repeated using a range of initial electron energies (5.5 to 7.0 MeV) and electron spot sizes (0.7 to 2.2 FWHM). The resultant dose perturbations were large; for example, 2 mm of air caused a dose reduction of up to 31% when simulated with a 6 mm field size. However, these values did not vary by more than 2% when simulated across the full range of source parameters tested. If a detector is modified by the introduction of air, one can therefore be confident that the response of the detector will be the same across all similar linear accelerators, and Monte Carlo modelling of each individual machine is not required.