70 results for Position Measurement
Abstract:
Back-focal-plane interferometry is used to measure displacements of optically trapped samples with very high spatial and temporal resolution. However, the technique is closely related to a method that measures the rate of change of light momentum. It has long been known that displacements of the interference pattern at the back focal plane may be used to track the optical force directly, provided that a considerable fraction of the light is effectively monitored. Nonetheless, the practical application of this idea has been limited to counter-propagating, low-aperture beams, where accurate momentum measurements are possible. Here, we experimentally show that the connection can be extended to single-beam optical traps. In particular, we show that, in a gradient trap, the calibration product κ·β (where κ is the trap stiffness and 1/β is the position sensitivity) corresponds to the factor that converts detector signals into momentum changes; this factor is uniquely determined by three construction features of the detection instrument and does not depend, therefore, on the specific conditions of the experiment. We then find that force measurements obtained from back-focal-plane displacements are in practice not restricted to a linear relationship with position, and hence they can be extended outside that regime. Finally, and more importantly, we show that these properties are still recognizable even when the system is not fully optimized for light collection. These results should enable a more general use of back-focal-plane interferometry whenever the ultimate goal is the measurement of the forces exerted by an optical trap.
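The calibration product κ·β described in this abstract acts as a single signal-to-force conversion factor. A minimal sketch of that reading, with hypothetical calibration values (κ, β and the signal are illustrative, not taken from the study):

```python
# Sketch: converting a back-focal-plane detector signal into force.
# In a gradient trap, the product kappa*beta (trap stiffness times
# inverse position sensitivity) converts detector volts to force
# directly, without an intermediate position estimate.

def signal_to_force(signal_volts, kappa, beta):
    """Force [pN] from a detector signal.

    kappa : trap stiffness [pN/nm] (hypothetical value below)
    beta  : inverse position sensitivity [nm/V] (hypothetical)
    The product kappa*beta has units pN/V.
    """
    return kappa * beta * signal_volts

# Hypothetical calibration: kappa = 0.05 pN/nm, beta = 500 nm/V.
force = signal_to_force(0.2, kappa=0.05, beta=500.0)
print(force)  # 5.0 pN
```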
Abstract:
A reinforcement learning (RL) method was used to train a virtual character to move participants to a specified location. The virtual environment depicted an alleyway displayed through a wide field-of-view head-tracked stereo head-mounted display. Based on proxemics theory, we predicted that when the character approached within a personal or intimate distance of the participants, they would be inclined to move backwards out of the way. We carried out a between-groups experiment with 30 female participants, 10 assigned arbitrarily to each of three groups: in the Intimate condition the character could approach within 0.38 m, and in the Social condition no nearer than 1.2 m. In the Random condition the actions of the virtual character were chosen randomly from among the same set as in the RL method, and the virtual character could approach within 0.38 m. The experiment continued in each case until the participant either reached the target or 7 minutes had elapsed. The distributions of the times taken to reach the target showed significant differences between the three groups, with 9 out of 10 in the Intimate condition reaching the target significantly faster than the 6 out of 10 who reached the target in the Social condition. Only 1 out of 10 in the Random condition reached the target. The experiment is an example of applied presence theory: we rely on the many findings that people tend to respond realistically in immersive virtual environments, and use this to get people to achieve a task of which they had been unaware. This method opens the door for many such applications where the virtual environment adapts to the responses of the human participants with the aim of achieving particular goals.
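The abstract does not specify the RL algorithm's internals. As an illustrative sketch only, under the assumption of a simple tabular method, a Q-learning loop on a toy one-dimensional version of the task (where "approach" pushes the participant one cell toward the target) might look like this:

```python
# Toy stand-in for the RL training loop: the character chooses to wait
# or approach; approaching pushes the participant one cell toward the
# target. State/action space and rewards are invented for illustration.
import random

random.seed(0)
N = 6             # participant positions 0..5; target at 0
ACTIONS = [0, 1]  # 0 = wait, 1 = approach
alpha, gamma, eps = 0.5, 0.9, 0.1
Q = [[0.0, 0.0] for _ in range(N)]

for _ in range(500):  # training episodes
    s = N - 1
    while s != 0:
        # epsilon-greedy action selection
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: Q[s][x])
        s2 = s - 1 if a == 1 else s        # approach moves participant closer
        r = 1.0 if s2 == 0 else -0.01      # reward on reaching the target
        # Q-learning update
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

# After training, the greedy policy should prefer "approach" everywhere.
policy = [max(ACTIONS, key=lambda x: Q[s][x]) for s in range(1, N)]
```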
Abstract:
This paper reviews the concept of presence in immersive virtual environments, the sense of being there signalled by people acting and responding realistically to virtual situations and events. We argue that presence is a unique phenomenon that must be distinguished from the degree of engagement or involvement in the portrayed environment. We argue that there are three necessary conditions for presence: (a) a consistent low-latency sensorimotor loop between sensory data and proprioception; (b) statistical plausibility: images must be statistically plausible in relation to the probability distribution of images over natural scenes, with the level of immersion acting as a constraint on this plausibility; (c) behaviour-response correlations: presence may be enhanced and maintained over time by appropriate correlations between the state and behaviour of participants and responses within the environment, correlations that show appropriate responses to the activity of the participants. We conclude with a discussion of methods for assessing whether presence occurs, and in particular recommend the approach of comparison with ground truth, giving some examples of this.
Abstract:
This document is the result of contrasting the "know-how" of our projects with the topics addressed by the Secretary-General of the UN, Kofi Annan.
Abstract:
The present document should be seen as one more contribution to the debate on the reform processes, and as a small guide to these processes and their latest outcomes.
Abstract:
Hybrid navigation systems integrate position and velocity measurements from satellites (GPS) and from inertial measurement units (IMU). The data from these sensors must be fused and smoothed, and several filtering algorithms exist for this purpose, processing the data jointly or separately. In this work, the Kalman and IMM filter algorithms were implemented in Matlab and their performance was compared over several vehicle trajectories. The errors of the two filters were evaluated quantitatively, and their parameters were tuned to minimise these errors. With correct tuning of the filters, the IMM filter was found to be superior to the Kalman filter, for both abrupt and smooth manoeuvres, although its complexity and the required computation time are greater.
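The work implements the full filters in Matlab; as a hedged illustration of the underlying recursion only, a minimal one-dimensional constant-velocity Kalman filter (all noise values are placeholders) can be sketched as:

```python
# Minimal 1-D constant-velocity Kalman filter sketch (illustrative).
# State x = [position, velocity]; noisy position measurements, as from
# a GPS receiver. Process/measurement noise q, r are placeholder values.

def kalman_1d(zs, dt=1.0, q=0.01, r=1.0):
    x = [0.0, 0.0]                      # state estimate [pos, vel]
    P = [[1.0, 0.0], [0.0, 1.0]]        # estimate covariance
    out = []
    for z in zs:
        # Predict: x <- F x, P <- F P F' + Q, with F = [[1, dt], [0, 1]]
        x = [x[0] + dt * x[1], x[1]]
        P = [[P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q,
              P[0][1] + dt * P[1][1]],
             [P[1][0] + dt * P[1][1],
              P[1][1] + q]]
        # Update with position measurement z (H = [1, 0])
        S = P[0][0] + r
        K = [P[0][0] / S, P[1][0] / S]
        y = z - x[0]
        x = [x[0] + K[0] * y, x[1] + K[1] * y]
        P = [[(1 - K[0]) * P[0][0], (1 - K[0]) * P[0][1]],
             [P[1][0] - K[1] * P[0][0], P[1][1] - K[1] * P[0][1]]]
        out.append(x[0])
    return out

# Track moving at 1 unit/step: the filter should lock on and follow it.
est = kalman_1d([float(k) for k in range(20)])
```

An IMM filter would run a bank of such filters with different motion models (e.g. constant velocity and constant acceleration) and mix their estimates according to model probabilities, which is what lets it handle both smooth and abrupt manoeuvres.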
Abstract:
This paper describes a measurement system designed to register the displacement of the legs, using a two-dimensional laser range sensor with a scanning plane parallel to the ground, and to extract gait parameters. In the proposed methodology, the position of the legs is estimated by fitting two circles to the laser points that define their contour, and the gait parameters are extracted by applying a step-line model to the estimated displacement of the legs to reduce uncertainty in the determination of the stance and swing phases of the gait. Results obtained in a range of up to 8 m show that the systematic error in the location of one static leg is lower than 10 mm, with a standard deviation lower than 8 mm; this deviation increases to 11 mm in the case of a moving leg. The proposed measurement system has been applied to estimate the gait parameters of six volunteers in a preliminary walking experiment.
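The abstract does not give the exact fitting procedure. One common choice for fitting a circle to a partial contour of laser points is the algebraic (Kåsa) least-squares fit, sketched here with synthetic points on a leg-sized arc; the data and method are illustrative stand-ins:

```python
# Algebraic (Kasa) circle fit: solve a*x + b*y + c = -(x^2 + y^2) in
# least squares, then centre = (-a/2, -b/2), radius = sqrt(a^2/4 + b^2/4 - c).
import math

def fit_circle(points):
    # Build the 3x3 normal equations with plain lists.
    A = [[0.0] * 3 for _ in range(3)]
    b = [0.0] * 3
    for x, y in points:
        row = (x, y, 1.0)
        rhs = -(x * x + y * y)
        for i in range(3):
            for j in range(3):
                A[i][j] += row[i] * row[j]
            b[i] += row[i] * rhs
    # Gaussian elimination with partial pivoting.
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        b[i], b[p] = b[p], b[i]
        for r in range(i + 1, 3):
            f = A[r][i] / A[i][i]
            for c in range(i, 3):
                A[r][c] -= f * A[i][c]
            b[r] -= f * b[i]
    sol = [0.0] * 3
    for i in range(2, -1, -1):
        sol[i] = (b[i] - sum(A[i][j] * sol[j] for j in range(i + 1, 3))) / A[i][i]
    ca, cb, cc = sol
    cx, cy = -ca / 2, -cb / 2
    radius = math.sqrt(cx * cx + cy * cy - cc)
    return cx, cy, radius

# Synthetic points on a partial arc of a leg-sized circle (radius 0.06 m),
# mimicking the laser seeing only the front contour of a leg.
pts = [(1 + 0.06 * math.cos(t), 2 + 0.06 * math.sin(t))
       for t in [0.1 * k for k in range(20)]]
cx, cy, radius = fit_circle(pts)
```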
Abstract:
In this work, a LIDAR-based 3D Dynamic Measurement System is presented and evaluated for the geometric characterization of tree crops. Using this measurement system, trees were scanned from two opposing sides to obtain two three-dimensional point clouds. After registration of the point clouds, a simple and easily obtainable parameter is the number of impacts received by the scanned vegetation. The work in this study is based on the hypothesis of the existence of a linear relationship between the number of impacts of the LIDAR sensor laser beam on the vegetation and the tree leaf area. Tests performed under laboratory conditions using an ornamental tree and, subsequently, in a pear tree orchard demonstrate the correct operation of the measurement system presented in this paper. The results from both the laboratory and field tests confirm the initial hypothesis and the 3D Dynamic Measurement System is validated in field operation. This opens the door to new lines of research centred on the geometric characterization of tree crops in the field of agriculture and, more specifically, in precision fruit growing.
Abstract:
A network of twenty stakes was set up on Johnsons Glacier in order to determine its dynamics. During the austral summers from 1994-95 to 1997-98, we estimated surface velocities, mass balances and ice thickness variations. Horizontal velocity increased downstream from 1 m a⁻¹ near the ice divides to 40 m a⁻¹ near the ice terminus. The accumulation zone showed low accumulation rates (maximum of 0.6 m a⁻¹ (ice)), whereas in the lower part of the glacier, ablation rates were 4.3 m a⁻¹ (ice). Over the 3-year study period, both in the accumulation and ablation zones, we detected a reduction in the ice surface level ranging from 2 to 10 m. From the annual vertical velocities and ice-thinning data, the mass balance was obtained and compared with the mass balance field values, resulting in similar estimates. Flux values were calculated using cross-section data and horizontal velocities, and compared with the results obtained by means of mass balance and ice-thinning data using the continuity equation. The two methods gave similar results.
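The two flux estimates being compared can be sketched as follows; the areas and rates are illustrative placeholders, not Johnsons Glacier values:

```python
# Sketch of the two ice-flux estimates: (1) flux through a cross-section
# from its area and mean horizontal velocity; (2) flux from the
# continuity equation using mass balance and observed thinning over the
# upstream area. All numbers are illustrative.

def flux_from_section(area_m2, mean_velocity_m_per_a):
    # Ice flux [m^3 a^-1] through a cross-section.
    return area_m2 * mean_velocity_m_per_a

def flux_from_continuity(upstream_area_m2, mass_balance_m_per_a, thinning_m_per_a):
    # Continuity: dH/dt = b - dQ/dA, so Q = A * (b - dH/dt),
    # where thinning means dH/dt < 0.
    return upstream_area_m2 * (mass_balance_m_per_a - thinning_m_per_a)

q_section = flux_from_section(2.0e4, 25.0)              # hypothetical section
q_continuity = flux_from_continuity(1.0e6, -1.0, -1.5)  # hypothetical rates
# Agreement between the two values would mirror the comparison above.
```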
Abstract:
Purpose: Several well-known managerial accounting performance measurement models rely on causal assumptions. Whilst users of the models express satisfaction and link them with improved organizational performance, academic research on real-world applications shows few reliable statistical associations. This paper provides a discussion of the "problematic" of causality in a performance measurement setting. Design/methodology/approach: This is a conceptual study based on an analysis and synthesis of the literature from managerial accounting, organizational theory, strategic management and social scientific causal modelling. Findings: The analysis indicates that dynamic, complex and uncertain environments may challenge any reliance upon valid causal models. Due to cognitive limitations and judgmental biases, managers may fail to trace a correct cause-and-effect understanding of the value creation in their organizations. However, even lacking this validity, causal models can support strategic learning and perform as organizational guides if they are able to mobilize managerial action. Research limitations/implications: Future research should highlight the characteristics necessary for the elaboration of convincing and appealing causal models, and the social process of their construction. Practical implications: Managers of organizations using causal models should be clear on the purposes of their particular models and their limitations. In particular, difficulties are observed in specifying detailed cause-and-effect relations and their potential for communicating and directing attention. They should therefore construct their models to suit the particular purpose envisaged. Originality/value: This paper provides an interdisciplinary and holistic view of the issue of causality in managerial accounting models.
Abstract:
We present experiments in which the laterally confined flow of a surfactant film driven by controlled surface tension gradients causes the subtended liquid layer to self-organize into an inner upstream microduct surrounded by the downstream flow. The anomalous interfacial flow profiles and the concomitant backflow are a result of the feedback between two-dimensional and three-dimensional microfluidics realized during flow in open microchannels. Bulk and surface particle image velocimetry data combined with an interfacial hydrodynamics model explain the dependence of the observed phenomena on channel geometry.
Abstract:
Background: In an agreement assay, it is of interest to evaluate the degree of agreement between the different methods (devices, instruments or observers) used to measure the same characteristic. We propose in this study a technical simplification for inference about the total deviation index (TDI) estimate to assess agreement between two devices with normally distributed measurements, and describe its utility for evaluating inter- and intra-rater agreement if more than one reading per subject is available for each device. Methods: We propose to estimate the TDI by constructing a probability interval of the difference in paired measurements between devices, and thereafter we derive a tolerance interval (TI) procedure as a natural way to make inferences about probability limit estimates. We also describe how the proposed method can be used to compute bounds of the coverage probability. Results: The approach is illustrated in a real case example where the agreement between two instruments, a handle mercury sphygmomanometer device and an OMRON 711 automatic device, is assessed in a sample of 384 subjects whose systolic blood pressure was measured twice by each device. A simulation study is implemented to evaluate and compare the accuracy of the approach with two already established methods, showing that the TI approximation produces accurate empirical confidence levels which are reasonably close to the nominal confidence level. Conclusions: The method proposed is straightforward, since the TDI estimate is derived directly from a probability interval of a normally distributed variable in its original scale, without further transformations. Thereafter, a natural way of making inferences about this estimate is to derive the appropriate TI. Constructions of TIs based on normal populations are implemented in most standard statistical packages, thus making it simple for any practitioner to implement our proposal to assess agreement.
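For normally distributed paired differences D ~ N(μ, σ), the TDI at proportion p is the p-quantile of |D|, i.e. the boundary expected to capture a fraction p of absolute differences. A stdlib-only sketch with illustrative values (the blood-pressure numbers are hypothetical, not the study's estimates):

```python
# Sketch: TDI for normally distributed paired differences D ~ N(mu, sigma).
# TDI(p) solves P(-k <= D <= k) = p for k, found here by bisection.
from statistics import NormalDist

def tdi(mu, sigma, p=0.9):
    d = NormalDist(mu, sigma)
    cover = lambda k: d.cdf(k) - d.cdf(-k)   # P(|D| <= k)
    lo, hi = 0.0, abs(mu) + 10 * sigma       # bracket the quantile
    for _ in range(200):
        mid = (lo + hi) / 2
        if cover(mid) < p:
            lo = mid
        else:
            hi = mid
    return hi

# Hypothetical example: mean difference 1 mmHg, SD 5 mmHg between devices.
k = tdi(1.0, 5.0, p=0.9)
# 90% of absolute inter-device differences would fall within k mmHg.
```

Inference would then replace this point estimate with the appropriate tolerance-interval bound, as the paper proposes.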
Abstract:
Prompt production of charmonium χc0, χc1 and χc2 mesons is studied using proton-proton collisions at the LHC at a centre-of-mass energy of √s = 7 TeV. The χc mesons are identified through their decay to J/ψγ, with J/ψ → μ+μ−, using photons that converted in the detector. A data sample, corresponding to an integrated luminosity of 1.0 fb−1 collected by the LHCb detector, is used to measure the relative prompt production rate of χc1 and χc2 in the rapidity range 2.0 < y < 4.5 as a function of the J/ψ transverse momentum from 3 to 20 GeV/c. First evidence for χc0 meson production at a high-energy hadron collider is also presented.