110 results for Perception naïve
Abstract:
Background: The overuse of antibiotics is a growing concern. Antibiotic resistance, which increases both the burden of disease and the cost of health services, is perhaps the most profound impact of antibiotic overuse. Attempts have been made to develop instruments to measure the psychosocial constructs underlying antibiotic use; however, none of these instruments has undergone thorough psychometric validation. This study evaluates the psychometric properties of the Parental Perceptions on Antibiotics (PAPA) scales, which attempt to measure the factors influencing parental use of antibiotics in children. Methods: 1111 parents of children younger than 12 years old were recruited at primary schools' parental meetings in the Eastern Province of Saudi Arabia from September 2012 to January 2013. The structure of the PAPA instrument was validated using Confirmatory Factor Analysis (CFA), with measurement model fit evaluated using the raw and scaled χ2, the Goodness of Fit Index, and the Root Mean Square Error of Approximation. Results: A five-factor model was confirmed, with the model showing good fit. Constructs in the model include: Knowledge and Beliefs, Behaviors, Sources of information, Adherence, and Awareness about antibiotic resistance. The instrument was shown to have good internal consistency, and good discriminant and convergent validity. Conclusion: The availability of an instrument able to measure the psychosocial factors underlying antibiotic usage allows the risk factors underlying antibiotic use and overuse to be investigated.
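One of the fit indices named in this abstract, the Root Mean Square Error of Approximation, can be computed directly from a χ2 statistic. The sketch below uses the standard RMSEA formula; the χ2 value and degrees of freedom are hypothetical, and only the sample size (1111 parents) is taken from the abstract:

```python
import math

def rmsea(chi2: float, df: int, n: int) -> float:
    """Root Mean Square Error of Approximation.

    Standard formula: sqrt(max(chi2 - df, 0) / (df * (n - 1))).
    Values near or below ~0.06 are conventionally read as good fit.
    """
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

# Hypothetical fit statistics (NOT taken from the study), n from the abstract:
chi2, df, n = 250.0, 100, 1111
print(round(rmsea(chi2, df, n), 4))
```

When χ2 does not exceed the degrees of freedom, the formula bottoms out at zero, which is why the `max(..., 0)` clamp is part of the standard definition.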
Abstract:
This paper presents an approach to promote the integrity of perception systems for outdoor unmanned ground vehicles (UGV) operating in challenging environmental conditions (presence of dust or smoke). The proposed technique automatically evaluates the consistency of the data provided by two sensing modalities: a 2D laser range finder and a millimetre-wave radar, allowing for perceptual failure mitigation. Experimental results, obtained with a UGV operating in rural environments, and an error analysis validate the approach.
Abstract:
This work aims to contribute to the reliability and integrity of perceptual systems of unmanned ground vehicles (UGV). A method is proposed to evaluate the quality of sensor data prior to its use in a perception system by utilising a quality metric applied to heterogeneous sensor data such as visual and infrared camera images. The concept is illustrated specifically with sensor data that is evaluated prior to the use of the data in a standard SIFT feature extraction and matching technique. The method is then evaluated using various experimental data sets that were collected from a UGV in challenging environmental conditions, represented by the presence of airborne dust and smoke. In the first series of experiments, a motionless vehicle observes a 'reference' scene; the method is then extended to the case of a moving vehicle by compensating for its motion. This paper shows that it is possible to anticipate the degradation of a perception algorithm by evaluating the input data prior to any actual execution of the algorithm.
Abstract:
This paper proposes an experimental study of quality metrics that can be applied to visual and infrared images acquired from cameras onboard an unmanned ground vehicle (UGV). The relevance of existing metrics in this context is discussed and a novel metric is introduced. Selected metrics are evaluated on data collected by a UGV in clear and challenging environmental conditions, represented in this paper by the presence of airborne dust or smoke. An example of application is given with monocular SLAM estimating the pose of the UGV while smoke is present in the environment. It is shown that the proposed novel quality metric can be used to anticipate situations where the quality of the pose estimate will be significantly degraded due to the input image data. This enables decisions such as switching to a more advantageous data source (e.g. using infrared images instead of visual images).
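An image quality metric of the kind discussed above can be illustrated with a deliberately simple stand-in: the Shannon entropy of a grayscale histogram. This is a generic textbook measure, not the novel metric the paper proposes; it merely shows the principle that an obscured, low-contrast frame can be flagged before any downstream algorithm runs on it:

```python
import math
from collections import Counter

def entropy_quality(pixels: list) -> float:
    """Shannon entropy (in bits) of an 8-bit grayscale pixel histogram.

    A low-contrast frame (e.g. one heavily obscured by smoke) concentrates
    its histogram in a few bins and scores low; a well-textured frame
    spreads its histogram and scores high.
    """
    counts = Counter(pixels)
    total = len(pixels)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Toy pixel data (hypothetical, not from the paper's datasets):
clear = [10, 80, 150, 220] * 64        # spread histogram -> high entropy
smoky = [128] * 250 + [129] * 6        # nearly uniform grey -> low entropy
print(entropy_quality(clear) > entropy_quality(smoky))  # True
```

A perception pipeline could threshold such a score to decide when to distrust the visual camera and fall back to another modality, in the spirit of the source-switching decision described in the abstract.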
Abstract:
This paper presents large, accurately calibrated and time-synchronised datasets, gathered outdoors in controlled environmental conditions, using an unmanned ground vehicle (UGV), equipped with a wide variety of sensors. It discusses how the data collection process was designed, the conditions in which these datasets have been gathered, and some possible outcomes of their exploitation, in particular for the evaluation of performance of sensors and perception algorithms for UGVs.
Abstract:
This document describes large, accurately calibrated and time-synchronised datasets, gathered in controlled environmental conditions, using an unmanned ground vehicle equipped with a wide variety of sensors. These sensors include: multiple laser scanners, a millimetre wave radar scanner, a colour camera and an infra-red camera. Full details of the sensors are given, as well as the calibration parameters needed to locate them with respect to each other and to the platform. This report also specifies the format and content of the data, and the conditions in which the data have been gathered. The data collection was carried out in two vehicle configurations: static and dynamic. The static tests consisted of sensing a fixed 'reference' terrain, containing simple known objects, from a motionless vehicle. For the dynamic tests, data were acquired from a moving vehicle in various environments, mainly rural, including an open area, a semi-urban zone and a natural area with different types of vegetation. For both categories, data have been gathered in controlled environmental conditions, which included the presence of dust, smoke and rain. Most of the environments involved were static, except for a few specific datasets which involve the presence of a walking pedestrian. Finally, this document presents illustrations of the effects of adverse environmental conditions on sensor data, as a first step towards reliability and integrity in autonomous perceptual systems.
Abstract:
In this paper we present large, accurately calibrated and time-synchronized data sets, gathered outdoors in controlled and variable environmental conditions, using an unmanned ground vehicle (UGV), equipped with a wide variety of sensors. These include four 2D laser scanners, a radar scanner, a color camera and an infrared camera. We provide a full description of the system used for data collection and of the types of environments and conditions in which these data sets were gathered, including the presence of airborne dust, smoke and rain.
Abstract:
This work aims to promote integrity in autonomous perceptual systems, with a focus on outdoor unmanned ground vehicles equipped with a camera and a 2D laser range finder. A method to check for inconsistencies between the data provided by these two heterogeneous sensors is proposed and discussed. First, uncertainties in the estimated transformation between the laser and camera frames are evaluated and propagated up to the projection of the laser points onto the image. Then, for each acquired laser scan and camera image pair, the information at corners of the laser scan is compared with the content of the image, resulting in a likelihood of correspondence. The result of this process is then used to validate segments of the laser scan that are found to be consistent with the image, while inconsistent segments are rejected. Experimental results illustrate how this technique can improve the reliability of perception in challenging environmental conditions, such as in the presence of airborne dust.
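The first step described in this abstract, projecting laser points into the image with propagated uncertainty, can be sketched with a pinhole camera model and a numerically differentiated Jacobian. All intrinsics and the uncertainty value below are hypothetical, and the real method involves a full laser-to-camera extrinsic transformation that is omitted here for brevity:

```python
import math

def project(point, fx, fy, cx, cy):
    """Pinhole projection of a 3D point (already in the camera frame) to pixels."""
    x, y, z = point
    return (fx * x / z + cx, fy * y / z + cy)

def propagate_sigma(point, sigma_xyz, fx, fy, cx, cy, eps=1e-6):
    """First-order propagation of an isotropic 3D point uncertainty to pixel space.

    Uses a numerically differentiated Jacobian of the projection; this is an
    illustrative stand-in for the analytical propagation a real system would use.
    """
    base_u, base_v = project(point, fx, fy, cx, cy)
    var_u = var_v = 0.0
    for i in range(3):
        p = list(point)
        p[i] += eps
        u, v = project(p, fx, fy, cx, cy)
        var_u += ((u - base_u) / eps * sigma_xyz) ** 2
        var_v += ((v - base_v) / eps * sigma_xyz) ** 2
    return math.sqrt(var_u), math.sqrt(var_v)

# Hypothetical intrinsics and a laser point 2 m in front of the camera:
fx, fy, cx, cy = 500.0, 500.0, 320.0, 240.0
su, sv = propagate_sigma((0.0, 0.0, 2.0), 0.05, fx, fy, cx, cy)
print(round(su, 1), round(sv, 1))
```

The resulting pixel-space standard deviations define the image neighbourhood in which a laser corner should be compared with image content when computing a likelihood of correspondence.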
Abstract:
This work aims to promote reliability and integrity in autonomous perceptual systems, with a focus on outdoor unmanned ground vehicle (UGV) autonomy. For this purpose, a comprehensive UGV system, comprising many different exteroceptive and proprioceptive sensors, has been built. The first contribution of this work is a large, accurately calibrated and synchronised multi-modal data-set, gathered in controlled environmental conditions, including the presence of dust, smoke and rain. The data have then been used to analyse the effects of such challenging conditions on perception and to identify common perceptual failures. The second contribution is a presentation of methods for mitigating these failures to promote perceptual integrity in adverse environmental conditions.
Abstract:
The integration of separate, yet complementary, cortical pathways appears to play a role in visual perception and action when intercepting objects. The ventral system is responsible for object recognition and identification, while the dorsal system facilitates continuous regulation of action. This dual-system model implies that empirically manipulating different visual information sources during performance of an interceptive action might lead to the emergence of distinct gaze and movement pattern profiles. To test this idea, we recorded hand kinematics and eye movements of participants as they attempted to catch balls projected from a novel apparatus that synchronised or de-synchronised accompanying video images of a throwing action and ball trajectory. Results revealed that ball catching performance was less successful when patterns of hand movements and gaze behaviours were constrained by the absence of advanced perceptual information from the thrower's actions. Under these task constraints, participants began tracking the ball later, followed less of its trajectory, and adapted their actions by initiating movements later and moving the hand faster. There were no performance differences when the throwing action image and ball speed were synchronised or de-synchronised, since hand movements were closely linked to information from ball trajectory. Results are interpreted relative to the two-visual-system hypothesis, demonstrating that accurate interception requires integration of advanced visual information from kinematics of the throwing action and from ball flight trajectory.
Abstract:
Background How accurately do people perceive extreme water speeds and how does their perception affect perceived risk? Prior research has focused on the characteristics of moving water that can reduce human stability or balance. The current research presents the first experiment on people's perceptions of risk and moving water at different speeds and depths. Methods Using a randomized within-person 2 (water depth: 0.45, 0.90 m) × 3 (water speed: 0.4, 0.8, 1.2 m/s) experiment, we immersed 76 people in moving water and asked them to estimate water speed and the risk they felt. Results Multilevel modeling showed that people increasingly overestimated water speeds as actual water speeds increased or as water depth increased. Water speed perceptions mediated the direct positive relationship between actual water speeds and perceptions of risk; the faster the moving water, the greater the perceived risk. Participants' prior experience with rip currents and tropical cyclones moderated the strength of the actual–perceived water speed relationship; consequently, mediation was stronger for people who had experienced no rip currents or fewer storms. Conclusions These findings provide a clearer understanding of water speed and risk perception, which may help communicate the risks associated with anticipated floods and tropical cyclones.
Abstract:
Background How accurately do people perceive extreme wind speeds and how does that perception affect the perceived risk? Prior research on human–wind interaction has focused on comfort levels in urban settings or knock-down thresholds. No systematic experimental research has attempted to assess people's ability to estimate extreme wind speeds and perceptions of their associated risks. Method We exposed 76 people to 10, 20, 30, 40, 50, and 60 mph (4.5, 8.9, 13.4, 17.9, 22.3, and 26.8 m/s) winds in randomized orders and asked them to estimate wind speed and the corresponding risk they felt. Results Multilevel modeling showed that people were accurate at lower wind speeds but overestimated wind speeds at higher levels. Wind speed perceptions mediated the direct relationship between actual wind speeds and perceptions of risk (i.e., the greater the perceived wind speed, the greater the perceived risk). The number of tropical cyclones people had experienced moderated the strength of the actual–perceived wind speed relationship; consequently, mediation was stronger for people who had experienced fewer storms. Conclusion These findings provide a clearer understanding of wind and risk perception, which can aid development of public policy solutions toward communicating the severity and risks associated with natural disasters.
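The mediation finding reported above (perceived speed carries the effect of actual speed on perceived risk) can be sketched with a plain product-of-coefficients calculation. This is a simplified illustration with invented numbers, not the multilevel model used in the study, and a full mediation analysis would also control for actual speed in the second regression:

```python
def ols_slope(x, y):
    """Least-squares slope of y on x (intercept implicit)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    return sxy / sxx

# Hypothetical data; actual speeds (m/s) match the study's levels,
# perceived speeds and risk ratings are invented for illustration:
actual    = [4.5, 8.9, 13.4, 17.9, 22.3, 26.8]
perceived = [5.0, 9.0, 15.0, 21.0, 28.0, 35.0]   # overestimation grows with speed
risk      = [1.0, 2.0, 3.5, 5.0, 6.5, 8.0]

a = ols_slope(actual, perceived)   # path a: actual -> perceived speed
b = ols_slope(perceived, risk)     # path b: perceived speed -> perceived risk (simplified)
indirect = a * b                   # product-of-coefficients indirect effect
print(round(indirect, 3))
```

A slope `a` above 1 reproduces the overestimation pattern at higher speeds, and a positive indirect effect `a * b` is the signature of mediation in this product-of-coefficients framing.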
Abstract:
Working primarily within the natural landscape, this practice-led research project explored connections between the artist's visual and perceptual experience of a journey or place while simultaneously emphasizing the capacity for digital media to create a perceptual dissonance. By exploring concepts of time, viewpoint, duration of sequences and the manipulation of traditional constructs of stop-frame animation, the practical work created a cognitive awareness of the elements of the journey through optical sensations. The work allowed an opportunity to reflect on the nature of visual experience and its mediation through images. The project recontextualized the selected mediums of still photography, animation and projection within contemporary display modes of multiple screen installations by analysing relationships between the experienced and the perceived. The resulting works added to current discourse on the interstices between still and moving imagery in a digital world.
Abstract:
It has been shown that active control of locomotion increases accuracy and precision of nonvisual space perception, but psychological mechanisms of this enhancement are poorly understood. The present study explored a hypothesis that active control of locomotion enhances space perception by facilitating crossmodal interaction between visual and nonvisual spatial information. In an experiment, blindfolded participants walked along a linear path under one of the following two conditions: (1) They walked by themselves following a guide rope; and (2) they were led by an experimenter. Subsequently, they indicated the walked distance by tossing a beanbag to the origin of locomotion. The former condition gave participants greater control of their locomotion, and thus represented a more active walking condition. In addition, before each trial, half the participants viewed the room in which they performed the distance perception task. The other half remained blindfolded throughout the experiment. Results showed that although the room was devoid of any particular cues for walked distances, visual knowledge of the surroundings improved the precision of nonvisual distance perception. Importantly, however, the benefit of preview was observed only when participants walked more actively. This indicates that active control of locomotion allowed participants to better utilize their visual memory of the environment for perceiving nonvisually encoded distance, suggesting that active control of locomotion served as a catalyst for integrating visual and nonvisual information to derive spatial representations of higher quality.