Abstract:
Robots currently recognise and use objects through algorithms that are hand-coded or specifically trained. Such robots can operate in known, structured environments but cannot learn to recognise or use novel objects as they appear. This thesis demonstrates that a robot can develop meaningful object representations by learning the fundamental relationship between action and change in sensory state; the robot learns sensorimotor coordination. Methods based on Markov Decision Processes are experimentally validated on a mobile robot capable of gripping objects, and it is found that object recognition and manipulation can be learnt as an emergent property of sensorimotor coordination.
Abstract:
Driving is often nominated as problematic by individuals with chronic whiplash associated disorders (WAD), yet driving-related performance has not been evaluated objectively. The purpose of this study was to test driving-related performance in persons with chronic WAD against healthy controls of similar age, gender and driving experience to determine whether driving-related performance in the WAD group was sufficiently impaired to recommend fitness-to-drive assessment. Driving-related performance was assessed using an advanced driving simulator during three driving scenarios: freeway, residential and a central business district (CBD). Total driving duration was approximately 15 min. Five driving tasks which could cause a collision (critical events) were included in the scenarios. In addition, the effect of divided attention (identifying red dots projected onto the side or rear view mirrors) was assessed three times in each scenario. Driving performance was measured using the simulator performance index (SPI), which is calculated from 12 measures. z-Scores for all SPI measures were calculated for each WAD subject based on the mean values of the control subjects, and then averaged for the WAD group. A z-score of ≤−2 indicated a failing grade for driving in the simulator. The number of collisions over the five critical events was compared between the WAD and control groups, as were reaction time and missed response ratio in identifying the red dots. Seventeen WAD and 26 control subjects commenced the driving assessment. Demographic data were comparable between the groups. All subjects completed the freeway scenario, but four withdrew during the residential and eight during the CBD scenario because of motion sickness. All scenarios were completed by 14 WAD and 17 control subjects. The mean z-score for the SPI over the three scenarios was significantly lower in the WAD group (−0.3 ± 0.3; P < 0.05), but the score was not below the cut-off point for safe driving.
There were no differences in reaction time or missed response ratio in the divided attention tasks between the groups (all P > 0.05). Assessment of driving in an advanced driving simulator for approximately 15 min revealed that driving-related performance in chronic WAD was not sufficiently impaired to recommend the need for fitness-to-drive assessment.
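The z-scoring procedure described above (standardize each WAD subject's SPI measure against the control group's mean and standard deviation, then average across the WAD group) can be sketched as follows; the function name and array inputs are illustrative, not taken from the study:

```python
import numpy as np

def group_z_score(wad_scores, control_scores):
    """Standardize each WAD subject's score against the control group's
    mean and sample standard deviation, then average across the WAD group.
    A group mean z-score of -2 or below would indicate a failing grade."""
    mu = control_scores.mean()
    sd = control_scores.std(ddof=1)          # sample SD of the controls
    z = (np.asarray(wad_scores) - mu) / sd   # one z-score per WAD subject
    return z.mean()
```

Under this convention, the study's reported group mean of −0.3 lies well above the −2 failing cut-off, matching the conclusion that performance was lowered but not unsafe.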
Abstract:
Lymphedema following cancer treatment is characterized by swelling and adversely influences mobility, function and quality of life. There is no cure, but without treatment lymphedema may progress. Since lymphedema treatment options are costly and time-consuming, understanding the influence of these, and other potential barriers, on treatment adherence is vital to reducing the public health burden of lymphedema. Complex physical therapy and compression are supported by scientific evidence, and patients also perceive these treatments as effective for improving symptoms and function. Multiple treatments may be required to treat all aspects of the condition. Patients and health professionals should consider effect and costs when identifying optimal treatment strategies.
Abstract:
The use of Wireless Sensor Networks (WSNs) for vibration-based Structural Health Monitoring (SHM) has become a promising approach due to advantages such as low cost and fast, flexible deployment. However, inherent technical issues such as data asynchronicity and data loss have prevented these distinct systems from being used extensively. Recently, several SHM-oriented WSNs have been proposed and are believed to overcome a large number of technical uncertainties. Nevertheless, there is limited research verifying the applicability of those WSNs to demanding SHM applications such as modal analysis and damage identification. Based on a brief review, this paper first shows that Data Synchronization Error (DSE) is the most inherent factor amongst the uncertainties of SHM-oriented WSNs. The effects of this factor are then investigated on the outcomes and performance of the most robust Output-only Modal Analysis (OMA) techniques when merging data from multiple sensor setups. The two OMA families selected for this investigation are Frequency Domain Decomposition (FDD) and data-driven Stochastic Subspace Identification (SSI-data), as both have been widely applied in the past decade. Accelerations collected by a wired sensory system on a large-scale laboratory bridge model are initially used as benchmark data after a certain level of noise is added to account for the higher presence of this factor in SHM-oriented WSNs. From this source, a large number of simulations were made to generate multiple DSE-corrupted datasets to facilitate statistical analyses. The results of this study show the robustness of FDD and the precautions needed for the SSI-data family when dealing with DSE at a relaxed level. Finally, the combination of preferred OMA techniques, and the use of channel projection for the time-domain OMA technique to cope with DSE, are recommended.
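As a concrete illustration of the first OMA family named above: Frequency Domain Decomposition takes the singular value decomposition of the output cross-spectral density matrix at each frequency line; peaks in the first singular value mark candidate natural frequencies, and the corresponding singular vectors approximate mode shapes. A minimal sketch of this first stage (function name and parameters are illustrative; a real implementation would add peak-picking and mode validation):

```python
import numpy as np
from scipy import signal

def fdd_first_singular(accel, fs, nperseg=1024):
    """Frequency Domain Decomposition, first stage only: build the
    cross-spectral density (CSD) matrix G(f) from multi-channel
    accelerations and take its SVD at every frequency line."""
    n_ch = accel.shape[1]
    f, G = None, None
    for i in range(n_ch):
        for j in range(n_ch):
            f, Pij = signal.csd(accel[:, i], accel[:, j],
                                fs=fs, nperseg=nperseg)
            if G is None:
                G = np.zeros((len(f), n_ch, n_ch), dtype=complex)
            G[:, i, j] = Pij
    U, S, _ = np.linalg.svd(G)        # batched SVD over frequency lines
    # frequencies, first singular values, first singular vectors (shapes)
    return f, S[:, 0], U[:, :, 0]
```

Plotting the first singular value against frequency and picking its peaks recovers the structure's natural frequencies; the study's DSE analysis amounts to checking how such outcomes degrade when the channels feeding `accel` are not sampled synchronously.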
Abstract:
Path integration is a process in which observers derive their location by integrating self-motion signals along their locomotion trajectory. Although the medial temporal lobe (MTL) is thought to take part in path integration, the scope of its role in path integration remains unclear. To address this issue, we administered a variety of tasks involving path integration and other related processes to a group of neurosurgical patients whose MTL was unilaterally resected as therapy for epilepsy. These patients were unimpaired relative to neurologically intact controls in many tasks that required integration of various kinds of sensory self-motion information. However, the same patients (especially those who had lesions in the right hemisphere) walked farther than the controls when attempting to walk without vision to a previewed target. Importantly, this task was unique in our test battery in that it allowed participants to form a mental representation of the target location and anticipate their upcoming walking trajectory before they began moving. Thus, these results suggest a new idea: the role of MTL structures in human path integration may stem from their participation in predicting the consequences of one's locomotor actions. The strengths of this new theoretical viewpoint are discussed.
Abstract:
Dissociable processes for conscious perception (“what” processing) and guidance of action (“how” processing) have been identified in visual, auditory, and somatosensory systems. The present study was designed to find similar dissociation within whole-body movements in which the presence of vestibular information creates a unique perceptual condition. In two experiments, blindfolded participants walked along a linear path and specified the walked distance by verbally estimating it (“what” measure) and by pulling a length of tape that matched the walked distance (“how” measure). Although these two measures yielded largely comparable responses under a normal walking condition, variability in verbal estimates showed a qualitatively different pattern from that in tape-pulling when sensory input into walking was altered by having participants wear a heavy backpack. This suggests that the “what” versus “how” dissociation exists in whole-body movements as well, supporting a claim that it is a general principle with which perceptual systems are organized.
Abstract:
The aim of the research was two-fold: firstly, to investigate strategies used by Australian parents to encourage desirable child behaviours and to decrease undesirable behaviours; secondly, to determine the acceptability and perceived usefulness to parents of various strategies. The research encompassed two studies. In the first study, 152 parents of children aged under six years completed questionnaires to identify their disciplinary practices. In Study 2, 129 parents reported on the acceptability and perceived effectiveness of various parenting strategies (modelling, ignoring, rewarding and physical punishment) for influencing child behaviour. Most parents in Study 1 reported using techniques consistent with positive parenting strategies. The use of physical punishment was also reported, but predominantly as a secondary method of discipline. In Study 2, the techniques of modelling and rewarding were found to be more acceptable to parents than were ignoring and smacking. The findings highlight the need to raise parental awareness and acceptance of a broader range of positive ways to manage child behaviour.
Abstract:
In this paper we provide normative data along multiple cognitive and affective variable dimensions for a set of 110 sounds, including living and manmade stimuli. Environmental sounds are being increasingly utilized as stimuli in the cognitive, neuropsychological and neuroimaging fields, yet there is no comprehensive set of normative information for these types of stimuli available for use across these experimental domains. Experiment 1 collected data from 162 participants in an online questionnaire, which included measures of identification and categorization as well as cognitive and affective variables. A subsequent experiment collected response times to these sounds. Sounds were normalized to the same length (1 second) in order to maximize usage across multiple paradigms and experimental fields. These sounds can be freely downloaded for use, and all response data have also been made available so that researchers can choose one or many of the cognitive and affective dimensions along which they would like to control their stimuli. Our hope is that the availability of such information will assist researchers in the fields of cognitive and clinical psychology and the neuroimaging community in choosing well-controlled environmental sound stimuli, and allow comparison across multiple studies.
Abstract:
Semantic knowledge is supported by a widely distributed neuronal network, with differential patterns of activation depending upon experimental stimulus or task demands. Despite a wide body of knowledge on semantic object processing from the visual modality, the response of this semantic network to environmental sounds remains relatively unknown. Here, we used fMRI to investigate how access to different conceptual attributes from environmental sound input modulates this semantic network. Using a range of living and manmade sounds, we scanned participants whilst they carried out an object attribute verification task. Specifically, we tested visual perceptual, encyclopedic, and categorical attributes about living and manmade objects relative to a high-level auditory perceptual baseline to investigate the differential patterns of response to these contrasting types of object-related attributes, whilst keeping stimulus input constant across conditions. Within the bilateral distributed network engaged for processing environmental sounds across all conditions, we report here a highly significant dissociation within the left hemisphere between the processing of visual perceptual and encyclopedic attributes of objects.
Abstract:
In this study we investigate previous claims that a region in the left posterior superior temporal sulcus (pSTS) is more activated by audiovisual than unimodal processing. First, we compare audiovisual to visual-visual and auditory-auditory conceptual matching using auditory or visual object names that are paired with pictures of objects or their environmental sounds. Second, we compare congruent and incongruent audiovisual trials when presentation is simultaneous or sequential. Third, we compare audiovisual stimuli that are either verbal (auditory and visual words) or nonverbal (pictures of objects and their associated sounds). The results demonstrate that, when task, attention, and stimuli are controlled, pSTS activation for audiovisual conceptual matching is 1) identical to that observed for intramodal conceptual matching, 2) greater for incongruent than congruent trials when auditory and visual stimuli are simultaneously presented, and 3) identical for verbal and nonverbal stimuli. These results are not consistent with previous claims that pSTS activation reflects the active formation of an integrated audiovisual representation. After a discussion of the stimulus and task factors that modulate activation, we conclude that, when stimulus input, task, and attention are controlled, pSTS is part of a distributed set of regions involved in conceptual matching, irrespective of whether the stimuli are audiovisual, auditory-auditory or visual-visual.
Abstract:
Neuropsychological tests requiring patients to find a path through a maze can be used to assess visuospatial memory performance in temporal lobe pathology, particularly in the hippocampus. Alternatively, they have been used as a task sensitive to executive function in patients with frontal lobe damage. We measured performance on the Austin Maze in patients with unilateral left and right temporal lobe epilepsy (TLE), with and without hippocampal sclerosis, compared to healthy controls. Performance was correlated with a number of other neuropsychological tests to identify the cognitive components that may be associated with poor Austin Maze performance. Patients with right TLE were significantly impaired on the Austin Maze task relative to patients with left TLE and controls, and error scores correlated with their performance on the Block Design task. The performance of patients with left TLE was also impaired relative to controls; however, errors correlated with performance on tests of executive function and delayed recall. The presence of hippocampal sclerosis did not have an impact on maze performance. A discriminant function analysis indicated that the Austin Maze alone correctly classified 73.5% of patients as having right TLE. In summary, impaired performance on the Austin Maze task is more suggestive of right than left TLE; however, impaired performance on this visuospatial task does not necessarily involve the hippocampus. The relationship of the Austin Maze task with other neuropsychological tests suggests that differential cognitive components may underlie performance decrements in right versus left TLE.
Abstract:
By virtue of its widespread afferent projections, perirhinal cortex is thought to bind polymodal information into abstract object-level representations. Consistent with this proposal, deficits in cross-modal integration have been reported after perirhinal lesions in nonhuman primates. It is therefore surprising that imaging studies of humans have not observed perirhinal activation during visual-tactile object matching. Critically, however, these studies did not differentiate between congruent and incongruent trials. This is important because successful integration can only occur when polymodal information indicates a single object (congruent) rather than different objects (incongruent). We scanned neurologically intact individuals using functional magnetic resonance imaging (fMRI) while they matched shapes. We found higher perirhinal activation bilaterally for cross-modal (visual-tactile) than unimodal (visual-visual or tactile-tactile) matching, but only when visual and tactile attributes were congruent. Our results demonstrate that the human perirhinal cortex is involved in cross-modal, visual-tactile integration and thus indicate a functional homology between human and monkey perirhinal cortices.
Abstract:
To identify and categorize complex stimuli such as familiar objects or speech, the human brain integrates information that is abstracted at multiple levels from its sensory inputs. Using cross-modal priming for spoken words and sounds, this functional magnetic resonance imaging study identified 3 distinct classes of visuoauditory incongruency effects: visuoauditory incongruency effects were selective for 1) spoken words in the left superior temporal sulcus (STS), 2) environmental sounds in the left angular gyrus (AG), and 3) both words and sounds in the lateral and medial prefrontal cortices (IFS/mPFC). From a cognitive perspective, these incongruency effects suggest that prior visual information influences the neural processes underlying speech and sound recognition at multiple levels, with the STS being involved in phonological, AG in semantic, and mPFC/IFS in higher conceptual processing. In terms of neural mechanisms, effective connectivity analyses (dynamic causal modeling) suggest that these incongruency effects may emerge via greater bottom-up effects from early auditory regions to intermediate multisensory integration areas (i.e., STS and AG). This is consistent with a predictive coding perspective on hierarchical Bayesian inference in the cortex where the domain of the prediction error (phonological vs. semantic) determines its regional expression (middle temporal gyrus/STS vs. AG/intraparietal sulcus).
Abstract:
This paper presents a novel method to rank map hypotheses by the quality of localization they afford. The highest ranked hypothesis at any moment becomes the active representation that is used to guide the robot to its goal location. A single static representation is insufficient for navigation in dynamic environments where paths can be blocked periodically, a common scenario that poses significant challenges for typical planners. In our approach we simultaneously rank multiple map hypotheses by the influence that localization in each of them has on locally accurate odometry. This is done online for the current locally accurate window by formulating a factor graph of odometry relaxed by localization constraints. Comparing the resulting perturbed odometry of each hypothesis with the original odometry yields a score that can be used to rank map hypotheses by their utility. We deploy the proposed approach on a real robot navigating a structurally noisy office environment. The configuration of the environment is physically altered outside the robot's sensory horizon during navigation tasks to demonstrate the proposed approach of hypothesis selection.
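The scoring idea described above (the less localization in a map perturbs the locally accurate odometry, the better that map fits reality) can be sketched in a simplified form. Everything here is an illustrative stand-in for the paper's factor-graph formulation: flat 2-D positions replace the graph, an RMS deviation replaces the graph-based score, and the names are hypothetical:

```python
import numpy as np

def rank_map_hypotheses(original_odom, relaxed_odom_by_hyp):
    """Rank map hypotheses best-first by how little localization in each
    map perturbs the locally accurate odometry window.
    original_odom: (N, 2) odometry positions in the current window.
    relaxed_odom_by_hyp: hypothesis id -> (N, 2) positions after the
    odometry is relaxed by that hypothesis's localization constraints."""
    def rms_deviation(relaxed):
        d = relaxed - original_odom
        return float(np.sqrt(np.mean(np.sum(d * d, axis=1))))
    scores = {h: rms_deviation(r) for h, r in relaxed_odom_by_hyp.items()}
    return sorted(scores, key=scores.get)   # smallest perturbation first
```

A hypothesis describing a now-blocked corridor would pull the relaxed trajectory far from the raw odometry and sink in the ranking, which is how the method detects configuration changes outside the sensory horizon.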
Abstract:
Using mixed methods, this research investigated why consumers engage in deviant behaviours. It found significant variation in how consumers perceive right and wrong, which calls for more tailored deterrence strategies to challenge how consumers justify deviant behaviours. Specifically, individuals draw on a number of factors when assessing right and wrong. While individuals agree on the polar acceptable and unacceptable behaviours, behaviours in between are questionable. When social consensus on a behaviour's acceptability varies, so too do the predictors of deviant behaviour. These findings contribute to consumer deviance and consumer ethics research.