Abstract:
As healthcare costs rise and an aging population places increased demand on services, new techniques must be introduced to promote individuals' independence and provide these services. Robots can now be designed to alter their dynamic properties, changing from stiff to flaccid, or from giving no resistance to movement to damping any large and sudden movements. This has strong implications for health care, in particular for rehabilitation, where a robot must work in conjunction with an individual, and might guide or assist a person's arm movements, or might be commanded to perform some set of autonomous actions. This paper presents the state of the art of rehabilitation robots, with examples from prosthetics, aids for daily living and physiotherapy. In all these situations there is the potential for the interaction to be non-passive, with the resulting potential for the human/machine/environment combination to become unstable. To understand this instability we must develop better models of the human motor system and fit these models with realistic parameters. The paper concludes with a discussion of this problem and an overview of some human models that can be used to facilitate the design of human/machine interfaces.
Abstract:
Perception of our own bodies is based on integration of visual and tactile inputs, notably by neurons in the brain’s parietal lobes. Here we report a behavioural consequence of this integration process. Simply viewing the arm can speed up reactions to an invisible tactile stimulus on the arm. We observed this visual enhancement effect only when a tactile task required spatial computation within a topographic map of the body surface and the judgements made were close to the limits of performance. This effect of viewing the body surface was absent or reversed in tasks that either did not require a spatial computation or in which judgements were well above performance limits. We consider possible mechanisms by which vision may influence tactile processing.
Abstract:
Although tactile representations of the two body sides are initially segregated into opposite hemispheres of the brain, behavioural interactions between body sides exist and can be revealed under conditions of tactile double simultaneous stimulation (DSS) at the hands. Here we examined to what extent vision can affect body side segregation in touch. To this aim, we changed hand-related visual input while participants performed a go/no-go task to detect a tactile stimulus delivered to one target finger (e.g., right index), stimulated alone or with a concurrent non-target finger either on the same hand (e.g., right middle finger) or on the other hand (e.g., left index finger = homologous; left middle finger = non-homologous). Across experiments, the two hands were visible or occluded from view (Experiment 1), images of the two hands were either merged using a morphing technique (Experiment 2), or were shown in a compatible vs incompatible position with respect to the actual posture (Experiment 3). Overall, the results showed reliable interference effects of DSS, as compared to target-only stimulation. This interference varied as a function of which non-target finger was stimulated, and emerged both within and between hands. These results imply that the competition between tactile events is not clearly segregated across body sides. Crucially, non-informative vision of the hand affected overall tactile performance only when a visual/proprioceptive conflict was present, while neither congruent nor morphed hand vision affected tactile DSS interference. This suggests that DSS operates at a tactile processing stage in which interactions between body sides can occur regardless of the available visual input from the body.
Abstract:
We studied the effect of tactile double simultaneous stimulation (DSS) within and between hands to examine spatial coding of touch at the fingers. Participants performed a go/no-go task to detect a tactile stimulus delivered to one target finger (e.g., right index), stimulated alone or with a concurrent non-target finger, either on the same hand (e.g., right middle finger) or on the other hand (e.g., left index finger = homologous; left middle finger = non-homologous). Across blocks we also changed the unseen hands' posture (both hands palm down, or one hand rotated palm up). When both hands were palm down, DSS interference effects emerged both within and between hands, but only when the non-homologous finger served as non-target. This suggests a clear segregation between the fingers of each hand, regardless of finger side. By contrast, when one hand was palm up, interference effects emerged only within hand, whereas between-hands DSS interference was considerably reduced or absent. Thus, between-hands interference was clearly affected by changes in hand posture. Taken together, these findings provide behavioral evidence in humans for multiple spatial coding of touch during tactile DSS at the fingers. In particular, they confirm the existence of representational stages of touch that distinguish between body regions more than body sides. Moreover, they show that the side of tactile stimulation becomes prominent when postural updating is required.
Abstract:
Changes in the cultures and spaces of death during the Victorian era reveal the shifting conceptualisations and mobilisations of class in this period. Using the example of Brookwood Necropolis, established 1852 in response to the contemporary burial reform debate, the paper explores tensions within the sanitary reform movement, 1853–1903. Whilst reformist ideology grounded the cemetery's practices in a discourse of inclusion, one of the consequences of reform was to reinforce class distinctions. Combined with commercial imperatives and the modern impulse towards separation of living and dead, this aspect of reform enacted a counter-discourse of alienation. The presence of these conflicting strands in the spaces and practices of the Necropolis and their changes during the time period reflect wider urban trends.
Abstract:
We show that the affective experience of touch and the sight of touch can be modulated by cognition, and investigate in an fMRI study where top-down cognitive modulations of bottom-up somatosensory and visual processing of touch and its affective value occur in the human brain. The cognitive modulation was produced by word labels, 'Rich moisturizing cream' or 'Basic cream', while cream was being applied to the forearm, or was seen being applied to a forearm. The subjective pleasantness and richness were modulated by the word labels, as were the fMRI activations to touch in parietal cortex area 7, the insula and ventral striatum. The cognitive labels influenced the activations to the sight of touch and also the correlations with pleasantness in the pregenual cingulate/orbitofrontal cortex and ventral striatum. Further evidence of how the orbitofrontal cortex is involved in affective aspects of touch was that touch to the forearm [which has C-fiber touch (CT) afferents sensitive to light touch] compared with touch to the glabrous skin of the hand (which does not) revealed activation in the mid-orbitofrontal cortex. This is of interest because previous studies have suggested that the CT system is important in affiliative caress-like touch between individuals.
Abstract:
Prospective measurement of nutrition, cognition, and physical activity in later life would facilitate early detection of detrimental change and early intervention, but is hard to achieve in community settings. Technology can simplify the task and facilitate daily data collection. The Novel Assessment of Nutrition and Ageing (NANA) toolkit was developed to provide a holistic picture of an individual's function, including diet, cognition and activity levels. This study aimed to validate the NANA toolkit for data collection in the community. Forty participants aged 65 years and over trialled the NANA toolkit in their homes for three 7-day periods at four-week intervals. Data collected using the NANA toolkit were compared with standard measures of diet (four-day food diary), cognitive ability (processing speed) and physical activity (self-report). Bland–Altman analysis of dietary intake (energy, carbohydrates, protein, fat) showed good agreement with the food diary, and cognitive processing speed and physical activity (hours) were significantly correlated with their standard counterparts. The NANA toolkit enables daily reporting of data that would otherwise be collected sporadically while reducing demands on participants; older adults can complete the daily reporting at home without a researcher being present; and it enables prospective investigation of several domains at once.
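The Bland–Altman analysis mentioned above assesses agreement between two measurement methods by computing the mean difference (bias) and the 95% limits of agreement. A minimal sketch of that computation; the paired energy-intake values below are illustrative, not the study's data:

```python
import statistics

def bland_altman(method_a, method_b):
    """Bland-Altman agreement statistics for two paired measurement methods.

    Returns the mean difference (bias) and the 95% limits of agreement,
    i.e. bias +/- 1.96 standard deviations of the paired differences.
    """
    diffs = [a - b for a, b in zip(method_a, method_b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Illustrative paired daily energy-intake estimates (kcal), not study data:
nana_toolkit = [1850, 2100, 1720, 1990, 2250]
food_diary = [1800, 2150, 1700, 2050, 2200]
bias, (lower, upper) = bland_altman(nana_toolkit, food_diary)
print(f"bias = {bias:.1f} kcal, limits of agreement = ({lower:.1f}, {upper:.1f})")
```

A small bias with narrow limits of agreement relative to the measurement scale is what "good agreement" between the toolkit and the food diary would look like on a Bland–Altman plot.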
Abstract:
Recent work in animals suggests that the extent of early tactile stimulation of offspring by parents is an important element of early caregiving. We evaluate the psychometric properties of a new parent-report measure designed to assess the frequency of tactile stimulation across multiple caregiving domains in infancy. We describe the full item set of the Parent-Infant Caregiving Touch Scale (PICTS) and, using data from a UK longitudinal Child Health and Development Study, its response frequencies and factor structure, and examine whether it was invariant over two time points in early development (5 and 9 weeks). When their infant was 9 weeks old, 838 mothers responded on the PICTS, while a stratified subsample of 268 mothers completed the PICTS at an earlier 5-week assessment (229 responded on both occasions). Three PICTS factors were identified, reflecting stroking, holding and affective communication. These were moderately to strongly correlated at each of the two time points of interest and were unrelated to, and therefore distinct from, a traditional measure of maternal sensitivity at 7 months. A wholly stable psychometry over the 5- and 9-week assessments was not identified, which suggests that behavior profiles differ slightly for younger and older infants. Tests of measurement invariance demonstrated that all three factors are characterized by full configural and metric invariance, as well as moderate evidence of scalar invariance for the stroking factor. We propose the PICTS as a valuable new measure of important aspects of caregiving in infancy.
Abstract:
The aim of this study was to investigate whether a telemetry test battery can be used to measure the effects of Parkinson's disease (PD) treatment intervention and disease progression in patients with fluctuations. Sixty-five patients diagnosed with advanced PD were recruited in an open longitudinal 36-month study; 35 were treated with levodopa-carbidopa intestinal gel (LCIG) and 30 were candidates for switching from oral PD treatment to LCIG. They used a test battery, consisting of self-assessments of symptoms and fine motor tests (tapping and spiral drawings), four times per day in their homes during week-long test periods. The repeated measurements were summarized into an overall test score (OTS) to represent the global condition of the patient during a test period. Clinical assessments included ratings on the Unified PD Rating Scale (UPDRS) and the 39-item PD Questionnaire (PDQ-39). In LCIG-naïve patients, mean OTS compared to baseline was significantly improved from the first test period on LCIG treatment until month 24. In LCIG-non-naïve patients, there were no significant changes in mean OTS until month 36. The OTS correlated adequately with total UPDRS (rho = 0.59) and total PDQ-39 (0.59). Responsiveness, measured as effect size, was 0.696 for OTS and 0.536 for total UPDRS. The trends of the test scores were similar to those of the clinical rating scores, but the dropout rate was high. Correlations between OTS and the clinical rating scales were adequate, indicating that the test battery captures important elements of the information in well-established scales. Responsiveness and reproducibility were better for OTS than for total UPDRS.
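The abstract reports responsiveness as an effect size but does not state which formula was used. One common choice for longitudinal scores like the OTS is the standardized response mean (mean change divided by the standard deviation of the change scores); the sketch below uses that definition with made-up baseline and follow-up scores, purely for illustration:

```python
import statistics

def standardized_response_mean(baseline, follow_up):
    """Responsiveness as an effect size: the standardized response mean,
    i.e. mean change divided by the standard deviation of change scores.

    This is one common definition; the study may have used another
    effect-size formula (e.g. Cohen's d against baseline SD).
    """
    changes = [f - b for b, f in zip(baseline, follow_up)]
    return statistics.mean(changes) / statistics.stdev(changes)

# Made-up overall test scores before and after an intervention:
baseline_scores = [42.0, 38.5, 51.0, 45.5]
followup_scores = [33.0, 30.5, 40.0, 37.5]
srm = standardized_response_mean(baseline_scores, followup_scores)
print(f"standardized response mean = {srm:.3f}")
```

By convention, absolute effect sizes around 0.5 are considered moderate and around 0.8 large, which is how values such as 0.696 (OTS) versus 0.536 (UPDRS) are compared.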
Abstract:
In many creative and technical areas, professionals use paper sketches to develop and express concepts and models. Paper offers an almost constraint-free environment in which they have as much freedom to express themselves as they need. However, paper does have some disadvantages, such as its size and the inability to manipulate content (other than removing or scratching it out), which can be overcome by creating systems that offer the same freedom as paper with none of the disadvantages and limitations. Only in recent years has the technology that allows precisely that become widely available, with the development of touch-sensitive screens that can also interact with a stylus. In this project a prototype was created with the objective of finding a set of the most useful and usable interactions, composed of combinations of multi-touch and pen. The project selected Computer Aided Software Engineering (CASE) tools as its application domain, because it addresses a solid and well-defined discipline that still has sufficient room for new developments. This choice resulted from research conducted to find an application domain, which involved analyzing sketching tools from several possible areas and domains. User studies were conducted using Model Driven Inquiry (MDI) to gain a better understanding of human sketch-creation activities and the concepts devised. The prototype was then implemented, through which it was possible to run user evaluations of the interaction concepts created. Results validated most interactions, although only limited testing was possible at the time. Users had more problems using the pen; however, handwriting and ink recognition were very effective, and users quickly learned the manipulations and gestures of the Natural User Interface (NUI).
Abstract:
This thesis argues for the possibility of supporting deictic gestures through handheld multi-touch devices in remote presentation scenarios. In [1], Clark distinguishes the indicative techniques of placing-for and directing-to, where placing-for refers to placing a referent into the addressee's attention, and directing-to refers to directing the addressee's attention towards a referent. Keynote, PowerPoint, FuzeMeeting and others support placing-for efficiently with slide transitions and animations, but provide little to no support for directing-to. The traditional "pointing feature" present in some presentation tools comes as a virtual laser pointer or mouse cursor. [12, 13] have shown that the mouse cursor and laser pointer offer very little informational expressiveness and do not do justice to human communicative gestures. In this project, a prototype application was implemented for the iPad in order to explore, develop, and test the concept of pointing in remote presentations. The prototype offers visualization and navigation of the slides as well as "pointing" and zooming. To further investigate the problem and possible solutions, a theoretical framework was designed representing the relationships between the presenter's intention and gesture and the resulting visual effect (cursor) that enables audience members to interpret the meaning of the effect and the presenter's intention. Two studies were performed to investigate people's appreciation of different ways of presenting remotely: an initial qualitative study at The Hague, followed by an online quantitative user experiment. The results indicate that subjects found pointing helpful for understanding and concentrating, while the detached video feed of the presenter was considered distracting. The positive qualities of having the video feed were the emotion and social presence it adds to presentations.
For a number of subjects, pointing displayed some of the same social and personal qualities [2] that video affords, though less intensely. The combination of pointing and video proved the most successful, with 10 of 19 subjects scoring it highest, while pointing alone came a close second at 8 of 19. Video was the least preferred, with only one subject favouring it. We suggest that the research performed here could provide a basis for future work and could be applied in a variety of distributed collaborative settings.
Abstract:
This project aimed to create a communication and interaction channel between Madeira Airport and its passengers. We used the pre-existing touch-enabled screens at the terminal, since their potential was not being used to full capacity. To achieve our goal, we followed an agile strategy to create a testable prototype and take advantage of its results. The developed prototype is based on a plugin architecture, making it a maintainable and highly customisable system. The collected usage data suggest that we achieved the initially defined goals. This new interaction channel is a clear improvement to the provided services and, supported by the usage data, there is an opportunity to explore further developments of the channel.
Abstract:
With the increasing use of medical imaging in forensics, as well as the technological advances in rapid prototyping, we suggest combining these techniques to generate displays of forensic findings. We used computed tomography (CT), CT angiography, magnetic resonance imaging (MRI) and surface scanning with photogrammetry in conjunction with segmentation techniques to generate 3D polygon meshes. Based on these data sets, a 3D printer created colored models of the anatomical structures. Using this technique, we could create models of bone fractures, vessels, cardiac infarctions, ruptured organs as well as bitemark wounds. The final models are anatomically accurate, fully colored representations of bones, vessels and soft tissue, and they demonstrate radiologically visible pathologies. The models are more easily understood by laypersons than volume rendering or 2D reconstructions. Therefore, they are suitable for presentations in courtrooms and for educational purposes.