201 results for Haptic
Abstract:
PURPOSE: To compare visual outcomes, rotational stability, and centration in a randomized controlled trial in patients undergoing cataract surgery who were bilaterally implanted with two different trifocal intraocular lenses (IOLs) with a similar optical zone but different haptic shape. METHODS: Twenty-one patients (42 eyes) with cataract and less than 1.50 D of corneal astigmatism underwent implantation of one FineVision/MicroF IOL in one eye and one POD FineVision IOL in the contralateral eye (PhysIOL, Liège, Belgium) at IOA Madrid Innova Ocular, Madrid, Spain. IOL allocation was random. Outcome measures, all evaluated 3 months postoperatively, included monocular and binocular uncorrected distance (UDVA), corrected distance (CDVA), distance-corrected intermediate (DCIVA), and near (DCNVA) visual acuity (at 80, 40, and 25 cm) under photopic conditions, refraction, IOL centration, haptic rotation, dysphotopsia, objective quality of vision and aberration quantification, patient satisfaction, and spectacle independence. RESULTS: Three months postoperatively, mean monocular UDVA, CDVA, DCIVA, and DCNVA (40 cm) under photopic conditions were 0.04 ± 0.07, 0.01 ± 0.04, 0.15 ± 0.11, and 0.16 ± 0.08 logMAR for the eyes implanted with the POD FineVision IOL and 0.03 ± 0.05, 0.01 ± 0.02, 0.17 ± 0.12, and 0.14 ± 0.08 logMAR for those receiving the FineVision/MicroF IOL. Moreover, the POD FineVision IOL showed centration comparable to that of the FineVision/MicroF IOL (P > .05) and better rotational stability (P < .05). Regarding halos, there was a minimal but statistically significant difference favouring the FineVision/MicroF IOL. Full spectacle independence was reported by all patients. CONCLUSIONS: This study revealed similar visual outcomes for both trifocal IOLs under test (POD FineVision and FineVision/MicroF). However, the POD FineVision IOL showed better rotational stability, as afforded by its design.
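The contralateral-eye design lends itself to a paired, within-patient comparison. Below is a minimal sketch (not the authors' analysis) of how 3-month logMAR outcomes from fellow eyes could be compared with a non-parametric paired test; the data and variable names are placeholders.

```python
# Sketch of a paired contralateral-eye comparison of 3-month logMAR outcomes.
# The data are made-up placeholders, not values from the study.
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)
n_patients = 21

# Hypothetical uncorrected distance visual acuity (logMAR), one eye per IOL.
udva_pod = rng.normal(0.04, 0.07, n_patients)      # POD FineVision eyes
udva_microf = rng.normal(0.03, 0.05, n_patients)   # FineVision/MicroF eyes

# Within-patient comparison; the Wilcoxon signed-rank test avoids assuming normality.
stat, p_value = wilcoxon(udva_pod, udva_microf)
print(f"Wilcoxon statistic = {stat:.1f}, p = {p_value:.3f}")
```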
Abstract:
PURPOSE: To evaluate and compare the visual, refractive, contrast sensitivity, and aberrometric outcomes with a diffractive bifocal and trifocal intraocular lens (IOL) of the same material and haptic design. METHODS: Sixty eyes of 30 patients undergoing bilateral cataract surgery were enrolled and randomly assigned to one of two groups: the bifocal group, including 30 eyes implanted with the bifocal diffractive IOL AT LISA 801 (Carl Zeiss Meditec, Jena, Germany), and the trifocal group, including the eyes implanted with the trifocal diffractive IOL AT LISA tri 839 MP (Carl Zeiss Meditec). Analyses of visual and refractive outcomes, contrast sensitivity, ocular aberrations (OPD-Scan III; Nidek, Inc., Gamagori, Japan), and the defocus curve were performed during a 3-month follow-up period. RESULTS: No statistically significant differences between groups were found in 3-month postoperative uncorrected and corrected distance visual acuity (P > .21). However, uncorrected, corrected, and distance-corrected near and intermediate visual acuities were significantly better in the trifocal group (P < .01). No significant differences between groups were found in postoperative spherical equivalent (P = .22). In the binocular defocus curve, the visual acuity was significantly better for defocus of -0.50 to -1.50 diopters in the trifocal group (P < .04) and -3.50 to -4.00 diopters in the bifocal group (P < .03). No statistically significant differences were found between groups in most of the postoperative corneal, internal, and ocular aberrations (P > .31), and in contrast sensitivity for most frequencies analyzed (P > .15). CONCLUSIONS: Trifocal diffractive IOLs provide significantly better intermediate vision over bifocal IOLs, with equivalent postoperative levels of visual and ocular optical quality.
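As a point of reference for the defocus-curve analysis mentioned above, the sketch below plots binocular defocus curves for two groups; the acuity values are invented placeholders, not the study's results.

```python
# Sketch: plotting binocular defocus curves for a bifocal and a trifocal group.
import numpy as np
import matplotlib.pyplot as plt

defocus = np.arange(0.0, -4.5, -0.5)            # defocus levels in diopters
# Placeholder mean logMAR values, not the study's data.
bifocal = np.array([0.00, 0.06, 0.15, 0.25, 0.30, 0.28, 0.20, 0.12, 0.10])
trifocal = np.array([0.00, 0.04, 0.08, 0.12, 0.15, 0.18, 0.22, 0.28, 0.32])

plt.plot(defocus, bifocal, "o-", label="bifocal group")
plt.plot(defocus, trifocal, "s-", label="trifocal group")
plt.gca().invert_yaxis()                        # lower logMAR = better acuity
plt.xlabel("Defocus (D)")
plt.ylabel("Visual acuity (logMAR)")
plt.legend()
plt.title("Binocular defocus curves (illustrative data)")
plt.show()
```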
Abstract:
Luminance changes within a scene are ambiguous; they can indicate reflectance changes, shadows, or shading due to surface undulations. How does vision distinguish between these possibilities? When a surface painted with an albedo texture is shaded, the change in local mean luminance (LM) is accompanied by a similar modulation of the local luminance amplitude (AM) of the texture. This relationship does not necessarily hold for reflectance changes or for shading of a relief texture. Here we concentrate on the role of AM in shape-from-shading. Observers were presented with a noise texture onto which sinusoidal LM and AM signals were superimposed, and were asked to indicate which of two marked locations was closer to them. Shape-from-shading was enhanced when LM and AM co-varied (in-phase), and was disrupted when they were out-of-phase. The perceptual differences between cue types (in-phase vs out-of-phase) were enhanced when the two cues were present at different orientations within a single image. Similar results were found with a haptic matching task. We conclude that vision can use AM to disambiguate luminance changes. LM and AM have a positive relationship for rendered, undulating, albedo textures, and we assess the degree to which this relationship holds in natural images. [Supported by EPSRC grants to AJS and MAG].
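To make the stimulus manipulation concrete, here is a minimal sketch of how a noise carrier with superimposed sinusoidal LM and AM components might be generated; the image size, modulation depths, and spatial frequency are illustrative assumptions, not the study's parameters.

```python
# Sketch: binary noise carrier with sinusoidal luminance (LM) and
# amplitude (AM) modulation added in-phase or out-of-phase.
import numpy as np

size = 256                  # image size in pixels (assumed)
cycles = 4                  # modulation cycles across the image (assumed)
m_lm, m_am = 0.2, 0.5       # modulation depths (assumed)
phase = 0.0                 # 0.0 for in-phase LM/AM, np.pi for anti-phase

y = np.arange(size) / size
lm = 1.0 + m_lm * np.sin(2 * np.pi * cycles * y)[:, None]           # local mean luminance
am = 1.0 + m_am * np.sin(2 * np.pi * cycles * y + phase)[:, None]   # local texture amplitude

rng = np.random.default_rng(1)
carrier = rng.choice([-1.0, 1.0], size=(size, size))                # binary noise texture

image = np.clip(0.5 * lm + 0.25 * carrier * am, 0.0, 1.0)           # values in [0, 1]
```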
Abstract:
Spatial generalization skills in school children aged 8-16 were studied with regard to unfamiliar objects that had been previously learned in a cross-modal priming and learning paradigm. We observed a developmental dissociation with younger children recognizing objects only from previously learnt perspectives whereas older children generalized acquired object knowledge to new viewpoints as well. Haptic and - to a lesser extent - visual priming improved spatial generalization in all but the youngest children. The data supports the idea of dissociable, view-dependent and view-invariant object representations with different developmental trajectories that are subject to modulatory effects of priming. Late-developing areas in the parietal or the prefrontal cortex may account for the retarded onset of view-invariant object recognition. © 2006 Elsevier B.V. All rights reserved.
Abstract:
The human visual system is sensitive to second-order modulations of the local contrast (CM) or amplitude (AM) of a carrier signal. Second-order cues are detected independently of first-order luminance signals; however, it is not clear why vision should benefit from second-order sensitivity. Analysis of the first- and second-order contents of natural images suggests that these cues tend to occur together, but their phase relationship varies. We have shown that in-phase combinations of LM and AM are perceived as a shaded corrugated surface, whereas the anti-phase combination can be seen as corrugated when presented alone or as a flat material change when presented in a plaid containing the in-phase cue. We now extend these findings using new stimulus types and a novel haptic matching task. We also introduce a computational model based on initially separate first- and second-order channels that are combined within orientation and subsequently across orientation to produce a shading signal. Contrast gain control allows the LM + AM cue to suppress responses to the LM − AM cue when presented in a plaid. Thus, the model sees LM − AM as flat in these circumstances. We conclude that second-order vision plays a key role in disambiguating the origin of luminance changes within an image. © ARVO.
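A crude sketch of the kind of first-/second-order channel combination described above is given below; the filtering choices and the gain-control rule are illustrative assumptions, not the published model.

```python
# Sketch of a filter-rectify-filter style combination of first-order (LM)
# and second-order (AM) responses with a simple gain-control rule.
import numpy as np
from scipy.ndimage import gaussian_filter

def shading_response(image, carrier_sigma=2.0, envelope_sigma=12.0, eps=1e-6):
    """Crude LM/AM channel combination (illustrative only)."""
    # First-order (LM) channel: smoothed luminance relative to the mean.
    lm = gaussian_filter(image, envelope_sigma) - image.mean()

    # Second-order (AM) channel: rectify the high-pass carrier, then pool.
    carrier = image - gaussian_filter(image, carrier_sigma)
    am = gaussian_filter(np.abs(carrier), envelope_sigma)
    am = am - am.mean()

    # "Gain control": responses where LM and AM agree in sign are kept,
    # responses where they disagree are suppressed (seen as flat).
    combined = lm * (1.0 + np.sign(lm) * np.sign(am)) / 2.0
    return combined / (np.abs(combined).max() + eps)

demo = shading_response(np.random.default_rng(6).random((128, 128)))
print(demo.shape, float(demo.max()))
```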
Abstract:
The objective of this thesis is to report the behaviour of mammalian cells with biocompatible synthetic polymers with potential for applications in the human body. Composite hydrogel materials were tested as possible keratoprosthetic devices. It was found that surface topography is an important consideration: pores, channels, and fibres exposed on the surface of the hydrogels tested can have significant effects on the extent of cell adhesion and proliferation. It is recommended that the core component is fabricated from one of the following to provide a non-cell-adhesive base: A8, A11, A13, A22, A23. A haptic periphery fabricated from one of the following would provide a cell-adhesive composite: A16, A30, A33, A37, A38, A42, A43, A44. The presence of vitronectin in the ocular tissue appears to lead to higher cell adhesion to the posterior surface of a contact lens when compared with the anterior surface. Group IV contact lenses adhere more cells than Group II contact lenses; this may indicate that more protein (including vitronectin) is able to adhere to the contact lens due to the Group IV lenses' high water content and ionic hydrogel matrix. Artificial lung surfactant analogues were found to be non-cytotoxic but also decreased cell proliferation when tested at higher concentrations. Poly(lysine ethyl ester adipamide) [PLETESA] had the most favourable effect on cell proliferation and commercial styrene/maleic anhydride (pMA/STY sp2) the most pronounced inhibitory effect. The mode of action that decreases cell proliferation appears to be membrane destabilization. Tissue culture well plates coated with PLETESA allowed cells to adhere in a concentration-dependent manner; multilaminar liposomes, possibly of PLETESA, were observed in solution in PLETESA-coated wells. Polyhydroxybutyrate (PHB) and polyhydroxyvalerate (PHV) blends that contained hydroxyapatite were found to be the most cell-adhesive of the materials tested. The blends that were most susceptible to degradation adhered the most cells in the initial stages of degradation; this slight initial increase in cell adhesion may be due to the increased rugosity of the material. As the degradation continued, the number of cells adhering to the samples decreased, which may indicate that the polarity was inhibitory to cell adhesion during the later stages of degradation.
Abstract:
Accommodating intraocular lenses (IOLs), multifocal IOLs (MIOLs), and toric IOLs are designed to provide a greater level of spectacle independence after cataract surgery. All of these IOLs rely on accurate calculation of intraocular lens power, which in turn depends on reliable ocular biometry. A standardised defocus area metric and a reading performance index metric were devised for evaluating the range of focus and the reading ability of subjects implanted with presbyopia-correcting IOLs. The range of clear vision after implantation of an MIOL is extended by a second focal point; however, this results in the prevalence of dysphotopsia. A bespoke halometer was designed and validated to assess this photopic phenomenon. There is a lack of standardisation in the methods used for determining IOL orientation and thus rotation. A repeatable, objective method was developed to allow the accurate assessment of IOL rotation, which was used to determine the rotational and positional stability of a closed-loop haptic IOL. A new commercially available biometry device was validated for use with subjects prior to cataract surgery. The optical low coherence reflectometry instrument proved to be a valid method for assessing ocular biometry and covered a wider range of ocular parameters than previous instruments. The advantages of MIOLs were shown to include an extended range of clear vision, translating into greater reading ability. However, an increased prevalence of dysphotopsia was shown with the bespoke halometer, and this depended on the MIOL optic design. Implantation of a single-optic accommodating IOL did not improve reading ability but achieved high subjective ratings of near vision. The closed-loop haptic IOL displayed excellent rotational stability in the late period but relatively poor rotational stability in the early period after implantation. The orientation error was compounded by a high frequency of positional misalignment, leading to extensive overall misalignment of the IOL. This thesis demonstrates the functionality of new IOL designs and the importance of standardised testing methods, thus providing a greater understanding of the consequences of implanting these IOLs. Consequently, the findings of the thesis will influence future IOL designs and testing methods.
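The standardised defocus area metric mentioned above amounts to integrating acuity over a defocus range. The sketch below shows one way such an area could be computed; the defocus levels, acuity values, and 0.3 logMAR cutoff are assumptions for illustration only.

```python
# Sketch: area under a defocus curve as a single range-of-focus metric.
import numpy as np

# Hypothetical defocus curve: defocus (D) versus visual acuity (logMAR).
defocus = np.array([0.0, -0.5, -1.0, -1.5, -2.0, -2.5, -3.0])
logmar = np.array([0.00, 0.05, 0.12, 0.20, 0.22, 0.18, 0.15])

# Score acuity relative to an assumed 0.3 logMAR cutoff so that
# better-than-cutoff vision contributes positive area.
cutoff = 0.3
score = np.clip(cutoff - logmar, 0.0, None)

# Trapezoidal area over the defocus range (units: logMAR x diopters).
widths = np.diff(-defocus)
area = float(np.sum(0.5 * (score[1:] + score[:-1]) * widths))
print(f"Defocus area metric: {area:.3f}")
```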
Abstract:
Spatial objects may not only be perceived visually but also by touch. We report recent experiments investigating to what extent prior object knowledge acquired in either the haptic or visual sensory modality transfers to a subsequent visual learning task. Results indicate that even mental object representations learnt in one sensory modality may attain a multi-modal quality. These findings seem incompatible with picture-based reasoning schemas but leave open the possibility of modality-specific reasoning mechanisms.
Abstract:
There are around 285 million visually impaired people worldwide, and around 370,000 people are registered as blind or partially sighted in the UK. Ongoing advances in information technology (IT) are increasing the scope for IT-based mobile assistive technologies to facilitate the independence, safety, and improved quality of life of the visually impaired. Research is being directed at making mobile phones and other handheld devices accessible via our haptic (touch) and audio sensory channels. We review research and innovation within the field of mobile assistive technology for the visually impaired and, in so doing, highlight the need for successful collaboration between clinical expertise, computer science, and domain users to realize fully the potential benefits of such technologies. We initially reflect on research that has been conducted to make mobile phones more accessible to people with vision loss. We then discuss innovative assistive applications designed for the visually impaired that are either delivered via mainstream devices and can be used while in motion (e.g., mobile phones) or are embedded within an environment that may be in motion (e.g., public transport) or within which the user may be in motion (e.g., smart homes). © 2013 Elsevier Inc.
Abstract:
Data visualization is widely used to facilitate the comprehension of information and to find relationships between data. One of the most widely used techniques for visualizing multivariate data (4 or more variables) is the 2D scatterplot. This technique associates each data item with a visual mark in the following way: two variables are mapped to Cartesian coordinates so that a visual mark can be placed on the Cartesian plane; the other variables are mapped gradually to visual properties of the mark, such as size, color, and shape, among others. As the number of variables to be visualized increases, the number of visual properties associated with the mark increases as well, and so does the complexity of the final visualization. However, increasing the complexity of the visualization does not necessarily imply a better visualization; sometimes the opposite occurs, producing a visually polluted and confusing visualization. This problem is called visual properties overload. This work aims to investigate whether it is possible to work around the overload of the visual channel and improve insight about multivariate data visualized through a modification of the 2D scatterplot technique. In this modification, we map the variables of data items to multisensory marks. These marks are composed not only of visual properties but also of haptic properties, such as vibration, viscosity, and elastic resistance. We believed that this approach could ease the insight process through the transposition of properties from the visual channel to the haptic channel. The hypothesis was verified through experiments in which we analyzed (a) the accuracy of the answers, (b) response time, and (c) the degree of personal satisfaction with the proposed approach. However, the hypothesis was not validated. The results suggest that there is an equivalence between the investigated visual and haptic properties in all analyzed aspects, though in strictly numeric terms the multisensory visualization achieved better results in response time and personal satisfaction.
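For reference, the baseline 2D scatterplot mapping described above (two variables to position, the rest to visual properties of the mark) can be sketched as follows; the dataset and property assignments are illustrative, and the haptic properties studied in the thesis have no direct analogue here.

```python
# Sketch: mapping four variables of each data item to a 2D scatterplot mark.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(2)
data = rng.random((50, 4))          # 50 items x 4 variables (placeholder data)

x, y, size_var, color_var = data.T
plt.scatter(x, y,
            s=30 + 170 * size_var,  # third variable -> mark size
            c=color_var,            # fourth variable -> mark color
            cmap="viridis", edgecolor="k")
plt.xlabel("variable 1")
plt.ylabel("variable 2")
plt.colorbar(label="variable 4")
plt.title("2D scatterplot with size and color encoding")
plt.show()
```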
Abstract:
Loss of limb results in loss of function and a partial loss of freedom. A powered prosthetic device can partially assist an individual with everyday tasks and therefore return some level of independence. Powered upper limb prostheses are often controlled by the user generating surface electromyographic (SEMG) signals. The goal of this thesis is to develop a virtual environment in which a user can control a virtual hand to safely grasp representations of everyday objects using EMG signals from his/her forearm muscles, and experience visual and vibrotactile feedback relevant to the grasping force in the process. This can then be used to train potential wearers of real EMG controlled prostheses, with or without vibrotactile feedback. To test this system an experiment was designed and executed involving ten subjects, twelve objects, and three feedback conditions. The tested feedback conditions were visual, vibrotactile, and both visual and vibrotactile. In each experimental exercise the subject attempted to grasp a virtual object on the screen using the virtual hand controlled by EMG electrodes placed on his/her forearm. Two metrics were used: score, and time to task completion, where score measured grasp dexterity. It was hypothesized that with the introduction of vibrotactile feedback, dexterity, and therefore score, would improve and time to task completion would decrease. Results showed that time to task completion increased, and score did not improve with vibrotactile feedback. Details on the developed system, the experiment, and the results are presented in this thesis.
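As a rough illustration of the control loop described, the sketch below turns a synthetic surface EMG trace into a smoothed envelope and maps it to a grasp-force command and a vibrotactile amplitude; the signal, filter settings, and mapping are assumptions, not the thesis implementation.

```python
# Sketch: rectify and low-pass an SEMG signal, then map the envelope to
# a grasp-force command and a vibrotactile feedback amplitude.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 1000.0                                # sampling rate in Hz (assumed)
t = np.arange(0, 2.0, 1.0 / fs)
rng = np.random.default_rng(3)
emg = rng.normal(0, 0.1, t.size) * (1 + np.sin(2 * np.pi * 0.5 * t))  # synthetic SEMG

# Envelope: full-wave rectification followed by a 5 Hz low-pass filter.
b, a = butter(2, 5.0 / (fs / 2), btype="low")
envelope = filtfilt(b, a, np.abs(emg))

# Map the envelope linearly to a normalized grasp force and vibration amplitude.
grasp_force = np.clip(envelope / envelope.max(), 0.0, 1.0)
vibration_amplitude = grasp_force          # simple proportional feedback
print(f"Peak commanded grasp force: {grasp_force.max():.2f}")
```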
Abstract:
This thesis presents the design, control, and experimental validation of a haptic compass intended to guide visually impaired users in any environment. The literature review establishes the need for haptic guidance and situates this technology within the current market. The proposed compass relies on the principle of asymmetric torques. Its design is based on a direct-drive motor architecture and a pre-calibrated open-loop controller, which makes it possible to reach a wide range of frequencies for the haptic feedback. The mechanical properties of the assembly are evaluated, and the torque calibration ensures that the open-loop controller produces torques with sufficient accuracy. A first user test identified that frequencies between 5 and 15 Hz, combined with torques above 40 mNm, achieve good effectiveness for the task. The next experiment demonstrates that using haptic feedback proportional to the orientation error significantly improves performance. The concept is then tested with nineteen subjects who had to navigate a route using only this haptic compass. The results show that all subjects reached every waypoint on the route while maintaining relatively small lateral deviations (0.39 m on average). The performance obtained and the users' impressions are promising and argue in favour of the device. Finally, a simplified model of an individual's behaviour in the orientation task is developed and demonstrates the importance of personalizing the device. This model is then used to highlight a scrolling-horizon strategy for placing the current intermediate target along a long-distance route.
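A minimal sketch of the error-proportional feedback idea is shown below: the torque magnitude and pulse rate of the asymmetric-torque feedback grow with the heading error, within the 5 to 15 Hz and above-40 mNm ranges reported as effective; the exact mapping is an illustrative assumption.

```python
# Sketch: map heading error to asymmetric-torque feedback parameters.
import math

def haptic_compass_command(heading_deg, target_deg,
                           torque_max_mnm=60.0, freq_range_hz=(5.0, 15.0)):
    """Return (signed torque in mNm, pulse frequency in Hz) for one update.

    Illustrative mapping only: torque magnitude and pulse frequency scale
    with the absolute orientation error, saturating at 180 degrees.
    """
    error = (target_deg - heading_deg + 180.0) % 360.0 - 180.0  # wrap to [-180, 180)
    if abs(error) < 5.0:            # small dead-band (assumed): no feedback near target
        return 0.0, 0.0
    scale = min(abs(error) / 180.0, 1.0)
    torque = math.copysign(max(40.0, torque_max_mnm * scale), error)  # keep above 40 mNm
    frequency = freq_range_hz[0] + (freq_range_hz[1] - freq_range_hz[0]) * scale
    return torque, frequency

torque, freq = haptic_compass_command(heading_deg=90.0, target_deg=120.0)
print(f"torque = {torque:.0f} mNm, pulse rate = {freq:.1f} Hz")
```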
Abstract:
Current Ambient Intelligence and Intelligent Environment research focuses on the interpretation of a subject's behaviour at the activity level by logging Activities of Daily Living (ADL) such as eating, cooking, etc. In general, the sensors employed (e.g. PIR sensors, contact sensors) provide low-resolution information. Meanwhile, the expansion of ubiquitous computing allows researchers to gather additional information from different types of sensor, which can improve activity analysis. Building on previous research on sitting posture detection, this research attempts to further analyse human sitting activity. The aim of this research is to use a non-intrusive, low-cost, pressure-sensor-embedded chair system to recognize a subject's activity from their detected postures. There are three steps in this research: the first is to find a hardware solution for low-cost sitting posture detection, the second is to find a suitable strategy for sitting posture detection, and the last is to correlate time-ordered sitting posture sequences with sitting activity. The author built a prototype sensing system called IntelliChair for sitting posture detection. Two experiments were conducted to determine the hardware architecture of the IntelliChair system; they examine sensor selection and the integration of various sensors, and indicate the best choice for a low-cost, non-intrusive system. Subsequently, this research applies signal processing theory to explore the frequency characteristics of sitting posture, in order to determine a suitable sampling rate for the IntelliChair system. For the second and third steps, ten subjects were recruited for sitting posture and sitting activity data collection. The former dataset was collected by asking subjects to perform certain pre-defined sitting postures on IntelliChair and was used for the posture recognition experiment. The latter dataset was collected by asking the subjects to perform their normal sitting activity routine on IntelliChair for four hours, and was used for the activity modelling and recognition experiment. For the posture recognition experiment, two Support Vector Machine (SVM) based classifiers were trained (one for spine postures and the other for leg postures), and their performance evaluated. A Hidden Markov Model was utilized for sitting activity modelling and recognition, in order to establish the selected sitting activities from sitting posture sequences. After experimenting with possible sensors, the Force Sensing Resistor (FSR) was selected as the pressure sensing unit for IntelliChair. Eight FSRs were mounted on the seat and back of a chair to gather haptic (i.e., touch-based) posture information. Furthermore, the research explored the possibility of using an alternative non-intrusive sensing technology (the vision-based Kinect sensor from Microsoft) and found that the Kinect sensor is not reliable for sitting posture detection due to its joint-drifting problem. A suitable sampling rate for IntelliChair, determined from the experimental results, is 6 Hz. The posture classification performance shows that the SVM-based classifier is robust to “familiar” subject data (accuracy is 99.8% for spine postures and 99.9% for leg postures). When dealing with “unfamiliar” subject data, the accuracy is 80.7% for spine posture classification and 42.3% for leg posture classification. Activity recognition achieves 41.27% accuracy among four selected activities (i.e. relax, play game, working with PC, and watching video). The results of this thesis show that individual body characteristics and sitting habits influence both sitting posture and sitting activity recognition. This suggests that IntelliChair is suitable for individual usage, but a training stage is required.
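A minimal sketch of the posture-classification step is shown below: eight FSR readings per sample are fed to an SVM, mirroring the spine/leg classifiers described above; the data are synthetic placeholders and the preprocessing is an assumption.

```python
# Sketch: SVM classification of sitting postures from 8 FSR pressure readings.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
n_samples, n_sensors, n_postures = 600, 8, 4

# Placeholder data: each posture biases a different pair of the 8 sensors.
labels = rng.integers(0, n_postures, n_samples)
X = rng.normal(0, 1, (n_samples, n_sensors))
for k in range(n_postures):
    X[labels == k, 2 * k:2 * k + 2] += 3.0

X_train, X_test, y_train, y_test = train_test_split(X, labels, test_size=0.3, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_train, y_train)
print(f"Held-out accuracy: {clf.score(X_test, y_test):.2f}")
```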
Abstract:
Users need to be able to address in-air gesture systems, which means finding where to perform gestures and how to direct them towards the intended system. This is necessary for input to be sensed correctly and without unintentionally affecting other systems. This thesis investigates novel interaction techniques which allow users to address gesture systems properly, helping them find where and how to gesture. It also investigates audio, tactile and interactive light displays for multimodal gesture feedback; these can be used by gesture systems with limited output capabilities (like mobile phones and small household controls), allowing the interaction techniques to be used by a variety of device types. It investigates tactile and interactive light displays in greater detail, as these are not as well understood as audio displays. Experiments 1 and 2 explored tactile feedback for gesture systems, comparing an ultrasound haptic display to wearable tactile displays at different body locations and investigating feedback designs. These experiments found that tactile feedback improves the user experience of gesturing by reassuring users that their movements are being sensed. Experiment 3 investigated interactive light displays for gesture systems, finding this novel display type effective for giving feedback and presenting information. It also found that interactive light feedback is enhanced by audio and tactile feedback. These feedback modalities were then used alongside audio feedback in two interaction techniques for addressing gesture systems: sensor strength feedback and rhythmic gestures. Sensor strength feedback is multimodal feedback that tells users how well they can be sensed, encouraging them to find where to gesture through active exploration. Experiment 4 found that they can do this with 51mm accuracy, with combinations of audio and interactive light feedback leading to the best performance. Rhythmic gestures are continuously repeated gesture movements which can be used to direct input. Experiment 5 investigated the usability of this technique, finding that users can match rhythmic gestures well and with ease. Finally, these interaction techniques were combined, resulting in a new single interaction for addressing gesture systems. Using this interaction, users could direct their input with rhythmic gestures while using the sensor strength feedback to find a good location for addressing the system. Experiment 6 studied the effectiveness and usability of this technique, as well as the design space for combining the two types of feedback. It found that this interaction was successful, with users matching 99.9% of rhythmic gestures, with 80mm accuracy from target points. The findings show that gesture systems could successfully use this interaction technique to allow users to address them. Novel design recommendations for using rhythmic gestures and sensor strength feedback were created, informed by the experiment findings.
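The rhythmic-gesture idea, directing input by repeating a movement at a target rhythm, can be sketched as a simple period-matching check; the frame rate, tolerance, and synthetic movement signal below are illustrative assumptions.

```python
# Sketch: decide whether a repeated gesture matches a target rhythm by
# comparing the dominant period of the movement with the target period.
import numpy as np

fs = 60.0                       # sensor frame rate in Hz (assumed)
target_period = 0.8             # seconds per repetition the system asks for
tolerance = 0.15                # accepted relative deviation (assumed)

t = np.arange(0, 5.0, 1.0 / fs)
rng = np.random.default_rng(5)
movement = np.sin(2 * np.pi * t / 0.82) + 0.2 * rng.normal(size=t.size)  # user's gesture

# Estimate the repetition period from the autocorrelation peak.
x = movement - movement.mean()
acf = np.correlate(x, x, mode="full")[x.size - 1:]
lag_min = int(0.3 * fs)                              # ignore very short lags
period = (lag_min + np.argmax(acf[lag_min:])) / fs

matched = abs(period - target_period) / target_period < tolerance
print(f"estimated period = {period:.2f} s, matched = {matched}")
```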
Abstract:
When a task must be executed in a remote or dangerous environment, teleoperation systems may be employed to extend the influence of the human operator. In the case of manipulation tasks, haptic feedback of the forces experienced by the remote (slave) system is often highly useful in improving an operator's ability to perform effectively. In many of these cases (especially teleoperation over the internet and ground-to-space teleoperation), substantial communication latency exists in the control loop and has the strong tendency to cause instability of the system. The first viable solution to this problem in the literature was based on a scattering/wave transformation from transmission line theory. This wave transformation requires the designer to select a wave impedance parameter appropriate to the teleoperation system. It is widely recognized that a small value of wave impedance is well suited to free motion and a large value is preferable for contact tasks. Beyond this basic observation, however, very little guidance exists in the literature regarding the selection of an appropriate value. Moreover, prior research on impedance selection generally fails to account for the fact that in any realistic contact task there will simultaneously exist contact considerations (perpendicular to the surface of contact) and quasi-free-motion considerations (parallel to the surface of contact). The primary contribution of the present work is to introduce an approximate linearized optimum for the choice of wave impedance and to apply this quasi-optimal choice to the Cartesian reality of such a contact task, in which it cannot be expected that a given joint will be either perfectly normal to or perfectly parallel to the motion constraint. The proposed scheme selects a wave impedance matrix that is appropriate to the conditions encountered by the manipulator. This choice may be implemented as a static wave impedance value or as a time-varying choice updated according to the instantaneous conditions encountered. A Lyapunov-like analysis is presented demonstrating that time variation in wave impedance will not violate the passivity of the system. Experimental trials, both in simulation and on a haptic feedback device, are presented validating the technique. Consideration is also given to the case of an uncertain environment, in which an a priori impedance choice may not be possible.
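For reference, the scattering/wave transformation underlying this work maps velocity and force into wave variables parameterized by the wave impedance; the sketch below shows the standard per-axis transformation with a diagonal impedance matrix chosen large on the contact-normal axis and small on the tangential axis. The values are illustrative, and this is not the quasi-optimal selection derived in the thesis.

```python
# Sketch: standard wave transformation for a delayed teleoperation channel,
# with a diagonal wave impedance for a two-axis contact task.
import numpy as np

def to_wave(velocity, force, b_diag):
    """Per-axis forward/backward wave variables u, v for diagonal impedance b_diag."""
    denom = np.sqrt(2.0 * b_diag)
    u = (b_diag * velocity + force) / denom   # wave sent towards the slave
    v = (b_diag * velocity - force) / denom   # wave returned towards the master
    return u, v

# Large impedance on the axis normal to the contact surface, small impedance
# on the tangential (quasi-free-motion) axis; the values are illustrative only.
b_diag = np.array([200.0, 5.0])      # wave impedance per axis [N*s/m]
velocity = np.array([0.01, 0.10])    # m/s  (mostly tangential motion)
force = np.array([15.0, 0.5])        # N    (mostly normal contact force)

u, v = to_wave(velocity, force, b_diag)
print("forward wave u =", np.round(u, 3))
print("backward wave v =", np.round(v, 3))
```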