189 results for visual function impairment
Abstract:
Objective--To determine whether heart failure with preserved systolic function (HFPSF) has a different natural history from left ventricular systolic dysfunction (LVSD). Design and setting--A retrospective analysis of 10 years of data (for patients admitted between 1 July 1994 and 30 June 2004, and with a study census date of 30 June 2005) routinely collected as part of clinical practice in a large tertiary referral hospital. Main outcome measures--Sociodemographic characteristics, diagnostic features, comorbid conditions, pharmacotherapies, readmission rates and survival. Results--Of the 2961 patients admitted with chronic heart failure, 753 had echocardiograms available for this analysis. Of these, 189 (25%) had normal left ventricular size and systolic function. In comparison with patients with LVSD, those with HFPSF were more often female (62.4% v 38.5%; P = 0.001), had less social support, were more likely to live in nursing homes (17.9% v 7.6%; P < 0.001), and had a greater prevalence of renal impairment (86.7% v 6.2%; P = 0.004), anaemia (34.3% v 6.3%; P = 0.013) and atrial fibrillation (51.3% v 47.1%; P = 0.008), but significantly less ischaemic heart disease (53.4% v 81.2%; P = 0.001). Patients with HFPSF were less likely to be prescribed an angiotensin-converting enzyme inhibitor (61.9% v 72.5%; P = 0.008); carvedilol was used more frequently in LVSD (1.5% v 8.8%; P < 0.001). Readmission rates were higher in the HFPSF group (median, 2 v 1.5 admissions; P = 0.032), particularly for malignancy (4.2% v 1.8%; P < 0.001) and anaemia (3.9% v 2.3%; P < 0.001). Both groups had the same poor survival rate (P = 0.912). Conclusions--Patients with HFPSF were predominantly older women with less social support and higher readmission rates for associated comorbid illnesses. We therefore propose that reduced survival in HFPSF may relate more to comorbid conditions than to suboptimal cardiac management.
Abstract:
OBJECTIVES: To investigate the effects of hearing impairment and distractibility on older people's driving ability, assessed under real-world conditions. DESIGN: Experimental cross-sectional study. SETTING: University laboratory setting and an on-road driving test. PARTICIPANTS: One hundred seven community-living adults aged 62 to 88. Fifty-five percent had normal hearing, 26% had a mild hearing impairment, and 19% had a moderate or greater impairment. MEASUREMENTS: Hearing was assessed using objective impairment measures (pure-tone audiometry, speech perception testing) and a self-report measure (Hearing Handicap Inventory for the Elderly). Driving was assessed on a closed road circuit under three conditions: no distracters, auditory distracters, and visual distracters. RESULTS: There was a significant interaction between hearing impairment and distracters, such that people with moderate to severe hearing impairment had significantly poorer driving performance in the presence of distracters than those with normal or mild hearing impairment. CONCLUSION: Older adults with poor hearing have greater difficulty with driving in the presence of distracters than older adults with good hearing.
Abstract:
Purpose. To investigate evidence-based visual field size criteria for referral of low-vision (LV) patients for mobility rehabilitation. Methods. One hundred and nine participants with LV and 41 age-matched participants with normal sight (NS) were recruited. The LV group was heterogeneous with diverse causes of visual impairment. We measured binocular kinetic visual fields with the Humphrey Field Analyzer and mobility performance on an obstacle-rich, indoor course. Mobility was assessed as percent preferred walking speed (PPWS) and number of obstacle-contact errors. The weighted kappa coefficient of association (κr) was used to discriminate LV participants with both unsafe and inefficient mobility from those with adequate mobility on the basis of their visual field size for the full sample and for subgroups according to type of visual field loss and whether or not the participants had previously received orientation and mobility training. Results. LV participants with both PPWS <38% and errors >6 on our course were classified as having inadequate (inefficient and unsafe) mobility compared with NS participants. Mobility appeared to be first compromised when the visual field was less than about 1.2 steradians (sr; solid angle of a circular visual field of about 70° diameter). Visual fields <0.23 and 0.63 sr (31 to 52° diameter) discriminated patients with at-risk mobility for the full sample and across the two subgroups. A visual field of 0.05 sr (15° diameter) discriminated those with critical mobility. Conclusions. 
Our study suggests that: practitioners should be alert to potential mobility difficulties when the visual field is less than about 1.2 sr (70° diameter); assessment for mobility rehabilitation may be warranted when the visual field is constricted to about 0.23 to 0.63 sr (31 to 52° diameter), depending on the nature of the visual field loss and any previous orientation and mobility training (at risk); and mobility rehabilitation should be conducted before the visual field is constricted to 0.05 sr (15° diameter; critical).
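The steradian cut-offs quoted in this abstract correspond to circular-field diameters through the standard solid-angle relation Ω = 2π(1 − cos(θ/2)). A minimal Python sketch of the conversion (the function names are ours, for illustration; they are not from the study):

```python
import math

def field_diameter_to_steradians(diameter_deg):
    """Solid angle (sr) of a circular visual field of the given angular diameter."""
    half_angle = math.radians(diameter_deg / 2)
    return 2 * math.pi * (1 - math.cos(half_angle))

def steradians_to_field_diameter(omega_sr):
    """Inverse: angular diameter (degrees) of a circular field of omega_sr steradians."""
    half_angle = math.acos(1 - omega_sr / (2 * math.pi))
    return 2 * math.degrees(half_angle)

# The cut-offs reported in the abstract, converted back to field diameters:
for omega in (1.2, 0.63, 0.23, 0.05):
    print(f"{omega} sr -> {steradians_to_field_diameter(omega):.1f} deg diameter")
```

Running the loop reproduces the approximate diameters given in the abstract (the 1.2 sr threshold comes out near 72°, which the authors round to about 70°).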
Abstract:
Background: This study investigated the effects of experimentally induced visual impairment, headlamp glare and clothing on pedestrian visibility. Methods: 28 young adults (M=27.6±4.7 yrs) drove around a closed road circuit at night while pedestrians walked in place at the roadside. Pedestrians wore either black clothing, black clothing with a rectangular vest consisting of 1325 cm2 of retroreflective tape, or the same amount of tape positioned on the extremities in a configuration that conveyed biological motion (“biomotion”). Visual impairment was induced by goggles containing either blurring lenses, simulated cataracts, or clear lenses; visual acuity for the cataract and blurred lens conditions was matched. Drivers pressed a response pad when they first recognized that a pedestrian was present. Sixteen participants drove around the circuit in the presence of headlamp glare while twelve drove without glare. Results: Visual impairment, headlamp glare and pedestrian clothing all significantly affected drivers’ ability to recognize pedestrians (p<0.05). The simulated cataracts were more disruptive than blur, even though acuity was matched across the two manipulations. Pedestrians were recognized more often and at longer distances when they wore “biomotion” clothing than either the vest or black clothing, even in the presence of visual impairment and glare. Conclusions: Drivers’ ability to see and respond to pedestrians at night is degraded by modest visual impairments even when vision meets driver licensing requirements; glare further exacerbates these effects. Clothing that includes retroreflective tape in a biological motion configuration is relatively robust to visual impairment and glare.
Abstract:
Purpose: To determine the effect of moderate levels of refractive blur and simulated cataracts on nighttime pedestrian conspicuity in the presence and absence of headlamp glare. Methods: The ability to recognize pedestrians at night was measured in 28 young adults (M=27.6 years) under three visual conditions: normal vision, refractive blur and simulated cataracts; mean acuity was 20/40 or better in all conditions. Pedestrian recognition distances were recorded while participants drove an instrumented vehicle along a closed road course at night. Pedestrians wore one of three clothing conditions and oncoming headlamps were present for 16 participants and absent for 12 participants. Results: Simulated visual impairment and glare significantly reduced the frequency with which drivers recognized pedestrians and the distance at which the drivers first recognized them. Simulated cataracts were significantly more disruptive than blur even though photopic visual acuity levels were matched. With normal vision, drivers responded to pedestrians at 3.6x and 5.5x longer distances on average than for the blur or cataract conditions, respectively. Even in the presence of visual impairment and glare, pedestrians were recognized more often and at longer distances when they wore a “biological motion” reflective clothing configuration than when they wore a reflective vest or black clothing. Conclusions: Drivers’ ability to recognize pedestrians at night is degraded by common visual impairments even when the drivers’ mean visual acuity meets licensing requirements. To maximize drivers’ ability to see pedestrians, drivers should wear their optimum optical correction, and cataract surgery should be performed early enough to avoid potentially dangerous reductions in visual performance.
Abstract:
Driving and using prescription medicines that have the potential to impair driving is an emerging research area. To date it is characterised by a limited (although growing) number of studies and methodological complexities that make generalisations about impairment due to medications difficult. Consistent evidence has been found for the impairing effects of hypnotics, sedative antidepressants and antihistamines, and narcotic analgesics, although it has been estimated that as many as nine medication classes have the potential to impair driving (Alvarez & del Rio, 2000; Walsh, de Gier, Christopherson, & Verstraete, 2004). There is also evidence for increased negative effects related to concomitant use of other medications and alcohol (Movig et al., 2004; Pringle, Ahern, Heller, Gold, & Brown, 2005). Statistics on the high levels of Australian prescription medication use suggest that consumer awareness of driving impairment due to medicines should be examined. One web-based study has found a low level of awareness, knowledge and risk perceptions among Australian drivers about the impairing effects of various medications on driving (Mallick, Johnston, Goren, & Kennedy, 2007). The lack of awareness and knowledge brings into question the effectiveness of the existing countermeasures. In Australia these consist of the use of ancillary warning labels administered under mandatory regulation and professional guidelines, advice to patients, and the use of Consumer Medicines Information (CMI) with medications that are known to cause impairment. The responsibility for the use of the warnings and related counsel to patients primarily lies with the pharmacist when dispensing relevant medication. A review by the Therapeutic Goods Administration (TGA) noted that in practice, advice to patients may not occur and that CMI is not always available (TGA, 2002). 
Researchers have also found that patients' recall of verbal counsel is very low (Houts, Bachrach, Witmer, Tringali, Bucher, & Localio, 1998). With healthcare increasingly provided in outpatient settings (Davis et al., 2006; Vingilis & MacDonald, 2000), establishing the effectiveness of the warning labels as a countermeasure is especially important. There have been recent international developments in medication categorisation systems and associated medication warning labels. In 2005, France implemented a four-tier medication categorisation and warning system to improve patients' and health professionals' awareness and knowledge of related road safety issues (AFSSAPS, 2005). This warning system uses a pictogram and indicates the level of potential impairment in relation to driving performance through the use of colour and advice on the recommended behaviour to adopt towards driving. The comparable Australian system does not indicate the severity level of potential effects, and does not provide specific guidelines on the attitude or actions that the individual should adopt towards driving. It relies upon the patient to be vigilant in self-monitoring effects, to understand the potential ways in which they may be affected and how serious these effects may be, and to adopt the appropriate protective actions. This thesis investigates the responses of a sample of Australian hospital outpatients who receive appropriate labelling and counselling advice about potential driving impairment due to prescribed medicines. It aims to provide baseline data on the understanding and use of relevant medications by a Queensland public hospital outpatient sample recruited through the hospital pharmacy.
It includes an exploration and comparison of the effect of the Australian and French medication warning systems on medication user knowledge, attitudes, beliefs and behaviour, and explores whether there are areas in which the Australian system may be improved by including any beneficial elements of the French system. A total of 358 outpatients were surveyed, and a follow-up telephone survey was conducted with a subgroup of consenting participants who were taking at least one medication that required an ancillary warning label about driving impairment. A complementary study of 75 French hospital outpatients was also conducted to further investigate the performance of the warnings. Not surprisingly, medication use among the Australian outpatient sample was high. The ancillary warning labels required to appear on medications that can impair driving were prevalent. A subgroup of participants was identified as being potentially at-risk of driving impaired, based on their reported recent use of medications requiring an ancillary warning label and level of driving activity. The sample reported previous behaviour and held future intentions that were consistent with warning label advice and health protective action. Participants did not express a particular need for being advised by a health professional regarding fitness to drive in relation to their medication. However, it was also apparent from the analysis that the participants would be significantly more likely to follow advice from a doctor than a pharmacist. High levels of knowledge in terms of general principles about effects of alcohol, illicit drugs and combinations of substances, and related health and crash risks were revealed. This may reflect a sample specific effect. 
The professional guidelines for hospital pharmacists emphasise that advisory labels must be applied to medicines where applicable and that warning advice must be given to all patients on medication which may affect driving (SHPA, 2006, p. 221). The research program applied selected theoretical constructs from Schwarzer's (1992) Health Action Process Approach, which extends constructs from existing health theories such as the Theory of Planned Behavior (Ajzen, 1991) to better account for the intention-behaviour gap often observed when predicting behaviour. This was undertaken to explore the utility of the constructs in understanding and predicting compliance intentions and behaviour with the mandatory medication warning about driving impairment. This investigation revealed that the theoretical constructs related to intention and planning to avoid driving if an effect from the medication was noticed were useful. Not all the theoretical model constructs that had been demonstrated to be significant predictors in previous research on different health behaviours were significant in the present analyses. Positive outcome expectancies from avoiding driving were found to be important influences on forming the intention to avoid driving if an effect due to medication was noticed. In turn, intention was found to be a significant predictor of planning. Other selected theoretical constructs failed to predict compliance with the Australian warning label advice. It is possible that the limited predictive power of a number of constructs, including risk perceptions, is due to the small sample size obtained at follow-up, on which the evaluation is based. Alternatively, it is possible that the theoretical constructs failed to sufficiently account for issues of particular relevance to the driving situation.
The responses of the Australian hospital outpatient sample towards the Australian and French medication warning labels, which differed according to visual characteristics and warning message, were examined. In addition, a complementary study with a sample of French hospital outpatients was undertaken in order to allow general comparisons concerning the performance of the warnings. While a large amount of research exists concerning warning effectiveness, there is little research that has specifically investigated medication warnings relating to driving impairment. General established principles concerning factors that have been demonstrated to enhance warning noticeability and behavioural compliance have been extrapolated and investigated in the present study. The extent to which there is a need for education and improved health messages on this issue was a core issue of investigation in this thesis. Among the Australian sample, the size of the warning label and text, and red colour were the most visually important characteristics. The pictogram used in the French labels was also rated highly, and was salient for a large proportion of the sample. According to the study of French hospital outpatients, the pictogram was perceived to be the most important visual characteristic. Overall, the findings suggest that the Australian approach of using a combination of visual characteristics was important for the majority of the sample but that the use of a pictogram could enhance effects. A high rate of warning recall was found overall and a further important finding was that higher warning label recall was associated with increased number of medication classes taken. These results suggest that increased vigilance and care are associated with the number of medications taken and the associated repetition of the warning message. 
Significantly higher levels of risk perception were found for the French Level 3 (highest severity) label compared with the comparable mandatory Australian ancillary Label 1 warning. Participants' intentions related to the warning labels indicated that they would be more cautious while taking potentially impairing medication displaying the French Level 3 label compared with the Australian Label 1. These are potentially important findings for the Australian context regarding the current driving impairment warnings displayed on medication. The findings raise other important implications for the Australian labelling context. An underlying factor may be the differences in the wording of the warning messages that appear on the Australian and French labels. The French label explicitly states "do not drive" while the Australian label states "if affected, do not drive", and the difference in responses may reflect that less severity is perceived where the situation involves the consumer's self-assessment of their impairment. The differences in the assignment of responsibility by the Australian (the consumer assesses and decides) and French (the doctor assesses and decides) approaches for the decision to drive while taking medication raise the core question of who is most able to assess driving impairment due to medication: the consumer, or the health professional? There are pros and cons related to knowledge, expertise and practicalities with either option. However, if the safety of the consumer is the primary aim, then the trend towards stronger risk perceptions and more consistent and cautious behavioural intentions in relation to the French label suggests that this approach may be more beneficial for consumer safety. The observations from the follow-up survey, although based on a small sample size and descriptive in nature, revealed that just over half of the sample recalled seeing a warning label about driving impairment on at least one of their medications.
The majority of these respondents reported compliance with the warning advice. However, the results indicated variation in responses concerning alcohol intake and modifying the dose of medication or driving habits so that they could continue to drive, which suggests that the warning advice may not be having the desired impact. The findings of this research have implications for current countermeasures in this area. These include enhancing the role that prescribing doctors have in providing warnings and advice to patients about the impact that their medication can have on driving, increasing consumer perceptions of the authority of pharmacists on this issue, and reinforcing the warning message. More broadly, it is suggested that there would be benefit in wider dissemination of research-based information on increased crash risk, and in systematic monitoring and publicity about the representation of medications in crashes resulting in injuries and fatalities. Suggestions for future research concern the continued investigation of the effects of medications, and their interactions with existing medical conditions and other substances, on driving skills; the effects of variations in warning label design; individual behaviours and characteristics (particularly among those groups who are dependent upon prescription medication); and validation of consumer self-assessment of impairment.
Abstract:
This chapter presents a pilot study examining the interactive contributions of executive function development/impairment and psychosocial stress to young adults’ (17-30 years old) driving behaviour in a simulator city scenario.
Abstract:
The medical records of 273 patients 75 years and older were reviewed to evaluate quality of emergency department (ED) care through the use of quality indicators. One hundred fifty records contained evidence of an attempt to carry out a cognitive assessment. Documented evidence of cognitive impairment (CI) was reported in 54 cases. Of these patients, 30 had no documented evidence of an acute change in cognitive function from baseline; of 26 patients discharged home with preexisting CI (i.e., no acute change from baseline), 15 had no documented evidence of previous consideration of this issue by a health care provider; and 12 of 21 discharged patients who screened positive for cognitive issues for the first time were not referred for outpatient evaluation. These findings suggest that the majority of older adults in the ED are not receiving a formal cognitive assessment, and more than half with CI do not receive quality of care according to the quality indicators for geriatric emergency care. Recommendations for improvement are discussed.
Abstract:
Purpose: To determine whether neuroretinal function differs in healthy persons with and without common risk gene variants for age-related macular degeneration (AMD) and no ophthalmoscopic signs of AMD, and to compare those findings with those of persons with manifest early AMD. Methods and Participants: Neuroretinal function was assessed with the multifocal electroretinogram (mfERG; VERIS, Redwood City, CA) in 32 participants (22 healthy persons with no clinical signs of AMD and 10 early AMD patients). The 22 healthy participants with no AMD were genotyped for risk variants in CFH (rs380390) and/or ARMS2 (rs10490920). We used a slow flash mfERG paradigm (3 inserted frames) and a 103-hexagon stimulus array. Recordings were made with DTL electrodes; fixation and eye movements were monitored online. Trough N1 to peak P1 (N1P1) response densities and P1 implicit times (P1-ITs) were analysed in 5 concentric rings. Results: N1P1 response densities (mean ± SD) for concentric rings 1-3 were on average significantly higher in at-risk genotypes (ring 1: 17.97 nV/deg2 ± 1.9, ring 2: 11.7 nV/deg2 ± 1.3, ring 3: 8.7 nV/deg2 ± 0.7) compared with those without risk (ring 1: 13.7 nV/deg2 ± 1.9, ring 2: 9.2 nV/deg2 ± 0.8, ring 3: 7.3 nV/deg2 ± 1.1) and compared with persons with early AMD (ring 1: 15.3 nV/deg2 ± 4.8, ring 2: 9.1 nV/deg2 ± 2.3, ring 3: 7.3 nV/deg2 ± 1.3) (p<0.05). The group implicit times (P1-ITs) for ring 1 were on average delayed in the early AMD patients (36.4 ms ± 1.0) compared with healthy participants with (35.1 ms ± 1.1) or without risk genotypes (34.8 ms ± 1.3), although these differences were not significant. Conclusion: Neuroretinal function in persons with normal fundi can be differentiated into subgroups based on their genetics. Increased neuroretinal activity in persons who carry AMD risk genotypes may be due to genetically determined subclinical inflammatory and/or histological changes in the retina.
Assessment of neuroretinal function in healthy persons genetically susceptible to AMD may be a useful early biomarker before there is clinical manifestation of AMD.
Abstract:
This paper presents practical vision-based collision avoidance for objects approximating a single point feature. Using a spherical camera model, a visual predictive control scheme guides the aircraft around the object along a conical spiral trajectory. Visibility, state and control constraints are considered explicitly in the controller design by combining image and vehicle dynamics in the process model, and solving the nonlinear optimization problem over the resulting state space. Importantly, range is not required. Instead, the principles of conical spiral motion are used to design an objective function that simultaneously guides the aircraft along the avoidance trajectory, whilst providing an indication of the appropriate point to stop the spiral behaviour. Our approach is aimed at providing a potential solution to the See and Avoid problem for unmanned aircraft and is demonstrated through a series.
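As a geometric aside, the conical spiral trajectory described in this abstract can be pictured as a path that winds around the object while closing range and changing altitude. The sketch below is purely illustrative: the parameterisation, function name, and all numeric values are our assumptions, not the controller or values from the paper.

```python
import math

def conical_spiral(t, r0=100.0, decay=0.05, omega=0.5, climb=2.0):
    """Point on a conical spiral at parameter t: the horizontal radius
    shrinks exponentially while the bearing angle advances and altitude
    grows linearly, so the path winds around the object on a cone."""
    r = r0 * math.exp(-decay * t)   # horizontal range to the object
    x = r * math.cos(omega * t)
    y = r * math.sin(omega * t)
    z = climb * t                   # altitude offset along the spiral
    return x, y, z

# Sample a few points along a notional avoidance path
path = [conical_spiral(t) for t in range(0, 60, 10)]
```

In the paper's scheme the spiral is not generated open-loop like this; it emerges from the visual predictive controller's objective function, with a threshold-based decision for when to stop the behaviour.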
Abstract:
The article presents a study which investigated the reasons why advice related to the removal of mats or rugs by older people with visual impairments had a low rate of acceptance. The researchers speculated that it may have been due to older people's need to maintain a sense of control and autonomy and to arrange their environments in a way that they decided, or to a belief that the recommended modification would not reduce the risk of falling. A telephone survey of a subsample of participants in the Visually Impaired Persons (VIP) Trial was conducted. All 30 interviewees had rugs or mats in their homes. Of the 30 participants, 20 had moved the rugs or mats as a result of recommendations, and 10 had not.
Abstract:
Purpose: Changes in pupil size and shape are relevant for peripheral imagery by affecting aberrations and how much light enters and/or exits the eye. The purpose of this study is to model the pattern of pupil shape across the complete horizontal visual field and to show how the pattern is influenced by refractive error. Methods: Right eyes of thirty participants were dilated with 1% cyclopentolate and images were captured using a modified COAS-HD aberrometer alignment camera along the horizontal visual field to ±90°. A two-lens relay system enabled fixation at targets mounted on the wall 3 m from the eye. Participants placed their heads on a rotatable chin rest and eye rotations were kept to less than 30°. Best-fit elliptical dimensions of pupils were determined. Ratios of minimum to maximum axis diameters were plotted against visual field angle θ. Results: Participants' data were well fitted by cosine functions, with maxima at (–)1° to (–)9° in the temporal visual field and widths 9% to 15% greater than predicted by the cosine of the field angle. Mean functions were 0.99cos[(θ + 5.3)/1.121], R2 0.99 for the whole group and 0.99cos[(θ + 6.2)/1.126], R2 0.99 for the 13 emmetropes. The function peak became less temporal, and the width became smaller, with increase in myopia. Conclusion: Off-axis pupil shape changes are well described by a cosine function which is both decentered by a few degrees and flatter by about 12% than the cosine of the viewing angle, with minor influences of refraction.
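The whole-group fit reported in this abstract can be evaluated directly. This minimal sketch implements 0.99cos[(θ + 5.3)/1.121] with θ in degrees (the function name is ours, for illustration):

```python
import math

def pupil_axis_ratio(theta_deg):
    """Group-mean ratio of minimum to maximum pupil axis diameter at a
    horizontal visual field angle theta (degrees, temporal negative),
    using the whole-group fit 0.99*cos[(theta + 5.3)/1.121] from the
    abstract; the angle is divided by 1.121 before taking the cosine."""
    return 0.99 * math.cos(math.radians((theta_deg + 5.3) / 1.121))

# Peak (roundest apparent pupil) occurs at -5.3 deg in the temporal field
print(pupil_axis_ratio(-5.3))   # maximum of the fit, 0.99
print(pupil_axis_ratio(60.0))   # markedly elongated pupil in the periphery
```

The 1.121 divisor is what makes the fitted function about 12% "flatter" than cos(θ), as the conclusion notes.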
Abstract:
This paper provides a preliminary analysis of an autonomous uncooperative collision avoidance strategy for unmanned aircraft using image-based visual control. Assuming target detection, the approach consists of three parts. First, a novel decision strategy is used to determine appropriate reference image features to track for safe avoidance. This is achieved by considering the current rules of the air (regulations), the properties of spiral motion and the expected visual tracking errors. Second, a spherical visual predictive control (VPC) scheme is used to guide the aircraft along a safe spiral-like trajectory about the object. Lastly, a stopping decision based on thresholding a cost function is used to determine when to stop the avoidance behaviour. The approach does not require estimation of range or time to collision, and instead relies on tuning two mutually exclusive decision thresholds to ensure satisfactory performance.
Abstract:
Converging evidence from epidemiological, clinical and neuropsychological research suggests a link between cannabis use and increased risk of psychosis. Long-term cannabis use has also been related to deficit-like “negative” symptoms and cognitive impairment that resemble some of the clinical and cognitive features of schizophrenia. The current functional brain imaging study investigated the impact of a history of heavy cannabis use on impaired executive function in first-episode schizophrenia patients. Whilst participants performed the Tower of London task in a magnetic resonance imaging scanner, event-related blood oxygenation level-dependent (BOLD) brain activation was compared between four age- and gender-matched groups: 12 first-episode schizophrenia patients; 17 long-term cannabis users; seven cannabis-using first-episode schizophrenia patients; and 17 healthy control subjects. BOLD activation was assessed as a function of increasing task difficulty within and between groups, as well as the main effects of cannabis use and the diagnosis of schizophrenia. Cannabis users and non-drug-using first-episode schizophrenia patients exhibited equivalently reduced dorsolateral prefrontal activation in response to task difficulty. A trend towards additional prefrontal and left superior parietal cortical activation deficits was observed in cannabis-using first-episode schizophrenia patients, while a history of cannabis use accounted for increased activation in the visual cortex. Cannabis users and schizophrenia patients fail to adequately activate the dorsolateral prefrontal cortex, thus pointing to a common working memory impairment which is particularly evident in cannabis-using first-episode schizophrenia patients. A history of heavy cannabis use, on the other hand, accounted for increased primary visual processing, suggesting compensatory imagery processing of the task.