Abstract:
The shift from 20th century mass communications media towards convergent media and Web 2.0 has raised the possibility of a renaissance of the public sphere, based around citizen journalism and participatory media culture. This paper will evaluate such claims both conceptually and empirically. At a conceptual level, it is noted that the question of whether media democratization is occurring depends in part upon how democracy is understood, with some critical differences in understandings of democracy, the public sphere and media citizenship. The empirical work in this paper draws upon various case studies of new developments in Australian media, including online-only newspapers, developments in public service media, and the rise of commercially based online alternative media. It is argued that participatory media culture is being expanded if understood in terms of media pluralism, but that implications for the public sphere depend in part upon how media democratization is defined.
Abstract:
OBJECTIVE: To examine whether some drivers with hemianopia or quadrantanopia display safe driving skills on the road compared with drivers with normal visual fields. METHOD: An occupational therapist evaluated 22 people with hemianopia, 8 with quadrantanopia, and 30 with normal vision for driving skills during naturalistic driving using six rating scales. RESULTS: Of drivers with normal vision, >90% drove flawlessly or had minor errors. Although drivers with hemianopia were more likely to receive poorer ratings for all skills, 59.1%–81.8% performed with no or minor errors. A skill commonly problematic for them was lane keeping (40.9%). Of 8 drivers with quadrantanopia, 7 (87.5%) exhibited no or minor errors. CONCLUSION: This study of people with hemianopia or quadrantanopia with no lateral spatial neglect highlights the need to provide individual opportunities for on-road driving evaluation under natural traffic conditions if a person is motivated to return to driving after brain injury.
Abstract:
PURPOSE: This study investigated the effects of simulated visual impairment on nighttime driving performance and pedestrian recognition under real-road conditions. METHODS: Closed-road nighttime driving performance was measured for 20 young, visually normal participants (mean age = 27.5 ± 6.1 years) under three visual conditions: normal vision, simulated cataracts, and refractive blur, which were incorporated in modified goggles. The visual acuity levels for the cataract and blur conditions were matched for each participant. Driving measures included sign recognition, avoidance of low-contrast road hazards, time to complete the course, and lane keeping. Pedestrian recognition was measured for pedestrians wearing either black clothing or black clothing with retroreflective markings on the moveable joints to create the perception of biological motion ("biomotion"). RESULTS: Simulated visual impairment significantly reduced participants' ability to recognize road signs and avoid road hazards, and increased the time taken to complete the driving course (p < 0.05); the effect was greatest for the cataract condition, even though the cataract and blur conditions were matched for visual acuity. Although visual impairment also significantly reduced the ability to recognize the pedestrian wearing black clothing, the pedestrian wearing "biomotion" was seen 80% of the time. CONCLUSIONS: Driving performance under nighttime conditions was significantly degraded by modest visual impairment; these effects were greatest for the cataract condition. Pedestrian recognition was greatly enhanced by marking limb joints in the pattern of "biomotion," which was relatively robust to the effects of visual impairment.
Abstract:
Purpose. To investigate evidence-based visual field size criteria for referral of low-vision (LV) patients for mobility rehabilitation. Methods. One hundred and nine participants with LV and 41 age-matched participants with normal sight (NS) were recruited. The LV group was heterogeneous with diverse causes of visual impairment. We measured binocular kinetic visual fields with the Humphrey Field Analyzer and mobility performance on an obstacle-rich, indoor course. Mobility was assessed as percent preferred walking speed (PPWS) and number of obstacle-contact errors. The weighted kappa coefficient of association (κr) was used to discriminate LV participants with both unsafe and inefficient mobility from those with adequate mobility on the basis of their visual field size for the full sample and for subgroups according to type of visual field loss and whether or not the participants had previously received orientation and mobility training. Results. LV participants with both PPWS <38% and errors >6 on our course were classified as having inadequate (inefficient and unsafe) mobility compared with NS participants. Mobility appeared to be first compromised when the visual field was less than about 1.2 steradians (sr; solid angle of a circular visual field of about 70° diameter). Visual fields <0.23 and 0.63 sr (31 to 52° diameter) discriminated patients with at-risk mobility for the full sample and across the two subgroups. A visual field of 0.05 sr (15° diameter) discriminated those with critical mobility. Conclusions. Our study suggests that: practitioners should be alert to potential mobility difficulties when the visual field is less than about 1.2 sr (70° diameter); assessment for mobility rehabilitation may be warranted when the visual field is constricted to about 0.23 to 0.63 sr (31 to 52° diameter) depending on the nature of their visual field loss and previous history (at risk); and mobility rehabilitation should be conducted before the visual field is constricted to 0.05 sr (15° diameter; critical).
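Note on the field-size units above: the steradian values and circular-field diameters quoted in the abstract are related by the standard solid-angle formula Omega = 2*pi*(1 - cos(d/2)) for a circular field of full angular diameter d. The short Python sketch below is not from the paper; the function names are illustrative, and it simply checks that correspondence.

import math

def solid_angle_sr(field_diameter_deg):
    # Solid angle (steradians) subtended by a circular visual field of the
    # given full angular diameter: Omega = 2*pi*(1 - cos(d/2)).
    return 2.0 * math.pi * (1.0 - math.cos(math.radians(field_diameter_deg / 2.0)))

def field_diameter_deg(omega_sr):
    # Inverse conversion: angular diameter (degrees) from solid angle (sr).
    return 2.0 * math.degrees(math.acos(1.0 - omega_sr / (2.0 * math.pi)))

for omega in (1.2, 0.63, 0.23, 0.05):
    print(f"{omega:4.2f} sr ~ {field_diameter_deg(omega):4.1f} deg diameter")

Running this reproduces the quoted pairings to within rounding: 1.2 sr corresponds to roughly 70° diameter, 0.63 sr to about 52°, 0.23 sr to about 31°, and 0.05 sr to about 15°.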
Abstract:
Objective: To investigate how age-related declines in vision (particularly contrast sensitivity), simulated using cataract goggles and low-contrast stimuli, influence the accuracy and speed of cognitive test performance in older adults. An additional aim was to investigate whether declines in vision differentially affect secondary more than primary memory. Method: Using a fully within-subjects design, 50 older drivers aged 66-87 years completed two tests of cognitive performance, letter matching (perceptual speed) and symbol recall (short-term memory), under different viewing conditions that degraded visual input (low-contrast stimuli, cataract goggles, and low-contrast stimuli combined with cataract goggles, compared with normal viewing). Presentation time was also manipulated for letter matching. Visual function, as measured using standard charts, was taken into account in statistical analyses. Results: Accuracy and speed for cognitive tasks were significantly impaired when visual input was degraded. Furthermore, cognitive performance was positively associated with contrast sensitivity. Presentation time did not influence cognitive performance, and visual degradation did not differentially influence primary and secondary memory. Conclusion: Age-related declines in visual function can impact on the accuracy and speed of cognitive performance, and therefore the cognitive abilities of older adults may be underestimated in neuropsychological testing. It is thus critical that visual function be assessed prior to testing, and that stimuli be adapted to older adults' sensory capabilities (e.g., by maximising stimulus contrast).
Abstract:
PURPOSE: To investigate the impact of different levels of simulated visual impairment on the cognitive test performance of older adults and to compare this with previous findings in younger adults. METHODS: Cognitive performance was assessed in 30 visually normal, community-dwelling older adults (mean = 70.2 ± 3.9 years). Four standard cognitive tests were used, including the Digit Symbol Substitution Test, Trail Making Tests A and B, and the Stroop Color Word Test, under three visual conditions: normal baseline vision and two levels of cataract-simulating filters (Vistech), which were administered in a random order. Distance high-contrast visual acuity and Pelli-Robson letter contrast sensitivity were also assessed for all three visual conditions. RESULTS: Simulated cataract significantly impaired performance across all cognitive test performance measures. In addition, the impact of simulated cataract was significantly greater in this older cohort than in a younger cohort previously investigated. Individual differences in contrast sensitivity better predicted cognitive test performance than did visual acuity. CONCLUSIONS: Visual impairment can lead to slowing of cognitive performance in older adults; these effects are greater than those observed in younger participants. This has important implications for neuropsychological testing of older populations, who have a high prevalence of cataract.
Abstract:
Purpose: The aim was to determine world-wide patterns of fitting contact lenses for the correction of presbyopia. Methods: Up to 1,000 survey forms were sent to contact lens fitters in each of 38 countries between January and March every year over five consecutive years (2005 to 2009). Practitioners were asked to record data relating to the first 10 contact lens fittings or refittings performed after receiving the survey form. Results: Data were received relating to 16,680 presbyopic (age 45 years or older) and 84,202 pre-presbyopic (15 to 44 years) contact lens wearers. Females are over-represented in presbyopic versus pre-presbyopic groups, possibly reflecting a stronger desire for the cosmetic benefits of contact lenses among older women. The extent to which multifocal and monovision lenses are prescribed for presbyopes varies considerably among nations, ranging from 79 per cent of all soft lenses in Portugal to zero in Singapore. There appears to be significant under-prescribing of contact lenses for the correction of presbyopia, although for those who do receive such corrections, three times more multifocal lenses are fitted compared with monovision fittings. Presbyopic corrections are most frequently prescribed for full-time wear and monthly replacement. Conclusions: Despite apparent improvements in multifocal design and an increase in available multifocal options in recent years, practitioners are still under-prescribing with respect to the provision of appropriate contact lenses for the correction of presbyopia. Training of contact lens practitioners in presbyopic contact lens fitting should be accelerated and clinical and laboratory research in this field should be intensified to enhance the prospects of meeting the needs of presbyopic contact lens wearers more fully.
Abstract:
Purpose. The objective of this study was to explore the discriminative capacity of non-contact corneal esthesiometry (NCCE) when compared with the neuropathy disability score (NDS), a validated, standard method of diagnosing clinically significant diabetic neuropathy. Methods. Eighty-one participants with type 2 diabetes, no history of ocular disease, trauma, or surgery, and no history of systemic disease that may affect the cornea were enrolled. Participants were ineligible if there was a history of neuropathy due to a non-diabetic cause or a current diabetic foot ulcer or infection. The corneal sensitivity threshold was measured on the eye on the side of the dominant hand, at a distance of 10 mm from the center of the cornea, using a stimulus duration of 0.9 s. The NDS was measured, producing a score ranging from 0 to 10. To determine the optimal cutoff point of corneal sensitivity that identified the presence of neuropathy (diagnosed by NDS), the Youden index and "closest-to-(0,1)" criteria were used. Results. The receiver operating characteristic curve for NCCE for the presence of neuropathy (NDS ≥3) had an area under the curve of 0.73 (p = 0.001) and, for the presence of moderate neuropathy (NDS ≥6), an area of 0.71 (p = 0.003). Using the Youden index, for an NDS ≥3, the sensitivity of NCCE was 70% and specificity was 75%, and a corneal sensitivity threshold of 0.66 mbar or higher indicated the presence of neuropathy. When NDS ≥6 (indicating risk of foot ulceration) was applied, the sensitivity was 52% with a specificity of 85%. Conclusions. NCCE is a sensitive test for the diagnosis of minimal and more advanced diabetic neuropathy and may serve as a useful surrogate marker for diabetic and perhaps other neuropathies.
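For readers unfamiliar with the two cutoff-selection criteria named in the Methods, the Python sketch below shows how the Youden index (J = sensitivity + specificity - 1, maximized) and the "closest-to-(0,1)" criterion (minimum distance from the ROC point to the top-left corner) pick a threshold from a set of (cutoff, sensitivity, specificity) points. The ROC points are invented for illustration; only the 0.66 mbar figure echoes the abstract, and this is not the study's data or code.

import math

# Hypothetical (cutoff in mbar, sensitivity, specificity) points; illustrative only.
roc_points = [
    (0.40, 0.95, 0.40),
    (0.50, 0.82, 0.60),
    (0.66, 0.70, 0.75),   # sensitivity/specificity here mirror the abstract's 70%/75%
    (0.80, 0.55, 0.85),
    (1.00, 0.40, 0.95),
]

# Youden index J = sensitivity + specificity - 1; choose the cutoff maximizing J.
youden_cutoff = max(roc_points, key=lambda p: p[1] + p[2] - 1.0)[0]

# "Closest-to-(0,1)": minimize distance from (1 - specificity, sensitivity) to (0, 1).
closest_cutoff = min(roc_points, key=lambda p: math.hypot(1.0 - p[2], 1.0 - p[1]))[0]

print(f"Youden-optimal cutoff:   {youden_cutoff:.2f} mbar")
print(f"Closest-to-(0,1) cutoff: {closest_cutoff:.2f} mbar")

With these illustrative points both criteria select the 0.66 mbar cutoff; on real data the two criteria can select different thresholds, which is why studies often report both.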
Abstract:
Rationale, aims and objectives: Patient preference for interventions aimed at preventing in-hospital falls has not previously been investigated. This study aims to contrast the amount patients are willing to pay to prevent falls through six intervention approaches. Methods: This was a cross-sectional willingness-to-pay (WTP) contingent valuation survey conducted among hospital inpatients (n = 125) during their first week on a geriatric rehabilitation unit in Queensland, Australia. Contingent valuation scenarios were constructed for six falls prevention interventions: a falls consultation, an exercise programme, a face-to-face education programme, a booklet and video education programme, hip protectors and a targeted, multifactorial intervention programme. The benefit to participants in terms of reduction in risk of falls was held constant (30% risk reduction) within each scenario. Results: Participants valued the targeted, multifactorial intervention programme the highest [mean WTP (95% CI): AUD $268 ($240, $296)], followed by the falls consultation [$215 ($196, $234)], exercise [$174 ($156, $191)], face-to-face education [$164 ($146, $182)], hip protector [$74 ($62, $87)] and booklet and video education interventions [$68 ($57, $80)]. A ‘cost of provision’ bias was identified, which adversely affected the valuation of the booklet and video education intervention. Conclusion: There may be considerable indirect and intangible costs associated with interventions to prevent falls in hospitals that can substantially affect patient preferences. These costs could substantially influence the ability of these interventions to generate a net benefit in a cost–benefit analysis.
Abstract:
Purpose: To date, there have been no measuring techniques available that could clearly identify all phases of tear film surface kinetics in one interblink interval. Methods: Using a series of cases, we show that lateral shearing interferometry, equipped with a set of robust parameter estimation techniques, is able to characterize up to five different phases of tear film surface kinetics: (i) an initial fast tear film build-up phase, (ii) a further, slower tear film build-up phase, (iii) tear film stability, (iv) tear film thinning, and (v) subsequent tear film deterioration after a detected break-up. Results: Several representative examples are given for estimating tear film surface kinetics in measurements in which the subjects were asked to blink and then keep their eyes open as long as they could. Conclusions: Lateral shearing interferometry is a noninvasive technique that provides a means for temporal characterization of tear film surface kinetics and an opportunity for analysis of the two-step tear film build-up process.
Abstract:
Increased crash risk is associated with sedative medications, and researchers and health professionals have called for improvements to medication warnings about driving. The tiered warning system in place in France since 2005 indicates risk level, uses a color-coded pictogram, and advises the user to seek the advice of a doctor before driving. In Queensland, Australia, the mandatory warning on medications that may cause drowsiness advises users not to drive or operate machinery if they self-assess that they are affected, and calls attention to possible increased impairment when the medication is combined with alcohol. Objectives: The reported aims of the study were to establish and compare risk perceptions associated with the Queensland and French warnings among medication users. It was conducted to complement the work of DRUID in reviewing the effectiveness of existing campaigns and practice guidelines. Methods: Medication users in France and Queensland were surveyed using warnings about driving from both contexts to compare risk perceptions associated with each label. Both samples were assessed for perceptions of the warning that carried the strongest message of risk. The Queensland study also included perceptions of the likelihood of crash and level of impairment associated with the warning. Results: Findings from the French study (N = 75) indicate that, when all labels were compared, the majority of respondents perceived the French Level-3 label as the strongest warning about risk concerning driving. Respondents in Queensland had significantly stronger perceptions of potential impairment to driving ability, z = -13.26, p < .001 (n = 325), and potential chance of having a crash, z = -11.87, p < .001 (n = 322), after taking a medication that displayed the strongest French warning, compared with the strongest Queensland warning. Conclusions: Evidence suggests that warnings about driving displayed on medications can influence risk perceptions associated with use of medication. Further analyses will determine whether risk perceptions influence compliance with the warnings.
Abstract:
Special collections, because of the issues associated with conservation and use that they share with archives, tend to be the most heavily digitized areas in libraries. The Nineteenth Century Schoolbooks collection comprises 9,000 rarely held nineteenth-century schoolbooks that were painstakingly collected over a lifetime of work by Prof. John A. Nietz and donated to the Hillman Library at the University of Pittsburgh in 1958; the collection has since grown to 15,000 volumes. About 140 of these texts are completely digitized and showcased on a publicly accessible website through the University of Pittsburgh’s Library, along with a searchable bibliography of the entire collection, which has expanded awareness of the collection and extended its user base beyond the academic community. The URL for the website is http://digital.library.pitt.edu/nietz/. The collection is a rich resource for researchers studying the intellectual, educational, and textbook publishing history of the United States. In this study, we examined several existing records collected by the Digital Research Library at the University of Pittsburgh in order to determine the identity and searching behaviors of the users of this collection. The records examined include: 1) the results of a 3-month-long user survey; 2) user access statistics, including search queries, for a period of one year, one year after the digitized collection became publicly available in 2001; and 3) e-mail input received by the website over the 4 years from 2000 to 2004. The results of the study demonstrate the differences in online retrieval strategies used by academic researchers and historians, archivists, avocationists, and the general public, and the importance of facilitating the discovery of digitized special collections through the use of electronic finding aids and an interactive interface with detailed metadata.
Abstract:
Background: It remains unclear whether it is possible to develop a spatiotemporal epidemic prediction model for cryptosporidiosis. This paper examined the impact of socioeconomic and weather factors on cryptosporidiosis and explored the possibility of developing such a model using socioeconomic and weather data in Queensland, Australia. Methods: Data on weather variables, notified cryptosporidiosis cases, and socioeconomic factors in Queensland were supplied by the Australian Bureau of Meteorology, Queensland Department of Health, and Australian Bureau of Statistics, respectively. Three-stage spatiotemporal classification and regression tree (CART) models were developed to examine the association between socioeconomic and weather factors and the monthly incidence of cryptosporidiosis in Queensland, Australia. The spatiotemporal CART model was used for predicting outbreaks of cryptosporidiosis in Queensland, Australia. Results: The results of the classification tree model (with incidence rates defined as binary presence/absence) showed that there was an 87% chance of an occurrence of cryptosporidiosis in a local government area (LGA) if the socio-economic index for the area (SEIFA) exceeded 1021. The results of the regression tree model (based on non-zero incidence rates) showed that when SEIFA was between 892 and 945 and temperature exceeded 32°C, the relative risk (RR) of cryptosporidiosis was 3.9 (mean morbidity: 390.6/100,000, standard deviation (SD): 310.5) compared with the monthly average incidence of cryptosporidiosis, and that when SEIFA was less than 892 the RR was 4.3 (mean morbidity: 426.8/100,000, SD: 319.2). A prediction map for cryptosporidiosis outbreaks was made according to the outputs of the spatiotemporal CART models. Conclusions: The results of this study suggest that spatiotemporal CART models based on socioeconomic and weather variables can be used for predicting outbreaks of cryptosporidiosis in Queensland, Australia.
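The splits reported above read like the nested threshold rules a fitted CART model produces. Purely as an illustration of that structure, the Python sketch below paraphrases the reported results; it is not the authors' fitted model, and it collapses the separate classification and regression trees into a single hypothetical function.

def cryptosporidiosis_risk(seifa, temperature_c):
    # Threshold rules paraphrased from the abstract; illustrative only.
    if seifa > 1021:
        # Classification tree (binary presence/absence of cases in an LGA-month).
        return "87% chance of cryptosporidiosis occurrence"
    if seifa < 892:
        # Regression tree (non-zero incidence rates).
        return "relative risk ~4.3 vs. monthly average incidence"
    if seifa <= 945 and temperature_c > 32:
        # Regression tree: SEIFA between 892 and 945 and temperature above 32 degrees C.
        return "relative risk ~3.9 vs. monthly average incidence"
    return "no elevated risk reported in the abstract for this combination"

print(cryptosporidiosis_risk(seifa=1050, temperature_c=25))
print(cryptosporidiosis_risk(seifa=920, temperature_c=33))

An actual spatiotemporal CART model would be fitted to the LGA-month data (for example with scikit-learn's DecisionTreeRegressor) rather than hand-coded; the point here is only how its output reads as nested thresholds.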
Abstract:
In a randomized, double-blind study, 202 healthy adults were randomized to receive a live, attenuated Japanese encephalitis chimeric virus vaccine (JE-CV) and placebo 28 days apart in a cross-over design. A subgroup of 98 volunteers received a JE-CV booster at month 6. Safety, immunogenicity, and persistence of antibodies to month 60 were evaluated. There were no unexpected adverse events (AEs), and the incidence of AEs was similar between JE-CV and placebo. There were three serious adverse events (SAEs) and no deaths. A moderately severe case of acute viral illness commencing 39 days after placebo administration was the only SAE considered possibly related to immunization. Of the vaccine recipients, 99% achieved a seroprotective antibody titer ≥ 10 against JE-CV 28 days following the single dose of JE-CV, and 97% were seroprotected at month 6. Kaplan-Meier analysis showed that after a single dose of JE-CV, 87% of the participants who were seroprotected at month 6 were still protected at month 60. This rate was 96% among those who received a booster immunization at month 6. On day 28 after immunization, 95% of subjects developed a neutralizing titer ≥ 10 against at least three of the four strains of a panel of wild-type Japanese encephalitis virus (JEV) strains. At month 60, that proportion was 65% for participants who received a single dose of JE-CV and 75% for the booster group. These results suggest that JE-CV is safe and well tolerated and that a single dose provides long-lasting immunity to wild-type strains.
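The 87% and 96% persistence figures above come from Kaplan-Meier estimation on censored follow-up. As a reminder of how that estimator works, here is a minimal hand-rolled sketch in Python on made-up follow-up times; it is not the trial's data or analysis code.

def kaplan_meier(durations, events):
    # events[i] is True if loss of seroprotection was observed at durations[i],
    # False if the participant was censored (still protected at last visit).
    data = sorted(zip(durations, events))
    n_at_risk = len(data)
    survival = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        tied = [e for tt, e in data if tt == t]   # all participants with time t
        d = sum(tied)                             # observed events at time t
        if d:
            survival *= (n_at_risk - d) / n_at_risk
            curve.append((t, survival))
        n_at_risk -= len(tied)
        i += len(tied)
    return curve

# Hypothetical follow-up in months (True = lost seroprotection, False = censored).
durations = [12, 24, 36, 48, 60, 60, 60, 60, 60, 60, 60, 60]
events    = [True, True, False, True, False, False, False, False, False, False, False, False]

for month, proportion in kaplan_meier(durations, events):
    print(f"month {month:>2}: estimated proportion still seroprotected = {proportion:.2f}")

Censored participants contribute to the at-risk denominator only until their last visit, which is what allows a month-60 estimate even though not everyone was followed for the full 60 months.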