18 results for Prosthesis Evaluation Questionnaire
Abstract:
Background: The MacDQoL is an individualised measure of the impact of macular degeneration (MD) on quality of life (QoL). There is preliminary evidence of its psychometric properties and sensitivity to severity of MD. The aim of this study was to carry out further psychometric evaluation with a larger sample and investigate the measure's sensitivity to MD severity. Methods: Patients with MD (n = 156: 99 women, 57 men, mean age 79 ± 13 years), recruited from eye clinics (one NHS, one private), completed the MacDQoL by telephone interview and later underwent a clinic vision assessment including near and distance visual acuity (VA), comfortable near VA, contrast sensitivity, colour recognition, recovery from glare and presence or absence of distortion or scotoma in the central 10° of the visual field. Results: The completion rate for the MacDQoL items was 99.8%. Of the 26 items, three were dropped from the measure due to redundancy. A fourth was retained in the questionnaire but excluded when computing the scale score. Principal components analysis and Cronbach's alpha (0.944) supported combining the remaining 22 items in a single scale. Lower MacDQoL scores, indicating more negative impact of MD on QoL, were associated with poorer distance VA (better eye r = -0.431 p < 0.001; worse eye r = -0.350 p < 0.001; binocular vision r = -0.419 p < 0.001) and near VA (better eye r = -0.326 p < 0.001; worse eye r = -0.226 p < 0.001; binocular vision r = -0.326 p < 0.001). Poorer MacDQoL scores were associated with poorer contrast sensitivity (better eye r = 0.392 p < 0.001; binocular vision r = 0.423 p < 0.001), poorer colour recognition (r = 0.417 p < 0.001) and poorer comfortable near VA (r = -0.283, p < 0.001). The MacDQoL differentiated between those with and without binocular scotoma (U = 1244 p < 0.001). Conclusion: The MacDQoL 22-item scale has excellent internal consistency reliability and a single-factor structure. The measure is acceptable to respondents and the generic QoL item, MD-specific QoL item and average weighted impact score are related to several measures of vision. The MacDQoL demonstrates that MD has considerable negative impact on many aspects of QoL, particularly independence, leisure activities, dealing with personal affairs and mobility. The measure may be valuable for use in clinical trials and routine clinical care. © 2005 Mitchell et al; licensee BioMed Central Ltd.
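The internal consistency reported above (Cronbach's alpha of 0.944 for the 22-item scale) follows the standard item-variance formula. Below is a minimal Python sketch of that calculation; the respondent-by-item score matrix is entirely hypothetical and only illustrates the mechanics.

import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) matrix of item scores."""
    k = items.shape[1]                          # number of items (22 for the MacDQoL scale)
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical data: 156 respondents x 22 items; real questionnaire items are
# positively correlated, which is what pushes alpha towards values such as 0.944.
rng = np.random.default_rng(0)
scores = rng.integers(1, 6, size=(156, 22)).astype(float)
print(round(cronbach_alpha(scores), 3))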
Abstract:
Purpose To develop a standardized questionnaire of near visual function and satisfaction to complement visual function evaluations of presbyopic corrections. Setting Eye Clinic, School of Life and Health Sciences, Aston University, Midland Eye Institute and Solihull Hospital, Birmingham, United Kingdom. Design Questionnaire development. Methods A preliminary 26-item questionnaire of previously used near visual function items was completed by patients with monofocal intraocular lenses (IOLs), multifocal IOLs, accommodating IOLs, multifocal contact lenses, or varifocal spectacles. Rasch analysis was used for item reduction, after which internal and test–retest reliabilities were determined. Construct validity was determined by correlating the resulting Near Activity Visual Questionnaire (NAVQ) scores with near visual acuity and critical print size (CPS), which was measured using the Minnesota Low Vision Reading Test chart. Discrimination ability was assessed through receiver-operating characteristic (ROC) curve analysis. Results One hundred fifty patients completed the questionnaire. Item reduction resulted in a 10-item NAVQ with excellent separation (2.92), internal consistency (Cronbach α = 0.95), and test–retest reliability (intraclass correlation coefficient = 0.72). Correlations of questionnaire scores with near visual acuity (r = 0.32) and CPS (r = 0.27) provided evidence of validity, and discrimination ability was excellent (area under ROC curve = 0.91). Conclusion Results show the NAVQ is a reliable, valid instrument that can be incorporated into the evaluation of presbyopic corrections.
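The discrimination figure above (area under the ROC curve of 0.91) can be computed from questionnaire scores and a binary reference grouping via the rank-based (Mann-Whitney) identity. The Python sketch below illustrates this with hypothetical scores and labels; it is not the study's analysis.

import numpy as np

def roc_auc(labels: np.ndarray, scores: np.ndarray) -> float:
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity."""
    order = scores.argsort()
    ranks = np.empty(len(scores), dtype=float)
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# Hypothetical NAVQ-style difficulty scores; 1 = clinically reduced near function
labels = np.array([1, 1, 1, 0, 0, 0, 0, 1, 0, 1])
scores = np.array([3.1, 2.8, 3.5, 1.2, 1.9, 2.0, 1.4, 2.6, 2.9, 3.0])
print(round(roc_auc(labels, scores), 2))  # 0.92 for these made-up values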
Abstract:
Objectives: To develop an objective measure to enable hospital Trusts to compare their use of antibiotics. Design: Self-completion, postal questionnaire with telephone follow-up. Sample: 4 hospital trusts in the English Midlands. Results: The survey showed that it was possible to collect data concerning the number of Defined Daily Doses (DDDs) of quinolone antibiotic dispensed per Finished Consultant Episode (FCE) in each Trust. In the 4 trusts studied the mean DDD/FCE was 0.197 (range 0.117 to 0.258). This indicates that, based on a typical course length of 5 days, 3.9% of patient episodes resulted in the prescription of a quinolone antibiotic. Antibiotic prescribing control measures in each Trust were found to be comparable. Conclusion: The measure will enable Trusts to objectively compare their usage of quinolone antibiotics and use this information to carry out clinical audit should differences be recorded. This is likely to be applicable to other groups of antibiotics.
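The headline figure above is simple arithmetic: a DDD/FCE ratio divided by an assumed course length gives the proportion of episodes in which a course was prescribed. A short Python sketch of that conversion (the 5-day course length is the assumption stated in the abstract):

# Convert a Defined Daily Doses per Finished Consultant Episode ratio into the
# approximate share of episodes that received a quinolone course.
mean_ddd_per_fce = 0.197        # mean across the 4 trusts (range 0.117 to 0.258)
typical_course_days = 5         # assumed course length, one DDD per treatment day

share_of_episodes = mean_ddd_per_fce / typical_course_days
print(f"{share_of_episodes:.1%}")  # ~3.9% of patient episodes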
Abstract:
This report details an evaluation of the My Choice Weight Management Programme undertaken by a research team from the School of Pharmacy at Aston University. The My Choice Weight Management Programme is delivered through community pharmacies and general practitioners (GPs) contracted to provide services by the Heart of Birmingham teaching Primary Care Trust. It is designed to support individuals who are ‘ready to change’ by enabling the individual to work with a trained healthcare worker (for example, a healthcare assistant, practice nurse or pharmacy assistant) to develop a care plan designed to enable the individual to lose 5-10% of their current weight. The Programme aims to reduce adult obesity levels; improve access to overweight and obesity management services in primary care; improve diet and nutrition; promote healthy weight and increased levels of physical activity in overweight or obese patients; and support patients to make lifestyle changes to enable them to lose weight. The Programme is available for obese patients over 18 years old who have a Body Mass Index (BMI) greater than 30 kg/m2 (greater than 25 kg/m2 in Asian patients) or greater than 28 kg/m2 (greater than 23.5 kg/m2 in Asian patients) in patients with co-morbidities (diabetes, high blood pressure, cardiovascular disease). Each participant attends weekly consultations over a twelve-session period (the final iteration of these weekly sessions is referred to as ‘session twelve’ in this report). They are then offered up to three follow-up appointments for up to six months at two-monthly intervals (the final of these follow-ups, taking place at approximately nine months post recruitment, is referred to as ‘session fifteen’ in this report). A review of the literature highlights the dearth of published research on the effectiveness of primary care- or community-based weight management interventions. This report may help to address this knowledge deficit. A total of 451 individuals were recruited on to the My Choice Weight Management Programme. More participants were recruited at GP surgeries (n=268) than at community pharmacies (n=183). In total, 204 participants (GP n=102; pharmacy n=102) attended session twelve and 82 participants (GP n=22; pharmacy n=60) attended session fifteen. The unique demographic characteristics of My Choice Weight Management Programme participants – participants were recruited from areas with high levels of socioeconomic deprivation and over four-fifths of participants were from Black and Minority Ethnic groups; populations which are traditionally underserved by healthcare interventions – make the achievements of the Programme particularly notable. The mean weight loss at session 12 was 3.8 kg (equivalent to a reduction of 4.0% of initial weight) among GP surgery participants and 2.4 kg (2.8%) among pharmacy participants. At session 15 mean weight loss was 2.3 kg (2.2%) among GP surgery participants and 3.4 kg (4.0%) among pharmacy participants. The My Choice Weight Management Programme improved the general health status of participants between recruitment and session twelve as measured by the validated SF-12 questionnaire. While cost data are presented in this report, it is unclear which provider type delivered the Programme more cost-effectively. Attendance rates on the Programme were consistently better among pharmacy participants than among GP participants.
The opinions of programme participants (both those who attended regularly and those who failed to attend as expected) and programme providers were explored via semi-structured interviews and, in the case of the participants, a self-completion postal questionnaire. These data suggest that the Programme was almost uniformly popular with both the deliverers of the Programme and participants on the Programme, with 83% of questionnaire respondents indicating that they would be happy to recommend the Programme to other people looking to lose weight. Our recommendations, based on the evidence provided in this report, include: a. Any consideration of an extension to the study should also give comparable consideration to an extension of the Programme evaluation. The feasibility of assigning participants to a pharmacy provider or a GP provider via a central allocation system should also be examined. This would address imbalances in participant recruitment levels between provider type and allow for more accurate comparison of the effectiveness in the delivery of the Programme between GP surgeries and community pharmacies by increasing the homogeneity of participants at each type of site and increasing the number of Programme participants overall. b. Widespread dissemination of the findings from this review of the My Choice Weight Management Project should be undertaken through a variety of channels. c. Consideration of the inclusion of the following key aspects of the My Choice Weight Management Project in any extension to the Programme: i. The provision of training to staff in GP surgeries and community pharmacies responsible for delivery of the Programme prior to patient recruitment. ii. Maintaining the level of healthcare staff input to the Programme. iii. The regular schedule of appointments with Programme participants. iv. The provision of an increased variety of printed material. d. A simplification of the data collection method used by the Programme commissioners at the individual Programme delivery sites.
Abstract:
There has been substantial research into the role of distance learning in education. Despite the rise in the popularity and practice of this form of learning in business, there has not been a parallel increase in the amount of research carried out in this field. An extensive investigation was conducted into the entire distance learning system of a multi-national company, with particular emphasis on the design, implementation and evaluation of the materials. In addition, the performance and attitudes of trainees were examined. The results of a comparative study indicated that trainees using distance learning had significantly higher test scores than trainees using conventional face-to-face training. The influence of the previous distance learning experience, educational background and selected study environment of trainees was investigated. Trainees with previous experience of distance learning were more likely to complete the course and achieved significantly higher test scores than trainees with no previous experience. The more advanced the educational background of trainees, the greater the likelihood of their completing the course, although there was no significant difference in the test scores achieved. Trainees preferred to use the materials at home and those opting to study in this environment scored significantly higher than those studying in the office, the study room at work or in a combination of environments. The influence of learning styles (Kolb, 1976) was tested. The results indicated that convergers had the greatest completion rates and scored significantly higher than trainees with the assimilator, accommodator and diverger learning styles. The attitudes of the trainees, supervisors and trainers were examined using questionnaire, interview and discussion techniques. The findings highlighted the potential problems of lack of awareness and low motivation, which could prove to be major obstacles to the success of distance learning in business.
Herbal medicines: physician's recommendation and clinical evaluation of St. John's Wort for depression
Abstract:
Why some physicians recommend herbal medicines while others do not is not well understood. We undertook a survey designed to identify factors which predict recommendation of herbal medicines by physicians in Malaysia. About a third (206 out of 626) of the physicians working at the University of Malaya Medical Centre were interviewed face-to-face, using a structured questionnaire. Physicians were asked about their personal use of, recommendation of, perceived interest in, and usefulness and safety of herbal medicines. Using logistic regression modelling we identified personal use, general interest, interest in receiving training, race and higher level of medical training as significant predictors of recommendation. St. John's wort is one of the most widely used herbal remedies. It is also probably the most widely evaluated herbal remedy, with no fewer than 57 randomised controlled trials. Evidence from the depression trials suggests that St. John's wort is more effective than placebo while its comparative efficacy to conventional antidepressants is not well established. We updated previous meta-analyses of St. John's wort, described the characteristics of the included trials, applied methods of data imputation and transformation for incomplete trial data and examined sources of heterogeneity in the design and results of those trials. Thirty randomised controlled trials, which were heterogeneous in design, were identified. Our meta-analysis showed that St. John's wort was significantly more effective than placebo [pooled RR 1.90 (1.54-2.35)] and [pooled WMD 4.09 (2.33 to 5.84)]. However, the remedy was similar to conventional antidepressants in its efficacy [pooled RR 1.01 (0.93-1.10)] and [pooled WMD 0.18 (-0.66 to 1.02)]. Subgroup analyses of the placebo-controlled trials suggested that use of different diagnostic classifications at the inclusion stage led to different estimates of effect. Similarly a significant difference in the estimates of efficacy was observed when trials were categorised according to length of follow-up. Confounding between the variables, diagnostic classification and length of trial was shown by loglinear analysis. Despite extensive study, there is still no consensus on how effective St. John's wort is in depression. However, most experts would agree that it has some effect. Our meta-analysis highlights the problems associated with the clinical evaluation of herbal medicines when the active ingredients are poorly defined or unknown. The problem is compounded when the target disease (e.g. depression) is also difficult to define and different instruments are available to diagnose and evaluate it.
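The pooled risk ratios quoted above are typically obtained by weighting each trial's log risk ratio by the inverse of its variance. The Python sketch below shows a fixed-effect version of that calculation; the trial counts are hypothetical, and the published estimates also involved random-effects models and data imputation not reproduced here.

import numpy as np

def pooled_rr(events_t, n_t, events_c, n_c):
    """Fixed-effect inverse-variance pooling of log risk ratios across trials."""
    events_t, n_t = np.asarray(events_t, float), np.asarray(n_t, float)
    events_c, n_c = np.asarray(events_c, float), np.asarray(n_c, float)
    log_rr = np.log((events_t / n_t) / (events_c / n_c))     # per-trial log risk ratio
    var = 1 / events_t - 1 / n_t + 1 / events_c - 1 / n_c    # variance of each log RR
    weights = 1 / var
    pooled = (weights * log_rr).sum() / weights.sum()
    se = np.sqrt(1 / weights.sum())
    return np.exp(pooled), np.exp(pooled - 1.96 * se), np.exp(pooled + 1.96 * se)

# Hypothetical responder counts: (treatment events, treatment n, control events, control n)
rr, lo, hi = pooled_rr([30, 45, 25], [50, 70, 40], [15, 25, 14], [50, 70, 40])
print(f"pooled RR {rr:.2f} (95% CI {lo:.2f} to {hi:.2f})")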
Abstract:
AIM: To determine the validity and reliability of the measurement of corneal curvature and non-invasive tear break-up time (NITBUT) measures using the Oculus Keratograph. METHOD: One hundred eyes of 100 patients had their corneal curvature assessed with the Keratograph and the Nidek ARKT TonorefII. NITBUT was then measured objectively with the Keratograph with Tear Film Scan software and subjectively with the Keeler Tearscope. The Keratograph measurements of corneal curvature and NITBUT were repeated to test reliability. The ocular sensitivity disease index questionnaire was completed to quantify ocular comfort. RESULTS: The Keratograph consistently measured significantly flatter corneal curvatures than the ARKT (MSE difference: +1.83±0.44D), but was repeatable (p>0.05). Keratograph NITBUT measurements were significantly lower than observation using the Tearscope (by 12.35±7.45 s; p < 0.001) and decreased on subsequent measurement (by -1.64±6.03 s; p < 0.01). The Keratograph measures the first time the tears break up anywhere on the cornea, with 63% of subjects having NITBUTs <5 s and a further 22% having readings between 5 and 10 s. The Tearscope results were found to correlate better with the patients' symptoms (r = -0.32) compared to the Keratograph (r = -0.19). CONCLUSIONS: The Keratograph requires a calibration off-set to be comparable to other keratometry devices. Its current software detects very early tear film changes, recording significantly lower NITBUT values than conventional subjective assessment. Adjustments to instrumentation software have the potential to enhance the value of Keratograph objective measures in clinical practice.
Abstract:
OBJECTIVE: To determine the accuracy, acceptability and cost-effectiveness of polymerase chain reaction (PCR) and optical immunoassay (OIA) rapid tests for maternal group B streptococcal (GBS) colonisation at labour. DESIGN: A test accuracy study was used to determine the accuracy of rapid tests for GBS colonisation of women in labour. Acceptability of testing to participants was evaluated through a questionnaire administered after delivery, and acceptability to staff through focus groups. A decision-analytic model was constructed to assess the cost-effectiveness of various screening strategies. SETTING: Two large obstetric units in the UK. PARTICIPANTS: Women booked for delivery at the participating units other than those electing for a Caesarean delivery. INTERVENTIONS: Vaginal and rectal swabs were obtained at the onset of labour and the results of vaginal and rectal PCR and OIA (index) tests were compared with the reference standard of enriched culture of combined vaginal and rectal swabs. MAIN OUTCOME MEASURES: The accuracy of the index tests, the relative accuracies of tests on vaginal and rectal swabs and whether test accuracy varied according to the presence or absence of maternal risk factors. RESULTS: PCR was significantly more accurate than OIA for the detection of maternal GBS colonisation. Combined vaginal or rectal swab index tests were more sensitive than either test considered individually [combined swab sensitivity for PCR 84% (95% CI 79-88%); vaginal swab 58% (52-64%); rectal swab 71% (66-76%)]. The highest sensitivity for PCR came at the cost of lower specificity [combined specificity 87% (95% CI 85-89%); vaginal swab 92% (90-94%); rectal swab 92% (90-93%)]. The sensitivity and specificity of rapid tests varied according to the presence or absence of maternal risk factors, but not consistently. PCR results were determinants of neonatal GBS colonisation, but maternal risk factors were not. Overall levels of acceptability for rapid testing amongst participants were high. Vaginal swabs were more acceptable than rectal swabs. South Asian women were least likely to have participated in the study and were less happy with the sampling procedure and with the prospect of rapid testing as part of routine care. Midwives were generally positive towards rapid testing but had concerns that it might lead to overtreatment and unnecessary interference in births. Modelling analysis revealed that the most cost-effective strategy was to provide routine intravenous antibiotic prophylaxis (IAP) to all women without screening. Removing this strategy, which is unlikely to be acceptable to most women and midwives, resulted in screening, based on a culture test at 35-37 weeks' gestation, with the provision of antibiotics to all women who screened positive being most cost-effective, assuming that all women in premature labour would receive IAP. The results were sensitive to very small increases in costs and changes in other assumptions. Screening using a rapid test was not cost-effective based on its current sensitivity, specificity and cost. CONCLUSIONS: Neither rapid test was sufficiently accurate to recommend it for routine use in clinical practice. IAP directed by screening with enriched culture at 35-37 weeks' gestation is likely to be the most acceptable cost-effective strategy, although it is premature to suggest the implementation of this strategy at present.
Abstract:
Background: Introducing neonatal screening procedures may not be readily accepted by parents and may increase anxiety. The acceptability of pulse oximetry screening to parents has not been previously reported. Objective: To assess maternal acceptability of pulse oximetry screening for congenital heart defects and to identify factors predictive of participation in screening. Design and setting: A questionnaire was completed by a cross-sectional sample of mothers whose babies were recruited into the PulseOx Study, which investigated the test accuracy of pulse oximetry screening. Participants: A total of 119 mothers of babies with false-positive (FP) results, 15 with true-positive and 679 with true-negative results following screening. Main outcome measures: Questionnaires included measures of satisfaction with screening, anxiety, depression and perceptions of test results. Results: Participants were predominantly satisfied with screening. The anxiety of mothers given FP results was not significantly higher than that of mothers given true-negative results (median score 32.7 vs 30.0, p=0.09). White British/Irish mothers were more likely to participate in screening, with a decline rate of 5%; other ethnic groups were more likely to decline, with the highest decline rate among Black African mothers (21%; OR 4.6, 95% CI 3.8 to 5.5). White British mothers were also less anxious (p<0.001) and more satisfied (p<0.001) than those of other ethnicities. Conclusions: Pulse oximetry screening was acceptable to mothers and FP results were not found to increase anxiety. Factors leading to differences in participation and satisfaction across ethnic groups need to be identified so that staff can support parents appropriately.
Abstract:
Background: Screening for congenital heart defects (CHDs) relies on antenatal ultrasound and postnatal clinical examination; however, life-threatening defects often go undetected. Objective: To determine the accuracy, acceptability and cost-effectiveness of pulse oximetry as a screening test for CHDs in newborn infants. Design: A test accuracy study determined the accuracy of pulse oximetry. Acceptability of testing to parents was evaluated through a questionnaire, and to staff through focus groups. A decision-analytic model was constructed to assess cost-effectiveness. Setting: Six UK maternity units. Participants: These were 20,055 asymptomatic newborns at ≥35 weeks’ gestation, their mothers and health-care staff. Interventions: Pulse oximetry was performed prior to discharge from hospital and the results of this index test were compared with a composite reference standard (echocardiography, clinical follow-up and follow-up through interrogation of clinical databases). Main outcome measures: Detection of major CHDs – defined as causing death or requiring invasive intervention up to 12 months of age (subdivided into critical CHDs causing death or intervention before 28 days, and serious CHDs causing death or intervention between 1 and 12 months of age); acceptability of testing to parents and staff; and the cost-effectiveness in terms of cost per timely diagnosis. Results: Fifty-three of the 20,055 babies screened had a major CHD (24 critical and 29 serious), a prevalence of 2.6 per 1000 live births. Pulse oximetry had a sensitivity of 75.0% [95% confidence interval (CI) 53.3% to 90.2%] for critical cases and 49.1% (95% CI 35.1% to 63.2%) for all major CHDs. When 23 cases were excluded, in which a CHD was already suspected following antenatal ultrasound, pulse oximetry had a sensitivity of 58.3% (95% CI 27.7% to 84.8%) for critical cases (12 babies) and 28.6% (95% CI 14.6% to 46.3%) for all major CHDs (35 babies). False-positive (FP) results occurred in 1 in 119 babies (0.84%) without major CHDs (specificity 99.2%, 95% CI 99.0% to 99.3%). However, of the 169 FPs, there were six cases of significant but not major CHDs and 40 cases of respiratory or infective illness requiring medical intervention. The prevalence of major CHDs in babies with normal pulse oximetry was 1.4 (95% CI 0.9 to 2.0) per 1000 live births, as 27 babies with major CHDs (6 critical and 21 serious) were missed. Parent and staff participants were predominantly satisfied with screening, perceiving it as an important test to detect ill babies. There was no evidence that mothers given FP results were more anxious after participating than those given true-negative results, although they were less satisfied with the test. White British/Irish mothers were more likely to participate in the study, and were less anxious and more satisfied than those of other ethnicities. The incremental cost-effectiveness ratio of pulse oximetry plus clinical examination compared with examination alone is approximately £24,900 per timely diagnosis in a population in which antenatal screening for CHDs already exists. Conclusions: Pulse oximetry is a simple, safe, feasible test that is acceptable to parents and staff and adds value to existing screening. It is likely to identify cases of critical CHDs that would otherwise go undetected. It is also likely to be cost-effective given current acceptable thresholds. The detection of other pathologies, such as significant CHDs and respiratory and infective illnesses, is an additional advantage.
Other pulse oximetry techniques, such as perfusion index, may enhance detection of aortic obstructive lesions.
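The headline accuracy figures in the abstract above can be reproduced from the reported counts (53 major CHDs among 20,055 babies, 27 of them missed, 6 of the 24 critical cases missed, and 169 false positives). The Python sketch below is only that arithmetic; the confidence intervals in the paper come from formal interval methods not shown here.

# Reconstructing the accuracy figures from the counts reported in the abstract.
babies_screened = 20_055
major_chd, critical_chd = 53, 24          # 24 critical + 29 serious
missed_major, missed_critical = 27, 6     # cases with normal pulse oximetry
false_positives = 169
no_major_chd = babies_screened - major_chd

prevalence_per_1000 = 1000 * major_chd / babies_screened                    # ~2.6
sensitivity_critical = (critical_chd - missed_critical) / critical_chd      # 75.0%
sensitivity_major = (major_chd - missed_major) / major_chd                  # ~49.1%
specificity = (no_major_chd - false_positives) / no_major_chd               # ~99.2%
fp_rate = false_positives / no_major_chd                                    # ~0.84%, about 1 in 119

print(f"prevalence {prevalence_per_1000:.1f}/1000; sensitivity {sensitivity_critical:.1%} (critical), "
      f"{sensitivity_major:.1%} (major); specificity {specificity:.1%}; FP rate {fp_rate:.2%}")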
Abstract:
Aim: To evaluate the performance of an aspheric diffractive multifocal acrylic intraocular lens (IOL), ZMB00 1-Piece Tecnis. Setting: Five sites across Europe. Methods: Fifty-two patients with cataracts (average age 68.5±10.5 years, 35 female) were bilaterally implanted with the aspheric diffractive multifocal IOL after completing a questionnaire regarding their optical visual symptoms, use of visual correction and their visual satisfaction. The questionnaire was completed again 4-6 months after surgery along with measures of uncorrected and best-corrected distance and near visual acuity, under photopic and mesopic lighting, reading ability, defocus curve testing and ocular examination for adverse events. Results: The residual refractive error was 0.01±0.47D with 56% of eyes within ±0.25D and 97% within ±1.0D. Uncorrected visual acuity was 0.02±0.10 logMAR at distance and 0.15±0.30 logMAR at near, only reducing to 0.07±0.10 logMAR at distance and 0.21±0.25 logMAR at near in mesopic conditions. The defocus curve showed a near addition between 2.5 and 3.0 D, allowing a reading acuity of 0.08±0.13 logMAR, with a range of clear vision <0.3 logMAR of ∼4.0 D. The average reading speed was 121.4±30.8 words per minute. Spectacle independence was 100% for distance and 88% for near, with high levels of satisfaction reported. Overall rating of vision without glasses could be explained (r=0.760) by preoperative best-corrected distance acuity, postoperative reading acuity and postoperative uncorrected distance acuity in photopic conditions (p<0.001). Only two minor adverse events occurred. Conclusions: The ZMB00 1-Piece Tecnis multifocal IOL provides a good visual outcome at distance and near with minimal adverse effects.
Abstract:
Objectives: To conduct an independent evaluation of the first phase of the Health Foundation's Safer Patients Initiative (SPI), and to identify the net additional effect of SPI and any differences in changes in participating and non-participating NHS hospitals. Design: Mixed method evaluation involving five substudies, before and after design. Setting: NHS hospitals in United Kingdom. Participants: Four hospitals (one in each country in the UK) participating in the first phase of the SPI (SPI1); 18 control hospitals. Intervention: The SPI1 was a compound (multicomponent) organisational intervention delivered over 18 months that focused on improving the reliability of specific frontline care processes in designated clinical specialties and promoting organisational and cultural change. Results: Senior staff members were knowledgeable and enthusiastic about SPI1. There was a small (0.08 points on a 5 point scale) but significant (P<0.01) effect in favour of the SPI1 hospitals in one of 11 dimensions of the staff questionnaire (organisational climate). Qualitative evidence showed only modest penetration of SPI1 at medical ward level. Although SPI1 was designed to engage staff from the bottom up, it did not usually feel like this to those working on the wards, and questions about legitimacy of some aspects of SPI1 were raised. Of the five components to identify patients at risk of deterioration - monitoring of vital signs (14 items); routine tests (three items); evidence based standards specific to certain diseases (three items); prescribing errors (multiple items from the British National Formulary); and medical history taking (11 items) - there was little net difference between control and SPI1 hospitals, except in relation to quality of monitoring of acute medical patients, which improved on average over time across all hospitals. Recording of respiratory rate increased to a greater degree in SPI1 than in control hospitals; in the second six hours after admission recording increased from 40% (93) to 69% (165) in control hospitals and from 37% (141) to 78% (296) in SPI1 hospitals (odds ratio for "difference in difference" 2.1, 99% confidence interval 1.0 to 4.3; P=0.008). Use of a formal scoring system for patients with pneumonia also increased over time (from 2% (102) to 23% (111) in control hospitals and from 2% (170) to 9% (189) in SPI1 hospitals), which favoured controls and was not significant (0.3, 0.02 to 3.4; P=0.173). There were no improvements in the proportion of prescription errors and no effects that could be attributed to SPI1 in non-targeted generic areas (such as enhanced safety culture). On some measures, the lack of effect could be because compliance was already high at baseline (such as use of steroids in over 85% of cases where indicated), but even when there was more room for improvement (such as in quality of medical history taking), there was no significant additional net effect of SPI1. There were no changes over time or between control and SPI1 hospitals in errors or rates of adverse events in patients in medical wards. Mortality increased from 11% (27) to 16% (39) among controls and decreased from 17% (63) to 13% (49) among SPI1 hospitals, but the risk adjusted difference was not significant (0.5, 0.2 to 1.4; P=0.085). Poor care was a contributing factor in four of the 178 deaths identified by review of case notes. The survey of patients showed no significant differences apart from an increase in perception of cleanliness in favour of SPI1 hospitals.
Conclusions: The introduction of SPI1 was associated with improvements in one of the types of clinical process studied (monitoring of vital signs) and one measure of staff perceptions of organisational climate. There was no additional effect of SPI1 on other targeted issues nor on other measures of generic organisational strengthening.
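The "difference in difference" odds ratio quoted above for respiratory-rate recording can be approximated directly from the reported proportions (40% to 69% in control hospitals versus 37% to 78% in SPI1 hospitals). The Python sketch below gives the unadjusted ratio of within-group odds ratios; the published figure of 2.1 is risk adjusted, so the raw value differs.

def odds(p: float) -> float:
    """Convert a proportion to odds."""
    return p / (1 - p)

# Proportions of admissions with respiratory rate recorded in the second six hours
control_before, control_after = 0.40, 0.69
spi1_before, spi1_after = 0.37, 0.78

control_or = odds(control_after) / odds(control_before)   # change within control hospitals
spi1_or = odds(spi1_after) / odds(spi1_before)            # change within SPI1 hospitals
print(round(spi1_or / control_or, 2))                     # unadjusted difference-in-difference OR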
Abstract:
As a new medium for questionnaire delivery, the internet has the potential to revolutionise the survey process. Online (web-based) questionnaires provide several advantages over traditional survey methods in terms of cost, speed, appearance, flexibility, functionality, and usability [1, 2]. For instance, delivery is faster, responses are received more quickly, and data collection can be automated or accelerated [1-3]. Online-questionnaires can also provide many capabilities not found in traditional paper-based questionnaires: they can include pop-up instructions and error messages; they can incorporate links; and it is possible to encode difficult skip patterns making such patterns virtually invisible to respondents. Like many new technologies, however, online-questionnaires face criticism despite their advantages. Typically, such criticisms focus on the vulnerability of online-questionnaires to the four standard survey error types: namely, coverage, non-response, sampling, and measurement errors. Although, like all survey errors, coverage error (“the result of not allowing all members of the survey population to have an equal or nonzero chance of being sampled for participation in a survey” [2, pg. 9]) also affects traditional survey methods, it is currently exacerbated in online-questionnaires as a result of the digital divide. That said, many developed countries have reported substantial increases in computer and internet access and/or are targeting this as part of their immediate infrastructural development [4, 5]. Indicating that familiarity with information technologies is increasing, these trends suggest that coverage error will rapidly diminish to an acceptable level (for the developed world at least) in the near future, and in so doing, positively reinforce the advantages of online-questionnaire delivery. The second error type – the non-response error – occurs when individuals fail to respond to the invitation to participate in a survey or abandon a questionnaire before it is completed. Given today’s societal trend towards self-administration [2], the former is inevitable, irrespective of delivery mechanism. Conversely, non-response as a consequence of questionnaire abandonment can be relatively easily addressed. Unlike traditional questionnaires, the delivery mechanism for online-questionnaires makes estimation of questionnaire length and time required for completion difficult, thus increasing the likelihood of abandonment. By incorporating a range of features into the design of an online questionnaire, it is possible to facilitate such estimation – and indeed, to provide respondents with context sensitive assistance during the response process – and thereby reduce abandonment while eliciting feelings of accomplishment [6]. For online-questionnaires, sampling error (“the result of attempting to survey only some, and not all, of the units in the survey population” [2, pg. 9]) can arise when all but a small portion of the anticipated respondent set is alienated (and so fails to respond) as a result of, for example, disregard for varying connection speeds, bandwidth limitations, browser configurations, monitors, hardware, and user requirements during the questionnaire design process. Similarly, measurement errors (“the result of poor question wording or questions being presented in such a way that inaccurate or uninterpretable answers are obtained” [2, pg. 11]) will lead to respondents becoming confused and frustrated.
Sampling, measurement, and non-response errors are likely to occur when an online-questionnaire is poorly designed. Individuals will answer questions incorrectly, abandon questionnaires, and may ultimately refuse to participate in future surveys; thus, the benefit of online questionnaire delivery will not be fully realized. To prevent errors of this kind, and their consequences, it is extremely important that practical, comprehensive guidelines exist for the design of online questionnaires. Many design guidelines exist for paper-based questionnaire design (e.g. [7-14]); the same is not true for the design of online questionnaires [2, 15, 16]. The research presented in this paper is a first attempt to address this discrepancy. Section 2 describes the derivation of a comprehensive set of guidelines for the design of online-questionnaires and briefly (given space restrictions) outlines the essence of the guidelines themselves. Although online-questionnaires reduce traditional delivery costs (e.g. paper, mail out, and data entry), set-up costs can be high given the need to either adopt and acquire training in questionnaire development software or secure the services of a web developer. Neither approach, however, guarantees a good questionnaire (often because the person designing the questionnaire lacks relevant knowledge in questionnaire design). Drawing on existing software evaluation techniques [17, 18], we assessed the extent to which current questionnaire development applications support our guidelines; Section 3 describes the framework used for the evaluation, and Section 4 discusses our findings. Finally, Section 5 concludes with a discussion of further work.
Abstract:
Noxious stimuli in the esophagus cause pain that is referred to the anterior chest wall because of convergence of visceral and somatic afferents within the spinal cord. We sought to characterize the neurophysiological responses of these convergent spinal pain pathways in humans by studying 12 healthy subjects over three visits (V1, V2, and V3). Esophageal pain thresholds (Eso-PT) were assessed by electrical stimulation and anterior chest wall pain thresholds (ACW-PT) by use of a contact heat thermode. Esophageal evoked potentials (EEP) were recorded from the vertex following 200 electrical stimuli, and anterior chest wall evoked potentials (ACWEP) were recorded following 40 heat pulses. The fear of pain questionnaire (FPQ) was administered on V1. Statistical data are shown as point estimates of difference ± 95% confidence interval. Pain thresholds increased between V1 and V3 [Eso-PT: V1-V3 = -17.9 mA (-27.9, -7.9) P < 0.001; ACW-PT: V1-V3 = -3.38°C (-5.33, -1.42) P = 0.001]. The morphology of cortical responses from both sites was consistent and equivalent [P1, N1, P2, N2 complex, where P1 and P2 are the first and second positive (downward) components of the CEP waveform, respectively, and N1 and N2 are the first and second negative (upward) components, respectively], indicating activation of similar cortical networks. For EEP, N1 and P2 latencies decreased between V1 and V3 [N1: V1-V3 = 13.7 (1.8, 25.4) P = 0.02; P2: V1-V3 = 32.5 (11.7, 53.2) P = 0.003], whereas amplitudes did not differ. For ACWEP, P2 latency increased between V1 and V3 [-35.9 (-60, -11.8) P = 0.005] and amplitudes decreased [P1-N1: V1-V3 = 5.4 (2.4, 8.4) P = 0.01; P2-N2: 6.8 (3.4, 10.3) P < 0.001]. The mean P1 latency of EEP over three visits was 126.6 ms and that of ACWEP was 101.6 ms, reflecting afferent transmission via Aδ fibers. There was a significant negative correlation between FPQ scores and Eso-PT on V1 (r = -0.57, P = 0.05). These data provide the first neurophysiological evidence of convergent esophageal and somatic pain pathways in humans.
Abstract:
Background: Qualitative research has suggested that spousal carers of someone with dementia differ in terms of whether they perceive their relationship with that person as continuous with the premorbid relationship or as radically different, and that a perception of continuity may be associated with more person-centered care and the experience of fewer of the negative emotions associated with caring. The aim of the study was to develop and evaluate a quantitative measure of the extent to which spousal carers perceive the relationship to be continuous. Methods: An initial pool of 42 questionnaire items was generated on the basis of the qualitative research about relationship continuity. These were completed by 51 spousal carers and item analysis was used to reduce the pool to 23 items. The retained items, comprising five subscales, were then administered to a second sample of 84 spousal carers, and the questionnaire's reliability, discriminative power, and validity were evaluated. Results: The questionnaire showed good reliability: Cronbach's α for the full scale was 0.947, and test-retest reliability was 0.932. Ferguson's δ was 0.987, indicating good discriminative power. Evidence of construct validity was provided by predicted patterns of subscale correlations with the Closeness and Conflict Scale and the Marwit-Meuser Caregiver Grief Inventory. Conclusion: Initial psychometric evaluation of the measure was encouraging. The measure provides a quantitative means of investigating ideas from qualitative research about the role of relationship continuity in influencing how spousal carers provide care and how they react emotionally to their caring role. © 2012 International Psychogeriatric Association.
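Ferguson's δ reported above (0.987) indexes how evenly respondents' total scores spread across the attainable score range. A minimal Python sketch of that statistic follows; the 1-5 item scoring and the simulated carer scores are assumptions for illustration only.

import numpy as np

def fergusons_delta(total_scores: np.ndarray, n_possible_scores: int) -> float:
    """Ferguson's delta; 1 indicates maximal discrimination (scores spread evenly)."""
    N = len(total_scores)
    c = n_possible_scores                           # number of attainable total scores
    _, freqs = np.unique(total_scores, return_counts=True)
    return c * (N**2 - np.sum(freqs**2)) / ((c - 1) * N**2)

# Hypothetical: 84 carers completing 23 items scored 1-5, so totals run from 23 to 115
rng = np.random.default_rng(1)
totals = rng.integers(23, 116, size=84)
print(round(fergusons_delta(totals, n_possible_scores=115 - 23 + 1), 3))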