987 results for Health Sciences, Nutrition | Health Sciences, Public Health | Education, Health


Abstract:

The Health Belief Model (HBM) provided the theoretical framework for examining factors in Emergency Department nurses' compliance with Universal Precautions (UP). A random sample of Emergency Nurses Association (ENA) clinical nurses (n = 900) from five states (New York, New Jersey, California, Texas, and Florida) was surveyed to explore the factors related to their decision to comply with UP. Five hundred ninety-eight (598) usable questionnaires were analyzed. Respondents were primarily female (84.9%), hospital-based (94.6%) staff nurses (66.6%) with a mean of 8.5 years of emergency nursing experience. The nurses represented all levels of hospitals, from rural facilities (4.5%) to urban trauma centers (23.7%). Mean UP training was 3.0 hours (range 0-38 hours). Linear regression was used to analyze the four hypotheses. The first hypothesis, relating perceived susceptibility and seriousness to reported UP use, was not significant (p > .05). Hypothesis 2 tested perceived benefits against internal and external barriers; both perceived benefits and internal barriers, as well as the overall regression, were significant (F = 26.03, p < 0.001). Hypothesis 3, which tested modifying factors, cues to action, select demographic variables, and the main effects of the HBM against self-reported UP compliance, was also significant (F = 12.39, p < 0.001). Additive effects were tested with a stepwise regression that assessed the contribution of each significant variable; the regression was significant (F = 12.39, p < 0.001) and explained 18% of the total variance.
In descending order of contribution, the significant variables related to compliance were: internal barriers (t = -6.267, p < 0.001), such as the perception that the nature of the emergency care environment sometimes leaves inadequate time to put on UP; cues to action (t = 3.195, p = 0.001), such as posted reminder signs or verbal reminders from peers; the number of UP training hours (t = 3.667, p < 0.001), meaning that compliance increases with training hours; perceived benefits (t = 3.466, p = 0.001), such as believing that UP provide adequate barrier protection; and perceived susceptibility (t = 2.880, p = 0.004), such as feeling at risk of exposure.

Abstract:

This study investigated the effects of patient variables (physical and cognitive disability, significant others' preference, and social support) on nurses' nursing home placement decisions and explored nurses' participation in the decision-making process. The study was conducted in a hospital in Texas. A sample of registered nurses on units that refer patients for nursing home placement reviewed a series of vignettes describing elderly patients who differed on the study variables, and indicated on a five-point Likert scale the extent to which they agreed with nursing home placement. The vignettes were judged to have good content validity by a group of five colleagues (expert consultants), and test-retest reliability based on the Pearson correlation coefficient was satisfactory (average of .75) across all vignettes. The study tested the following hypotheses: nurses have a greater propensity to recommend placement when (1) patients have severe physical disabilities; (2) patients have severe cognitive disabilities; (3) it is the significant others' preference; and (4) patients have neither social support nor alternative services. Other hypotheses were that (5) a nurse's characteristics and extent of participation will not have a significant effect on the placement decision; and (6) a patient's social support is the single most important factor, and the combination of severe physical and cognitive disability, significant others' preference, and no social support or alternative services will be the most important set of predictors of a nurse's placement decision. Analysis of variance (ANOVA) was used to analyze the relationships implied in the hypotheses. A series of one-way ANOVAs (bivariate analyses) of the main effects supported hypotheses one through five. Overall, the n-way ANOVA (multivariate analyses) of the main effects confirmed that social support was the most important single factor after controlling for other variables.
The four-way interaction model confirmed that the most predictive combination of patient characteristics was severe physical and cognitive disability, no social support, and significant others who did not desire placement. These analyses clarified the influence of specific patient variables on nurses' recommendations regarding placement.

Abstract:

This study attempts to provide reliable scientific data to enable the health services department of the Royal Commission of Yanbu Al Sinaiyah, Saudi Arabia, to improve the quality of health care provided in its facilities. Patient satisfaction and dissatisfaction were investigated along seven dimensions: general satisfaction, communication, technical quality, art of care, continuity of care, time spent with the doctor, and access/convenience/availability. Patient satisfaction was compared for Saudi vs. non-Saudi patients, for males vs. females, and for patients seen in the hospital vs. those seen in the Al-nawa and Radwa primary care centers. The information was obtained with a self-administered questionnaire. The results indicate that patients seen in the Al-nawa primary care center were more satisfied with care than patients seen in the hospital, who in turn were more satisfied than those seen in the Radwa primary care center. Non-Saudi patients were more satisfied than Saudi patients across all three facilities and satisfaction scales, and female patients were more satisfied than male patients across all three facilities and satisfaction scales.

Abstract:

This study assessed whether hospital-wide implementation of a needleless intravenous connection system reduces the number of reported percutaneous injuries, overall and specifically those due to intravenous connection activities. Incidence rates were compared before and after hospital-wide implementation of a needleless intravenous system at two hospitals: a full-service general hospital and a pediatric hospital. The years 1989-1991 were designated as pre-implementation and 1993 as post-implementation; data from 1992 were excluded from the effectiveness evaluation to allow employees to become familiar with the new device. The two hospitals showed rate ratios of 1.37 (95% CI = 1.22-1.54, p ≤ .0001) and 1.63 (95% CI = 1.34-1.97, p ≤ .0001), corresponding to 27.1% and 38.6% reductions in overall injury rate, respectively. Rate ratios for intravenous connection injuries were 2.67 (95% CI = 1.89-3.78, p ≤ .0001) and 3.35 (95% CI = 1.87-6.02, p ≤ .0001), corresponding to 62.5% and 69.9% reductions in injury rate, respectively. Rate ratios for all non-intravenous connection injuries were calculated to control for factors other than device implementation that may have been reducing the injury rate. These rate ratios were lower, 1.21 and 1.44, indicating the magnitude of injury reduction attributable to factors other than device implementation. It was concluded that the device was effective in reducing the number of reported percutaneous injuries. Use-effectiveness of the system was also assessed through a survey of randomly selected device users to determine satisfaction with the device, frequency of use, and barriers to use. Four hundred seventy-eight surveys were returned, for a response rate of 50.9%. Approximately 94% of respondents at both hospitals expressed satisfaction with the needleless system and recommended continued use.
The survey also revealed that even though over 50% of respondents reported using the device "always" or "most of the time" for intravenous medication administration, flushing lines, and connecting secondary intravenous lines, needles were still being used for these same activities. Compatibility, accessibility, and other technical problems were reported as reasons for using needles for these activities. These problems must be addressed, by both manufacturers and users, before the needleless system can prevent all intravenous connection injuries.
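The percent reductions quoted above can be recovered from the rate ratios: a pre/post rate ratio R implies a (1 - 1/R) * 100% drop in the post-implementation rate. A minimal sketch of that arithmetic (small discrepancies with the published figures reflect rounding of the reported ratios):

```python
def percent_reduction(rate_ratio: float) -> float:
    """Percent drop in injury rate when rate_ratio = pre_rate / post_rate."""
    return (1 - 1 / rate_ratio) * 100

# Rate ratios reported for the two hospitals:
for label, rr in [("overall, hospital 1", 1.37),
                  ("overall, hospital 2", 1.63),
                  ("IV-connection, hospital 1", 2.67),
                  ("IV-connection, hospital 2", 3.35)]:
    print(f"{label}: RR = {rr}, reduction = {percent_reduction(rr):.1f}%")
```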

Abstract:

This study of ambulance workers in the City of Houston's emergency medical services examined factors related to shiftwork tolerance and intolerance. EMS personnel work a 24-hour shift with rotating days of the week: workers are assigned to the A, B, C, or D shift, each of which rotates 24 hours on, 24 hours off, 24 hours on, followed by 4 days off. One hundred seventy-six male EMTs, paramedics, and chauffeurs from stations of varying activity levels were surveyed. The sample ranged in age from 20 to 45, and average job tenure was 8.2 years. Over 68% of the workers held a second job, and the majority of those worked over 20 hours a week at the second position. The survey instrument was a 20-page questionnaire modeled on the Folkard Standardized Shiftwork Index. In addition to demographic data, it measured general job satisfaction, sleep quality, general health complaints, morningness/eveningness, cognitive and somatic anxiety, depression, and circadian type, and included an EMS-specific stress scale. A conceptual model of shiftwork tolerance was presented to identify the key factors examined in the study. An extensive list of 265 variables was reduced to 36 key variables relating to: (1) shift schedule and demographic/lifestyle factors, (2) individual differences in traits and characteristics, and (3) tolerance/intolerance effects. Using the general job satisfaction scale as the key measure of shift tolerance/intolerance, a significant relationship was found between this dependent variable and stress, number of years working a 24-hour shift, sleep quality, languidness/vigorousness, the usual amount of sleep received during the shift, general health complaints, and flexibility/rigidity (R² = .5073).
The sample consisted mostly of morning types or extreme morning types, with few evening types and no extreme evening types, duplicating the findings of Motohashi's previous study of ambulance workers. Station activity level was not significant for any of the dependent variables examined. However, the shift worked was related to sleep quality, despite the fact that all shifts work the same hours and follow the same rotation schedule.

Abstract:

To fully describe the construct of empowerment and to identify possible measures of it in racially and ethnically diverse neighborhoods, a qualitative study based on grounded theory was conducted at both the individual and collective levels. Participants included 49 grassroots experts on community empowerment, who took part in semi-structured interviews and focus groups; the researcher also conducted field observations as part of the research protocol. The study identified benchmarks of individual and collective empowerment and hundreds of possible markers of collective empowerment applicable in diverse communities. Results also indicated that community involvement is essential in the selection and implementation of proper measures. Additional findings were that the construct of empowerment involves specific principles of empowering relationships and particular motivational factors. These findings led to a two-dimensional model of empowerment based on the concepts of relationships among members of a collective body and the collective body's desire for socio-political change. The results suggest that the design, implementation, and evaluation of programs that foster empowerment must be based on collaboration between the population being served and program staff because of the interactive, synergistic nature of the construct. In addition, empowering programs should embrace specific principles and processes of individual and collective empowerment to maximize their effectiveness and efficiency. Finally, the results suggest that collaboratively choosing markers to measure the processes and outcomes of empowerment in the main systems and populations of today's multifaceted communities is a useful mechanism for determining change.

Abstract:

An agency is accountable to a legislative body for the implementation of public policy and has a responsibility to ensure that implementation is consistent with its statutory objectives. The analysis of the effectiveness of the implementation of the Vendor Drug Program proceeded as follows. The federal and state roles and statutes underlying the formulation of the Vendor Drug Program were reviewed to determine statutory intent and formal provisions. The translation of these into programmatic details was then examined, focusing on the factors affecting the implementation process. Lastly, the six conditions outlined by Mazmanian and Sabatier as criteria for effective implementation were applied to the Vendor Drug Program to determine whether its implementation was consistent with statutory objectives. The implementation of the statutes clearly met four of the six conditions for effective implementation: it (1) had clear and consistent objectives; (2) rested on a valid causal theory; (3) structured the process to maximize agency and target compliance with the objectives; and (4) retained the support of constituency groups and sovereigns. The implementation was basically consistent with the statutory objectives, although the determination of vendor reimbursement has had, and continues to have, problems.

Abstract:

Congenital adrenal hyperplasia (CAH) due to 21-hydroxylase deficiency has an estimated incidence of 1 in 15,000 births and can result in death, salt-wasting crisis, or impaired growth. It has been proposed that early diagnosis and treatment of infants detected by newborn screening for CAH will decrease mortality and morbidity in the affected population. The Texas Department of Health (TDH) began mandatory screening for CAH in June 1989, and Texas is one of fourteen states providing neonatal screening for the disorder. The purpose of this study was to describe the cost and effect of screening for CAH in Texas during 1994 and to compare cases first detected by screening with those first detected clinically between January 1, 1990 and December 31, 1994. The study used a longitudinal descriptive research design; the data were secondary, previously collected by the Texas Department of Health. Alongside the descriptive study, an economic analysis was performed in which the cost of the program was defined, measured, and valued for four phases of screening: specimen collection, specimen testing, follow-up, and diagnostic evaluation. There were 103 infants diagnosed with classical CAH during the study period, 71 of whom had the more serious salt-wasting form of the disease. Of the infants diagnosed with classical CAH, 60% were first detected by screening and 40% were first detected through clinical findings before the screening results were returned. The base-case cost of adding newborn screening to an existing program (excluding the cost of specimen collection) was $357,989 per 100,000 infants. The cost per case of classical CAH diagnosed, based on the number of infants first detected by screening in 1994, was $126,892. There were 42 infants diagnosed with the more benign nonclassical form of the disease; when these cases were included in the total, the cost per case of congenital adrenal hyperplasia diagnosed was $87,848.
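The cost figures above are simple cost-effectiveness ratios: program cost divided by cases diagnosed. As a hedged back-of-envelope check (our assumption, not the study's), if every case at the stated incidence of 1 in 15,000 were screen-detected, the base cost of $357,989 per 100,000 infants would imply roughly $54,000 per case; the higher reported figure of $126,892 reflects, among other things, that only 60% of cases were first detected by screening.

```python
def cost_per_case(total_cost: float, cases: float) -> float:
    """Cost-effectiveness ratio: screening dollars per case diagnosed."""
    return total_cost / cases

# Naive check under the assumption (ours) of full screen detection
# at an incidence of 1 in 15,000 births:
expected_cases_per_100k = 100_000 / 15_000   # about 6.7 cases
naive_cost = cost_per_case(357_989, expected_cases_per_100k)
```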

Abstract:

The increased use of vancomycin in hospitals has made it standard practice to monitor serum vancomycin levels because of possible nephrotoxicity. However, routine monitoring of vancomycin serum concentration has been criticized, and its cost-effectiveness is in question, because frequent monitoring neither increases efficacy nor decreases nephrotoxicity. The purpose of the present study was to identify factors that may place patients at increased risk of developing vancomycin-induced nephrotoxicity and for whom monitoring may be most beneficial. From September to December 1992, 752 consecutive inpatients at The University of Texas M. D. Anderson Cancer Center, Houston, were prospectively evaluated for nephrotoxicity in order to describe predictive risk factors for vancomycin-related nephrotoxicity. Ninety-five patients (13 percent) developed nephrotoxicity. A total of 299 patients (40 percent) were considered monitored (vancomycin serum levels determined during the course of therapy), and 346 patients (46 percent) were receiving concurrent moderately to highly nephrotoxic drugs. Factors significantly associated with nephrotoxicity in univariate analysis were: gender, baseline serum creatinine greater than 1.5 mg/dl, monitoring, leukemia, concurrent moderately to highly nephrotoxic drugs, and APACHE III scores of 40 or more.
Significant factors from the univariate analysis were then entered into a stepwise logistic regression analysis to determine independent predictive risk factors for vancomycin-induced nephrotoxicity. The factors selected, with their corresponding odds ratios and 95% confidence limits, were: concurrent therapy with moderately to highly nephrotoxic drugs (2.89; 1.76-4.74), APACHE III score of 40 or more (1.98; 1.16-3.38), and male gender (1.98; 1.04-2.71). Subgroup (monitored and non-monitored) analysis showed that male gender (OR = 1.87; 95% CI = 1.01, 3.45) and moderately to highly nephrotoxic drugs (OR = 4.58; 95% CI = 2.11, 9.94) were significant for nephrotoxicity in monitored patients, whereas only APACHE III score (OR = 2.67; 95% CI = 1.13, 6.29) was significant in non-monitored patients. The conclusion drawn from this study is that not every patient receiving vancomycin therapy needs frequent monitoring of vancomycin serum levels. Routine monitoring may be appropriate in patients with one or more of the identified risk factors, while low-risk patients need not be subjected to the discomfort and added cost of multiple blood sampling. Such prudent selection of patients to monitor may reduce costs to patients and the hospital.
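For the univariate stage described above, the crude association between a dichotomous factor and nephrotoxicity can be summarized by a 2x2-table odds ratio with a Wald confidence interval. The study's final estimates came from stepwise logistic regression, so this is only the simpler univariate analog, shown with hypothetical counts:

```python
import math

def odds_ratio_ci(a: int, b: int, c: int, d: int, z: float = 1.96):
    """Crude odds ratio for a 2x2 table with a Wald 95% CI.
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lower = math.exp(math.log(or_) - z * se)
    upper = math.exp(math.log(or_) + z * se)
    return or_, lower, upper

# Hypothetical counts, not taken from the study:
or_, lo, hi = odds_ratio_ci(40, 160, 55, 497)
```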

Abstract:

Many persons in the U.S. gain weight during young adulthood, and the prevalence of obesity has been increasing among young adults. Although obesity and physical inactivity are generally recognized as risk factors for coronary heart disease (CHD), the magnitude of their effect on risk may have been seriously underestimated because of failure to handle cigarette smoking adequately. Because cigarette smoking causes weight loss, physically inactive cigarette smokers may remain relatively lean because they smoke. We hypothesize that cigarette smoking modifies the association between weight gain during young adulthood and risk of coronary heart disease during middle age, and that the true effect of weight gain during young adulthood on CHD risk can be assessed only in persons who have never smoked cigarettes. Specifically, we hypothesize that weight gain during young adulthood is positively associated with risk of CHD during middle age in nonsmokers, but that the association is much smaller or absent entirely among cigarette smokers. The purpose of this study was to test this hypothesis. The population for analysis comprised 1,934 middle-aged, employed men whose average age at the baseline examination was 48.7 years. Information collected at the baseline examinations in 1958 and 1959 included recalled weight at age 20, present weight, height, smoking status, and other CHD risk factors. To decrease the effect of intraindividual variation, the mean values of the 1958 and 1959 baseline examinations were used in analyses.
Change in body mass index (ΔBMI) during young adulthood was the primary exposure variable, measured as BMI at baseline (kg/m²) minus BMI at age 20 (kg/m²). Proportional hazards regression analysis was used to generate relative risks of CHD mortality by category of ΔBMI and cigarette smoking status after adjustment for age, family history of CVD, major organ system disease, BMI at age 20, and number of cigarettes smoked per day. No adjustment was made for systolic blood pressure or total serum cholesterol, as these were regarded as intervening variables. Vital status was known for all men on the 25th anniversary of their baseline examinations; 705 deaths (including 319 CHD deaths) occurred over 40,136 person-years of experience. ΔBMI was positively associated with risk of CHD mortality in never-smokers, but not in ever-smokers (p for interaction = 0.067). For never-smokers with ΔBMI classified as stable, low gain, moderate gain, and high gain, adjusted relative risks were 1.00, 1.62, 1.61, and 2.78, respectively (p for trend = 0.010). For ever-smokers in the same categories, adjusted relative risks were 1.00, 0.74, 1.07, and 1.06, respectively (p for trend = 0.422). These results support the research hypothesis that cigarette smoking modifies the association between weight gain and CHD mortality, and suggest that current estimates of the effect of obesity and physical inactivity on coronary mortality risk may be seriously underestimated due to inadequate handling of cigarette smoking.
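The crude mortality rates implied by the follow-up figures above can be recovered directly from the event counts and person-time (about 7.9 CHD deaths and 17.6 all-cause deaths per 1,000 person-years):

```python
def rate_per_1000_py(events: int, person_years: float) -> float:
    """Crude incidence rate per 1,000 person-years of follow-up."""
    return events / person_years * 1000

chd_rate = rate_per_1000_py(319, 40_136)  # CHD deaths over 25 years
all_rate = rate_per_1000_py(705, 40_136)  # deaths from all causes
```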

Abstract:

A cohort of 418 United States Air Force (USAF) personnel from over 15 different bases deployed to Morocco in 1994. This first-of-its-kind study had two primary goals: to determine whether the USAF was medically prepared to deploy under its changing mission in the new world order, and to evaluate factors that might improve or degrade USAF medical readiness. The mean length of deployment was 21 days. The cohort was 95% male, 86% enlisted, 65% married, and 78% white. The study revealed major deficiencies, indicating that the USAF medical readiness posture has not fully adapted to its new mission requirements. Lack of required logistical items (e.g., mosquito nets, rain boots, DEET insecticide cream) revealed a low state of preparedness. Most notably, 82.5% (95% CI = 78.4, 85.9) did not have permethrin-pretreated mosquito nets and 81.0% (95% CI = 76.8, 84.6) lacked mosquito net poles. Additionally, 18% were deficient on vaccinations and 36% had not received a tuberculin skin test. Excluding injections, overall compliance with preventive medicine requirements had a mean frequency of only 50.6% (95% CI = 45.36, 55.90). Several factors had a positive impact on compliance with logistical requirements, the most prominent being receipt of a medical intelligence briefing from USAF Public Health. After adjustment for mobility status and age, individuals who received a briefing were 17.2 (95% CI = 4.37, 67.99) times more likely to have received an immunoglobulin shot and 4.2 (95% CI = 1.84, 9.45) times more likely to have started their antimalarial prophylaxis at the proper time. Being on mobility status had the second strongest positive effect on medical readiness.
When mobility and briefing were included in the models, personnel on mobility status were 2.6 (95% CI = 1.19, 5.53) times as likely to have DEET insecticide and 2.2 (95% CI = 1.16, 4.16) times as likely to have had a TB skin test. Five recommendations to improve USAF medical readiness were outlined: upgrade base-level logistical support, improve medical intelligence messages, include medical requirements on travel orders, place more personnel on mobility status or deploy only personnel on mobility status, and conduct research aimed at capitalizing on the strong effect of predeployment briefings. Since this was the first study of its kind, further studies should be performed in different geographic theaters to assess medical readiness and establish acceptable compliance levels for the USAF.

Abstract:

Objective. The aim of this study was to assess the independent risk of hepatitis C virus (HCV) infection in the development of hepatocellular carcinoma (HCC). The independent risk of hepatitis B virus (HBV), its interaction with HCV, and associations with other risk factors were also examined. Methods. A hospital-based case-control study was conducted between January 1994 and December 1995. We enrolled 115 pathologically confirmed HCC patients and 230 non-liver-cancer controls matched by age (±5 years), gender, and year of diagnosis. Both cases and controls were recruited from The University of Texas M. D. Anderson Cancer Center in Houston. Risk factor data were collected through personal interviews, and blood samples were tested for HCV and HBV markers. Univariate and multivariate analyses were performed using conditional logistic regression. Results. The prevalence of anti-HCV positivity was 25.2% in HCC cases compared to 3.0% in controls. Univariate analysis showed that anti-HCV, HBsAg, alcohol drinking, and cigarette smoking were significantly associated with HCC, whereas family history of cancer, occupational chemical exposure, and use of oral contraceptives were not. Multivariate analysis revealed a matched odds ratio (OR) of 10.1 (95% CI 3.7-27.4) for anti-HCV and an OR of 11.9 (95% CI 2.5-57.5) for HBsAg. Dual infection with HCV and HBV, however, yielded an OR of only 13.9 (95% CI 1.3-150.6). The estimated population attributable risk percent was 23.4% for HCV, 12.6% for HBV, and 5.3% for both viruses. Ever drinking alcohol was positively associated with HCC, especially daily drinking (matched OR 5.7, 95% CI 2.1-15.6). There was, however, no significant increase in HCC risk among smokers compared to nonsmokers.
The mean age of HCC patients was significantly younger in the HBV(+) group and in the HCV(+)/HBV(+) group than among HCC patients with no viral markers. Past histories of blood transfusion, acupuncture, tattooing, and intravenous drug use were significantly more common in the HCV(+) group and the HBV(+)/HCV(+) group than among HCC patients with no viral markers. Forty percent of the HCC patients were pathologically or clinically diagnosed with liver cirrhosis; anti-HCV positivity (OR = 3.6, 95% CI 1.5-8.9) and alcohol drinking (OR = 2.7, 95% CI 1.1-6.7), but not HBsAg, were the major risk factors for liver cirrhosis in HCC patients. Conclusion. Both hepatitis B virus and hepatitis C virus were independent risk factors for HCC. There was not enough evidence to determine the interaction between the two viruses. Only daily alcohol drinkers showed increased risk of HCC compared to nondrinkers.
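The attributable risk figures above can be approximated from the reported quantities with Miettinen's case-based formula, PAR% = p_c * (OR - 1) / OR, where p_c is the exposure prevalence among cases. With p_c = 25.2% and OR = 10.1 for anti-HCV this gives about 22.7%, close to but not exactly the reported 23.4% (which was presumably computed from the full adjusted model; the formula choice here is our assumption):

```python
def par_percent(prev_in_cases: float, odds_ratio: float) -> float:
    """Population attributable risk percent via Miettinen's case-based
    formula: PAR% = p_c * (OR - 1) / OR * 100."""
    return prev_in_cases * (odds_ratio - 1) / odds_ratio * 100

hcv_par = par_percent(0.252, 10.1)  # about 22.7%
```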

Abstract:

A graphing method was developed and tested to estimate gestational age pre- and postnatally in a consistent manner, for epidemiological research and clinical purposes, in fetuses/infants of women with few consistent prenatal estimators of gestational age. Each patient's available data were plotted on a single-page graph to give a comprehensive overview of that patient. A hierarchical classification of gestational age determination was then applied systematically, producing reasonable gestational age estimates. The method was tested for validity and reliability on 50 women who had known dates for their last menstrual period or dates of conception, multiple ultrasound examinations, and other gestational age estimating measures. The feasibility of the procedure was then tested on 1,223 low-income women with few gestational age estimators. The graphing method proved to have high inter- and intrarater reliability. It was quick, easy to use, inexpensive, and required no special equipment. The graphing method's estimate of gestational age for each infant was tested against the last-menstrual-period estimate using paired t-tests, F tests, and the Kolmogorov-Smirnov test of similar populations, yielding a 98 percent probability or better that the means and underlying populations were the same. Less than 5 percent of the infants' gestational ages were misclassified using the graphing method, much lower than the misclassification produced by ultrasound or neonatal examination estimates.
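The Kolmogorov-Smirnov comparison mentioned above measures the largest gap between the empirical distribution functions of the two sets of gestational-age estimates. A minimal pure-Python sketch of the statistic (not the study's actual software; the significance lookup is omitted):

```python
def ks_statistic(x, y):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum absolute
    difference between the empirical CDFs of samples x and y."""
    def ecdf(sample, v):
        # Fraction of the sample at or below the value v
        return sum(1 for t in sample if t <= v) / len(sample)
    grid = sorted(set(x) | set(y))
    return max(abs(ecdf(x, v) - ecdf(y, v)) for v in grid)

# Identical estimate sets give a statistic of 0 (populations "the same"):
d = ks_statistic([38.0, 39.5, 40.1], [38.0, 39.5, 40.1])
```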

Abstract:

Chronic fatigue syndrome (CFS) is a recently defined condition characterized by severe disabling fatigue that persists for a minimum of six months, together with a host of somatic and neurocognitive symptoms. Although conditions similar to CFS have been described in the medical literature for over 100 years, little is known about the epidemiology of CFS or of chronic fatigue generally. The San Francisco Fatigue Study was undertaken to describe the prevalence and characteristics of self-reported chronic fatigue and associated conditions in a diverse urban community. The study used a cross-sectional telephone survey of a random sample of households in San Francisco, followed by case/control interviews of fatigued and nonfatigued subjects. Respondents were classified as chronically fatigued (CF) if they reported severe fatigue lasting six months or longer, then further classified as having CFS-like illness if, based on self-reported information, their condition appeared to meet CFS case definition criteria. Subjects who reported idiopathic chronic fatigue not meeting CFS criteria were classified as having ICF-like illness. A total of 8,004 households were screened, yielding fatigue and demographic information on 16,970 residents. CF was reported by 635 persons, 3.7% of the study population. CFS-like illness was identified in 34 subjects (0.2%) and ICF-like illness in 259 subjects (1.6%). Logistic regression analysis indicated that prevalence odds ratios for CFS-like illness were significantly elevated for females compared to males (OR = 2.9), and for Blacks (OR = 2.9) and Native Americans (OR = 13.2) relative to Whites, but significantly lower for Asians (OR = 0.12). Above-average household income was protective for all categories of CF. CFS-like subjects reported more symptoms and were more severely disabled than ICF-like subjects, but the pattern of symptoms experienced by the two groups was similar.
In conclusion, unexplained chronic fatigue, including CFS-like illness, occurs in all sociodemographic groups but may be most prevalent among persons with lower incomes and in some racial minorities. Future studies that include clinical evaluation of incident cases of CFS and ICF are needed to further clarify the epidemiology of unexplained chronic fatigue in the population.
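The prevalence percentages above follow directly from the screening counts (635/16,970 for CF and 34/16,970 for CFS-like illness); a trivial sketch of the calculation:

```python
def prevalence_pct(cases: int, screened: int) -> float:
    """Point prevalence as a percentage of the screened population."""
    return cases / screened * 100

cf_prev = prevalence_pct(635, 16_970)   # chronic fatigue, about 3.7%
cfs_prev = prevalence_pct(34, 16_970)   # CFS-like illness, about 0.2%
```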

Abstract:

This exploratory study assessed the utility of substance abuse treatment as a strategy for preventing human immunodeficiency virus (HIV) transmission among injecting drug users (IDUs). Data analyzed in this study were collected in San Antonio, TX, from 1989 through 1995 using both qualitative and quantitative methods. Qualitative data included ethnographic interviews with 234 active IDUs; quantitative data included baseline risk assessments and HIV screening, plus follow-up interviews administered approximately six months later, for 823 IDUs participating in a federally funded AIDS community outreach demonstration project. Findings with particularly important implications for substance abuse treatment as an HIV prevention strategy for IDUs are listed below. (1) IDUs who wanted treatment were significantly more likely to be daily heroin users. (2) IDUs who wanted treatment were significantly more likely to have been in treatment previously. (3) IDUs who wanted treatment at baseline reported significantly higher levels of HIV risk than IDUs who did not. (4) IDUs who went to treatment between their baseline and follow-up interviews reported significantly higher levels of HIV risk at baseline than IDUs who did not. (5) IDUs who went to treatment between their baseline and follow-up interviews reported significantly greater decreases in injection-related HIV risk behaviors. (6) IDUs who went to treatment reported significantly greater decreases in sexual HIV risk behaviors than IDUs who did not. The study also noted factors that may limit the effectiveness of substance abuse treatment in reducing HIV risk among IDUs. Findings suggest that the impact of methadone maintenance on HIV risk behaviors among opioid-dependent IDUs may be limited by the negative way it is perceived by IDUs and by other elements of society.
One consequence of the negative perception of methadone maintenance held by many elements of society may be an unwillingness to provide public funding for an adequate number of methadone maintenance slots. Thus, many IDUs who would be willing to enter methadone maintenance are unable to do so, and many who do enter are forced to drop out prematurely.