18 results for United States. Nuclear Test Personnel Review.

in DigitalCommons@The Texas Medical Center


Relevance:

100.00%

Abstract:

Since the tragic events of September 11, 2001, the United States has made significant progress in bioterrorism and disaster preparedness; yet numerous research studies of nationwide hospital emergency response have found alarming shortcomings in surge capacity and in the training level of health care personnel responding to bioterrorism incidents. The primary goals of this research were to assess hospital preparedness for the threat of bioterrorist agents in the Southwest region of the United States and to provide recommendations for its improvement. Since little formal research has been published on the hospital preparedness of Oklahoma, Arizona, Texas, and New Mexico, this study focused specifically on measurable factors affecting the respective states' resources and level of preparedness, such as funding, surge capacity, and preparedness certification status.

Over 300 citations of peer-reviewed articles and 17 Web sites were reviewed, of which 57 reports met inclusion criteria. The results of the systematic review highlighted key gaps in the existing literature and key targets for future research, and identified strengths and weaknesses of hospital preparedness in the Southwest states compared with the national average.

Based on this research, the Southwest states' hospital systems are currently unable to fully meet presidential preparedness mandates for emergency and disaster care: the number of staffed beds per 1,000 population fluctuated around 1.5 across the states; funding for hospital preparedness lags behind hospital costs by millions of dollars; and the public health-hospital partnership in bioterrorism preparedness is quite weak, as evident in the lack of joint exercises and training. However, significant steps are being made, including ongoing hospital preparedness certification by the Joint Commission. Variations in preparedness levels among states suggest that geographic location may also influence a hospital's level of bioterrorism preparedness, tending to favor larger states such as Texas.

The recommendations for improving hospital bioterrorism preparedness are consistent with the existing literature and include establishing and maintaining solid partnerships between hospitals and public health agencies, conducting joint exercises and drills for health care personnel and key partners, improving state and federal funding specific to bioterrorism preparedness objectives, and providing ongoing training of clinical personnel on recognition of bioterrorism agents.
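As a rough illustration of the staffed-beds metric quoted above, the short sketch below computes staffed beds per 1,000 population; the bed counts and populations in it are made-up placeholders, not values from the study.

```python
# Minimal sketch of the staffed-beds-per-1,000-population figure cited above.
# The bed counts and populations below are hypothetical placeholders, not study data.
state_data = {
    "Texas": (35_000, 23_500_000),   # (staffed beds, population) - hypothetical
    "Oklahoma": (5_400, 3_600_000),  # hypothetical
}

for state, (beds, population) in state_data.items():
    beds_per_1000 = beds / population * 1_000
    print(f"{state}: {beds_per_1000:.2f} staffed beds per 1,000 population")
```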

Relevance:

100.00%

Abstract:

A cohort of 418 United States Air Force (USAF) personnel from more than 15 different bases deployed to Morocco in 1994. This was the first study of its kind and was designed with two primary goals: to determine whether the USAF was medically prepared to deploy with its changing mission in the new world order, and to evaluate factors that might improve or degrade USAF medical readiness. The mean length of deployment was 21 days. The cohort was 95% male, 86% enlisted, 65% married, and 78% white.

This study shows major deficiencies indicating that the USAF medical readiness posture has not fully responded to its new mission requirements. Lack of required logistical items (e.g., mosquito nets, rain boots, DEET insecticide cream) revealed a low state of preparedness. The most notable deficiency was that 82.5% (95% CI = 78.4, 85.9) did not have permethrin-pretreated mosquito nets and 81.0% (95% CI = 76.8, 84.6) lacked mosquito net poles. Additionally, 18% were deficient on vaccinations and 36% had not received a tuberculin skin test. Excluding injections, overall compliance with preventive medicine requirements had a mean frequency of only 50.6% (95% CI = 45.36, 55.90).

Several factors had a positive impact on compliance with logistical requirements. The most prominent was receiving a medical intelligence briefing from USAF Public Health. After adjustment for mobility and age, individuals who underwent a briefing were 17.2 (95% CI = 4.37, 67.99) times more likely to have received an immunoglobulin shot and 4.2 (95% CI = 1.84, 9.45) times more likely to start their antimalarial prophylaxis at the proper time. Being on mobility status had the second strongest positive effect on medical readiness. When mobility and briefing were included in models, personnel on mobility were 2.6 (95% CI = 1.19, 5.53) times as likely to have DEET insecticide and 2.2 (95% CI = 1.16, 4.16) times as likely to have had a TB skin test.

Five recommendations to improve the medical readiness of the USAF were outlined: upgrade base-level logistical support, improve medical intelligence messages, include medical requirements on travel orders, place more personnel on mobility or deploy only personnel on mobility, and conduct research dedicated to capitalizing on the powerful effect of predeployment briefings. Since this is the first study of its kind, more studies should be performed in different geographic theaters to assess medical readiness and establish acceptable compliance levels for the USAF.
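The adjusted odds ratios above come from models controlling for mobility and age. Below is a minimal sketch, using simulated data and hypothetical variable names (briefing, mobility, got_ig_shot), of how such an adjusted OR and its 95% CI could be estimated with logistic regression; it is not the study's actual analysis or data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Toy data standing in for the 418 deployed personnel (hypothetical variables).
rng = np.random.default_rng(0)
n = 418
df = pd.DataFrame({
    "briefing": rng.integers(0, 2, n),   # received a medical intelligence briefing
    "mobility": rng.integers(0, 2, n),   # on mobility status
    "age": rng.integers(18, 55, n),
})
logit = 0.5 + 2.0 * df["briefing"] + 0.6 * df["mobility"] - 0.02 * df["age"]
df["got_ig_shot"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Adjusted model: briefing effect controlling for mobility and age.
model = smf.logit("got_ig_shot ~ briefing + mobility + age", data=df).fit(disp=0)
or_ci = np.exp(model.conf_int().loc["briefing"])
print(f"adjusted OR for briefing: {np.exp(model.params['briefing']):.2f} "
      f"(95% CI {or_ci[0]:.2f}-{or_ci[1]:.2f})")
```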

Relevance:

100.00%

Abstract:

The occurrence of waste pharmaceuticals has been identified and well documented in water sources throughout North America and Europe. Many studies have identified the occurrence of various pharmaceutical compounds in these waters. This project is an extensive review of the documented evidence of this occurrence published in the scientific literature, performed to determine whether this occurrence has a significant impact on the environment and public health. The review found that pharmaceuticals such as sex hormone drugs, antibiotics, and antineoplastic/cytostatic agents, as well as their metabolites, occur in water sources throughout the United States at levels high enough to have noticeable impacts on human health and the environment. The primary sources of these pharmaceuticals were determined to be wastewater effluent and solid wastes from sewage treatment plants, pharmaceutical manufacturing plants, healthcare and biomedical research facilities, and runoff from veterinary medicine applications (including aquaculture).

In addition, current policies of U.S. governmental agencies such as the Environmental Protection Agency (EPA), Food and Drug Administration (FDA), and Drug Enforcement Administration (DEA) were evaluated to determine whether they adequately control this issue. Specific recommendations for developing these EPA, FDA, and DEA policies have been made to mitigate, prevent, or eliminate the problem.

Other possible interventions, such as engineering controls, were also evaluated. These include improved treatment technologies, such as advances in the wastewater treatment processes used by conventional sewage treatment and pharmaceutical manufacturing plants. Administrative controls, such as the use of “green chemistry” in drug synthesis and design, were also explored and evaluated as possible alternatives. Specific recommendations for incorporating these engineering and administrative controls into the applicable EPA, FDA, and DEA policies have also been made.

Relevance:

100.00%

Abstract:

Gender and racial/ethnic disparities in colorectal cancer (CRC) screening have been observed and associated with income status, education level, treatment, and late diagnosis. According to the American Cancer Society, among both males and females CRC is the third most frequently diagnosed type of cancer and accounts for 10% of cancer deaths in the United States. Differences in CRC test use have been documented in relation to access to health care, demographics, and health behaviors, but few studies have examined the correlates of CRC screening test use by gender. The present study examined the prevalence of CRC screening test use and assessed whether disparities are explained by gender and racial/ethnic differences. To assess these associations, the study used a cross-sectional design and examined the distribution of the covariates for gender and racial/ethnic group differences using the chi-square statistic. Logistic regression was used to estimate the prevalence odds ratio and to adjust for the confounding effects of the covariates.

Results indicated disparities in CRC screening test use, with statistically significant differences in the prevalence of both FOBT and endoscopy screening between genders (χ², p ≤ 0.003). Females had a lower prevalence of endoscopic colorectal cancer screening than males when adjusting for age and education (OR 0.88, 95% CI 0.82–0.95). However, no statistically significant difference was found between racial/ethnic groups (χ², p ≤ 0.179) after adjusting for age, education, and gender. For both FOBT and endoscopy screening, non-Hispanic Blacks and Hispanics had a lower prevalence of screening than non-Hispanic Whites. In the multivariable regression model, the gender disparities could largely be explained by age, income status, education level, and marital status. Overall, individuals aged 70–79 years who were married, had some college education, and had an income greater than $20,000 had a higher prevalence of colorectal cancer screening test use within gender and racial/ethnic groups.
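A minimal sketch of the two analysis steps named above, a chi-square comparison followed by a covariate-adjusted logistic regression, is shown below using simulated data and hypothetical variable names; it is not the study's dataset or code.

```python
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency
import statsmodels.formula.api as smf

# Toy survey data with hypothetical variables.
rng = np.random.default_rng(1)
n = 5_000
df = pd.DataFrame({
    "female": rng.integers(0, 2, n),
    "age": rng.integers(50, 80, n),
    "some_college": rng.integers(0, 2, n),
})
p = 1 / (1 + np.exp(-(-0.3 - 0.13 * df["female"] + 0.01 * df["age"] + 0.2 * df["some_college"])))
df["endoscopy"] = rng.binomial(1, p)

# Unadjusted association: chi-square test on the 2x2 table of gender by screening.
chi2, pval, dof, expected = chi2_contingency(pd.crosstab(df["female"], df["endoscopy"]))
print(f"chi-square p-value: {pval:.3f}")

# Prevalence odds ratio for female sex, adjusted for age and education.
fit = smf.logit("endoscopy ~ female + age + some_college", data=df).fit(disp=0)
print(f"adjusted OR (female vs male): {np.exp(fit.params['female']):.2f}")
```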

Relevance:

100.00%

Abstract:

There have been three medical malpractice insurance "crises" in the United States over roughly the past three decades (Poisson, 2004, p. 759-760). Each crisis is characterized by a number of common features, including rapidly increasing medical malpractice insurance premiums, cancellation of existing insurance policies, and a decreased willingness of insurers to offer or renew medical malpractice insurance policies (Poisson, 2004, p. 759-760). Given the recurrent crises, many sources argue that medical malpractice insurance coverage has become too expensive a commodity, one that many physicians simply cannot afford (U.S. Department of Health and Human Services [HHS], 2002, p. 1-2; Physician Insurers Association of America [PIAA], 2003, p. 1; Jackiw, 2004, p. 506; Glassman, 2004, p. 417; Padget, 2003, p. 216).

The prohibitively high cost of medical liability insurance is said to limit the geographical areas and medical specializations in which physicians are willing to practice. As a result, the high cost of medical liability insurance is ultimately said to affect whether people have access to health care services.

In an effort to control the medical liability insurance crises, and to preserve or restore people's access to health care, every state in the United States has passed "at least some laws designed to reduce medical malpractice premium rates" (GAO, 2003, p. 5-6). More recently, however, the United States has witnessed a push to implement federal reform of the medical malpractice tort system. Accordingly, this project focuses on federal medical malpractice tort reform. The project was designed to investigate the following specific question: Do the federal medical malpractice tort reform bills that passed in the House of Representatives between 1995 and 2005 differ with respect to their principal features? To answer this question, the text of the bills, law review articles, and reports from government and private agencies were analyzed. A matrix was also compiled to concisely summarize the principal features of the proposed federal medical malpractice tort reform bills. Insight gleaned from this investigation and matrix compilation informs discussion about the potential ramifications of enacting federal medical malpractice tort reform legislation.

Relevance:

100.00%

Abstract:

Public health efforts in the United States began with legislative actions to enhance food safety and ensure pure drinking water. Additional policy initiatives during the early 20th century helped organize and coordinate relief efforts for victims of natural disasters. By the 1950s the federal government had expanded its role in providing better health and safety to communities, and its disaster relief activities became more structured. A rise in terrorism-related incidents during the late 1990s prompted new, proactive policy directions. Traditional policy and program efforts for rescue, recovery, and relief changed focus to include disaster preparedness and countermeasures against terrorism.

The study took a holistic approach, analyzing all major disaster-related policies and programs with regard to their structure, process, and outcome. The study determined that the United States has a strong disaster preparedness agenda, that appropriate programs are in place with adequate policy support, and that the country is prepared to meet all possible security challenges that may arise in the future. The man-made disaster of September 11th gave a major thrust to improving security and enhancing the preparedness of the country. These new efforts required large additional funding from the federal government. Most existing preparedness programs at the local and national levels are run with federal funds, which are insufficient in some cases. This discrepancy arises from the fact that federal funding for disaster preparedness programs is not currently allocated by the level of risk to individual states or according to the risks that can be assigned to critical infrastructures across the country. However, the increased role of the federal government in the public health affairs of the states is unusual and runs counter to the spirit of the Constitution, under which sovereignty is divided between the federal government and the states. There is also a shortage of public health manpower for disaster preparedness activities, despite some remarkable progress following the September 11th disaster.

The study found a significant improvement in knowledge, and a limited number of studies showed improvement in skills, increased confidence, and improved message-mapping. Among healthcare and allied health care professionals, short-term training on disaster preparedness increased knowledge and improved personal protective equipment use, with some limited improvement in confidence and skills. However, due to the heterogeneity of these studies, the results of this systematic review should be interpreted with caution.

Relevance:

100.00%

Abstract:

A nested case-control study design was used to investigate the relationship between radiation exposure and brain cancer risk in the United States Air Force (USAF). The cohort consisted of approximately 880,000 men with at least 1 year of service between 1970 and 1989. Two hundred thirty cases were identified from hospital discharge records with a diagnosis of primary malignant brain tumor (International Classification of Diseases, 9th revision, code 191). Four controls were exactly matched with each case on year of age and race using incidence density sampling. Potential career summary extremely low frequency (ELF) and microwave-radiofrequency (MWRF) radiation exposures were based on the duration in each occupation and an intensity score assigned by an expert panel. Ionizing radiation (IR) exposures were obtained from personal dosimetry records.

Relative to the unexposed, the overall age-race adjusted odds ratio (OR) for ELF exposure was 1.39 (95 percent confidence interval [CI] 1.03-1.88). A dose-response relationship was not evident. The same was true for MWRF, although the OR was 1.59 (95 percent CI 1.18-2.16). Excess risk was not found for IR exposure (OR = 0.66, 95 percent CI 0.26-1.72).

Increasing socioeconomic status (SES), as identified by military pay grade, was associated with elevated brain tumor risk (officers vs. enlisted personnel, age-race adjusted OR = 2.11, 95 percent CI 1.98-3.01; senior officers vs. all others, age-race adjusted OR = 3.30, 95 percent CI 2.0-5.46). SES proved to be an important confounder of the brain tumor risk associated with ELF and MWRF exposure. For ELF, the age-race-SES adjusted OR was 1.28 (95 percent CI 0.94-1.74), and for MWRF, the age-race-SES adjusted OR was 1.39 (95 percent CI 1.01-1.90).

These results indicate that employment in Air Force occupations with potential electromagnetic field exposures is weakly, though not significantly, associated with increased risk for brain tumors. SES appeared to be the most consistent brain tumor risk factor in the USAF cohort. Other investigators have suggested that an association between brain tumor risk and SES may arise from differential access to medical care; in the USAF cohort, however, health care is universally available. This study suggests that some factor other than access to medical care must underlie the association between SES and brain tumor risk.
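Matched case-control data of this kind (one case and four controls per set, matched on age and race) is commonly analyzed with conditional logistic regression, which absorbs the matching factors. The sketch below illustrates that approach on simulated data with a hypothetical exposure score; the abstract does not state the authors' exact estimation method, so this is an assumed analysis, not theirs.

```python
import numpy as np
import pandas as pd
from statsmodels.discrete.conditional_models import ConditionalLogit

# Simulated matched sets: 230 cases, each with 4 matched controls (hypothetical data).
rng = np.random.default_rng(2)
rows = []
for set_id in range(230):
    for is_case in (1, 0, 0, 0, 0):                         # 1 case + 4 matched controls
        elf_score = rng.gamma(2.0, 1.0) + 0.3 * is_case     # career ELF exposure score
        rows.append({"set_id": set_id, "case": is_case, "elf_score": elf_score})
df = pd.DataFrame(rows)

# Conditioning on the matched set absorbs the age/race matching factors.
model = ConditionalLogit(df["case"], df[["elf_score"]], groups=df["set_id"])
result = model.fit()
or_per_unit = float(np.exp(np.asarray(result.params)[0]))
print(f"OR per unit ELF exposure score: {or_per_unit:.2f}")
```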

Relevance:

100.00%

Abstract:

Background and aim. Hepatitis B virus (HBV) and hepatitis C virus (HCV) co-infection is associated with an increased risk of cirrhosis, decompensation, hepatocellular carcinoma, and death. Yet there are sparse epidemiologic data on co-infection in the United States. Therefore, the aim of this study was to determine the prevalence and determinants of HBV co-infection in a large United States population of HCV patients.

Methods. The National Veterans Affairs HCV Clinical Case Registry was used to identify patients tested for HCV during 1997–2005. HCV exposure was defined as two positive HCV tests (antibody, RNA, or genotype) or one positive test combined with an ICD-9 code for HCV. HCV infection was defined as a positive HCV RNA or genotype test only. HBV exposure was defined as a positive test for hepatitis B core antibody, hepatitis B surface antigen, HBV DNA, hepatitis B e antigen, or hepatitis B e antibody. HBV infection was defined as a positive test for hepatitis B surface antigen, HBV DNA, or hepatitis B e antigen only, within one year before or after the HCV index date. The prevalence of exposure to HBV in patients with HCV exposure and the prevalence of HBV infection in patients with HCV infection were determined. Multivariable logistic regression was used to identify demographic and clinical determinants of co-infection.

Results. Among 168,239 patients with HCV exposure, 58,415 had HBV exposure, for a prevalence of 34.7% (95% CI 34.5–35.0). Among 102,971 patients with HCV infection, 1,431 had HBV co-infection, for a prevalence of 1.4% (95% CI 1.3–1.5). The independent determinants of an increased risk of HBV co-infection were male sex, positive HIV status, a history of hemophilia, sickle cell anemia, or thalassemia, a history of blood transfusion, and cocaine and other drug use. Age >50 years and Hispanic ethnicity were associated with a decreased risk of HBV co-infection.

Conclusions. This is the largest cohort study in the United States on the prevalence of HBV co-infection. Among veterans with HCV, exposure to HBV is common (∼35%), but the prevalence of HBV co-infection is relatively low (1.4%). The risk of co-infection is increased with younger age, male sex, HIV, and drug use, and decreased in Hispanics.
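The prevalence figures above can be reproduced directly from the reported counts. The sketch below does that arithmetic and attaches a 95% confidence interval; the Wilson interval method is an assumption, since the abstract does not say which method was used.

```python
from statsmodels.stats.proportion import proportion_confint

def prevalence_with_ci(cases: int, total: int):
    """Return the point prevalence and a 95% Wilson confidence interval."""
    prev = cases / total
    lo, hi = proportion_confint(cases, total, alpha=0.05, method="wilson")
    return prev, lo, hi

# HBV exposure among HCV-exposed patients: 58,415 / 168,239 (~34.7%)
print(prevalence_with_ci(58_415, 168_239))

# HBV co-infection among HCV-infected patients: 1,431 / 102,971 (~1.4%)
print(prevalence_with_ci(1_431, 102_971))
```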

Relevance:

100.00%

Abstract:

Dengue fever is a disease restricted to humans and non-human primates, characterized by high fever, thrombocytopenia, retro-orbital pain, and severe joint and muscle pain. Over 40% of the world population is at risk. The recent re-emergence of dengue outbreaks in Texas and Florida following the re-introduction of competent Aedes mosquito vectors into the United States has raised growing concerns about the potential for increased dengue fever outbreaks throughout the southern United States. Current deficiencies in vector control, active surveillance, and awareness among medical practitioners may contribute to delays in recognizing and controlling a dengue virus outbreak. Previous studies have shown links between low-income census tracts, high population density, and dengue fever within the United States. Areas of low income and high population density that correlate with the distribution of Aedes mosquitoes carry a higher potential for outbreaks.

In this retrospective ecologic study, nine maps were generated to model U.S. census tracts' potential to sustain dengue virus transmission if the virus were introduced into the area. Variables in the model included the presence of a competent vector in the county and census-tract percent poverty and population density. Thirty states, 1,188 counties, and 34,705 census tracts were included in the analysis. Among counties with Aedes mosquito infestation, census tracts were ranked as having high, moderate, or low potential for sustained transmission of the virus. High-risk census tracts were identified as areas having the vector, ≥20% poverty, and ≥500 persons per square mile. Census tracts with the vector present and either ≥20% poverty or ≥500 persons per square mile were considered moderate risk. Census tracts with the vector present but <20% poverty and <500 persons per square mile were considered low risk. Furthermore, counties were characterized as moderate risk if 50% or more of their census tracts were rated high or moderate risk, and as high risk if 25% or more were rated high risk. Extreme-risk counties, which were primarily concentrated in Texas and Mississippi, were those with 50% or more of their census tracts ranked as high risk. Mapping geographic areas with the potential to sustain dengue virus transmission will support surveillance efforts and assist medical personnel in recognizing potential cases.
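The tract-level ranking rules above translate directly into a small classification function. The sketch below implements them with hypothetical column names and sample rows; only the thresholds (vector presence, ≥20% poverty, ≥500 persons per square mile) come from the abstract.

```python
import pandas as pd

def classify_tract(vector_present: bool, pct_poverty: float, pop_density: float) -> str:
    """Apply the tract risk rules stated in the abstract (column names are hypothetical)."""
    if not vector_present:
        return "not ranked"          # tracts without a competent Aedes vector are not ranked
    high_poverty = pct_poverty >= 20.0
    high_density = pop_density >= 500.0
    if high_poverty and high_density:
        return "high"
    if high_poverty or high_density:
        return "moderate"
    return "low"

# Hypothetical sample tracts.
tracts = pd.DataFrame({
    "vector_present": [True, True, True, False],
    "pct_poverty":    [25.0, 10.0, 18.0, 30.0],
    "pop_density":    [800.0, 900.0, 200.0, 1200.0],
})
tracts["risk"] = [
    classify_tract(v, p, d)
    for v, p, d in zip(tracts["vector_present"], tracts["pct_poverty"], tracts["pop_density"])
]
print(tracts)
```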

Relevance:

100.00%

Abstract:

Background: Surgical site infections (SSIs) after abdominal surgeries account for approximately 26% of all reported SSIs. The Centers for Disease Control and Prevention (CDC) defines three types of SSI: superficial incisional, deep incisional, and organ/space. Preventing SSIs has become a national focus. This dissertation assesses several associations with the individual types of SSI in patients who have undergone colon surgery.

Methods: Data for this dissertation were obtained from the American College of Surgeons' National Surgical Quality Improvement Program (NSQIP); major colon surgeries occurring between 2007 and 2009 were identified in the database. NSQIP data include more than 50 preoperative and 30 intraoperative factors; 40 postoperative occurrences are collected over a follow-up period of 30 days from surgery. Initially, four individual logistic regressions were modeled to compare the associations between risk factors and each of the SSI groups: superficial, deep, organ/space, and a composite of any single SSI. A second analysis used polytomous regression to assess simultaneously the associations between risk factors and the different types of SSI, and to formally test the differences in effect estimates of 13 common risk factors for SSI. The final analysis explored the association between venous thromboembolism (VTE) and the different types of SSI and risk factors.

Results: A total of 59,365 colon surgeries were included in the study. Overall, 13% of colon cases developed a single type of SSI: 8% superficial, 1.4% deep, and 3.8% organ/space. The first article identifies the unique set of risk factors associated with each of the four SSI models. Distinct risk factors for superficial SSIs included alcohol use, chronic obstructive pulmonary disease, dyspnea, and diabetes. Organ/space SSIs were uniquely associated with disseminated cancer, preoperative dialysis, preoperative radiation treatment, bleeding disorder, and prior surgery. Risk factors that were significant in all models had different effect estimates. The second article assesses the 13 common SSI risk factors simultaneously across the three types of SSI using polytomous regression. Each risk factor was then formally tested for effect heterogeneity: if the test was significant, the final model allowed the effect estimates for that risk factor to vary across the types of SSI; if not, the effect estimate was held constant across SSI types using the aggregate SSI value. The third article explored the relationship between VTE and the individual types of SSI and risk factors. The overall incidence of VTE after the 59,365 colon cases was 2.4%. All three types of SSI and several risk factors were independently associated with the development of VTE.

Conclusions: The risk factors associated with each type of SSI differed in patients who underwent colon surgery. Each model had a unique cluster of risk factors. Several risk factors, including increased BMI, duration of surgery, wound class, and laparoscopic approach, were significant across all four models, but no statistical inferences can be made about their different effect estimates. These results suggest that aggregating SSIs may misattribute and hide true associations with risk factors. Using polytomous regression to assess multiple risk factors across the multiple types of SSI, this study identified several risk factors with significant effect heterogeneity across the three types of SSI, challenging the use of aggregate SSI outcomes. The third article establishes the strong association between VTE and the three types of SSI. Clinicians understand the difference between superficial, deep, and organ/space SSIs; our results indicate that they should be considered individually in future studies.
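Polytomous (multinomial) regression as described above fits one set of coefficients per SSI type against a common "no SSI" reference, which is what allows effect estimates to differ across SSI types. Below is a minimal sketch on simulated data with hypothetical predictors; it is not the NSQIP analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Toy data with hypothetical predictors.
rng = np.random.default_rng(3)
n = 4_000
df = pd.DataFrame({
    "bmi": rng.normal(29, 6, n),
    "op_minutes": rng.normal(160, 50, n),
    "laparoscopic": rng.integers(0, 2, n),
})
# Outcome: 0 = no SSI, 1 = superficial, 2 = deep, 3 = organ/space (toy probabilities).
df["ssi_type"] = rng.choice([0, 1, 2, 3], size=n, p=[0.87, 0.08, 0.014, 0.036])

# Multinomial logit: one column of coefficients per non-reference SSI type.
X = sm.add_constant(df[["bmi", "op_minutes", "laparoscopic"]])
res = sm.MNLogit(df["ssi_type"], X).fit(disp=0)

# Exponentiate coefficients to view them as odds ratios per SSI type.
print(np.exp(res.params))
```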

Relevance:

100.00%

Abstract:

The purpose of this research project is to determine whether there is a cost/benefit to allocating financial and other company resources to improve environmental, health, and safety (EHS) performance beyond that which is required by law. Whether a company benefits from spending dollars to achieve EHS performance beyond legal compliance is an important question for the chemical manufacturing industry in the United States because of the voluminous and complex legal requirements that drive EHS expenditures. The cost/benefit question has practical significance because many U.S. chemical manufacturing companies base their EHS management strategies on merely achieving and maintaining compliance with legal requirements, when in reality this strategy may be a higher-cost way of managing EHS practices. This difference in EHS management strategy is investigated to determine whether managing EHS to achieve performance beyond that required by law results in a greater benefit to companies in the U.S. chemical manufacturing sector.

Relevance:

100.00%

Abstract:

BACKGROUND: The Enterococcus faecium genogroup referred to as clonal complex 17 (CC17) seems to possess multiple determinants that increase its ability to survive and cause disease in nosocomial environments.

METHODS: Using 53 clinical and geographically diverse US E. faecium isolates dating from 1971 to 1994, we determined the multilocus sequence type; the presence of 16 putative virulence genes (hyl(Efm), esp(Efm), and fms genes); resistance to ampicillin (AMP) and vancomycin (VAN); and high-level resistance to gentamicin and streptomycin.

RESULTS: Overall, 16 different sequence types (STs), mostly CC17 isolates, were identified in 9 different regions of the United States. The earliest CC17 isolates were part of an outbreak that occurred in 1982 in Richmond, Virginia. The characteristics of CC17 isolates included increased resistance to AMP, the presence of hyl(Efm) and esp(Efm), emergence of resistance to VAN, and the presence of at least 13 of 14 fms genes. Eight of 41 early isolates with resistance to AMP, however, were not in CC17.

CONCLUSIONS: Although not all early US AMP-resistant isolates were clonally related, E. faecium CC17 isolates have been circulating in the United States since at least 1982 and appear to have progressively acquired additional virulence and antibiotic resistance determinants, perhaps explaining the recent success of this species in the hospital environment.

Relevance:

100.00%

Abstract:

Up to 60% of U.S. visitors to Mexico develop traveler's diarrhea (TD), mostly due to enterotoxigenic Escherichia coli (ETEC) strains that produce heat-labile (LT) and/or heat-stable (ST) enterotoxins. Distinct single-nucleotide polymorphisms (SNPs) within the interleukin-10 (IL-10) promoter have been associated with high, intermediate, or low production of IL-10. We conducted a prospective study to investigate the association between SNPs in the IL-10 promoter and the occurrence of TD in ETEC LT-exposed travelers. Sera from U.S. travelers to Mexico collected on arrival and departure were studied for ETEC LT seroconversion using cholera toxin as the antigen. Pyrosequencing was performed to genotype the IL-10 SNPs. Stools from subjects who developed diarrhea were also studied for other enteropathogens. One hundred twenty-one of 569 (21.3%) travelers seroconverted to ETEC LT, and among them 75 (62%) developed diarrhea. Symptomatic seroconversion was more common in subjects who carried a genotype producing high levels of IL-10: it was seen in 83% of subjects with the GG genotype versus 54% of subjects with the AA genotype at IL-10 gene position -1082 (P = 0.02), in 71% of those with the CC genotype versus 33% of those with the TT genotype at position -819 (P = 0.005), and in 71% of those with the CC genotype versus 38% of those with the AA genotype at position -592 (P = 0.02). Travelers with the GCC haplotype were more likely to have symptomatic seroconversion than those with the ATA haplotype (71% versus 38%; P = 0.002). Travelers genetically predisposed to produce high levels of IL-10 were more likely to experience symptomatic ETEC TD.
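The genotype comparisons above are simple two-by-two contrasts. The sketch below runs a chi-square test on a hypothetical table whose cell counts roughly reproduce the 83% versus 54% split at position -1082; the actual counts are not given in the abstract, so these numbers are placeholders for illustration only.

```python
from scipy.stats import chi2_contingency

# Hypothetical counts:        symptomatic  asymptomatic
table = [
    [25, 5],    # GG genotype (~83% symptomatic)
    [19, 16],   # AA genotype (~54% symptomatic)
]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p:.3f}")
```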

Relevance:

100.00%

Abstract:

This research examines the prevalence of alcohol and illicit substance use in the United States and Mexico and the associated socio-demographic characteristics. The sources of data are public domain data from the U.S. National Household Survey on Drug Abuse, 1988 (n = 8,814), and the Mexican National Survey of Addictions, 1988 (n = 12,579). In addition, this study discusses methodologic issues in cross-cultural and cross-national comparison of behavioral and epidemiologic data from population-based samples. The extent to which patterns of substance abuse vary among subgroups of the U.S. and Mexican populations is assessed, as well as the comparability and equivalence of measures of alcohol and drug use in these national samples.

The prevalence of alcohol use was somewhat similar in the two countries for all three measures of use: lifetime use was 85.0% in the U.S. versus 72.6% in Mexico, past-year use was 68.1% versus 47.7%, and past-year heavy use was 39.6% versus 45.8%. The use of illegal substances varied widely between countries, with U.S. respondents reporting significantly higher levels of use than their Mexican counterparts: reported use of any illicit substance in lifetime and in the past year was 34.2% and 11.6% for the U.S. versus 3.3% and 0.6% for Mexico. Despite these differences in prevalence, two demographic characteristics, gender and age, were important correlates of use in both countries. Men in both countries were more likely than women to report use of alcohol and illicit substances. Generally speaking, a greater proportion of respondents aged 18 years or older reported alcohol use on all three measures than younger respondents, and a greater proportion of respondents between the ages of 18 and 34 years reported use of illicit substances during their lifetime and the past year than any other age group.

Additional substantive research investigating population-based samples and at-risk subgroups is needed to understand the underlying mechanisms of these associations. Further development of cross-culturally meaningful survey methods is warranted to validate comparisons of substance use across countries and societies.

Relevance:

100.00%

Abstract:

While clinical studies have shown a negative relationship between obesity and mental health in women, population studies have not shown a consistent association. However, many of these studies can be criticized regarding fatness level criteria, lack of control variables, and validity of the psychological variables.

The purpose of this research was to elucidate the relationship between fatness level and mental health in United States women using data from the First National Health and Nutrition Examination Survey (NHANES I), which was conducted on a national probability sample from 1971 to 1974. Mental health was measured by the General Well-Being Schedule (GWB), and fatness level was determined by the sum of the triceps and subscapular skinfolds. Women were categorized as lean (15th percentile or less), normal (16th to 84th percentiles), or obese (85th percentile or greater).

A conceptual framework was developed which identified the variables of age, race, marital status, socioeconomic status (education), employment status, number of births, physical health, weight history, and perception of body image as important to the fatness level-GWB relationship. Multiple regression analyses were performed separately for whites and blacks with GWB as the response variable, and fatness level, age, education, employment status, number of births, marital status, and health perception as predictor variables. In addition, 2- and 3-way interaction terms for leanness, obesity and age were included as predictor variables. Variables related to weight history and perception of body image were not collected in NHANES I, and thus were not included in this study.

The results indicated that obesity was a statistically significant predictor of lower GWB in white women even when the other predictor variables were controlled. The full regression model identified the young, more educated, obese female as a subgroup with lower GWB, especially in blacks. These findings were not consistent with the previous non-clinical studies which found that obesity was associated with better mental health. The social stigma of being obese and the preoccupation of women with being lean may have contributed to the lower GWB in these women.
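The regression design described above, GWB regressed on fatness-level indicators, covariates, and leanness/obesity-by-age interaction terms, can be sketched as follows with simulated data and hypothetical variable names; the NHANES I variables and the separate white/black analyses are not reproduced here.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Toy data with hypothetical variables standing in for the survey measures.
rng = np.random.default_rng(4)
n = 3_000
df = pd.DataFrame({
    "lean": rng.integers(0, 2, n),
    "obese": rng.integers(0, 2, n),
    "age": rng.integers(25, 75, n),
    "education_yrs": rng.integers(8, 18, n),
    "employed": rng.integers(0, 2, n),
})
df.loc[df["lean"] == 1, "obese"] = 0          # fatness categories are mutually exclusive
df["gwb"] = (75 - 3 * df["obese"] + 0.05 * df["age"]
             + 0.4 * df["education_yrs"] + rng.normal(0, 10, n))

# Main effects plus obesity-by-age and leanness-by-age interaction terms.
fit = smf.ols("gwb ~ lean + obese + age + education_yrs + employed"
              " + lean:age + obese:age", data=df).fit()
print(fit.params.round(3))
```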