916 results for "public use of history"
Abstract:
Introduction. The National HIV Behavioral Surveillance (NHBS) system is a self-reported cross-sectional survey that monitors the spread of human immunodeficiency virus/acquired immune deficiency syndrome (HIV/AIDS). The 2004 survey asked whether the participant had received a free condom, whether he used it, and whether receiving a free condom made him more likely to use a condom. The aim of this cross-sectional study was to examine the Houston MSA sub-dataset to determine whether there was a self-expressed association between receiving a free condom and the likelihood of using a condom at next intercourse, and whether the strength of that association varied by demographic subgroup.

Methods. The Houston MSA 2004 NHBS had 502 participants, all men who have sex with men (MSM). The present analysis examined the answers to three questions: "In the past 12 months, have you received free condoms?", "Have you used any of the free condoms you received?", and "Did getting these free condoms make you more likely to use condoms during sex?"

Results. Of the 502 participants, 500 answered the question about receiving free condoms, 406 (81.2%) answered all three questions, and 204 (50.2%) answered "yes" to all three questions. In the subgroup analyses, Hispanics were significantly less likely, and men under 29 years of age significantly more likely, to report that their condom use behavior was influenced by receiving a free condom.

Conclusion. The effect of receipt of free condoms on the likelihood of condom use varies by demographic subgroup, but these potentially important preliminary findings will require further investigation to validate them and to explicate the possible underlying dynamics.
Abstract:
Objective. The risk of complications and death from pneumococcal infection is high among high-risk populations (i.e., those with chronic diseases such as diabetes or asthma), despite current immunization recommendations. The aim of this study was to evaluate the uptake of pneumonia vaccine in adults with and without diabetes or asthma by year of age, and whether immunization practices conform to policy recommendations.

Methods. Data were drawn from the 2005 Behavioral Risk Factor Surveillance System. Age-specific estimated counts and proportions of pneumonia vaccination status were computed. The association of socio-demographic factors with vaccination status was estimated from multiple logistic regression, and results are presented for adults (18-64 years) and the elderly (65 or older).

Results. Overall, 12.3% of adults and 61.5% of the elderly reported ever receiving pneumonia vaccine. Among the elderly, 66.8% of diabetics and 72.6% of asthmatics received the vaccine; among adults, 33.4% of diabetics and 21.6% of asthmatics did. These figures fall far short of the Healthy People 2010 coverage objectives of 90% for the elderly and 60% for high-risk adults. Although diabetes is an indication for pneumonia vaccine, coverage remained below 70% even at older ages. Although asthma was not an indication, asthmatics still reached the 50% level by age 60 and up to 80% by as early as age 75. In those with both asthma and diabetes, the curve reaches the 50% level at the very early age of 40 years, although it is not stable until age 55, and percentages reached as high as 90% at older ages. The odds of receiving pneumonia vaccine were elevated in individuals with diabetes or asthma in both age groups, but the association for diabetes was stronger in adults than in the elderly [2.24, CI 2.08-2.42 vs. 1.32, CI 1.18-1.47]. For asthmatics, the odds were slightly higher in adults than in the elderly [1.92, CI 1.80-2.04 vs. 1.73, CI 1.50-2.00]. The likelihood of vaccination also differed by gender, ethnicity, marital status, income category, health insurance status, current employment, physician visit in the last year, reporting of good to excellent health, and flu vaccine status.

Conclusion. A very high proportion of high-risk adults and elderly remain unvaccinated. Given the proven efficacy and safety of the vaccine, interventions are needed that target the barriers to vaccination, with emphasis on physician knowledge and practice as well as recipient attitudes.
Abstract:
Introduction. Food frequency questionnaires (FFQs) are used to study the association between dietary intake and disease. An instructional video may offer a low-cost, practical method of dietary assessment training for participants, thereby reducing recall bias in FFQs. There is little evidence in the literature on the effect of instructional videos on FFQ-reported intake.

Objective. This analysis compared the reported energy and macronutrient intake of two groups randomized either to watch an instructional video before completing an FFQ or to view the same video after completing the same FFQ.

Methods. In the parent study, a diverse group of students, faculty, and staff from Houston Community College were randomized to two groups, stratified by ethnicity, and completed an FFQ. The "video before" group watched an instructional video about completing the FFQ prior to answering it; the "video after" group watched the video after completing it. The two groups were compared on mean daily energy (kcal/day), fat (g/day), protein (g/day), carbohydrate (g/day), and fiber (g/day) intakes using descriptive statistics and one-way ANOVA. Demographic, height, and weight information was collected. Dietary intakes were adjusted for total energy intake before the comparative analysis. BMI and age were ruled out as potential confounders.

Results. There were no significant differences between the two groups in mean daily intakes of energy, total fat, protein, carbohydrate, or fiber. However, a pattern of higher energy intake and lower fiber intake was reported in the group that viewed the video before completing the FFQ.

Discussion. Analysis of the differences in reported energy and macronutrient intake showed an overall pattern, albeit not statistically significant, of higher intake in the video-before group than in the video-after group. Further research on instructional videos for dietary assessment should address the validity of reported intakes in participants randomized to watch a video before reporting diet compared with a control group that views no video.
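The group comparison described above rests on one-way ANOVA applied to mean daily intakes. As a rough illustration of that computation (the intake values below are invented, not the study's data), a two-group F statistic can be sketched as:

```python
def one_way_anova_f(group_a, group_b):
    """F statistic for a one-way ANOVA comparing two groups
    (equivalent to the square of the two-sample t statistic)."""
    na, nb = len(group_a), len(group_b)
    mean_a = sum(group_a) / na
    mean_b = sum(group_b) / nb
    grand = (sum(group_a) + sum(group_b)) / (na + nb)
    # Between-group and within-group sums of squares.
    ss_between = na * (mean_a - grand) ** 2 + nb * (mean_b - grand) ** 2
    ss_within = (sum((x - mean_a) ** 2 for x in group_a)
                 + sum((x - mean_b) ** 2 for x in group_b))
    df_between, df_within = 1, na + nb - 2
    return (ss_between / df_between) / (ss_within / df_within)

# Hypothetical energy intakes (kcal/day), not the study's data.
video_before = [2100.0, 2300.0, 2250.0, 2400.0]
video_after = [2000.0, 2150.0, 2050.0, 2200.0]
f_stat = one_way_anova_f(video_before, video_after)
```

In practice the study would compare the F statistic against the F distribution with (1, n-2) degrees of freedom to obtain a p-value; library routines handle that step.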
Abstract:
Streptococcus mutans has been identified as the primary etiological agent of human dental caries, and since its identification research has focused on developing a vaccine to prevent the disease. Preliminary studies have tested both active and passive vaccines against Streptococcus mutans in animals and humans. Although a caries vaccine would most likely be administered to children, no dental caries vaccine of any type has yet been tested in children. The public health imperative for a vaccine is great: it would not only reduce the consequences of dental caries but also improve quality of life for many individuals. Among the many possible antigen candidates, researchers have focused on protein antigens, glucosyltransferases (GTFs), and glucan-binding proteins (Gbps). Several routes of administration are also under investigation, with topical, oral, and intranasal delivery showing particular promise. This review provides an overview of the current state of research, presents key factors influencing caries prevalence, and summarizes and discusses the results of animal and human studies of caries vaccines against Streptococcus mutans. The progress toward, and obstacles facing, the development of a vaccine against dental caries are also discussed.
Abstract:
Research studies on the association between exposure to air contaminants and disease frequently use worn dosimeters to measure the concentration of the contaminant of interest. However, investigating exposure determinants requires knowledge beyond concentration, i.e., knowledge of personal activity, such as whether the exposure occurred inside a building or outdoors. Current studies frequently depend on manual activity logging to record location. The purpose of this study was to evaluate the use of a worn data logger recording three environmental parameters (temperature, humidity, and light intensity) together with time of day to determine indoor or outdoor location, with the ultimate aim of eliminating the need to log location manually, or at least of providing a method to verify such logs.

Data collection was limited to a single geographical area (the Houston, Texas metropolitan area) during a single season (winter) using a HOBO H8 four-channel data logger. Data for development of a Location Model were collected by wearing the logger during deliberate sampling of programmed activities in outdoor, building, and vehicle locations at various times of day. The Model was developed by analyzing the distributions of the environmental parameters by location and time to establish a prioritized set of cut points for assessing location. The final Model consisted of four "processors" that varied these priorities and cut points.

Data to evaluate the Model were collected by wearing the logger during "typical days" while maintaining a location log. The Model was tested by feeding the typical-day data into each processor and generating an assessed location for each record. These assessed locations were then compared with the true locations recorded in the manual log to distinguish accurate from erroneous assessments. The utility of each processor was evaluated by calculating overall error rates across all times of day and individual error rates by time of day. Unfortunately, the error rates were large enough that the Model offered no benefit. Another analysis, in which assessed locations were classified simply as indoor (including both building and vehicle) or outdoor, yielded slightly lower error rates that still precluded any benefit from the Model's use.
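A "processor" of the kind this abstract describes is essentially a prioritized set of cut-point rules over temperature, humidity, light, and time of day. The sketch below illustrates the structure of such a rule set; the thresholds are purely illustrative assumptions, not the study's actual cut points:

```python
def assess_location(temp_c, humidity_pct, light_lux, hour):
    """Classify one logger record as outdoor, building, or vehicle.

    Thresholds here are invented for illustration; the study derived
    its own cut points from the observed distributions of each
    parameter by location and time of day.
    """
    is_daytime = 7 <= hour <= 18
    # Rule 1 (highest priority): very bright light in daytime
    # suggests direct sunlight, hence outdoors.
    if is_daytime and light_lux > 10000:
        return "outdoor"
    # Rule 2: a stable, climate-controlled environment suggests
    # a building interior.
    if 20 <= temp_c <= 25 and 30 <= humidity_pct <= 60:
        return "building"
    # Rule 3: noticeable light without indoor-stable climate
    # suggests a vehicle.
    if light_lux > 500:
        return "vehicle"
    # Default when no rule fires.
    return "building"
```

Each of the four processors would reorder these priorities or shift the cut points, and its error rate would be measured by comparing assessed against logged locations record by record.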
Abstract:
Approximately 200,000 African children are born with sickle-cell anemia each year. Research has shown that individuals with hemoglobin disorders, particularly sickle-cell anemia, have increased susceptibility to malaria. It is currently recommended that patients diagnosed with sickle-cell anemia undergo malaria chemoprophylaxis to decrease their chances of malarial infection. However, studies have shown that routine administration of these drugs increases the risk of drug resistance and may impair the development of naturally acquired immunity. Clinical trials have shown intermittent preventive treatment (IPT) to be an effective method of protection against malaria. The objective of this report was to review previously conducted clinical trials of the effects of IPT on malaria and anemia in infants and children and, based on that review, to draw implications for its appropriateness as a protective measure against malaria in infants and children diagnosed with sickle-cell disease.

The 18 studies reviewed were randomized controlled trials. Sixteen focused on IPT's effect on malaria (7 studies), anemia (1 study), or both (8 studies); of the remaining two, one examined IPT's effect on molecular resistance to malaria and the other was a follow-up to an earlier trial assessing IPT's potential to cause a rebound effect. One of the 18 studies specifically examined IPT's protective efficacy in children with SCA. The review was restricted to randomized controlled trials performed from 2000 to 2010. Reports on anemia were included to illustrate possible added benefits of IPT for burdens associated with SCA other than malaria susceptibility.

The outcomes of these studies address several concerns involving the administration of IPT: protective efficacy (with respect to age, seasonal versus perennial malaria regions, and overall effectiveness against malaria and anemia), drug resistance, rebound effects, side effects, and long-term effects. Overall, the studies showed that IPT has a significant level of protective efficacy against malaria and/or anemia in children. More specifically, the study evaluating children diagnosed with sickle-cell anemia found IPT to be a more effective method of protection than traditional chemoprophylaxis.
Abstract:
Background. In public health preparedness, disaster preparedness refers to the strategic planning of responses to all types of disasters. Preparation and training for disaster response can be conducted using different teaching modalities, ranging from discussion-based programs such as seminars, drills, and tabletop exercises to more complex operations-based programs such as functional and full-scale exercises. Each method of instruction has its advantages and disadvantages. Tabletop exercises are facilitated discussions designed to evaluate programs, policies, and procedures; they are usually conducted in a classroom, often with tabletop props (e.g., models, maps, or diagrams).

Objective. The overall goal of this project was to determine whether tabletop exercises are an effective teaching modality for disaster preparedness, with an emphasis on intentional chemical exposure.

Methods. The target audience for the exercise was the Medical Reserve Brigade of the Texas State Guard, a group of volunteer healthcare providers and first responders who prepare to respond to local disasters. A new tabletop exercise was designed to provide information on the complex, interrelated organizations within the national disaster preparedness program with which this group would interact in the event of a local disaster. The educational intervention consisted of a four-hour multipart program that included a pretest of knowledge, a lecture series, an interactive group discussion using a mock disaster scenario, a posttest of knowledge, and a course evaluation.

Results. Approximately 40 volunteers attended the intervention session; roughly half (n=21) had previously participated in a full-scale drill. There was an 11% improvement in knowledge between the pre- and post-test scores (p=0.002). Overall, the tabletop exercise was well received by those with and without prior training, with no significant differences between these groups in ratings of the relevance and appropriateness of the content. However, the separate components of the exercise were variably effective, as gauged by written comments on the questionnaire.

Conclusions. Tabletop exercises can be a useful training modality in disaster preparedness, as evidenced by the improvement in knowledge and the qualitative feedback on the exercise's value. Future offerings could incorporate recordings of participant responses during the drill so that better feedback can be provided to participants. Additional research using the same or a similar design should be conducted in other populations that are stakeholders in disaster preparedness, to determine the generalizability of these findings.
Abstract:
Background. Pharmaceutical-sponsored patient assistance programs (PAPs) are charity programs that provide free or reduced-price medications to eligible patients. PAPs have the potential to improve prescription drug accessibility, but there is currently limited information about their use and effectiveness.

Objectives and methods. This dissertation described the use of PAPs in the U.S. through two studies: (1) a systematic review of primary studies of PAPs from commercially published and "grey" literature sources; and (2) a retrospective, cross-sectional study of cancer patients' use of PAPs at a tertiary care cancer outpatient center.

Results. (1) The systematic review identified 33 studies: 15 evaluated the impact of PAP enrollment assistance programs on patient healthcare outcomes; 7 assessed institutional costs of providing enrollment assistance; 7 surveyed stakeholders; and 4 examined other aspects. Standardized mean differences calculated for disease indicator outcomes (most from single-group, pre-posttest designs) showed significant decreases in glycemic and lipid indicators and inconsistent results for blood pressure. Grey literature abstracts reported insufficient statistics for such calculations, and study heterogeneity made weighted summary estimates inappropriate. Economic analyses indicated positive net financial benefits to institutions, weighing the cost of providing enrollment assistance against the wholesale value of the medications obtained; these analyses did not value health outcomes. Mean quality-of-reporting scores were higher for observational studies in commercially published articles than in full-text grey literature reports. (2) The cross-sectional study found that PAP outpatients were significantly more likely than non-PAP patients to be uninsured, indigent, and under 65 years old. Nearly all non-PAP and PAP prescriptions were for non-cancer conditions, either co-morbidities (e.g., hypertension) or the management of treatment side effects (e.g., pain). Oral chemotherapies from PAPs were significantly more likely to be for breast cancer than for other cancers, and to be newer, targeted agents rather than traditional chemotherapies.

Conclusions. In outpatient settings, PAP enrollment assistance plus additional medication services (e.g., counseling, reminders, and free samples) is associated with improved disease indicators. Healthcare institutions, including cancer centers, can offset financial losses from uncompensated drug costs and recoup the costs of enrollment assistance programs by procuring free PAP medications. Cancer patients who are indigent and uninsured may be able to access more outpatient medications through PAPs for their supportive care needs than for cancer treatment options such as oral chemotherapies. Because of the selective availability of drugs through PAPs, there may be more options for newer, oral, targeted chemotherapies for the treatment of breast cancer than for other cancers.
Abstract:
Background. First synthesized in 1874, dichlorodiphenyltrichloroethane (DDT) was not used until the second half of World War II, after its insecticidal properties were discovered in 1939. For decades DDT was used globally with the intent of eradicating malaria, beginning in 1955 when the eighth World Health Assembly launched a global campaign selecting DDT as the chemical of choice for malaria eradication. The United States banned DDT in 1972, partly in response to Rachel Carson's "Silent Spring" (1962), which argued that DDT was harmful to the environment and wildlife and was a carcinogen.

Objectives. To critically review the literature on DDT and evaluate its importance in malaria prevention and control.

Methods. This systematic literature review takes the form of a narrative summary and evaluation of the papers reviewed. The data came from searches of PubMed and MEDLINE, which are free, publicly available databases. Inclusion criteria were English-language, peer-reviewed journal articles published in the last 20 years. The keywords were: "insecticidal and agricultural use of DDT", "human impact of malaria", "economic impact of malaria", "benefits of DDT", "effects of DDT", "importance of malaria control", and "alternatives to DDT for malaria control".

Results. Malaria continues to be one of the most common infectious diseases and poses a tremendous global public health problem. WHO recommends DDT for malaria vector control because, compared with other pesticides, it is the most persistent when used for indoor spraying.

Conclusion. Indoor spraying of DDT in malaria-endemic areas may increase human exposure to the chemical; however, I conclude that the overall benefits outweigh the risks, because more lives are saved through fewer malaria infections.
Abstract:
Previous research has suggested an association between intimate partner violence and pregnancy intention status, and between pregnancy intention status and the use of prenatal care services; however, most of these studies have been conducted in high-income countries (HIC) rather than low- and middle-income countries (LMIC). The objectives of this study were to examine the relationship between pregnancy intention status and intimate partner violence, and between pregnancy intention status and the use of prenatal care, among ever-married women in Jordan.

Data were collected from a nationally representative sample of women interviewed in the 2007 Jordan Demographic and Health Survey. The sample was restricted to ever-married women, 15–49 years of age, who had a live birth within the five years preceding the survey. Multivariate logistic regression analyses were used to estimate the associations between intimate partner violence and pregnancy intention status, and between pregnancy intention status and the use of prenatal care services.

Women who reported a mistimed pregnancy had significantly higher odds of lifetime physical and/or sexual abuse than women reporting a wanted pregnancy (PORadj 1.96, 95% CI: 1.31–2.95); the association for unwanted pregnancy was in the same direction but not statistically significant (PORadj 1.32, 95% CI: 0.80–2.18). Women not initiating prenatal care by the end of the first trimester had significantly higher odds of reporting both a mistimed pregnancy (PORadj 2.07, 95% CI: 1.55–2.77) and an unwanted pregnancy (PORadj 2.36, 95% CI: 1.68–3.31) than women initiating care in the first trimester. Additionally, women not receiving the adequate number of prenatal care visits for their last pregnancy had higher odds of reporting an unwanted pregnancy (PORadj 2.11, 95% CI: 1.35–3.29) and a mistimed pregnancy (PORadj 1.41, 95% CI: 0.96–2.07).

Reducing intimate partner violence may decrease the prevalence of mistimed or unwanted pregnancies, and reducing both may in turn decrease the proportion of women in this population who do not receive timely and adequate prenatal care. Further research, particularly in LMIC, is needed on the determinants of unintended pregnancy and its associations with intimate partner violence and with the use of prenatal care services.
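The adjusted prevalence odds ratios above come from multivariate logistic models, but the unadjusted analogue of such an estimate, with a Wald confidence interval, can be sketched from a 2x2 table. The counts below are invented for illustration, not the survey's data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio and 95% Wald CI from a 2x2 table:
    a = exposed cases,   b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases.
    """
    or_ = (a * d) / (b * c)
    # Standard error of log(OR) from the cell counts.
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: lifetime abuse among women reporting
# mistimed vs. wanted pregnancies.
or_, lo, hi = odds_ratio_ci(40, 160, 50, 400)
```

When the interval spans 1.0, as for the unwanted-pregnancy estimate above (0.80–2.18), the association is not statistically significant at the 5% level.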
Abstract:
Privately practicing health care practitioners, such as physicians, dentists, and optometrists, face increasing competitive pressures as the health care industry undergoes significant structural change. The eye care field has been affected by this change, and one result has been the establishment of consultation/comanagement centers for optometrists. These centers, staffed primarily by an ophthalmologist, serve community optometrists as secondary ophthalmic care centers and are altering the traditional optometric-ophthalmologic referral system.

This study was designed to examine the response of optometrists to the formation of such a center by measuring the amount and type of optometric participation in a center and identifying factors affecting participation. A predictive model was specified to determine the probability of center use by practitioners.

The results showed that the establishment of a center in a community did not result in its use by all practitioners, though specific practice (organizational) and practitioner (decision-maker) variables could be used to predict use. Three practice variables and four practitioner variables were found to be important in influencing center use.
Abstract:
It has been hypothesized that results from short-term bioassays will ultimately provide information useful for human health hazard assessment. Although toxicologic test systems have become increasingly refined, to date no investigator has provided qualitative or quantitative methods that would support the use of short-term tests in this capacity.

Historically, the validity of short-term tests has been assessed within the framework of epidemiologic/medical screening, with the results of the long-term carcinogenicity bioassay generally used as the standard. However, this approach is widely recognized as biased and, because it employs qualitative data, cannot be used to set priorities. In contrast, the goal of this research was to evaluate the utility of short-term tests for hazard assessment using an alternative method of investigation.

Chemical carcinogens were selected from the list of carcinogens published by the International Agency for Research on Cancer (IARC). Tumorigenicity and mutagenicity data on fifty-two chemicals were obtained from the Registry of Toxic Effects of Chemical Substances (RTECS) and analyzed using a relative potency approach. The relative potency framework standardizes data "relative" to a reference compound; to avoid any bias associated with the choice of reference, fourteen different reference compounds were used.

The data were evaluated in a format that allowed the ranking of the compounds' mutagenic relative potencies (estimated from short-term data) to be compared with the ranking of their tumorigenic relative potencies (estimated from the chronic bioassays). The results were statistically significant (p < .05) for data standardized to thirteen of the fourteen reference compounds. Although this was a preliminary investigation, it offers evidence that short-term test systems may be useful for ranking the hazards posed by chemicals that may be human carcinogens.
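The analysis described amounts to standardizing each compound's potency to a chosen reference and then rank-correlating the two assays. A minimal sketch of that logic, with an invented four-compound data set (the study used fifty-two chemicals and fourteen references):

```python
def relative_potencies(potencies, reference):
    """Standardize potencies relative to a chosen reference compound."""
    ref = potencies[reference]
    return {chem: p / ref for chem, p in potencies.items()}

def rank(values):
    """Rank compounds by value (1 = most potent)."""
    ordered = sorted(values, key=values.get, reverse=True)
    return {chem: i + 1 for i, chem in enumerate(ordered)}

def spearman(ranks_a, ranks_b):
    """Spearman rank correlation (no ties) between two rankings."""
    n = len(ranks_a)
    d2 = sum((ranks_a[c] - ranks_b[c]) ** 2 for c in ranks_a)
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Invented mutagenic vs. tumorigenic potencies for four chemicals,
# both standardized to hypothetical reference compound "A".
mut = {"A": 8.0, "B": 4.0, "C": 2.0, "D": 1.0}
tum = {"A": 50.0, "B": 30.0, "C": 5.0, "D": 10.0}
rho = spearman(rank(relative_potencies(mut, "A")),
               rank(relative_potencies(tum, "A")))
```

Repeating the calculation with each of the fourteen reference compounds, as the study did, checks whether the agreement between rankings is robust to the choice of reference.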
Abstract:
The purpose of this study was to evaluate the adequacy of computerized vital records in Texas for conducting etiologic studies of neural tube defects (NTDs), using the revised and expanded National Center for Health Statistics vital record forms introduced in Texas in 1989.

Cases of NTDs (anencephaly and spina bifida) among Harris County (Houston) residents were identified from the computerized birth and death records for 1989-1991. The validity of the system was then measured against cases ascertained independently through medical records and death certificates. The computerized system performed poorly in identifying NTDs, particularly anencephaly, for which the false positive rate was 80%, with little or no improvement over the 3-year period. For both NTDs, the sensitivity and positive predictive value of the tapes were somewhat higher for Hispanic than for non-Hispanic mothers.

Case-control studies were conducted using both the tape data set and the independently verified data set, with controls selected from the live birth tapes. Findings varied widely between the data sets. For example, the anencephaly odds ratio for Hispanic mothers (vs. non-Hispanic) was 1.91 (CI = 1.38-2.65) for the tape file but 3.18 (CI = 1.81-5.58) for verified records. The odds ratio for diabetes was elevated for the tape set (OR = 3.33, CI = 1.67-6.66) but not for verified cases (OR = 1.09, CI = 0.24-4.96), among whom few mothers were diabetic. It was concluded that computerized tapes should not be relied on alone for NTD studies.

Using the verified cases, Hispanic ethnicity of the mother was associated with spina bifida, while Hispanic ethnicity, teenage motherhood, and previous pregnancy terminations were associated with anencephaly. Mother's birthplace, education, parity, and diabetes were not significant for either NTD. Stratified analyses revealed several notable examples of statistical interaction; for anencephaly, strong interaction was observed between Hispanic origin and trimester of first prenatal care. The prevalence was 3.8 per 10,000 live births for anencephaly and 2.0 for spina bifida (5.8 per 10,000 births for the combined categories).
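The validity measures used to evaluate the tapes reduce to counts of true and false positives against the independently verified cases. A sketch with hypothetical counts follows; note that the abstract's "false positive rate" is taken here as the share of flagged records that are not verified cases, i.e., the complement of the positive predictive value, which is an assumption about the authors' usage:

```python
def screening_validity(tp, fp, fn):
    """Validity of a registry against independently verified cases.

    tp = verified cases the registry flagged
    fp = registry flags that were not verified cases
    fn = verified cases the registry missed
    """
    sensitivity = tp / (tp + fn)
    ppv = tp / (tp + fp)
    # Share of flagged records that are not true cases (1 - PPV).
    false_positive_rate = fp / (tp + fp)
    return sensitivity, ppv, false_positive_rate

# Hypothetical: 10 verified anencephaly cases flagged by the tapes,
# 40 false hits, and 5 verified cases the tapes missed.
sens, ppv, fpr = screening_validity(tp=10, fp=40, fn=5)
```

With these invented counts, 80% of the registry's hits would be false positives, matching the scale of error the abstract reports for anencephaly.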
Abstract:
Because of its simplicity and low cost, arm circumference (AC) is increasingly used to screen for protein-energy malnutrition among pre-school children in many parts of the developing world, especially where minimally trained health workers are employed. The objectives of this study were: (1) to determine the relationship of the AC measure to weight-for-age and weight-for-height in the detection of malnutrition among pre-school children in a Guatemalan Indian village; (2) to determine the performance of minimally trained promoters measuring AC, weight, and height under field conditions; and (3) to describe the practical aspects of taking AC measures versus weight, age, and height.

The study was conducted in San Pablo La Laguna, one of four villages on the shores of Lake Atitlan, Guatemala, in which a program of simplified medical care was implemented by the Institute of Nutrition of Central America and Panama (INCAP). Weight, height, AC, and age data were collected for 144 chronically malnourished children. The measurements obtained by the trained investigator under the controlled conditions of the health post were correlated against one another; AC had a correlation of 0.7127 with weight-for-age and 0.7911 with weight-for-height, both well within the 0.65 to 0.80 range reported in the literature. False positive and false negative analysis showed that AC was more sensitive when compared with weight-for-height than with weight-for-age. This was fortunate since, especially in areas of widespread chronic malnutrition, weight-for-height detects the acute cases in immediate danger of complicating illness or death. Moreover, most of the cases identified as malnourished by AC but not by weight-for-height (false positives) were either young or very stunted, which made their selection by AC preferable. The large number of cases detected by weight-for-age but not by AC (a false negative rate of 40%) were, however, mostly beyond the critical age period and had normal weight-for-height.

The performance of AC, weight-for-height, and weight-for-age under field conditions in the hands of minimally trained health workers was also analyzed by correlating these measurements against the same criterion measurements taken under the ideally controlled conditions of the health post. AC had the highest correlation with itself, indicating that it deteriorated the least in the move to the field. Moreover, there was a high correlation (0.7509) between AC in the field and criterion weight-for-height, almost as high as that between field weight-for-height and the same measure in the health post (0.7588). The implication is that field errors are so great for the compounded weight-for-height variable that, in the field, AC is about as good a predictor of the ideal weight-for-height measure.

Minimally trained health workers made more errors than the investigator, as shown by their lower intra-observer correlation coefficients. They consistently measured larger than the investigator on all measures, and there was considerable variability between workers, indicating that careful training and follow-up are necessary for the success of the AC measure.

AC has many practical advantages over the other anthropometric tools. It does not require age data, which are often unreliable in these settings, nor the subtraction and two-dimensional table-handling skills that weight-for-age and weight-for-height require. The measure is also more easily applied, with less disturbance to the child and the community. The AC tape is cheap and not easily damaged or jarred out of calibration while being transported in rugged settings, as is often the case with weight scales. Moreover, it can be kept in a health worker's pocket at all times for continual use in a wide range of settings.
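The field-versus-criterion comparisons above are ordinary correlation coefficients between paired measurements. A minimal sketch of the computation (the paired AC readings below are made up, not the study's data):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation between paired measurements."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Made-up paired AC readings (cm): field worker vs. investigator
# at the health post, one pair per child.
field = [13.5, 14.2, 12.8, 15.0, 13.9]
criterion = [13.2, 14.0, 12.9, 14.8, 13.7]
r = pearson_r(field, criterion)
```

A high r between a field measure and its health-post criterion, as with AC in the study, indicates that the measure survives the move to field conditions largely intact.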