973 results for intravenous drug administration
Abstract:
INTRODUCTION This report from the World Psychiatric Association Section on Pharmacopsychiatry examines the possible relationship of antiepileptic drugs with suicide-related clinical features and behaviors in patients with epilepsy. MATERIALS AND METHODS A systematic MEDLINE search returned 1039 papers, of which only 8 were considered relevant. A critical analysis of the Food and Drug Administration (FDA) report on the increased risk of suicidality in patients taking antiepileptics is also included in this report. RESULTS The analysis of these studies revealed that the data do not support the presence of a "class effect" on suicide-related behavior; on the contrary, there are some data suggesting such an effect for treatment with topiramate, lamotrigine, and levetiracetam, for which further research is needed. DISCUSSION For the majority of people with epilepsy, anticonvulsant treatment is necessary, and its failure for any reason is expected to have deleterious consequences. Therefore, clinicians should inform patients and their families of this increased risk of suicidal ideation and behavior, but should not overemphasize the issue. Specific subgroups of patients with epilepsy might be at higher risk and deserve closer monitoring and follow-up. Future research with antiepileptics should specifically focus on depression and suicidal thoughts.
Abstract:
Chronic administration of psychomotor stimulants has been reported to produce behavioral sensitization to their effects on motor activity. This adaptation may be related to the pathophysiology of recurrent psychiatric disorders. Since disturbances in circadian rhythms are also found in many of these disorders, the relationship between sensitization and chronobiological factors became of interest. Therefore, a computerized monitoring system was used to investigate the following: whether repeated exposure to methylphenidate (MPD) and amphetamine (AMP) could produce sensitization to their locomotor effects in the rat; whether sensitization to MPD and AMP was dependent on the circadian time of drug administration; whether baseline levels of locomotor activity would be affected by repeated exposure to MPD and AMP; whether the expression of a sensitized response could be affected by the photoperiod; and whether MK-801, a non-competitive NMDA antagonist, could disrupt the development of sensitization to MPD. Sprague-Dawley rats were housed in test cages, and motor activity was recorded continuously for 16 days. The first 2 days served as baseline for each rat, and on day 3 each rat received a saline injection. The locomotor response to 0.6, 2.5, or 10 mg/kg of MPD was tested on day 4, followed by five days of single injections of 2.5 mg/kg MPD (days 5–9). After five days without injection (days 10–14), rats were re-challenged (day 15) with the same doses they received on day 4. Three separate dose groups were run at each of four administration times, 08:00, 14:00, 20:00, or 02:00 (i.e., 12 groups). The same protocol was conducted with AMP, with doses of 0.3, 0.6, and 1.2 mg/kg given on days 4 and 15, and 0.6 mg/kg AMP as the repeated dose on days 5 to 9. In the second set of experiments, only sensitization to MPD was investigated. The expression of the sensitized response was dose-dependent and mainly observed on challenge in the lower-dose groups.
The development of sensitization to MPD and AMP was differentially time-dependent. For MPD, the most robust sensitization occurred during the light phase, with no sensitization during the middle of the dark phase. (Abstract shortened by UMI.)
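The factorial design described above (three MPD challenge doses crossed with four administration times) can be sketched as a quick enumeration. The dose and time values come from the abstract; the code itself is only an illustration of how the 12 groups arise.

```python
# Enumerate the 3 doses x 4 administration times factorial design
# described in the abstract (values from the text; code illustrative).
from itertools import product

doses_mg_kg = [0.6, 2.5, 10.0]                # MPD challenge doses, day 4/15
times = ["08:00", "14:00", "20:00", "02:00"]  # circadian injection times

# Each (dose, time) pair defines one experimental group.
groups = list(product(doses_mg_kg, times))
print(len(groups))  # 12 groups, as stated in the abstract
```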
Abstract:
Biotechnology refers to the broad set of techniques that allow genetic manipulation of organisms. The techniques of biotechnology have broad implications for many industries; however, it promises the greatest innovations in the production of products regulated by the Food and Drug Administration (FDA). Like many other powerful new technologies, biotechnology may carry risks as well as benefits. Several of its applications have engendered fervent emotional reactions and raised serious ethical concerns, especially internationally. First, in my paper I discuss the historical and technical background of biotechnology. Second, I examine the development of biotechnology in Europe, the citizens' response to genetically modified (“GM”) foods, and the governments' response. Third, I examine the regulation of bioengineered products and foods in the United States. In conclusion, there are various problems with the current status of regulation of GM foods in the United States. There are four basic flaws: (1) the Coordinated Framework allows for too much jurisdictional overlap of biotechnological foods; (2) GM foods are considered GRAS and, consequently, are placed on the market without pre-market approval; (3) federal mandatory labeling of GM foods cannot occur until the question of whether nondisclosure of a genetic engineering production process is misleading or material information is resolved; and (4) an independent state-labeling scheme of GM foods will most likely impede interstate commerce.
Abstract:
Opioids remain the drugs of choice in chronic pain treatment, but opioid tolerance, defined as a decrease in analgesic effect after prolonged or repeated use, dramatically limits their clinical utility. Opioid tolerance has classically been studied by implanting spinal catheters in animals for drug administration. This procedure has significant morbidity and mortality, and it causes an inflammatory response that decreases the potency of opioid analgesia and possibly affects tolerance development. Therefore, we developed and validated a new method, intermittent lumbar puncture (ILP; Dautzenberg et al.), for the study of opioid analgesia and tolerance. Using this method, opioid tolerance was reliably induced without detectable morbidity. The dose of morphine needed to induce analgesia and tolerance using this method was about 100-fold lower than that required when using an intrathecal catheter. Only slight inflammation was found at the injection site, and it dissipated within seven mm. DAMGO, an opioid μ receptor agonist, has been reported to inhibit morphine tolerance, but results from different studies are inconclusive. We evaluated the effect of DAMGO on morphine tolerance using our newly developed ILP method, as well as other intrathecal catheter paradigms. We found that co-administration of sub-analgesic DAMGO with morphine using ILP did not inhibit morphine tolerance, but instead blocked the analgesic effects of morphine; tolerance to morphine still developed. Tolerance to morphine could be blocked by a sub-analgesic dose of DAMGO only when administered through a lumbar catheter, not in cervical catheter settings. Finally, we evaluated the effects of gabapentin (GBP) on analgesia and morphine tolerance. We demonstrated that GBP enhanced analgesia mediated by both subanalgesic and analgesic doses of morphine, although GBP itself was not analgesic. GBP increased the potency and efficacy of morphine.
GBP inhibited the expression, but not the development, of morphine tolerance. GBP blocked tolerance to analgesic morphine but not to subanalgesic morphine. GBP reversed the expression of morphine tolerance even after tolerance was established. These studies may begin to provide new insights into the mechanisms of morphine tolerance development and improve clinical chronic pain management.
Abstract:
Institutional Review Boards (IRBs) are the primary gatekeepers for the protection of ethical standards of federally regulated research on human subjects in this country. This paper focuses on the general, broad measures that may be instituted or enhanced to exemplify a "model IRB". This is done by examining the current regulatory standards of federally regulated IRBs, not private or commercial boards, and how many of those standards have been found either inadequate or not generally understood or followed. The analysis includes suggestions on how to bring about changes that would make the IRB process more efficient, make it less subject to litigation, and create standardized educational protocols for members. The paper also considers how to include better oversight for multi-center research, increased centralization of IRBs, utilization of Data Safety Monitoring Boards when necessary, payment for research protocol review, voluntary accreditation, and the institution of evaluation/quality assurance programs. This is a policy study utilizing secondary analysis of publicly available data. Therefore, the research for this paper draws on scholarly medical/legal journals; web information from the Department of Health and Human Services, the Food and Drug Administration, the Office of the Inspector General, and accreditation programs; law review articles; and current regulations applicable to the relevant portions of the paper. Two issues are consistently cited by the literature as major concerns. The first is a need for basic, standardized educational requirements across all IRBs and their members; the second is a need for much stricter and more informed management of continuing research. There is no federally regulated formal education system currently in place for IRB members, except for certain NIH-based trials. Also, IRBs are not keeping up with research once a study has begun, and although they are regulated to do so, it does not appear to be a great priority.
This is the area most in danger of increased litigation. Other processes, such as voluntary accreditation and outcomes evaluation, are slowly gaining steam as they become more available and more sought after, as with JCAHO accreditation of hospitals. Adopting the principles discussed in this paper should promote better use of a local IRB's time, money, and expertise for protecting the vulnerable population in its care. Without further improvements to the system, there is concern that private and commercial IRBs will attempt to create a monopoly on much of the clinical research in the future, as they are not as heavily regulated and can therefore offer companies quicker and more convenient reviews. IRBs need to consider the advantages of charging for their unique and important services as a cost of doing business. More importantly, there must be a minimum standard of education for all IRB members in the ethical standards of human research, and a greater emphasis placed on the follow-up of ongoing research, as this is the most critical time for study participants and may soon become the largest area for litigation. Additionally, there should be a centralized IRB for multi-site trials, or a study website with important information affecting the trial in real time. Standards and metrics need to be developed to assess the performance of IRBs for quality assurance and outcome evaluations. The boards should not be content to run the business of human subjects' research without determining how well that function is actually being carried out. It is important that federally regulated IRBs provide excellence in human research and promote those values most important to the public at large.
Abstract:
In 1996, the Food and Drug Administration (FDA) mandated that, beginning in January 1998, flour and other enriched grain products be fortified with 140 μg of folic acid per 100 g of grain to prevent neural tube defects (NTDs), which occur in approximately 1 in 1,000 pregnancies in the United States (U.S.). Although this program has demonstrated important public health effects, it is argued that current fortification levels may not be enough to prevent all folic acid-preventable NTD cases. This study reviews literature on folic acid fortification in the U.S. and in other countries with mandatory folic acid fortification programs, published after 1992 and through January 2008. Published studies are evaluated to determine whether the current level of folic acid fortification in the U.S. is adequate to prevent the most common forms of NTDs (spina bifida and anencephaly), particularly among overweight and obese women. Although consistent improvement in the blood folate levels of childbearing-age women is reported in almost all studies, RBC folate concentrations have not reached the level associated with the most significant reduction of risk for NTDs (906 nmol/L); approximately half of the potentially preventable NTDs are prevented by fortification at the current U.S. level. Furthermore, the blood folate status of women in higher BMI categories (obese or overweight) has not improved as much as that of women in lower BMI categories. Therefore, women classified as overweight or obese have not benefited from the preventive effects of folic acid fortification as much as normal-weight or underweight women have. To reduce the risk of folate-preventable NTDs, especially in overweight and obese women, it may be necessary to increase the current level of folic acid fortification. However, further research is required to determine the optimal levels of fortification to achieve this goal without causing adverse health effects in the general population.
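As a rough illustration of what the mandated fortification level implies for intake, the sketch below computes the folic acid obtained from a given amount of enriched grain. The 140 μg/100 g figure is from the abstract; the daily grain amount is a hypothetical example, not a figure from the study.

```python
# Folic acid (µg) supplied by fortified grain at the FDA-mandated
# level of 140 µg per 100 g (figure from the abstract).
FORTIFICATION_UG_PER_100G = 140.0

def folic_acid_from_grain(grain_g):
    """Micrograms of folic acid in `grain_g` grams of enriched grain."""
    return grain_g * FORTIFICATION_UG_PER_100G / 100.0

# Hypothetical daily intake of 200 g of enriched grain products:
daily_ug = folic_acid_from_grain(200)
print(daily_ug)  # 280.0
```

At this hypothetical consumption level, fortified grain alone supplies 280 µg/day, below the 400 µg/day commonly recommended for women of childbearing age, which is consistent with the abstract's point that current fortification may be insufficient.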
Abstract:
Research examining programs designed to retain patients in health care has focused on repeated interactions between outreach workers and patients (Bradford et al. 2007; Cheever 2007). The purpose of this study was to determine whether patients who are peer-mentored at their intake exam remain in care longer and attend more physician visits than those who are not mentored. Using patients' medical records and a previously created mentor database, the study determined how many patients attended their intake visit but subsequently failed to establish regular care. The cohort study examined risk factors for establishing care, determined whether patients lacking a peer mentor failed to establish care more often than peer-mentor-assisted patients, and, subsequently, whether peer-mentored patients had better health outcomes. The sample consists of 1639 patients who were entered into the Thomas Street Patient Mentor Database between May 2005 and June 2007. Assignment to the mentored group was haphazard, based on mentor availability. The data from the Mentor Database were then analyzed using statistical software (SPSS version 15; SPSS Inc., Chicago, Illinois, USA). Results indicated that patients who had a mentor at intake were more likely to return for primary care HIV visits at 90 and 180 days. Mentored patients were also more likely to be prescribed ART within 180 days of intake. Other risk factors that affected remaining in care included gender, previous care status, time from diagnosis to intake visit, and intravenous drug use. Clinical health outcomes did not differ significantly between groups. These findings support the conclusion that mentoring improved care-related outcomes. Continuing to use peer-mentoring programs for HIV care may help increase retention of patients in care and improve patients' health in a cost-effective manner.
Future research on the effects of peer mentoring on mentors, and on the effects of concordance between mentor and patient demographics, may help to further improve peer-mentoring programs.
Abstract:
Introduction. The HIV/AIDS disease burden disproportionately affects minority populations, specifically African Americans. While sexual risk behaviors play a role in the observed HIV burden, other factors, including gender, age, socioeconomics, and barriers to healthcare access, may also be contributory. The goal of this study was to determine how far along the HIV/AIDS disease process people of different ethnicities are when they first present for healthcare. The study specifically analyzed the differences in CD4 cell counts at initial HIV-1 diagnosis with respect to ethnicity. The study also analyzed racial differences in HIV/AIDS risk factors. Methods. This is a retrospective study using data from the Adult Spectrum of HIV Disease (ASD), collected by the City of Houston Department of Health. The ASD database contains information on newly reported HIV cases in the Harris County District Hospitals between 1989 and 2000. Each patient had an initial and a follow-up report. The variables of interest extracted from the ASD data set were CD4 counts at initial HIV diagnosis, race, gender, age at HIV diagnosis, and behavioral risk factors. One-way ANOVA was used to examine differences between racial/ethnic groups in baseline CD4 counts at HIV diagnosis. Chi-square tests were used to analyze racial differences in risk factors. Results. The analyzed study sample comprised 4767 patients. The study population was 47% Black, 37% White, and 16% Hispanic [p<0.05]. The mean and median CD4 counts at diagnosis were 254 and 193 cells per μl, respectively. At initial HIV diagnosis, Blacks had the highest average CD4 counts (285), followed by Whites (233) and Hispanics (212) [p<0.001]. These statistical differences, however, were only observed for CD4 counts above 350 [p<0.001], even when adjusted for age at diagnosis and gender [p<0.05].
Looking at risk factors, Blacks were mostly affected by intravenous drug use (IVDU) and heterosexual contact, whereas Whites and Hispanics were more affected by male homosexual contact [p<0.05]. Conclusion. (1) There were statistical differences in CD4 counts with respect to ethnicity, but these differences only existed for CD4 counts above 350, and they do not appear to have clinical significance. Counterintuitively, Blacks had the highest CD4 counts, followed by Whites and Hispanics. (2) Fifty percent of this study group clinically had AIDS at their initial HIV diagnosis (median = 193), irrespective of ethnicity. It was not clear from the data analysis whether these observations were due to failure of early HIV surveillance, HIV testing policies, or healthcare access; more studies need to be done to address this question. (3) Homosexuality and bisexuality were the biggest risk factors for Whites and Hispanics, whereas Blacks were mostly affected by heterosexual contact and IVDU, implying a need for different public health intervention strategies for these racial groups.
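The one-way ANOVA used above to compare baseline CD4 counts across groups can be sketched in a few lines. The ASD data are not public, so the group samples below are small hypothetical values chosen only to mirror the reported group means (285, 233, 212); the function itself is a standard by-hand F-statistic computation.

```python
# Minimal one-way ANOVA F statistic, computed by hand on
# hypothetical CD4 samples (cells/µl); illustrative only.

def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA over `groups`,
    a list of lists of numeric observations."""
    k = len(groups)                      # number of groups
    n = sum(len(g) for g in groups)      # total observations
    grand_mean = sum(sum(g) for g in groups) / n
    # Between-group sum of squares (k - 1 degrees of freedom)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)
    # Within-group sum of squares (n - k degrees of freedom)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g)
                    for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical CD4 counts roughly matching the reported group means:
black = [310, 290, 260, 280]      # mean 285
white = [240, 220, 250, 230]      # mean 235
hispanic = [200, 210, 230, 220]   # mean 215
f_stat = one_way_anova_f([black, white, hispanic])
```

A large F (here ≈ 20) relative to the F distribution with (2, 9) degrees of freedom would indicate a significant between-group difference, as the study reports.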
Abstract:
Foodborne illness has always been with us, and food safety is an increasingly important public health issue affecting populations worldwide. In the United States of America, foodborne illness strikes millions of people and kills thousands annually, costing our economy billions of dollars in medical care expenses and lost productivity. The nature of food and foodborne illness has changed dramatically in the last century. Regulatory systems have evolved to better assure a safe food supply. The food production industry has invested heavily to meet regulatory requirements and to improve the safety of its products. Educational efforts have increased public awareness of safe food handling practices, empowering consumers to fulfill their food safety role. Despite the advances made, none of the Healthy People 2010 targets for reduction of foodborne pathogens has been reached. There is no single solution for eliminating pathogen contamination from all classes of food products. However, irradiation seems especially suited for certain higher-risk foods, such as meat and poultry, and its use should advance the goal of reducing foodborne illness by minimizing the presence of pathogenic organisms in the food supply. This technology has been studied extensively for over 50 years. The Food and Drug Administration has determined that food irradiation is safe for use as approved by the Agency. It is time to take action to educate consumers about the benefits of food irradiation. Consumer demand will then compel industry to invest in facilities and processes that assure a consistent supply of irradiated food products.
Abstract:
The Food and Drug Administration (FDA) and the Centers for Medicare and Medicaid Services (CMS) play key roles in making Class III medical devices available to the public, and they are required by law to meet statutory deadlines for applications under review. Historically, both agencies have failed to meet their respective statutory requirements. Since these failures affect patient access and may adversely impact public health, Congress has enacted several “modernization” laws. However, the effectiveness of these modernization laws has not been adequately studied or established for Class III medical devices. The aim of this research study was, therefore, to analyze how these modernization laws may have affected public access to medical devices. Two questions were addressed: (1) How have the FDA modernization laws affected the time to approval for medical device premarket approval applications (PMAs)? (2) How has the CMS modernization law affected the time to approval for national coverage decisions (NCDs)? The data for this research study were collected from publicly available databases for the period January 1, 1995, through December 31, 2008. These dates were selected to ensure that a sufficient period of time was captured to measure pre- and post-modernization effects on time to approval. All records containing original PMAs were obtained from the FDA database, and all records containing NCDs were obtained from the CMS database. Source documents, including FDA premarket approval letters and CMS national coverage decision memoranda, were reviewed to obtain additional data not found in the search results. Analyses were conducted to determine the effects of the pre- and post-modernization laws on time to approval. Secondary analyses of FDA subcategories were conducted to uncover any causal factors that might explain differences in time to approval and to compare with the primary trends.
The primary analysis showed that the FDA modernization laws of 1997 and 2002 initially reduced PMA time to approval; after the 2002 modernization law, however, time to approval began increasing and continued to increase through December 2008. The non-combined subcategory approval trends were similar to the primary analysis trends. The combined subcategory analysis showed no clear trends, with the exception of non-implantable devices, for which time to approval trended down after 1997. The CMS modernization law of 2003 reduced NCD time to approval, a trend that continued through December 2008. This study also showed that approximately 86% of PMA devices do not receive NCDs. As a result of this research study, the following recommendations are offered to help resolve statutory non-compliance and access issues: (1) authorities should examine underlying causal factors for the observed trends; (2) process improvements should be made to better coordinate FDA and CMS activities, including sharing data, reducing duplication, and establishing clear criteria for “safe and effective” and “reasonable and necessary”; (3) a common identifier should be established to allow tracking and trending of applications between the FDA and CMS databases; (4) statutory requirements may need to be revised; and (5) an investigation should be undertaken to determine why NCDs are not issued for the majority of PMAs. Any process improvements should be made without creating additional safety risks or adversely impacting public health. Finally, additional studies are needed to fully characterize and better understand the trends identified in this research study.
Abstract:
The occurrence of waste pharmaceuticals has been identified and well documented in water sources throughout North America and Europe, and many studies have identified the occurrence of various pharmaceutical compounds in these waters. This project is an extensive review of the documented evidence of this occurrence published in the scientific literature, performed to determine whether it has a significant impact on the environment and public health. The review found that pharmaceuticals such as sex hormone drugs, antibiotics, and antineoplastic/cytostatic agents, as well as their metabolites, occur in water sources throughout the United States at levels high enough to have noticeable impacts on human health and the environment. It was determined that the primary sources of these pharmaceuticals were wastewater effluent and solid wastes from sewage treatment plants, pharmaceutical manufacturing plants, and healthcare and biomedical research facilities, as well as runoff from veterinary medicine applications (including aquaculture). In addition, current public policies of US governmental agencies such as the Environmental Protection Agency (EPA), the Food and Drug Administration (FDA), and the Drug Enforcement Administration (DEA) were evaluated to determine whether they adequately control this issue, and specific recommendations for developing these EPA, FDA, and DEA policies have been made to mitigate, prevent, or eliminate it. Other possible interventions, such as engineering controls, were also evaluated. These engineering controls include improved treatment technologies, such as the advancement and improvement of the wastewater treatment processes used by conventional sewage treatment and pharmaceutical manufacturing plants.
In addition, administrative controls, such as the use of “green chemistry” in drug synthesis and design, were also explored and evaluated as possible alternatives to mitigate, prevent, or eliminate this issue. Specific recommendations for incorporating these engineering and administrative controls into the applicable EPA, FDA, and DEA policies have also been made.
Abstract:
The natural history of placebo-treated travelers' diarrhea and the prognostic factors of recovery from diarrhea were evaluated using 9 groups of placebo-treated subjects from 9 clinical trials conducted since 1975, for use as a historical control in future clinical trials of antidiarrheal agents. All of these studies were done by the same group of investigators at one site (Guadalajara, Mexico). The studies are similar in terms of population, measured parameters, microbiologic identification of enteropathogens, and definitions of parameters. The studies had two different durations of follow-up: in some, subjects were followed for two days, and in others, for five days. Using definitions established by the Infectious Diseases Society of America and the Food and Drug Administration, the following efficacy parameters were evaluated: time to last unformed stool (TLUS), number of unformed stools during five consecutive days of follow-up after initiation of placebo treatment, microbiologic cure, and improvement of diarrhea. Among the groups followed for five days, the mean TLUS ranged from 59.1 to 83.5 hours. Fifty percent to 78% had diarrhea lasting more than 48 hours, and 25% had diarrhea lasting more than five days. The mean number of unformed stools passed on the first day after initiation of therapy ranged from 3.6 to 5.8, and on the fifth day it ranged from 0.5 to 1.5. By the end of follow-up, diarrhea had improved in 82.6% to 90% of the subjects. Subjects with enterotoxigenic E. coli had microbiologic cure rates of 21.6% to 90.0%, and subjects with Shigella species had microbiologic cure rates of 14.3% to 60.0%. To evaluate the prognostic factors of recovery from diarrhea (the primary efficacy parameter in evaluating the efficacy of antidiarrheal agents against travelers' diarrhea), the subjects from five studies were pooled and the Cox proportional hazards model was used to evaluate predictors of prolonged diarrhea.
After adjusting for the design characteristics of each trial, fever with a rate ratio (RR) of 0.40, presence of invasive pathogens with an RR of 0.41, presence of severe abdominal pain and cramps with an RR of 0.50, more than five watery stools with an RR of 0.60, and presence of non-invasive pathogens with an RR of 0.84 predicted a longer duration of diarrhea. Severe vomiting, with an RR of 2.53, predicted a shorter duration of diarrhea. The number of soft stools, presence of fecal leukocytes, presence of nausea, and duration of diarrhea before enrollment were not associated with duration of diarrhea.
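To make the rate ratios concrete: in a Cox model of time to recovery, an RR below 1 means a lower recovery hazard and hence longer diarrhea. Under a simplifying exponential assumption (not made by the study itself), the median duration scales by 1/RR. The baseline median below is a hypothetical value for illustration; the RRs are taken from the abstract.

```python
# Sketch: how a Cox rate ratio on the recovery hazard maps to
# duration under an exponential time-to-recovery assumption.
# Baseline median is hypothetical; RRs are from the abstract.

def median_duration(baseline_median_h, rate_ratio):
    """Median time to recovery when the recovery hazard is
    multiplied by `rate_ratio`. For an exponential model the
    median is ln(2)/hazard, so scaling the hazard by RR
    divides the median by RR."""
    return baseline_median_h / rate_ratio

baseline = 60.0                            # hypothetical median TLUS, hours
fever = median_duration(baseline, 0.40)    # 150.0 h: 2.5x longer
vomiting = median_duration(baseline, 2.53) # ~23.7 h: shorter
```

This is why an RR of 0.40 for fever reads as a strong predictor of prolonged diarrhea, while the RR of 2.53 for severe vomiting predicts faster recovery.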
Abstract:
The effect of the time (i.e., biologic time structure) of drug administration on the bioavailability of theophylline was investigated in man after both a single dose and after repeated, or chronic, drug administration. Preliminary laboratory investigations on Balb-C mice showed the toxic
Abstract:
Viral hepatitis is a significant public health problem worldwide and is caused by viral infections classified as hepatitis A, B, C, D, and E. Hepatitis B is one of these five known hepatic viruses. A safe and effective vaccine for hepatitis B was first developed in 1981 and was adopted into national immunization programs targeting infants beginning in 1990 and adolescents beginning in 1995. In the U.S., this vaccination schedule has led to an 82% reduction in incidence, from 8.5 cases per 100,000 in 1990 to 1.5 cases per 100,000 in 2007. Although there has been a decline in infection among adolescents, there is still a large burden of hepatitis B infection among adults and minorities, and there is very little research on vaccination gaps among adults. Using the National Health and Nutrition Examination Survey (NHANES) question "{Have you/Has SP (Study Participant)} ever received the 3-dose series of the hepatitis B vaccine?", the existence of racial/ethnic gaps was explored using a cross-sectional study design. Other variables, such as age, gender, socioeconomic variables (federal poverty line, educational attainment), and behavioral factors (sexual practices, self-report of men having sex with men, and intravenous drug use), were also examined. We found that current vaccination programs and policies for hepatitis B have eliminated racial and ethnic disparities in hepatitis B vaccination, but that coverage remains low, particularly for adults who engage in high-risk behaviors. This study found a statistically significant 10% gap in hepatitis B vaccination between those who have and those who do not have access to health insurance.
Abstract:
Background: Most studies have looked at breastfeeding practices from the point of view of maternal behavior only; however, in counseling women who choose to breastfeed, it is important to be aware of general infant feeding patterns in order to adequately provide information about what to expect. The available literature on differences in infant breastfeeding behavior by sex is minimal and therefore requires further investigation. Objectives: This study determined whether, at the age of 2 months, there were differences in the amount of breast milk consumed, the duration of breastfeeding, and infant satiety by infant sex. It also assessed whether infant sex is an independent predictor of initiation of breastfeeding. Methods: This is a secondary analysis of data obtained from the Infant Feeding Practices Study II (IFPS II), a longitudinal study carried out from May 2005 through June 2007 by the Food and Drug Administration and the Centers for Disease Control and Prevention. The questionnaires asked about demography, prenatal care, mode of delivery, birth weight, infant sex, and breastfeeding patterns. A total of 3,033 and 2,552 mothers completed the neonatal and post-neonatal questionnaires, respectively. Results: There was no significant difference in the initiation of breastfeeding by infant sex. About 85% of male infants initiated breastfeeding, compared with 84% of female infants. The odds ratio of ever initiating breastfeeding for male infants was 0.93, but the difference was not significant (p = 0.49). None of the other infant feeding patterns differed by infant sex. Conclusion: This study found no evidence that male infants feed more or that their mothers are more likely to initiate breastfeeding. Each baby is an individual and therefore will have a unique feeding pattern.
Based on these findings, the major determining factors for breastfeeding continue to be maternal; therefore, more effort should be invested in promoting breastfeeding among mothers of all ethnic groups and social classes.
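The reported odds ratio can be illustrated with a standard 2×2 table computation. The counts below are hypothetical, chosen only to produce an OR near the reported 0.93; they are not the IFPS II data.

```python
# Odds ratio with a 95% Wald confidence interval from a 2x2 table.
# Counts are hypothetical, not the IFPS II data.
import math

def odds_ratio(a, b, c, d):
    """OR for a 2x2 table:
         group 1: a initiated breastfeeding, b did not
         group 2: c initiated breastfeeding, d did not
    Returns (OR, (lower, upper)) with a 95% Wald CI."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log(OR)
    lower = math.exp(math.log(or_) - 1.96 * se)
    upper = math.exp(math.log(or_) + 1.96 * se)
    return or_, (lower, upper)

# Hypothetical counts: males initiated/not, females initiated/not
or_, ci = odds_ratio(840, 160, 850, 150)
```

With these counts the OR is about 0.93 and the confidence interval spans 1, matching the study's conclusion that the male-female difference in initiation is not statistically significant.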