949 results for drug surveillance program
Abstract:
Aims To discuss ethical issues that may arise in using wastewater analysis (WWA) to monitor illicit drug use in the general population and in entertainment precincts, prisons, schools and workplaces. Method Review of current applications of WWA to identify ethical and social issues that may be raised by current and projected future uses of the method. Results WWA of drug residues is a promising method of monitoring illicit drug use that may overcome some limitations of other monitoring methods. When used for monitoring in large populations, WWA does not raise major ethical concerns, because individuals are not identified and the prospect of harming residents of catchment areas is remote. When WWA is used in smaller catchment areas (entertainment venues, prisons, schools or workplaces), its results could indirectly affect the occupants adversely. Researchers will need to take care in reporting their results to reduce media misreporting. Fears that WWA could be used for mass individual surveillance by drug law enforcement officials are unlikely to be realized, but will need to be addressed because they may adversely affect public support for this type of research. Conclusions Using wastewater analysis to monitor illicit drug use in large populations does not raise major ethical concerns, but researchers need to minimize possible adverse consequences when studying smaller populations, such as workers, prisoners and students.
Abstract:
Because fish bioaccumulate certain chemicals, levels of chemical contaminants in their edible portion must be closely monitored. In recent years, FDA has conducted several surveys designed to determine the occurrence and levels of selected chemicals or groups of chemicals in fish. Previous fish surveillance programs included the Mercury in Wholesale Fish Survey (FY 71), the FY 73 and FY 74 Comprehensive Fish Surveys, the Canned Tuna Program (FY 75), the Kepone and Mirex Contamination Program (FY 77), and the FY 77 Mercury in Swordfish Program. In addition, recent Compliance Programs for Pesticides and Metals in Foods and for Pesticides, Metals, and Industrial Chemicals in Animal Feed have specified coverage of fish and fish products. Because of previous findings and the sustained high volume of fish imported into the United States, a separate compliance program dealing solely with chemical contaminants in fish was initiated by the FDA Bureau of Foods in FY 78. The program covers all domestic and imported fish except coverage directed by the Bureau of Veterinary Medicine for animal feed components derived from fishery products. The earlier surveys indicated that "bottom feeder" species such as catfish generally had the highest levels of pesticides and polychlorinated biphenyls (PCBs); for this reason, coverage of these species has been emphasized. Similarly, tuna has received special attention because it is the most prevalent fish in the U.S. diet and because of potential problems with mercury. Halibut, swordfish, and snapper were also emphasized in the sampling because of potentially problematic mercury levels determined in previous years. The findings of this program were used to detect emerging problems in fish and to direct FDA efforts to deal with them. Care must be exercised in drawing conclusions about trends from these data because the Compliance Program was not statistically designed: sampling objectives and sources may vary from year to year, so the results are not directly comparable.
Abstract:
Gemstone Team Risky Business
Abstract:
We studied 65 HIV-1-infected, untreated patients recruited in Caracas, Venezuela, with CD4 T cell counts ≥350/µl. The reverse transcriptase (RT) and protease regions of the virus were sequenced, aligned with reference HIV-1 group M strains, and analyzed for drug resistance mutations. Most of the viruses were subtype B in both the protease and RT genomic regions. Five of the 62 virus isolates successfully amplified showed evidence of recombination between protease and RT: their protease region was non-B while their RT region derived from subtype B. Four strains bore resistance mutations to NRTIs, NNRTIs, or PIs. The prevalence of HIV-1 isolates bearing resistance mutations was therefore above the WHO threshold of 5%.
Abstract:
Background: Great efforts have been made to increase the accessibility of HIV antiretroviral therapy (ART) in low- and middle-income countries. The threat of wide-scale emergence of drug resistance could severely hamper ART scale-up efforts. Population-based surveillance of transmitted HIV drug resistance ensures the use of appropriate first-line regimens to maximize the efficacy of ART programs where drug options are limited. However, traditional HIV genotyping is extremely expensive, creating a cost barrier to wide-scale and frequent HIV drug resistance surveillance. Methods/Results: We have developed a low-cost, laboratory-scale, next-generation sequencing-based genotyping method to monitor drug resistance. We designed primers specifically to amplify protease and reverse transcriptase from Brazilian HIV subtypes and developed a multiplexing scheme using multiplex identifier tags to minimize cost while providing more robust data than traditional genotyping techniques. Using this approach, we characterized drug resistance from plasma in 81 HIV-infected individuals from Sao Paulo, Brazil. We describe the complexities of analyzing next-generation sequencing data and present a simplified open-source workflow for analyzing drug resistance data. From these data, we identified drug resistance mutations in 20% of treatment-naive individuals in our cohort, similar to frequencies identified by traditional genotyping in Brazilian patient samples. Conclusion: The ultra-wide sequencing approach described here allows multiplexing of at least 48 patient samples per sequencing run, 4 times more than the current genotyping method. It is also 4-fold more sensitive (5% vs. 20% minimal detection frequency) at a cost 3-5 times lower than the traditional Sanger-based genotyping method. Lastly, by using a benchtop next-generation sequencer (Roche/454 GS Junior), the approach can be more easily implemented in low-resource settings. These data provide proof of concept that next-generation HIV drug resistance genotyping is a feasible, low-cost alternative to current genotyping methods and may be particularly beneficial for in-country surveillance of transmitted drug resistance.
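The cost advantage claimed above is simple arithmetic once one sequencing run is shared across multiplexed samples. A minimal sketch follows; the dollar figures are illustrative assumptions, and only the 48-sample multiplexing factor comes from the abstract.

```python
# Illustrative cost-per-sample comparison for multiplexed NGS genotyping
# vs. per-sample Sanger genotyping. All dollar figures are assumed for
# the sake of the sketch; only the multiplexing factor (48 samples per
# run) is taken from the abstract.

def per_sample_cost(run_cost, samples_per_run):
    """Cost of genotyping one sample when a run is shared."""
    return run_cost / samples_per_run

ngs_run_cost = 1200.0   # assumed cost of one benchtop sequencing run
sanger_cost = 100.0     # assumed cost of one Sanger genotype

ngs_cost = per_sample_cost(ngs_run_cost, 48)   # 48-plex run per the abstract
print(f"NGS per sample:    ${ngs_cost:.2f}")
print(f"Sanger per sample: ${sanger_cost:.2f}")
print(f"Multiplexing makes NGS {sanger_cost / ngs_cost:.1f}x cheaper here")
```

Under these assumed figures the per-sample cost ratio lands inside the 3-5x range the abstract reports; the real ratio depends on reagent and labor costs at a given site.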
Abstract:
BACKGROUND Overlapping first-generation sirolimus- and paclitaxel-eluting stents are associated with persistent inflammation, fibrin deposition and delayed endothelialisation in preclinical models, and with adverse angiographic and clinical outcomes--including death and myocardial infarction (MI)--in clinical studies. OBJECTIVES To establish whether there are any safety concerns with newer generation drug-eluting stents (DES). DESIGN Propensity score adjustment of baseline anatomical and clinical characteristics was used to compare clinical outcomes (Kaplan-Meier estimates) between patients implanted with overlapping DES (Resolute zotarolimus-eluting stent (R-ZES) or R-ZES/other DES) and those with no overlapping DES. Additionally, angiographic outcomes for overlapping R-ZES and everolimus-eluting stents were evaluated in the randomised RESOLUTE All-Comers Trial. SETTING Patient-level data from five controlled studies of the RESOLUTE Global Clinical Program evaluating the R-ZES were pooled. Enrollment criteria were generally unrestrictive. PATIENTS 5130 patients. MAIN OUTCOME MEASURES 2-year clinical outcomes and 13-month angiographic outcomes. RESULTS 644 of 5130 patients (12.6%) in the RESOLUTE Global Clinical Program underwent overlapping DES implantation. Implantation of overlapping DES was associated with an increased frequency of MI and more complex/calcified lesion types at baseline. Adjusted in-hospital, 30-day and 2-year clinical outcomes indicated comparable cardiac death (2-year overlap vs non-overlap: 3.0% vs 2.1%, p=0.36), major adverse cardiac events (13.3% vs 10.7%, p=0.19), target-vessel MI (3.9% vs 3.4%, p=0.40), clinically driven target vessel revascularisation (7.7% vs 6.5%, p=0.32), and definite/probable stent thrombosis (1.4% vs 0.9%, p=0.28). 13-month adjusted angiographic outcomes were comparable between overlapping and non-overlapping DES.
CONCLUSIONS Overlapping newer generation DES are safe and effective, with comparable angiographic and clinical outcomes--including repeat revascularisation--to non-overlapping DES.
Abstract:
An agency is accountable to a legislative body in the implementation of public policy; it has a responsibility to ensure that the implementation of that policy is consistent with its statutory objectives. The analysis of the effectiveness of implementation of the Vendor Drug Program proceeded in the following manner. The federal and state roles and statutes pursuant to the formulation of the Vendor Drug Program were reviewed to determine statutory intent and formal provisions. The translation of these into programmatic details was examined, focusing on the factors affecting the implementation process. Lastly, the six conditions outlined by Mazmanian and Sabatier as criteria for effective implementation were applied to the implementation of the Vendor Drug Program to determine whether implementation was consistent with statutory objectives. The implementation of the statutes clearly met four of the six conditions for effective implementation: (1) clear and consistent objectives; (2) a valid causal theory; (3) a process structured to maximize agency and target compliance with the objectives; and (4) continued support of constituency groups and sovereigns. The implementation was basically consistent with the statutory objectives, although the determination of vendor reimbursement has had, and continues to have, problems.
Abstract:
Limited research has been conducted evaluating programs designed to improve the outcomes of homeless adults with mental disorders and comorbid alcohol, drug and mental disorders. This study conducted such an evaluation in a community-based day treatment setting with clients of the Harris County Mental Health and Mental Retardation Authority's Bristow Clinic. The study population included all clients who received treatment at the clinic for a minimum of six months between January 1, 1995 and August 31, 1996. An electronic database was used to identify clients and track their program involvement. A profile was developed of the study participants, and their level of program involvement was examined, including the amount of time spent in clinical, social and other interventions, the types of interventions encountered and the number of interventions encountered. Results were analyzed to determine whether social, demographic and mental history factors affected levels of program involvement, and what effects the levels of program involvement had on housing status and psychiatric functioning status. A total of 101 clients met the inclusion criteria. Of these, 96 had a mental disorder and five had comorbidity. Due to the limited number of participants with comorbidity, only those with mental disorders were included in the analysis. The study found the Bristow Clinic population to be primarily single, Black, male, between the ages of 31 and 40 years, and with a gross family income of less than $4,000. More persons resided on the streets at entry and at six months following treatment than in any other residential setting. The most prevalent psychiatric diagnoses were depressive disorders and schizophrenia. The Global Assessment of Functioning (GAF) scale, used to determine the degree of psychiatric functioning, revealed a modal GAF score of 31-40 both at entry and after six months in treatment. The study found that the majority of clients spent less than 17 hours in treatment, had fewer than 51 encounters, and had clinical, social and other types of encounters. With regard to social and demographic factors and levels of program involvement, there were statistically significant associations between gender and ethnicity and both the types of interventions encountered and the number of interventions encountered. There was also a statistically significant difference by gender in the amount of time spent in clinical interventions. Relative to the outcomes measured, the study found female gender to be the only background variable significantly associated with improved housing status, and female gender and previous MHMRA involvement to be statistically associated with improvement in GAF score. The total time in other (not clinical or social) interventions and the total number of encounters with other interventions were also significantly associated with improvement in housing outcome. The analysis of previous services and levels of program involvement revealed significant associations between time spent in social and clinical interventions and previous hospitalizations and previous MHMRA involvement. Major limitations of this study include the small sample size, which may have provided very little power to detect differences, and the lack of generalizability of findings due to the site locations used in the study. Despite these limitations, the study makes an important contribution to the literature by documenting the levels of program involvement and the social and demographic factors necessary to produce improved housing status and psychiatric functioning status.
Abstract:
In order to identify optimal therapy for children with bacterial pneumonia, Pakistan's ARI Program, in collaboration with the National Institute of Health (NIH), Islamabad, undertook a national surveillance of antimicrobial resistance in S. pneumoniae and H. influenzae. The project was carried out at selected urban and peripheral sites in 6 different regions of Pakistan in 1991-92. Nasopharyngeal (NP) specimens and blood cultures were obtained from children with pneumonia diagnosed in the outpatient clinics of participating facilities. Organisms were isolated by local hospital laboratories and sent to NIH for confirmation, serotyping and antimicrobial susceptibility testing. The aims of the study were: (i) to determine the antimicrobial resistance patterns of S. pneumoniae and H. influenzae in children aged 2-59 months; (ii) to determine the ability of selected laboratories to identify and effectively transport isolates of S. pneumoniae and H. influenzae cultured from nasopharyngeal and blood specimens; (iii) to validate the comparability of resistance patterns for nasopharyngeal and blood isolates of S. pneumoniae and H. influenzae from children with pneumonia; and (iv) to examine the effect of drug resistance and laboratory error on the cost of effectively treating children with ARI. A total of 1,293 children with ARI were included in the study: 969 (75%) from urban areas and 324 (25%) from rural parts of the country. Of these, 786 (61%) were male and 507 (39%) female. The resistance rates of S. pneumoniae to various antibiotics among the urban children with ARI were: TMP/SMX, 62%; chloramphenicol, 23%; penicillin, 5%; tetracycline, 16%; and ampicillin/amoxicillin, 0%. The resistance rates of H. influenzae were higher than those of S. pneumoniae: TMP/SMX, 85%; chloramphenicol, 62%; penicillin, 59%; ampicillin/amoxicillin, 46%; and tetracycline, 100%. Rates of resistance to each antimicrobial agent were similar among isolates from the rural children. Of a total of 614 specimens tested for antimicrobial susceptibility, 432 (70.4%) were resistant to TMP/SMX and 93 (15.2%) were resistant to antimicrobial agents other than TMP/SMX, viz. ampicillin/amoxicillin, chloramphenicol, penicillin, and tetracycline. The sensitivity and positive predictive value of peripheral laboratories for H. influenzae were 99% and 65%, respectively. Similarly, the sensitivity and positive predictive value of peripheral laboratory tests, compared with the gold standard (the NIH laboratory), for S. pneumoniae were 99% and 54%, respectively. The sensitivity and positive predictive value of nasopharyngeal specimens compared with blood cultures (gold standard), as isolated by the peripheral laboratories, were 88% and 11% for H. influenzae and 92% and 39% for S. pneumoniae, respectively. (Abstract shortened by UMI.)
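The sensitivity and positive predictive value figures above follow from the standard 2x2 confusion-matrix definitions (sensitivity = TP/(TP+FN); PPV = TP/(TP+FP)). A minimal sketch, with hypothetical counts chosen only to reproduce the reported H. influenzae percentages:

```python
# Sensitivity and positive predictive value (PPV) from 2x2 counts.
# The counts below are hypothetical; the abstract reports only the
# resulting percentages (e.g. 99% sensitivity, 65% PPV for H. influenzae).

def sensitivity(tp, fn):
    """Proportion of true positives the test detects: TP / (TP + FN)."""
    return tp / (tp + fn)

def ppv(tp, fp):
    """Proportion of positive results that are correct: TP / (TP + FP)."""
    return tp / (tp + fp)

# Hypothetical example: peripheral lab vs. the NIH reference laboratory
tp, fn, fp = 99, 1, 53   # made-up counts for illustration
print(f"Sensitivity: {sensitivity(tp, fn):.0%}")   # 99/100  = 99%
print(f"PPV:         {ppv(tp, fp):.0%}")           # 99/152 ~= 65%
```

A high-sensitivity, low-PPV pattern like this is exactly what the study observed: peripheral laboratories rarely missed true isolates but over-called positives relative to the NIH gold standard.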
Abstract:
BACKGROUND An increased body mass index (BMI) is associated with a high risk of cardiovascular disease and a reduction in life expectancy. However, several studies have reported improved clinical outcomes in obese patients treated for cardiovascular diseases. The aim of the present study was to investigate the impact of BMI on long-term clinical outcomes after implantation of zotarolimus-eluting stents. METHODS Individual patient data were pooled from the RESOLUTE Clinical Program, comprising five trials worldwide. The study population was sorted into BMI tertiles and clinical outcomes were evaluated at 2-year follow-up. RESULTS Data from a total of 5,127 patients receiving the R-ZES were included in the present study. BMI tertiles were as follows: tertile I (≤25.95 kg/m², low or normal weight), 1,727 patients; tertile II (>25.95 to ≤29.74 kg/m², overweight), 1,695 patients; and tertile III (>29.74 kg/m², obese), 1,705 patients. At 2-year follow-up, no difference was found between patients with high BMI (tertile III) and patients with normal or low BMI (tertile I) in terms of target lesion failure (HR [95% CI] = 0.89 [0.69, 1.14], P = 0.341), major adverse cardiac events (HR = 0.90 [0.72, 1.14], P = 0.389), cardiac death (HR = 1.20 [0.73, 1.99], P = 0.476), myocardial infarction (HR = 0.86 [0.55, 1.35], P = 0.509), clinically driven target lesion revascularization (HR = 0.75 [0.53, 1.08], P = 0.123), or definite/probable stent thrombosis (HR = 0.98 [0.49, 1.99], P = 0.964); all comparisons are tertile I vs. III. CONCLUSIONS In the present study, body mass index was found to have no impact on long-term clinical outcomes after coronary artery interventions.
Abstract:
Background. EAPs (Employee Assistance Programs) for airline pilots in companies with a well-developed recovery management program are known to reduce pilot absenteeism following treatment. Given the costs and safety consequences to society, it is important to identify pilots who may be experiencing an alcohol or other drug (AOD) disorder and get them into treatment. Hypotheses. This study investigated the predictive power of workplace absenteeism in identifying AOD disorders. The first hypothesis was that higher absenteeism in a 12-month period is associated with a higher risk that an employee is experiencing an AOD disorder. The second hypothesis was that AOD treatment would reduce subsequent absence rates and the costs of replacing pilots on missed flights. Methods. A case-control design using eight years of monthly archival absence data (53,000 pay records) was conducted with a sample of 76 employees having an AOD diagnosis (cases) matched 1:4 with 304 non-diagnosed employees (controls) of the same profession and company (male commercial airline pilots). Cases and controls were matched on age, rank and date of hire. Absence rate was defined as sick time hours used over the sum of the minimum guaranteed pay hours, annualized using the months the pilot worked for the year. Conditional logistic regression was used to determine whether absence predicts an AOD disorder, starting 3 years prior to the cases receiving the AOD diagnosis. Repeated-measures ANOVA, t tests and rate ratios (with 95% confidence intervals) were used to determine differences between cases and controls in absence usage for 3 years pre- and 5 years post-treatment. Mean replacement costs were calculated for sick leave usage 3 years pre- and 5 years post-treatment to estimate the cost of sick leave from the perspective of the company. Results. Sick leave, as measured by absence rate, predicted the risk of being diagnosed with an AOD disorder (OR 1.10, 95% CI = 1.06, 1.15) during the 12 months prior to receiving the diagnosis. Mean absence rates for diagnosed employees increased over the three years before treatment, particularly in the year before treatment, whereas the controls' did not (three years, mean 6.80 vs. 5.52; two years, 7.81 vs. 6.30; one year, 11.00 for cases vs. 5.51 for controls). In the first year post-treatment compared with the year prior to treatment, rate ratios indicated a significant (60%) post-treatment reduction in absence rates (RR = 0.40, CI = 0.28, 0.57). Absence rates for cases remained lower than those of controls for the first three years after completion of treatment. Upon discharge from the FAA and company's three-year AOD monitoring program, cases' absence rates increased slightly during the fourth year (controls, mean = 0.09, SD = 0.14; cases, mean = 0.12, SD = 0.21). However, the following year their mean absence rates were again below those of the controls (controls, mean = 0.08, SD = 0.12; cases, mean = 0.06, SD = 0.07). The costs of replacing pilots calling in sick fell significantly, by 60%, between the year of diagnosis and the first year after the cases returned to work, and the reduction in replacement costs continued over the next two years for the treated employees. Conclusions. This research demonstrates the potential of workplace absences as an active organizational surveillance mechanism to assist managers and supervisors in identifying employees who may be experiencing, or at risk of experiencing, an alcohol/drug disorder. Currently, many workplaces use only performance problems and ignore the employee's absence record.
A referral to an EAP or an alcohol/drug evaluation based on the employee's absence/sick leave record, as incorporated into company policy, can provide another useful indicator that may also carry less stigma, thus reducing barriers to seeking help. This research also confirms two conclusions heretofore based only on cross-sectional studies: (1) higher absence rates are associated with employees experiencing an AOD disorder; (2) treatment is associated with lower costs for replacing absent pilots. Due to the uniqueness of the employee population studied (commercial airline pilots) and the organizational documentation of absence, the generalizability of this study to other professions and occupations should be considered limited. Transition to Practice. The odds ratios for the relationship between absence rates and an AOD diagnosis are precise; the OR for the year of diagnosis indicates that the likelihood of being diagnosed increases 10% for every hour change in sick leave taken. In practice, however, a pilot uses approximately 20 hours of sick leave for one trip, because the replacement must be paid the guaranteed minimum of 20 hours. Thus, the rate based on hourly changes is precise but not practical. To provide the organization with practical recommendations, the yearly mean absence rates were used. A pilot flies on average 90 hours a month, or 1,080 hours annually. Cases used almost twice the mean rate of sick time in the year prior to diagnosis (T-1) compared with controls (cases, mean rate = 0.11; controls, 0.06). Cases are therefore expected to use on average 119 hours annually (total annual hours x mean annual absence rate), while controls will use about 60 hours. The controls' 60 hours could translate to 3 trips of 20 hours each. Management could use a standard of 80 hours or more of sick time claimed in a year as the threshold for unacceptable absence, roughly a 25% increase over the controls (a cost to the company of approximately $4,000). At the 80-hour mark, the Chief Pilot would be able to call the pilot in for a routine check as to the nature of the pilot's excessive absence. This management action would be based on a company standard rather than a behavioral or performance issue. Using absence data in this fashion would make it an active surveillance mechanism.
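The expected-hours arithmetic above (annual flying hours x mean absence rate) and the proposed 80-hour standard can be sketched directly. This is a simplified reading of the abstract's recommendation, not the study's own code; the threshold function is an illustrative assumption.

```python
# Expected annual sick-leave hours = total annual flying hours * absence
# rate, per the abstract's arithmetic (90 h/month -> 1,080 h/year).

ANNUAL_HOURS = 90 * 12   # 1,080 hours flown per year

def expected_sick_hours(absence_rate):
    """Expected sick-leave hours for a given mean annual absence rate."""
    return ANNUAL_HOURS * absence_rate

cases = expected_sick_hours(0.11)     # ~119 h, as reported for cases
controls = expected_sick_hours(0.06)  # ~65 h (the abstract rounds to 60)

# Illustrative policy check: flag pilots exceeding the proposed 80-hour
# standard for a routine Chief Pilot check-in.
THRESHOLD = 80

def flag_for_review(sick_hours):
    """True when claimed sick time meets or exceeds the 80-hour standard."""
    return sick_hours >= THRESHOLD

print(f"cases:    {cases:.0f} h -> flagged: {flag_for_review(cases)}")
print(f"controls: {controls:.0f} h -> flagged: {flag_for_review(controls)}")
```

Under these rates the typical case (about 119 hours) trips the 80-hour standard while the typical control (about 65 hours) does not, which is the separation the recommendation relies on.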
Abstract:
"September, 1985."
Abstract:
National Highway Traffic Safety Administration, Washington, D.C.