285 results for Metagenomic comparison of endemic viruses
Abstract:
There has been relatively little change over recent decades in the methods used in research on self-reported delinquency. Face-to-face interviews and self-administered interviews in the classroom are still the predominant alternatives envisaged. New methods have been brought into the picture by recent computer technology, the Internet, and an increasing availability of computer equipment and Internet access in schools. In the autumn of 2004, a controlled experiment was conducted with 1,203 students in Lausanne (Switzerland), where "paper-and-pencil" questionnaires were compared with computer-assisted interviews through the Internet. The experiment included a test of two different definitions of the (same) reference period. After the introductory question ("Did you ever..."), students were asked how many times they had done it (or experienced it), if ever, "over the last 12 months" or "since the October 2003 vacation". Few significant differences were found between the results obtained by the two methods and for the two definitions of the reference period in the answers concerning victimisation, self-reported delinquency, drug use, and failure to respond (missing data). Students were found to be more motivated to respond through the Internet, to take less time filling out the questionnaire, and to be apparently more confident of privacy, while school principals were less reluctant to allow classes to be interviewed through the Internet. The Internet method also involves considerable cost reductions, which is a critical advantage if self-reported delinquency surveys are to become a routinely applied method of evaluation, particularly in countries with limited resources. On balance, the Internet may be instrumental in making research on self-reported delinquency far more feasible in situations where limited resources have so far prevented its implementation.
Abstract:
BACKGROUND: The proportion of adults with positive varicella serology is lower in populations from tropical countries. Therefore, immigrants to countries with a temperate climate are at risk of acquiring varicella infection during adulthood. METHODS: We tested two different strategies to prevent varicella outbreaks in housing facilities for asylum seekers arriving in the Canton of Vaud, Switzerland. The first strategy consisted of a rapid response, with isolation of the affected individuals and vaccination of the susceptible contacts. The second strategy consisted of general vaccination, upon arrival, of all asylum seekers aged 15-39 years with no history of chickenpox. RESULTS: From May 2008 to January 2009 we applied the rapid response strategy. Eight hundred and fifty-eight asylum seekers arrived in the Canton and an attack rate of 2.8% (seven cases among 248 exposed asylum seekers) was observed. The mean cost was US$ 31.35 per asylum seeker. The general vaccination strategy was applied from February 2009 to May 2010, a period during which 966 asylum seekers were registered. This second strategy completely prevented any outbreak at a mean cost of US$ 83.85 per asylum seeker. CONCLUSIONS: Of the two analyzed interventions to prevent varicella outbreaks in housing facilities for asylum seekers, the general vaccination strategy was more effective, more sustainable, and ethically preferable, although more costly.
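The figures reported in this abstract can be reproduced directly; a minimal sketch using only the values stated above (the per-person costs are the published means, not recomputed):

```python
# Arithmetic from the abstract: attack rate under the rapid-response
# strategy, and the per-person cost difference between strategies.
cases, exposed = 7, 248
attack_rate = cases / exposed                 # proportion of exposed who fell ill

cost_rapid_response = 31.35                   # US$ per asylum seeker (858 arrivals)
cost_general_vaccination = 83.85              # US$ per asylum seeker (966 arrivals)
extra_cost = cost_general_vaccination - cost_rapid_response

print(f"attack rate {attack_rate:.1%}, extra cost US$ {extra_cost:.2f}/person")
# prints: attack rate 2.8%, extra cost US$ 52.50/person
```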
Abstract:
BACKGROUND: Empirical antibacterial therapy in hospitals is usually guided by local epidemiologic features reflected by institutional cumulative antibiograms. We investigated the additional information gained by aggregating cumulative antibiograms by type of unit or according to the place of acquisition (i.e. community vs. hospital) of the bacteria. MATERIALS AND METHODS: Antimicrobial susceptibility rates of selected pathogens were collected over a 4-year period in a university-affiliated hospital. Hospital-wide antibiograms were compared with those stratified by type of unit and sampling time (<48 or >48 h after hospital admission). RESULTS: Strains isolated >48 h after admission were less susceptible than those presumably arising from the community (<48 h). The comparison of units revealed significant differences among strains isolated >48 h after admission. When compared to hospital-wide antibiograms, susceptibility rates were lower in the ICU and surgical units for Escherichia coli to amoxicillin-clavulanate, enterococci to penicillin, and Pseudomonas aeruginosa to anti-pseudomonal beta-lactams, and in medical units for Staphylococcus aureus to oxacillin. In contrast, few differences were observed among strains isolated within 48 h of admission. CONCLUSIONS: Hospital-wide antibiograms reflect the susceptibility pattern for a specific unit with respect to community-acquired, but not to hospital-acquired strains. Antibiograms adjusted to these parameters may be useful in guiding the choice of empirical antibacterial therapy.
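The stratification described above (susceptibility aggregated by unit and by sampling time relative to admission) can be sketched with a simple tally; the isolate records here are synthetic, invented purely to illustrate the grouping, not data from the study:

```python
# Hypothetical sketch: aggregate an antibiogram by (unit, <48 h after
# admission), as the abstract describes. Records are synthetic.
from collections import defaultdict

# (unit, within_48h_of_admission, susceptible) per isolate
isolates = [
    ("ICU", False, False), ("ICU", False, True),
    ("medical", True, True), ("medical", True, True),
]

totals = defaultdict(lambda: [0, 0])   # (unit, early) -> [n_susceptible, n_total]
for unit, early, susceptible in isolates:
    totals[(unit, early)][0] += susceptible
    totals[(unit, early)][1] += 1

for (unit, early), (s, n) in sorted(totals.items()):
    window = "<48 h" if early else ">48 h"
    print(f"{unit} {window}: {100 * s / n:.0f}% susceptible (n={n})")
```

A real cumulative antibiogram would tally per organism-drug pair; the key idea is only the extra grouping keys.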
Abstract:
AIMS: This study was performed to compare the sensitivity of ultrasonography, computerized tomography during arterial portography, delayed computerized tomography, and magnetic resonance imaging in detecting focal liver lesions. Forty-three patients with primary or secondary malignant liver lesions were studied prior to surgical intervention. METHODS: The results of the imaging studies were compared with intraoperative examination of the liver, intraoperative ultrasonography, and pathology results (29 patients). In the non-operated group (14 patients), we compared the number of lesions detected by each technique. RESULTS: One hundred and forty-six lesions were detected. In operated patients, sensitivity was 84% for computerized tomography during arterial portography, 61.3% for delayed scan, 63.3% for magnetic resonance imaging, and 51% for ultrasonography. In patients who did not undergo surgery, magnetic resonance imaging was more sensitive in detecting lesions. CONCLUSIONS: In both the operated and non-operated patient series, CT during arterial portography had the highest sensitivity, but magnetic resonance imaging had the most consistent overall results.
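Sensitivity here is simply the fraction of reference-standard lesions each modality detected. A one-line sketch; the per-modality lesion counts below are hypothetical (chosen to be consistent with the reported 84%), since the abstract gives only the percentages:

```python
def sensitivity(detected, total_lesions):
    """Fraction of known lesions a modality detected (true-positive rate)."""
    return detected / total_lesions

# Hypothetical count consistent with the 84% reported for CT during
# arterial portography; actual per-modality counts are not in the abstract.
print(f"{sensitivity(123, 146):.0%}")
# prints: 84%
```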
Abstract:
This study assesses whether severity of physical partner aggression is associated with alcohol consumption at the time of the incident, and whether the relationship between drinking and aggression severity is the same for men and women and across different countries. National or large regional general population surveys were conducted in 13 countries as part of the GENACIS collaboration. Respondents described the most physically aggressive act done to them by a partner in the past 2 years, rated the severity of aggression on a scale of 1 to 10, and reported whether either partner had been drinking when the incident occurred. Severity ratings were significantly higher for incidents in which one or both partners had been drinking compared to incidents in which neither partner had been drinking. The relationship did not differ significantly for men and women or by country. We conclude that alcohol consumption may serve to potentiate violence when it occurs, and this pattern holds across a diverse set of cultures. Further research is needed that focuses explicitly on the nature of alcohol's contribution to intimate partner aggression. Prevention needs to address the possibility of enhanced dangers of intimate partner violence when the partners have been drinking and eliminate any systemic factors that permit alcohol to be used as an excuse. Clinical services for perpetrators and victims of partner violence need to address the role of drinking practices, including the dynamics and process of aggressive incidents that occur when one or both partners have been drinking.
Abstract:
The antihypertensive effect of indapamide (2.5 mg/day) was compared to that obtained with a placebo in a controlled trial carried out by 11 physicians in their private practice. Thirty-one patients with uncomplicated essential hypertension were included. After a run-in period of 3 weeks without any treatment, either indapamide (n = 16) or a placebo (n = 15) were administered for 8 weeks in double-blind fashion. Blood pressure decreased in both groups. In patients treated with indapamide, systolic pressure was significantly lower than in those given the placebo at 3 out of the 4 follow-up visits; diastolic pressure, however, was significantly lower only at the end of the trial. Both the active drug and the placebo were well tolerated. No significant change in body weight, plasma potassium and uric acid occurred during the study in either group of patients. It appears therefore that indapamide, at a dose which apparently has no major diuretic effect, may be useful for practitioners in managing patients with mild to moderate hypertension.
Abstract:
A three-dimensional cell culture system was used as a model to study the influence of low levels of mercury on the developing brain. Aggregating cell cultures of fetal rat telencephalon were treated with mercury for 10 days, either during an early developmental period (i.e., between days 5 and 15 in vitro) or during a phase of advanced maturation (i.e., between days 25 and 35). An inorganic (HgCl2) and an organic mercury compound (monomethylmercury chloride, MeHgCl) were examined. By monitoring changes in cell type-specific enzyme activities, the concentration-dependent toxicity of the compounds was determined. In immature cultures, a general cytotoxicity was observed at 10^-6 M for both mercury compounds. In these cultures, HgCl2 appeared somewhat more toxic than MeHgCl. However, no appreciable demethylation of MeHgCl could be detected, indicating similar toxic potencies for both mercury compounds. In highly differentiated cultures, by contrast, MeHgCl exhibited a higher toxic potency than HgCl2. In addition, at 10^-6 M, MeHgCl showed pronounced neuron-specific toxicity. Below the cytotoxic concentrations, distinct glia-specific reactions could be observed with both mercury compounds. An increase in the immunoreactivity for glial fibrillary acidic protein, typical of gliosis, could be observed at concentrations between 10^-9 M and 10^-7 M in immature cultures, and between 10^-8 M and 3 x 10^-5 M in highly differentiated cultures. A conspicuous increase in the number and clustering of GSI-B4 lectin-binding cells, indicating a microglial response, was found at concentrations between 10^-10 M and 10^-7 M. These development-dependent and cell type-specific effects may reflect the pathogenic potential of long-term exposure to subclinical doses of mercury.
Abstract:
BACKGROUND AND PURPOSE: Several prognostic scores have been developed to predict the risk of symptomatic intracranial hemorrhage (sICH) after ischemic stroke thrombolysis. We compared the performance of these scores in a multicenter cohort. METHODS: We merged prospectively collected data of patients with consecutive ischemic stroke who received intravenous thrombolysis in 7 stroke centers. We identified and evaluated 6 scores that can provide an estimate of the risk of sICH in hyperacute settings: MSS (Multicenter Stroke Survey); HAT (Hemorrhage After Thrombolysis); SEDAN (blood sugar, early infarct signs, [hyper]dense cerebral artery sign, age, NIH Stroke Scale); GRASPS (glucose at presentation, race [Asian], age, sex [male], systolic blood pressure at presentation, and severity of stroke at presentation [NIH Stroke Scale]); SITS (Safe Implementation of Thrombolysis in Stroke); and SPAN (stroke prognostication using age and NIH Stroke Scale)-100 positive index. We included only patients with available variables for all scores. We calculated the area under the receiver operating characteristic curve (AUC-ROC) and also performed logistic regression and the Hosmer-Lemeshow test. RESULTS: The final cohort comprised 3012 eligible patients, of whom 221 (7.3%) had sICH per National Institute of Neurological Disorders and Stroke, 141 (4.7%) per European Cooperative Acute Stroke Study II, and 86 (2.9%) per Safe Implementation of Thrombolysis in Stroke criteria. The performance of the scores assessed with AUC-ROC for predicting European Cooperative Acute Stroke Study II sICH was: MSS, 0.63 (95% confidence interval, 0.58-0.68); HAT, 0.65 (0.60-0.70); SEDAN, 0.70 (0.66-0.73); GRASPS, 0.67 (0.62-0.72); SITS, 0.64 (0.59-0.69); and SPAN-100 positive index, 0.56 (0.50-0.61). SEDAN had significantly higher AUC-ROC values compared with all other scores, except for GRASPS where the difference was nonsignificant. SPAN-100 performed significantly worse compared with other scores. 
The discriminative ranking of the scores was the same for the National Institute of Neurological Disorders and Stroke and the Safe Implementation of Thrombolysis in Stroke definitions, with SEDAN performing best, GRASPS second, and SPAN-100 worst. CONCLUSIONS: SPAN-100 had the worst predictive power, and SEDAN consistently the highest. However, none of the scores had better than moderate performance.
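The AUC-ROC used to compare the scores above equals the probability that a randomly chosen patient with sICH receives a higher score than a randomly chosen patient without (the Mann-Whitney interpretation). A minimal self-contained sketch on synthetic data; the real study applied this to 3012 patients and six published scores:

```python
def auc_roc(scores, labels):
    """AUC-ROC via the Mann-Whitney estimate: probability a random
    positive case outranks a random negative one (ties count half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Synthetic risk scores (higher = riskier) and sICH outcomes (1 = sICH)
print(auc_roc([0.9, 0.4, 0.5, 0.2, 0.1], [1, 1, 0, 0, 0]))
```

An AUC of 0.5 is chance-level discrimination; the 0.56-0.70 range reported above is why the authors call performance no better than moderate.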
Abstract:
While 3D thin-slab coronary magnetic resonance angiography (MRA) has traditionally been performed using a Cartesian acquisition scheme, spiral k-space data acquisition offers several potential advantages. However, these strategies have not been directly compared in the same subjects using similar methodologies. Thus, in the present study a comparison was made between 3D coronary MRA using Cartesian segmented k-space gradient-echo and spiral k-space data acquisition schemes. In both approaches the same spatial resolution was used and data were acquired during free breathing using navigator gating and prospective slice tracking. Magnetization preparation (T(2) preparation and fat suppression) was applied to increase the contrast. For spiral imaging two different examinations were performed, using one or two spiral interleaves during each R-R interval. Spiral acquisitions were found to be superior to the Cartesian scheme with respect to the signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR) (both P < 0.001), and image quality. The single spiral per R-R interval acquisition had the same total scan duration as the Cartesian acquisition, but the single spiral had the best image quality and a 2.6-fold increase in SNR. The double-interleaf spiral approach showed a 50% reduction in scanning time, a 1.8-fold increase in SNR, and similar image quality when compared to the standard Cartesian approach. Spiral 3D coronary MRA appears to be preferable to the Cartesian scheme. The increase in SNR may be "traded" for either shorter scanning times using multiple consecutive spiral interleaves, or for enhanced spatial resolution.
Abstract:
Induction of apoptosis of virus-infected cells is an important host cell defence mechanism. However, some viruses have incorporated genes that encode anti-apoptotic proteins or modulate the expression of cellular regulators of apoptosis. Here, Edgar Meinl and colleagues discuss recent evidence that viral interference with host cell apoptosis leads to enhanced viral replication, and to evasion of cytotoxic T-cell effects.
Abstract:
OBJECTIVES: Reactivation of latent tuberculosis (TB) in inflammatory bowel disease (IBD) patients treated with antitumor necrosis factor-alpha medication is a serious problem. Currently, TB screening includes chest x-rays and a tuberculin skin test (TST). The interferon-gamma release assay (IGRA) QuantiFERON-TB Gold In-Tube (QFT-G-IT) shows better specificity for diagnosing TB than the skin test. This study evaluates the two test methods among IBD patients. METHODS: Both TST and IGRA were performed on 212 subjects (114 Crohn's disease, 44 ulcerative colitis, 10 indeterminate colitis, 44 controls). RESULTS: Eighty-one percent of IBD patients were under immunosuppressive therapy; 71% of all subjects were vaccinated with Bacille Calmette-Guérin; 18% of IBD patients and 43% of controls tested positive with the skin test (P < 0.0001). Vaccinated controls tested positive more often with the skin test (52%) than did vaccinated IBD patients (23%) (P = 0.011). Significantly fewer immunosuppressed patients tested positive with the skin test than did patients not receiving therapy (P = 0.007); 8% of patients tested positive with the QFT-G-IT test (14/168) compared to 9% (4/44) of controls. Test agreement was significantly higher in the controls (P = 0.044) than in the IBD group. CONCLUSIONS: Agreement between the two test methods is poor in IBD patients. In contrast to the QFT-G-IT test, the TST is negatively influenced by immunosuppressive medication and vaccination status, and should thus be replaced by the IGRA for TB screening in immunosuppressed patients having IBD.
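Agreement between two binary diagnostic tests, as compared above, is commonly quantified with Cohen's kappa (the abstract does not name the statistic it used, so kappa here is an illustrative assumption). A minimal sketch on toy paired TST/IGRA results:

```python
def cohens_kappa(a, b):
    """Chance-corrected agreement between two paired binary test results
    (1 = positive, 0 = negative), one pair per subject."""
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n      # observed agreement
    p1, p2 = sum(a) / n, sum(b) / n
    pe = p1 * p2 + (1 - p1) * (1 - p2)              # agreement expected by chance
    return (po - pe) / (1 - pe)

# Toy paired results for 4 subjects (not study data)
print(cohens_kappa([1, 1, 0, 0], [1, 0, 0, 0]))
# prints: 0.5
```

Kappa is 1.0 for perfect agreement and 0 when agreement is no better than chance, which is what "poor agreement" in the conclusion refers to.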
Abstract:
Background: Cardiac magnetic resonance (CMR) is accepted as a method to assess suspected coronary artery disease (CAD). Nonetheless, invasive coronary angiography (CXA), combined or not with fractional flow reserve (FFR), remains the main diagnostic test to evaluate CAD. Little data exist on the economic impact of the use of these procedures in a population with a low to intermediate pre-test probability. Objective: To compare the costs of 3 decision strategies to revascularize a patient with suspected CAD: 1) a strategy guided by CMR; 2) a hypothetical strategy guided by CXA-FFR; and 3) a hypothetical strategy guided by CXA alone.