16 results for "Similar tests"
at Université de Lausanne, Switzerland
Abstract:
BACKGROUND: First hospitalisation for a psychotic episode causes intense distress to patients and families, but offers an opportunity to make a diagnosis and start treatment. However, linkage to outpatient psychiatric care remains a notoriously difficult step for young psychotic patients, who frequently interrupt treatment after hospitalisation. Persistence of symptoms and untreated psychosis may therefore remain a problem despite hospitalisation and proper diagnosis. With persisting psychotic symptoms, numerous complications may arise: breakdown in relationships, loss of family and social support, loss of employment or study interruption, denial of disease, depression, suicide, substance abuse and violence. Understanding mechanisms that might promote linkage to outpatient psychiatric care is therefore a critical issue, especially in early intervention in psychotic disorders. OBJECTIVE: To study which factors hinder or promote linkage of young psychotic patients to outpatient psychiatric care after a first hospitalisation, in the absence of a vertically integrated program for early psychosis. METHODS: File audit study of all patients aged 18 to 30 who were admitted for the first time to the psychiatric University Hospital of Lausanne in the year 2000. For statistical analysis, chi-square tests were used for categorical variables and t-tests for dimensional variables; p < 0.05 was considered statistically significant. RESULTS: 230 patients aged 18 to 30 were admitted to the Lausanne University psychiatric hospital for the first time during the year 2000, 52 of them (23%) with a diagnosis of psychosis. Patients with psychosis were mostly male (83%) compared with non-psychosis patients (49%). Furthermore, they had (1) a 10-day longer mean duration of stay (24 vs 14 days), (2) a higher rate of compulsory admissions (53% vs 22%), and (3) were more often hospitalised by a psychiatrist rather than by a general practitioner (83% vs 53%). Other socio-demographic and clinical features at admission were similar in the two groups. Among the 52 psychotic patients, 10 did not stay in the catchment area for subsequent treatment. Among the 42 psychotic patients who remained in the catchment area after discharge, 20 (48%) did not attend the scheduled or rescheduled outpatient appointment. None of the socio-demographic characteristics was associated with attendance at outpatient appointments. On the other hand, voluntary admission and suicidal ideation before admission were significantly related to attending the initial appointment. Moreover, some elements of treatment seemed to be associated with a higher likelihood of attending outpatient treatment: (1) provision of information to the patient regarding diagnosis, (2) discussion of the treatment plan between in- and outpatient staff, (3) involvement of the outpatient team during hospitalisation, and (4) elaboration of concrete strategies to meet basic needs, organise daily activities or education, and reach for help in case of need. CONCLUSION: As in other studies, half of the patients admitted for a first psychotic episode failed to link to outpatient psychiatric care. Our study suggests that treatment rather than patient characteristics plays a critical role in this phenomenon.
Development of a partnership and involvement of patients in the decision process, provision of good information regarding the illness, clear definition of the treatment plan, development of concrete strategies to cope with the illness and its potential complications, and involvement of the outpatient treating team already during hospitalisation all emerged as critical strategies to facilitate adherence to outpatient care. While the current rate of disengagement after admission is highly concerning, our findings are encouraging, since these strategies can easily be implemented. An open approach to psychosis, the development of partnership with patients and better coordination between inpatient and outpatient teams should therefore be among the targets of early intervention programs. These observations might help set priorities when conceptualising new programs and facilitate the implementation of services that promote engagement of patients in treatment during the critical initial phase of psychotic disorders.
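The group comparisons described in this abstract (chi-square tests for categorical variables, t-tests for dimensional ones, p < 0.05 as the significance threshold) can be sketched as follows; the counts are reconstructed from the reported percentages for illustration only, and the length-of-stay values are invented placeholders, not the study data.

```python
# Hedged sketch of the comparisons described above: chi-square for a
# categorical variable, t-test for a dimensional one. Counts are
# reconstructed from the reported percentages; other values are invented.
import numpy as np
from scipy import stats

# 2x2 table: compulsory vs voluntary admission, by group
table = np.array([[28, 24],     # psychosis patients (approx. 53% compulsory)
                  [39, 139]])   # non-psychosis patients (approx. 22% compulsory)
chi2, p_cat, dof, _ = stats.chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p_cat:.4f}")

# Dimensional variable: length of stay (days) in each group (invented data)
los_psychosis = np.random.default_rng(0).normal(24, 10, 52)
los_other = np.random.default_rng(1).normal(14, 8, 178)
t, p_dim = stats.ttest_ind(los_psychosis, los_other)
print(f"t = {t:.2f}, p = {p_dim:.4f}, significant = {p_dim < 0.05}")
```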
Abstract:
BACKGROUND: A possible strategy for increasing smoking cessation rates could be to provide smokers who have contact with healthcare systems with feedback on the biomedical or potential future effects of smoking, e.g. measurement of exhaled carbon monoxide (CO), lung function, or genetic susceptibility to lung cancer. OBJECTIVES: To determine the efficacy of biomedical risk assessment provided in addition to various levels of counselling, as a contributing aid to smoking cessation. SEARCH STRATEGY: We systematically searched the Cochrane Collaboration Tobacco Addiction Group Specialized Register, Cochrane Central Register of Controlled Trials 2008 Issue 4, MEDLINE (1966 to January 2009), and EMBASE (1980 to January 2009). We combined methodological terms with terms related to smoking cessation counselling and biomedical measurements. SELECTION CRITERIA: Inclusion criteria were: a randomized controlled trial design; subjects participating in smoking cessation interventions; interventions based on a biomedical test to increase motivation to quit; control groups receiving all other components of intervention; an outcome of smoking cessation rate at least six months after the start of the intervention. DATA COLLECTION AND ANALYSIS: Two assessors independently conducted data extraction on each paper, with disagreements resolved by consensus. Results were expressed as a relative risk (RR) for smoking cessation with 95% confidence intervals (CI). Where appropriate, a pooled effect was estimated using a Mantel-Haenszel fixed effect method. MAIN RESULTS: We included eleven trials using a variety of biomedical tests. Two pairs of trials had sufficiently similar recruitment, setting and interventions to calculate a pooled effect; there was no evidence that CO measurement in primary care (RR 1.06, 95% CI 0.85 to 1.32) or spirometry in primary care (RR 1.18, 95% CI 0.77 to 1.81) increased cessation rates. We did not pool the other seven trials. One trial in primary care detected a significant benefit of lung age feedback after spirometry (RR 2.12; 95% CI 1.24 to 3.62). One trial that used ultrasonography of carotid and femoral arteries and photographs of plaques detected a benefit (RR 2.77; 95% CI 1.04 to 7.41) but enrolled a population of light smokers. Five trials failed to detect evidence of a significant effect. One of these tested CO feedback alone and CO + genetic susceptibility as two different interventions; none of the three possible comparisons detected significant effects. Three others used a combination of CO and spirometry feedback in different settings, and one tested for a genetic marker. AUTHORS' CONCLUSIONS: There is little evidence about the effects of most types of biomedical tests for risk assessment. Spirometry combined with an interpretation of the results in terms of 'lung age' had a significant effect in a single good quality trial. Mixed quality evidence does not support the hypothesis that other types of biomedical risk assessment increase smoking cessation in comparison to standard treatment. Only two pairs of studies were similar enough in terms of recruitment, setting, and intervention to allow meta-analysis.
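As a rough illustration of the per-trial relative risks and the Mantel-Haenszel fixed-effect pooling described in the analysis, the sketch below computes an RR with its 95% CI for each of two hypothetical trials and a pooled point estimate; the event counts are invented, and the Greenland-Robins variance needed for an exact pooled confidence interval is omitted for brevity.

```python
# Hedged sketch: per-trial relative risk with 95% CI, plus a
# Mantel-Haenszel fixed-effect pooled RR. Counts are invented examples,
# not data from the review.
import math

def risk_ratio_ci(a, n1, c, n2, z=1.96):
    """RR and 95% CI for one trial: a/n1 quitters (intervention), c/n2 (control)."""
    rr = (a / n1) / (c / n2)
    se_log = math.sqrt(1/a - 1/n1 + 1/c - 1/n2)
    return rr, math.exp(math.log(rr) - z*se_log), math.exp(math.log(rr) + z*se_log)

def mh_pooled_rr(strata):
    """Mantel-Haenszel fixed-effect pooled RR over (a, n1, c, n2) strata."""
    num = sum(a * n2 / (n1 + n2) for a, n1, c, n2 in strata)
    den = sum(c * n1 / (n1 + n2) for a, n1, c, n2 in strata)
    return num / den

trials = [(40, 300, 38, 305), (25, 210, 21, 200)]   # hypothetical CO-feedback trials
for t in trials:
    print("RR = %.2f (95%% CI %.2f to %.2f)" % risk_ratio_ci(*t))
print("Pooled MH RR = %.2f" % mh_pooled_rr(trials))
```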
Abstract:
OBJECTIVE: To assess the impact of hypertrophy of the future liver remnant (FLR) induced by preoperative portal vein embolization (PVE) on immediate postoperative complications after a standardized major liver resection. SUMMARY BACKGROUND DATA: PVE is usually indicated when the FLR is estimated to be too small for major liver resection. However, few data exist regarding the exact quantification of the minimal functional hepatic volume required to avoid postoperative complications in patients with or without chronic liver disease. METHODS: All consecutive patients in whom an elective right hepatectomy was feasible and who fulfilled the inclusion and exclusion criteria between 1998 and 2000 were alternately assigned to either immediate surgery or surgery after PVE. Among 55 patients (25 liver metastases, 2 cholangiocarcinomas, and 28 hepatocellular carcinomas), 28 underwent right hepatectomy after PVE and 27 underwent immediate surgery. Twenty-eight patients had chronic liver disease. FLR and estimated rate of functional future liver remnant (%FFLR) volumes were assessed by computed tomography. RESULTS: The mean increases of FLR and %FFLR 4 to 8 weeks after PVE were respectively 44 +/- 19% and 16 +/- 7% for patients with normal liver and 35 +/- 28% and 9 +/- 3% for those with chronic liver disease. All patients with normal liver and 86% with chronic liver disease experienced hypertrophy after PVE. The postoperative course of patients with normal liver who underwent PVE before right hepatectomy was similar to that of those with immediate surgery. In contrast, PVE in patients with chronic liver disease significantly decreased the incidence of postoperative complications as well as the intensive care unit stay and total hospital stay after right hepatectomy. CONCLUSIONS: Before elective right hepatectomy, the hypertrophy of the FLR induced by PVE had no beneficial effect on the postoperative course in patients with normal liver. In contrast, in patients with chronic liver disease, the hypertrophy of the FLR induced by PVE significantly decreased the rate of postoperative complications.
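The hypertrophy percentages quoted above follow directly from pre- and post-embolization CT volumetry; a minimal sketch of that arithmetic is given below, assuming (as an interpretation, not a statement from the study) that %FFLR denotes the FLR as a share of total functional liver volume, and using invented volumes.

```python
# Minimal sketch of the CT-volumetry arithmetic behind the quoted
# hypertrophy figures; volumes are invented, and the %FFLR definition
# below is an assumption for illustration.
def percent_increase(pre_ml: float, post_ml: float) -> float:
    """Relative increase of the future liver remnant (FLR) after PVE."""
    return 100.0 * (post_ml - pre_ml) / pre_ml

def percent_fflr(flr_ml: float, total_functional_liver_ml: float) -> float:
    """FLR expressed as a share of total functional liver volume (%FFLR)."""
    return 100.0 * flr_ml / total_functional_liver_ml

print(percent_increase(350.0, 500.0))   # e.g. ~43% hypertrophy
print(percent_fflr(500.0, 1400.0))      # e.g. FLR ~36% of functional liver
```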
Abstract:
BACKGROUND: CD19 is a B cell lineage specific surface receptor whose broad expression, from pro-B cells to early plasma cells, makes it an attractive target for the immunotherapy of B cell malignancies. In this study we present the generation of a novel humanized anti-CD19 monoclonal antibody (mAb), GBR 401, and investigate its therapeutic potential on human B cell malignancies. METHODS: GBR 401 was partially defucosylated in order to enhance its cytotoxic function. We analyzed the in vitro depleting effects of GBR 401 against B cell lines and primary malignant B cells from patients, in the presence or absence of purified NK cells isolated from healthy donors. In vivo, the antibody dependent cellular cytotoxicity (ADCC) efficacy of GBR 401 was assessed in a B cell depletion model consisting of SCID mice injected with healthy human donor PBMC, and in a malignant B cell depletion model in which SCID mice were xenografted with both primary human B-CLL tumors and heterologous human NK cells. Furthermore, the anti-tumor activity of GBR 401 was also evaluated in a xenochimeric mouse model of human Burkitt lymphoma using mice xenografted intravenously with Raji cells. Pharmacological inhibition tests were used to characterize the mechanism of the cell death induced by GBR 401. RESULTS: GBR 401 exerts a potent in vitro and in vivo cytotoxic activity against primary samples from patients representing various B-cell malignancies. GBR 401 elicits a markedly higher level of ADCC on primary malignant B cells when compared to a similar fucosylated mAb and to Rituximab, the current anti-CD20 mAb standard immunotherapeutic treatment for B cell malignancies, showing killing at 500 times lower concentrations. Of interest, GBR 401 also exhibits a potent direct killing effect in different malignant B cell lines that involves homotypic aggregation mediated by actin relocalization. CONCLUSION: These results help consolidate clinical interest in developing GBR 401 for the treatment of hematopoietic B cell malignancies, particularly for patients refractory to anti-CD20 mAb therapies.
Abstract:
Point-of-care (POC) tests offer potentially substantial benefits for the management of infectious diseases, mainly by shortening the time to result and by making the test available at the bedside or at remote care centres. Commercial POC tests are already widely available for the diagnosis of bacterial and viral infections and for parasitic diseases, including malaria. Infectious diseases specialists and clinical microbiologists should be aware of the indications and limitations of each rapid test, so that they can use them appropriately and correctly interpret their results. The clinical applications and performance of the most relevant and commonly used POC tests are reviewed. Some of these tests exhibit insufficient sensitivity, and should therefore be coupled to confirmatory tests when the results are negative (e.g. Streptococcus pyogenes rapid antigen detection test), whereas the results of others need to be confirmed when positive (e.g. malaria). New molecular-based tests exhibit better sensitivity and specificity than former immunochromatographic assays (e.g. Streptococcus agalactiae detection). In the coming years, further evolution of POC tests may lead to new diagnostic approaches, such as panel testing, targeting not just a single pathogen, but all possible agents suspected in a specific clinical setting. To reach this goal, the development of serology-based and/or molecular-based microarrays/multiplexed tests will be needed. The availability of modern technology and new microfluidic devices will provide clinical microbiologists with the opportunity to be back at the bedside, proposing a large variety of POC tests that will allow quicker diagnosis and improved patient care.
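To make the point about confirmatory testing concrete, the following sketch computes positive and negative predictive values from sensitivity, specificity and prevalence; the numbers are invented for illustration and show why a rapid test with limited sensitivity needs confirmation of negative results.

```python
# Hedged illustration of why test performance drives the need for
# confirmatory testing; sensitivity, specificity and prevalence are invented.
def predictive_values(sensitivity, specificity, prevalence):
    tp = sensitivity * prevalence            # true positives (per unit population)
    fn = (1 - sensitivity) * prevalence      # false negatives
    tn = specificity * (1 - prevalence)      # true negatives
    fp = (1 - specificity) * (1 - prevalence)  # false positives
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    return ppv, npv

# e.g. a rapid antigen test with modest sensitivity in a 20%-prevalence setting
ppv, npv = predictive_values(sensitivity=0.80, specificity=0.97, prevalence=0.20)
print(f"PPV = {ppv:.2f}, NPV = {npv:.2f}")   # NPV < 1 motivates confirming negatives
```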
Abstract:
BACKGROUND: Data from prospective cohort studies regarding the association between subclinical hyperthyroidism and cardiovascular outcomes are conflicting. We aimed to assess the risks of total and coronary heart disease (CHD) mortality, CHD events, and atrial fibrillation (AF) associated with endogenous subclinical hyperthyroidism among all available large prospective cohorts. METHODS: Individual data on 52 674 participants were pooled from 10 cohorts. Coronary heart disease events were analyzed in 22 437 participants from 6 cohorts with available data, and incident AF was analyzed in 8711 participants from 5 cohorts. Euthyroidism was defined as a thyrotropin level between 0.45 and 4.49 mIU/L and endogenous subclinical hyperthyroidism as a thyrotropin level lower than 0.45 mIU/L with normal free thyroxine levels, after excluding those receiving thyroid-altering medications. RESULTS: Of 52 674 participants, 2188 (4.2%) had subclinical hyperthyroidism. During follow-up, 8527 participants died (including 1896 from CHD), 3653 of 22 437 had CHD events, and 785 of 8711 developed AF. In age- and sex-adjusted analyses, subclinical hyperthyroidism was associated with increased total mortality (hazard ratio [HR], 1.24; 95% CI, 1.06-1.46), CHD mortality (HR, 1.29; 95% CI, 1.02-1.62), CHD events (HR, 1.21; 95% CI, 0.99-1.46), and AF (HR, 1.68; 95% CI, 1.16-2.43). Risks did not differ significantly by age, sex, or preexisting cardiovascular disease and were similar after further adjustment for cardiovascular risk factors, with attributable risks ranging from 14.5% for total mortality to 41.5% for AF in those with subclinical hyperthyroidism. Risks for CHD mortality and AF (but not other outcomes) were higher for thyrotropin levels lower than 0.10 mIU/L compared with thyrotropin levels between 0.10 and 0.44 mIU/L (for both, P for trend = .03). CONCLUSION: Endogenous subclinical hyperthyroidism is associated with increased risks of total mortality, CHD mortality, and incident AF, with the highest risks of CHD mortality and AF when the thyrotropin level is lower than 0.10 mIU/L.
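The age- and sex-adjusted hazard ratios reported above come from survival models fitted to the pooled individual-participant data; a minimal sketch of such a model, assuming a pandas DataFrame with hypothetical column names (followup_years, died, subclinical_hyper, age, sex) and toy values rather than the cohort data, could look like this with the lifelines package.

```python
# Hypothetical sketch of an age- and sex-adjusted Cox model for total
# mortality by thyroid status; column names and rows are invented,
# not the pooled cohort data used in the study.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "followup_years":    [8.2, 3.1, 10.0, 6.5, 9.9, 2.4, 7.3, 4.8, 9.1, 5.5, 6.0, 8.8],
    "died":              [0,   1,   0,    1,   0,   1,   0,   1,   0,   0,   1,   0],
    "subclinical_hyper": [0,   1,   0,    0,   1,   0,   1,   1,   0,   1,   0,   0],  # TSH < 0.45 mIU/L, normal fT4
    "age":               [62,  71,  58,   66,  74,  69,  60,  77,  65,  72,  80,  59],
    "sex":               [0,   1,   1,    0,   0,   1,   1,   0,   0,   1,   1,   0],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="followup_years", event_col="died")
cph.print_summary()   # the exp(coef) column gives the adjusted hazard ratios
```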
Abstract:
The aim of this study was to determine whether breath 13CO2 measurements could be used to assess compliance with a diet containing carbohydrates naturally enriched in 13C. The study was divided into two periods: period 1 (4-day baseline), a diet with low 13C/12C ratio carbohydrates; period 2 (5 days), an isocaloric diet with high 13C/12C ratio carbohydrates (corn, cane sugar, pineapple, millet). Respiratory gas exchange (by indirect calorimetry), urinary nitrogen excretion and breath 13CO2 were measured every morning in post-absorptive conditions, both at rest and during a 45-min low-intensity exercise (walking on a treadmill). The subjects were 10 healthy lean women (BMI 20.4 +/- 1.7 kg/m2, body fat 24.4 +/- 1.3%). The 13C enrichment of oxidized carbohydrate and of breath 13CO2 was compared with the enrichment of exogenous dietary carbohydrates. At rest, the enrichment of oxidized carbohydrate increased significantly after one day of the 13C-enriched carbohydrate diet and reached a steady value (103 +/- 16%) similar to the enrichment of exogenous carbohydrates. During exercise, the 13C enrichment of oxidized carbohydrate remained significantly lower (68 +/- 17%) than that of dietary carbohydrates. Compliance with a diet with a high content of carbohydrates naturally enriched in 13C may therefore be assessed from the measurement of breath 13CO2 enrichment combined with respiratory gas exchange in resting, post-absorptive conditions.
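A common way to derive carbohydrate oxidation from respiratory gas exchange and urinary nitrogen is the Frayn (1983) equations; the sketch below uses them purely for illustration (the abstract does not state which equations the study used), together with a simple relative-enrichment ratio as a stand-in for the study's enrichment calculation, with invented input values.

```python
# Hedged sketch: carbohydrate and fat oxidation from indirect calorimetry
# using the Frayn (1983) equations, plus an illustrative relative
# 13C-enrichment ratio. The study's exact equations are not given;
# all input values are invented.
def cho_oxidation_g_per_min(vo2_l_min, vco2_l_min, urinary_n_g_min):
    """Carbohydrate oxidation (g/min), Frayn: 4.55*VCO2 - 3.21*VO2 - 2.87*N."""
    return 4.55 * vco2_l_min - 3.21 * vo2_l_min - 2.87 * urinary_n_g_min

def fat_oxidation_g_per_min(vo2_l_min, vco2_l_min, urinary_n_g_min):
    """Fat oxidation (g/min), Frayn: 1.67*VO2 - 1.67*VCO2 - 1.92*N."""
    return 1.67 * vo2_l_min - 1.67 * vco2_l_min - 1.92 * urinary_n_g_min

def relative_enrichment(oxidized_cho_enrichment, dietary_cho_enrichment):
    """Enrichment of oxidized carbohydrate as a % of dietary enrichment (illustrative)."""
    return 100.0 * oxidized_cho_enrichment / dietary_cho_enrichment

print(cho_oxidation_g_per_min(0.25, 0.21, 0.008))   # resting example, g/min
print(relative_enrichment(10.3, 10.0))              # e.g. ~103% of the dietary value
```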
Abstract:
SETTING: Ambulatory paediatric clinic in Lausanne, Switzerland, a country with a significant proportion of tuberculosis (TB) among immigrants. AIM: To assess the factors associated with positive tuberculin skin tests (TST) among children examined during a health check-up or during TB contact tracing, notably the influence of BCG (Bacille Calmette-Guérin) vaccination and history of TB contact. METHOD: A descriptive study of children who had a TST (2 units RT23) between November 2002 and April 2004. Age, sex, history of TB contact, BCG vaccination status, country of origin and birth outside Switzerland were recorded. RESULTS: Of 234 children, 176 (75%) had a reaction equal to zero and 31 (13%) tested positive (>10 mm). In a linear regression model, the size of the TST varied significantly according to the history of TB contact, age, TB incidence in the country of origin and BCG vaccination status, but not according to sex or birth in or outside Switzerland. In a logistic regression model including all the recorded variables, age (odds ratio [OR] = 1.21, 95% CI 1.08-1.35), a history of TB contact (OR = 7.31, 95% CI 2.23-24) and the incidence of TB in the country of origin (OR = 1.01, 95% CI 1.00-1.02) were significantly associated with a positive TST, but sex (OR = 1.18, 95% CI 0.50-2.78) and BCG vaccination status (OR = 2.97, 95% CI 0.91-9.72) were not. CONCLUSIONS: TB incidence in the country of origin, BCG vaccination and age influence the TST reaction (size or proportion of TST ≥ 10 mm). However, the most obvious risk factor for a positive TST is a history of contact with TB.
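A logistic model of the kind described (positive TST regressed on age, TB contact, country-of-origin TB incidence, sex and BCG status) can be sketched with statsmodels; the DataFrame, column names and values below are hypothetical placeholders, not the study data.

```python
# Hypothetical sketch of the logistic regression for a positive TST;
# the data frame, column names and values are invented for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 234
df = pd.DataFrame({
    "tst_positive": rng.integers(0, 2, n),
    "age":          rng.uniform(0, 16, n),
    "tb_contact":   rng.integers(0, 2, n),
    "tb_incidence": rng.uniform(5, 300, n),   # per 100,000 in country of origin
    "male":         rng.integers(0, 2, n),
    "bcg":          rng.integers(0, 2, n),
})

model = smf.logit("tst_positive ~ age + tb_contact + tb_incidence + male + bcg",
                  data=df).fit()
print(np.exp(model.params))       # odds ratios
print(np.exp(model.conf_int()))   # 95% confidence intervals
```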
Abstract:
OBJECTIVE: Both subclinical hypothyroidism and the metabolic syndrome have been associated with increased risk of coronary heart disease events. It is unknown whether the prevalence and incidence of metabolic syndrome are higher as TSH levels increase, or in individuals with subclinical hypothyroidism. We sought to determine the association between thyroid function and the prevalence and incidence of the metabolic syndrome in a cohort of older adults. DESIGN: Data were analysed from the Health, Ageing and Body Composition Study, a prospective cohort of 3075 community-dwelling US adults. PARTICIPANTS: Two thousand one hundred and nineteen participants with measured TSH and data on metabolic syndrome components were included in the analysis. MEASUREMENTS: TSH was measured by immunoassay. Metabolic syndrome was defined per revised ATP III criteria. RESULTS: At baseline, 684 participants met criteria for metabolic syndrome. At 6-year follow-up, incident metabolic syndrome developed in 239 individuals. In fully adjusted models, each unit increase in TSH was associated with a 3% increase in the odds of prevalent metabolic syndrome (OR, 1.03; 95% CI, 1.01-1.06; P = 0.02), and the association was stronger for TSH within the normal range (OR, 1.16; 95% CI, 1.03-1.30; P = 0.02). Subclinical hypothyroidism with a TSH > 10 mIU/l was significantly associated with increased odds of prevalent metabolic syndrome (OR, 2.3; 95% CI, 1.0-5.0; P = 0.04); the odds of incident metabolic syndrome were similar (OR, 2.2), but the confidence interval was wide (0.6-7.5). CONCLUSIONS: Higher TSH levels and subclinical hypothyroidism with a TSH > 10 mIU/l are associated with increased odds of prevalent but not incident metabolic syndrome.
Abstract:
BACKGROUND: American College of Cardiology/American Heart Association guidelines for the diagnosis and management of heart failure recommend investigating exacerbating conditions such as thyroid dysfunction, but without specifying the impact of different thyroid-stimulating hormone (TSH) levels. Limited prospective data exist on the association between subclinical thyroid dysfunction and heart failure events. METHODS AND RESULTS: We performed a pooled analysis of individual participant data using all available prospective cohorts with thyroid function tests and subsequent follow-up of heart failure events. Individual data on 25 390 participants with 216 248 person-years of follow-up were supplied from 6 prospective cohorts in the United States and Europe. Euthyroidism was defined as TSH of 0.45 to 4.49 mIU/L, subclinical hypothyroidism as TSH of 4.5 to 19.9 mIU/L, and subclinical hyperthyroidism as TSH <0.45 mIU/L, the last two with normal free thyroxine levels. Among 25 390 participants, 2068 (8.1%) had subclinical hypothyroidism and 648 (2.6%) had subclinical hyperthyroidism. In age- and sex-adjusted analyses, risks of heart failure events were increased with both higher and lower TSH levels (P for quadratic pattern <0.01); the hazard ratio was 1.01 (95% confidence interval, 0.81-1.26) for TSH of 4.5 to 6.9 mIU/L, 1.65 (95% confidence interval, 0.84-3.23) for TSH of 7.0 to 9.9 mIU/L, and 1.86 (95% confidence interval, 1.27-2.72) for TSH of 10.0 to 19.9 mIU/L (P for trend <0.01); it was 1.31 (95% confidence interval, 0.88-1.95) for TSH of 0.10 to 0.44 mIU/L and 1.94 (95% confidence interval, 1.01-3.72) for TSH <0.10 mIU/L (P for trend = 0.047). Risks remained similar after adjustment for cardiovascular risk factors. CONCLUSION: Risks of heart failure events were increased with both higher and lower TSH levels, particularly for TSH ≥10 and <0.10 mIU/L.
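The thyroid-status definitions used in this pooled analysis translate directly into a small classification rule; a sketch, assuming TSH in mIU/L and a boolean flag for normal free thyroxine (the function itself is illustrative, not the study's code), is shown below.

```python
# Sketch of the thyroid-status definitions quoted above (TSH in mIU/L);
# the function signature is illustrative, not taken from the study.
def thyroid_status(tsh: float, free_t4_normal: bool) -> str:
    if 0.45 <= tsh <= 4.49:
        return "euthyroid"
    if 4.5 <= tsh <= 19.9 and free_t4_normal:
        return "subclinical hypothyroidism"
    if tsh < 0.45 and free_t4_normal:
        return "subclinical hyperthyroidism"
    return "overt dysfunction or unclassified"

print(thyroid_status(2.1, True))    # euthyroid
print(thyroid_status(11.0, True))   # subclinical hypothyroidism
print(thyroid_status(0.05, True))   # subclinical hyperthyroidism
```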
Abstract:
Sensing the chemical warnings present in the environment is essential for species survival. In mammals, this form of danger communication occurs via the release of natural predator scents that can involuntarily warn the prey, or by the production of alarm pheromones by the stressed prey alerting its conspecifics. Although we previously identified the olfactory Grueneberg ganglion as the sensory organ through which mammalian alarm pheromones signal a threatening situation, the chemical nature of these cues remains elusive. Here we identify, through chemical analysis in combination with a series of physiological and behavioral tests, the chemical structure of a mouse alarm pheromone. To successfully recognize the volatile cues that signal danger, we based our selection on their activation of the mouse olfactory Grueneberg ganglion and the concomitant display of innate fear reactions. Interestingly, we found that the chemical structure of the identified mouse alarm pheromone has features similar to the sulfur-containing volatiles released by predatory carnivores. Our findings thus not only reveal a chemical Leitmotiv that underlies the signaling of fear, but also point to a double role for the olfactory Grueneberg ganglion in intraspecies as well as interspecies communication of danger.
Abstract:
PURPOSE: To investigate the effect of intraocular straylight (IOS) induced by white opacity filters (WOF) on threshold measurements for stimuli employed in three perimeters: standard automated perimetry (SAP), pulsar perimetry (PP) and the Moorfields motion displacement test (MDT). METHODS: Four healthy young (24-28 years old) observers were tested six times with each perimeter, each time with one of five different WOFs and once without, inducing various levels of IOS (from 10% to 200%). The increase in IOS was measured with a straylight meter. The change in sensitivity from baseline was normalized, allowing comparison of standardized (z) scores (change divided by the SD of normative values) for each instrument. RESULTS: SAP and PP thresholds were significantly affected (P < 0.001) by moderate to large increases in IOS (50%-200%). The drop in sensitivity from baseline with WOF 5 was approximately 5 dB in both SAP and PP, which represents a clinically significant loss; in contrast, the change in the MDT threshold was on average 1 minute of arc, which is not likely to indicate a clinically significant loss. CONCLUSIONS: The Moorfields MDT is more robust to the effects of additional straylight in comparison with SAP or PP.
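The standardization described in the methods (change from baseline divided by the SD of normative values) is a simple z-score; a minimal sketch with invented numbers and units is:

```python
# Minimal sketch of the standardized (z) score used to compare instruments:
# change from baseline divided by the SD of normative values. Numbers invented.
def z_score(baseline, with_filter, normative_sd):
    return (with_filter - baseline) / normative_sd

print(z_score(baseline=30.0, with_filter=25.0, normative_sd=2.0))   # e.g. SAP, dB
print(z_score(baseline=2.0, with_filter=3.0, normative_sd=1.5))     # e.g. MDT, min of arc
```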
Abstract:
BACKGROUND: A possible strategy for increasing smoking cessation rates could be to provide smokers who have contact with healthcare systems with feedback on the biomedical or potential future effects of smoking, e.g. measurement of exhaled carbon monoxide (CO), lung function, or genetic susceptibility to lung cancer. We reviewed systematically data on smoking cessation rates from controlled trials that used biomedical risk assessment and feedback. OBJECTIVES: To determine the efficacy of biomedical risk assessment provided in addition to various levels of counselling, as a contributing aid to smoking cessation. SEARCH STRATEGY: We systematically searched the Cochrane Collaboration Tobacco Addiction Group Specialized Register, Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE (1966 to 2004), and EMBASE (1980 to 2004). We combined methodological terms with terms related to smoking cessation counselling and biomedical measurements. SELECTION CRITERIA: Inclusion criteria were: a randomized controlled trial design; subjects participating in smoking cessation interventions; interventions based on a biomedical test to increase motivation to quit; control groups receiving all other components of intervention; an outcome of smoking cessation rate at least six months after the start of the intervention. DATA COLLECTION AND ANALYSIS: Two assessors independently conducted data extraction on each paper, with disagreements resolved by consensus. MAIN RESULTS: From 4049 retrieved references, we selected 170 for full text assessment. We retained eight trials for data extraction and analysis. One of the eight used CO alone and CO + genetic susceptibility as two different intervention groups, giving rise to three possible comparisons. Three of the trials isolated the effect of exhaled CO on smoking cessation rates, resulting in the following odds ratios (ORs) and 95% confidence intervals (95% CI): 0.73 (0.38 to 1.39), 0.93 (0.62 to 1.41), and 1.18 (0.84 to 1.64). Combining CO measurement with genetic susceptibility gave an OR of 0.58 (0.29 to 1.19). Exhaled CO measurement and spirometry were used together in three trials, resulting in the following ORs (95% CI): 0.60 (0.25 to 1.46), 2.45 (0.73 to 8.25), and 3.50 (0.88 to 13.92). Spirometry results alone were used in one other trial with an OR of 1.21 (0.60 to 2.42). Two trials used other motivational feedback measures, with an OR of 0.80 (0.39 to 1.65) for genetic susceptibility to lung cancer alone, and 3.15 (1.06 to 9.31) for ultrasonography of carotid and femoral arteries performed in light smokers (average 10 to 12 cigarettes a day). AUTHORS' CONCLUSIONS: Due to the scarcity of evidence of sufficient quality, we can make no definitive statements about the effectiveness of biomedical risk assessment as an aid for smoking cessation. Current evidence of lower quality does not, however, support the hypothesis that biomedical risk assessment increases smoking cessation in comparison with standard treatment. Only two studies were similar enough in terms of recruitment, setting, and intervention to allow pooling of data and meta-analysis.
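For reference, the odds ratios and confidence intervals quoted above are the usual 2x2-table quantities; a brief sketch computing an OR with a Woolf (log-based) 95% CI, using invented quit/no-quit counts rather than trial data, is:

```python
# Brief sketch of an odds ratio with a Woolf (log-based) 95% CI from a
# 2x2 quit/no-quit table; the counts are invented, not trial data.
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a, b = quit / not-quit (intervention); c, d = quit / not-quit (control)."""
    orr = (a * d) / (b * c)
    se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)
    return orr, math.exp(math.log(orr) - z*se_log), math.exp(math.log(orr) + z*se_log)

print("OR = %.2f (95%% CI %.2f to %.2f)" % odds_ratio_ci(18, 132, 15, 140))
```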
Abstract:
Numerous studies have shown an increase in aptitude-test scores across generations (the "Flynn effect"). Various biological, social and/or educational hypotheses have been proposed to explain this phenomenon. The objective of this research is to examine the evolution of performance on aptitude tests on the basis of norms dating from 1991 and 2002. The results suggest a non-homogeneous reversal of the Flynn effect. The decline concerns in particular tests of scholastic aptitudes, such as those assessing the verbal and numerical factors. This study may reflect a change in the importance attached to the various aptitudes that are little assessed in educational and vocational guidance.
Abstract:
Despite a low positive predictive value, diagnostic tests such as complete blood count (CBC) and C-reactive protein (CRP) are commonly used to evaluate whether infants with risk factors for early-onset neonatal sepsis (EOS) should be treated with antibiotics. We investigated the impact of implementing a protocol aiming at reducing the number of diagnostic tests in infants with risk factors for EOS, in order to compare the diagnostic performance of repeated clinical examination with that of CBC and CRP measurement. The primary outcome was the time between birth and the first dose of antibiotics in infants treated for suspected EOS. Among the 11,503 infants born at ≥35 weeks during the study period, 222 were treated with antibiotics for suspected EOS. The proportion of infants receiving antibiotics for suspected EOS was 2.1% and 1.7% before and after the change of protocol (p = 0.09). Reduction of diagnostic tests was associated with earlier antibiotic treatment in infants treated for suspected EOS (hazard ratio 1.58; 95% confidence interval [CI] 1.20-2.07; p < 0.001), and in infants with neonatal infection (hazard ratio 2.20; 95% CI 1.19-4.06; p = 0.01). There was no difference in the duration of hospital stay or in the proportion of infants requiring respiratory or cardiovascular support before and after the change of protocol. Reduction of diagnostic tests such as CBC and CRP does not delay initiation of antibiotic treatment in infants with suspected EOS. The importance of clinical examination in infants with risk factors for EOS should be emphasised.
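The comparison of treatment proportions before and after the protocol change (2.1% vs 1.7%, p = 0.09) is a two-sample test of proportions; a sketch with statsmodels is shown below, using invented denominators (the abstract does not report the per-period counts) chosen only to be roughly consistent with the quoted percentages.

```python
# Hedged sketch of the two-proportion comparison of antibiotic-treatment
# rates before vs after the protocol change; the counts are invented
# placeholders, not the study's actual denominators.
from statsmodels.stats.proportion import proportions_ztest

treated = [120, 102]       # infants treated for suspected EOS (before, after)
births  = [5700, 5803]     # live births >= 35 weeks in each period (invented)
stat, p_value = proportions_ztest(count=treated, nobs=births)
print(f"z = {stat:.2f}, p = {p_value:.3f}")
```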