218 results for non-respondents
at Université de Lausanne, Switzerland
Abstract:
BACKGROUND: Non-response is a major concern among substance use epidemiologists. When differences exist between respondents and non-respondents, survey estimates may be biased. Therefore, researchers have developed time-consuming strategies to convert non-respondents into respondents. The present study examines whether late respondents (converted former non-participants) differ from early respondents, non-consenters or silent refusers (consent givers but non-participants) in a cohort study, and whether non-response bias can be reduced by converting former non-respondents. METHODS: 6099 French- and 5720 German-speaking Swiss 20-year-old males (more than 94% of the source population) completed a short questionnaire on substance use outcomes and socio-demographics, independent of any further participation in a cohort study. Early respondents were those participating in the cohort study after standard recruitment procedures. Late respondents were non-respondents who were converted through individual, encouraging telephone contact. Early respondents, non-consenters and silent refusers were compared to late respondents using logistic regressions. Relative non-response biases for early respondents only, for respondents only (early and late) and for consenters (respondents and silent refusers) were also computed. RESULTS: Late respondents generally showed higher patterns of substance use than early respondents, but lower patterns than non-consenters and silent refusers. Converting initial non-respondents to respondents reduced the non-response bias, which might be further reduced if silent refusers were converted to respondents. CONCLUSION: Efforts to convert refusers are effective in reducing non-response bias. However, converted late respondents cannot be seen as proxies for non-respondents, and are at best only indicative of the response bias due to persistent non-respondents.
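The "relative non-response bias" computed in this study can be sketched as the deviation of a subgroup's estimate from the full-sample estimate. A minimal illustration in Python; the prevalences are hypothetical, not the study's data:

```python
def relative_bias(estimate_subgroup, estimate_full):
    """Relative non-response bias: how far a subgroup's prevalence
    estimate deviates from the full-sample ('true') prevalence."""
    return (estimate_subgroup - estimate_full) / estimate_full

# Hypothetical weekly-drinking prevalences (illustrative numbers only)
full_sample = 0.40       # everyone who filled the short questionnaire
early_only = 0.36        # early respondents underestimate use
early_plus_late = 0.38   # adding converted late respondents shrinks the bias

bias_early = relative_bias(early_only, full_sample)           # -0.10
bias_with_late = relative_bias(early_plus_late, full_sample)  # -0.05
print(f"{bias_early:+.2f} {bias_with_late:+.2f}")
```

Under these invented numbers, conversion halves the relative bias, which is the pattern the abstract reports.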
Abstract:
BACKGROUND: A major threat to the validity of longitudinal cohort studies is non-response to follow-up, which can lead to erroneous conclusions. The objective of this study was to evaluate the profile of non-responders to self-reported questionnaires in the Swiss inflammatory bowel disease (IBD) Cohort. METHODS: We used data from adult patients enrolled between November 2006 and June 2011. Responders and non-responders were compared according to socio-demographic, clinical and psychosocial characteristics. Odds ratios for non-response to the initial patient questionnaire (IPQ) and to the 1-year follow-up questionnaire (FPQ) were calculated. RESULTS: A total of 1943 patients received the IPQ, of whom 331 (17%) did not respond. Factors inversely associated with non-response to the IPQ were age >50 and female gender (OR = 0.37; p < 0.001 and OR = 0.63; p = 0.003, respectively) among Crohn's disease (CD) patients, and disease duration >16 years (OR = 0.48; p = 0.025) among patients with ulcerative colitis (UC). The FPQ was sent to 1586 patients who had completed the IPQ; 263 (17%) did not respond. Risk factors for non-response to the FPQ were mild depression (OR = 2.17; p = 0.003) for CD, and mild anxiety (OR = 1.83; p = 0.024) for UC. Factors inversely associated with non-response to the FPQ were: age >30 years, colonic-only disease location, higher education and higher IBD-related quality of life for CD, and age >50 years or positive social support for UC. CONCLUSIONS: Characteristics of non-responders differed between UC and CD. The risk of non-response to repeated solicitations (longitudinal versus cross-sectional study) seemed to decrease with age. Assessing non-respondents' characteristics is important to document potential bias in longitudinal studies.
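The odds ratios above come from regression models, but for a single binary factor an OR reduces to the cross-product ratio of a 2x2 table. A minimal sketch with a Wald confidence interval; the counts are invented for illustration and are not the cohort's data:

```python
import math

def odds_ratio(a, b, c, d):
    """OR for a 2x2 table [[a, b], [c, d]] =
    [[exposed & non-response,   exposed & response],
     [unexposed & non-response, unexposed & response]],
    with a 95% Wald confidence interval computed on the log scale."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, (lo, hi)

# Hypothetical counts: non-response by depression status (illustrative only)
or_, (lo, hi) = odds_ratio(40, 160, 80, 700)
print(f"OR = {or_:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```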
Abstract:
To assess the impact of international consensus conference guidelines on the attitude of Swiss specialists when facing the decision to treat chronic hepatitis C patients. Questionnaires focusing on the personal situation and treatment decisions were mailed to 165 patients who were newly diagnosed with hepatitis C virus (HCV) infection and enrolled into the Swiss Hepatitis C Cohort Study during the years 2002-2004. Survey respondents (n = 86, 52.1%) were comparable to non-respondents with respect to severity of liver disease, history of substance abuse and psychiatric co-morbidities. Seventy percent of survey respondents reported having been offered antiviral treatment. Patients deferred from treatment had less advanced liver fibrosis, were more frequently infected with HCV genotypes 1 or 4 and presented more often with a history of depression. There were no differences regarding age, socio-economic background, alcohol abuse, intravenous drug abuse or methadone treatment when compared with patients to whom treatment was proposed. Ninety percent of eligible patients agreed to undergo treatment. Overall, 54.6% of respondents and 78.3% of those considered eligible had actually received antiviral therapy by 2007. Ninety-five percent of patients reported high satisfaction with their own hepatitis C management. Consistent with the latest international consensus guidelines, antiviral treatment was not withheld from patients enrolled in the Swiss Hepatitis C Cohort with a history of substance abuse. A multidisciplinary approach is warranted to provide antiviral treatment to patients suffering from depression.
Abstract:
All social surveys suffer from different types of errors, of which one of the most studied is non-response bias. Non-response bias is a systematic error that occurs because individuals differ in their accessibility and propensity to participate in a survey according to their own characteristics as well as those from the survey itself. The extent of the problem heavily depends on the correlation between response mechanisms and key survey variables. However, non-response bias is difficult to measure or to correct for due to the lack of relevant data about the whole target population or sample. In this paper, non-response follow-up surveys are considered as a possible source of information about non-respondents. Non-response follow-ups, however, suffer from two methodological issues: they themselves operate through a response mechanism that can cause potential non-response bias, and they pose a problem of comparability of measure, mostly because the survey design differs between main survey and non-response follow-up. In order to detect possible bias, the survey variables included in non-response surveys have to be related to the mechanism of participation, but not be sensitive to measurement effects due to the different designs. Based on accumulated experience of four similar non-response follow-ups, we studied the survey variables that fulfill these conditions. We differentiated socio-demographic variables that are measurement-invariant but have a lower correlation with non-response and variables that measure attitudes, such as trust, social participation, or integration in the public sphere, which are more sensitive to measurement effects but potentially more appropriate to account for the non-response mechanism. Our results show that education level, work status, and living alone, as well as political interest, satisfaction with democracy, and trust in institutions are pertinent variables to include in non-response follow-ups of general social surveys. 
- See more at: https://ojs.ub.uni-konstanz.de/srm/article/view/6138
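Whether a candidate variable separates main-survey respondents from follow-up converts can be screened with a standardized difference between the two groups. A sketch for the simple case of proportions; the numbers are illustrative, not the surveys' data:

```python
import math

def std_diff(p1, p2):
    """Standardized difference between two proportions; values above
    ~0.1 are conventionally read as a meaningful imbalance."""
    pooled_var = (p1 * (1 - p1) + p2 * (1 - p2)) / 2
    return (p1 - p2) / math.sqrt(pooled_var)

# Hypothetical shares of 'interested in politics' (illustrative only)
main_respondents = 0.55
follow_up_converts = 0.42   # stand-ins for the initial non-respondents
d = std_diff(main_respondents, follow_up_converts)
print(f"standardized difference = {d:.2f}")
```

A variable that shows a large standardized difference while being robust to the change of survey design is the kind of item the authors recommend including in non-response follow-ups.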
Abstract:
OBJECTIVE: To describe food habits and dietary intakes of athletic and non-athletic adolescents in Switzerland. SETTING: Colleges, high schools and professional centers in the Swiss canton of Vaud. METHOD: A total of 3,540 subjects aged 9-19 y answered a self-reported anonymous questionnaire assessing lifestyle, physical and sports activity, and food habits. Within this sample, a subgroup of 246 subjects aged 11-15 y also participated in an in-depth ancillary study including a 3-day dietary record complemented by an interview with a dietician. RESULTS: More boys than girls reported engaging in regular sports activities (P<0.001). Adolescent food habits are quite traditional: up to 15 y, most respondents eat breakfast and at least two hot meals a day, with the percentages decreasing thereafter. Snacking is widespread among adolescents (60-80% in the morning, 80-90% in the afternoon). Food habits among athletic adolescents are healthier and are also more often perceived as such. Among athletic adolescents, consumption frequency is higher for dairy products, ready-to-eat (RTE) cereals, fruit, fruit juices and salad (P<0.05 at least). Thus the diet of athletic adolescents provides more micronutrients than that of their non-athletic counterparts. Within the subgroup (ancillary study), mean energy intake corresponded to the requirements for each age/gender group. CONCLUSIONS: Athletic adolescents display healthier food habits than non-athletic adolescents: this result supports the idea that healthy behaviors tend to cluster and suggests that prevention programs for this age group should target sports activity and food habits simultaneously.
Abstract:
ETHNOPHARMACOLOGICAL RELEVANCE: The aim of this survey was to describe which traditional medicines (TM) are most commonly used for non-communicable diseases (NCD - diabetes, and hypertension related to excess weight and obesity) in the Pacific islands, and with what perceived effectiveness. NCD, especially prevalent in the Pacific, have been the subject of many public health interventions, often with rather disappointing results. Innovative interventions are required; one hypothesis is that some local, traditional approaches may have been overlooked. MATERIALS AND METHODS: The method used was a retrospective treatment-outcome study in a nation-wide representative sample of the adult population (about 15,000 individuals) of the Republic of Palau, an archipelago of Micronesia. RESULTS: Among 188 respondents (61% female, age 16-87, median 48), 30 different plants were used, mostly self-prepared (69%) or obtained from a traditional healer (18%). For excess weight, when comparing the two most frequently used plants, Morinda citrifolia L. was associated with a more adequate outcome than Phaleria nishidae Kaneh. (P=0.05). In the case of diabetes, when comparing Phaleria nishidae (=Phaleria nisidai) and Morinda citrifolia, the former was statistically more often associated with the reported outcome "lower blood sugar" (P=0.01). CONCLUSIONS: A statistical association between a plant used and a reported outcome is not proof of effectiveness or safety, but it can help select plants of interest for further studies, e.g. through a reverse pharmacology process, in search of local products that may have a positive impact on population health.
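Plant-outcome comparisons of this kind (two plants, one binary reported outcome) are tested on a 2x2 table, and for small counts Fisher's exact test is the usual choice. A self-contained sketch; the counts are invented, not the survey's data:

```python
from math import comb

def fisher_exact_p(a, b, c, d):
    """Two-sided Fisher exact p-value for a 2x2 table [[a, b], [c, d]]:
    sum the hypergeometric probabilities of all tables with the same
    margins that are no more likely than the observed one."""
    n = a + b + c + d
    row1, col1 = a + b, a + c
    def p_table(x):   # P(cell (1,1) == x) under fixed margins
        return comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)
    p_obs = p_table(a)
    lo = max(0, row1 + col1 - n)
    hi = min(row1, col1)
    return sum(p_table(x) for x in range(lo, hi + 1)
               if p_table(x) <= p_obs + 1e-12)

# Hypothetical: 'lower blood sugar' reported yes/no for each plant
p = fisher_exact_p(18, 4, 9, 11)   # illustrative counts only
print(f"p = {p:.3f}")
```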
Abstract:
Pain is frequent in the intensive care unit (ICU) and its management is a major issue for nurses. The assessment of pain is a prerequisite for appropriate pain management. However, pain assessment is difficult when patients are unable to communicate about their experience, and nurses have to base their evaluation on external signs. Clinical practice guidelines highlight the need to use behavioral scales that have been validated for nonverbal patients. Current behavioral pain tools for ICU patients unable to communicate may not be appropriate for nonverbal brain-injured ICU patients, as these patients demonstrate specific responses to pain. This study aimed to identify, describe and validate pain indicators and descriptors in brain-injured ICU patients. A multiphase mixed-methods design with a quantitative emphasis was chosen for this study. The first phase aimed to identify indicators and descriptors of pain for nonverbal brain-injured ICU patients using data from three sources: an integrative literature review, a consultation using the nominal group technique with 18 experienced clinicians (12 nurses and 6 physicians) and the results of an observational pilot study with 10 traumatic brain-injured patients. This first phase identified 6 indicators and 47 behavioral, vocal and physiological descriptors of pain that could be included in a pain assessment tool for this population. The sequential second phase tested the psychometric properties of the previously identified indicators and descriptors.
Content validity was tested with 10 clinical and 4 scientific experts, who rated the pertinence and clarity/comprehensibility of each descriptor using a structured questionnaire. This process resulted in the selection of 33 of the 47 previously identified descriptors and the validation of the 6 indicators. The psychometric properties of the descriptors and indicators were then tested at rest, during non-nociceptive stimulation and during nociceptive stimulation (turning) in a sample of 116 brain-injured ICU patients hospitalized in two university hospital centers. Results showed important variations in the descriptors observed during nociceptive stimulation, probably due to the heterogeneity of patients' levels of consciousness. Ten descriptors were excluded because they were observed less than 5% of the time or their reliability was insufficient. All physiological descriptors were deleted, as they showed little variability and inter-observer reliability was lacking. Concomitant validity, testing the association between patients' self-reports of pain and measures performed using the descriptors, was acceptable during nociceptive stimulation (rs=0.527, p=0.003, n=30). However, convergent validity (testing for an association between the nurses' pain assessment and measures made with the descriptors) and divergent validity (testing the ability of the indicators to discriminate between rest and nociceptive stimulation) varied according to the level of consciousness. These results highlight the need to study pain descriptors in brain-injured patients with different levels of consciousness and to take the heterogeneity of this population into account in the design of a pain assessment tool for nonverbal brain-injured ICU patients.
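The concomitant-validity coefficient reported above (rs) is a Spearman rank correlation, i.e. the Pearson correlation of the ranks. A dependency-free sketch with hypothetical paired scores, not the study's data:

```python
def spearman_rho(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks,
    using average ranks for ties."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        i = 0
        while i < len(order):
            j = i
            while j + 1 < len(order) and v[order[j + 1]] == v[order[i]]:
                j += 1                      # extend the run of tied values
            avg = (i + j) / 2 + 1           # average rank, 1-based
            for k in range(i, j + 1):
                r[order[k]] = avg
            i = j + 1
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical pairs: patient self-reports vs observed descriptor counts
self_report = [2, 5, 3, 8, 6, 1, 7]
observed = [1, 4, 2, 7, 7, 2, 6]
print(f"rs = {spearman_rho(self_report, observed):.3f}")
```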
Abstract:
BACKGROUND AND PURPOSE: Accurate placement of an external ventricular drain (EVD) for the treatment of hydrocephalus is of paramount importance for its functionality and in order to minimize morbidity and complications. The aim of this study was to compare two different drain insertion assistance tools with the traditional free-hand anatomical landmark method, and to measure efficacy, safety and precision. METHODS: Ten cadaver heads were prepared by opening large bone windows centered on Kocher's points on both sides. Nineteen physicians, divided into two groups (trainees and board-certified neurosurgeons), performed EVD insertions. The target for the ventricular drain tip was the ipsilateral foramen of Monro. Each participant inserted the external ventricular catheter in three different ways: 1) free-hand by anatomical landmarks, 2) neuronavigation-assisted (NN), and 3) XperCT-guided (XCT). The number of ventricular hits and dangerous trajectories, procedure time, radiation exposure of patients and physicians, distance of the catheter tip to target and size of deviations projected in the orthogonal planes were measured and compared. RESULTS: Insertion using XCT increased the probability of ventricular puncture from 69.2 to 90.2% (p = 0.02). Non-assisted placements were significantly less precise (catheter tip to target distance 14.3 ± 7.4 mm versus 9.6 ± 7.2 mm, p = 0.0003). Procedure time increased from 3.04 ± 2.06 min to 7.3 ± 3.6 min (p < 0.001). The X-ray exposure for XCT was 32.23 mSv, but could be reduced to 13.9 mSv if patients were initially imaged in the hybrid operating suite. No supplementary radiation exposure is needed for NN if patients are initially imaged according to a navigation protocol. CONCLUSION: This ex vivo study demonstrates significantly improved accuracy and safety using either the NN- or XCT-assisted method. Therefore, efforts should be undertaken to implement these new technologies into daily clinical practice.
However, the accuracy of an EVD placement has to be balanced against its urgency, as image-guided insertion entails a longer preparation time for the specific image acquisition and trajectory planning.
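Group comparisons such as the tip-to-target distances above (14.3 ± 7.4 mm vs 9.6 ± 7.2 mm) can be re-examined from published summary statistics with a Welch two-sample t statistic. A sketch; the per-group sample sizes below are assumptions for illustration, not figures from the paper:

```python
import math

def welch_t(m1, s1, n1, m2, s2, n2):
    """Welch two-sample t statistic and Welch-Satterthwaite degrees of
    freedom, computed from summary statistics (mean, SD, n per group)."""
    v1, v2 = s1**2 / n1, s2**2 / n2
    t = (m1 - m2) / math.sqrt(v1 + v2)
    df = (v1 + v2)**2 / (v1**2 / (n1 - 1) + v2**2 / (n2 - 1))
    return t, df

# Means and SDs from the abstract; n = 57 per group is an assumption
t, df = welch_t(14.3, 7.4, 57, 9.6, 7.2, 57)
print(f"t = {t:.2f}, df = {df:.1f}")
```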
Abstract:
Carcinoembryonic antigen (CEA), immunologically identical to CEA derived from colonic carcinoma, was identified and purified from perchloric acid (PCA) extracts of bronchial and mammary carcinoma. CEA extracted from bronchial and mammary carcinoma was quantitated by single radial immunodiffusion and was found to be on average about 50-75 times less abundant in these tumors than in colonic carcinoma. CEA could also be detected in one normal breast in lactation and, at lower concentrations, in normal lung (1000-4000 times lower than in colonic carcinoma). The small amounts of CEA present in normal tissues are distinct from the glycoprotein of small molecular weight, showing only partial identity with CEA, that we recently identified and extracted in much larger quantities from normal lung and spleen. The demonstration of the presence of CEA in non-digestive carcinoma by classical gel precipitation analysis suggests that the CEA detected by radioimmunoassay in the plasma of such patients is also identical to colonic carcinoma CEA. Our comparative study of plasma CEA from bronchial and colonic carcinoma, showing that CEA from both types of patient has the same elution pattern on Sephadex G-200 and gives parallel inhibition curves in the radioimmunoassay, supports this hypothesis. However, it should not be concluded that all positive CEA radioimmunoassays indicate the presence of an antigen identical to colonic carcinoma CEA. A word of warning concerning the interpretation of radioimmunoassays is warranted by the observation that the addition of mg amounts of a PCA extract of normal plasma, cleared of CEA by Sephadex filtration, could interfere in the test and mimic the presence of CEA.
Abstract:
Abnormalities in the topology of brain networks may be an important feature and etiological factor for psychogenic non-epileptic seizures (PNES). To explore this possibility, we applied a graph theoretical approach to functional networks based on resting state EEGs from 13 PNES patients and 13 age- and gender-matched controls. The networks were extracted from Laplacian-transformed time-series by a cross-correlation method. PNES patients showed close to normal local and global connectivity and small-world structure, estimated with clustering coefficient, modularity, global efficiency, and small-worldness (SW) metrics, respectively. Yet the number of PNES attacks per month correlated with a weakness of local connectedness and a skewed balance between local and global connectedness quantified with SW, all in EEG alpha band. In beta band, patients demonstrated above-normal resiliency, measured with assortativity coefficient, which also correlated with the frequency of PNES attacks. This interictal EEG phenotype may help improve differentiation between PNES and epilepsy. The results also suggest that local connectivity could be a target for therapeutic interventions in PNES. Selective modulation (strengthening) of local connectivity might improve the skewed balance between local and global connectivity and so prevent PNES events.
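Of the metrics listed, the clustering coefficient quantifies local connectedness: the fraction of a node's neighbours that are themselves connected. A dependency-free sketch on a toy graph, not real EEG-derived data:

```python
def clustering_coefficient(adj, v):
    """Local clustering coefficient of node v in an undirected graph
    given as an adjacency dict {node: set(neighbours)}."""
    nbrs = adj[v]
    k = len(nbrs)
    if k < 2:
        return 0.0
    # count edges among the neighbours of v (each edge once)
    links = sum(1 for a in nbrs for b in nbrs if a < b and b in adj[a])
    return 2 * links / (k * (k - 1))

# Toy 5-node 'functional network': a triangle a-b-c plus a chain c-d-e
adj = {
    "a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b", "d"},
    "d": {"c", "e"}, "e": {"d"},
}
mean_cc = sum(clustering_coefficient(adj, v) for v in adj) / len(adj)
print(f"mean clustering = {mean_cc:.2f}")
```

The graph-theoretical analysis in the study compares such network-averaged metrics (and global ones like efficiency and small-worldness) between patients and controls.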
Abstract:
Studies evaluating the mechanical behavior of the trabecular microstructure play an important role in our understanding of pathologies such as osteoporosis, and in increasing our understanding of bone fracture and bone adaptation. Understanding such behavior in bone is important for predicting fractures and enabling their early treatment. The objective of this study is to present a numerical model for studying the initiation and accumulation of trabecular bone microdamage in both the pre- and post-yield regions. A sub-region of human vertebral trabecular bone was analyzed using a uniformly loaded, anatomically accurate, microstructural three-dimensional finite element model. The evolution of trabecular bone microdamage was governed by a non-linear, modulus-reduction, perfect-damage approach derived from a generalized plasticity stress-strain law. The model introduced in this paper establishes a history of microdamage evolution in both the pre- and post-yield regions.
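The modulus-reduction, perfect-damage idea can be illustrated in one dimension: beyond the yield strain the stress stays at the yield value while the effective (secant) modulus drops, and the relative drop defines a damage variable. This is only a 1-D sketch with invented material constants, not the paper's actual constitutive law:

```python
def damaged_stress(strain, E=10_000.0, yield_strain=0.008):
    """1-D modulus-reduction 'perfect damage' law: linear elastic up to
    the yield strain, then a stress plateau; further strain reduces the
    secant modulus, and damage D = 1 - E_secant / E."""
    yield_stress = E * yield_strain
    if strain <= yield_strain:
        return strain * E, 0.0            # (stress, damage)
    secant = yield_stress / strain        # reduced secant modulus
    damage = 1.0 - secant / E
    return yield_stress, damage

# Illustrative constants: E in MPa, strains dimensionless
for eps in (0.004, 0.008, 0.016):
    s, d = damaged_stress(eps)
    print(f"strain={eps:.3f}  stress={s:.0f} MPa  damage={d:.2f}")
```

In the actual finite element model, a law of this family is applied element by element, so a damage history accumulates across the trabecular microstructure as the load increases.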
Abstract:
The trabecular bone score (TBS, Med-Imaps, Pessac, France) is an index of bone microarchitecture texture extracted from anteroposterior dual-energy X-ray absorptiometry images of the spine. Previous studies have documented the ability of spine TBS to differentiate between women with and without fractures among age- and areal bone mineral density (aBMD)-matched controls, as well as to predict future fractures. In this cross-sectional analysis of data collected from 3 geographically dispersed facilities in the United States, we investigated age-related changes in the microarchitecture of the lumbar vertebrae as assessed by TBS in a cohort of non-Hispanic US white women. All subjects were 30 yr of age or older and had an L1-L4 aBMD Z-score within ±2 SD of the population mean. Individuals were excluded if they had fractures, were on any osteoporosis treatment, or had any illness that would be expected to impact bone metabolism. All data were extracted from Prodigy dual-energy X-ray absorptiometry devices (GE-Lunar, Madison, WI). Cross-calibrations between the 3 participating centers were performed for TBS and aBMD. aBMD and TBS were evaluated for spine L1-L4 as well as for all other possible vertebral combinations. To validate the cohort, the aBMD normative data of our cohort were compared with the US non-Hispanic white Lunar data provided by the manufacturer. A database of 619 non-Hispanic US white women, ages 30-90 yr, was created. aBMD normative data obtained from this cohort were not statistically different from the non-Hispanic US white Lunar normative data provided by the manufacturer (p = 0.30), thereby indirectly validating our cohort. TBS values at L1-L4 were weakly inversely correlated with body mass index (r = -0.17) and weight (r = -0.16) and not correlated with height. TBS values for all lumbar vertebral combinations decreased significantly with age.
There was a linear decrease of 16.0% (-2.47 T-score) in TBS at L1-L4 between 45 and 90 yr of age (vs. -2.34 for aBMD). The rate of microarchitectural loss increased by 50% after age 65 (from -0.004 to -0.006 per year). Similar results were obtained for the other combinations of lumbar vertebrae. TBS, an index of bone microarchitectural texture, decreases with advancing age in non-Hispanic US white women. Little change in TBS is observed between ages 30 and 45; thereafter, a progressive decrease is observed with advancing age. The changes we observed in these American women are similar to those previously reported for a French population of white women (r(2) > 0.99). This reference database will facilitate the use of TBS to assess bone microarchitectural deterioration in clinical practice.
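The reported rates are easy to cross-check with a line of arithmetic: a 2.47 T-score drop over 45 years averages about -0.055 per year, and moving from -0.004 to -0.006 TBS units per year after age 65 is indeed a 50% acceleration:

```python
def annual_slope(value_start, value_end, age_start, age_end):
    """Average annual change of an index over an age interval."""
    return (value_end - value_start) / (age_end - age_start)

# From the abstract: TBS T-score drops 2.47 between ages 45 and 90
overall = annual_slope(0.0, -2.47, 45, 90)
print(f"average T-score change: {overall:.3f}/yr")

# Reported raw-TBS slopes: -0.004/yr before 65, -0.006/yr after
acceleration = (-0.006 - -0.004) / -0.004
print(f"post-65 acceleration: {acceleration:.0%}")
```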