46 results for Plans of study

in BORIS: Bern Open Repository and Information System - Bern - Switzerland


Relevance: 100.00%

Abstract:

AIM: To establish a list of first-year medical students' attitudes, doubts, and knowledge in the fields of organ transplantation and donation. METHOD: An anonymized questionnaire was handed out to students during class lectures. RESULTS: 183 questionnaires were distributed and 117 returned (participation: 64%). The average age of the students was 21.6 ± 2.7 years (range 18 to 38 years); the sample included 71 women (60.7%) and 48 men (39.3%). Only 2 students (2%) were not interested in the subject of organ donation. The students knew very little about the legal aspects of organ donation, and a quarter of them even thought there was a Federal law regarding organ transplantation. When asked whether they knew if a law existed in the Canton of Berne, 44% replied yes, but only 24 (20%) knew that this was contradictory. There was no gender difference in the answers to these questions. From 57 students (48%), 246 individual comments on doubts and concerns were analyzed. In these comments, the students mainly questioned whether the donor was truly dead when donation took place (n = 48), whether illegal transplantation could be eliminated (n = 44), and whether transplantation was truly necessary (n = 43). Some also mentioned religious/ethical doubts (n = 42). With regard to organ donation by a living individual, 27 students were concerned about the health of the donor. 20 students had doubts about possible pressure from family members and friends, and as many voiced doubts about premature diagnosis of brain death in potential donors. Only 2 students were concerned about the post-mortem presentation. 45 students (48%) indicated discomfort with the donation of certain organs. They ranked the kidney as the first organ to donate, followed by the pancreas, heart, cornea, intestine, lung, and liver. CONCLUSION: Interest in organ donation and transplantation is already strong in first-year medical students at the pre-clinical stage. However, differences from the lay public are not readily detectable at this stage of medical training. Adequate information could influence future physicians in their mediatory role.

Relevance: 100.00%

Abstract:

BACKGROUND: The increased use of meta-analysis in systematic reviews of healthcare interventions has highlighted several types of bias that can arise during the completion of a randomised controlled trial. Study publication bias has been recognised as a potential threat to the validity of meta-analysis and can make the readily available evidence unreliable for decision making. Until recently, outcome reporting bias has received less attention. METHODOLOGY/PRINCIPAL FINDINGS: We review and summarise the evidence from a series of cohort studies that have assessed study publication bias and outcome reporting bias in randomised controlled trials. Sixteen studies were eligible, of which only two followed the cohort all the way through from protocol approval to information regarding publication of outcomes. Eleven of the studies investigated study publication bias and five investigated outcome reporting bias. Three studies found that statistically significant outcomes had higher odds of being fully reported than non-significant outcomes (range of odds ratios: 2.2 to 4.7). In comparing trial publications to protocols, we found that 40-62% of studies had at least one primary outcome that was changed, introduced, or omitted. We decided not to undertake a meta-analysis due to the differences between studies. CONCLUSIONS: Recent work provides direct empirical evidence for the existence of study publication bias and outcome reporting bias. There is strong evidence of an association between significant results and publication; studies that report positive or significant results are more likely to be published, and outcomes that are statistically significant have higher odds of being fully reported. Publications have been found to be inconsistent with their protocols. Researchers need to be aware of the problems of both types of bias, and efforts should be concentrated on improving the reporting of trials.
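
As a hedged illustration of the headline statistic above, the sketch below computes an odds ratio for full reporting from a 2×2 table; the counts are invented for the example and are not data from the review.

```python
# Sketch: odds ratio that a statistically significant outcome is fully
# reported versus a non-significant one. Counts are hypothetical.
def odds_ratio(reported_sig, unreported_sig, reported_nonsig, unreported_nonsig):
    return (reported_sig / unreported_sig) / (reported_nonsig / unreported_nonsig)

# e.g. 75 of 100 significant outcomes fully reported,
# versus 50 of 100 non-significant outcomes
print(odds_ratio(75, 25, 50, 50))  # 3.0, within the reported 2.2-4.7 range
```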

Relevance: 100.00%

Abstract:

BACKGROUND Empirical research has illustrated an association between study size and relative treatment effects, but conclusions have been inconsistent about the association of study size with risk of bias items. Small studies generally give imprecisely estimated treatment effects, and study variance can serve as a surrogate for study size. METHODS We conducted a network meta-epidemiological study analyzing 32 networks including 613 randomized controlled trials, and used Bayesian network meta-analysis and meta-regression models to evaluate the impact of trial characteristics and study variance on the results of network meta-analysis. We examined changes in relative effects and in between-studies variation in network meta-regression models as a function of the variance of the observed effect size and of indicators for the adequacy of each risk of bias item. Adjustment was performed both within and across networks, allowing for between-networks variability. RESULTS Imprecise studies with large variances tended to exaggerate the effects of the active or new intervention in the majority of networks, with a ratio of odds ratios of 1.83 (95% CI: 1.09 to 3.32). Inappropriate or unclear conduct of random sequence generation and allocation concealment, as well as lack of blinding of patients and outcome assessors, did not materially affect the summary results. Imprecise studies also appeared to be more prone to inadequate conduct. CONCLUSIONS Compared to more precise studies, studies with large variance may give substantially different answers that alter the results of network meta-analyses for dichotomous outcomes.
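
The ratio of odds ratios (ROR) quoted above compares the apparent treatment effect in imprecise studies with that in precise studies. A minimal sketch, with purely illustrative numbers rather than estimates from the 32 networks:

```python
# Sketch: ratio of odds ratios (ROR) contrasting the pooled effect in
# imprecise (large-variance) studies with that in precise studies.
# The two odds ratios below are hypothetical.
or_imprecise = 2.20  # pooled OR from small, imprecise trials
or_precise = 1.20    # pooled OR from large, precise trials
ror = or_imprecise / or_precise
print(f"ratio of odds ratios: {ror:.2f}")  # 1.83; an ROR > 1 means imprecise
                                           # studies exaggerate the effect
```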

Relevance: 100.00%

Abstract:

BACKGROUND Current reporting guidelines do not call for standardised declaration of follow-up completeness, although study validity depends on the representativeness of measured outcomes. The Follow-Up Index (FUI) describes follow-up completeness at a given study end date as the ratio between the investigated and the potential follow-up period. The association between FUI and the accuracy of survival estimates was investigated. METHODS FUI and Kaplan-Meier estimates were calculated twice for 1207 consecutive patients undergoing aortic repair during an 11-year period: in scenario A, the population's clinical routine follow-up data (available from a prospective registry) were analysed conventionally. For the control scenario B, an independent survey was completed at the predefined study end. To determine the relation between FUI and the accuracy of study findings, discrepancies between the scenarios regarding FUI, follow-up duration, and cumulative survival estimates were evaluated using multivariate analyses. RESULTS Scenario A noted 89 deaths (7.4%) during a mean considered follow-up of 30 ± 28 months. Scenario B, although analysing the same study period, detected 304 deaths (25.2%, P<0.001) as it scrutinized the complete follow-up period (49 ± 32 months). FUI (0.57 ± 0.35 versus 1.00 ± 0, P<0.001) and cumulative survival estimates (78.7% versus 50.7%, P<0.001) differed significantly between the scenarios, suggesting that incomplete follow-up information led to underestimation of mortality. The degree of follow-up completeness (i.e. FUI quartiles and FUI intervals) correlated directly with the accuracy of study findings: underestimation of long-term mortality increased almost linearly by 30% with every 0.1 drop in FUI (adjusted HR 1.30; 95% CI 1.24 to 1.36, P<0.001). CONCLUSION Follow-up completeness is a prerequisite for reliable outcome assessment and should be declared systematically. FUI represents a simple measure suited as a reporting standard. Evidence lacking such information must be challenged as potentially flawed by selection bias.
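
Since the abstract defines FUI as the ratio of the investigated to the potential follow-up period at a fixed study end date, a minimal per-patient sketch is straightforward; the dates below are hypothetical.

```python
from datetime import date

# Follow-Up Index (FUI) per patient: investigated follow-up divided by the
# follow-up that was potentially available up to the study end date.
def follow_up_index(enrolment: date, last_contact: date, study_end: date) -> float:
    investigated = (last_contact - enrolment).days
    potential = (study_end - enrolment).days
    return investigated / potential if potential > 0 else 1.0

# Hypothetical patient: enrolled 2010, last seen mid-2012, study end 2014
fui = follow_up_index(date(2010, 1, 1), date(2012, 7, 1), date(2014, 1, 1))
print(f"FUI = {fui:.2f}")  # 0.62: only ~62% of the possible follow-up observed
```

A FUI of 1.0 corresponds to scenario B above, in which every patient was followed to the predefined study end.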

Relevance: 100.00%

Abstract:

Purpose To compare changes in the largest cross-sectional area (CSA) of the median nerve in wrists undergoing surgical decompression with changes in wrists undergoing non-surgical treatment of carpal tunnel syndrome (CTS). Methods This was a prospective cohort study of 55 consecutive patients with 78 wrists with established CTS, including 60 wrists treated with surgical decompression and 18 wrists treated non-surgically. A sonographic examination was scheduled before and 4 months after initiation of treatment. We compared changes in the CSA of the median nerve between wrists with surgical treatment and wrists with non-surgical treatment using linear regression models. Results Decreases in the CSA of the median nerve were more pronounced in wrists with CTS release than in wrists undergoing non-surgical treatment (difference in means, 1.0 mm²; 95% confidence interval, 0.3–1.8 mm²). Results were robust to adjustment for age, gender, and neurological severity at baseline. Among wrists with CTS release, those with a postoperative CSA of 10 mm² or less tended to have better clinical outcomes than those with a postoperative CSA of greater than 10 mm² (p = .055). Postoperative sonographic workup in the 3 patients with an unfavorable outcome or recurrence identified likely causes of treatment failure in 2 patients. Conclusions In this observational study, surgical decompression was associated with a greater decrease in median nerve CSA than was non-surgical treatment. Smaller postoperative CSAs may be associated with better clinical outcomes. Additional randomized trials are necessary to determine the optimal treatment strategy in different subgroups of patients with CTS. Type of study/level of evidence Therapeutic III.
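
As a rough illustration of the between-group comparison reported above, the sketch below computes a difference in mean CSA change with a normal-approximation 95% confidence interval; the group means, standard deviations, and the simplification to an unadjusted comparison are all assumptions made for the example.

```python
import math

# Difference in mean CSA change (mm^2) between treatment groups with a
# normal-approximation 95% CI. All values are hypothetical placeholders.
mean_surg, sd_surg, n_surg = -2.5, 2.0, 60   # CSA change, surgical wrists
mean_cons, sd_cons, n_cons = -1.5, 1.8, 18   # CSA change, non-surgical wrists

diff = mean_cons - mean_surg                 # extra decrease with surgery
se = math.sqrt(sd_surg**2 / n_surg + sd_cons**2 / n_cons)
print(f"difference in means: {diff:.1f} mm^2 "
      f"(95% CI {diff - 1.96*se:.1f} to {diff + 1.96*se:.1f})")
```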

Relevance: 100.00%

Abstract:

Background Sedation prior to diagnostic esophagogastroduodenoscopy (EGDE) is widespread and increases patient comfort. However, 98% of all serious adverse events during EGDEs are ascribed to sedation. The S3 guideline for sedation procedures in gastrointestinal endoscopy, published in 2008 in Germany, increases patient safety through standardization. These new regulations increase costs because of the need for more personnel and a prolonged discharge procedure after examinations with sedation. Many patients have difficulty meeting the discharge criteria regulated by the S3 guideline, e.g. the requirement for a second person to escort them home and to refrain from driving and working for the rest of the day, and therefore refuse sedation. We would therefore like to examine whether acupuncture during elective, diagnostic EGDEs could increase the comfort of patients who refuse systemic sedation. Methods/Design A single-center, double-blinded, placebo-controlled superiority trial to compare the success rates of elective, diagnostic EGDEs with real and placebo acupuncture. All patients aged 18 years or older scheduled for elective, diagnostic EGDE who refuse systemic sedation are eligible. 354 patients will be randomized. The primary endpoint is the rate of successful EGDEs with the randomized technique. Intervention: real or placebo acupuncture before and during EGDE. Duration of study: approximately 24 months. Discussion Organisation/Responsibility The ACUPEND trial will be conducted in accordance with the protocol and in compliance with the moral, ethical, and scientific principles governing clinical research as set out in the Declaration of Helsinki (1989) and Good Clinical Practice (GCP). The Interdisciplinary Endoscopy Center (IEZ) of the University Hospital Heidelberg is responsible for the design and conduct of the trial, including randomization and documentation of patients' data. Data management and statistical analysis will be performed by the independent Institute for Medical Biometry and Informatics (IMBI) and the Center of Clinical Trials (KSC) at the Department of General, Visceral and Transplantation Surgery, University of Heidelberg.
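
For context on where a randomization target such as the 354 patients above typically comes from, here is a hedged sketch of a standard two-proportion sample-size calculation for a superiority trial on a binary endpoint; the assumed success rates, alpha, and power are illustrative and are not taken from the ACUPEND protocol.

```python
import math

# Normal-approximation sample size per arm for comparing two proportions,
# two-sided alpha = 0.05 (z = 1.96), power = 0.80 (z = 0.84).
# p1 and p2 are hypothetical EGDE success rates, not protocol values.
def n_per_arm(p1, p2, z_alpha=1.96, z_beta=0.84):
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p1 - p2) ** 2)

print(n_per_arm(0.85, 0.70))  # ~121 per arm under these assumed rates
```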

Relevance: 100.00%

Abstract:

BACKGROUND: Opportunistic screening for genital chlamydia infection is being introduced in England, but evidence for the effectiveness of this approach is lacking. There are insufficient data about young people's use of primary care services to determine the potential coverage of opportunistic screening in comparison with a systematic population-based approach. AIM: To estimate the use of primary care services by young men and women, and to compare the potential coverage of opportunistic chlamydia screening with that of a systematic postal approach. DESIGN OF STUDY: Population-based cross-sectional study. SETTING: Twenty-seven general practices around Bristol and Birmingham. METHOD: A random sample of patients aged 16-24 years was posted a chlamydia screening pack. We collected details of face-to-face consultations from general practice records. Survival and person-time methods were used to estimate the cumulative probability of attending general practice in 1 year and the coverage achieved by opportunistic and systematic postal chlamydia screening. RESULTS: Of 12 973 eligible patients, an estimated 60.4% (95% confidence interval [CI] = 58.3 to 62.5%) of men and 75.3% (73.7 to 76.9%) of women aged 16-24 years attended their practice at least once in a 1-year period. During this period, an estimated 21.3% of patients would not attend their general practice but would be reached by postal screening, 9.2% would not receive a postal invitation but would attend their practice, and 11.8% would be missed by both methods. CONCLUSIONS: Opportunistic and population-based approaches to chlamydia screening would both fail to contact a substantial minority of the target group if used alone. A pragmatic approach combining both strategies might achieve higher coverage.
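
The coverage figures in the results imply the arithmetic below; this short sketch simply restates the abstract's own percentages to show what each strategy, alone or combined, would cover.

```python
# Coverage arithmetic from the percentages reported above.
only_postal = 21.3    # % reached only by postal screening
only_practice = 9.2   # % reached only by attending general practice
neither = 11.8        # % missed by both methods
both = 100.0 - only_postal - only_practice - neither

print(f"reachable by both routes:   {both:.1f}%")                 # 57.7%
print(f"practice-based alone:       {both + only_practice:.1f}%") # 66.9%
print(f"postal alone:               {both + only_postal:.1f}%")   # 79.0%
print(f"combined strategy:          {100.0 - neither:.1f}%")      # 88.2%
```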

Relevance: 100.00%

Abstract:

Unconscious perception is commonly described as a phenomenon that is not under intentional control and relies on automatic processes. We challenge this view by arguing that some automatic processes may indeed be under intentional control, implemented in task-sets that define how the task is to be performed. In consequence, the prime attributes that are relevant to the task will be most effective. To investigate this hypothesis, we used a paradigm that has been shown to yield reliable short-lived priming in tasks based on the semantic classification of words. This type of study uses fast, well-practised classification responses, whereby responses to targets are much less accurate if prime and target belong to different categories than if they belong to the same category. In three experiments, we investigated whether the intention to classify the same words with respect to different semantic categories had a differential effect on priming. The results suggest that this was indeed the case: priming varied with the task in all experiments. However, although participants reported not seeing the primes, they were able to classify the primes better than chance using the classification task they had used before with the targets. When a lexical task was used for discrimination in experiment 4, however, masked primes could not be discriminated. Also, priming was as pronounced when the primes were visible as when they were invisible. The pattern of results suggests that participants had intentional control over prime processing, even when they reported not seeing the primes.

Relevance: 100.00%

Abstract:

OBJECTIVE: To test the feasibility of and interactions among three software-driven critical care protocols. DESIGN: Prospective cohort study. SETTING: Intensive care units in six European and American university hospitals. PATIENTS: 174 cardiac surgery and 41 septic patients. INTERVENTIONS: Application of software-driven protocols for cardiovascular management, sedation, and weaning during the first 7 days of intensive care. MEASUREMENTS AND RESULTS: All protocols were used simultaneously in 85% of the cardiac surgery and 44% of the septic patients, and any one of the protocols was used for 73% and 44% of the study duration, respectively. Protocol use was discontinued in 12% of patients by the treating clinician and in 6% for technical/administrative reasons. The number of protocol steps per unit of time was similar in the two diagnostic groups (n.s. for all protocols). Initial hemodynamic stability (a protocol target) was achieved in 26 ± 18 min (mean ± SD) in cardiac surgery patients and in 24 ± 18 min in septic patients. Sedation targets were reached in 2.4 ± 0.2 h in cardiac surgery patients and in 3.6 ± 0.2 h in septic patients. The weaning protocol was started in 164 (94%; 154 extubated) cardiac surgery patients and in 25 (60%; 9 extubated) septic patients. The median (interquartile range) time from starting weaning to extubation (a protocol target) was 89 min (44-154 min) for the cardiac surgery patients and 96 min (56-205 min) for the septic patients. CONCLUSIONS: Multiple software-driven treatment protocols can be applied simultaneously with high acceptance and rapid achievement of primary treatment goals. Time to reach these primary goals may provide a performance indicator.

Relevance: 100.00%

Abstract:

OBJECTIVES The aim of this prospective, randomized, controlled clinical study was to compare the clinical outcomes of subgingival treatment with erythritol powder applied by means of an air-polishing device (EPAP) with those of scaling and root planing (SRP) during supportive periodontal therapy (SPT). METHOD AND MATERIALS 40 patients enrolled in SPT were randomly assigned to two groups of equal size. Sites had to show signs of inflammation (bleeding on probing [BOP]-positive) and a probing pocket depth (PPD) of ≥ 4 mm, but no detectable subgingival calculus. During SPT, these sites were treated with EPAP or SRP, respectively. Full-mouth and site-specific plaque indices, BOP, PPD, and clinical attachment level (CAL) were recorded at baseline (BL) and at 3 months; the percentage of study sites positive for BOP (BOP+) was the primary outcome variable. Additionally, patient comfort, assessed with a visual analog scale (VAS), and the treatment time per site were evaluated. RESULTS At 3 months, the mean BOP level measured 45.1% at test sites and 50.6% at control sites, without a statistically significant difference between the groups (P > .05). PPD and CAL slightly improved in both groups, with comparable mean values at 3 months. Evaluation of patient tolerance showed statistically significantly better values among patients receiving the test treatment (mean VAS [0-10], 1.51) compared to SRP (mean VAS [0-10], 3.66; P = .0012). The treatment of test sites was set to 5 seconds per site, whereas the treatment of control sites lasted 85 seconds on average. CONCLUSION The new erythritol powder applied with an air-polishing device can be considered a promising modality for the repeated instrumentation of residual pockets during SPT. CLINICAL RELEVANCE With regard to clinical outcomes during SPT, similar results can be expected from either treatment approach, hand instrumentation or subgingival application of erythritol powder with an air-polishing device, in sites where only biofilm removal is required.

Relevance: 100.00%

Abstract:

In recent years, two studies investigating the etiology of porcine ear necrosis were carried out at the Clinic for Swine of the University of Veterinary Medicine Vienna. In study 1, parameters discussed in this context were collected by veterinary practitioners using specially designed questionnaires on farms showing symptoms of porcine ear necrosis syndrome. In study 2, samples from piglets and feed were collected for laboratory analysis of the most important infectious agents as well as mycotoxins. In the present manuscript, the results of both projects are compared. Although the selection criteria of the two studies differed, the affected age class was comparable (5.5 to 10 weeks of age in study 1 and 6 to 10 weeks in study 2). The herd-specific prevalence of porcine ear necrosis syndrome varied considerably, ranging from 2-10% up to 100%. The evaluation of the questionnaires in study 1 showed that 51% of the farms had problems with cannibalism. Particles of plant material, which were frequently seen on the histologic slides of study 2, could have entered the tissue through pen mates chewing on the ears, i.e. cannibalism. Whereas study 1 considered the negative effects of parameters such as high stocking density, suboptimal climate, lack of enrichment material, and poor feed and water quality, in study 2 all of these factors were checked at sample collection and ruled out as precursors of cannibalism. In both studies, bacterial agents proved to be a crucial co-factor for the expansion of the necroses into deeper tissue layers, whereas viral pathogens were classified as less important. In neither project was it possible to determine whether infectious agents and mycotoxins acted as direct triggers of the necroses, or as co-factors or precursors in the sense of immunosuppression or prior damage to blood vessels or tissue.

Relevance: 100.00%

Abstract:

OBJECTIVE To investigate the planning of subgroup analyses in protocols of randomised controlled trials and the agreement with the corresponding full journal publications. DESIGN Cohort of protocols of randomised controlled trials and subsequent full journal publications. SETTING Six research ethics committees in Switzerland, Germany, and Canada. DATA SOURCES 894 protocols of randomised controlled trials involving patients, approved by participating research ethics committees between 2000 and 2003, and 515 subsequent full journal publications. RESULTS Of 894 protocols of randomised controlled trials, 252 (28.2%) included one or more planned subgroup analyses. Of those, 17 (6.7%) provided a clear hypothesis for at least one subgroup analysis, 10 (4.0%) anticipated the direction of a subgroup effect, and 87 (34.5%) planned a statistical test for interaction. Industry-sponsored trials planned subgroup analyses more often than investigator-sponsored trials (195/551 (35.4%) v 57/343 (16.6%), P<0.001). Of the 515 identified journal publications, 246 (47.8%) reported at least one subgroup analysis. In 81 (32.9%) of the 246 publications reporting subgroup analyses, the authors stated that the subgroup analyses were prespecified, but this was not supported by the corresponding protocol in 28 cases (34.6%). In 86 publications, authors claimed a subgroup effect, but only 36 (41.9%) of the corresponding protocols reported a planned subgroup analysis. CONCLUSIONS Subgroup analyses are insufficiently described in the protocols of randomised controlled trials submitted to research ethics committees, and investigators rarely specify the anticipated direction of subgroup effects. More than one third of the statements about subgroup prespecification in publications of randomised controlled trials had no documentation in the corresponding protocols. Definitive judgments regarding the credibility of claimed subgroup effects are not possible without access to the protocols and analysis plans of randomised controlled trials.
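
A prespecified statistical test for interaction, which only 87 of the protocols above planned, can be sketched as a comparison of subgroup-specific log odds ratios; the effect estimates and standard errors below are hypothetical.

```python
import math

# Test for subgroup-treatment interaction: compare the log odds ratios
# estimated in two subgroups. All numbers are hypothetical.
log_or_a, se_a = math.log(1.8), 0.20  # treatment effect in subgroup A
log_or_b, se_b = math.log(1.1), 0.25  # treatment effect in subgroup B

diff = log_or_a - log_or_b
se_diff = math.sqrt(se_a**2 + se_b**2)
z = diff / se_diff
phi = 0.5 * (1 + math.erf(abs(z) / math.sqrt(2)))  # standard normal CDF
p = 2 * (1 - phi)
print(f"interaction z = {z:.2f}, two-sided p = {p:.3f}")
```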

Relevance: 100.00%

Abstract:

PROBLEM Given the important role of regulatory T cells (Treg) for successful pregnancy, the ability of soluble maternal and fetal pregnancy factors to induce human Treg was investigated. METHOD OF STUDY Peripheral blood mononuclear cells (PBMCs) or isolated CD4+CD25‒ cells were cultured in the presence of pooled second or third trimester pregnancy sera, steroid hormones or supernatants from placental explants, and the numbers and function of induced CD4+CD25+FOXP3+ Treg were analysed. RESULTS Third trimester pregnancy sera and supernatants of early placental explants, but not sex steroid hormones, induced an increase of Tregs from PBMCs. Early placental supernatant, containing high levels of tumour necrosis factor-α, interferon-γ, interleukins -1, -6 and -17, soluble human leucocyte antigen-G, and transforming growth factor-β1, increased the proportion of Treg most effectively and was able to induce interleukin-10-secreting Treg from CD4+CD25‒ cells. CONCLUSIONS Compared with circulating maternal factors, placental- and fetal-derived factors appear to exert a more powerful effect on numerical changes of Treg, thereby supporting fetomaternal tolerance during human pregnancy.

Relevance: 100.00%

Abstract:

BACKGROUND Limitations in the primary studies constitute one important factor to be considered in the grading of recommendations assessment, development, and evaluation (GRADE) system of rating the quality of evidence. In network meta-analysis (NMA), however, such evaluation poses a special challenge, because each network estimate receives different amounts of contribution from various studies via direct as well as indirect routes, and because some biases have directions whose repercussions in the network can be complicated. FINDINGS In this report we use the NMA of maintenance pharmacotherapy of bipolar disorder (17 interventions, 33 studies) and demonstrate how to quantitatively evaluate the impact of study limitations using netweight, a STATA command for NMA. For each network estimate, the percentages of contributions from direct comparisons at high, moderate, or low risk of bias were quantified. This method has proven flexible enough to accommodate complex biases with direction, such as the one due to the enrichment design seen in some trials of bipolar maintenance pharmacotherapy. CONCLUSIONS Using netweight, we can therefore evaluate in a transparent and quantitative manner how the limitations of individual studies in an NMA affect the quality of evidence of each network estimate, even when such limitations have clear directions.
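
The underlying idea, aggregating each direct comparison's percentage contribution to a network estimate by its risk-of-bias rating, can be sketched in a few lines. The comparisons, contribution percentages, and ratings below are hypothetical placeholders, not output of the netweight command.

```python
# Summarise a network estimate's exposure to study limitations: sum the
# percentage contributions of direct comparisons by risk-of-bias rating.
# All values below are hypothetical.
contributions = {                       # % contribution to one estimate
    ("lithium", "placebo"): 55.0,
    ("valproate", "placebo"): 30.0,
    ("lithium", "valproate"): 15.0,
}
risk_of_bias = {
    ("lithium", "placebo"): "low",
    ("valproate", "placebo"): "moderate",
    ("lithium", "valproate"): "high",
}

totals = {}
for comparison, pct in contributions.items():
    rating = risk_of_bias[comparison]
    totals[rating] = totals.get(rating, 0.0) + pct
print(totals)  # {'low': 55.0, 'moderate': 30.0, 'high': 15.0}
```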