822 results for Weights of evidence
Abstract:
BACKGROUND The purpose of the present study was to translate the "Hip and Knee Outcomes Questionnaire", developed in English, into Spanish and to validate it. The "Hip and Knee Outcomes Questionnaire" is a questionnaire designed to evaluate the impact on quality of life of any problem related to the human musculoskeletal system; it was developed by 10 scientific associations. METHODS The questionnaire underwent a validated translation/back-translation process. The final Spanish version was tested in patients undergoing primary knee arthroplasty, before and six months after surgery. The psychometric properties of feasibility, reliability, validity and sensitivity to change were assessed, and convergent validity with the SF-36 and WOMAC questionnaires was evaluated. RESULTS 316 patients were included. Feasibility: a high number of missing items was observed in questions 3, 4 and 5. The number of patients with at least one missing item was 171 (51.35%) at the preoperative visit and 139 (44.0%) at the postoperative visit. Internal validity: review of the item-rest correlation coefficients recommended removing question 6 at the preoperative visit (coefficient <0.20). Convergent validity: correlation coefficients with the WOMAC and SF-36 scales confirm the questionnaire's validity. Sensitivity to change: statistically significant differences were found between the mean scores of the first visit and the postoperative visit. CONCLUSION The proposed Spanish translation of the "Hip and Knee Outcomes Questionnaire" is reliable, valid and sensitive to the changes produced in the clinical course of patients undergoing primary knee arthroplasty. However, some changes to the completion instructions are recommended. LEVEL OF EVIDENCE Level I. Prognostic study.
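For readers unfamiliar with the item-rest criterion used in the internal validity analysis, a minimal sketch of the computation follows (illustrative only; the array shapes and demo data are hypothetical):

```python
import numpy as np

def item_rest_correlations(scores: np.ndarray) -> np.ndarray:
    """Corrected item-total (item-rest) correlation for each item.

    scores: (n_patients, n_items) array of item scores, no missing values.
    Each item is correlated with the sum of the *remaining* items, so the
    item does not inflate its own correlation.
    """
    n_items = scores.shape[1]
    total = scores.sum(axis=1)
    corrs = np.empty(n_items)
    for j in range(n_items):
        rest = total - scores[:, j]  # total score excluding item j
        corrs[j] = np.corrcoef(scores[:, j], rest)[0, 1]
    return corrs

# Hypothetical demo data: 316 patients, 10 items scored 0-4.
# On real data, items with a coefficient below 0.20 would be flagged,
# as question 6 was at the preoperative visit.
rng = np.random.default_rng(0)
demo = rng.integers(0, 5, size=(316, 10)).astype(float)
print(np.round(item_rest_correlations(demo), 2))
```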
Abstract:
Gastric cancer (GC) and breast cancer (BrC) are two of the most common and deadly tumours. Different lines of evidence suggest a possible causative role of viral infections in both GC and BrC. Whole genome sequencing (WGS) technologies allow searching for viral agents in the tissues of patients with cancer. These technologies have already contributed to establishing virus-cancer associations as well as to discovering new tumour viruses. The objective of this study was to document possible associations of viral infection with GC and BrC in Mexican patients. To gain insight into cost-effective experimental sequencing conditions, we first carried out an in silico simulation of WGS. The next-generation platform Illumina GAIIx was then used to sequence GC and BrC tumour samples. While we did not find viral sequences in tissues from BrC patients, multiple reads matching Epstein-Barr virus (EBV) sequences were found in GC tissues. An end-point polymerase chain reaction confirmed an enrichment of EBV sequences in one of the sequenced GC samples, validating the next-generation sequencing bioinformatics pipeline.
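As an illustration of the kind of read-screening step such a sequencing-bioinformatics pipeline performs (a sketch of the general idea only, not the authors' actual pipeline, which would align reads against a viral database with a full aligner):

```python
def kmers(seq, k=21):
    """All overlapping k-mers of a sequence, as a set."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def screen_reads(reads, reference, k=21, min_hits=2):
    """Return reads sharing at least `min_hits` exact k-mers with the reference.

    A real pipeline would use an aligner (e.g. BWA or Bowtie) against a viral
    reference such as the EBV genome; this exact-k-mer filter only sketches
    the detection idea.
    """
    ref_kmers = kmers(reference, k)
    return [r for r in reads if len(kmers(r, k) & ref_kmers) >= min_hits]

# Hypothetical toy data: the first read shares k-mers with the "viral" reference.
reference = "ACGT" * 30
reads = ["ACGT" * 8, "TTTTGGGGCCCCAAAATTTTGGGGCCCCAAAA"]
print(screen_reads(reads, reference))
```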
Abstract:
BACKGROUND Most textbooks contain messages relating to health. This profusion of information requires analysis with regard to its quality. The objective was to identify the scientific evidence on which the health messages in textbooks are based. METHODS The degree of evidence on which such messages are based was identified, and the messages were subsequently classified into three categories: messages with a high, medium or low level of evidence; messages with an unknown level of evidence; and messages with no known evidence. RESULTS 844 messages were studied. Of this total, 61% were classified as messages with an unknown level of evidence. Less than 15% fell into the category where the level of evidence was known, and less than 6% were classified as possessing a high level of evidence. More than 70% of the messages relating to "Balanced Diets and Malnutrition", "Food Hygiene", "Tobacco", "Sexual behaviour and AIDS" and "Rest and ergonomics" are based on an unknown level of evidence. "Oral health" registered the highest percentage of messages based on a high level of evidence (37.5%), followed by "Pregnancy and newly born infants" (35%). Of the total, 24.6% are not based on any known evidence, and two of the messages appeared to contravene known evidence. CONCLUSION Many of the messages included in school textbooks are not based on scientific evidence. Standards should be established to facilitate the production of texts whose messages are based on the best available evidence and which can improve children's health more effectively.
Abstract:
INTRODUCTION Evidence-based recommendations are needed to guide the acute management of the bleeding trauma patient and, when implemented, may improve patient outcomes. METHODS The multidisciplinary Task Force for Advanced Bleeding Care in Trauma was formed in 2005 with the aim of developing a guideline for the management of bleeding following severe injury. This document presents an updated version of the guideline published by the group in 2007. Recommendations were formulated using a nominal group process and the Grading of Recommendations Assessment, Development and Evaluation (GRADE) hierarchy of evidence, and were based on a systematic review of the published literature. RESULTS Key changes in this version of the guideline include new recommendations on coagulation support and monitoring and on the appropriate use of local haemostatic measures, tourniquets, calcium and desmopressin in the bleeding trauma patient. The remaining recommendations have been re-evaluated and graded based on literature published since the last edition of the guideline. Consideration was also given to changes in clinical practice that have taken place during this time period as a result of both new evidence and changes in the general availability of relevant agents and technologies. CONCLUSIONS This guideline provides an evidence-based multidisciplinary approach to the management of critically injured bleeding trauma patients.
Abstract:
ABSTRACT Innovation is essential for improving organizational performance in both the private and public sectors. This article describes and analyzes the 323 innovation experiences of the Brazilian federal public service that received prizes during the 16 annual competitions (from 1995 to 2012) of the Award for Innovation in Federal Public Management held by the Brazilian National School of Public Administration (ENAP). It is a qualitative and quantitative study in which the four types of innovation defined in the Copenhagen Manual were employed as categories of analysis: product, process, organizational and communication. The survey results allow us to affirm that there is innovation in the public sector, in spite of the skepticism of some researchers and the incipient state of theoretical research on the subject. Organizational innovation accounted for the highest number of award-winning experiences, followed respectively by process, communication and product innovation, with citizen services and the improvement of work processes being the main highlights. The results showed that, although the highest incidence of innovation occurs at the national level, a significant number of innovations also occur at the local level, probably because many federal government organizations operate only at that level of government. Concerning the areas of innovation, health and education predominate, with almost 33% of initiatives, which can be explained by the capillarity of these areas and the fact that both maintain strong interaction with users. The contributions of this work include the application of a theoretical model for analyzing public sector innovation that is still emerging in Brazil, and the systematization of empirical knowledge about such innovation. In this sense, it also contributes to the development of theory by presenting evidence that the characteristics, determinants and consequences of innovation in the public sector differ not only from innovation in industry, but also from innovation in services in the private sector.
Abstract:
This paper questions practitioners' deterministic approach(es) to forensic identification and notes the limits of their conclusions, in order to encourage a discussion that questions current practices. To this end, a hypothetical discussion between an expert in dentistry and an enthusiastic member of a jury, eager to understand the scientific principles of evidence interpretation, is presented. This discussion leads us to regard any argument aiming at identification as probabilistic.
Abstract:
Most oral targeted therapies are tyrosine kinase inhibitors (TKIs). Oral administration adds a complex step to the pharmacokinetics (PK) of these drugs. Inter-individual PK variability is often large, and the variability observed in response is influenced not only by the genetic heterogeneity of drug targets, but also by the pharmacogenetic background of the patient (e.g. cytochrome P450 and ABC transporter polymorphisms), patient characteristics such as adherence to treatment, and environmental factors (drug-drug interactions). Retrospective studies have shown that exposure to targeted drugs, reflected in the area under the plasma concentration-time curve (AUC), correlates with treatment response (efficacy/toxicity) in various cancers. Levels of evidence for therapeutic drug monitoring (TDM) are, however, heterogeneous among these agents, and TDM is still uncommon for the majority of them. Evidence currently exists for imatinib, and evidence is emerging for compounds including nilotinib, dasatinib, erlotinib, sunitinib, sorafenib and mammalian target of rapamycin (mTOR) inhibitors. Applications of TDM during oral targeted therapy may best be reserved for particular situations, including lack of therapeutic response, severe or unexpected toxicities, anticipated drug-drug interactions and/or concerns over treatment adherence. The interpatient PK variability observed with monoclonal antibodies (mAbs) is comparable to, or slightly lower than, that observed with TKIs. There are still few data with these agents in favour of TDM approaches, even though encouraging results have been reported with rituximab, cetuximab and bevacizumab. At this time, TDM of mAbs is not yet supported by scientific evidence. Considerable effort should be made for targeted therapies to better define concentration-effect relationships and to perform comparative randomised trials of classic dosing versus pharmacokinetically guided adaptive dosing.
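For reference, the AUC exposure metric mentioned above is commonly computed from sparse concentration-time samples with the linear trapezoidal rule; a minimal sketch with hypothetical sampling times and concentrations:

```python
import numpy as np

def auc_trapezoid(times_h, conc):
    """Area under the concentration-time curve, linear trapezoidal rule.

    times_h: sampling times in hours (ascending); conc: concentrations (mg/L).
    Returns AUC in mg*h/L over the sampled interval only; a full PK analysis
    would also extrapolate the terminal phase to infinity.
    """
    times_h, conc = np.asarray(times_h), np.asarray(conc)
    return float(np.sum(0.5 * (conc[1:] + conc[:-1]) * np.diff(times_h)))

# Hypothetical concentration profile over one 24 h dosing interval.
t = [0.0, 1.0, 2.0, 4.0, 8.0, 12.0, 24.0]
c = [0.6, 2.8, 2.4, 1.9, 1.3, 1.0, 0.5]
print(f"AUC(0-24h) = {auc_trapezoid(t, c):.1f} mg*h/L")
```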
Abstract:
BACKGROUND: Drug-resistant human immunodeficiency virus type 1 (HIV-1) minority variants (MVs) are present in some antiretroviral therapy (ART)-naive patients. They may result from de novo mutagenesis or transmission; to date, the latter has not been proven. METHODS: MVs were quantified by allele-specific polymerase chain reaction in 204 acute or recent seroconverters from the Zurich Primary HIV Infection study and 382 ART-naive, chronically infected patients. Phylogenetic analyses identified transmission clusters. RESULTS: Three lines of evidence support the transmission of MVs. First, potential transmitters were identified for 12 of 16 acute or recent seroconverters harboring M184V MVs. These variants were also detected in plasma and/or peripheral blood mononuclear cells at the estimated time of transmission in 3 of 4 potential transmitters who had experienced virological failure accompanied by selection of the M184V mutation before transmission. Second, the prevalence of MVs harboring the frequent mutation M184V and that of MVs harboring the particularly uncommon integrase mutation N155H differed highly significantly in acute or recent seroconverters (8.2% vs 0.5%; P < .001). Third, the prevalence of less-fit M184V MVs is significantly higher in acutely or recently infected than in chronically HIV-1-infected patients (8.2% vs 2.5%; P = .004). CONCLUSIONS: Drug-resistant HIV-1 MVs can be transmitted. To what extent the origin of these variants (transmission vs sporadic appearance) determines their impact on ART needs to be further explored.
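The third comparison (8.2% of 204 acute/recent vs 2.5% of 382 chronic patients) can be checked with a contingency-table test. A minimal sketch follows, with counts reconstructed from the reported percentages (roughly 17/204 and 10/382, an assumption since the abstract gives only percentages) and Fisher's exact test standing in for whatever test the authors actually used:

```python
from scipy.stats import fisher_exact

# Hypothetical reconstructed counts: ~8.2% of 204 acute/recent
# seroconverters vs ~2.5% of 382 chronically infected patients.
acute = [17, 204 - 17]     # M184V MV carriers, non-carriers
chronic = [10, 382 - 10]
stat, p = fisher_exact([acute, chronic])
# If the reconstruction is right, p should land near the reported P = .004.
print(f"odds ratio = {stat:.2f}, two-sided P = {p:.4f}")
```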
Abstract:
Functional connectivity (FC), as measured by the correlation between fMRI BOLD time courses of distinct brain regions, has revealed meaningful organization of spontaneous fluctuations in the resting brain. However, an increasing amount of evidence points to non-stationarity of FC; i.e., FC dynamically changes over time, reflecting additional, rich information about brain organization but also presenting new challenges for analysis and interpretation. Here, we propose a data-driven approach based on principal component analysis (PCA) to reveal hidden patterns of coherent FC dynamics across multiple subjects. We demonstrate the feasibility and relevance of this new approach by examining the differences in dynamic FC between 13 healthy control subjects and 15 minimally disabled relapsing-remitting multiple sclerosis patients. We estimated whole-brain dynamic FC of regionally averaged BOLD activity using sliding time windows. We then used PCA to identify FC patterns, termed "eigenconnectivities", that reflect meaningful patterns in FC fluctuations. We then assessed the contributions of these patterns to the dynamic FC at any given time point and identified a network of connections centered on the default-mode network with altered contributions in patients. Our results complement traditional stationary analyses and reveal novel insights into brain connectivity dynamics and their modulation in a neurodegenerative disease.
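A minimal sketch of the sliding-window-plus-PCA pipeline described above, assuming a (timepoints x regions) BOLD matrix; the window length, step, and number of components below are hypothetical choices, not the paper's settings:

```python
import numpy as np

def dynamic_fc_eigenconnectivities(bold, win=60, step=5, n_comp=10):
    """Sliding-window functional connectivity followed by PCA.

    bold: (n_timepoints, n_regions) regionally averaged BOLD signals.
    Returns (eigenconnectivities, weights): principal FC patterns over the
    unique region pairs, and their time-resolved contributions per window.
    """
    n_t, n_r = bold.shape
    iu = np.triu_indices(n_r, k=1)  # unique region pairs (upper triangle)
    starts = range(0, n_t - win + 1, step)
    # One vectorized FC estimate (correlation matrix) per time window.
    fc = np.array([np.corrcoef(bold[s:s + win].T)[iu] for s in starts])
    fc -= fc.mean(axis=0)  # center the windows before PCA
    # PCA via SVD of the (windows x connections) matrix.
    u, s, vt = np.linalg.svd(fc, full_matrices=False)
    eigenconnectivities = vt[:n_comp]       # FC patterns
    weights = u[:, :n_comp] * s[:n_comp]    # contribution of each pattern per window
    return eigenconnectivities, weights

# Hypothetical data: 300 timepoints, 90 regions.
rng = np.random.default_rng(1)
eig, w = dynamic_fc_eigenconnectivities(rng.standard_normal((300, 90)))
print(eig.shape, w.shape)  # (10, 4005) and (49, 10)
```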
Abstract:
Over thirty years ago, Leamer (1983) - among many others - expressed doubts about the quality and usefulness of empirical analyses for the economic profession by stating that "hardly anyone takes data analyses seriously. Or perhaps more accurately, hardly anyone takes anyone else's data analyses seriously" (p. 37). Improvements in data quality, more robust estimation methods and the evolution of better research designs seem to make that assertion no longer justifiable (see Angrist and Pischke (2010) for a recent response to Leamer's essay). The economic profession and policy makers alike often rely on empirical evidence as a means to investigate policy-relevant questions. The approach of using scientifically rigorous and systematic evidence to identify policies and programs that are capable of improving policy-relevant outcomes is known under the increasingly popular notion of evidence-based policy. Evidence-based economic policy often relies on randomized or quasi-natural experiments to identify the causal effects of policies. These can require relatively strong assumptions or raise concerns about external validity. In the context of this thesis, potential concerns are, for example, the endogeneity of policy reforms with respect to the business cycle in the first chapter, the trade-off between precision and bias in the regression-discontinuity setting in chapter 2, and non-representativeness of the sample due to self-selection in chapter 3. While the identification strategies are very useful for gaining insight into the causal effects of specific policy questions, transforming the evidence into concrete policy conclusions can be challenging. Policy development should therefore rely on the systematic evidence of a whole body of research on a specific policy question rather than on a single analysis. In this sense, this thesis cannot and should not be viewed as a comprehensive analysis of specific policy issues but rather as a first step towards a better understanding of certain aspects of a policy question. The thesis applies new and innovative identification strategies to policy-relevant and topical questions in the fields of labor economics and behavioral environmental economics. Each chapter relies on a different identification strategy. In the first chapter, we employ a difference-in-differences approach to exploit a quasi-experimental change in entitlement to the maximum unemployment benefit duration, in order to identify the medium-run effects of reduced benefit durations on post-unemployment outcomes. Shortening benefit duration carries a double dividend: it generates fiscal benefits without deteriorating the quality of job matches. On the contrary, shortened benefit durations improve medium-run earnings and employment, possibly by containing the negative effects of skill depreciation or stigmatization. While the first chapter provides only indirect evidence on the underlying behavioral channels, in the second chapter I develop a novel approach that allows one to learn about the relative importance of the two key margins of job search - reservation wage choice and search effort. In the framework of a standard non-stationary job search model, I show how the exit rate from unemployment can be decomposed in a way that is informative about reservation wage movements over the unemployment spell.
The empirical analysis relies on a sharp discontinuity in unemployment benefit entitlement, which can be exploited in a regression-discontinuity approach to identify the effects of extended benefit durations on unemployment and on survivor functions. I find evidence that points to an important role of reservation wage choices in job search behavior. This can have direct implications for the optimal design of unemployment insurance policies. The third chapter - while thematically detached from the other chapters - addresses one of the major policy challenges of the 21st century: climate change and resource consumption. Many governments have recently put energy efficiency at the top of their agendas. While pricing instruments aimed at regulating energy demand have often been found to be short-lived and politically difficult to enforce, the focus of energy conservation programs has shifted towards behavioral approaches, such as the provision of information or social norm feedback. The third chapter describes a randomized controlled field experiment in which we assess the effectiveness of different types of feedback on residential electricity consumption. We find that detailed and real-time feedback caused persistent electricity reductions on the order of 3 to 5% of daily electricity consumption. Social norm information can also generate substantial electricity savings when designed appropriately. The findings suggest that behavioral approaches constitute an effective and relatively cheap way of improving residential energy efficiency.
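As an illustration of the sharp regression-discontinuity logic invoked in the second chapter (a sketch under assumed data, not the thesis' actual specification), local linear fits on either side of the eligibility cutoff estimate the jump in the outcome at the threshold:

```python
import numpy as np

def sharp_rd(x, y, cutoff, bandwidth):
    """Sharp RD estimate: local linear fit on each side of the cutoff.

    Returns the estimated jump in E[y|x] at the cutoff, i.e. the effect of
    treatment assigned by the rule x >= cutoff (uniform kernel, fixed
    bandwidth; a real analysis would pick the bandwidth data-driven).
    """
    def boundary_value(mask):
        X = np.column_stack([np.ones(mask.sum()), x[mask] - cutoff])
        beta, *_ = np.linalg.lstsq(X, y[mask], rcond=None)
        return beta[0]  # intercept = fitted value at the cutoff
    left = (x >= cutoff - bandwidth) & (x < cutoff)
    right = (x >= cutoff) & (x <= cutoff + bandwidth)
    return boundary_value(right) - boundary_value(left)

# Hypothetical example: benefit entitlement jumps at age 50 and
# lengthens unemployment duration by ~2 weeks.
rng = np.random.default_rng(2)
age = rng.uniform(40, 60, 5000)
duration = 20 + 0.3 * (age - 50) + 2.0 * (age >= 50) + rng.normal(0, 4, 5000)
print(f"estimated jump at cutoff: {sharp_rd(age, duration, 50, 5):.2f} weeks")
```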
Abstract:
BACKGROUND: Practice guidelines for examining febrile patients presenting after returning from the tropics were developed to assist primary care physicians in decision making. Because of the low level of evidence available in this field, there was a need to validate the guidelines and assess their feasibility in the context for which they were designed. OBJECTIVES: The objectives of the study were to (1) evaluate physicians' adherence to the recommendations; (2) investigate reasons for non-adherence; and (3) ensure good clinical outcomes for patients, the ultimate goal being to improve the quality of the guidelines, in particular to tailor them to the needs of the target audience and population. METHODS: Physicians consulting the guidelines on the Internet (www.fevertravel.ch) were invited to participate in the study. Navigation through the decision chart was automatically recorded, including diagnostic tests performed, initial and final diagnoses, and clinical outcomes. The reasons for non-adherence were investigated and qualitative feedback was collected. RESULTS: A total of 539 physician/patient pairs were included in this study. Full adherence to the guidelines was observed in 29% of cases; the figure-specific adherence rate was 54.8%. The main reasons for non-adherence were as follows: no repetition of malaria tests (111/352) and no presumptive antibiotic treatment for febrile diarrhea (64/153) or abdominal pain without leukocytosis (46/101). Overall, 20% of diversions from the guidelines were considered reasonable because there was an alternative presumptive diagnosis or the symptoms were mild, so that the corrected adherence rate was 40.6% per case and 61.7% per figure. No deaths were recorded, and all complications could be attributed to the underlying illness rather than to adherence to the guidelines. CONCLUSIONS: These guidelines proved to be feasible and useful and led to good clinical outcomes. Almost one third of physicians strictly adhered to the guidelines. Other physicians used the guidelines as a reminder of specific diagnoses but ultimately diverged from the proposed course of action. These diversions should be scrutinized to further refine the guidelines so that they better fit physician and patient needs.
Abstract:
PURPOSE: The purpose of this study was to evaluate the clinical and subjective outcomes after arthroscopic-assisted double-bundle posterior cruciate ligament (PCL) reconstruction. METHODS: A series of 15 patients with grade III isolated chronic PCL tears underwent double-bundle PCL reconstruction. Of these patients, 8 (53%) had simultaneous fractures. The mean time from accident to surgery was 10.8 months (range, 8 to 15 months). The mean age at the time of surgery was 28.2 years (range, 17 to 43 years). All of the patients reported knee insecurity during activities of daily living or light sporting activities, with associated anterior knee pain in 5 patients. Preoperatively, posterolateral or posteromedial corner injuries were ruled out through careful clinical examination. The knees were assessed before surgery and at a mean follow-up of 3.2 years (range, 2 to 5 years) with a physical examination, 4 different rating scales, and stress radiographs obtained with a Telos device (Telos, Marburg, Germany). RESULTS: Postoperative physical examination revealed a reduction of the posterior drawer and tibial step-off in all cases, although posterior laxity was not completely normalized. Nevertheless, the patients were subjectively better after surgery. The subjective International Knee Documentation Committee score improved significantly. With regard to the objective International Knee Documentation Committee score, 6 knees (40%) were graded as abnormal because of posterior displacement of 6 mm or greater on follow-up stress radiographs with the Telos device. On the Lysholm knee scoring scale, the score was excellent in 13% of patients and good in 87%. The mean score on the Hospital for Special Surgery knee ligament rating scale was 85.8. The Tegner activity score improved after surgery, but no patient resumed his or her preinjury level of activity. The postoperative stress radiographs revealed an improvement in posterior instability of 50% or more in all but 3 knees (20%). CONCLUSIONS: Our technique of double-bundle PCL reconstruction produced a significant reduction in knee symptoms and allowed the patients to return to moderate or strenuous activity, although posterior tibial translation was not completely normalized and our results appear to be no better than those of single-bundle PCL reconstruction. LEVEL OF EVIDENCE: Level IV, therapeutic case series.
Abstract:
Invasive opportunistic fungal diseases (IFDs) are important causes of morbidity and mortality in paediatric patients with cancer and in those who have had an allogeneic haemopoietic stem-cell transplantation (HSCT). Apart from differences in underlying disorders and comorbidities relative to those of adults, IFDs in infants, children, and adolescents are unique with respect to their epidemiology, the usefulness of diagnostic methods, the pharmacology and dosing of antifungal agents, and the absence of interventional phase 3 clinical trials to guide evidence-based decisions. To better define the state of knowledge on IFDs in paediatric patients with cancer or allogeneic HSCT, and to improve IFD diagnosis, prevention, and management, the Fourth European Conference on Infections in Leukaemia (ECIL-4) in 2011 convened a group that reviewed the scientific literature on IFDs and graded the quality of the available evidence according to the Infectious Diseases Society of America grading system. The final considerations and recommendations of the group are summarised in this manuscript.
Abstract:
Almost 30 years ago, Bayesian networks (BNs) were developed in the field of artificial intelligence as a framework that should assist researchers and practitioners in applying the theory of probability to inference problems of more substantive size and, thus, to more realistic and practical problems. Since the late 1980s, Bayesian networks have also attracted researchers in forensic science, and this tendency has considerably intensified throughout the last decade. This review article provides an overview of the scientific literature on Bayesian networks as a tool for studying, developing and implementing probabilistic procedures for evaluating the probative value of particular items of scientific evidence in forensic science. Primary attention is given to evaluative issues pertaining to forensic DNA profiling evidence, because this is one of the main categories of evidence whose assessment has been studied through Bayesian networks. The scope of topics is large and includes almost any aspect related to forensic DNA profiling. Typical examples are inference of source (or 'criminal identification'), relatedness testing, database searching and special trace evidence evaluation (such as mixed DNA stains or stains with low quantities of DNA). The perspective of the review presented here is not restricted exclusively to DNA evidence; it also includes relevant references and discussion of both the concept of Bayesian networks and their general usage in the legal sciences as one among several graphical approaches to evidence evaluation.
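To make the evaluative core concrete: in the simplest two-hypothesis case, a Bayesian network for a reported DNA match collapses to a likelihood-ratio update of the prior odds on the source hypothesis. A minimal sketch with hypothetical numbers (the probabilities below are illustrative, not values from the reviewed literature):

```python
def posterior_prob_source(prior_odds, random_match_prob, fp_prob=0.0):
    """Update the odds that the suspect is the source, given a reported match.

    Likelihood ratio LR = P(match | source) / P(match | not source).
    Assumes P(match | source) = 1; the denominator combines the random
    match probability with an optional laboratory false-positive rate
    (both hypothetical inputs here).
    """
    lr = 1.0 / (random_match_prob + fp_prob)
    post_odds = prior_odds * lr
    return post_odds / (1.0 + post_odds)

# Hypothetical case: prior odds 1:1000, random match probability 1e-6.
print(f"posterior P(source) = {posterior_prob_source(1 / 1000, 1e-6):.4f}")
```

Note how a nonzero false-positive probability, one of the factors a fuller network would represent as an extra node, can dominate a tiny random match probability and sharply reduce the likelihood ratio.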
Abstract:
For a wide range of environmental, hydrological, and engineering applications there is a fast growing need for high-resolution imaging. In this context, waveform tomographic imaging of crosshole georadar data is a powerful method able to provide images of pertinent electrical properties in near-surface environments with unprecedented spatial resolution. In contrast, conventional ray-based tomographic methods, which consider only a very limited part of the recorded signal (first-arrival traveltimes and maximum first-cycle amplitudes), suffer from inherent limitations in resolution and may prove inadequate in complex environments. For a typical crosshole georadar survey, the potential improvement in resolution when using waveform-based approaches instead of ray-based approaches is roughly one order of magnitude. Moreover, the spatial resolution of waveform-based inversions is comparable to that of common logging methods. While waveform tomographic imaging has become well established in exploration seismology over the past two decades, it is still comparatively underdeveloped in the georadar domain despite corresponding needs. Recently, different groups have presented finite-difference time-domain waveform inversion schemes for crosshole georadar data, which are adaptations and extensions of Tarantola's seminal nonlinear generalized least-squares approach developed for the seismic case. First applications of these new crosshole georadar waveform inversion schemes to synthetic and field data have shown promising results. However, little is known about the limits and performance of such schemes in complex environments. The general motivation of my thesis is therefore the evaluation of the robustness and limitations of waveform inversion algorithms for crosshole georadar data, in order to apply such schemes to a wide range of real-world problems.
One crucial issue in making any waveform scheme applicable and effective for real-world crosshole georadar problems is the accurate estimation of the source wavelet, which is unknown in reality. Waveform inversion schemes for crosshole georadar data require forward simulations of the wavefield in order to iteratively solve the inverse problem; accurate knowledge of the source wavelet is therefore critically important for their successful application. Relatively small differences in the estimated source wavelet shape can lead to large differences in the resulting tomograms. In the first part of my thesis, I explore the viability and robustness of a relatively simple iterative deconvolution technique that incorporates the estimation of the source wavelet into the waveform inversion procedure rather than adding additional model parameters to the inversion problem. Extensive tests indicate that this source wavelet estimation technique is simple yet effective, and is able to provide remarkably accurate and robust estimates of the source wavelet in the presence of strong heterogeneity in both the dielectric permittivity and the electrical conductivity, as well as significant ambient noise in the recorded data. Furthermore, our tests also indicate that the approach is insensitive to the phase characteristics of the starting wavelet, which is not the case when the wavelet estimation is directly incorporated into the inverse problem.
Another critical issue with crosshole georadar waveform inversion schemes that clearly needs to be investigated is the consequence of the common assumption of frequency-independent electromagnetic constitutive parameters. This is crucial since, in reality, these parameters are known to be frequency-dependent and complex, and thus recorded georadar data may show significant dispersive behaviour. In particular, in the presence of water, there is a wide body of evidence showing that the dielectric permittivity can be significantly frequency-dependent over the GPR frequency range, due to a variety of relaxation processes. The second part of my thesis is therefore dedicated to evaluating the reconstruction limits of a non-dispersive crosshole georadar waveform inversion scheme in the presence of varying degrees of dielectric dispersion. I show that the inversion algorithm, combined with the iterative deconvolution-based source wavelet estimation procedure, which is partially able to account for the frequency-dependent effects through an "effective" wavelet, performs remarkably well in weakly to moderately dispersive environments and has the ability to provide adequate tomographic reconstructions.
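As an illustration of the kind of deconvolution on which such a source-wavelet estimation can be built (a sketch of one plausible variant, water-level-stabilized frequency-domain deconvolution, not the thesis' exact scheme): deconvolve each observed trace by the corresponding simulated impulse response and stack the per-trace estimates:

```python
import numpy as np

def estimate_wavelet(observed, simulated, water_level=1e-2):
    """Estimate a common source wavelet from many traces.

    observed:  (n_traces, n_samples) recorded traces.
    simulated: (n_traces, n_samples) forward-modelled traces for a spike
               (delta) source with the current model.
    Per trace, W(f) = D(f) * conj(G(f)) / (|G(f)|^2 + eps): a water-level-
    stabilized deconvolution; stacking over traces gives a robust estimate
    that could be iterated alongside the inversion.
    """
    D = np.fft.rfft(observed, axis=1)
    G = np.fft.rfft(simulated, axis=1)
    power = np.abs(G) ** 2
    eps = water_level * power.max(axis=1, keepdims=True)  # per-trace water level
    W = (D * np.conj(G)) / (power + eps)
    return np.fft.irfft(W.mean(axis=0), n=observed.shape[1])

# Noiseless toy check: recover a Ricker-like pulse from spike responses.
rng = np.random.default_rng(3)
n, m = 8, 256
t = np.arange(m)
wavelet = (1 - 2 * ((t - 30) / 8.0) ** 2) * np.exp(-(((t - 30) / 8.0) ** 2))
g = np.zeros((n, m))
g[np.arange(n), rng.integers(0, 100, size=n)] = rng.uniform(0.5, 2.0, size=n)
d = np.array([np.convolve(gi, wavelet)[:m] for gi in g])
w_est = estimate_wavelet(d, g)
print(round(float(np.corrcoef(w_est, wavelet)[0, 1]), 3))  # ~1.0 on this toy case
```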