841 results for Systematic analysis
Abstract:
Purpose: The aim of this review was to systematically evaluate and compare the frequency of veneer chipping and core fracture of zirconia fixed dental prostheses (FDPs) and porcelain-fused-to-metal (PFM) FDPs and determine possible influencing factors. Materials and Methods: The SCOPUS database and International Association for Dental Research abstracts were searched for clinical studies involving zirconia and PFM FDPs. Furthermore, studies that were integrated into systematic reviews on PFM FDPs were also evaluated. The principal investigators of any clinical studies on zirconia FDPs were contacted to provide additional information. Based on the available information for each FDP, a data file was constructed. Veneer chipping was divided into three grades (grade 1 = polishing, grade 2 = repair, grade 3 = replacement). To assess the frequency of veneer chipping and possible influencing factors, a piecewise exponential model was used to adjust for a study effect. Results: None of the studies on PFM FDPs (reviews and additional searching) sufficiently satisfied the criteria of this review to be included. Thirteen clinical studies on zirconia FDPs and two studies that investigated both zirconia and PFM FDPs were identified. These studies involved 664 zirconia and 134 PFM FDPs at baseline. Follow-up data were available for 595 zirconia and 127 PFM FDPs. The mean observation period was approximately 3 years for both groups. The frequency of core fracture was less than 1% in the zirconia group and 0% in the PFM group. When all studies were included, 142 veneer chippings were recorded for zirconia FDPs (24%) and 43 for PFM FDPs (34%). However, the studies differed extensively with regard to veneer chipping of zirconia: 85% of all chippings occurred in 4 studies, which together included 43% of all zirconia FDPs. If only studies that evaluated both types of core materials were included, the frequency of chipping was 54% for the zirconia-supported FDPs and 34% for PFM FDPs. 
When adjusting the survival rate for the study effect, the difference between zirconia and PFM FDPs was statistically significant for all grades of chippings (P = .001), as well as for chipping grade 3 (P = .02). If all grades of veneer chippings were taken into account, the survival of PFM FDPs was 97%, while the survival rate of the zirconia FDPs was 90% after 3 years for a typical study. For both PFM and zirconia FDPs, the frequency of grades 1 and 2 veneer chippings was considerably higher than grade 3. Veneer chipping was significantly less frequent in pressed materials than in hand-layered materials, both for zirconia and PFM FDPs (P = .04). Conclusions: Since the frequency of veneer chipping was significantly higher in the zirconia FDPs than PFM FDPs, and as refined processing procedures have started to yield better results in the laboratory, new clinical studies with these new procedures must confirm whether the frequency of veneer chipping can be reduced to the level of PFM. Int J Prosthodont 2010;23:493-502
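The survival figures above come from a piecewise exponential model. As a hedged illustration of the underlying idea only (not the authors' actual model or data), the sketch below computes a survival probability under a piecewise-constant hazard; the yearly hazard values are invented so that 3-year survival lands near the 90% reported for zirconia FDPs:

```python
import math

def piecewise_exponential_survival(hazards, cutpoints, t):
    """Survival S(t) = exp(-cumulative hazard) under a piecewise-constant hazard.

    hazards:   rate lambda_i on each interval
    cutpoints: interval edges [0, t1, ..., inf); len(hazards) + 1 entries
    """
    cumulative = 0.0
    for lam, (lo, hi) in zip(hazards, zip(cutpoints, cutpoints[1:])):
        if t <= lo:
            break
        cumulative += lam * (min(t, hi) - lo)
    return math.exp(-cumulative)

# Hypothetical yearly chipping hazards, chosen for illustration so that the
# 3-year survival is close to the 90% figure reported above for zirconia FDPs.
hazards = [0.05, 0.03, 0.025]           # events per FDP-year in years 1, 2, 3
cutpoints = [0.0, 1.0, 2.0, float("inf")]
s3 = piecewise_exponential_survival(hazards, cutpoints, 3.0)
print(round(s3, 3))
```

With these invented hazards the cumulative hazard at 3 years is 0.05 + 0.03 + 0.025 = 0.105, giving S(3) = exp(-0.105), roughly 0.90.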
Abstract:
False identity documents represent a serious threat through their production and use in organized crime and by terrorist organizations. The present-day fight against this criminal problem and threats to national security does not appropriately address the organized nature of this criminal activity, treating each fraudulent document on its own during investigation and the judicial process, which causes linkage blindness and restrains the analysis capacity. Given the drawbacks of this case-by-case approach, this article proposes an original model in which false identity documents are used to inform a systematic forensic intelligence process. The process aims to detect links, patterns, and tendencies among false identity documents in order to support strategic and tactical decision making, thus sustaining a proactive intelligence-led approach to fighting identity document fraud and the associated organized criminality. This article formalizes both the model and the process, using practical applications to illustrate its powerful capabilities. This model has a general application and can be transposed to other fields of forensic science facing similar difficulties.
Abstract:
BACKGROUND: Selective publication of studies, which is commonly called publication bias, is widely recognized. Over the years a new nomenclature for other types of bias related to non-publication or distortion in the dissemination of research findings has been developed. However, several of these different biases are often still summarized by the term 'publication bias'. METHODS/DESIGN: As part of the OPEN Project (To Overcome failure to Publish nEgative fiNdings) we will conduct a systematic review with the following objectives: (1) to systematically review highly cited articles that focus on non-publication of studies and to present the various definitions of biases related to the dissemination of research findings contained in the articles identified; and (2) to develop and discuss a new framework on the nomenclature of various aspects of distortion in the dissemination process that leads to public availability of research findings, in an international group of experts in the context of the OPEN Project. We will systematically search Web of Knowledge for highly cited articles that provide a definition of biases related to the dissemination of research findings. A specifically designed data extraction form will be developed and pilot-tested. Working in teams of two, we will independently extract relevant information from each eligible article. For the development of a new framework we will construct an initial table listing different levels and different hazards en route to making research findings public. An international group of experts will iteratively review the table and reflect on its content until no new insights emerge and consensus has been reached. DISCUSSION: Results are expected to be publicly available in mid-2013. This systematic review, together with the results of other systematic reviews of the OPEN project, will serve as a basis for the development of future policies and guidelines regarding the assessment and prevention of publication bias.
Abstract:
BACKGROUND: Classical disease phenotypes are mainly based on descriptions of symptoms and the hypothesis that a given pattern of symptoms provides a diagnosis. With refined technologies there is growing evidence that disease expression in patients is much more diverse, and subtypes need to be defined to allow better targeted treatment. One of the aims of the Mechanisms of the Development of Allergy Project (MeDALL, FP7) is to re-define the classical phenotypes of IgE-associated allergic diseases from birth to adolescence, by consensus among experts using a systematic review of the literature, and to identify possible gaps in research for new disease markers. This paper describes the methods to be used for the systematic review of the classical IgE-associated phenotypes, applicable in general to other systematic reviews that also address phenotype definitions based on evidence. METHODS/DESIGN: Eligible papers were identified by PubMed search (complete database through April 2011). This search yielded 12,043 citations. The review includes intervention studies (randomized and clinical controlled trials) and observational studies (cohort studies including birth cohorts, case-control studies) as well as case series. Systematic and non-systematic reviews, guidelines, position papers and editorials are not excluded but are dealt with separately. Two independent reviewers conducted consecutive title and abstract filtering scans in parallel. For publications where the title and abstract fulfilled the inclusion criteria, the full text was assessed. In the final step, two independent reviewers abstracted data using a pre-designed data extraction form, with disagreements resolved by discussion among investigators. DISCUSSION: The systematic review protocol described here makes it possible to generate broad, multi-phenotype reviews and consensus phenotype definitions. 
The in-depth analysis of the existing literature on the classification of IgE-associated allergic diseases through such a systematic review will 1) provide relevant information on the current epidemiologic definitions of allergic diseases, 2) address heterogeneity and interrelationships and 3) identify gaps in knowledge.
Abstract:
Osteoporosis is a systemic bone disease characterized by a generalized reduction of bone mass. It is the main cause of fractures in elderly women. Bone densitometry of the lumbar spine and hip is used to detect osteoporosis in its early stages. Different studies have observed a correlation between the bone mineral density (BMD) of the jaw and that of the lumbar spine and/or hip. There are also studies that evaluate findings in orthopantomograms and periapical X-rays, correlating them with the early diagnosis of osteoporosis and highlighting the role of the dentist in the early diagnosis of this disease. Materials and methods: A search was carried out in the Medline-PubMed database to identify articles that deal with the association between X-ray findings observed in orthopantomograms and the diagnosis of osteoporosis, as well as those that deal with the bone mineral density of the jaw. Results: There were 406 articles, and with the limits established, this number was reduced to 21. Almost all of the articles indicate that when examining oral X-rays, it is possible to detect signs indicative of osteoporosis. Discussion: The radiomorphometric indices use measurements in orthopantomograms to evaluate possible loss of bone mineral density. They can be analyzed alone or along with the visual indices. In periapical X-rays, photodensitometric analyses and the trabecular pattern appear to be the most useful. Seven studies analyze the densitometry of the jaw, but only three do so independently of the photodensitometric analysis. Conclusions: The combination of mandibular indices, along with surveys on the risk of fracture, can be useful for the early diagnosis of osteoporosis. Visual and morphometric indices appear to be especially important in orthopantomograms. Photodensitometric indices and the trabecular pattern are used in periapical X-rays. 
Studies on mandibular dual-energy X-ray absorptiometry are inconclusive.
Abstract:
Due to intense international competition, demanding and sophisticated customers, and diverse transforming technological change, organizations need to renew their products and services by allocating resources to research and development (R&D). Managing R&D is complex but vital for many organizations to survive in a dynamic, turbulent environment. Thus, the increased interest among decision-makers in finding the right performance measures for R&D is understandable. The measures or evaluation methods of R&D performance can be utilized for multiple purposes: for strategic control, for justifying the existence of R&D, for providing information and improving activities, as well as for motivating and benchmarking. Earlier research in the field of R&D performance analysis has generally focused either on the activities and relevant factors and dimensions - e.g. strategic perspectives, purposes of measurement, levels of analysis, types of R&D or phases of the R&D process - prior to the selection of R&D performance measures, or on proposed principles or the actual implementation of the selection or design processes of R&D performance measures or measurement systems. This study aims at integrating the consideration of essential factors and dimensions of R&D performance analysis into developed selection processes of R&D measures, which have been applied in real-world organizations. The earlier models for corporate performance measurement found in the literature are to some extent also adaptable to the development of measurement systems and the selection of measures for R&D activities. 
However, it is necessary to emphasize the special aspects of measuring R&D performance that make the development of new approaches, especially for R&D performance measure selection, necessary. First, the special characteristics of R&D - such as the long time lag between inputs and outcomes, as well as the overall complexity and difficult coordination of activities - give rise to R&D performance analysis problems, such as the need for more systematic, objective, balanced and multi-dimensional approaches to R&D measure selection, as well as the incompatibility of R&D measurement systems with other corporate measurement systems and vice versa. Secondly, the above-mentioned characteristics and challenges bring forth the significance of the influencing factors and dimensions that need to be recognized in order to derive the selection criteria for measures and choose the right R&D metrics, which is the most crucial step in the measurement system development process. The main purpose of this study is to support the management and control of the research and development activities of organizations by increasing the understanding of R&D performance analysis, clarifying the main factors related to the selection of R&D measures, and providing novel approaches and methods for systematizing the whole strategy- and business-based selection and development process of R&D indicators. The final aim of the research is to support management in their R&D decision making with suitable, systematically chosen measures or evaluation methods of R&D performance. Thus, the emphasis in most sub-areas of the present research has been on promoting the selection and development process of R&D indicators with the help of different tools and decision support systems, i.e. the research has normative features, providing guidelines through novel types of approaches. 
The gathering of data and the case studies conducted in metal and electronics industry companies, in the information and communications technology (ICT) sector, and in non-profit organizations helped to formulate a comprehensive picture of the main challenges of R&D performance analysis in different organizations. This is essential, as recognition of the most important problem areas is a crucial element in the constructive research approach utilized in this study. Multiple practical benefits regarding the defined problem areas could be found in the various constructed approaches presented in this dissertation: 1) the selection of R&D measures became more systematic when compared to the empirical analysis, as it was common that no systematic approaches had been utilized in the studied organizations earlier; 2) the evaluation methods or measures of R&D chosen with the help of the developed approaches can be more directly utilized in decision-making, because of the thorough consideration of the purpose of measurement, as well as other dimensions of measurement; 3) more balance in the set of R&D measures was desired and gained through the holistic approaches to the selection processes; and 4) more objectivity was gained through organizing the selection processes, as the earlier systems were considered subjective in many organizations. Scientifically, this dissertation aims to contribute to the present body of knowledge of R&D performance analysis by facilitating the handling of the versatility and challenges of R&D performance analysis, as well as the factors and dimensions influencing the selection of R&D performance measures, and by integrating these aspects into the developed novel approaches, methods and tools for the selection processes of R&D measures, applied in real-world organizations. 
Throughout the research, the handling of the versatility and challenges of R&D performance analysis, as well as the factors and dimensions influencing R&D performance measure selection, is strongly integrated with the constructed approaches. Thus, the research meets the above-mentioned purposes and objectives of the dissertation from both the scientific and the practical point of view.
Abstract:
Requirements-related issues have been found to be the third most important risk factor in software projects and the biggest reason for software project failures. This is not a surprise, since requirements engineering (RE) practices have been reported deficient in more than 75% of all enterprises. A problem analysis on small and low-maturity software organizations revealed two central reasons for not starting process improvement efforts: lack of resources and uncertainty about process improvement effort paybacks. In the constructive part of the study, a basic RE method, BaRE, was developed to provide an easy-to-adopt way to introduce basic systematic RE practices in small and low-maturity organizations. Based on the diffusion of innovations literature, thirteen desirable characteristics were identified for the solution, and the method was implemented in five key components: requirements document template, requirements development practices, requirements management practices, tool support for requirements management, and training. The empirical evaluation of the BaRE method was conducted in three industrial case studies. In this evaluation, two companies established a completely new RE infrastructure following the suggested practices, while the third company continued requirements document template development based on the provided template and used it extensively in practice. The real benefits of adopting the method were visible in the companies within four to six months from the start of the evaluation project, and the two small companies in the project completed their improvement efforts with an input equal to about one person-month. The collected data on the case studies indicates that the companies implemented the new practices with little adaptation and little effort. Thus it can be concluded that the constructed BaRE method is indeed easy to adopt and can help introduce basic systematic RE practices in small organizations.
Abstract:
Background: Optimization methods allow designing changes in a system so that specific goals are attained. These techniques are fundamental for metabolic engineering. However, they are not directly applicable for investigating the evolution of metabolic adaptation to environmental changes. Although biological systems have evolved by natural selection and result in well-adapted systems, we can hardly expect that actual metabolic processes are at the theoretical optimum that could result from an optimization analysis. More likely, natural systems are to be found in a feasible region compatible with global physiological requirements. Results: We first present a new method for globally optimizing nonlinear models of metabolic pathways that are based on the Generalized Mass Action (GMA) representation. The optimization task is posed as a nonconvex nonlinear programming (NLP) problem that is solved by an outer-approximation algorithm. This method relies on iteratively solving reduced NLP slave subproblems and mixed-integer linear programming (MILP) master problems that provide valid upper and lower bounds, respectively, on the global solution to the original NLP. The capabilities of this method are illustrated through its application to the anaerobic fermentation pathway in Saccharomyces cerevisiae. We next introduce a method to identify the feasible parametric regions that allow a system to meet a set of physiological constraints that can be represented in mathematical terms through algebraic equations. This technique is based on applying the outer-approximation-based algorithm iteratively over a reduced search space in order to identify regions that contain feasible solutions to the problem and discard others in which no feasible solution exists. 
As an example, we characterize the feasible enzyme activity changes that are compatible with an appropriate adaptive response of the yeast Saccharomyces cerevisiae to heat shock. Conclusion: Our results show the utility of the suggested approach for investigating the evolution of adaptive responses to environmental changes. The proposed method can be used in other important applications such as the evaluation of parameter changes that are compatible with health and disease states.
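The bounding scheme described above alternates slave subproblems, which yield feasible points (upper bounds), with master problems built from linearizations, which yield lower bounds. The following is a minimal hedged sketch of that alternation, not the paper's algorithm: a one-variable convex toy objective stands in for the GMA model, the LP/MILP master is replaced by minimizing the cut envelope over a fixed grid, and convexity is assumed so that gradient cuts are valid underestimators (the actual method additionally handles nonconvexity):

```python
import math

def f(x):
    # Convex toy objective standing in for the metabolic-pathway model
    return (x - 1.0) ** 2 + math.exp(0.5 * x)

def df(x):
    # Derivative of f, used to build linearization (outer-approximation) cuts
    return 2.0 * (x - 1.0) + 0.5 * math.exp(0.5 * x)

lo_x, hi_x = -4.0, 4.0
grid = [lo_x + i * (hi_x - lo_x) / 400 for i in range(401)]

env = [-math.inf] * len(grid)   # pointwise max of all cuts: a lower envelope of f
x_k = grid[0]                   # initial iterate
upper, best_x = math.inf, x_k
lower = -math.inf

for _ in range(len(grid) + 1):
    val = f(x_k)                       # "slave" step: evaluate a feasible point
    if val < upper:
        upper, best_x = val, x_k       # tightest upper bound found so far
    g = df(x_k)
    for i, x in enumerate(grid):       # add the cut f(x_k) + f'(x_k) * (x - x_k)
        env[i] = max(env[i], val + g * (x - x_k))
    lower, x_k = min(zip(env, grid))   # "master" step: minimize the cut envelope
    if upper - lower < 1e-9:           # bounds have converged
        break
```

The upper and lower bounds close on each other as cuts accumulate; on this toy problem the minimizer found is near x = 0.65.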
Abstract:
BACKGROUND: The mean age of acute dengue has undergone a shift towards older ages. This fact points towards the relevance of assessing the influence of age-related comorbidities, such as diabetes, on the clinical presentation of dengue episodes. Identification of factors associated with a severe presentation is of high relevance, because timely treatment is the most important intervention to avert complications and death. This review summarizes and evaluates the published evidence on the association between diabetes and the risk of a severe clinical presentation of dengue. METHODOLOGY/FINDINGS: A systematic literature review was conducted using the MEDLINE database to identify any relevant association between dengue and diabetes. Five case-control studies (4 hospital-based, 1 population-based) compared the prevalence of diabetes (self-reported or abstracted from medical records) between persons with dengue (acute or past; controls) and patients with severe clinical manifestations. All except one study were conducted before 2009, and all studies collected information according to the WHO 1997 classification system. The reported odds ratios were formally summarized by random-effects meta-analyses. A diagnosis of diabetes was associated with an increased risk of a severe clinical presentation of dengue (OR 1.75; 95% CI: 1.08-2.84, p = 0.022). CONCLUSIONS/SIGNIFICANCE: Large prospective studies that systematically and objectively record relevant signs and symptoms of dengue fever episodes, as well as hyperglycemia both in the past and at the time of dengue diagnosis, are needed to properly address the effect of diabetes on the clinical presentation of an acute dengue fever episode. The currently available epidemiological evidence is very limited and only suggestive. The increasing global prevalence of both dengue and diabetes justifies further studies. 
At this point, confirming dengue infection as early as possible in diabetes patients with fever who live in dengue-endemic regions seems justified. The presence of this comorbidity may warrant closer observation of glycemic control and adapted fluid management to diminish the risk of a severe clinical presentation of dengue.
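The pooled estimate above (OR 1.75; 95% CI 1.08-2.84) was obtained by random-effects meta-analysis. As a hedged sketch of how such pooling works, the following implements DerSimonian-Laird pooling of log odds ratios; the five study estimates below are invented for illustration and are not the data of this review:

```python
import math

def random_effects_pool(log_ors, variances):
    """DerSimonian-Laird random-effects pooling of log odds ratios.

    Returns the pooled OR and its 95% confidence interval.
    """
    w = [1.0 / v for v in variances]
    fixed = sum(wi * y for wi, y in zip(w, log_ors)) / sum(w)
    # Cochran's Q and the method-of-moments between-study variance tau^2
    q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, log_ors))
    df = len(log_ors) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)
    # Re-weight each study by total (within + between) variance and pool
    w_star = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * y for wi, y in zip(w_star, log_ors)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return (math.exp(pooled),
            math.exp(pooled - 1.96 * se),
            math.exp(pooled + 1.96 * se))

# Invented log-ORs and variances for five hypothetical case-control studies
# (illustrative only; not the five studies summarized above).
log_ors = [math.log(r) for r in (1.4, 2.2, 1.1, 3.0, 1.6)]
variances = [0.10, 0.15, 0.08, 0.30, 0.12]
or_, ci_lo, ci_hi = random_effects_pool(log_ors, variances)
print(round(or_, 2), round(ci_lo, 2), round(ci_hi, 2))
```

Because the pooled log-OR is a weighted average of the study log-ORs, the pooled OR always lies between the smallest and largest study estimates.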
Abstract:
CONTEXT: The current standard for diagnosing prostate cancer in men at risk relies on a transrectal ultrasound-guided biopsy test that is blind to the location of the cancer. To increase the accuracy of this diagnostic pathway, a software-based magnetic resonance imaging-ultrasound (MRI-US) fusion targeted biopsy approach has been proposed. OBJECTIVE: Our main objective was to compare the detection rate of clinically significant prostate cancer with software-based MRI-US fusion targeted biopsy against standard biopsy. The two strategies were also compared in terms of detection of all cancers, sampling utility and efficiency, and rate of serious adverse events. The outcomes of different targeted approaches were also compared. EVIDENCE ACQUISITION: We performed a systematic review of PubMed/Medline, Embase (via Ovid), and Cochrane Review databases in December 2013 following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement. The risk of bias was evaluated using the Quality Assessment of Diagnostic Accuracy Studies-2 tool. EVIDENCE SYNTHESIS: Fourteen papers reporting the outcomes of 15 studies (n=2293; range: 13-582) were included. We found that MRI-US fusion targeted biopsies detect more clinically significant cancers (median: 33.3% vs 23.6%; range: 13.2-50% vs 4.8-52%) using fewer cores (median: 9.2 vs 37.1) compared with standard biopsy techniques, respectively. Some studies showed a lower detection rate of all cancers (median: 50.5% vs 43.4%; range: 23.7-82.1% vs 14.3-59%). MRI-US fusion targeted biopsy was able to detect some clinically significant cancers that would have been missed by using only standard biopsy (median: 9.1%; range: 5-16.2%). It was not possible to determine which of the two biopsy approaches led to more serious adverse events because standard and targeted biopsies were performed in the same session. 
Software-based MRI-US fusion targeted biopsy detected more clinically significant disease than visual targeted biopsy in the only study reporting on this outcome (20.3% vs 15.1%). CONCLUSIONS: Software-based MRI-US fusion targeted biopsy seems to detect more clinically significant cancers deploying fewer cores than standard biopsy. Because there was significant study heterogeneity in patient inclusion, definition of significant cancer, and the protocol used to conduct the standard biopsy, these findings need to be confirmed by further large multicentre validating studies. PATIENT SUMMARY: We compared the ability of standard biopsy to diagnose prostate cancer against a novel approach using software to overlay the images from magnetic resonance imaging and ultrasound to guide biopsies towards the suspicious areas of the prostate. We found consistent findings showing the superiority of this novel targeted approach, although further high-quality evidence is needed to change current practice.
Abstract:
Randomized, controlled trials have demonstrated efficacy for second-generation antipsychotics in the treatment of acute mania in bipolar disorder. Despite depression being considered the hallmark of bipolar disorder, there are no published systematic reviews or meta-analyses evaluating the efficacy of modern atypical antipsychotics in bipolar depression. We systematically reviewed published or registered randomized, double-blind, placebo-controlled trials (RCTs) of modern antipsychotics in adult bipolar I and/or II depressive patients (DSM-IV criteria). Efficacy outcomes were assessed based on changes in the Montgomery-Asberg Depression Rating Scale (MADRS) during an 8-wk period. Data were combined through meta-analysis using risk ratio as an effect size with a 95% confidence interval (95% CI) and with a level of statistical significance of 5% (p<0.05). We identified five RCTs; four involved antipsychotic monotherapy and one addressed both monotherapy and combination with an antidepressant. The two quetiapine trials analysed the safety and efficacy of two doses: 300 and 600 mg/d. The only olanzapine trial assessed olanzapine monotherapy within a range of 5-20 mg/d and olanzapine-fluoxetine combination within ranges of 5-20 mg/d and 6-12 mg/d, respectively. The two aripiprazole placebo-controlled trials assessed doses of 5-30 mg/d. Quetiapine and olanzapine trials (3/5, 60%) demonstrated superiority over placebo (p<0.001). Only 2/5 (40%) (both aripiprazole trials) failed in the primary efficacy measure after the first 6 wk. Some modern antipsychotics (quetiapine and olanzapine) have demonstrated efficacy in bipolar depressive patients from week 1 onwards. Rapid onset of action seems to be a common feature of atypical antipsychotics in bipolar depression. Comment in: Efficacy of modern antipsychotics in placebo-controlled trials in bipolar depression: a meta-analysis--results to be interpreted with caution.
Abstract:
OBJECTIVE: The objective was to determine the risk of stroke associated with subclinical hypothyroidism. DATA SOURCES AND STUDY SELECTION: Published prospective cohort studies were identified through a systematic search of several databases through November 2013, without restrictions. Unpublished studies were identified through the Thyroid Studies Collaboration. We collected individual participant data on thyroid function and stroke outcome. Euthyroidism was defined as TSH levels of 0.45-4.49 mIU/L, and subclinical hypothyroidism as TSH levels of 4.5-19.9 mIU/L with normal T4 levels. DATA EXTRACTION AND SYNTHESIS: We collected individual participant data on 47 573 adults (3451 with subclinical hypothyroidism) from 17 cohorts followed up from 1972-2014 (489 192 person-years). Age- and sex-adjusted pooled hazard ratios (HRs) for participants with subclinical hypothyroidism compared to euthyroidism were 1.05 (95% confidence interval [CI], 0.91-1.21) for stroke events (combined fatal and nonfatal stroke) and 1.07 (95% CI, 0.80-1.42) for fatal stroke. Stratified by age, the HR for stroke events was 3.32 (95% CI, 1.25-8.80) for individuals aged 18-49 years. There was an increased risk of fatal stroke in the age groups 18-49 and 50-64 years, with HRs of 4.22 (95% CI, 1.08-16.55) and 2.86 (95% CI, 1.31-6.26), respectively (p for trend = 0.04). We found no increased risk for those 65-79 years old (HR, 1.00; 95% CI, 0.86-1.18) or ≥ 80 years old (HR, 1.31; 95% CI, 0.79-2.18). There was a pattern of increased risk of fatal stroke with higher TSH concentrations. CONCLUSIONS: Although no overall effect of subclinical hypothyroidism on stroke could be demonstrated, an increased risk was observed in subjects younger than 65 years and in those with higher TSH concentrations.
Abstract:
AIMS: Published incidences of acute mountain sickness (AMS) vary widely. The reasons for this variation, and predictive factors of AMS, are not well understood. We aimed to identify predictive factors associated with the occurrence of AMS, and to test the hypothesis that study design is an independent predictive factor of AMS incidence. We performed a systematic search (Medline, bibliographies) for relevant articles in English or French, up to April 28, 2013. Studies of any design reporting on AMS incidence in humans without prophylaxis were selected. Data on incidence and potential predictive factors were extracted by two reviewers and cross-checked by four reviewers. Associations between predictive factors and AMS incidence were sought through bivariate and multivariate analyses for each study design separately. The association between AMS incidence and study design was assessed using multiple linear regression. RESULTS: We extracted data on 53,603 subjects from 34 randomized controlled trials, 44 cohort studies, and 33 cross-sectional studies. In randomized trials, the median AMS incidence without prophylaxis was 60% (range, 16%-100%); mode of ascent and population were significantly associated with AMS incidence. In cohort studies, the median AMS incidence was 51% (0%-100%); geographical location was significantly associated with AMS incidence. In cross-sectional studies, the median AMS incidence was 32% (0%-68%); mode of ascent and maximum altitude were significantly associated with AMS incidence. In a multivariate analysis, study design (p=0.012), mode of ascent (p=0.003), maximum altitude (p<0.001), population (p=0.002), and geographical location (p<0.001) were significantly associated with AMS incidence. Age, sex, speed of ascent, duration of exposure, and history of AMS were inconsistently reported and therefore not analyzed further. CONCLUSIONS: Reported incidences and identifiable predictive factors of AMS depend on study design.
Abstract:
Many therapies have been proposed for the management of temporomandibular disorders, including the use of different drugs. However, lack of knowledge about the mechanisms behind the pain associated with this pathology, and the fact that the studies carried out so far use highly disparate patient selection criteria, mean that results on the effectiveness of the different medications are inconclusive. This study presents a systematic review of the literature published on the use of tricyclic antidepressants for the treatment of temporomandibular disorders, using the SORT (Strength of Recommendation Taxonomy) criteria to rate the level of scientific evidence of the different studies. Following analysis of the articles, and according to their scientific quality, a type B recommendation is given in favor of the use of tricyclic antidepressants for the treatment of temporomandibular disorders.
Abstract:
BACKGROUND: Shared Decision Making (SDM) is increasingly advocated as a model for medical decision making. However, there is still low use of SDM in clinical practice. High impact factor journals might represent an efficient way for its dissemination. We aimed to identify and characterize publication trends of SDM in 15 high impact medical journals. METHODS: We selected the 15 general and internal medicine journals with the highest impact factor publishing original articles, letters and editorials. We retrieved publications from 1996 to 2011 through the full-text search function on each journal website and abstracted bibliometric data. We included publications of any type containing the phrase "shared decision making" or five other variants in their abstract or full text. These were referred to as SDM publications. A polynomial Poisson regression model with logarithmic link function was used to assess the evolution across the period of the number of SDM publications according to publication characteristics. RESULTS: We identified 1285 SDM publications out of 229,179 publications in 15 journals from 1996 to 2011. The absolute number of SDM publications by journal ranged from 2 to 273 over 16 years. SDM publications increased both in absolute and relative numbers per year, from 46 (0.32% relative to all publications from the 15 journals) in 1996 to 165 (1.17%) in 2011. This growth was exponential (P < 0.01). We found fewer research publications (465, 36.2% of all SDM publications) than non-research publications, which included non-systematic reviews, letters, and editorials. The increase of research publications across time was linear. Full-text search retrieved ten times more SDM publications than a similar PubMed search (1285 vs. 119 respectively). CONCLUSION: This review in full-text showed that SDM publications increased exponentially in major medical journals from 1996 to 2011. 
This growth might reflect an increased dissemination of the SDM concept to the medical community.
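The trend analysis described above used a polynomial Poisson regression with a logarithmic link. As a hedged sketch of the core idea (a simple log-linear Poisson model fitted by Newton-Raphson; the yearly counts below are synthetic stand-ins, not the study's publication counts):

```python
import math

def fit_poisson(xs, ys, n_iter=25):
    """Fit counts ~ Poisson(exp(b0 + b1 * x)) by Newton-Raphson."""
    # Initialize the intercept at the log of the mean count (standard GLM start)
    b0, b1 = math.log(sum(ys) / len(ys)), 0.0
    for _ in range(n_iter):
        mu = [math.exp(b0 + b1 * x) for x in xs]
        # Score (gradient of the log-likelihood)
        g0 = sum(y - m for y, m in zip(ys, mu))
        g1 = sum((y - m) * x for y, m, x in zip(ys, mu, xs))
        # Fisher information (expected negative Hessian)
        h00 = sum(mu)
        h01 = sum(m * x for m, x in zip(mu, xs))
        h11 = sum(m * x * x for m, x in zip(mu, xs))
        det = h00 * h11 - h01 * h01
        b0 += (h11 * g0 - h01 * g1) / det
        b1 += (h00 * g1 - h01 * g0) / det
    return b0, b1

# Synthetic yearly counts growing as exp(3 + 0.08 * year_index): an invented
# 16-year series in the spirit of the 1996-2011 SDM counts described above.
xs = list(range(16))
ys = [round(math.exp(3.0 + 0.08 * x)) for x in xs]
b0, b1 = fit_poisson(xs, ys)
```

The fit recovers coefficients close to the generating values (b0 near 3, b1 near 0.08), up to the rounding of the synthetic counts; a positive b1 under the log link corresponds to the exponential growth reported in the abstract.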