954 results for Web of Science
Abstract:
As environmental problems have become more complex, policy and regulatory decisions have become far more difficult to make. The use of science has become an important practice in the decision-making process of many federal agencies. Many different types of scientific information are used to make decisions within the EPA, with computer models becoming especially important. Environmental models are used throughout the EPA in a variety of contexts, and their predictive capacity has become highly valued in decision making. The main focus of this research is to examine the EPA’s Council for Regulatory Environmental Modeling (CREM) as a case study in addressing science issues, particularly models, in government agencies. Specifically, the goal was to answer the following questions: What is the history of the CREM, and how can this information shed light on the process of science policy implementation? What were the goals of implementing the CREM? Were these goals reached, and how have they changed? What impediments has the CREM faced, and why did these impediments occur? The three main sources of information for this research were observations during summer employment with the CREM, document review, and supplemental interviews with CREM participants and other members of the modeling community. Examining the history of modeling at the EPA, as well as the history of the CREM, provides insight into the many challenges faced when implementing science policy and science policy programs. After examining the many impediments that the CREM has faced in implementing modeling policies, it was clear that they fall into two separate categories: classic and paradoxical. The classic impediments include the more standard impediments to science policy implementation that might be found in any regulatory environment, such as lack of resources and changes in administration.
Paradoxical impediments are cyclical in nature, with no clear solution, such as balancing top-down versus bottom-up initiatives and coping with differing perceptions. These impediments, when not properly addressed, severely hinder the ability of organizations to successfully implement science policy.
Abstract:
Presentation by Dr. Stephen Ditchkoff.
Abstract:
BACKGROUND: Psychological interventions for infertile patients seek to improve mental health and increase pregnancy rates. The aim of the present meta-analysis was to examine whether psychological interventions improve mental health and pregnancy rate among infertile patients. Thus, controlled studies investigating psychological interventions following the introduction of assisted reproductive treatments (ART) were pooled. METHODS: The databases Medline, PsycINFO, PSYNDEX, Web of Science and the Cochrane Library were searched to identify relevant articles published between 1978 and 2007 (384 articles). Included were prospective intervention studies on infertile patients (women and men) receiving psychological interventions independent of actual medical treatment. The outcome measures were mental health and pregnancy rate. A total of 21 controlled studies were ultimately included in a meta-analysis comparing the efficacy of psychological interventions. Effect sizes (ES) were calculated for psychological measures and risk ratios (RR) for pregnancy rate. RESULTS: The findings from controlled studies indicated no significant effect of psychological interventions on mental health (depression: ES 0.02, 99% CI: -0.19, 0.24; anxiety: ES 0.16, 99% CI: -0.10, 0.42; mental distress: ES 0.08, 99% CI: -0.10, 0.51). Nevertheless, there was evidence for a positive impact of psychological interventions on pregnancy rates (RR 1.42, 99% CI: 1.02, 1.96). Concerning pregnancy rates, significant effects of psychological interventions were found only for couples not receiving ART. CONCLUSIONS: Despite the absence of clinical effects on mental health measures, psychological interventions were found to improve some patients' chances of becoming pregnant. Psychological interventions represent an attractive treatment option, in particular for infertile patients who are not receiving medical treatment.
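The risk-ratio arithmetic used in abstracts like the one above can be sketched as follows. The counts here are illustrative placeholders, not data from the meta-analysis; the Wald interval on the log scale is one standard way to attach a confidence interval to a risk ratio, and z = 2.576 matches the 99% level reported above.

```python
import math

def risk_ratio(events_tx, n_tx, events_ctl, n_ctl, z=2.576):
    """Risk ratio with a Wald confidence interval (z=2.576 gives a 99% CI).

    The standard error of ln(RR) is sqrt(1/a - 1/n1 + 1/c - 1/n2)
    for event counts a, c in groups of size n1, n2.
    """
    rr = (events_tx / n_tx) / (events_ctl / n_ctl)
    se = math.sqrt(1 / events_tx - 1 / n_tx + 1 / events_ctl - 1 / n_ctl)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Illustrative counts: 40/100 pregnancies with intervention vs 28/100 without
rr, lo, hi = risk_ratio(40, 100, 28, 100)
print(f"RR {rr:.2f} (99% CI {lo:.2f}-{hi:.2f})")
```

Note that the interval excludes 1 only when the log-scale bound clears zero, which is why a pooled RR of 1.42 can be significant while a single small study with the same RR is not.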
Abstract:
BACKGROUND Current guidelines for evaluating cleft palate treatments are mostly based on two-dimensional (2D) evaluation, but three-dimensional (3D) imaging methods to assess treatment outcome are steadily rising. OBJECTIVE To identify 3D imaging methods for quantitative assessment of soft tissue and skeletal morphology in patients with cleft lip and palate. DATA SOURCES Literature was searched using PubMed (1948-2012), EMBASE (1980-2012), Scopus (2004-2012), Web of Science (1945-2012), and the Cochrane Library. The last search was performed September 30, 2012. Reference lists were hand searched for potentially eligible studies. There was no language restriction. STUDY SELECTION We included publications using 3D imaging techniques to assess facial soft tissue or skeletal morphology in patients older than 5 years with a cleft lip with or without a cleft palate. We reviewed studies involving the facial region when at least 10 subjects in the sample had at least one cleft type. Only primary publications were included. DATA EXTRACTION Independent extraction of data and quality assessments were performed by two observers. RESULTS Five hundred full-text publications were retrieved; 144 met the inclusion criteria, 63 of which were high-quality studies. There were differences in study designs, topics studied, patient characteristics, and success measurements; therefore, only a systematic review could be conducted. The main 3D techniques used in cleft lip and palate patients are CT, CBCT, MRI, stereophotogrammetry, and laser surface scanning. These techniques are mainly used for soft tissue analysis, evaluation of bone grafting, and changes in the craniofacial skeleton. Digital dental casts are used to evaluate treatment and changes over time. CONCLUSION The available evidence implies that 3D imaging methods can be used for documentation of CLP patients. No data are available yet showing that 3D methods are more informative than conventional 2D methods. Further research is warranted to clarify this.
Abstract:
CONTEXT Although open radical cystectomy (ORC) is still the standard approach, laparoscopic radical cystectomy (LRC) and robot-assisted radical cystectomy (RARC) are increasingly performed. OBJECTIVE To report on a systematic literature review and cumulative analysis of pathologic, oncologic, and functional outcomes of RARC in comparison with ORC and LRC. EVIDENCE ACQUISITION Medline, Scopus, and Web of Science databases were searched using a free-text protocol including the terms robot-assisted radical cystectomy or da Vinci radical cystectomy or robot* radical cystectomy. RARC case series and studies comparing RARC with either ORC or LRC were collected. A cumulative analysis was conducted. EVIDENCE SYNTHESIS The searches retrieved 105 papers, 87 of which reported on pathologic, oncologic, or functional outcomes. Most series were retrospective and had small case numbers, short follow-up, and potential patient selection bias. The lymph node yield during lymph node dissection was 19 (range: 3-55), with half of the series following an extended template (yield range: 11-55). The lymph node-positive rate was 22%. The performance of lymphadenectomy was correlated with surgeon and institutional volume. Cumulative analyses showed no significant difference in lymph node yield between RARC and ORC. Positive surgical margin (PSM) rates were 5.6% (1-1.5% in pT2 disease and 0-25% in pT3 and higher disease). PSM rates did not appear to decrease with sequential case numbers. Cumulative analyses showed no significant difference in rates of surgical margins between RARC and ORC or RARC and LRC. Neoadjuvant chemotherapy use ranged from 0% to 31%, with adjuvant chemotherapy used in 4-29% of patients. Only six series reported a mean follow-up of >36 mo. Three-year disease-free survival (DFS), cancer-specific survival (CSS), and overall survival (OS) rates were 67-76%, 68-83%, and 61-80%, respectively. The 5-yr DFS, CSS, and OS rates were 53-74%, 66-80%, and 39-66%, respectively. 
Similar to ORC, disease of higher pathologic stage or evidence of lymph node involvement was associated with worse survival. Very limited data were available with respect to functional outcomes. The 12-mo continence rates with continent diversion were 83-100% in men for daytime continence and 66-76% for nighttime continence. In one series, potency was recovered in 63% of patients who were evaluable at 12 mo. CONCLUSIONS Oncologic and functional data from RARC remain immature, and longer-term prospective studies are needed. Cumulative analyses demonstrated that lymph node yields and PSM rates were similar between RARC and ORC. Conclusive long-term survival outcomes for RARC were limited, although oncologic outcomes up to 5 yr were similar to those reported for ORC. PATIENT SUMMARY Although open radical cystectomy (RC) is still regarded as the standard treatment for muscle-invasive bladder cancer, laparoscopic and robot-assisted RCs are becoming more popular. Templates of lymph node dissection, lymph node yields, and positive surgical margin rates are acceptable with robot-assisted RC. Although definitive comparisons with open RC with respect to oncologic or functional outcomes are lacking, early results appear comparable.
Abstract:
Subclinical thyroid dysfunction has been associated with coronary heart disease, but the risk of stroke is unclear. Our aim was to combine the evidence on the association between subclinical thyroid dysfunction and the risk of stroke in prospective cohort studies. We searched Medline (OvidSP), Embase, Web of Science, PubMed Publisher, Cochrane and Google Scholar from inception to November 2013 using a cohort filter, but without language restriction or other limitations. Reference lists of articles were also searched. Two independent reviewers screened articles according to pre-specified criteria and selected prospective cohort studies with baseline thyroid function measurements and assessment of stroke outcomes. Data were derived using a standardized data extraction form. Quality was assessed according to previously defined quality indicators by two independent reviewers. We pooled the outcomes using a random-effects model. Of 2,274 articles screened, six cohort studies, including 11,309 participants with 665 stroke events, met the criteria. Four of the six studies provided information on subclinical hyperthyroidism, including a total of 6,029 participants, and five on subclinical hypothyroidism (n = 10,118). The pooled hazard ratio (HR) was 1.08 (95% CI 0.87-1.34) for subclinical hypothyroidism (I² = 0%) and 1.17 (95% CI 0.54-2.56) for subclinical hyperthyroidism (I² = 67%) compared to euthyroidism. Subgroup analyses yielded similar results. Our systematic review provides no evidence supporting an increased risk of stroke associated with subclinical thyroid dysfunction. However, the available literature is insufficient, and larger datasets are needed to perform extended analyses. Also, there were insufficient events to exclude a clinically significant risk from subclinical hyperthyroidism, and more data are required for subgroup analyses.
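The random-effects pooling and I² heterogeneity statistic reported above can be sketched as follows. The DerSimonian-Laird estimator is one common random-effects method (the abstract does not specify which was used), and the per-study hazard ratios and confidence intervals below are illustrative placeholders, not the study data.

```python
import math

def pool_random_effects(hrs, ci_los, ci_his, z=1.96):
    """DerSimonian-Laird random-effects pooling of hazard ratios.

    Each study's standard error is recovered from its 95% CI on the
    log scale: se = (ln(hi) - ln(lo)) / (2 * z).
    """
    logs = [math.log(hr) for hr in hrs]
    ses = [(math.log(hi) - math.log(lo)) / (2 * z)
           for lo, hi in zip(ci_los, ci_his)]
    w = [1 / se**2 for se in ses]  # fixed-effect (inverse-variance) weights
    k = len(hrs)
    fixed = sum(wi * li for wi, li in zip(w, logs)) / sum(w)
    # Cochran's Q and the between-study variance tau^2
    q = sum(wi * (li - fixed) ** 2 for wi, li in zip(w, logs))
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)
    # Random-effects weights incorporate tau^2
    w_re = [1 / (se**2 + tau2) for se in ses]
    pooled = sum(wi * li for wi, li in zip(w_re, logs)) / sum(w_re)
    se_pooled = math.sqrt(1 / sum(w_re))
    i2 = max(0.0, (q - (k - 1)) / q) * 100 if q > 0 else 0.0
    return (math.exp(pooled),
            math.exp(pooled - z * se_pooled),
            math.exp(pooled + z * se_pooled),
            i2)

# Illustrative (made-up) per-study hazard ratios with 95% CIs
hr, lo, hi, i2 = pool_random_effects(
    hrs=[0.95, 1.10, 1.20],
    ci_los=[0.70, 0.85, 0.80],
    ci_his=[1.29, 1.42, 1.80])
print(f"pooled HR {hr:.2f} (95% CI {lo:.2f}-{hi:.2f}), I^2 = {i2:.0f}%")
```

When Q is below its degrees of freedom, tau² is truncated at zero and the random-effects result collapses to the fixed-effect one; that is why an I² of 0% (as for subclinical hypothyroidism above) yields a pooled estimate identical to inverse-variance pooling.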
Abstract:
BACKGROUND Anthelmintic drugs have been widely used in sheep as a cost-effective means for gastro-intestinal nematode (GIN) control. However, growing anthelmintic resistance (AHR) has created a compelling need to identify evidence-based management recommendations that reduce the risk of further development and impact of AHR. OBJECTIVE To identify, critically assess, and synthesize available data from primary research on factors associated with AHR in sheep. METHODS Publications reporting original observational or experimental research on selected factors associated with AHR in sheep GINs, published after 1974, were identified through two processes. Three electronic databases (PubMed, Agricola, CAB) and Web of Science (a collection of databases) were searched for potentially relevant publications. Additional publications were identified through consultation with experts, manual search of the references of included publications and conference proceedings, and information solicited from small ruminant practitioner list-serves. Two independent investigators screened abstracts for relevance. Relevant publications were assessed for risk of systematic bias. Where sufficient data were available, random-effects meta-analyses (MAs) were performed to estimate the pooled odds ratio (OR) and 95% confidence interval (CI) of AHR for factors reported in ≥2 publications. RESULTS Of the 1712 abstracts screened for eligibility, 131 were deemed relevant for full publication review. Thirty publications describing 25 individual studies (15 observational studies, 7 challenge trials, and 3 controlled trials) were included in the qualitative synthesis and assessed for systematic bias. An unclear (i.e. not reported, or unable to assess) or high risk of selection bias and confounding bias was found in 93% (14/15) and 60% (9/15) of the observational studies, respectively, while an unclear risk of selection bias was identified in all of the trials.
Ten independent studies were included in the quantitative synthesis, and MAs were performed for five factors. Only high frequency of treatment was a significant risk factor (OR=4.39; 95% CI=1.59, 12.14); the remaining four factors showed positive but non-significant associations: mixed-species grazing (OR=1.63; 95% CI=0.66, 4.07); flock size (OR=1.02; 95% CI=0.97, 1.07); use of long-acting drug formulations (OR=2.85; 95% CI=0.79, 10.24); and drench-and-shift pasture management (OR=4.08; 95% CI=0.75, 22.16). CONCLUSIONS While there is abundant literature on the topic of AHR in sheep GINs, few studies have explicitly investigated the association between putative risk or protective factors and AHR. Consequently, several of the current recommendations on parasite management are not evidence-based. Moreover, many of the studies included in this review had a high or unclear risk of systematic bias, highlighting the need to improve the study design and/or reporting of future research carried out in this field.
Abstract:
OBJECTIVES To systematically review the available literature on the influence of dental implant placement and loading protocols on peri-implant innervation. MATERIAL AND METHODS The MEDLINE, Cochrane, EMBASE, Web of Science, LILACS and OpenGrey databases, together with hand searching, were used to identify studies published up to July 2013, with a populations, exposures and outcomes (PEO) search strategy using MeSH keywords, focusing on the question: Is there, and if so, what is the effect of time between tooth extraction and implant placement or implant loading on neural fibre content in the peri-implant hard and soft tissues? RESULTS Of 683 titles retrieved based on the standardized search strategy, only 10 articles fulfilled the inclusion criteria: five evaluating the innervation of the peri-implant epithelium and five elucidating sensory function in peri-implant bone. Three of the included studies were considered to have a methodology of medium quality, and the rest were of low quality. All of these papers reported sensory innervation around osseointegrated implants, either at the bone-implant interface or in the peri-implant epithelium, with a distinct innervation pattern. Compared to unloaded implants or extraction sites without implantation, a significantly higher density of nerve fibres around loaded dental implants was confirmed. CONCLUSIONS To date, the published literature describes peri-implant innervation with a distinct pattern in hard and soft tissues. Implant loading seems to increase the density of nerve fibres in peri-implant tissues, with insufficient evidence to distinguish between the innervation patterns following immediate and delayed implant placement and loading protocols.
Variability in study design and loading protocols across the literature and a high risk of bias in the studies included may contribute to this inconsistency, revealing the need for more uniformity in reporting, randomized controlled trials, longer observation periods and standardization of protocols.
Abstract:
DNA Barcoding (Hebert et al. 2003) has the potential to revolutionize the process of identifying and cataloguing biodiversity; however, significant controversy surrounds some of the proposed applications. In the seven years since DNA barcoding was introduced, the Web of Science records more than 600 studies that have weighed the pros and cons of this procedure. Unfortunately, the scientific community has been unable to come to any consensus on what threshold to use to differentiate species or even whether the barcoding region provides enough information to serve as an accurate species identification tool. The purpose of my thesis is to analyze mitochondrial DNA (mtDNA) barcoding’s potential to identify known species and provide a well-resolved phylogeny for the New Zealand cicada genus Kikihia. In order to do this, I created a phylogenetic tree for species in the genus Kikihia based solely on the barcoding region and compared it to a phylogeny previously created by Marshall et al. (2008) that benefits from information from other mtDNA and nuclear genes as well as species-specific song data. I determined how well the barcoding region delimits species that have been recognized based on morphology and song. In addition, I looked at the effect of sampling on the success of barcoding studies. I analyzed subsets of a larger, more densely sampled dataset for the Kikihia Muta Group to determine which aspects of my sampling strategy led to the most accurate identifications. Since DNA barcoding would by definition have problems in diagnosing hybrid individuals, I studied two species (K. “murihikua” and K. angusta) that are known to hybridize. Individuals that were not obvious hybrids (determined by morphology) were selected for the case study. Phylogenetic analysis of the barcoding region revealed insights into the reasons these two species could not be successfully differentiated using barcoding alone.
Abstract:
This study examined the effectiveness of discovery learning and direct instruction in a diverse second grade classroom. An assessment test and a transfer task were given to students to examine which method of instruction enabled them to grasp the content of a science lesson to a greater extent. Results showed that students in the direct instruction group scored higher on the assessment test and completed the transfer task at a faster pace; however, these differences were not statistically significant. Results also suggest that a mixture of instructional styles would serve to effectively disseminate information, as well as motivate students to learn.