642 results for Credibility
Abstract:
Our generation of computational scientists is living in an exciting time: not only do we get to pioneer important algorithms and computations, we also get to set standards on how computational research should be conducted and published. From Euclid's reasoning and Galileo's experiments, it took hundreds of years for the theoretical and experimental branches of science to develop standards for publication and peer review. Computational science, rightly regarded as the third branch, can walk the same road much faster. The success and credibility of science are anchored in the willingness of scientists to expose their ideas and results to independent testing and replication by other scientists. This requires the complete and open exchange of data, procedures and materials. The idea of "replication by other scientists" in reference to computations is more commonly known as "reproducible research". In this context, the journal "EAI Endorsed Transactions on Performance & Modeling, Simulation, Experimentation and Complex Systems" had the exciting and original idea of letting scientists submit an article together with the computational materials (software, data, etc.) used to produce its contents. The goal of this procedure is to allow the scientific community to verify the content of the paper, reproduce it on the platform independently of the chosen OS, confirm or invalidate it, and, especially, reuse it to produce new results. This procedure is of little help, however, without a minimum of methodological support: raw data sets and software are difficult to exploit without the logic that guided their use or production. This led us to conclude that, in addition to the data sets and the software, one more element must be provided: the workflow that ties them all together.
Abstract:
In patients with HIV-1 infection who are starting combination antiretroviral therapy (ART), the incidence of immune reconstitution inflammatory syndrome (IRIS) is not well defined. We did a meta-analysis to establish the incidence and lethality of the syndrome in patients with a range of previously diagnosed opportunistic infections, and examined the relation between occurrence and the degree of immunodeficiency. Systematic review identified 54 cohort studies of 13 103 patients starting ART, of whom 1699 developed IRIS. We calculated pooled cumulative incidences with 95% credibility intervals (CrI) by Bayesian methods and did a random-effects metaregression to analyse the relation between CD4 cell count and incidence of IRIS. In patients with previously diagnosed AIDS-defining illnesses, IRIS developed in 37.7% (95% CrI 26.6-49.4) of those with cytomegalovirus retinitis, 19.5% (6.7-44.8) of those with cryptococcal meningitis, 15.7% (9.7-24.5) of those with tuberculosis, 16.7% (2.3-50.7) of those with progressive multifocal leukoencephalopathy, 6.4% (1.2-24.7) of those with Kaposi's sarcoma, and 12.2% (6.8-19.6) of those with herpes zoster. 16.1% (11.1-22.9) of unselected patients starting ART developed IRIS of any type. 4.5% (2.1-8.6) of patients with any type of IRIS died, 3.2% (0.7-9.2) of those with tuberculosis-associated IRIS died, and 20.8% (5.0-52.7) of those with cryptococcal meningitis died. Metaregression analyses showed that the risk of IRIS is associated with CD4 cell count at the start of ART, with a high risk in patients with fewer than 50 cells per μL. Occurrence of IRIS might therefore be reduced by initiation of ART before immunodeficiency becomes advanced.
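The pooled cumulative incidences with 95% credibility intervals reported above come from hierarchical Bayesian models; the core idea can be sketched for a single cohort with a conjugate Beta-binomial posterior sampled via the standard library. All counts below are invented for illustration, not taken from the review:

```python
import random

random.seed(42)

# Hypothetical single cohort: 40 IRIS cases among 250 patients starting ART
events, n = 40, 250

# Uniform Beta(1, 1) prior; the conjugate posterior for the cumulative
# incidence is Beta(1 + events, 1 + n - events)
draws = sorted(random.betavariate(1 + events, 1 + n - events)
               for _ in range(20000))

# Equal-tailed 95% credibility interval: 2.5th and 97.5th posterior percentiles
lo, hi = draws[int(0.025 * len(draws))], draws[int(0.975 * len(draws))]
print(f"incidence {events / n:.1%}, 95% CrI {lo:.1%} to {hi:.1%}")
```

Pooling across cohorts, as in the review, additionally places a random-effects distribution over the cohort-level incidences; the sketch shows only the single-cohort building block.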
Abstract:
Background Osteoarthritis is the most common form of joint disorder and a leading cause of pain and physical disability. Observational studies suggested a benefit for joint lavage, but recent sham-controlled trials yielded conflicting results, suggesting that joint lavage is not effective. Objectives To compare joint lavage with sham intervention, placebo or non-intervention control in terms of effects on pain, function and safety outcomes in patients with knee osteoarthritis. Search methods We searched CENTRAL, MEDLINE, EMBASE, and CINAHL up to 3 August 2009, checked conference proceedings and reference lists, and contacted authors. Selection criteria We included studies if they were randomised or quasi-randomised trials that compared arthroscopic and non-arthroscopic joint lavage with a control intervention in patients with osteoarthritis of the knee. We did not apply any language restrictions. Data collection and analysis Two independent review authors extracted data using standardised forms. We contacted investigators to obtain missing outcome information. We calculated standardised mean differences (SMDs) for pain and function, and risk ratios for safety outcomes. We combined trials using inverse-variance random-effects meta-analysis. Main results We included seven trials with 567 patients. Three trials examined arthroscopic joint lavage, two non-arthroscopic joint lavage, and two tidal irrigation. The methodological quality and the quality of reporting were poor, and we identified a moderate to large degree of heterogeneity among the trials (I² = 65%). We found little evidence for a benefit of joint lavage in terms of pain relief at three months (SMD -0.11, 95% CI -0.42 to 0.21), corresponding to a difference in pain scores between joint lavage and control of 0.3 cm on a 10-cm visual analogue scale (VAS).
Results for improvement in function at three months were similar (SMD -0.10, 95% CI -0.30 to 0.11), corresponding to a difference in function scores between joint lavage and control of 0.2 cm on a WOMAC disability sub-scale from 0 to 10. For pain, estimates of effect sizes varied to some degree depending on the type of lavage, but this variation was likely to be explained by differences in the credibility of control interventions: trials using sham interventions to closely mimic the process of joint lavage showed a null effect. Reporting on adverse events and dropout rates was unsatisfactory, and we were unable to draw conclusions for these secondary outcomes. Authors' conclusions Joint lavage does not result in a relevant benefit for patients with knee osteoarthritis in terms of pain relief or improvement of function.
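The inverse-variance random-effects pooling and the I² heterogeneity statistic described in the methods can be sketched in a few lines. The following uses the standard DerSimonian-Laird estimator on invented trial-level SMDs and standard errors, not the review's actual data:

```python
# Hypothetical per-trial standardised mean differences and standard errors
smds = [-0.30, 0.05, -0.15, 0.10]
ses = [0.15, 0.20, 0.12, 0.18]

# Fixed-effect inverse-variance weights and pooled estimate
w = [1 / se ** 2 for se in ses]
fixed = sum(wi * d for wi, d in zip(w, smds)) / sum(w)

# Cochran's Q and the DerSimonian-Laird between-trial variance tau^2
q = sum(wi * (d - fixed) ** 2 for wi, d in zip(w, smds))
df = len(smds) - 1
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - df) / c)

# I^2: share of total variability attributable to heterogeneity, not chance
i2 = max(0.0, (q - df) / q) if q > 0 else 0.0

# Random-effects pooled SMD with approximate 95% confidence interval
wr = [1 / (se ** 2 + tau2) for se in ses]
pooled = sum(wi * d for wi, d in zip(wr, smds)) / sum(wr)
se_pooled = (1 / sum(wr)) ** 0.5
print(f"pooled SMD {pooled:.2f} "
      f"(95% CI {pooled - 1.96 * se_pooled:.2f} to {pooled + 1.96 * se_pooled:.2f}), "
      f"I2 = {i2:.0%}")
```

With these made-up inputs the pooled effect straddles zero, mirroring the "little evidence for a benefit" pattern reported above.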
Abstract:
Context Treatment of neurogenic lower urinary tract dysfunction (LUTD) is a challenge, because conventional therapies often fail. Sacral neuromodulation (SNM) has become a well-established therapy for refractory non-neurogenic LUTD, but its value in patients with a neurologic cause is unclear. Objective To assess the efficacy and safety of SNM for neurogenic LUTD. Evidence acquisition Studies were identified by electronic search of PubMed, EMBASE, and ScienceDirect (on 15 April 2010) and hand search of reference lists and review articles. SNM articles were included if they reported on the efficacy and/or safety of SNM in tested and/or permanently implanted patients suffering from neurogenic LUTD. Two reviewers independently selected studies and extracted data. Study estimates were pooled using Bayesian random-effects meta-analysis. Evidence synthesis Of the 26 independent studies (357 patients) included, the evidence level ranged from 2b to 4 according to the Oxford Centre for Evidence-Based Medicine. Half (n = 13) of the included studies reported data on both test phase and permanent SNM; the remaining studies were confined to test phase (n = 4) or permanent SNM (n = 9). The pooled success rate was 68% (95% credibility interval [CrI], 50–87%) for the test phase and 92% (95% CrI, 81–98%) for permanent SNM, with a mean follow-up of 26 mo. The pooled adverse event rate was 0% (95% CrI, 0–2%) for the test phase and 24% (95% CrI, 6–48%) for permanent SNM. Conclusions There is evidence indicating that SNM may be effective and safe for the treatment of patients with neurogenic LUTD. However, the number of investigated patients is low with high between-study heterogeneity, and there is a lack of randomised, controlled trials. Thus, well-designed, adequately powered studies are urgently needed before more widespread use of SNM for neurogenic LUTD can be recommended.
Abstract:
During the 1870s and 1880s, several British women writers traveled by transcontinental railroad across the American West via Salt Lake City, Utah, the capital of the Church of Jesus Christ of Latter-day Saints, or Mormons. These women subsequently wrote books about their travels for a home audience with a taste for adventures in the American West, and particularly for accounts of Mormon plural marriage, which was sanctioned by the Church before 1890. "The plight of the Mormon woman," a prominent social reform and literary theme of the period, situated Mormon women at the center of popular representations of Utah during the second half of the nineteenth century. "The Mormon question" thus lends itself to an analysis of how a stereotyped subaltern group was represented by elite British travelers. These residents of western American territories, however, differed in important respects from the typical subaltern subjects discussed by Victorian travelers. These white, upwardly mobile, and articulate Mormon plural wives attempted to influence observers' representations of them through a variety of narrative strategies. Both British women travel writers and Mormon women wrote from the margins of power and credibility, and, as interpreters of the Mormon scene, were concerned to establish their representational authority.
Abstract:
This study examines the effects of the source of whistle-blowing allegations, and of the allegations' potential to trigger concerns about reputation threats, on Chief Audit Executives' handling of whistle-blowing allegations. The participants for this study, 79 Chief Audit Executives (CAEs) and deputy CAEs, evaluated whistle-blowing reports related to financial reporting malfeasance that were received from either an anonymous or a non-anonymous source. The whistle-blowing reports alleged that the wrongdoing resulted from either the exploitation of substantial weaknesses in internal controls (suggesting higher responsibility of the CAE and internal audit) or the circumvention of internal controls (suggesting lower responsibility of the CAE or internal audit). Findings indicate that CAEs believe anonymous whistle-blowing reports to be significantly less credible than non-anonymous reports. Although CAEs assessed lower credibility ratings for the reports alleging wrongdoing by the exploitation of substantial weaknesses in internal controls, they allocated more resources to investigating these allegations.
Abstract:
Simulation is an important resource for researchers in diverse fields. However, many researchers have found flaws in the methodology of published simulation studies and have described the state of the simulation community as being in a crisis of credibility. This work describes the project of the Simulation Automation Framework for Experiments (SAFE), which addresses the issues that undermine credibility by automating the workflow in the execution of simulation studies. Automation reduces the number of opportunities for users to introduce error in the scientific process, thereby improving the credibility of the final results. Automation also eases the job of simulation users and allows them to focus on the design of models and the analysis of results rather than on the complexities of the workflow.
Abstract:
The Simulation Automation Framework for Experiments (SAFE) is a project created to raise the level of abstraction in network simulation tools and thereby address issues that undermine credibility. SAFE incorporates best practices in network simulation to automate the experimental process and to guide users in the development of sound scientific studies using the popular ns-3 network simulator. My contributions to the SAFE project are the design of two XML-based languages called NEDL (ns-3 Experiment Description Language) and NSTL (ns-3 Script Templating Language), which facilitate the description of experiments and network simulation models, respectively. The languages provide a foundation for the construction of better interfaces between the user and the ns-3 simulator. They also provide input to a mechanism which automates the execution of network simulation experiments. Additionally, this thesis demonstrates that one can develop tools to generate ns-3 scripts in Python or C++ automatically from NSTL model descriptions.
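The core idea of generating ns-3 scripts from model descriptions can be illustrated with a toy template. Note that the real NSTL is an XML-based language, so this Python sketch with a dictionary-based model description only illustrates template-driven script generation, not the actual NSTL syntax:

```python
from string import Template

# Hypothetical model description; the real NSTL is XML-based and far richer
model = {"n_nodes": 4, "data_rate": "5Mbps", "delay": "2ms"}

# Minimal ns-3-style C++ fragment generated from the description
template = Template("""\
NodeContainer nodes;
nodes.Create($n_nodes);
PointToPointHelper p2p;
p2p.SetDeviceAttribute("DataRate", StringValue("$data_rate"));
p2p.SetChannelAttribute("Delay", StringValue("$delay"));
""")

script = template.substitute(model)
print(script)
```

Separating the model description from the generated script is what lets one toolchain target both Python and C++ back ends, as the thesis describes.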
Abstract:
Objective To analyse the available evidence on cardiovascular safety of non-steroidal anti-inflammatory drugs. Design Network meta-analysis. Data sources Bibliographic databases, conference proceedings, study registers, the Food and Drug Administration website, reference lists of relevant articles, and reports citing relevant articles through the Science Citation Index (last update July 2009). Manufacturers of celecoxib and lumiracoxib provided additional data. Study selection All large scale randomised controlled trials comparing any non-steroidal anti-inflammatory drug with other non-steroidal anti-inflammatory drugs or placebo. Two investigators independently assessed eligibility. Data extraction The primary outcome was myocardial infarction. Secondary outcomes included stroke, death from cardiovascular disease, and death from any cause. Two investigators independently extracted data. Data synthesis 31 trials in 116 429 patients with more than 115 000 patient years of follow-up were included. Patients were allocated to naproxen, ibuprofen, diclofenac, celecoxib, etoricoxib, rofecoxib, lumiracoxib, or placebo. Compared with placebo, rofecoxib was associated with the highest risk of myocardial infarction (rate ratio 2.12, 95% credibility interval 1.26 to 3.56), followed by lumiracoxib (2.00, 0.71 to 6.21). Ibuprofen was associated with the highest risk of stroke (3.36, 1.00 to 11.6), followed by diclofenac (2.86, 1.09 to 8.36). Etoricoxib (4.07, 1.23 to 15.7) and diclofenac (3.98, 1.48 to 12.7) were associated with the highest risk of cardiovascular death. Conclusions Although uncertainty remains, little evidence exists to suggest that any of the investigated drugs are safe in cardiovascular terms. Naproxen seemed least harmful. Cardiovascular risk needs to be taken into account when prescribing any non-steroidal anti-inflammatory drug.
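Rate ratios like those reported above compare event rates between treatment arms. A minimal frequentist sketch with invented counts follows; the abstract's intervals are Bayesian credibility intervals from a network meta-analysis, which this simple log-scale approximation only mimics:

```python
import math

# Hypothetical arm-level data: events and patient-years of follow-up
events_drug, years_drug = 52, 12000
events_placebo, years_placebo = 25, 12200

# Ratio of myocardial infarction rates per patient-year
rate_ratio = (events_drug / years_drug) / (events_placebo / years_placebo)

# Approximate 95% interval on the log scale; the SE of a log rate ratio
# is roughly sqrt(1/events_a + 1/events_b)
se_log = math.sqrt(1 / events_drug + 1 / events_placebo)
lo = math.exp(math.log(rate_ratio) - 1.96 * se_log)
hi = math.exp(math.log(rate_ratio) + 1.96 * se_log)
print(f"rate ratio {rate_ratio:.2f} (95% interval {lo:.2f} to {hi:.2f})")
```

The network meta-analysis additionally combines direct and indirect comparisons across all trials while preserving randomisation; the sketch shows only a single pairwise contrast.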
Abstract:
SETTING: Kinshasa Province, Democratic Republic of Congo. OBJECTIVE: To identify and validate register-based indicators of acid-fast bacilli (AFB) microscopy quality. DESIGN: Selection of laboratories based on reliability and variation in routine smear rechecking results. Calculation of relative sensitivity (RS) compared to recheckers and its correlation coefficient (R) with candidate indicators based on a fully probabilistic analysis incorporating vague prior information using WinBUGS. RESULTS: The proportion of positive follow-up smears correlated well (median R 0.81, 95% credibility interval [CI] 0.58-0.93), and the proportion of first smear-positive cases fairly (median R 0.70, 95% CI 0.38-0.89) with RS. The proportions of both positive suspect and low positive case smears showed poor correlations (median R 0.27 and -0.22, respectively, with ranges including zero). CONCLUSIONS: The proportion of positives in follow-up smears is the most promising indicator of AFB smear sensitivity, while the proportion of positive suspects may be more indicative of accessibility and suspect selection. Both can be obtained from simple reports, and should be used for internal and external monitoring and as guidance for supervision. As proportion of low positive suspect smears and consistency within case series are more difficult to interpret, they should be used only on-site by laboratory professionals. All indicators require more research to define their optimal range in various settings.
Abstract:
Objective To compare the effectiveness and safety of three types of stents (sirolimus eluting, paclitaxel eluting, and bare metal) in people with and without diabetes mellitus. Design Collaborative network meta-analysis. Data sources Electronic databases (Medline, Embase, the Cochrane Central Register of Controlled Trials), relevant websites, reference lists, conference abstracts, reviews, book chapters, and proceedings of advisory panels for the US Food and Drug Administration. Manufacturers and trialists provided additional data. Review methods Network meta-analysis with a mixed treatment comparison method to combine direct within trial comparisons between stents with indirect evidence from other trials while maintaining randomisation. Overall mortality was the primary safety end point, target lesion revascularisation the effectiveness end point. Results 35 trials in 3852 people with diabetes and 10 947 people without diabetes contributed to the analyses. Inconsistency of the network was substantial for overall mortality in people with diabetes and seemed to be related to the duration of dual antiplatelet therapy (P value for interaction 0.02). Restricting the analysis to trials with a duration of dual antiplatelet therapy of six months or more, inconsistency was reduced considerably and hazard ratios for overall mortality were near one for all comparisons in people with diabetes: sirolimus eluting stents compared with bare metal stents 0.88 (95% credibility interval 0.55 to 1.30), paclitaxel eluting stents compared with bare metal stents 0.91 (0.60 to 1.38), and sirolimus eluting stents compared with paclitaxel eluting stents 0.95 (0.63 to 1.43). In people without diabetes, hazard ratios were unaffected by the restriction. Both drug eluting stents were associated with a decrease in revascularisation rates compared with bare metal stents in people both with and without diabetes. 
Conclusion In trials that specified a duration of dual antiplatelet therapy of six months or more after stent implantation, drug eluting stents seemed safe and effective in people both with and without diabetes.
Abstract:
BACKGROUND: High intercoder reliability (ICR) is required in qualitative content analysis for assuring quality when more than one coder is involved in data analysis. The literature is short of standardized procedures for ICR procedures in qualitative content analysis. OBJECTIVE: To illustrate how ICR assessment can be used to improve codings in qualitative content analysis. METHODS: Key steps of the procedure are presented, drawing on data from a qualitative study on patients' perspectives on low back pain. RESULTS: First, a coding scheme was developed using a comprehensive inductive and deductive approach. Second, 10 transcripts were coded independently by two researchers, and ICR was calculated. A resulting kappa value of .67 can be regarded as satisfactory to solid. Moreover, varying agreement rates helped to identify problems in the coding scheme. Low agreement rates, for instance, indicated that respective codes were defined too broadly and would need clarification. In a third step, the results of the analysis were used to improve the coding scheme, leading to consistent and high-quality results. DISCUSSION: The quantitative approach of ICR assessment is a viable instrument for quality assurance in qualitative content analysis. Kappa values and close inspection of agreement rates help to estimate and increase quality of codings. This approach facilitates good practice in coding and enhances credibility of analysis, especially when large samples are interviewed, different coders are involved, and quantitative results are presented.
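Cohen's kappa, the ICR statistic reported above, corrects observed agreement for the agreement expected by chance from each coder's marginal code frequencies. A self-contained sketch on invented codings (the categories and labels are hypothetical, not from the low back pain study):

```python
from collections import Counter

# Hypothetical codings of the same 12 text segments by two researchers
coder_a = ["pain", "cause", "pain", "coping", "cause", "pain",
           "coping", "pain", "cause", "coping", "pain", "cause"]
coder_b = ["pain", "cause", "coping", "coping", "cause", "pain",
           "coping", "pain", "pain", "coping", "pain", "cause"]

n = len(coder_a)
observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n

# Chance agreement from each coder's marginal code frequencies
freq_a, freq_b = Counter(coder_a), Counter(coder_b)
expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n ** 2

# Kappa: observed agreement beyond chance, scaled by the maximum possible
kappa = (observed - expected) / (1 - expected)
print(f"observed {observed:.2f}, expected {expected:.2f}, kappa {kappa:.2f}")
```

As in the study, inspecting which codes drive the disagreements (here, "pain" vs. "coping") points to categories whose definitions need sharpening.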
Abstract:
Corporate Social Responsibility (CSR) addresses the responsibility of companies for their impacts on society. The concept of strategic CSR is becoming increasingly mainstreamed in the forest industry, but there is little consensus on the definition and implementation of CSR. The objective of this research is to build knowledge on the characteristics of CSR and to provide insights on the emerging trend to increase the credibility and legitimacy of CSR through standardization. The study explores how the sustainability managers of European and North American forest companies perceive CSR and the recently released ISO 26000 guidance standard on social responsibility. The conclusions were drawn from an analysis of two data sets: multivariate survey data based on one subset of 30 European and 13 North American responses, and data obtained through in-depth interviewing of 10 sustainability managers who volunteered for an hour-long phone discussion about social responsibility practices at their company. The analysis concluded that there are no major differences in the characteristics of cross-Atlantic CSR. Hence, the results were consistent with previous research that suggests that CSR is a case- and company-specific concept. Regarding the components of CSR, environmental issues and organizational governance were key priorities in both regions. Consumer issues, human rights, and financial issues were among the least addressed categories. The study reveals that there are varying perceptions of the ISO 26000 guidance standard, both positive and negative. Moreover, sustainability managers of European and North American forest companies are still uncertain regarding the applicability of the ISO 26000 guidance standard to the forest industry. This study is among the first to provide a preliminary review of the practical implications of the ISO 26000 standard in the forest sector.
The results may be utilized by sustainability managers interested in the best practices on CSR, and also by a variety of forest industrial stakeholders interested in the practical outcomes of the long-lasting CSR debate.
Abstract:
Electrospinning (ES) can readily produce polymer fibers with cross-sectional dimensions ranging from tens of nanometers to tens of microns. Qualitative estimates of surface area coverage are rather intuitive. However, quantitative analytical and numerical methods for predicting surface coverage during ES have not been covered in sufficient depth to be applied in the design of novel materials, surfaces, and devices from ES fibers. This article presents a modeling approach to ES surface coverage where an analytical model is derived for use in quantitative prediction of surface coverage of ES fibers. The analytical model is used to predict the diameter of circular deposition areas of constant field strength and constant electrostatic force. Experimental results of polyvinyl alcohol fibers are reported and compared to numerical models to supplement the analytical model derived. The analytical model provides scientists and engineers a method for estimating surface area coverage. Both applied voltage and capillary-to-collection-plate separation are treated as independent variables for the analysis. The electric field produced by the ES process was modeled using COMSOL Multiphysics software to determine a correlation between the applied field strength and the size of the deposition area of the ES fibers. MATLAB scripts were utilized to combine the numerical COMSOL results with derived analytical equations. Experimental results reinforce the parametric trends produced via modeling and lend credibility to the use of modeling techniques for the qualitative prediction of surface area coverage from ES. (Copyright: 2014 American Vacuum Society.)
Abstract:
Monte Carlo simulation was used to evaluate properties of a simple Bayesian MCMC analysis of the random-effects model for single-group Cormack-Jolly-Seber capture-recapture data. The MCMC method is applied to the model via a logit link, so the parameters p and S are on a logit scale, where logit(S) is assumed to have, and is generated from, a normal distribution with mean μ and variance σ². Marginal prior distributions on logit(p) and μ were independent normal with mean zero and standard deviation 1.75 for logit(p) and 100 for μ; hence minimally informative. The marginal prior distribution on σ² was placed on τ² = 1/σ² as a gamma distribution with α = β = 0.001. The study design has 432 points spread over 5 factors: occasions (t), new releases per occasion (u), p, μ, and σ. At each design point 100 independent trials were completed (hence 43,200 trials in total), each with sample size n = 10,000 from the parameter posterior distribution. At 128 of these design points comparisons are made to previously reported results from a method-of-moments procedure. We looked at properties of point and interval inference on μ and σ based on the posterior mean, median, and mode and the equal-tailed 95% credibility interval. Bayesian inference did very well for the parameter μ, but under the conditions used here, MCMC inference performance for σ was mixed: poor for sparse data (i.e., only 7 occasions) or σ = 0, but good when there were sufficient data and σ was not small.
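The posterior summaries used in the study (mean, median, and equal-tailed 95% credibility interval) are simple functions of the MCMC draws. The sketch below applies them to a synthetic normal sample standing in for the actual posterior draws of μ; the distribution parameters are invented for illustration:

```python
import math
import random

random.seed(1)

# Synthetic stand-in for n = 10,000 MCMC posterior draws of mu (logit scale)
mu_draws = sorted(random.gauss(0.85, 0.20) for _ in range(10000))

# Point summaries used in the study: posterior mean and median
mean = sum(mu_draws) / len(mu_draws)
median = 0.5 * (mu_draws[4999] + mu_draws[5000])

# Equal-tailed 95% credibility interval: 2.5th and 97.5th percentiles
lo, hi = mu_draws[249], mu_draws[9749]

# Back-transform to the survival scale via the inverse logit
survival = 1 / (1 + math.exp(-median))
print(f"mu: mean {mean:.2f}, median {median:.2f}, "
      f"95% CrI ({lo:.2f}, {hi:.2f}); implied S = {survival:.2f}")
```

A real analysis would obtain `mu_draws` from an MCMC sampler rather than a normal generator, but the summary step is identical.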