642 results for Credibility
Abstract:
This paper asks how World Trade Organization (WTO) panels and the Appellate Body (AB) take public international law (PIL) into account when interpreting WTO rules as a part of international economic law (IEL). The splendid isolation of the latter is not new; indeed, it was intended by the negotiators of the Dispute Settlement Understanding (DSU). At the same time, the Vienna Convention on the Law of Treaties (VCLT) is quite clear when it provides the general rules and the supplementary means of treaty interpretation. Despite such mandatory guidance, WTO adjudicators (when given a choice and assuming they see the conflict) prefer deference to WTO law over deference to Vienna and take a dogmatic way out of interpretation quandaries. The AB and panels make abundant reference to Vienna, though less so to substantive PIL. Oftentimes, however, they do so simply in order to buttress their findings of violations of WTO rules. Perhaps tellingly, none of the reports in EC – Seals contains even a single mention of the VCLT, despite numerous references to international standards addressing indigenous rights and animal welfare. In the longer term, and absent a breakthrough on the negotiation front, this pattern of carefully eschewing international treaty law and using PIL merely for the sake of convenience could have serious consequences for the credibility and acceptance of the multilateral trading system. Following the adage 'negotiate or litigate', recourse to WTO dispute settlement increases when governments are less ready to make treaty commitments commensurate with the challenges of globalisation. This is true even for 'societal choice' cases on the margins of classic trade disputes. We will argue here that it is precisely for cases such as these that the VCLT and PIL should be used more systematically by panels and the AB.
Failing that, instead of building bridges for more coherent international regulation, WTO adjudicators could burn the very bridges which the DSU's interpretive margin leaves open for accomplishing their job, which is to find a 'positive solution'. Worse, judicial incoherence could return to WTO dispute settlement like a boomerang and damage the credibility, and thus the level of acceptance, of the multilateral trading system per se.
Abstract:
OBJECTIVE To investigate the planning of subgroup analyses in protocols of randomised controlled trials and the agreement with corresponding full journal publications. DESIGN Cohort of protocols of randomised controlled trials and subsequent full journal publications. SETTING Six research ethics committees in Switzerland, Germany, and Canada. DATA SOURCES 894 protocols of randomised controlled trials involving patients approved by participating research ethics committees between 2000 and 2003 and 515 subsequent full journal publications. RESULTS Of 894 protocols of randomised controlled trials, 252 (28.2%) included one or more planned subgroup analyses. Of those, 17 (6.7%) provided a clear hypothesis for at least one subgroup analysis, 10 (4.0%) anticipated the direction of a subgroup effect, and 87 (34.5%) planned a statistical test for interaction. Industry sponsored trials more often planned subgroup analyses compared with investigator sponsored trials (195/551 (35.4%) v 57/343 (16.6%), P<0.001). Of 515 identified journal publications, 246 (47.8%) reported at least one subgroup analysis. In 81 (32.9%) of the 246 publications reporting subgroup analyses, authors stated that subgroup analyses were prespecified, but this was not supported by 28 (34.6%) corresponding protocols. In 86 publications, authors claimed a subgroup effect, but only 36 (41.9%) corresponding protocols reported a planned subgroup analysis. CONCLUSIONS Subgroup analyses are insufficiently described in the protocols of randomised controlled trials submitted to research ethics committees, and investigators rarely specify the anticipated direction of subgroup effects. More than one third of statements in publications of randomised controlled trials about subgroup prespecification had no documentation in the corresponding protocols. Definitive judgments regarding credibility of claimed subgroup effects are not possible without access to protocols and analysis plans of randomised controlled trials.
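The "statistical test for interaction" mentioned above can be illustrated with a minimal sketch of the standard z-test comparing two subgroup effect estimates; the numbers below are hypothetical and not drawn from the study:

```python
import math

def interaction_z_test(est1, se1, est2, se2):
    """z-test for interaction between two subgroup effect
    estimates (e.g. log odds ratios) with standard errors.
    Returns (z, two-sided p-value)."""
    diff = est1 - est2
    se_diff = math.sqrt(se1**2 + se2**2)
    z = diff / se_diff
    # two-sided p-value from the standard normal CDF via erf
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p

# Hypothetical subgroup estimates (log odds ratios and SEs)
z, p = interaction_z_test(-0.45, 0.18, -0.05, 0.20)
print(f"z = {z:.2f}, p = {p:.3f}")
```

Prespecifying this test in the protocol (with the anticipated direction of the interaction) is what distinguishes a credible subgroup claim from a post hoc finding.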
Abstract:
Although brand authenticity is gaining increasing interest in academia and managerial practice, empirical studies on its contribution to the branding literature are still limited. The authors therefore conceptually and empirically examine the emergence and outcomes of perceived brand authenticity (PBA). A prior multi-phase scale development process resulted in a 17-item PBA scale to measure its four dimensions of credibility, integrity, symbolism, and longevity. Brand authenticity perceptions are influenced by indexical, existential, and iconic cues, whereby the latter’s influence is moderated by consumers’ level of marketing skepticism. Further, PBA increases emotional brand attachment. This relationship is particularly strong for consumers with a high level of self-authenticity. In addition, PBA effects are stronger in a North American market context compared to a European context.
Abstract:
This article develops an integrative framework of the concept of perceived brand authenticity (PBA) and sheds light on PBA’s (1) measurement, (2) drivers, (3) consequences, as well as (4) an underlying process of its effects and (5) boundary conditions. A multi-phase scale development process resulted in a 15-item PBA scale to measure its four dimensions of credibility, integrity, symbolism, and continuity. PBA is influenced by indexical, existential, and iconic cues, whereby the latter’s influence is moderated by consumers’ level of marketing skepticism. Results also suggest that PBA drives brand choice likelihood through self-congruence for consumers high in self-authenticity.
Abstract:
Although brand authenticity is gaining increasing interest in consumer behavior research and managerial practice, literature on its measurement and contribution to branding theory is still limited. This article develops an integrative framework of the concept of brand authenticity and reports the development and validation of a scale measuring consumers' perceived brand authenticity (PBA). A multi-phase scale development process resulted in a 15-item PBA scale measuring four dimensions: credibility, integrity, symbolism, and continuity. This scale is reliable across different brands and cultural contexts. We find that brand authenticity perceptions are influenced by indexical, existential, and iconic cues, whereby some of the latter's influence is moderated by consumers' level of marketing skepticism. Results also suggest that PBA increases emotional brand attachment and word-of-mouth, and that it drives brand choice likelihood through self-congruence for consumers high in self-authenticity.
Abstract:
A search is conducted for non-resonant new phenomena in dielectron and dimuon final states, originating from either contact interactions or large extra spatial dimensions. The LHC 2012 proton–proton collision dataset recorded by the ATLAS detector is used, corresponding to 20 fb⁻¹ at √s = 8 TeV. The dilepton invariant mass spectrum is a discriminating variable in both searches, with the contact interaction search additionally utilizing the dilepton forward-backward asymmetry. No significant deviations from the Standard Model expectation are observed. Lower limits are set on the ℓℓqq contact interaction scale Λ between 15.4 TeV and 26.3 TeV, at the 95% credibility level. For large extra spatial dimensions, lower limits are set on the string scale MS between 3.2 TeV and 5.0 TeV.
Abstract:
The Mediterranean region has been identified as a global warming hotspot, where future climate impacts are expected to have significant consequences on societal and ecosystem well-being. To put ongoing trends of summer climate into the context of past natural variability, we reconstructed climate from maximum latewood density (MXD) measurements of Pinus heldreichii (1521–2010) and latewood width (LWW) of Pinus nigra (1617–2010) on Mt. Olympus, Greece. Previous research in the northeastern Mediterranean has primarily focused on inter-annual variability, omitting any low-frequency trends. The present study utilizes methods capable of retaining climatically driven long-term behavior of tree growth. The LWW chronology corresponds closely to early summer moisture variability (May–July, r = 0.65, p < 0.001, 1950–2010), whereas the MXD chronology relates mainly to late summer warmth (July–September, r = 0.64, p < 0.001; 1899–2010). The chronologies show opposing patterns of decadal variability over the twentieth century (r = −0.68, p < 0.001) and confirm the importance of the summer North Atlantic Oscillation (sNAO) for summer climate in the northeastern Mediterranean, with positive sNAO phases inducing cold anomalies and enhanced cloudiness and precipitation. The combined reconstructions document the late twentieth to early twenty-first century warming and drying trend, but indicate generally drier early summer and cooler late summer conditions in the period ~1700–1900 CE. Our findings suggest a potential decoupling between twentieth century atmospheric circulation patterns and pre-industrial climate variability. Furthermore, the range of natural climate variability stretches beyond the summer moisture availability observed in recent decades and thus lends credibility to the significant drying trends projected for this region in current Earth System Model simulations.
Abstract:
When firms contribute to open source projects, they in fact invest in public goods which may be used by everyone, even by their competitors. This seemingly paradoxical behavior can be explained by the model of private-collective innovation, in which private investors participate in collective action. Previous literature has shown that companies benefit through the production process, which provides them with unique incentives such as learning and reputation effects. By contributing to open source projects, firms are able to build a network of external individuals and organizations participating in the creation and development of the software. As will be shown in this doctoral dissertation, firm-sponsored communities involve the formation of interorganizational relationships which eventually may lead to a source of sustained competitive advantage. However, managing a largely independent open source community is a challenging balancing act between exertion of control to appropriate value creation, and openness in order to gain and preserve credibility and motivate external contributions. Therefore, this dissertation, consisting of an introductory chapter and three separate research papers, analyzes characteristics of firm-driven open source communities, finds reasons why and mechanisms by which companies facilitate the creation of such networks, and shows how firms can benefit most from their communities.
Abstract:
OBJECTIVES The main objective was to assess the credibility of the evidence using Grades of Recommendation, Assessment, Development, and Evaluation (GRADE) in oral health systematic reviews on the Cochrane Database of Systematic Reviews (CDSR) and elsewhere. STUDY DESIGN AND SETTING Systematic reviews or meta-analyses (January 2008-December 2013) from 14 high impact general dental and specialty dental journals and the Cochrane Database of Systematic Reviews were screened for meta-analyses. Data were collected at the systematic review, meta-analysis, and trial level. Two reviewers applied and agreed on the GRADE rating for the selected meta-analyses. RESULTS From the 510 systematic reviews initially identified, 91 reviews (41 Cochrane and 50 non-Cochrane) were eligible for inclusion. The quality of evidence was high in 2% and moderate in 18% of the included meta-analyses, with no difference between Cochrane and non-Cochrane reviews, journal impact factor, or year of publication. The most common domains prompting downgrading of the evidence were study limitations (risk of bias) and imprecision (risk of play of chance). CONCLUSION The quality of the evidence in oral health assessed using GRADE is predominantly low or very low, suggesting a pressing need for more randomised clinical trials and other studies of higher quality in order to inform clinical decisions, thereby reducing the risk of instituting potentially ineffective and/or harmful therapies.
Abstract:
PURPOSE To assess the visual performance of Swiss hand surgeons in an environment similar to their workplace. The influence of Galilean (lenses only) and Keplerian loupes (lenses and prisms), the surgeon's age, and the credibility of a self-assessment of his or her own optical performance were evaluated. METHODS Sixty-three hand surgeons between 29 and 68 years of age with 70 loupes were included in the study (Galilean n = 35, Keplerian n = 35). The surgeons' visual performance was self-assessed on a modified visual analog scale and objectively measured with miniaturized visual tests in a simulated clinical setting. We evaluated the influence of the optical device by comparing Galilean and Keplerian loupes, and the influence of the surgeon's age by comparing 2 subgroups: < 40 years and ≥ 40 years. RESULTS The correlation between self-assessment and objective visual performance was weak, with a Spearman rank correlation coefficient of 0.25. The near visual acuity with habitual optical aids showed considerable variability, with a range of 300% in the dimension of the smallest detected structure. The near visual acuity was significantly lower in the older group (≥ 40 years) than in the younger group (< 40 years) with both Galilean and Keplerian loupes. Keplerian loupes allowed a significantly higher visual performance than Galilean loupes. Surgeons 40 years or older using Keplerian loupes had a visual acuity similar to that of surgeons younger than 40 years with Galilean loupes. CONCLUSIONS The magnified near vision of hand surgeons showed important individual variability. Self-assessment was not a valuable instrument for surgeons to estimate their own near vision. Hand surgeons older than 40 years should use higher magnification loupes. TYPE OF STUDY/LEVEL OF EVIDENCE Diagnostic III.
Abstract:
Missing outcome data are common in clinical trials: despite a well-designed study protocol, some of the randomized participants may leave the trial early without providing some or all of the data, or may be excluded after randomization. Premature discontinuation causes loss of information, potentially resulting in attrition bias and leading to problems in the interpretation of trial findings. The causes of information loss in a trial, known as mechanisms of missingness, may influence the credibility of the trial results. Analysis of trials with missing outcome data should ideally be handled with intention to treat (ITT) rather than per protocol (PP) analysis. However, true ITT analysis requires appropriate assumptions and imputation of missing data. Using a worked example from a published dental study, we highlight the key issues associated with missing outcome data in clinical trials, describe the most recognized approaches to handling missing outcome data, and explain the principles of ITT and PP analysis.
Abstract:
BACKGROUND Non-steroidal anti-inflammatory drugs (NSAIDs) are the backbone of osteoarthritis pain management. We aimed to assess the effectiveness of different preparations and doses of NSAIDs on osteoarthritis pain in a network meta-analysis. METHODS For this network meta-analysis, we considered randomised trials comparing any of the following interventions: NSAIDs, paracetamol, or placebo, for the treatment of osteoarthritis pain. We searched the Cochrane Central Register of Controlled Trials (CENTRAL) and the reference lists of relevant articles for trials published between Jan 1, 1980, and Feb 24, 2015, with at least 100 patients per group. The prespecified primary and secondary outcomes were pain and physical function, and were extracted in duplicate for up to seven timepoints after the start of treatment. We used an extension of multivariable Bayesian random effects models for mixed multiple treatment comparisons with a random effect at the level of trials. For the primary analysis, a random walk of first order was used to account for multiple follow-up outcome data within a trial. Preparations that used different total daily doses were considered separately in the analysis. To assess a potential dose-response relation, we used preparation-specific covariates assuming linearity on log relative dose. FINDINGS We identified 8973 manuscripts from our search, of which 74 randomised trials with a total of 58 556 patients were included in this analysis. 23 nodes, covering seven different NSAIDs or paracetamol at specific daily doses of administration, plus placebo, were considered. All preparations, irrespective of dose, improved point estimates of pain symptoms when compared with placebo.
For six interventions (diclofenac 150 mg/day, etoricoxib 30 mg/day, 60 mg/day, and 90 mg/day, and rofecoxib 25 mg/day and 50 mg/day), the probability that the difference from placebo is at or below a prespecified minimum clinically important effect for pain reduction (effect size [ES] -0·37) was at least 95%. Among maximally approved daily doses, diclofenac 150 mg/day (ES -0·57, 95% credibility interval [CrI] -0·69 to -0·46) and etoricoxib 60 mg/day (ES -0·58, -0·73 to -0·43) had the highest probability of being the best intervention, both with 100% probability of reaching the minimum clinically important difference. Treatment effects increased as drug dose increased, but corresponding tests for a linear dose effect were significant only for celecoxib (p=0·030), diclofenac (p=0·031), and naproxen (p=0·026). We found no evidence that treatment effects varied over the duration of treatment. Model fit was good, and between-trial heterogeneity and inconsistency were low in all analyses. All trials were deemed to have a low risk of bias for blinding of patients. Effect estimates did not change in sensitivity analyses with two additional statistical models and accounting for methodological quality criteria in meta-regression analysis. INTERPRETATION On the basis of the available data, we see no role for single-agent paracetamol for the treatment of patients with osteoarthritis, irrespective of dose. We provide sound evidence that diclofenac 150 mg/day is the most effective NSAID available at present, in terms of improving both pain and function. Nevertheless, in view of the safety profile of these drugs, physicians need to consider our results together with all known safety information when selecting the preparation and dose for individual patients. FUNDING Swiss National Science Foundation (grant number 405340-104762) and Arco Foundation, Switzerland.
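The quantities reported above (a 95% credibility interval and the probability of reaching the minimum clinically important difference) are simple summaries of a posterior distribution. A minimal sketch, using an illustrative normal approximation to the posterior rather than the paper's full Bayesian network meta-analysis, and hypothetical parameter values:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical posterior draws for a treatment's standardised effect
# size versus placebo (normal approximation for illustration only).
posterior_es = rng.normal(loc=-0.57, scale=0.06, size=100_000)

mcid = -0.37  # prespecified minimum clinically important difference

# Probability the effect is at or below the MCID
prob_clinically_important = float(np.mean(posterior_es <= mcid))

# 95% credibility interval from the posterior quantiles
cri_low, cri_high = np.quantile(posterior_es, [0.025, 0.975])

print(f"P(ES <= {mcid}) = {prob_clinically_important:.3f}")
print(f"95% CrI: [{cri_low:.2f}, {cri_high:.2f}]")
```

Unlike a frequentist confidence interval, the credibility interval supports direct probability statements of the kind made in the abstract ("at least 95% probability that the difference from placebo is at or below the MCID").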
Abstract:
Purpose. To investigate and understand the illness experiences of patients and their family members living with congestive heart failure (CHF). Design. Focused ethnographic design. Setting. One outpatient cardiology clinic, two outpatient heart failure clinics, and informants' homes in a large metropolitan city located in southeast Texas. Sample. A purposeful sampling technique was used to select a sample of 28 informants. The following somewhat overlapping sampling strategies were used to implement the purposeful method: criterion; typical case; operational construct; maximum variation; atypical case; opportunistic; and confirming and disconfirming case sampling. Methods. Naturalistic inquiry consisted of data collected from observations, participant observations, and interviews. Open-ended semi-structured illness narrative interviews included questions designed to elicit informants' explanatory models of the illness, which served as a synthesizing framework for the analysis. A thematic analysis process was conducted through domain analysis and construction of data into themes and sub-themes. Credibility was enhanced through informant verification and a process of peer debriefing. Findings. Thematic analysis revealed that patients and their family members living with CHF experience a process of disruption, incoherence, and reconciling. Reconciling emerged as the salient experience described by informants. Sub-themes of reconciling that emerged from the analysis included: struggling; participating in partnerships; finding purpose and meaning in the illness experience; and surrendering. Conclusions. Understanding the experiences described in this study allows for a better understanding of living with CHF in everyday life. Findings from this study suggest that the experience of living with CHF entails more than the medical story can tell. It is important for nurses and other providers to understand the experiences of this population in order to develop appropriate treatment plans in a successful practitioner-patient partnership.
Abstract:
Economic models of crime and punishment implicitly assume that the government can credibly commit to the fines, sentences, and apprehension rates it has chosen. We study the government's problem when credibility is an issue. We find that several of the standard predictions of the economic model of crime and punishment are robust to the commitment problem, but that credibility may in some cases result in lower apprehension rates, and hence a higher crime rate, compared with the static version of the model.