896 results for Trustworthiness judgment
Abstract:
Increasing demand for marketing accountability requires an efficient allocation of marketing expenditures. Managers who know the elasticity of their marketing instruments can allocate their budgets optimally. Meta-analyses offer a basis for deriving benchmark elasticities for advertising. Although they provide a variety of valuable insights, a major shortcoming of prior meta-analyses is that they report only generalized results, as the disaggregated raw data are not made available. This problem is highly relevant because the coding of empirical studies involves, at least to a certain extent, subjective judgment. For this reason, meta-studies would be more valuable if researchers and practitioners had access to disaggregated data, allowing them to conduct further analyses of individual interest, e.g., at the product level. We are the first to address this gap by providing (1) an advertising elasticity database (AED) and (2) empirical generalizations about advertising elasticities and their determinants. Our findings indicate that the average current-period advertising elasticity is 0.09, which is substantially smaller than the value of 0.12 that was recently reported by Sethuraman, Tellis, and Briesch (2011). Furthermore, our meta-analysis reveals a wide range of significant determinants of advertising elasticity. For example, we find that advertising elasticities are higher (i) for hedonic and experience goods than for other goods; (ii) for new than for established goods; (iii) when advertising is measured in gross rating points (GRP) instead of absolute terms; and (iv) when the lagged dependent or lagged advertising variable is omitted.
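Benchmark elasticities like the 0.09 average reported above are typically pooled by inverse-variance weighting across study-level estimates. A minimal sketch of that arithmetic in Python, using hypothetical study-level estimates rather than values from the AED:

```python
import math

def weighted_mean_elasticity(estimates):
    """Inverse-variance weighted mean of elasticity estimates.

    estimates: list of (elasticity, standard_error) tuples.
    Returns (pooled_mean, pooled_standard_error).
    """
    weights = [1.0 / se ** 2 for _, se in estimates]
    total_weight = sum(weights)
    pooled = sum(w * e for (e, _), w in zip(estimates, weights)) / total_weight
    pooled_se = math.sqrt(1.0 / total_weight)
    return pooled, pooled_se

# Hypothetical study-level estimates (elasticity, standard error):
studies = [(0.05, 0.02), (0.12, 0.04), (0.09, 0.03)]
pooled, pooled_se = weighted_mean_elasticity(studies)
```

More precise estimates (smaller standard errors) dominate the pooled mean, which is why a disaggregated database matters: readers can re-weight or subset the studies themselves.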
Abstract:
On October 10, 2013, a Chamber of the European Court of Human Rights (ECtHR) handed down a judgment (Delfi v. Estonia) upholding Estonia's application of a law which, as interpreted, held a news portal liable for the defamatory comments of its users. Among the considerations that led the Court to find no violation of freedom of expression in this particular case were, above all, the inadequacy of the automatic screening system adopted by the website and the users' option to post their comments anonymously (i.e., without prior registration via email), which in the Court's view rendered the protection conferred on the injured party via direct legal action against the authors of the comments ineffective. Drawing on the implications of this (not yet final) ruling, this paper discusses several questions that the tension between the risk of wrongful use of information and the right to anonymity generates for the development of Internet communication, and examines the role that intermediary liability legislation can play in managing this tension.
Abstract:
New tools for editing digital images, music, and films have opened up new possibilities for wider circles of society to engage in 'artistic' activities of varying quality. User-generated content has produced a plethora of new forms of artistic expression. One type of user-generated content is the mashup. Mashups are compositions that combine existing works (often protected by copyright) and transform them into new original creations. The European legislative framework has not yet reacted to the copyright problems provoked by mashups. Neither under the US fair use doctrine nor under the strict corset of limitations and exceptions in Art 5(2)-(3) of the Copyright Directive (2001/29/EC) have mashups found room to develop in a safe legal environment. This contribution analyzes the current European legal framework and identifies its insufficiencies with regard to enabling a legal mashup culture. For comparison with the US fair use approach, in particular the parody defense, a recent CJEU judgment serves as an example. Finally, an attempt is made to suggest solutions for the European legislator, based on the policy proposals of the EU Commission's "Digital Agenda" and more recent policy documents (e.g., "On Content in the Digital Market", "Licenses for Europe"). In this context, a distinction is made between non-commercial mashup artists and the emerging commercial mashup scene.
Abstract:
BACKGROUND The objective of this study was to compare transtelephonic ECG every 2 days and serial 7-day Holter monitoring as two methods of follow-up for judging ablation success after atrial fibrillation (AF) catheter ablation. Patients with highly symptomatic AF are increasingly treated with catheter ablation. Several methods of follow-up have been described, and judgment of ablation success often relies on patients' symptoms. However, the optimal follow-up strategy for objectively detecting most AF recurrences remains unclear. METHODS Thirty patients with highly symptomatic AF were selected for circumferential pulmonary vein ablation. During follow-up, a transtelephonic ECG was transmitted once every 2 days for half a year. Additionally, a 7-day Holter was recorded before ablation, after ablation, and after 3 and 6 months, respectively. With both procedures, symptoms and actual rhythm were thoroughly correlated. RESULTS A total of 2,600 transtelephonic ECGs were collected, 216 of which showed AF; 25% of those episodes were asymptomatic. On Kaplan-Meier analysis, 45% of the patients with paroxysmal AF were still in continuous sinus rhythm after 6 months. Simulating a follow-up based on symptomatic recurrences only, that number would have increased to 70%. Using serial 7-day ECG, 113 Holter recordings with over 18,900 hours of ECG data were acquired. After 6 months, the percentage of patients classified as free from AF was 50%. Of the patients with recurrences, 30-40% were completely asymptomatic. The percentage of asymptomatic AF episodes increased stepwise from 11% before ablation to 53% at 6 months after ablation. CONCLUSIONS The success rate in terms of freedom from AF was 70% with symptom-only-based follow-up; using serial 7-day Holter it decreased to 50%, and with transtelephonic monitoring to 45%, respectively. Transtelephonic ECG and serial 7-day Holter were equally effective in objectively determining long-term success and detecting asymptomatic patients.
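The freedom-from-AF percentages quoted above come from the standard Kaplan-Meier product-limit estimator, which handles censored follow-up. A minimal self-contained sketch (the follow-up times and event flags below are invented for illustration, not patient data):

```python
def kaplan_meier(times, events):
    """Kaplan-Meier product-limit survival estimate.

    times:  follow-up time for each patient.
    events: 1 if recurrence observed at that time, 0 if censored.
    Returns a list of (time, survival_probability) at each event time.
    """
    data = sorted(zip(times, events))
    at_risk = len(data)
    survival = 1.0
    curve = []
    idx = 0
    while idx < len(data):
        t = data[idx][0]
        same_time = [e for tt, e in data if tt == t]  # all patients at time t
        deaths = sum(same_time)                       # events at time t
        if deaths > 0:
            survival *= (at_risk - deaths) / at_risk
            curve.append((t, survival))
        at_risk -= len(same_time)                     # events and censored leave
        idx += len(same_time)
    return curve

# Toy cohort: recurrence at t=1, censored at t=2, recurrences at t=3 and t=4.
curve = kaplan_meier([1, 2, 3, 4], [1, 0, 1, 1])  # → [(1, 0.75), (3, 0.375), (4, 0.0)]
```

Censored patients (event flag 0) reduce the risk set without lowering the survival estimate, which is how incomplete follow-up is incorporated.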
Abstract:
Stemmatology, or the reconstruction of the transmission history of texts, is a field that stands particularly to gain from digital methods. Many scholars already take stemmatic approaches that rely heavily on computational analysis of the collated text (e.g. Robinson and O'Hara 1996; Salemans 2000; Heikkilä 2005; Windram et al. 2008, among many others). Although there is great value in computationally assisted stemmatology, providing as it does a reproducible result and allowing access to the relevant methodological progress in related fields such as evolutionary biology, computational stemmatics is not without its critics. The current state of the art effectively forces scholars to choose between making a preconceived judgment of the significance of textual differences (the Lachmannian or neo-Lachmannian approach, and the weighted phylogenetic approach) and making no judgment at all (the unweighted phylogenetic approach). Some basis for judging the significance of variation is sorely needed for medieval text criticism in particular. By this, we mean that there is a need for an empirical statistical profile of the text-genealogical significance of the different sorts of variation in different sorts of medieval texts. The rules that apply to copies of Greek and Latin classics may not apply to copies of medieval Dutch story collections; the practices of copying authoritative texts such as the Bible will most likely have been different from the practices of copying the Lives of local saints and other commonly adapted texts. It is nevertheless imperative that we have a consistent, flexible, and analytically tractable model for capturing these phenomena of transmission. In this article, we present a computational model that captures most of the phenomena of text variation, and a method for analyzing one or more stemma hypotheses against the variation model. We apply this method to three 'artificial traditions' (i.e.
texts copied under laboratory conditions by scholars to study the properties of text variation) and four genuine medieval traditions whose transmission history is known or deduced in varying degrees. Although our findings are necessarily limited by the small number of texts at our disposal, we demonstrate here some of the wide variety of calculations that can be made using our model. Certain of our results call sharply into question the utility of excluding ‘trivial’ variation such as orthographic and spelling changes from stemmatic analysis.
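One common building block for testing a stemma hypothesis against observed variation, in the spirit of the analysis described above, is to check whether the witnesses that share a reading form a connected subgraph of the stemma, in which case the reading need only have arisen once. The sketch below assumes a hypothetical five-witness stemma and is not the authors' actual model:

```python
from collections import defaultdict, deque

def is_connected_on_tree(parent, group):
    """Return True if the witnesses in `group` form a connected subgraph
    of the stemma, i.e. the shared reading could stem from one ancestor.

    parent: dict mapping each witness to its parent (root maps to None).
    group:  set of witness names that share the reading.
    """
    # Build an undirected adjacency list from the parent links.
    adj = defaultdict(set)
    for child, par in parent.items():
        if par is not None:
            adj[child].add(par)
            adj[par].add(child)
    group = set(group)
    if not group:
        return True
    # Breadth-first search restricted to witnesses inside the group.
    start = next(iter(group))
    seen = {start}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for neighbor in adj[node]:
            if neighbor in group and neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return seen == group

# Hypothetical stemma: A is the archetype; B and C descend from A; D and E from B.
stemma = {"A": None, "B": "A", "C": "A", "D": "B", "E": "B"}
```

Under this stemma, a reading shared by {B, D, E} is consistent with a single origin, while one shared only by {C, D} would require the variant to arise (or be transmitted) more than once, a signal of its text-genealogical (in)significance.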
Abstract:
We investigated the role of horizontal body motion in the processing of numbers. We hypothesized that leftward self-motion leads to shifts in spatial attention and therefore facilitates the processing of small numbers and, vice versa, that rightward self-motion facilitates the processing of large numbers. Participants were displaced by means of a motion platform during a parity judgment task. We found a systematic influence of self-motion direction on number processing, suggesting that the processing of numbers is intertwined with the processing of self-motion perception. The results differed from known spatial-numerical compatibility effects in that self-motion exerted a differential influence on the inner and outer numbers of the given interval. The results highlight the involvement of sensory body-motion information in higher-order spatial cognition.
Abstract:
Purpose Skeletal-related events represent a substantial burden for patients with advanced cancer. Randomized, controlled studies suggested superiority of denosumab over zoledronic acid in the prevention of skeletal-related events in metastatic cancer patients, with a favorable safety profile. Experts gathered at the 2012 Skeletal Care Academy in Istanbul to bring forward practical recommendations, based on current evidence, for the use of denosumab in patients with bone metastases of lung cancer. Recommendations Based on current evidence, use of denosumab in lung cancer patients with confirmed bone metastases is recommended. It is important to note that clinical judgment should take into consideration the patient's general performance status, overall prognosis, and life expectancy. Currently, the adverse event profile reported for denosumab includes hypocalcemia and infrequent occurrence of osteonecrosis of the jaw. Therefore, routine calcium and vitamin D supplementation, along with dental examination prior to denosumab initiation, are recommended. There is no evidence of renal function impairment due to denosumab administration. At present, there is no rationale to discourage concomitant use of denosumab with surgery or radiotherapy.
Abstract:
We examined the relation between low self-esteem and depression using longitudinal data from a sample of 674 Mexican-origin early adolescents who were assessed at ages 10 and 12 years. Results supported the vulnerability model, which states that low self-esteem is a prospective risk factor for depression. Moreover, results suggested that the vulnerability effect of low self-esteem is driven, for the most part, by general evaluations of worth (i.e., global self-esteem), rather than by domain-specific evaluations of academic competence, physical appearance, and competence in peer relationships. The only domain-specific self-evaluation that showed a prospective effect on depression was honesty-trustworthiness. The vulnerability effect of low self-esteem held for male and female adolescents, for adolescents born in the United States versus Mexico, and across different levels of pubertal status. Finally, the vulnerability effect held when we controlled for several theoretically relevant third variables (i.e., social support, maternal depression, stressful events, and relational victimization) and for interactive effects between self-esteem and the third variables. The present study contributes to an emerging understanding of the link between self-esteem and depression and provides much needed data on the antecedents of depression in ethnic minority populations.
Abstract:
Distrust should automatically activate "thinking the opposite." Thus, according to Schul, Mayo, and Burnstein (2004), subjects detect antonyms of adjectives faster when confronted with untrustworthy rather than trustworthy faces. We conducted four experiments within their paradigm to test whether the response latency of detecting antonyms remains stable. We introduced the following changes: the paradigm was applied with and without an induction phase, the faces were culturally adapted, the stimuli were presented more in accordance with priming rules, and the canonicity of the antonyms was controlled. The results show that the response latency of detecting antonyms is difficult to predict. Even when faces are culturally adapted and priming rules are applied more strictly, response latency depends on whether the induction phase is applied and on the canonicity of the antonyms rather than on the trustworthiness of the faces. In general, this paradigm does not seem appropriate for testing "thinking the opposite" under distrust.
Drug-related emergency department visits by elderly patients presenting with non-specific complaints
Abstract:
BACKGROUND Since drug-related emergency department (ED) visits are common among older adults, the objectives of our study were to identify the frequency of drug-related problems (DRPs) among patients presenting to the ED with non-specific complaints (NSC), such as generalized weakness, and to evaluate the responsible drug classes. METHODS Delayed-type cross-sectional diagnostic study with a prospective 30-day follow-up in the ED of the University Hospital Basel, Switzerland. From May 2007 until April 2009, all non-trauma patients presenting to the ED with an Emergency Severity Index (ESI) of 2 or 3 were screened and included if they presented with non-specific complaints. After complete 30-day follow-up had been obtained, two outcome assessors reviewed all available information, judged whether the initial presentation was a DRP, and compared their judgment with the initial ED diagnosis. Acute morbidity ("serious condition") was allocated to individual cases according to predefined criteria. RESULTS The study population consisted of 633 patients with NSC. Median age was 81 years (IQR 72/87), and the mean Charlson comorbidity index was 2.5 (IQR 1/4). DRPs were identified in 77 of the 633 cases (12.2%). At the initial assessment, only 40% of the DRPs were correctly identified. Sixty-four of the 77 identified DRPs (83%) fulfilled the criteria for a "serious condition". Polypharmacy and certain drug classes (thiazides, antidepressants, benzodiazepines, anticonvulsants) were associated with DRPs. CONCLUSION Elderly patients with non-specific complaints need to be screened systematically for drug-related problems. TRIAL REGISTRATION ClinicalTrials.gov: NCT00920491.
Abstract:
BACKGROUND: The assessment of driving-relevant cognitive functions in older drivers is a difficult challenge as there is no clear-cut dividing line between normal cognition and impaired cognition and not all cognitive functions are equally important for driving. METHODS: To support decision makers, the Bern Cognitive Screening Test (BCST) for older drivers was designed. It is a computer-assisted test battery assessing visuo-spatial attention, executive functions, eye-hand coordination, distance judgment, and speed regulation. Here we compare the performance in BCST with the performance in paper and pencil cognitive screening tests and the performance in the driving simulator testing of 41 safe drivers (without crash history) and 14 unsafe drivers (with crash history). RESULTS: Safe drivers performed better than unsafe drivers in BCST (Mann-Whitney U test: U = 125.5; p = 0.001) and in the driving simulator (Student's t-test: t(44) = -2.64, p = 0.006). No clear group differences were found in paper and pencil screening tests (p > 0.05; ns). BCST was best at identifying older unsafe drivers (sensitivity 86%; specificity 61%) and was also better tolerated than the driving simulator test with fewer dropouts. CONCLUSIONS: BCST is more accurate than paper and pencil screening tests, and better tolerated than driving simulator testing when assessing driving-relevant cognition in older drivers.
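Sensitivity and specificity figures like those above follow directly from a 2x2 confusion table. The sketch below reconstructs illustrative cell counts consistent with the 14 unsafe and 41 safe drivers; the exact counts are an assumption for demonstration, not values reported in the abstract:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Illustrative counts: 12 of 14 unsafe drivers flagged (true positives),
# 25 of 41 safe drivers passed (true negatives). These cell counts are
# hypothetical reconstructions matching the reported 86% / 61%.
sens, spec = sensitivity_specificity(tp=12, fn=2, tn=25, fp=16)
```

The trade-off is typical of screening instruments: the BCST cut-off here favors catching unsafe drivers (high sensitivity) at the cost of flagging some safe drivers (moderate specificity).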
Abstract:
Background. There are two child-specific fracture classification systems for long bone fractures: the AO classification of pediatric long-bone fractures (PCCF) and the LiLa classification of pediatric fractures of long bones (LiLa classification). Neither is yet widely established, in contrast to the adult AO classification of long bone fractures. Methods. Over a period of 12 months, all long bone fractures in children were documented and classified according to the LiLa classification by experts and non-experts. Intraobserver and interobserver reliability were calculated according to Cohen (kappa). Results. A total of 408 fractures were classified. The intraobserver reliability for location in the skeletal and bone segment showed almost perfect agreement (K=0.91-0.95), as did the morphology (joint/shaft fracture) (K=0.87-0.93). Due to differing judgment of fracture displacement in the second classification round, the intraobserver reliability of the whole classification showed only moderate agreement (K=0.53-0.58). Interobserver reliability showed moderate agreement (K=0.55), often due to the low quality of the X-rays. Further differences arose from difficulties in assigning the precise transition from metaphysis to diaphysis. Conclusions. The LiLa classification is suitable, and in most cases user-friendly, for classifying long bone fractures in children. Reliability is higher than in established fracture-specific classifications and comparable to the AO classification of pediatric long bone fractures. Some errors were due to the low quality of the X-rays and some to difficulties in classifying the fractures themselves. Possible improvements include a more precise definition of the metaphysis and of the kind of displacement. Overall, the LiLa classification should still be considered as an alternative for classifying pediatric long bone fractures.
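The agreement values reported above are Cohen's kappa, which corrects the raw proportion of agreement between two raters for the agreement expected by chance. A minimal sketch, with hypothetical ratings in place of the study's fracture codes:

```python
def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two raters classifying the same items.

    kappa = (p_observed - p_expected) / (1 - p_expected), where p_expected
    is the chance agreement implied by each rater's marginal frequencies.
    """
    n = len(ratings_a)
    categories = set(ratings_a) | set(ratings_b)
    p_observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    p_expected = sum(
        (ratings_a.count(c) / n) * (ratings_b.count(c) / n)
        for c in categories
    )
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical example: two raters agree on 3 of 4 items.
kappa = cohens_kappa(["x", "x", "y", "y"], ["x", "x", "y", "x"])
```

A kappa near 0 means the raters agree no more than chance would predict; the conventional bands ("moderate" around 0.41-0.60, "almost perfect" above 0.81) are what the abstract's K values refer to.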
Abstract:
BACKGROUND: The purpose of this study was to investigate the scale recalibration construct of response shift and its relationship to glycemic control in children with diabetes. METHODS: At year 1, thirty-eight children with type 1 diabetes attending a diabetes summer camp participated. At baseline and post-camp they completed the Problem Areas in Diabetes (PAID) questionnaire. Post-camp, the PAID was also completed using the 'thentest' method, which requires a retrospective judgment about their baseline functioning. At year 2, fifteen of the original participants reported their HbA1c. RESULTS: PAID scores significantly decreased from baseline to post-camp. An even larger difference was found between thentest and post-camp scores, suggesting scale recalibration. There was a significant positive correlation between year 1 HbA1c and thentest scores. Partial correlation analysis between PAID thentest scores and year 2 HbA1c, controlling for year 1 HbA1c, showed that higher PAID thentest scores were associated with higher year 2 HbA1c. CONCLUSION: Results from this small sample suggest that children with diabetes do show scale recalibration, and that it may be related to glycemic control.
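A first-order partial correlation, as used above to relate PAID thentest scores to year 2 HbA1c while controlling for year 1 HbA1c, can be computed from the three pairwise Pearson correlations. A minimal sketch with made-up data (not the study's PAID or HbA1c values):

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def partial_correlation(x, y, z):
    """Correlation of x and y with the linear effect of z removed:
    r_xy.z = (r_xy - r_xz * r_yz) / sqrt((1 - r_xz^2) * (1 - r_yz^2))."""
    rxy, rxz, ryz = pearson(x, y), pearson(x, z), pearson(y, z)
    return (rxy - rxz * ryz) / math.sqrt((1 - rxz ** 2) * (1 - ryz ** 2))

# Made-up data: x and y are perfectly linearly related, so controlling
# for z should leave the partial correlation at 1.
pc = partial_correlation([1, 2, 3], [2, 4, 6], [1, 3, 2])
```

Controlling for the baseline variable is what lets the authors interpret the thentest association as predicting later glycemic control rather than merely reflecting it.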
Abstract:
Current diagnostic definitions of psychiatric disorders based on collections of symptoms encompass very heterogeneous populations and are thus likely to yield spurious results when exploring biological correlates of mental disturbances. It has been suggested that large studies of biomarkers across diagnostic entities may yield improved clinical information. Such a view is based on the concept of assessment as a collection of symptoms devoid of any clinical judgment and interpretation. Yet, important advances have been made in recent years in clinimetrics, the science of clinical judgment. The current clinical taxonomy in psychiatry, which emphasizes reliability at the cost of clinical validity, does not include effects of comorbid conditions, timing of phenomena, rate of progression of an illness, responses to previous treatments, and other clinical distinctions that demarcate major prognostic and therapeutic differences among patients who otherwise seem to be deceptively similar since they share the same psychiatric diagnosis. Clinimetrics may provide the missing link between clinical states and biomarkers in psychiatry, building pathophysiological bridges from clinical manifestations to their neurobiological counterparts.
Abstract:
Transcatheter aortic valve implantation (TAVI) is a novel therapy which has transformed the management of inoperable patients presenting with symptomatic severe aortic stenosis (AS). It is also a proven and less invasive alternative therapeutic option for high-risk symptomatic patients presenting with severe AS who are otherwise eligible for surgical aortic valve replacement. Patient age is not strictly a limitation for TAVI, but since this procedure is currently restricted to high-risk and inoperable patients, it follows that most patients selected for TAVI are at an advanced age. Patient frailty and co-morbidities need to be assessed and a clinical judgment made on whether the patient will gain a measurable improvement in their quality of life. Risk stratification has assumed a central role in selecting suitable patients, and surgical risk algorithms have proven helpful in this regard. However, limitations exist with these risk models, which must be understood in the context of TAVI. When making final treatment decisions, it is essential that a collaborative multidisciplinary "heart team" be involved, and this is stressed in the most recent guidelines of the European Society of Cardiology. Choosing the best procedure is contingent upon anatomical feasibility, and multimodality imaging has emerged as an integral component of the pre-interventional screening process in this regard. The transfemoral route is now considered the default approach, although vascular complications remain a concern. A minimal vessel diameter of 6 mm is required for currently commercially available vascular introducer sheaths. Several alternative access routes are available to choose from when confronted with difficult iliofemoral anatomy such as severe peripheral vascular disease or diffuse circumferential vessel calcification.
The degree of aortic valve leaflet and annular calcification also needs to be assessed as the latter is a risk factor for post-procedural paravalvular aortic regurgitation. The ultimate goal of patient selection is to achieve the highest procedural success rate while minimizing complications and to choose patients most likely to derive tangible benefit from this procedure.