932 results for "due credibility"


Relevance: 70.00%

Publisher:

Abstract:

Peer reviewed

Relevance: 30.00%

Publisher:

Abstract:

Introduction. Erroneous answers in studies on the misinformation effect (ME) can be reduced in different ways. In some studies, the ME was reduced by source-monitoring (SM) questions, warnings, or low credibility of the source of the post-event information (PEI); the results are inconsistent, however. Of course, a participant can deliberately refrain from reporting a critical item only if the discrepancy between the original event and the PEI is distinguishable in principle. We were interested in the extent to which the influence of erroneous information on a central aspect of the original event can be reduced by different means, applied singly or in combination.

Method. Using a 2 (credibility: high vs. low) × 2 (warning: present vs. absent) between-subjects design with an additional control group that received neither misinformation nor a warning (N = 116), we examined the above-mentioned factors' influence on the ME. Participants viewed a short video of a robbery. The critical item suggested in the PEI was that the perpetrator kicked the victim (which did not actually happen). The memory test consisted of a two-alternative forced-choice recognition test followed by an SM test.

Results. To our surprise, neither a main effect of erroneous PEI nor a main effect of credibility was found. The error rates for the critical item in the control group (50%) and in the high- (65%) and low- (52%) credibility conditions without warning did not differ significantly. A warning about possible misleading information in the PEI significantly reduced the influence of misinformation in both credibility conditions, by 32-37%. Using an SM question also significantly reduced the error rate, but only in the high-credibility, no-warning condition.

Conclusion and Future Research. Our results show that, in contrast to a warning or the use of an SM question, low source credibility did not reduce the ME. The most striking finding, however, was the absence of a main effect of erroneous PEI. Given the high error rate in the control group, we suspect that the wrong answers were caused either by the response format (recognition test) or by autosuggestion, possibly promoted by the high schema consistency of the critical item. First results of a post-study in which we used open-ended questions before the recognition test support the former assumption. Results of a replication of this study using open-ended questions prior to the recognition test will be available by June.
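For readers unfamiliar with how such condition-wise error rates are compared, the sketch below contrasts two conditions with a chi-square test of independence. The cell counts are hypothetical, invented only to mirror the reported 65% and 50% error rates at an assumed per-group size; this illustrates the statistical comparison in general, not the authors' actual analysis.

```python
# A minimal sketch of comparing error rates between two conditions with a
# chi-square test of independence. Cell counts are hypothetical, scaled from
# the reported error rates; this is not the authors' actual analysis.
from scipy.stats import chi2_contingency

# Rows: condition; columns: [errors, correct]. n = 20 per group is assumed.
table = [
    [13, 7],    # high credibility, no warning: 65% errors
    [10, 10],   # control: 50% errors
]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
```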

Relevance: 30.00%

Publisher:

Abstract:

In the aftermath of the crisis, new instruments of economic governance have been adopted at the EU level. Until recently, these have been strongly dominated by what I take to be the ECFIN coalition. At least since 2011, however, this coalition's supremacy has been challenged by the competing (EPSCO) coalition's willingness to rebalance economic governance so that social concerns are better taken into account. Hence, drawing on the agenda-setting literature in the EU context, this working paper retraces the process that put the issue of the social dimension of the EMU onto the EU political agenda. Three hypotheses are advanced, concerning the rise of this issue, the strategies employed by agenda-setters, and the policy subsystem of economic governance. First, this study shows that interest in the issue was gradually fostered 'from below', at the level of the European Parliament and the European Commission. Second, owing to its 'high politics' nature, the issue could only be initiated 'from above' (European Council) and then expanded to lower levels of decision-making (Commission). Specifically, DG EMPL managed to attract attention to the issue, and to build its credibility in dealing with it, by strategically framing the issue and directing it towards the EPSCO venue. Finally, I analyze the outcome of this agenda-setting process by assessing the extent to which the two new social scoreboards that form part of this social dimension were taken into account during the 2014 European Semester. The result of this analysis is that the new economic governance has not been genuinely rebalanced, insofar as its dominant policy core remains that of the ECFIN coalition.

Relevance: 30.00%

Publisher:

Abstract:

Credible endorsers are often used in advertisements. However, there is conflicting evidence on the role source credibility plays in persuasion. Early research found that source credibility affects persuasion when subjects pay attention to the communication. Other research indicates that a credible source enhances persuasion when people do not scrutinize the message claims carefully and thoroughly, the opposite of what early research indicated. More recent research indicates that source credibility may affect persuasion when people scrutinize the message claims, but limits this effect to advertisements with certain types of claims (i.e., ambiguous or extreme claims). This dissertation proposes that source credibility might play a broader role in persuasion than the empirical literature suggests. Source credibility may affect persuasion at low levels of involvement by serving as a peripheral cue. It may also affect persuasion at high involvement by serving as an argument or by biasing elaboration.

Each of these possibilities was explored in an experiment using a 3 (source credibility) × 2 (type of claim) × 2 (level of involvement) full factorial design. The sample consisted of 180 undergraduate students from a major southeastern university.

Results indicated that, at high levels of involvement, the credibility of the source affected persuasion; this effect was due to source credibility acting as an argument within the advertisement. The study did not find that source credibility affected persuasion by biasing elaboration at high involvement, or by serving as a peripheral cue at low involvement.

Relevance: 20.00%

Publisher:

Abstract:

PURPOSE: To explore the effects of glaucoma and aging on low-spatial-frequency contrast sensitivity by using tests designed to assess performance of either the magnocellular (M) or parvocellular (P) visual pathways. METHODS: Contrast sensitivity was measured for spatial frequencies of 0.25 to 2 cyc/deg by using a published steady- and pulsed-pedestal approach. Sixteen patients with glaucoma and 16 approximately age-matched control subjects participated. Patients with glaucoma were tested foveally and at two midperipheral locations: (1) an area of early visual field loss, and (2) an area of normal visual field. Control subjects were assessed in matched locations. An additional group of 12 younger control subjects (aged 20-35 years) was also tested. RESULTS: Older control subjects demonstrated reduced sensitivity relative to the younger group for both the steady-pedestal (presumed M) and pulsed-pedestal (presumed P) conditions. Sensitivity was reduced foveally and in the midperiphery across the spatial frequency range. In the area of early visual field loss, the glaucoma group demonstrated further sensitivity reduction relative to older control subjects across the spatial frequency range for both the steady- and pulsed-pedestal tasks. Sensitivity was also reduced in the midperipheral location of "normal" visual field for the pulsed condition. CONCLUSIONS: Normal aging results in a reduction of contrast sensitivity for the low-spatial-frequency-sensitive components of both the M and P pathways. Glaucoma results in a further reduction of sensitivity that is not selective for M or P function. The low-spatial-frequency-sensitive channels of both pathways, which are presumably mediated by cells with larger receptive fields, are approximately equivalently impaired in early glaucoma.

Relevance: 20.00%

Publisher:

Abstract:

Channel measurements and simulations have been carried out to observe the effects of pedestrian movement on multiple-input multiple-output orthogonal frequency division multiplexing (MIMO-OFDM) channel capacity. An in-house-built MIMO-OFDM packet transmission demonstrator equipped with four transmitters and four receivers was used to perform channel measurements at 5.2 GHz. Variations in the channel capacity dynamic range were analysed for 1 to 10 pedestrians and different antenna arrays (2 × 2, 3 × 3 and 4 × 4). Results show a predicted 5.5 bits/s/Hz and a measured 1.5 bits/s/Hz increase in the capacity dynamic range with the number of pedestrians and the number of antennas in the transmitter and receiver arrays.
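Capacity figures of this kind are conventionally derived from the Shannon capacity of the measured channel matrices. As a rough illustration (not the demonstrator's actual processing chain), the sketch below computes the equal-power MIMO capacity per OFDM subcarrier and the resulting dynamic range over simulated snapshots; the 20 dB SNR, 48 subcarriers, snapshot count, and i.i.d. Rayleigh-fading model are all assumptions standing in for the pedestrian-perturbed measured channels.

```python
# A minimal sketch of MIMO-OFDM capacity and its dynamic range, assuming
# equal power allocation and i.i.d. Rayleigh fading (not measured data).
import numpy as np

def mimo_capacity(H, snr_linear):
    """Shannon capacity (bits/s/Hz) of one nr x nt channel matrix H."""
    nr, nt = H.shape
    gram = (snr_linear / nt) * (H @ H.conj().T)
    return float(np.real(np.log2(np.linalg.det(np.eye(nr) + gram))))

def ofdm_capacity(channels, snr_linear):
    """Average capacity over all OFDM subcarriers for one snapshot."""
    return np.mean([mimo_capacity(H, snr_linear) for H in channels])

rng = np.random.default_rng(0)
snr = 10 ** (20 / 10)                 # assumed 20 dB SNR
n_subcarriers, n_snapshots = 48, 200  # assumed OFDM and measurement sizes

for n in (2, 3, 4):                   # 2x2, 3x3, 4x4 arrays, as measured
    caps = []
    for _ in range(n_snapshots):      # snapshots emulate pedestrian fading
        Hs = (rng.standard_normal((n_subcarriers, n, n))
              + 1j * rng.standard_normal((n_subcarriers, n, n))) / np.sqrt(2)
        caps.append(ofdm_capacity(Hs, snr))
    caps = np.array(caps)
    print(f"{n}x{n}: mean {caps.mean():.1f} bits/s/Hz, "
          f"dynamic range {caps.max() - caps.min():.1f} bits/s/Hz")
```

The dynamic range printed here is simply max minus min capacity across snapshots, which mirrors how a larger array or richer scattering widens the spread of instantaneous capacities.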

Relevance: 20.00%

Publisher:

Abstract:

We assess the increase in particle number emissions from motor vehicles driving at steady speed when forced to stop and accelerate from rest. Considering the example of a signalized pedestrian crossing on a two-way, single-lane urban road, we use a complex line source method to calculate the total emissions produced by a specific number and mix of light petrol cars and diesel passenger buses, and we show that the total emissions during a red light are significantly higher than during the time the light remains green. Replacing two cars with one bus increased the emissions by over an order of magnitude. Given these large differences, we recommend that the importance attached to particle number emissions in traffic management policies be reassessed in the future.
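To make the red-versus-green comparison concrete, here is a deliberately simplified sketch: the paper's complex line source method is replaced by plain sums, and every emission rate, phase length, and fleet mix below is a hypothetical number chosen only to illustrate the bookkeeping, not data from the study.

```python
# A toy illustration of why a red phase can dominate particle number
# emissions: stopping and accelerating multiplies each vehicle's emission
# rate. All rates (particles/s), durations, and fleets are hypothetical.
CRUISE = {"car": 1e11, "bus": 1e13}   # assumed steady-speed rates
ACCEL  = {"car": 1e12, "bus": 1e14}   # assumed accelerate-from-rest rates

def phase_emissions(fleet, rates, duration_s):
    """Total particles emitted by the whole fleet over one signal phase."""
    return duration_s * sum(rates[v] for v in fleet)

green, red = 30.0, 30.0               # assumed phase lengths (s)
fleet_a = ["car"] * 10                # ten petrol cars
fleet_b = ["car"] * 8 + ["bus"]       # two cars replaced by one diesel bus

for name, fleet in (("10 cars", fleet_a), ("8 cars + 1 bus", fleet_b)):
    e_green = phase_emissions(fleet, CRUISE, green)
    e_red = phase_emissions(fleet, ACCEL, red)  # stop-and-go during red
    print(f"{name}: green {e_green:.2e}, red {e_red:.2e} particles")
```

With these assumed factors, swapping two cars for one bus raises the fleet total by roughly an order of magnitude, mirroring the direction of the reported result.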

Relevance: 20.00%

Publisher:

Abstract:

Shrinkage cracking is commonly observed in flat concrete structures such as highway pavements, slabs, and bridge decks. Crack spacing due to shrinkage has received considerable attention for many years [1-3]. However, some aspects of the mechanism governing crack spacing remain unclear. Although it is well known that the interval between cracks generally falls within a range, no satisfactory explanation has been put forward as to why the minimum spacing exists.

Relevance: 20.00%

Publisher:

Abstract:

Since the 1980s, industries and researchers have sought to better understand the quality of services, owing to their rising importance (Brogowicz, Delene and Lyth 1990). More recent developments with online services, coupled with growing recognition of service quality (SQ) as a key contributor to national economies and as an increasingly important competitive differentiator, amplify the need to revisit our understanding of SQ and its measurement. Although SQ can be broadly defined as "a global overarching judgment or attitude relating to the overall excellence or superiority of a service" (Parasuraman, Berry and Zeithaml 1988), the term has many interpretations. There has been considerable progress on how to measure SQ perceptions, but little consensus on what should be measured. There is agreement that SQ is multi-dimensional, but little agreement as to the nature or content of these dimensions (Brady and Cronin 2001). Within the banking sector, for example, there exist multiple SQ models, each consisting of varying dimensions. The existence of multiple conceptions and the lack of a unifying theory call the credibility of existing conceptions into question, and raise the question of whether it is possible, at some higher level, to define SQ broadly such that it spans all service types and industries.

This research explores the viability of a universal conception of SQ, primarily through a careful revisitation of the services and SQ literature. The study analyses the strengths and weaknesses of the highly regarded and widely used global SQ model, SERVQUAL, which reflects a single-level approach to SQ measurement. The SERVQUAL model states that customers evaluate the SQ of each service encounter on five dimensions: reliability, assurance, tangibles, empathy and responsiveness. SERVQUAL, however, fails to address what needs to be reliable, assured, tangible, empathetic and responsive. This research also addresses a more recent global SQ model from Brady and Cronin (2001), the B&C (2001) model, which has the potential to succeed SERVQUAL in that it encompasses other global SQ models and addresses the 'what' questions that SERVQUAL did not. The B&C (2001) model conceives SQ as multidimensional and multi-level, a hierarchical approach to SQ measurement that better reflects human perceptions.

In line with the initial intention of SERVQUAL, which was developed to be generalizable across industries and service types, this research aims to develop a conceptual understanding of SQ, via literature and reflection, that encompasses the content and nature of factors related to SQ, and that addresses the benefits and weaknesses of the various SQ measurement approaches (i.e., disconfirmation versus perceptions-only). Such an understanding of SQ seeks to transcend industries and service types, with the intention of extending our knowledge of SQ and assisting practitioners in understanding and evaluating it.

The candidate's research has been conducted within, and seeks to contribute to, the 'IS-Impact' research track of the IT Professional Services (ITPS) Research Program at QUT. The vision of the track is "to develop the most widely employed model for benchmarking Information Systems in organizations for the joint benefit of research and practice." The 'IS-Impact' research track has developed an Information Systems (IS) success measurement model, the IS-Impact Model (Gable, Sedera and Chan 2008), which seeks to fulfil the track's vision.

Results of this study will help future researchers in the 'IS-Impact' research track address questions such as:
• Is SQ an antecedent or consequence of the IS-Impact model, or both?
• Has SQ already been addressed by existing measures of the IS-Impact model?
• Is SQ a separate, new dimension of the IS-Impact model?
• Is SQ an alternative conception of the IS?

Results from the candidate's research suggest that SQ dimensions can be classified at a higher level encompassed by the B&C (2001) model's three primary dimensions (interaction, physical environment and outcome). The candidate also notes that it might be viable to reword the 'physical environment quality' primary dimension as 'environment quality', so as to better encompass both physical and virtual scenarios (e.g., websites). The candidate does not rule out the global feasibility of the B&C (2001) model's nine sub-dimensions, but acknowledges that more work is needed to better define them. The candidate observes that the 'expertise', 'design' and 'valence' sub-dimensions are supportive representations of the 'interaction', 'physical environment' and 'outcome' primary dimensions respectively; that is, customers evaluate each primary dimension (each higher level of SQ classification) on the basis of the corresponding sub-dimension. The ability to classify SQ dimensions at a higher level, coupled with support for the measures that make up this higher level, leads the candidate to propose the B&C (2001) model as a unifying theory and a starting point for measuring SQ and the SQ of IS. The candidate also notes, in parallel with the continuing validation and generalization of the IS-Impact model, that there is value in alternatively conceptualizing the IS as a 'service' and ultimately triangulating measures of IS SQ with the IS-Impact model. These further efforts are beyond the scope of the candidate's study.

Results from the candidate's research also suggest that both the disconfirmation and perceptions-only approaches have their merits, and that the choice of approach depends on the objectives of the study. If the objective is an overall evaluation of SQ, the perceptions-only approach is more appropriate, being more straightforward and reducing administrative overhead. If the objective is to identify SQ gaps (shortfalls), the (measured) disconfirmation approach is more appropriate, as it can identify areas that need improvement.
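To make the disconfirmation versus perceptions-only distinction concrete, the sketch below scores one hypothetical respondent both ways. The dimension names follow SERVQUAL; all ratings are invented 7-point scores, not data from the candidate's study.

```python
# A minimal sketch contrasting the two SQ measurement approaches discussed
# above. All ratings are hypothetical 7-point Likert scores.
SERVQUAL_DIMENSIONS = ["reliability", "assurance", "tangibles",
                       "empathy", "responsiveness"]

# Hypothetical expectation and perception ratings per dimension.
expectations = {"reliability": 6.5, "assurance": 6.0, "tangibles": 5.0,
                "empathy": 5.5, "responsiveness": 6.2}
perceptions  = {"reliability": 5.8, "assurance": 6.1, "tangibles": 5.2,
                "empathy": 4.9, "responsiveness": 5.0}

# Disconfirmation: gap = perception - expectation; negative gaps flag
# dimensions falling short, i.e. the areas that need improvement.
gaps = {d: perceptions[d] - expectations[d] for d in SERVQUAL_DIMENSIONS}

# Perceptions-only: overall SQ is simply the mean perception score,
# which is more straightforward but cannot localize shortfalls.
overall_sq = sum(perceptions.values()) / len(perceptions)

for dim, gap in sorted(gaps.items(), key=lambda kv: kv[1]):
    print(f"{dim:>14}: gap {gap:+.1f}")
print(f"perceptions-only overall SQ: {overall_sq:.2f}")
```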

Relevance: 20.00%

Publisher:

Abstract:

The range of political information sources available to modern Australians is greater and more varied today than at any point in the nation's history, incorporating print, broadcast, Internet, mainstream and non-mainstream media. In such a competitive media environment, the factors that influence the selection of some information sources over others are of interest to political agents, media institutions and communications researchers alike. A key factor in information source selection is credibility. At the same time as the range of political information sources is increasing rapidly, owing to the development of new information and communication technologies, audience research suggests that trust in mainstream media organisations in many countries is declining. So if people distrust the mainstream media but have a vast array of alternative political information sources available to them, what do their personal media consumption patterns look like? How can we analyse such patterns in a meaningful way? In this paper I briefly map the development of media credibility research in the US and Australia, leading to a discussion of one of the most recent media credibility constructs shown to influence political information consumption: media scepticism. Looking at the consequences of media scepticism, I then consider the associated media consumption construct, the media diet, and evaluate its usefulness in an Australian, as opposed to US, context. Finally, I suggest alternative conceptualisations of media diets that may be better suited to Australian political communications research.

Relevance: 20.00%

Publisher:

Abstract:

It is now well known that pesticide spraying has an adverse impact on farmers' health. This is especially so in developing countries, where spraying is undertaken manually, and the estimated health costs are large. Studies to date have examined farmers' exposure to pesticides, the costs of ill-health, and their determinants based on information provided by the farmers themselves; hence, some doubt has been cast on the reliability of such studies. In this study, we rectify this situation by conducting surveys among two groups of farmers: farmers who perceive that their ill-health is due to pesticide exposure and who obtained treatment, and farmers whose ill-health has been diagnosed by doctors and who have been treated in hospital for pesticide exposure. In the paper, cost comparisons between the two groups are made. Furthermore, regression analysis of the determinants of health costs shows that the quantity of pesticides used per acre per month, the frequency of pesticide use, and the number of pesticides used per hour per day are the most important determinants of medical costs in both samples. The results have important policy implications.
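As a hedged illustration of the kind of regression reported above, the sketch below fits medical costs on the three named determinants by ordinary least squares. The data, coefficients, variable names, and sample size are all synthetic assumptions, not the study's survey data.

```python
# A minimal sketch of regressing medical costs on pesticide-use
# determinants via OLS. All data below are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(1)
n = 200  # hypothetical number of surveyed farmers

# Hypothetical determinants per farmer.
qty_per_acre = rng.uniform(0.5, 5.0, n)   # pesticide quantity/acre/month
spray_freq   = rng.integers(1, 12, n)     # frequency of pesticide use
n_pesticides = rng.integers(1, 6, n)      # pesticides used per hour per day

# Synthetic medical costs with noise (coefficients are invented).
cost = (120 * qty_per_acre + 35 * spray_freq + 60 * n_pesticides
        + rng.normal(0, 50, n))

# OLS fit: design matrix with intercept, solved by least squares.
X = np.column_stack([np.ones(n), qty_per_acre, spray_freq, n_pesticides])
beta, *_ = np.linalg.lstsq(X, cost, rcond=None)
for name, b in zip(["intercept", "qty/acre", "frequency", "n_pesticides"],
                   beta):
    print(f"{name:>13}: {b:8.2f}")
```

In a real analysis one would also report standard errors and fit diagnostics; this sketch only shows the structure of the cost-determinants model.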