909 results for ambiguity aversion


Relevance:

10.00%

Publisher:

Abstract:

The literature on fiscal food policies focuses on their effectiveness in altering diets and improving health; this paper focuses instead on their welfare costs. A formal welfare economics framework is developed to calculate the combined individualistic and distributional impacts of a tax-subsidy. Consumption of the foods targeted by a tax tends to be concentrated in lower-income households, making such a tax regressive. Further, consumption of fruit and vegetables tends to be concentrated in higher-income households; a subsidy on such foods therefore increases regressivity. Aggregate welfare changes resulting from a fiscal food policy are found to range from an increase of 1.41 per cent to a reduction of 2.06 per cent, depending on whether a subsidy is included, the degree of inequality aversion, and whether substitution among foods is allowed.


Data are presented for a nighttime ion heating event observed by the EISCAT radar on 16 December 1988. In the experiment, the aspect angle between the radar beam and the geomagnetic field was fixed at 54.7°, which avoids any ambiguity in the derived ion temperature caused by anisotropy in the ion velocity distribution function. The data were analyzed with an algorithm which takes account of the non-Maxwellian line-of-sight ion velocity distribution. During the heating event, the derived spectral distortion parameter (D∗) indicated that the distribution function was highly distorted from a Maxwellian form when the ion drift increased to 4 km s⁻¹. The true three-dimensional ion temperature was used in the simplified ion balance equation to compute the ion mass during the heating event. The ion composition was found to change from predominantly O⁺ to mainly molecular ions. A theoretical analysis of the ion composition, using the MSIS86 model and published values of the chemical rate coefficients, accounts for the order-of-magnitude increase in the atomic/molecular ion ratio during the event, but does not successfully explain the very high proportion of molecular ions that was observed.
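The choice of 54.7° can be made explicit. For an anisotropic ion velocity distribution, the line-of-sight ion temperature at aspect angle φ to the magnetic field is a weighted mix of the parallel and perpendicular temperatures, and setting cos²φ = 1/3 returns the true three-dimensional temperature whatever the anisotropy (a standard result; the symbols here are generic, not the paper's notation):

```latex
T_i(\phi) = T_{i\parallel}\cos^2\phi + T_{i\perp}\sin^2\phi,
\qquad
\cos^2\phi = \tfrac{1}{3}\ (\phi \approx 54.7^\circ)
\;\Longrightarrow\;
T_i(\phi) = \frac{T_{i\parallel} + 2\,T_{i\perp}}{3} = T_i .
```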


We present observations of a transient event in the dayside auroral ionosphere at magnetic noon. F-region plasma convection measurements were made by the EISCAT radar, operating in the beamswinging “Polar” experiment mode, and simultaneous observations of the dayside auroral emissions were made by optical meridian-scanning photometers and all-sky TV cameras at Ny Ålesund, Spitzbergen. The data were recorded on 9 January 1989, and a sequence of bursts of flow, with associated transient aurorae, was observed between 08:45 and 11:00 U.T. In this paper we concentrate on an event around 09:05 U.T. because it occurred very close to local magnetic noon. The optical data show a transient intensification and widening (in latitude) of the cusp/cleft region, as seen in red line auroral emissions. Over an interval of about 10 min, the band of 630 nm aurora widened from about 1.5° of invariant latitude to over 5° and returned to its original width. Embedded within the widening band of 630 nm emissions were two intense, active 557.7 nm arc fragments with rays, which persisted for about 2 min each. The flow data before and after the optical transient show eastward flows, with speeds increasing markedly with latitude across the band of 630 nm aurora. Strong, apparently westward, flows appeared inside the band while it was widening, but these rotated round to eastward, through northward, as the band shrank to its original width. The observed ion temperatures verify that the flow speeds during the transient were, to a large extent, as derived using the beamswinging technique; but they also show that the flow increase initially occurred in the western azimuth only. This spatial gradient in the flow introduces ambiguity in the direction of these initial flows, and they could have been north-eastward rather than westward.
However, the westward direction derived from the beamswinging is consistent with the motion of the co-located and coincident active 557.7 nm arc fragment. A more stable transient 557.7 nm aurora was found close to the shear between the inferred westward flows and the persisting eastward flows to the north. Throughout the transient, northward flow was observed across the equatorward boundary of the 630 nm aurora. Interpretation of the data is made difficult by the lack of IMF data, problems in distinguishing the cusp and cleft aurorae, and uncertainty over which field lines are open and which are closed. However, at magnetic noon there is a 50% probability that we were observing the cusp, in which case from its southerly location we infer that the IMF was southward, and many features are suggestive of time-varying reconnection at a single X-line on the dayside magnetopause. This IMF orientation is also consistent with the polar rain precipitation observed simultaneously by the DMSP-F9 satellite in the southern polar cap. There is also a 25% chance that we were observing the cleft (or the mantle poleward of the cleft); in this case we infer that the IMF was northward, and the transient is well explained by reconnection which is not only transient in time but occurs at various sites located randomly on the dayside magnetopause (i.e. patchy in space). Lastly, there is a 25% chance that we were observing the cusp poleward of the cleft, in which case we infer that IMF Bz was near zero and the transient is explained by a mixture of the previous two interpretations.


Land cover plays a key role in global to regional monitoring and modeling because it both affects and is affected by climate change, and it has thus become one of the essential variables for climate change studies. National and international organizations require timely and accurate land cover information for reporting and management actions. The North American Land Change Monitoring System (NALCMS) is an international cooperation among organizations and entities in Canada, the United States, and Mexico to map land cover change across North America's changing environment. This paper presents the methodology used to derive the land cover map of Mexico for the year 2005, which was integrated into the NALCMS continental map. The classification is based on a time series of 250 m Moderate Resolution Imaging Spectroradiometer (MODIS) data and an extensive sample database; the complexity of the Mexican landscape required a specific approach to reflect land cover heterogeneity. To estimate the proportion of each land cover class for every pixel, several decision tree classifications were combined to obtain class membership maps, which were finally converted to a discrete map accompanied by a confidence estimate. The map yielded an overall accuracy of 82.5% (Kappa of 0.79) for pixels with at least 50% map confidence (71.3% of the data). An additional assessment with 780 randomly stratified samples, and primary and alternative calls in the reference data to account for ambiguity, indicated 83.4% overall accuracy (Kappa of 0.80). A high agreement of 83.6% for all pixels, and 92.6% for pixels with a map confidence of more than 50%, was found in the comparison between the land cover maps of 2005 and 2006. Further wall-to-wall comparisons to related land cover maps resulted in 56.6% agreement with the MODIS land cover product and a congruence of 49.5% with GlobCover.
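The accuracy figures quoted above (overall accuracy together with Cohen's kappa) follow from a standard confusion-matrix computation. A minimal sketch, using a small hypothetical confusion matrix (the values are illustrative, not the NALCMS validation data):

```python
import numpy as np

# Overall accuracy and Cohen's kappa from a confusion matrix.
# The 3x3 matrix below is hypothetical (rows: reference labels,
# columns: map labels); it only illustrates the computation.
cm = np.array([[50.0, 3.0, 2.0],
               [4.0, 60.0, 6.0],
               [1.0, 5.0, 40.0]])

n = cm.sum()
overall_accuracy = np.trace(cm) / n                      # observed agreement
chance = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2  # agreement expected by chance
kappa = (overall_accuracy - chance) / (1.0 - chance)     # chance-corrected agreement
```

Kappa discounts the agreement that a random assignment with the same marginal class proportions would achieve, which is why it is reported alongside overall accuracy in map validation.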


This article proposes an auction model in which two firms compete to obtain the license for a public project, and an auctioneer, a public official representing the political power, decides the winner of the contest. The firms face a social dilemma: the higher the bribe offered, the greater the willingness of a purely money-maximizing public official to award the license. However, bribery imposes a cost that reduces all players’ payoffs, since our model includes an endogenous externality that depends on the bribe. All players’ payoffs decrease with the bribe (and increase with higher quality). We find that the presence of bribe aversion in either the official’s or the firms’ utility function shifts the equilibrium towards more pro-social behavior. When the quality and bribe-bid strategy space is discrete, multiple equilibria emerge, including more pro-social bids than would be predicted under a continuous strategy space.


Among existing remote sensing applications, land-based X-band radar is an effective technique for monitoring wave fields, and spatial wave information can be obtained from the radar images. The two-dimensional Fourier transform (2-D FT) is the common algorithm for deriving the spectra of radar images. However, the wave field in the nearshore area is highly non-homogeneous due to wave refraction, shoaling, and other coastal mechanisms. When applied to nearshore radar images, the 2-D FT therefore leads to ambiguity of wave characteristics in the wavenumber domain. In this article, we introduce the two-dimensional wavelet transform (2-D WT) to capture the non-homogeneity of wave fields in nearshore radar images. The results show that the wavenumber spectra obtained by 2-D WT at six parallel spatial locations in the given image clearly capture the shoaling of nearshore waves: the wavenumber of the peak wave energy increases in the inshore direction, and the dominant direction of the spectra changes from south-southwest (SSW) to west-southwest (WSW). To verify the results of the 2-D WT, wave shoaling in the radar images is calculated from the dispersion relation; the theoretical results broadly agree with those of the 2-D WT. The encouraging performance of the 2-D WT indicates its strong capability to reveal the non-homogeneity of wave fields in nearshore X-band radar images.
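The core effect the 2-D WT is designed to capture, namely the peak wavenumber increasing in the inshore direction, can be illustrated with a much simpler stand-in: localized spectra computed in short windows along a synthetic 1-D cross-shore transect whose wavenumber grows toward shore. This is a hedged sketch, not the authors' 2-D wavelet algorithm; the signal and all parameters are invented for illustration.

```python
import numpy as np

# Synthetic 1-D cross-shore transect: the local wavenumber k(x) grows
# toward shore, mimicking shoaling, so the field is non-homogeneous
# and a single global FFT smears its energy over a band of wavenumbers.
x = np.linspace(0.0, 1000.0, 4096)           # cross-shore distance [m]
k = 0.05 + 0.0002 * x                        # local wavenumber [rad/m]
eta = np.cos(np.cumsum(k) * (x[1] - x[0]))   # phase = integral of k dx

def local_peak_k(signal, xs, centre, half_width):
    """Peak wavenumber [rad/m] from an FFT of a short window at `centre`."""
    sel = np.abs(xs - centre) < half_width
    seg = signal[sel] * np.hanning(sel.sum())            # taper the window
    spec = np.abs(np.fft.rfft(seg))
    freqs = np.fft.rfftfreq(sel.sum(), d=xs[1] - xs[0])  # cycles/m
    peak = spec[1:].argmax() + 1                         # skip the DC bin
    return 2.0 * np.pi * freqs[peak]

k_offshore = local_peak_k(eta, x, centre=150.0, half_width=100.0)
k_inshore = local_peak_k(eta, x, centre=850.0, half_width=100.0)
# Localized analysis recovers the shoaling trend: k_inshore > k_offshore.
```

A wavelet transform generalizes this windowing idea with scale-adaptive windows, which is what makes it suited to non-homogeneous nearshore wave fields.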


We monitored 8- and 10-year-old children’s eye movements as they read sentences containing a temporary syntactic ambiguity to obtain a detailed record of their online processing. Children showed the classic garden-path effect in online processing. Their reading was disrupted following disambiguation, relative to control sentences containing a comma to block the ambiguity, although the disruption occurred somewhat later than would be expected for mature readers. We also asked children questions to probe their comprehension of the syntactic ambiguity offline. They made more errors following ambiguous sentences than following control sentences, demonstrating that the initial incorrect parse of the garden-path sentence influenced offline comprehension. These findings are consistent with “good enough” processing effects seen in adults. While faster reading times and more regressions were generally associated with better comprehension, spending longer reading the question predicted comprehension success specifically in the ambiguous condition. This suggests that reading the question prompted children to reconstruct the sentence and engage in some form of processing, which in turn increased the likelihood of comprehension success. Older children were more sensitive to the syntactic function of commas, and, overall, they were faster and more accurate than younger children.


While there has been a fair amount of research investigating children’s syntactic processing during spoken language comprehension, and a wealth of research examining adults’ syntactic processing during reading, as yet very little research has focused on syntactic processing during text reading in children. In two experiments, children and adults read sentences containing a temporary syntactic ambiguity while their eye movements were monitored. In Experiment 1, participants read sentences such as, ‘The boy poked the elephant with the long stick/trunk from outside the cage’ in which the attachment of a prepositional phrase was manipulated. In Experiment 2, participants read sentences such as, ‘I think I’ll wear the new skirt I bought tomorrow/yesterday. It’s really nice’ in which the attachment of an adverbial phrase was manipulated. Results showed that adults and children exhibited similar processing preferences, but that children were delayed relative to adults in their detection of initial syntactic misanalysis. It is concluded that children and adults have the same sentence-parsing mechanism in place, but that it operates with a slightly different time course. In addition, the data support the hypothesis that the visual processing system develops at a different rate than the linguistic processing system in children.


The ‘public interest’, even if viewed with ambiguity or scepticism, has been one of the primary means by which the various professional roles of planners have been justified. Many objections to the concept have been advanced by writers in planning academia. Notwithstanding these, the ‘public interest’ continues to be mobilised to justify, defend or argue for planning interventions and reforms. This has led to arguments that planning will have to adopt and recognise some form of public interest in practice to legitimise itself. This paper explores current debates around public interest and social justice and advances a vision of the public interest informed by complexity theory. The empirical context of the paper is a poverty alleviation programme, the Kudumbashree project in Kerala, India.


We let subjects take risky decisions that affect themselves and a passive recipient. Adding a requirement to justify their choices significantly reduces loss aversion. This indicates that such an accountability mechanism may be effective at debiasing loss aversion in agency relations.


We systematically explore decision situations in which a decision maker bears responsibility for somebody else's outcomes as well as for her own, in situations of payoff equality. In the gain domain we confirm the intuition that being responsible for somebody else's payoffs increases risk aversion. This is, however, not attributable to a 'cautious shift', as is often thought. Indeed, looking at risk attitudes in the loss domain, we find an increase in risk seeking under responsibility. This raises issues about the nature of various decision biases under risk, and about the extent to which changed behavior under responsibility depends on a social norm of caution in situations of responsibility versus naive corrections of perceived biases. To explore this issue further, we designed a second experiment on risk-taking behavior for gain prospects offering very small or very large probabilities of winning. For large probabilities, we find increased risk aversion, confirming our earlier finding. For small probabilities, however, we find an increase in risk seeking under conditions of responsibility. The latter finding discredits the hypothesis of a social rule dictating caution under responsibility, and can be explained by flexible self-correction models predicting an accentuation of the fourfold pattern of risk attitudes predicted by prospect theory. An additional accountability mechanism does not change risk behavior, except for mixed prospects, for which it reduces loss aversion. This indicates that loss aversion is of a fundamentally different nature than probability weighting or utility curvature. Implications for debiasing are discussed.
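The fourfold pattern referred to above follows from the inverse-S probability weighting of prospect theory. A minimal sketch using the Tversky-Kahneman (1992) weighting function; the parameter value 0.61 is their published gain-domain estimate, used here only for illustration, and is not a parameter estimated in the experiments described above:

```python
# Tversky-Kahneman (1992) inverse-S probability weighting function.
def w(p, gamma=0.61):
    return p**gamma / (p**gamma + (1.0 - p)**gamma) ** (1.0 / gamma)

# Small probabilities are overweighted and large ones underweighted;
# combined with a concave value function for gains and a convex one
# for losses, this yields the fourfold pattern: risk seeking for
# small-p gains and large-p losses, risk aversion for large-p gains
# and small-p losses.
small, large = w(0.01), w(0.99)
```

With gamma = 0.61, w(0.01) is roughly five times 0.01, while w(0.99) falls noticeably below 0.99, which is the distortion the responsibility treatment accentuates.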


Economic theory makes no predictions about social factors affecting decisions under risk. We examine situations in which a decision maker decides for herself and another person under conditions of payoff equality, and compare them to individual decisions. By estimating a structural model, we find that responsibility leaves utility curvature unaffected, but accentuates the subjective distortion of very small and very large probabilities for both gains and losses. We also find that responsibility reduces loss aversion, but that these results only obtain under some specific definitions of the latter. These results serve to generalize and reconcile some of the still largely contradictory findings in the literature. They also have implications for financial agency, which we discuss.


This chapter re-evaluates the diachronic, evolutionist model that establishes the Second World War as a watershed between classical and modern cinemas, and ‘modernity’ as the political project of ‘slow cinema’. I will start by historicising the connection between cinematic speed and modernity, going on to survey the veritable obsession with the modern that continues to beset film studies despite the vagueness and contradictions inherent in the term. I will then attempt to clarify what is really at stake within the modern-classical debate by analysing two canonical examples of Japanese cinema, drawn from the geidomono genre (films on the lives of theatre actors), Kenji Mizoguchi’s The Story of the Last Chrysanthemum (Zangiku monogatari, 1939) and Yasujiro Ozu’s Floating Weeds (Ukigusa, 1959), with a view to investigating the role of the long take or, conversely, classical editing, in the production or otherwise of a supposed ‘slow modernity’. By resorting to Ozu and Mizoguchi, I hope to demonstrate that the best narrative films in the world have always combined a ‘classical’ quest for perfection with the ‘modern’ doubt of its existence, hence the futility of classifying cinema in general according to an evolutionary and Eurocentric model based on the classical-modern binary. Rather than on a confusing politics of the modern, I will draw on Bazin’s prophetic insight of ‘impure cinema’, a concept he forged in defence of literary and theatrical screen adaptations.
Anticipating by more than half a century the media convergence on which the near totality of our audiovisual experience is currently based, ‘impure cinema’ will give me the opportunity to focus on the confluence of film and theatre in these Mizoguchi and Ozu films as the site of a productive crisis where established genres dissolve into self-reflexive stasis, ambiguity of expression and the revelation of the reality of the film medium, all of which, I argue, are more reliable indicators of a film’s political programme than historical teleology. At the end of the journey, some answers may emerge to whether the combination of the long take and the long shot are sufficient to account for a film’s ‘slowness’ and whether ‘slow’ is indeed the best concept to signify resistance to the destructive pace of capitalism.


In 2013 the Warsaw International Mechanism (WIM) for loss and damage (L&D) associated with climate change impacts was established under the United Nations Framework Convention on Climate Change (UNFCCC). For scientists, L&D raises questions around the extent to which such impacts can be attributed to anthropogenic climate change, which may generate complex results and be controversial in the policy arena. This is particularly true in the case of probabilistic event attribution (PEA) science, a new and rapidly evolving field that assesses whether changes in the probabilities of extreme events are attributable to GHG emissions. If the potential applications of PEA are to be considered responsibly, dialogue between scientists and policy makers is fundamental. Two key questions are considered here through a literature review and key stakeholder interviews with representatives from the science and policy sectors underpinning L&D, which provided in-depth insights into stakeholders’ views. First, how much is known and understood about PEA by those associated with the L&D debate? Second, how might PEA inform L&D and wider climate policy? Results show debate within the climate science community, and limited understanding among other stakeholders, around the sense in which extreme events can be attributed to climate change. However, stakeholders do identify and discuss potential uses for PEA in the WIM and wider policy, although it remains difficult to explore precise applications given the ambiguity surrounding L&D. This implies a need for stakeholders to develop a greater understanding of alternative conceptions of L&D and the role of science, to identify how PEA can best be used to support policy, and to address the associated challenges.
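The quantities PEA studies typically report are the risk ratio and the fraction of attributable risk (FAR), which compare an extreme event's probability in the factual climate with its probability in a counterfactual climate without anthropogenic forcing. A minimal sketch; the two probabilities are invented for illustration:

```python
# Risk ratio and fraction of attributable risk (FAR), the standard
# summary quantities in probabilistic event attribution.
def attribution(p_nat, p_ant):
    """p_nat: probability of exceeding the event threshold in a
    counterfactual 'natural' climate; p_ant: the same probability
    in the climate including anthropogenic forcing."""
    risk_ratio = p_ant / p_nat     # how much more likely the event became
    far = 1.0 - p_nat / p_ant      # share of current risk attributable to forcing
    return risk_ratio, far

rr, far = attribution(p_nat=0.01, p_ant=0.04)  # rr = 4.0, far = 0.75
```

A FAR of 0.75 would be read as three-quarters of the present-day risk of the event being attributable to anthropogenic emissions, which is precisely the kind of statement whose policy use in the WIM is debated.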


Systematic review (SR) is a rigorous, protocol-driven approach designed to minimise error and bias when summarising the body of research evidence relevant to a specific scientific question. Taking as a comparator the use of SR in synthesising research in healthcare, we argue that SR methods could also pave the way for a “step change” in the transparency, objectivity and communication of chemical risk assessments (CRA) in Europe and elsewhere. We suggest that current controversies around the safety of certain chemicals are partly due to limitations in current CRA procedures which have contributed to ambiguity about the health risks posed by these substances. We present an overview of how SR methods can be applied to the assessment of risks from chemicals, and indicate how challenges in adapting SR methods from healthcare research to the CRA context might be overcome. Regarding the latter, we report the outcomes from a workshop exploring how to increase uptake of SR methods, attended by experts representing a wide range of fields related to chemical toxicology, risk analysis and SR. Priorities which were identified include: the conduct of CRA-focused prototype SRs; the development of a recognised standard of reporting and conduct for SRs in toxicology and CRA; and establishing a network to facilitate research, communication and training in SR methods. We see this paper as a milestone in the creation of a research climate that fosters communication between experts in CRA and SR and facilitates wider uptake of SR methods into CRA.