9 results for "15 PAHs, see dataset comment"
in CentAUR: Central Archive, University of Reading - UK
Abstract:
Different systems, different purposes – but how do they compare as learning environments? We undertook a survey of students at the University, asking whether they learned from their use of the systems, whether they made contact with other students through them, and how often they used them. Although it was a small-scale survey, the results are quite enlightening and quite surprising. Blackboard is populated with learning material, has all the students on a module signed up to it, is a safe environment (in terms of Acceptable Use and some degree of staff monitoring) and provides privacy within the learning group (plus lecturer and relevant support staff). Facebook, on the other hand, has no learning material, has only some of the students using the system, and, on the face of it, offers the opportunity for slips in privacy and potential bullying, because its Acceptable Use policy is more lax than an institutional one and breaches must be dealt with on an exception basis, when reported. So why do more students find people on their courses through Facebook than Blackboard? And why are up to 50% of students reporting that they have learned from using Facebook? Interviews indicate that students in subjects which use seminars are using Facebook to facilitate working groups: they can set up private groups which give them privacy to discuss ideas in an environment which is perceived as safer than Blackboard can provide. There is no staff interference, unless they choose to invite staff in, and there is the opportunity to select who in the class can engage. The other striking finding is the difference in use between the genders. Males are using Blackboard more frequently than females, whilst the reverse is true for Facebook. Interviews suggest that this may have something to do with needing to access lecture notes… Overall, though, it appears that there is little relationship between the time spent engaging with Blackboard and reports that students have learned from it. Because Blackboard is our central repository for notes, any contact is likely to result in some learning. Facebook, however, shows a clear relationship between frequency of use and perception of learning, and our students post frequently to Facebook. Whilst much of this is probably trivia and social chit-chat, the educational elements of it are, de facto, constructivist in nature. Further questions need to be answered: Is the reason the students learn from Facebook that they are creating content which others will see and comment on? Is it because they can engage in a dialogue without the risk of interruption by others?
Abstract:
A recent paper published in this journal considers the numerical integration of the shallow-water equations using the leapfrog time-stepping scheme [Sun Wen-Yih, Sun Oliver MT. A modified leapfrog scheme for shallow water equations. Comput Fluids 2011;52:69–72]. The authors of that paper propose using the time-averaged height in the numerical calculation of the pressure-gradient force, instead of the instantaneous height at the middle time step. The authors show that this modification doubles the maximum Courant number (and hence the maximum time step) at which the integrations are stable, doubling the computational efficiency. Unfortunately, the pressure-averaging technique proposed by the authors is not original. It was devised and published by Shuman [5] and has been widely used in the atmosphere and ocean modelling community for over 40 years.
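To make the modification concrete, here is a minimal Python sketch of a 1-D linearized shallow-water leapfrog step with the height (pressure) averaging described above. The grid, parameter values, and the 1/4, 1/2, 1/4 averaging weights are illustrative assumptions, not taken from either paper.

```python
# Minimal 1-D linearized shallow-water sketch: leapfrog with a time-averaged
# height in the pressure-gradient term, in the spirit of Shuman-type
# pressure averaging. All parameter values are illustrative assumptions.
import numpy as np

g, H = 9.81, 100.0            # gravity (m/s^2), mean depth (m)
nx, dx, dt = 100, 1.0e4, 50.0 # grid points, spacing (m), time step (s)
x = np.arange(nx) * dx

def ddx(f):
    # centred difference on a periodic domain
    return (np.roll(f, -1) - np.roll(f, 1)) / (2 * dx)

# initial state plus one forward step to start the leapfrog
h_old = 1.0 + 0.1 * np.exp(-((x - x.mean()) / (5 * dx)) ** 2)
u_old = np.zeros(nx)
h_now = h_old - dt * H * ddx(u_old)
u_now = u_old - dt * g * ddx(h_old)

for _ in range(200):
    # update the height first, so the new-level height is available
    h_new = h_old - 2 * dt * H * ddx(u_now)
    # pressure averaging: gradient of the time-averaged height over levels
    # n-1, n, n+1, instead of the instantaneous height at the middle level
    h_avg = 0.25 * (h_new + 2 * h_now + h_old)
    u_new = u_old - 2 * dt * g * ddx(h_avg)
    h_old, h_now = h_now, h_new
    u_old, u_now = u_now, u_new
```

Replacing `h_avg` with `h_now` in the velocity update recovers the plain leapfrog scheme, which is the comparison at issue in the abstract.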
Abstract:
Our differences are three. The first arises from the belief that "... a nonzero value for the optimally chosen policy instrument implies that the instrument is efficient for redistribution" (Alston, Smith, and Vercammen, p. 543, paragraph 3). Consider the two equations: (1) τ* = f(β) and (2) π* = −f(β) + h(α, β), representing the solution to the problem of maximizing weighted Marshallian surplus using, simultaneously, a per-unit border intervention, τ, and a per-unit domestic intervention, π. In the solution, parameter α denotes the weight applied to producer surplus; parameter β denotes the weight applied to government revenues; consumer surplus is implicitly weighted at one; and the country in question is small in the sense that it is unable to affect world price by any of its domestic adjustments (see the Appendix). Details of the forms of the functions f(β) and h(α, β) are easily derived, but what matters in the context of Alston, Smith, and Vercammen's Comment is: redistributive preferences that favor producers are consistent with higher values of α, and whereas the optimal domestic intervention, π*, has both "alpha and beta effects," the optimal border intervention, τ*, has only a "beta effect"; it does not have a redistributional role. Garth Holloway is reader in agricultural economics and statistics, Department of Agricultural and Food Economics, School of Agriculture, Policy, and Development, University of Reading. The author is very grateful to Xavier Irz, Bhavani Shankar, Chittur Srinivasan, Colin Thirtle, and Richard Tiffin for their comments and their wisdom; and to Mario Mazzochi, Marinos Tsigas, and Cal Turvey for their scholarship, including help in tracking down a fairly complete collection of the papers that cite Alston and Hurd. They are not responsible for any errors or omissions. Note, in equation (1), that the border intervention is positive whenever a distortion exists, because δ > 0 implies β = 1 + δ > 1 and, thus, f(β) > 0 (see the Appendix). Using Alston, Smith, and Vercammen's definition, the instrument is now "efficient," and therefore has a redistributive role. But now suppose that the distortion is removed, so that β = 1 + δ = 1, δ = 0, and consequently the border intervention is zero. According to Alston, Smith, and Vercammen, the instrument is now "inefficient" and has no redistributive role. The reader will note that this thought experiment has said nothing about supporting farm incomes, and so has nothing whatsoever to do with efficient redistribution. Of course, the definition is false. It follows that a domestic distortion arising from the "excess-burden argument" (β = 1 + δ, δ > 0) does not make an export subsidy "efficient." The export subsidy, having only a "beta effect," does not have a redistributional role. The second disagreement emerges from the comment that Holloway "... uses an idiosyncratic definition of the relevant objective function of the government" (Alston, Smith, and Vercammen, p. 543, paragraph 2). The objective function that generates equations (1) and (2) (see the Appendix) is the same as the objective function used by Gardner (1995) when he first questioned Alston, Carter, and Smith's claim that a "domestic distortion can make a border intervention efficient in transferring surplus from consumers and taxpayers to farmers."
The objective function used by Gardner (1995) is the same objective function used in the contributions that precede it and thus defines the literature on the debate about border- versus domestic intervention (Streeten; Yeh; Paarlberg 1984, 1985; Orden; Gardner 1985). The objective function in the latter literature is the same as the one implied in another literature that originates from Wallace and includes most notably Gardner (1983), but also Alston and Hurd. The objective function in Holloway is this same objective function; it is, of course, Marshallian surplus. The third disagreement concerns scholarship. The Comment does not seem to be cognizant of several important papers, especially Bhagwati and Ramaswami, and Bhagwati, both of which precede Corden (1974, 1997); but also Lipsey and Lancaster, and Moschini and Sckokai; one important aspect of Alston and Hurd; and one extremely important result in Holloway. This oversight has some unfortunate repercussions. First, it misdirects to the wrong origins of intellectual property. Second, it misleads about the appropriateness of some welfare calculations. Third, it prevents Alston, Smith, and Vercammen from linking a finding in Holloway (pp. 242-43) with an old theorem (Lipsey and Lancaster) that settles the controversy (Alston, Carter, and Smith 1993, 1995; Gardner 1995; and, presently, Alston, Smith, and Vercammen) about the efficiency of border intervention in the presence of domestic distortions.
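For readers tracing the argument, the following is a hedged restatement of the optimization problem behind equations (1) and (2). The notation (W for the welfare objective; CS, PS, GR for consumer surplus, producer surplus, and government revenue) is reconstructed from the surrounding text, not quoted from the paper:

```latex
% Weighted Marshallian-surplus problem implied by equations (1) and (2);
% notation reconstructed from the text, not quoted from the paper.
\[
  \max_{\tau,\,\pi}\; W(\tau,\pi)
    = \mathrm{CS}(\tau,\pi) + \alpha\,\mathrm{PS}(\tau,\pi)
      + \beta\,\mathrm{GR}(\tau,\pi),
\]
\[
  \tau^{*} = f(\beta), \qquad
  \pi^{*} = -f(\beta) + h(\alpha,\beta),
\]
% with consumer surplus implicitly weighted at one and, in the
% excess-burden case, $\beta = 1 + \delta$ with $\delta > 0$.
```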
Abstract:
Introduction: Observations of behaviour and research using eye-tracking technology have shown that individuals with Williams syndrome (WS) pay an unusual amount of attention to other people’s faces. The present research examines whether this attention to faces is moderated by the valence of emotional expression. Method: Sixteen participants with WS aged between 13 and 29 years (Mean=19 years 9 months) completed a dot-probe task in which pairs of faces displaying happy, angry and neutral expressions were presented. The performance of the WS group was compared to two groups of typically developing control participants, individually matched to the participants in the WS group on either chronological age or mental age. General mental age was assessed in the WS group using the Woodcock Johnson Test of Cognitive Ability Revised (WJ-COG-R; Woodcock & Johnson, 1989; 1990). Results: Compared to both control groups, the WS group exhibited a greater attention bias for happy faces. In contrast, no between-group differences in bias for angry faces were obtained. Conclusions: The results are discussed in relation to recent neuroimaging findings and the hypersocial behaviour that is characteristic of the WS population.
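As a point of reference, attention bias in a dot-probe task of this kind is conventionally scored as the reaction-time difference between incongruent trials (probe replaces the neutral face) and congruent trials (probe replaces the emotional face). The Python sketch below, with invented column names and values, illustrates that computation; it is not the authors' analysis code.

```python
# Hedged sketch of a standard dot-probe bias score: mean RT on incongruent
# trials minus mean RT on congruent trials, per emotion. A positive score
# indicates attention drawn toward the emotional face. Data are invented.
import pandas as pd

trials = pd.DataFrame({
    "emotion":   ["happy", "happy", "angry", "angry"],
    "congruent": [True, False, True, False],  # probe at the emotional face?
    "rt_ms":     [412.0, 455.0, 430.0, 441.0],
})

def bias_scores(df):
    # mean RT split by emotion and congruence, then incongruent - congruent
    means = df.groupby(["emotion", "congruent"])["rt_ms"].mean().unstack()
    return means[False] - means[True]

print(bias_scores(trials))  # per-emotion attention bias in ms
```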
Abstract:
We propose, first, a simple task for eliciting attitudes toward risky choice, the SGG lottery-panel task, which consists of a series of lotteries constructed to compensate riskier options with higher risk-return trade-offs. Using the Principal Component Analysis technique, we show that the SGG lottery-panel task is capable of capturing two dimensions of individual risky decision making, i.e. subjects' average risk taking and their sensitivity towards variations in risk-return. From the results of a large experimental dataset, we confirm that the task systematically captures a number of regularities, such as: a tendency towards risk-averse behavior (only around 10% of choices are compatible with risk neutrality); an attraction to certain payoffs compared to low-risk lotteries, compatible with the over- (under-) weighting of small (large) probabilities predicted in PT; and gender differences, i.e. males being consistently less risk averse than females but both genders being similarly responsive to increases in the risk premium. Another interesting result is that in hypothetical choices most individuals increase their risk taking in response to the increase in return to risk, as predicted by PT, while across panels with real rewards we see even more changes, but opposite to the expected pattern of riskier choices for higher risk-returns. Therefore, we conclude from our data that an "economic anomaly" emerges in the real-reward choices, opposite to the hypothetical choices. These findings are in line with Camerer's (1995) view that although in many domains paid subjects probably do exert extra mental effort which improves their performance, choice over money gambles is not likely to be a domain in which effort will improve adherence to rational axioms (p. 635). Finally, we demonstrate that both dimensions of risk attitudes, average risk taking and sensitivity towards variations in the return to risk, are desirable not only to describe behavior under risk but also to explain behavior in other contexts, as illustrated by an example. In the second study, we propose three additional treatments intended to elicit risk attitudes under high stakes and mixed-outcome (gains and losses) lotteries. Using a dataset obtained from a hypothetical implementation of the tasks, we show that the new treatments are able to capture both dimensions of risk attitudes. This new dataset allows us to describe several regularities, both at the aggregate and within-subjects level. We find that in every treatment over 70% of choices show some degree of risk aversion and only between 0.6% and 15.3% of individuals are consistently risk neutral within the same treatment. We also confirm the existence of gender differences in the degree of risk taking, that is, in all treatments females prefer safer lotteries compared to males. Regarding our second dimension of risk attitudes, we observe, in all treatments, an increase in risk taking in response to risk-premium increases. Treatment comparisons reveal other regularities, such as a lower degree of risk taking in large-stake treatments compared to low-stake treatments and a lower degree of risk taking when losses are incorporated into the large-stake lotteries. These results are compatible with previous findings in the literature for stake-size effects (e.g., Binswanger, 1980; Bosch-Domènech & Silvestre, 1999; Hogarth & Einhorn, 1990; Holt & Laury, 2002; Kachelmeier & Shehata, 1992; Kühberger et al., 1999; Weber & Chapman, 2005; Wik et al., 2007) and the domain effect (e.g., Brooks & Zank, 2005; Schoemaker, 1990; Wik et al., 2007). For small-stake treatments, however, we find that the effect of incorporating losses into the outcomes is not so clear. At the aggregate level an increase in risk taking is observed, but also more dispersion in the choices, whilst at the within-subjects level the effect weakens. Finally, regarding responses to the risk premium, we find that, compared to gains-only treatments, sensitivity is lower in the mixed-lottery treatments (SL and LL). In general, sensitivity to risk-return is more affected by the domain than by the stake size. Having described the properties of risk attitudes as captured by the SGG risk elicitation task and its three new versions, it is important to recall that the danger of using unidimensional descriptions of risk attitudes goes beyond the incompatibility with modern economic theories like PT, CPT, etc., all of which call for tests with multiple degrees of freedom. Being faithful to this recommendation, the contribution of this essay is an empirically and endogenously determined bi-dimensional specification of risk attitudes, useful for describing behavior under uncertainty and for explaining behavior in other contexts. Hopefully, this will contribute to the creation of large datasets containing a multidimensional description of individual risk attitudes, while at the same time allowing for a robust context, compatible with present and even future, more complex descriptions of human attitudes towards risk.
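As an illustration of the kind of analysis described above, the sketch below applies PCA to a subjects-by-panels matrix of risky choices. The data layout, dimensions, and random values are assumptions for demonstration, not the SGG dataset.

```python
# Hedged sketch: PCA on a subjects-by-panels choice matrix, expecting two
# components that map onto average risk taking and sensitivity to the
# risk-return trade-off, as in the text. Data are randomly generated.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# rows = subjects, columns = lottery panels; entries = chosen risk level
choices = rng.integers(1, 7, size=(200, 4)).astype(float)

pca = PCA(n_components=2)          # PCA centres the data internally
scores = pca.fit_transform(choices)
print(pca.explained_variance_ratio_)
# scores[:, 0] ~ average risk taking across panels;
# scores[:, 1] ~ sensitivity to variations in the risk premium
# (interpretation of the two components as described in the abstract)
```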
Abstract:
This paper presents the PETS2009 outdoor crowd image analysis surveillance dataset and the performance evaluation of the people counting, detection, and tracking results submitted to five IEEE Performance Evaluation of Tracking and Surveillance (PETS) workshops using the dataset. The evaluation was carried out using well-established metrics developed in the Video Analysis and Content Extraction (VACE) programme and by the CLassification of Events, Activities, and Relationships (CLEAR) consortium. The comparative evaluation highlights the detection and tracking performance of the authors' systems in areas such as precision, accuracy, and robustness, and provides a brief analysis of the metrics themselves to give further insight into the performance of the authors' systems.
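For orientation, one of the core CLEAR metrics referred to above is the Multiple Object Tracking Accuracy (MOTA). The sketch below shows its standard form, with placeholder counts rather than results from the paper.

```python
# Standard CLEAR MOT accuracy score:
# MOTA = 1 - (misses + false positives + identity switches) / ground-truth
# object count. The counts below are illustrative placeholders.
def mota(misses: int, false_positives: int, id_switches: int, num_gt: int) -> float:
    return 1.0 - (misses + false_positives + id_switches) / num_gt

print(mota(misses=120, false_positives=80, id_switches=15, num_gt=2000))  # 0.8925
```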
Abstract:
Recently, the original benchmarking methodology of the Sustainable Value approach became the subject of serious debate. While Kuosmanen and Kuosmanen (2009b) critically question its validity by introducing productive efficiency theory, Figge and Hahn (2009) put forward that the implementation of productive efficiency theory severely conflicts with the original financial economics perspective of the Sustainable Value approach. We argue that the debate is very confusing because the original Sustainable Value approach presents two largely incompatible objectives. Nevertheless, we maintain that both ways of benchmarking could provide useful and, moreover, complementary insights. If one intends to present the overall resource efficiency of the firm from the investor's viewpoint, we recommend the original benchmarking methodology. If, on the other hand, one aspires to create a prescriptive tool setting up some sort of reallocation scheme, we advocate implementation of productive efficiency theory. Although the discussion on benchmark application is certainly substantial, we should avoid narrowing the debate accordingly. Next to the benchmark concern, we see several other challenges concerning the development of the Sustainable Value approach: (1) a more systematic resource selection, (2) the inclusion of the value chain and (3) additional analyses related to policy in order to increase interpretative power.
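For concreteness, a minimal sketch of the original, investor's-view benchmarking logic of the Sustainable Value approach follows: each resource's use is valued at the firm's return per unit of resource minus the benchmark's, and the contributions are averaged across resources. The function signature and all figures are invented for illustration.

```python
# Hedged sketch of the original Sustainable Value benchmarking logic:
# SV = (1/R) * sum over resources of x_r * (firm return / x_r
#                                           - benchmark return / x_r^bench).
# All numbers below are invented placeholders.
def sustainable_value(firm_return, firm_use, bench_return, bench_use):
    contributions = [
        x * (firm_return / x - bench_return / bx)   # value created vs benchmark
        for x, bx in zip(firm_use, bench_use)
    ]
    return sum(contributions) / len(contributions)  # average across resources

# e.g. two resources, such as CO2 emissions and water use
print(sustainable_value(firm_return=5e6, firm_use=[1e4, 2e5],
                        bench_return=4e8, bench_use=[9e5, 1.5e7]))
```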
Abstract:
As satellite technology develops, satellite rainfall estimates are likely to become ever more important in the world of food security. It is therefore vital to be able to identify the uncertainty of such estimates and for end users to be able to use this information in a meaningful way. This paper presents new developments in the methodology of simulating satellite rainfall ensembles from thermal infrared satellite data. Although the basic sequential simulation methodology has been developed in previous studies, it was not suitable for use in regions with more complex terrain and limited calibration data. Developments in this work include the creation of a multithreshold, multizone calibration procedure, plus investigations into the causes of an overestimation of low rainfall amounts and the best way to take into account clustered calibration data. A case study of the Ethiopian highlands has been used as an illustration.
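A heavily simplified sketch of the kind of ensemble generation described above follows. The zones, cloud-top classes, and calibration statistics are invented placeholders, and the real method's sequential simulation structure is not reproduced; the sketch only illustrates the idea of per-zone, per-threshold calibrated sampling.

```python
# Hedged, highly simplified sketch: draw rainfall ensemble members from
# thermal-infrared cloud-top classes using a per-zone, multi-threshold
# calibration table. All structure and numbers are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(42)

# calibration: for each zone, (mean, sd) of rainfall in mm per cloud class
calibration = {
    "highlands": {"cold": (8.0, 4.0), "warm": (1.0, 1.0)},
    "lowlands":  {"cold": (5.0, 3.0), "warm": (0.5, 0.5)},
}

def simulate_member(zone, cloud_classes):
    """Draw one ensemble member of rainfall for a sequence of pixels."""
    out = []
    for c in cloud_classes:
        mu, sd = calibration[zone][c]
        out.append(max(0.0, rng.normal(mu, sd)))  # truncate negatives at zero
    return np.array(out)

# ten ensemble members for three pixels in one calibration zone
ensemble = [simulate_member("highlands", ["cold", "cold", "warm"])
            for _ in range(10)]
```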