859 results for Attention economics
Abstract:
It is imperative that we consider the use of current and emerging technologies in terms of the nature of our learners, the physical environment of the lecture theatre, and how technology may help to support appropriate pedagogies that capture student attention in active, engaging learning experiences. It is argued that a re-evaluation of pedagogy is required to address the tech-savvy traits of the 21st-century learner and the extent to which their mobile devices are capable not only of distracting them from learning but also of enhancing face-to-face learning experiences.
Abstract:
First year students attend face-to-face classes armed with an arsenal of internet-enabled digital devices. The conundrum is that while these devices offer scope for enhancing opportunities for engagement in face-to-face learning, they may simultaneously distract students from learning and compound isolation issues. This paper considers how best to use these devices for maximum engagement in first year face-to-face learning, so as to assist students in connecting with other learners and instructors within the learning environment.
Abstract:
The expansion of economics to ‘non-market topics’ has received increased attention in recent years. The economics of sports (football) is one such sub-field. This paper reports empirical evidence on team and referee performances in the FIFA World Cup 2002. The results reveal that being the host nation has a significant impact on the probability of winning a game. Furthermore, the strength of a team, as measured by the FIFA World Ranking, does not play the important role presumed, which indicates that the element of uncertainty is at work. The findings also indicate that the influence of a referee on the game result should not be neglected. Finally, previous World Cup experience seems to have the strongest impact on referees' performance during the game.
Abstract:
Two perceptions of the marginality of home economics are widespread across educational and other contexts. One is that home economics and those who engage in its pedagogy are inevitably marginalised within patriarchal relations in education and culture. This is because home economics is characterised as women's knowledge, for the private domain of the home. The other perception is that only orthodox epistemological frameworks of inquiry should be used to interrogate this state of affairs. These perceptions have prompted leading theorists in the field to call for non-essentialist approaches to research in order to re-think the thinking that has produced this cul-de-sac positioning of home economics as a body of knowledge and a site of teacher practice. This thesis takes up the challenge of working to locate a space outside the frame of modernist research theory and methods, recognising that this shift in epistemology is necessary to unsettle the idea that home economics is inevitably marginalised. The purpose of the study is to reconfigure how we have come to think about home economics teachers and the profession of home economics as a site of cultural practice, in order to think it otherwise (Lather, 1991). This is done by exploring how the culture of home economics is being contested from within. To do so, the thesis uses a 'posthumanist' approach, which rejects the conception of the individual as a unitary and fixed entity, viewing the subject instead as one in process, shaped by desires and language which are not necessarily consciously determined. This posthumanist project focuses attention on pedagogical body subjects as the 'unsaid' of home economics research. It works to transcend the modernist dualism of mind/body, and other binaries central to modernist work, including private/public, male/female, paid/unpaid, and valued/unvalued. In so doing, it refuses the simple margin/centre geometry so characteristic of current perceptions of home economics itself.
Three studies make up this work. Studies one and two serve to document the disciplined body of home economics knowledge, the governance of which works towards normalisation of the 'proper' home economics teacher. The analysis of these accounts of home economics teachers by home economics teachers reveals that home economics teachers are 'skilled' yet they 'suffer' for their profession. Further, home economics knowledge is seen to be complicit in reinforcing the traditional roles of masculinity and femininity, thereby reinforcing heterosexual normativity, which is central to patriarchal society. The third study looks to four 'atypical' subjects who defy the category of 'proper' and 'normal' home economics teacher. These 'atypical' bodies are 'skilled' but fiercely reject the label of 'suffering'. The discussion of the studies is a feminist poststructural account, using Russo's (1994) notion of the grotesque body, which is emergent from Bakhtin's (1968) theory of the carnivalesque. It draws on the 'shreds' of home economics pedagogy, scrutinising them for their subversive, transformative potential. In this analysis, the giving and taking of pleasure and fun in the home economics classroom presents moments of surprise and of carnival. Foucault's notion of the construction of the ethical individual shows these 'atypical' bodies to be 'immoderate' yet striving hard to be 'continent' body subjects. This research captures moments of transgression which suggest that transformative moments are already embodied in the pedagogical practices of home economics teachers, and these can be 'seen' when re-looking through postmodernist lenses. Hence, the cultural practices of home economics as inevitably marginalised are being contested from within. Until now, home economics as a lived culture has failed to recognise possibilities for reconstructing its own field beyond the confines of modernity.
This research is an example of how to think about home economics teachers and the profession as a reconfigured cultural practice. Future research about home economics as a body of knowledge and a site of teacher practice need not retell a simple story of oppression. Using postmodernist epistemologies is one way to provide opportunities for new ways of looking.
Abstract:
This paper proposes a method, based on polychotomous discrete choice methods, to impute a continuous measure of income when only a bracketed measure of income is available, and for only a subset of the observations. The method is shown to perform well with CPS data. © 1991.
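The paper's actual estimator is built on polychotomous discrete choice methods; as a rough sketch of the underlying imputation idea, the snippet below replaces each bracketed response with the conditional mean of an assumed latent income distribution. The distribution, its parameters, and the brackets are all hypothetical, chosen only for illustration.

```python
import math

def truncated_normal_mean(lo, hi, mu, sigma):
    """E[X | lo < X < hi] for X ~ N(mu, sigma^2), via the inverse Mills ratio."""
    phi = lambda z: math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)  # pdf
    Phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))         # cdf
    a, b = (lo - mu) / sigma, (hi - mu) / sigma
    return mu + sigma * (phi(a) - phi(b)) / (Phi(b) - Phi(a))

# Replace each bracketed response by the conditional mean of an assumed
# latent income distribution (parameters here are purely illustrative).
brackets = [(20_000, 40_000), (40_000, 60_000)]
mu, sigma = 45_000, 15_000
imputed = [truncated_normal_mean(lo, hi, mu, sigma) for lo, hi in brackets]
```

Each imputed value necessarily falls inside its bracket, and the assumed latent distribution pulls it away from the bracket midpoint toward the region of higher density.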
Abstract:
To the Editor—In a recent review article in Infection Control and Hospital Epidemiology, Umscheid et al.1 summarized published data on incidence rates of catheter-associated bloodstream infection (CABSI), catheter-associated urinary tract infection (CAUTI), surgical site infection (SSI), and ventilator-associated pneumonia (VAP); estimated how many cases are preventable; and calculated the savings in hospital costs and lives that would result from preventing all preventable cases. Providing these estimates to policy makers, political leaders, and health officials helps to galvanize their support for infection prevention programs. Our concern is that important limitations of the published studies on which Umscheid and colleagues built their findings are incompletely addressed in this review. More attention needs to be drawn to the techniques applied to generate these estimates...
Abstract:
This paper argues the case for closer attention to media economics on the part of media, communications and cultural studies researchers. It points to a plurality of approaches to media economics, that include the mainstream neoclassical school and critical political economy, but also new insights derived from perspectives that are less well-known outside of the economics discipline, such as new institutional economics and evolutionary economics. It applies these frameworks to current debates about the future of public service media (PSM), noting limitations to both ‘market failure’ and citizenship discourses, and identifying challenges relating to institutional governance, public policy and innovation as PSMs worldwide adapt to a digitally convergent media environment.
Abstract:
Eutrophication of the Baltic Sea is a serious problem. This thesis estimates the benefit to Finns from reduced eutrophication in the Gulf of Finland, the most eutrophied part of the Baltic Sea, by applying the choice experiment method, which belongs to the family of stated preference methods. Because stated preference methods have been subject to criticism, e.g., due to their hypothetical survey context, this thesis contributes to the discussion by studying two anomalies that may lead to biased welfare estimates: respondent uncertainty and preference discontinuity. The former refers to the difficulty of stating one's preferences for an environmental good in a hypothetical context. The latter implies a departure from the continuity assumption of conventional consumer theory, which forms the basis for the method and the analysis. In the three essays of the thesis, discrete choice data are analyzed with the multinomial logit and mixed logit models. On average, Finns are willing to contribute to the water quality improvement. The probability of willingness increases with residential or recreational contact with the gulf, higher than average income, younger than average age, and the absence of dependent children in the household. On average, the most important characteristic of water quality for Finns is water clarity, followed by the desire for fewer occurrences of blue-green algae. For future nutrient reduction scenarios, the annual mean household willingness to pay estimates range from 271 to 448 euros, and the aggregate welfare estimates for Finns range from 28 billion to 54 billion euros, depending on the model and the intensity of the reduction. Of the respondents (N=726), 72.1% state in a follow-up question that they are either Certain or Quite certain about their answer when choosing the preferred alternative in the experiment.
Based on the analysis of other follow-up questions and another sample (N=307), 10.4% of the respondents are identified as potentially having discontinuous preferences. In relation to both anomalies, respondent- and questionnaire-specific variables are found among the underlying causes, and a departure from standard analysis may improve the model fit and the efficiency of estimates, depending on the chosen modeling approach. The introduction of uncertainty about the future state of the Gulf increases the acceptance of the valuation scenario, which may indicate an increased credibility of a proposed scenario. In conclusion, modeling preference heterogeneity is an essential part of the analysis of discrete choice data. The results regarding uncertainty in stating one's preferences and non-standard choice behavior are promising: accounting for these anomalies in the analysis may improve the precision of the estimates of the benefit from reduced eutrophication in the Gulf of Finland.
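The multinomial logit model used in these essays assigns each alternative a linear utility and converts utilities into choice probabilities via a softmax; marginal willingness to pay for an attribute is the ratio of that attribute's coefficient to the (negative) cost coefficient. A minimal sketch with made-up coefficients and alternatives, not the thesis's estimates:

```python
import math

def mnl_probabilities(utilities):
    """Multinomial logit choice probabilities: softmax of systematic utilities."""
    m = max(utilities)  # subtract the max for numerical stability
    expu = [math.exp(u - m) for u in utilities]
    total = sum(expu)
    return [e / total for e in expu]

# Hypothetical alternatives: status quo plus two improvement scenarios,
# each described by (water clarity gain, annual cost in euros).
beta_clarity, beta_cost = 1.2, -0.02  # assumed coefficients
alternatives = [(0.0, 0.0), (1.0, 30.0), (2.0, 60.0)]
probs = mnl_probabilities(
    [beta_clarity * c + beta_cost * p for c, p in alternatives])

# Marginal willingness to pay for one unit of clarity improvement:
wtp_clarity = -beta_clarity / beta_cost  # euros per year
```

With these assumed coefficients, the implied willingness to pay is 60 euros per year per unit of clarity, regardless of which alternative is chosen; mixed logit relaxes this by letting the coefficients vary across respondents.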
Abstract:
The increasingly intense competition between commercial and recreational fishermen for access to fish stocks has focused attention on the economic implications of fishery allocations. Indeed, one can scarcely find a management plan or amendment that does not at least refer to the relative food and sport values of fish and to how expenditures by commercial and recreational fishermen on equipment and supplies stimulate the economy. However, many of the arguments raised by constituents to influence such allocations, while having a seemingly "economics" ring to them, are usually incomplete, distorted, or even incorrect. This report offers fishery managers and other interested parties a guide to correct notions of economic value and to the appropriate ways to characterize, estimate, and compare value. In particular, introductory material from benefit-cost analysis and input-output analysis is described and illustrated. In the process, several familiar specious arguments are exposed.
Abstract:
This thesis belongs to the growing field of economic networks. In particular, we develop three essays in which we study the problem of bargaining, discrete choice representation, and pricing in the context of networked markets. Despite analyzing very different problems, the three essays share the common feature of making use of a network representation to describe the market of interest.
In Chapter 1 we present an analysis of bargaining in networked markets. We make two contributions. First, we characterize market equilibria in a bargaining model, and find that players' equilibrium payoffs coincide with their degree of centrality in the network, as measured by Bonacich's centrality measure. This characterization allows us to map, in a simple way, network structures into market equilibrium outcomes, so that payoff dispersion in networked markets is driven by players' network positions. Second, we show that the market equilibrium for our model converges to the so-called eigenvector centrality measure. We show that the economic condition for reaching convergence is that the players' discount factor goes to one. In particular, we show how the discount factor, the matching technology, and the network structure interact in a very particular way in order to see the eigenvector centrality as the limiting case of our market equilibrium.
We point out that the eigenvector approach is a way of finding the most central or relevant players in terms of the “global” structure of the network, while paying less attention to patterns that are more “local”. Mathematically, the eigenvector centrality captures the relevance of players in the bargaining process, using the eigenvector associated with the largest eigenvalue of the adjacency matrix of a given network. Thus our result may be viewed as an economic justification of the eigenvector approach in the context of bargaining in networked markets.
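The eigenvector centrality described here can be computed by power iteration on the adjacency matrix. A minimal sketch on a hypothetical four-player network (a triangle of players 0, 1, 2, with player 3 linked only to player 0), not taken from the thesis:

```python
def eigenvector_centrality(adj, iters=200):
    """Power iteration on the adjacency matrix: repeatedly multiply and
    renormalize, converging to the eigenvector of the largest eigenvalue."""
    n = len(adj)
    x = [1.0] * n
    for _ in range(iters):
        y = [sum(adj[i][j] * x[j] for j in range(n)) for i in range(n)]
        norm = max(y) or 1.0
        x = [v / norm for v in y]
    return x

# Triangle 0-1-2 plus a pendant player 3 attached to 0.
adj = [[0, 1, 1, 1],
       [1, 0, 1, 0],
       [1, 1, 0, 0],
       [1, 0, 0, 0]]
c = eigenvector_centrality(adj)
```

Player 0 scores highest (most connections, all to well-connected players), players 1 and 2 tie by symmetry, and the pendant player 3 scores lowest, illustrating how "global" position, not just degree, drives the measure.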
As an application, we analyze the special case of seller-buyer networks, showing how our framework may be useful for analyzing price dispersion as a function of sellers and buyers' network positions.
Finally, in Chapter 3 we study the problem of price competition and free entry in networked markets subject to congestion effects. In many environments, such as communication networks in which network flows are allocated, or transportation networks in which traffic is directed through the underlying road architecture, congestion plays an important role. In particular, we consider a network with multiple origins and a common destination node, where each link is owned by a firm that sets prices in order to maximize profits, whereas users want to minimize the total cost they face, which is given by the congestion cost plus the prices set by firms. In this environment, we introduce the notion of Markovian traffic equilibrium to establish the existence and uniqueness of a pure strategy price equilibrium, without assuming that the demand functions are concave or imposing particular functional forms for the latency functions. We derive explicit conditions to guarantee existence and uniqueness of equilibria. Given this existence and uniqueness result, we apply our framework to study entry decisions and welfare, and establish that in congested markets with free entry, the number of firms exceeds the social optimum.
Abstract:
In the quest for a descriptive theory of decision-making, the rational actor model in economics imposes rather unrealistic expectations and abilities on human decision makers. The further we move from idealized scenarios, such as perfectly competitive markets, and ambitiously extend the reach of the theory to describe everyday decision making situations, the less sense these assumptions make. Behavioural economics has instead proposed models based on assumptions that are more psychologically realistic, with the aim of gaining more precision and descriptive power. Increased psychological realism, however, comes at the cost of a greater number of parameters and model complexity. Now there are a plethora of models, based on different assumptions, applicable in differing contextual settings, and selecting the right model to use tends to be an ad-hoc process. In this thesis, we develop optimal experimental design methods and evaluate different behavioral theories against evidence from lab and field experiments.
We look at evidence from controlled laboratory experiments. Subjects are presented with choices between monetary gambles or lotteries. Different decision-making theories evaluate the choices differently and would make distinct predictions about the subjects' choices. Theories whose predictions are inconsistent with the actual choices can be systematically eliminated. Behavioural theories can have multiple parameters, requiring complex experimental designs with a very large number of possible choice tests. This imposes computational and economic constraints on using classical experimental design methods. We develop a methodology of adaptive tests: Bayesian Rapid Optimal Adaptive Designs (BROAD), which sequentially chooses the "most informative" test at each stage and, based on the response, updates its posterior beliefs over the theories, which in turn informs the next most informative test to run. BROAD utilizes the Equivalence Class Edge Cutting (EC2) criterion to select tests. We prove that the EC2 criterion is adaptively submodular, which allows us to prove theoretical guarantees against the Bayes-optimal testing sequence even in the presence of noisy responses. In simulated ground-truth experiments, we find that the EC2 criterion recovers the true hypotheses with significantly fewer tests than more widely used criteria such as Information Gain and Generalized Binary Search. We show, theoretically as well as experimentally, that surprisingly these popular criteria can perform poorly in the presence of noise, or subject errors. Furthermore, we use the adaptive submodularity of EC2 to implement an accelerated greedy version of BROAD, which leads to orders of magnitude speedup over other methods.
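The EC2 criterion itself is more involved, but the inner loop it wraps, updating posterior beliefs over candidate theories after each observed choice, is plain Bayes' rule. A minimal sketch with a hypothetical three-theory comparison and made-up likelihoods (which would fold in a response-noise rate), not the BROAD implementation:

```python
def update_posterior(prior, likelihoods):
    """Bayes' rule: posterior over theories after observing one choice,
    where likelihoods[k] = P(observed choice | theory k)."""
    unnorm = [p * l for p, l in zip(prior, likelihoods)]
    total = sum(unnorm)
    return [u / total for u in unnorm]

# Flat prior over three candidate theories. The observed choice is likely
# under theory 0 and unlikely under theory 2 (assumed noisy likelihoods).
prior = [1 / 3, 1 / 3, 1 / 3]
likelihoods = [0.9, 0.5, 0.1]
posterior = update_posterior(prior, likelihoods)
```

An adaptive design then scores each remaining candidate test by how much it is expected to separate the surviving theories under this posterior, and runs the best one next.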
We use BROAD to perform two experiments. First, we compare the main classes of theories for decision-making under risk, namely: expected value, prospect theory, constant relative risk aversion (CRRA) and moments models. Subjects are given an initial endowment, and sequentially presented choices between two lotteries, with the possibility of losses. The lotteries are selected using BROAD, and 57 subjects from Caltech and UCLA are incentivized by randomly realizing one of the lotteries chosen. Aggregate posterior probabilities over the theories show limited evidence in favour of CRRA and moments models. Classifying the subjects into types showed that most subjects are described by prospect theory, followed by expected value. Adaptive experimental design raises the possibility that subjects could engage in strategic manipulation, i.e. subjects could mask their true preferences and choose differently in order to obtain more favourable tests in later rounds, thereby increasing their payoffs. We pay close attention to this problem; strategic manipulation is ruled out since it is infeasible in practice, and also since we do not find any signatures of it in our data.
In the second experiment, we compare the main theories of time preference: exponential discounting, hyperbolic discounting, "present bias" models: quasi-hyperbolic (α, β) discounting and fixed cost discounting, and generalized-hyperbolic discounting. 40 subjects from UCLA were given choices between 2 options: a smaller but more immediate payoff versus a larger but later payoff. We found very limited evidence for present bias models and hyperbolic discounting, and most subjects were classified as generalized hyperbolic discounting types, followed by exponential discounting.
In these models the passage of time is linear. We instead consider a psychological model where the perception of time is subjective. We prove that when the biological (subjective) time is positively dependent, it gives rise to hyperbolic discounting and temporal choice inconsistency.
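The discount functions compared in these experiments differ in how steeply value falls with delay. A minimal sketch (parameter values are illustrative, not the estimates from the study) showing the preference reversal that makes hyperbolic discounting time-inconsistent:

```python
def exponential(t, delta=0.9):
    """Exponential discounting: D(t) = delta ** t (time-consistent)."""
    return delta ** t

def hyperbolic(t, k=0.5):
    """Hyperbolic discounting: D(t) = 1 / (1 + k * t)."""
    return 1.0 / (1.0 + k * t)

def quasi_hyperbolic(t, beta=0.7, delta=0.9):
    """Quasi-hyperbolic "present bias": D(0) = 1, D(t) = beta * delta ** t."""
    return 1.0 if t == 0 else beta * delta ** t

# Hypothetical payoffs: 100 at t=1 versus 120 at t=3, evaluated now,
# and again with both options pushed 10 periods into the future.
now = (100 * hyperbolic(1), 120 * hyperbolic(3))
later = (100 * hyperbolic(11), 120 * hyperbolic(13))
```

Under hyperbolic discounting the smaller-sooner payoff wins when the delays are short, but the larger-later payoff wins once both are pushed into the future; exponential discounting never produces such a reversal.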
We also test the predictions of behavioral theories in the "wild". We pay attention to prospect theory, which emerged as the dominant theory in our lab experiments on risky choice. Loss aversion and reference dependence predict that consumers will behave in ways distinct from what the standard rational model predicts. Specifically, loss aversion predicts that when an item is being offered at a discount, the demand for it will be greater than that explained by its price elasticity. Even more importantly, when the item is no longer discounted, demand for its close substitute would increase excessively. We tested this prediction using a discrete choice model with a loss-averse utility function on data from a large eCommerce retailer. Not only did we identify loss aversion, but we also found that the effect decreased with consumers' experience. We outline the policy implications that consumer loss aversion entails, and strategies for competitive pricing.
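A loss-averse utility of the kind referred to here is typically the Tversky-Kahneman prospect-theory value function, in which outcomes are valued relative to a reference point and losses are scaled up by a coefficient greater than one. A sketch using the commonly cited 1992 parameter estimates (alpha ≈ 0.88, lambda ≈ 2.25), not the coefficients estimated from the retailer data:

```python
def pt_value(x, alpha=0.88, lam=2.25):
    """Prospect-theory value function: v(x) = x**alpha for gains and
    v(x) = -lam * (-x)**alpha for losses, with x measured relative to
    a reference point (e.g. the remembered, undiscounted price)."""
    return x ** alpha if x >= 0 else -lam * (-x) ** alpha

# Loss aversion: a loss looms larger than an equal-sized gain.
gain, loss = pt_value(50), pt_value(-50)
```

In the demand story above, the discounted price shifts the reference point, so the return to the regular price is experienced as a loss, depressing demand for the item by more than its price elasticity alone would explain.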
In future work, BROAD can be widely applicable for testing different behavioural models, e.g. in social preference and game theory, and in different contextual settings. Additional measurements beyond choice data, including biological measurements such as skin conductance, can be used to more rapidly eliminate hypothesis and speed up model comparison. Discrete choice models also provide a framework for testing behavioural models with field data, and encourage combined lab-field experiments.
Abstract:
A person living in an industrialized society has almost no choice but to receive information daily with negative implications for himself or others. His attention will often be drawn to the ups and downs of economic indicators or the alleged misdeeds of leaders and organizations. Reacting to new information is central to economics, but economics typically ignores the affective aspect of the response, for example, of stress or anger. These essays present the results of considering how the affective aspect of the response can influence economic outcomes.
The first chapter presents an experiment in which individuals were presented with information about various non-profit organizations and allowed to take actions that rewarded or punished those organizations. When social interaction was introduced into this environment an asymmetry between rewarding and punishing appeared. The net effects of punishment became greater and more variable, whereas the effects of reward were unchanged. The individuals were more strongly influenced by negative social information and used that information to target unpopular organizations. These behaviors contributed to an increase in inequality among the outcomes of the organizations.
The second and third chapters present empirical studies of reactions to negative information about local economic conditions. Economic factors are among the most prevalent stressors, and stress is known to have numerous negative effects on health. These chapters document localized, transient effects of the announcement of information about large-scale job losses. News of mass layoffs and shutdowns of large military bases is found to decrease birth weights and gestational ages among babies born in the affected regions. The effect magnitudes are close to those estimated in similar studies of disasters.
Abstract:
Time, risk, and attention are all integral to economic decision making. The aim of this work is to understand those key components of decision making using a variety of approaches: providing axiomatic characterizations to investigate time discounting, generating measures of visual attention to infer consumers' intentions, and examining data from unique field settings.
Chapter 2, co-authored with Federico Echenique and Kota Saito, presents the first revealed-preference characterizations of the exponentially discounted utility model and its generalizations. My characterizations provide non-parametric revealed-preference tests. I apply the tests to data from a recent experiment, and find that the axiomatization delivers new insights on a dataset that had been analyzed by traditional parametric methods.
Chapter 3, co-authored with Min Jeong Kang and Colin Camerer, investigates whether "pre-choice" measures of visual attention improve prediction of consumers' purchase intentions. We measure participants' visual attention using eyetracking or mousetracking while they make hypothetical as well as real purchase decisions. I find that different patterns of visual attention are associated with hypothetical and real decisions. I then demonstrate that including information on visual attention improves prediction of purchase decisions when attention is measured with mousetracking.
Chapter 4 investigates individuals' attitudes towards risk in a high-stakes environment using data from a TV game show, Jeopardy!. I first quantify players' subjective beliefs about answering questions correctly. Using those beliefs in estimation, I find that the representative player is risk averse. I then find that trailing players tend to wager more than "folk" strategies that are known among the community of contestants and fans, and this tendency is related to their confidence. I also find gender differences: male players take more risk than female players, and even more so when they are competing against two other male players.
Chapter 5, co-authored with Colin Camerer, investigates the dynamics of the favorite-longshot bias (FLB) using data on horse race betting from an online exchange that allows bettors to trade "in-play." I find that probabilistic forecasts implied by market prices before the start of the races are well-calibrated, but the degree of FLB increases significantly as the races approach their end.
Abstract:
We consider some problems of the calculus of variations on time scales. First, our attention is directed to two inverse extremal problems on arbitrary time scales. Using the Euler-Lagrange equation and the strengthened Legendre condition, we derive a general form for a variational functional that attains a local minimum at a given point of the vector space. Furthermore, we prove a necessary condition for a dynamic integro-differential equation to be an Euler-Lagrange equation. New and interesting results for the discrete and quantum calculus are obtained as particular cases. Afterwards, we prove Euler-Lagrange type equations and transversality conditions for generalized infinite horizon problems. Next we investigate the composition of a certain scalar function with delta and nabla integrals of a vector-valued field. Euler-Lagrange equations in integral form, transversality conditions, and necessary optimality conditions for isoperimetric problems, on an arbitrary time scale, are proved. Finally, two main issues concerning the application of time scales in economics are presented, with interesting results. In the first, we consider a firm that wants to program its production and investment policies to reach a given production rate and to maximize its future market competitiveness. The model which describes the firm's activities is studied in two different ways: using classical discretizations, and applying discrete versions of our results on time scales. We then compare the cost functional values obtained from the two approaches. The second problem is more complex and relates to the rate of inflation, p, and the rate of unemployment, u, which inflict a social loss. Using known relations between p, u, and the expected rate of inflation π, we rewrite the social loss function as a function of π. We present this model in the time scale framework and find an optimal path π that minimizes the total social loss over a given time interval.
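For reference, the classical Euler-Lagrange equation that these delta/nabla time-scale results generalize is, in the continuous case:

```latex
\frac{\partial L}{\partial x}\bigl(t, x(t), \dot{x}(t)\bigr)
  \;-\; \frac{d}{dt}\,\frac{\partial L}{\partial \dot{x}}\bigl(t, x(t), \dot{x}(t)\bigr) \;=\; 0
```

On an arbitrary time scale, the ordinary derivative d/dt is replaced by the delta (or nabla) derivative; the time-scale equation reduces to the form above when the time scale is the reals, and to the discrete Euler-Lagrange equation when it is the integers.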