835 results for D12 - Consumer Economics: Empirical Analysis


Relevance:

100.00%

Publisher:

Abstract:

Despite the success of his party systems theory, Giovanni Sartori’s predominant party system is a type that is consistently avoided by party systems scholars, yet the reasons for this have been unclear. This article exposes the flaws in Sartori’s predominant party system, but we also argue that it remains a useful concept and, consequently, that the literature’s rejection of predominance and retreat to the cruder dominance notion is unnecessary. Instead, we amend predominance to ensure its coherence within Sartori’s typology and consistency with his party systems theory. We show that our amendments improve the value of predominance as a category for empirical analysis of the effects of party systems.

Relevance:

100.00%

Publisher:

Abstract:

The European Nature Information System (EUNIS) has been implemented for the establishment of a marine European habitats inventory. Its hierarchical classification relies on environmental variables which primarily constrain biological communities (e.g. substrate type, sea energy level, depth and light penetration). The EUNIS habitat classification scheme relies on thresholds (e.g. fraction of light and energy) which are based on expert judgment or on the empirical analysis of the above environmental data. The present paper proposes to establish and validate appropriate thresholds for energy classes (high, moderate and low) and for subtidal biological zonation (infralittoral and circalittoral) suitable for EUNIS habitat classification of the Western Iberian coast. Kinetic wave-induced energy and the fraction of photosynthetically available light reaching the sea bottom were assigned, respectively, to the presence of kelp (Saccorhiza polyschides, Laminaria hyperborea and Laminaria ochroleuca) and of seaweed species in general. Both datasets were statistically described, ordered from largest to smallest, and percentile analyses were performed independently. The threshold between the infralittoral and the circalittoral was based on the first quartile, while the 'moderate energy' class was established between the 12.5th and 87.5th percentiles. To avoid dependence on the sampling locations and to assess the confidence interval, a bootstrap technique was applied. According to this analysis, more than 75% of seaweeds are present at locations where more than 3.65% of the surface light reaches the sea bottom. The range of energy levels estimated using S. polyschides data indicates that, on the Iberian west coast, 'moderate energy' areas lie between 0.00303 and 0.04385 N/m2 of wave-induced energy. The lack of agreement between studies in different regions of Europe suggests the need for more standardization in the future. Nevertheless, the thresholds obtained in the present study will be very useful in implementing and establishing the Iberian EUNIS habitats inventory.
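
A minimal sketch of the percentile-and-bootstrap thresholding described above, assuming hypothetical arrays of light fraction and wave-induced energy; the variable names and simulated values are illustrative only, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical observations: fraction of surface light reaching the bottom at
# seaweed sites, and wave-induced energy (N/m^2) at kelp sites.
light_fraction = rng.lognormal(mean=-2.5, sigma=1.0, size=500)
wave_energy = rng.lognormal(mean=-4.0, sigma=1.2, size=500)

# Infralittoral/circalittoral boundary: first quartile of the light data,
# so that ~75% of seaweed records lie above the threshold.
light_threshold = np.percentile(light_fraction, 25)

# 'Moderate energy' class: between the 12.5th and 87.5th percentiles.
energy_low, energy_high = np.percentile(wave_energy, [12.5, 87.5])

# Bootstrap confidence interval for the light threshold, to reduce
# dependence on the particular sampling locations.
boot = [np.percentile(rng.choice(light_fraction, size=light_fraction.size,
                                 replace=True), 25)
        for _ in range(2000)]
ci_low, ci_high = np.percentile(boot, [2.5, 97.5])

print(f"light threshold: {light_threshold:.4f} (95% CI {ci_low:.4f}-{ci_high:.4f})")
print(f"moderate energy class: {energy_low:.5f}-{energy_high:.5f} N/m^2")
```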

Relevance:

100.00%

Publisher:

Abstract:

This thesis examines firms' real decisions using a large panel of unquoted euro area firms over the period 2003-2011. To this end, the thesis is composed of five chapters, of which three are the main empirical chapters. They assess different dimensions of firm behaviour across different specifications. Each of these chapters provides a detailed discussion of its contribution, the theoretical and empirical background, and the panel data techniques implemented. Chapter 1 presents the introduction and outline of the thesis. Chapter 2 presents an empirical analysis of the link between financial pressure and firms' employment levels. In this set-up, the strength of financial pressure during the financial crisis is explored. It is also tested whether this effect differs between financially constrained and unconstrained firms in the periphery and non-periphery regions. The results of this chapter indicate that financial pressure exerts a negative impact on firms' employment decisions and that this effect is stronger during the crisis for financially constrained firms in the periphery. Chapter 3 analyses the cash policies of private and public firms. Controlling for firm size and other standard variables in the cash holdings literature, the empirical findings suggest that private firms hold higher cash reserves than their public counterparts, indicating a greater precautionary demand for cash by the former. The relative difference between these two types of firms decreases (increases) the higher (lower) the level of financial pressure. The findings are robust to various model specifications and over different sub-samples. Overall, this chapter shows the relevance of firm size. Taken together, the findings of Chapter 3 are in line with the early literature on cash holdings and contradict recent studies, which find that the precautionary motive to hold cash is less pronounced for private firms than for public ones. Chapter 4 investigates the relation between firms' stocks of inventories and trade credit (i.e. extended and taken), while controlling for firm size, the characteristics of the goods transacted, the recent financial crisis and the development of the banking system. The main findings provide evidence of a trade-off between trade credit extended and firms' stocks of inventories. In other words, firms prefer to extend credit in the form of stocks to their financially constrained customers, to avoid holding costly inventories and to increase their sales. The provision of trade credit by firms also depends on the characteristics of the goods transacted, and this impact is stronger during the crisis. Larger and more liquid banking systems reduce the trade-off between the volume of inventories and the amount sold on credit. Trade credit taken is not affected by firms' stocks of inventories. Chapter 5 presents the conclusions of the thesis, setting out the main contributions, implications and avenues for future research of each empirical chapter.
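
A hedged sketch of the kind of firm-level panel regression Chapter 2 describes, under assumed variable names (employment growth, an interest-burden proxy for financial pressure, a crisis dummy) with firm and year fixed effects; the data are simulated and the specification is illustrative, not the thesis's own.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
firms, years = 50, list(range(2003, 2012))
df = pd.DataFrame([(f, y) for f in range(firms) for y in years],
                  columns=["firm", "year"])
df["fin_pressure"] = rng.uniform(0, 0.4, len(df))      # interest payments / earnings
df["crisis"] = (df["year"] >= 2008).astype(int)
df["emp_growth"] = (-0.15 * df["fin_pressure"]
                    - 0.10 * df["fin_pressure"] * df["crisis"]
                    + rng.normal(0, 0.05, len(df)))

# Firm and year fixed effects via dummies; standard errors clustered by firm.
# The crisis main effect is absorbed by the year dummies, so only the
# interaction with financial pressure enters.
model = smf.ols("emp_growth ~ fin_pressure + fin_pressure:crisis + C(firm) + C(year)",
                data=df)
result = model.fit(cov_type="cluster", cov_kwds={"groups": df["firm"]})
print(result.params[["fin_pressure", "fin_pressure:crisis"]])
```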

Relevance:

100.00%

Publisher:

Abstract:

This research aims to contribute to the literature on influence tactics in leadership. It arises as an application, to two specific cases, of the research project "Los mecanismos de influencia en la relación de liderazgo" (the mechanisms of influence in the leadership relationship), developed by Professor Juan Javier Saavedra Mayorga and registered in the Organizational Studies research line of the Grupo de Investigación en Dirección y Gerencia. The main objective of the research is to identify the influence tactics that two organizational leaders use in their day-to-day dealings with their collaborators, as well as the latter's reactions to those tactics. The project builds on a theoretical review of three elements: leadership; influence and power; and collaborators' reactions to the influence tactics used by the leader. The methodological strategy employed is the case study. Fieldwork was carried out in two organizations: Microscopios y Equipos Especiales S.A.S. and Tecniespectro S.A.S. The data collection technique is the semi-structured interview, and the method of analysis is thematic content analysis.

Relevance:

50.00%

Publisher:

Abstract:

In this article, we investigate the pay-performance relationship of soccer players using individual data from eight seasons of the German soccer league Bundesliga. We find a nonlinear pay-performance relationship, indicating that salary does indeed affect individual performance. The results further show that player performance is affected not only by absolute income level but also by relative income position. An additional analysis of the performance impact of team effects provides evidence of a direct impact of team-mate attributes on individual player performance.
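
A hedged sketch of one way to probe a nonlinear pay-performance relationship with a relative-income term, on simulated player-season data; the variable names, functional form and parameters are illustrative assumptions, not the authors' specification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 400
df = pd.DataFrame({
    "log_salary": rng.normal(13.5, 0.8, n),
    "team_mean_log_salary": rng.normal(13.5, 0.4, n),
    "age": rng.integers(18, 36, n),
})
# Relative income position: player's (log) salary relative to the team average.
df["rel_position"] = df["log_salary"] - df["team_mean_log_salary"]
df["performance"] = (0.5 * df["log_salary"] - 0.02 * df["log_salary"] ** 2
                     + 0.3 * df["rel_position"] + rng.normal(0, 0.5, n))

# A quadratic salary term lets the pay-performance relationship be nonlinear;
# the relative-position term captures within-team comparison effects.
model = smf.ols("performance ~ log_salary + I(log_salary**2) + rel_position + age",
                data=df).fit(cov_type="HC1")
print(model.summary().tables[1])
```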

Relevance:

50.00%

Publisher:

Abstract:

A review of the literature on irrigation-induced agricultural development (IIAD) reveals that: (1) the magnitude, sensitivity and distribution of the social welfare effects of IIAD are not fully analysed; (2) the impacts of excessive pesticide use on farmers' health are not adequately explained; (3) no analysis estimates the relationship between farm-level efficiency and the overuse of agro-chemical inputs under imperfect markets; and (4) the method of incorporating groundwater extraction costs is misleading. This PhD thesis investigates these issues using primary data, along with secondary data, from Sri Lanka. The overall findings of the thesis can be summarised as follows. First, the thesis demonstrates that Sri Lanka has gained a positive welfare change as a result of introducing new irrigation technology. The change in consumer surplus is Rs. 48,236 million, while the change in producer surplus is Rs. 14,274 million, between 1970 and 2006. The results also show that the long-run benefits and costs of IIAD depend critically on the magnitude of the expansion of the irrigated area, as well as on the competition faced by traditional farmers (agricultural crowding-out effects). The traditional sector's ability to compete with the modern sector depends on productivity improvements, reducing production costs and future structural changes (spillover effects). Second, the thesis findings on pesticides used in agriculture show that, on average, a farmer incurs a cost of approximately Rs. 590 to 800 per month during a typical cultivation period due to exposure to pesticides. It is shown that the average loss in earnings per farmer for the 'hospitalised' sample is Rs. 475 per month, while it is approximately Rs. 345 per month for the 'general' farmers group during a typical cultivation season. However, the average willingness to pay (WTP) to avoid exposure to pesticides is approximately Rs. 950 and Rs. 620 for the 'hospitalised' and 'general' farmers' samples respectively. The estimated percentage contributions to WTP from health costs, lost earnings, mitigating expenditure and disutility are 29, 50, 5 and 16 per cent respectively for 'hospitalised' farmers, and 32, 55, 8 and 5 per cent respectively for 'general' farmers. It is also shown that, given market imperfections for most agricultural inputs, farmers are overusing pesticides with the expectation of higher future returns. This has led to increased inefficiency in farming practices, which is not recognised by the farmers. Third, it is found that various groundwater depletion studies in the economics literature have provided misleading optimal water extraction levels. This is due to a failure to incorporate all production costs in the relevant models. Only by incorporating quality changes alongside quantity depletion is it possible to derive socially optimal levels. Empirical results clearly show that the benefits per hectare per month, considering both the avoided costs of deepening agro-wells by five feet from the existing average and the avoided costs of maintaining the water salinity level at 1.8 mmhos/cm, are approximately Rs. 4,350 for farmers in the Anuradhapura district and Rs. 5,600 for farmers in the Matale district.

Relevance:

50.00%

Publisher:

Abstract:

Developers and policy makers are consistently at odds in the debate over whether impact fees increase house prices. This debate continues despite the extensive body of theoretical and empirical international literature discussing the passing on of impact fees to home buyers and the corresponding increase in housing prices. In attempting to quantify this impact, over a dozen empirical studies have been carried out in the US and Canada since the 1980s. However, the methodologies used vary greatly, as do the results. Despite similar infrastructure funding policies in numerous developed countries, no such empirical work exists outside the US and Canada. The purpose of this research is to analyse the existing econometric models in order to identify, compare and contrast the theoretical bases, methodologies, key assumptions and findings of each. This research will assist in identifying whether further model development is required and/or whether any of these models have external validity and are readily transferable outside the US. The findings conclude that there is very little explicit rationale behind the various model selections and that significant model deficiencies still appear to exist.

Relevance:

50.00%

Publisher:

Abstract:

The method of generalized estimating equations (GEE) is a popular tool for analysing longitudinal (panel) data. Often, the covariates collected are time-dependent in nature, for example, age, relapse status, monthly income. When using GEE to analyse longitudinal data with time-dependent covariates, crucial assumptions about the covariates are necessary for valid inferences to be drawn. When those assumptions do not hold or cannot be verified, Pepe and Anderson (1994, Communications in Statistics, Simulations and Computation 23, 939–951) advocated using an independence working correlation assumption in the GEE model as a robust approach. However, using GEE with the independence correlation assumption may lead to significant efficiency loss (Fitzmaurice, 1995, Biometrics 51, 309–317). In this article, we propose a method that extracts additional information from the estimating equations that are excluded by the independence assumption. The method always includes the estimating equations under the independence assumption and the contribution from the remaining estimating equations is weighted according to the likelihood of each equation being a consistent estimating equation and the information it carries. We apply the method to a longitudinal study of the health of a group of Filipino children.
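
A minimal sketch of the baseline robust approach the article starts from — GEE with an independence working correlation — using statsmodels on simulated child-growth data; it does not implement the authors' proposed weighted extension, and the dataset and variable names are hypothetical placeholders.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n_children, n_visits = 100, 4
df = pd.DataFrame({
    "child": np.repeat(np.arange(n_children), n_visits),
    "visit": np.tile(np.arange(n_visits), n_children),
    # Age is a time-dependent covariate: it changes across visits.
    "age": (np.repeat(rng.uniform(2, 10, n_children), n_visits)
            + np.tile(np.arange(n_visits), n_children) * 0.25),
})
df["weight"] = 8 + 2.0 * df["age"] + rng.normal(0, 1.5, len(df))

# Independence working correlation: each observation contributes only its own
# estimating equation, the choice advocated by Pepe and Anderson when the
# usual assumptions about time-dependent covariates cannot be verified.
model = smf.gee("weight ~ age", groups="child", data=df,
                cov_struct=sm.cov_struct.Independence(),
                family=sm.families.Gaussian())
result = model.fit()
print(result.summary())
```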

Relevance:

50.00%

Publisher:

Abstract:

Airport efficiency is important because it has a direct impact on customer safety and satisfaction and therefore the financial performance and sustainability of airports, airlines, and affiliated service providers. This is especially so in a world characterized by an increasing volume of both domestic and international air travel, price and other forms of competition between rival airports, airport hubs and airlines, and rapid and sometimes unexpected changes in airline routes and carriers. It also reflects expansion in the number of airports handling regional, national, and international traffic and the growth of complementary airport facilities including industrial, commercial, and retail premises. This has fostered a steadily increasing volume of research aimed at modeling and providing best-practice measures and estimates of airport efficiency using mathematical and econometric frontiers. The purpose of this chapter is to review these various methods as they apply to airports throughout the world. Apart from discussing the strengths and weaknesses of the different approaches and their key findings, the paper also examines the steps faced by researchers as they move through the modeling process in defining airport inputs and outputs and the purported efficiency drivers. Accordingly, the chapter provides guidance to those conducting empirical research on airport efficiency and serves as an aid for aviation regulators and airport operators among others interpreting airport efficiency research outcomes.
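
As one concrete illustration of the mathematical-programming frontier methods the chapter reviews, below is a hedged sketch of an input-oriented CCR DEA model solved as a linear program; the airport inputs (runways, staff) and outputs (passengers, freight) are made-up placeholders, not figures from the chapter.

```python
import numpy as np
from scipy.optimize import linprog

# rows = airports (DMUs); columns = inputs / outputs
X = np.array([[2.0, 1200], [3.0, 2500], [1.0, 800], [4.0, 4000]])  # inputs
Y = np.array([[5.0, 0.3], [9.0, 0.8], [2.5, 0.1], [10.0, 1.2]])    # outputs

def ccr_efficiency(k):
    """Input-oriented CCR efficiency of DMU k: minimize theta such that a
    peer combination uses at most theta * inputs_k and produces at least
    outputs_k. Decision variables: [theta, lambda_1, ..., lambda_n]."""
    n = X.shape[0]
    c = np.r_[1.0, np.zeros(n)]                       # minimize theta
    # inputs: sum_j lambda_j * x_ij - theta * x_ik <= 0
    A_in = np.c_[-X[k].reshape(-1, 1), X.T]
    b_in = np.zeros(X.shape[1])
    # outputs: -sum_j lambda_j * y_rj <= -y_rk
    A_out = np.c_[np.zeros((Y.shape[1], 1)), -Y.T]
    b_out = -Y[k]
    res = linprog(c, A_ub=np.vstack([A_in, A_out]), b_ub=np.r_[b_in, b_out],
                  bounds=[(0, None)] * (n + 1), method="highs")
    return res.fun

for k in range(X.shape[0]):
    print(f"airport {k}: efficiency = {ccr_efficiency(k):.3f}")
```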

Relevance:

50.00%

Publisher:

Abstract:

This study analyzes the management of air pollutants in Chinese industrial sectors from 1998 to 2009. Decomposition analysis applying the logarithmic mean Divisia index (LMDI) is used to analyze changes in emissions of air pollutants with a focus on five factors: coal pollution intensity (CPI), end-of-pipe treatment (EOP), the energy mix (EM), productive efficiency change (EFF), and production scale change (PSC). Three pollutants are the main focus of this study: sulfur dioxide (SO2), dust, and soot. The novelty of this paper lies in examining the impact of the elimination policy on air pollution management in China by type of industry, using the scale merit effect for pollution abatement technology change. First, SO2 emissions from Chinese industrial sectors increased because of the increase in the production scale. However, EOP equipment and improvements in energy efficiency prevented SO2 emissions from rising in proportion to production. Second, soot emissions were successfully reduced and controlled in all industries except the steel industry between 1998 and 2009, even though the production scale of these industries expanded. This reduction was achieved through improvements in EOP technology and in energy efficiency. Dust emissions decreased by nearly 65% between 1998 and 2009 in the Chinese industrial sectors. This reduction was achieved by implementing EOP technology and pollution prevention activities during the production processes, especially in the cement industry. Finally, pollution prevention in the cement industry is shown to result from production technology development rather than scale merit.
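
A small sketch of an additive LMDI decomposition, assuming a stylized two-period, three-factor identity (pollution intensity of coal, coal intensity of output, production scale); the numbers are invented for illustration, not taken from the study, and the factor set is simpler than the five factors above.

```python
import numpy as np

def log_mean(a, b):
    """Logarithmic mean L(a, b) = (a - b) / (ln a - ln b), with L(a, a) = a."""
    return a if np.isclose(a, b) else (a - b) / (np.log(a) - np.log(b))

# Hypothetical SO2 emission identity components in period 0 and period T.
factors_0 = {"pollution_per_coal": 0.020, "coal_per_output": 1.50, "output": 100.0}
factors_T = {"pollution_per_coal": 0.015, "coal_per_output": 1.20, "output": 180.0}

E0 = np.prod(list(factors_0.values()))
ET = np.prod(list(factors_T.values()))
L = log_mean(ET, E0)

# Additive LMDI: the change attributable to factor x_k is L(E_T, E_0) * ln(x_k^T / x_k^0);
# the factor contributions sum exactly to the total change E_T - E_0.
contributions = {k: L * np.log(factors_T[k] / factors_0[k]) for k in factors_0}

print(f"total change: {ET - E0:.3f}")
for k, v in contributions.items():
    print(f"  {k}: {v:+.3f}")
print(f"  sum of contributions: {sum(contributions.values()):+.3f}")
```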

Relevance:

50.00%

Publisher:

Abstract:

Seventy percent of the world's catch of fish and fishery products is consumed as food. Fish and shellfish products represent 15.6 percent of animal protein supply and 5.6 percent of total protein supply on a worldwide basis. Developing countries account for almost 50 percent of global fish exports. Seafood-borne disease or illness outbreaks affect consumers both physically and financially, and create regulatory problems for both importing and exporting countries. Seafood safety as a commodity cannot be purchased in the marketplace and government intervenes to regulate the safety and quality of seafood. Theoretical issues and data limitations create problems in estimating what consumers will pay for seafood safety and quality. The costs and benefits of seafood safety must be considered at all levels, including the fishers, fish farmers, input suppliers to fishing, processing and trade, seafood processors, seafood distributors, consumers and government. Hazard Analysis Critical Control Point (HACCP) programmes are being implemented on a worldwide basis for seafood. Studies have been completed to estimate the cost of HACCP in various shrimp, fish and shellfish plants in the United States, and are underway for some seafood plants in the United Kingdom, Canada and Africa. Major developments within the last two decades have created a set of complex trading situations for seafood. Current events indicate that seafood safety and quality can be used as non-tariff barriers to free trade. Research priorities necessary to estimate the economic value and impacts of achieving safer seafood are outlined at the consumer, seafood production and processing, trade and government levels. An extensive list of references on the economics of seafood safety and quality is presented. (PDF contains 56 pages; captured from html.)

Relevance:

50.00%

Publisher:

Abstract:

In the quest for a descriptive theory of decision-making, the rational actor model in economics imposes rather unrealistic expectations and abilities on human decision makers. The further we move from idealized scenarios, such as perfectly competitive markets, and the more ambitiously we extend the reach of the theory to describe everyday decision-making situations, the less sense these assumptions make. Behavioural economics has instead proposed models based on assumptions that are more psychologically realistic, with the aim of gaining more precision and descriptive power. Increased psychological realism, however, comes at the cost of a greater number of parameters and model complexity. There is now a plethora of models, based on different assumptions and applicable in differing contextual settings, and selecting the right model to use tends to be an ad-hoc process. In this thesis, we develop optimal experimental design methods and evaluate different behavioural theories against evidence from lab and field experiments.

We look at evidence from controlled laboratory experiments. Subjects are presented with choices between monetary gambles or lotteries. Different decision-making theories evaluate the choices differently and would make distinct predictions about the subjects' choices. Theories whose predictions are inconsistent with the actual choices can be systematically eliminated. Behavioural theories can have multiple parameters, requiring complex experimental designs with a very large number of possible choice tests. This imposes computational and economic constraints on using classical experimental design methods. We develop a methodology of adaptive tests, Bayesian Rapid Optimal Adaptive Designs (BROAD), which sequentially chooses the "most informative" test at each stage and, based on the response, updates its posterior beliefs over the theories, which in turn informs the next most informative test to run. BROAD uses the Equivalence Class Edge Cutting (EC2) criterion to select tests. We prove that the EC2 criterion is adaptively submodular, which allows us to prove theoretical guarantees relative to the Bayes-optimal testing sequence even in the presence of noisy responses. In simulated ground-truth experiments, we find that the EC2 criterion recovers the true hypotheses with significantly fewer tests than more widely used criteria such as Information Gain and Generalized Binary Search. We show, theoretically as well as experimentally, that, surprisingly, these popular criteria can perform poorly in the presence of noise or subject errors. Furthermore, we use the adaptive submodularity of EC2 to implement an accelerated greedy version of BROAD, which leads to orders-of-magnitude speedups over other methods.
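
A simplified sketch of the adaptive-design loop described above: maintain a posterior over candidate theories, greedily pick the next binary choice test, observe a noisy response and update. For brevity the selection rule here is expected information gain, used only as a stand-in for BROAD's EC2 objective (which the thesis shows behaves better under noise); the theories, tests and noise model are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(3)

n_tests, n_theories = 50, 4
# p[t, k] = probability that theory k predicts "choose lottery A" on test t
# (a real application would derive this from each theory's utility model).
p = rng.uniform(0.05, 0.95, size=(n_tests, n_theories))
noise = 0.1            # probability the subject mis-reports their preference
prior = np.full(n_theories, 1.0 / n_theories)
true_theory = 2        # the simulated subject's actual decision rule

def entropy(q):
    q = q[q > 0]
    return -np.sum(q * np.log(q))

def likelihood(t, choice_a):
    """P(observed choice | theory), mixing each prediction with response noise."""
    pa = p[t] * (1 - noise) + (1 - p[t]) * noise
    return pa if choice_a else 1 - pa

posterior, asked = prior.copy(), set()
for step in range(10):
    # Greedy step: pick the unasked test with the largest expected entropy drop.
    best_t, best_gain = None, -np.inf
    for t in set(range(n_tests)) - asked:
        pa = float(posterior @ (p[t] * (1 - noise) + (1 - p[t]) * noise))
        post_a = posterior * likelihood(t, True)
        post_a /= post_a.sum()
        post_b = posterior * likelihood(t, False)
        post_b /= post_b.sum()
        gain = entropy(posterior) - (pa * entropy(post_a) + (1 - pa) * entropy(post_b))
        if gain > best_gain:
            best_t, best_gain = t, gain
    # Simulate the subject's (noisy) response and update the posterior.
    prob_a = p[best_t, true_theory] * (1 - noise) + (1 - p[best_t, true_theory]) * noise
    choice_a = bool(rng.random() < prob_a)
    posterior = posterior * likelihood(best_t, choice_a)
    posterior /= posterior.sum()
    asked.add(best_t)

print("posterior over theories:", np.round(posterior, 3))
```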

We use BROAD to perform two experiments. First, we compare the main classes of theories for decision-making under risk, namely expected value, prospect theory, constant relative risk aversion (CRRA) and moments models. Subjects are given an initial endowment and sequentially presented with choices between two lotteries, with the possibility of losses. The lotteries are selected using BROAD, and 57 subjects from Caltech and UCLA are incentivized by randomly realizing one of the lotteries chosen. Aggregate posterior probabilities over the theories show limited evidence in favour of CRRA and moments models. Classifying the subjects into types showed that most subjects are described by prospect theory, followed by expected value. Adaptive experimental design raises the possibility that subjects could engage in strategic manipulation, i.e. subjects could mask their true preferences and choose differently in order to obtain more favourable tests in later rounds, thereby increasing their payoffs. We pay close attention to this problem; strategic manipulation is ruled out since it is infeasible in practice, and also since we do not find any signatures of it in our data.
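
A toy sketch contrasting how two of the theory classes above value the same two-outcome lottery; the functional forms and parameter values are standard textbook choices, not the estimates from the experiment.

```python
import numpy as np

outcomes = np.array([40.0, -20.0])   # gamble: win 40 or lose 20
probs = np.array([0.5, 0.5])

def expected_value(x, p):
    return float(p @ x)

def crra_utility(c, rho=0.7):
    """CRRA utility over final wealth c (rho = relative risk aversion)."""
    return np.log(c) if np.isclose(rho, 1.0) else (c ** (1 - rho) - 1) / (1 - rho)

def prospect_value(x, p, alpha=0.88, lam=2.25, gamma=0.61):
    """Prospect-theory value: power value function with loss aversion lam
    and a one-parameter probability weighting function."""
    mag = np.abs(x) ** alpha
    v = np.where(x >= 0, mag, -lam * mag)
    w = p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)
    return float(w @ v)

endowment = 25.0
print("expected value:", expected_value(outcomes, probs))
print("CRRA expected utility (over endowment + outcome):",
      float(probs @ crra_utility(endowment + outcomes)))
print("prospect-theory value (gains/losses vs. reference):",
      prospect_value(outcomes, probs))
```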

In the second experiment, we compare the main theories of time preference: exponential discounting, hyperbolic discounting, "present bias" models (quasi-hyperbolic (α, β) discounting and fixed-cost discounting), and generalized-hyperbolic discounting. Forty subjects from UCLA were given choices between two options: a smaller but more immediate payoff versus a larger but later payoff. We found very limited evidence for present-bias models and hyperbolic discounting; most subjects were classified as generalized-hyperbolic discounting types, followed by exponential discounting.
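
A compact sketch of the discount functions being compared, with illustrative parameter values and the common (β, δ) naming for the quasi-hyperbolic case; D(t) is the weight placed on a payoff delayed by t periods.

```python
import numpy as np

def exponential(t, delta=0.9):
    return delta ** t

def hyperbolic(t, k=0.3):
    return 1.0 / (1.0 + k * t)

def quasi_hyperbolic(t, beta=0.7, delta=0.95):
    """'Present bias': full weight today, then a one-off penalty beta on all
    future periods combined with exponential discounting."""
    return np.where(t == 0, 1.0, beta * delta ** t)

def generalized_hyperbolic(t, alpha=1.0, beta_g=2.0):
    return (1.0 + alpha * t) ** (-beta_g / alpha)

delays = np.array([0.0, 1.0, 5.0, 20.0])
for name, fn in [("exponential", exponential), ("hyperbolic", hyperbolic),
                 ("quasi-hyperbolic", quasi_hyperbolic),
                 ("generalized hyperbolic", generalized_hyperbolic)]:
    print(f"{name:24s}", np.round(fn(delays), 3))
```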

In these models the passage of time is linear. We instead consider a psychological model where the perception of time is subjective. We prove that when the biological (subjective) time is positively dependent, it gives rise to hyperbolic discounting and temporal choice inconsistency.
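
As a textbook illustration of this idea (not the thesis's own proof), exponential discounting applied on a logarithmic subjective clock already produces a generalized-hyperbolic discount function:

```latex
D(t) = e^{-\rho\,\tau(t)}, \qquad
\tau(t) = \frac{1}{k}\ln(1 + kt)
\;\Longrightarrow\;
D(t) = (1 + kt)^{-\rho/k},
\qquad \text{and for } \rho = k:\quad D(t) = \frac{1}{1 + kt}.
```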

We also test the predictions of behavioural theories in the "wild". We pay particular attention to prospect theory, which emerged as the dominant theory in our lab experiments on risky choice. Loss aversion and reference dependence predict that consumers will behave in ways distinct from what the standard rational model predicts. Specifically, loss aversion predicts that when an item is offered at a discount, the demand for it will be greater than that explained by its price elasticity. Even more importantly, when the item is no longer discounted, demand for its close substitute will increase excessively. We tested this prediction using a discrete choice model with a loss-averse utility function on data from a large eCommerce retailer. Not only did we identify loss aversion, but we also found that the effect decreased with consumers' experience. We outline the policy implications of consumer loss aversion and strategies for competitive pricing.
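
A stylized sketch of a discrete choice (logit) specification with a reference-dependent, loss-averse price term, in the spirit of the field analysis described above; the utility form, parameters and data are illustrative assumptions, not the estimated model.

```python
import numpy as np

def loss_averse_price_term(price, ref_price, b=0.08, lam=2.0):
    """Gain-loss utility around a reference price: price cuts are gains,
    price increases are losses weighted lam times more heavily."""
    gap = ref_price - price
    return np.where(gap >= 0, b * gap, lam * b * gap)

def choice_probs(prices, ref_prices, intercepts):
    """Multinomial logit over substitute items plus an outside option."""
    v = intercepts + loss_averse_price_term(prices, ref_prices)
    v = np.append(v, 0.0)                 # outside option normalized to 0
    ev = np.exp(v - v.max())
    return ev / ev.sum()

intercepts = np.array([1.0, 0.8])          # two close substitutes
ref_prices = np.array([20.0, 22.0])        # reference = recently observed prices

# Item 0 on discount: demand shifts toward it by more than a symmetric
# price response would imply...
print(choice_probs(np.array([16.0, 22.0]), ref_prices, intercepts))
# ...and once the discount ends, paying the old price now registers as a loss,
# so demand spills over to the close substitute.
print(choice_probs(np.array([20.0, 22.0]), np.array([16.0, 22.0]), intercepts))
```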

In future work, BROAD can be widely applied to testing different behavioural models, e.g. in social preference and game theory, and in different contextual settings. Additional measurements beyond choice data, including biological measurements such as skin conductance, can be used to eliminate hypotheses more rapidly and speed up model comparison. Discrete choice models also provide a framework for testing behavioural models with field data, and encourage combined lab-field experiments.