5 results for competitors

at Duke University


Relevance: 10.00%

Abstract:

Patents for several blockbuster biological products are expected to expire soon. The Food and Drug Administration is examining whether biologics can and should be treated like pharmaceuticals with regard to generics. In contrast with pharmaceuticals, which are manufactured through chemical synthesis, biologics are manufactured through fermentation, a process that is more variable and costly. Regulators might require extensive clinical testing of generic biologics to demonstrate equivalence to the branded product. The focus of the debate on generic biologics has been on legal and health concerns, but there are important economic implications. We combine a theoretical model of generic biologics with regression estimates from generic pharmaceuticals to estimate market entry and prices in the generic biologics market. We find that generic biologics will have high fixed costs from clinical testing and from manufacturing, so there will be less entry than would be expected for generic pharmaceuticals. With fewer generic competitors, generic biologics will be relatively close in price to branded biologics. Policy makers should be prudent in estimating financial benefits of generic biologics for consumers and payers. We also examine possible government strategies to promote generic competition. Copyright © 2007 John Wiley & Sons, Ltd.
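The entry logic summarized above can be illustrated with a stylized Cournot free-entry sketch. This is not the paper's model, and all parameters are hypothetical: firms enter until per-firm profit no longer covers the fixed cost of entry, so a higher fixed cost (clinical testing, fermentation capacity) supports fewer entrants and a price that stays closer to the branded level.

```python
def cournot_entry(a: float, b: float, c: float, F: float) -> int:
    """Largest number of symmetric Cournot entrants whose per-firm
    profit ((a - c) / (N + 1))^2 / b still covers the fixed cost F,
    given linear inverse demand p = a - b*Q and marginal cost c."""
    n = 0
    while ((a - c) / (n + 2)) ** 2 / b >= F:  # would an (n+1)-th firm profit?
        n += 1
    return n

def cournot_price(a: float, c: float, n: int) -> float:
    """Equilibrium price with n symmetric Cournot firms."""
    return (a + n * c) / (n + 1)

a, b, c = 100.0, 1.0, 20.0                  # hypothetical demand and cost
n_pharma = cournot_entry(a, b, c, F=25.0)   # low fixed cost of generic entry
n_bio = cournot_entry(a, b, c, F=400.0)     # high clinical/manufacturing cost
print(n_pharma, n_bio)                                            # → 15 3
print(cournot_price(a, c, n_pharma), cournot_price(a, c, n_bio))  # → 25.0 40.0
```

With the higher fixed cost only 3 firms enter instead of 15, and the equilibrium price stays well above marginal cost, mirroring the abstract's point that fewer generic competitors keep generic biologics close in price to the branded product.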

Relevance: 10.00%

Abstract:

Context can have a powerful influence on decision-making strategies in humans. In particular, people sometimes shift their economic preferences depending on the broader social context, such as the presence of potential competitors or mating partners. Despite the important role of competition in primate conspecific interactions, as well as evidence that competitive social contexts impact primates' social cognitive skills, there has been little study of how social context influences the strategies that nonhumans show when making decisions about the value of resources. Here we investigate the impact of social context on preferences for risk (variability in payoffs) in our two closest phylogenetic relatives, chimpanzees, Pan troglodytes, and bonobos, Pan paniscus. In a first study, we examine the impact of competition on patterns of risky choice. In a second study, we examine whether a positive play context affects risky choices. We find that (1) apes were more likely to choose the risky option when making decisions in a competitive context; and (2) the play context did not influence their risk preferences. Overall these results suggest that some types of social contexts can shift patterns of decision making in nonhuman apes, much like in humans. Comparative studies of chimpanzees and bonobos can therefore help illuminate the evolutionary processes shaping human economic behaviour. © 2012 The Association for the Study of Animal Behaviour.

Relevance: 10.00%

Abstract:

Advances in technology, communication, and transportation over the past thirty years have led to tighter linkages and enhanced collaboration across traditional borders between nations, institutions, and cultures. This thesis uses the furniture industry as a lens to examine the impacts of globalization on individual countries and companies as they interact on an international scale. Using global value chain analysis and international trade data, I break down the furniture production process and explore how countries have specialized in particular stages of production to differentiate themselves from competitors and maximize the benefits of global involvement. Through interviews with company representatives and evaluation of branding strategies such as advertisements, webpages, and partnerships, I investigate across four country cases how furniture companies construct strong brands in an effort to stand out as unique to consumers with access to products made around the globe. Branding often serves to highlight distinctiveness and associate companies with national identities, thus revealing that in today’s globalized and interconnected society, local differences and diversity are more significant than ever.

Relevance: 10.00%

Abstract:

The dissertation consists of three chapters related to the low-price guarantee marketing strategy and energy efficiency analysis. A low-price guarantee is a marketing strategy in which firms promise to charge consumers the lowest price among their competitors. Chapter 1 addresses the research question "Does a Low-Price Guarantee Induce Lower Prices?" by looking into the retail gasoline industry in Quebec, where a major branded firm started a low-price guarantee in 1996. Chapter 2 conducts a consumer welfare analysis of low-price guarantees to derive policy implications and offers a new explanation of firms' incentives to adopt a low-price guarantee. Chapter 3 develops energy performance indicators (EPIs) to measure the energy efficiency of manufacturing plants in the pulp, paper, and paperboard industry.

Chapter 1 revisits the traditional view that a low-price guarantee results in higher prices by facilitating collusion. Using accurate market definitions and station-level data from the retail gasoline industry in Quebec, I conduct a descriptive analysis based on stations and price zones to compare price and sales movements before and after the guarantee was adopted. I find that, contrary to the traditional view, the stores that offered the guarantee significantly decreased their prices and increased their sales. I also build a difference-in-differences model and quantify the decrease in the posted price of the stores that offered the guarantee at 0.7 cents per liter. While this change is significant, I do not find a significant response in competitors' prices. The sales of the stores that offered the guarantee increased significantly while the competitors' sales decreased significantly, although the significance vanishes when I use station-clustered standard errors. Comparing my observations with the predictions of different theories of low-price guarantees, I conclude that the empirical evidence supports the view that the low-price guarantee is a simple commitment device and induces lower prices.
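The difference-in-differences comparison described above can be sketched as follows. The numbers are hypothetical, not the thesis data; only the structure of the estimator is shown:

```python
import numpy as np

def diff_in_diff(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """DiD estimate: change in the treated group's mean outcome minus
    the change in the control group's mean outcome."""
    return (np.mean(treat_post) - np.mean(treat_pre)) \
         - (np.mean(ctrl_post) - np.mean(ctrl_pre))

# Hypothetical posted prices (cents per liter) around the guarantee's start.
guarantee_pre   = [104.1, 103.8, 104.5]   # guarantee stores, before
guarantee_post  = [103.3, 103.2, 103.7]   # guarantee stores, after
competitor_pre  = [104.0, 104.2, 103.9]   # competitor stores, before
competitor_post = [103.9, 104.1, 104.0]   # competitor stores, after

effect = diff_in_diff(guarantee_pre, guarantee_post,
                      competitor_pre, competitor_post)
print(round(effect, 2))  # → -0.7
```

A full specification would regress price on store, period, and interaction dummies with station-clustered standard errors, as the chapter does; this sketch only isolates the double difference itself.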

Chapter 2 conducts a consumer welfare analysis of low-price guarantees to address the antitrust concerns and potential regulations from the government, and explains firms' potential incentives to adopt a low-price guarantee. Using station-level data from the retail gasoline industry in Quebec, I estimate consumers' demand for gasoline with a structural model of spatial competition that incorporates the low-price guarantee as a commitment device, allowing firms to pre-commit to charge the lowest price among their competitors. The counterfactual analysis under a Bertrand competition setting shows that the stores that offered the guarantee attracted substantially more consumers and decreased their posted price by 0.6 cents per liter. Although the matching stores suffered a decrease in profits from gasoline sales, they are incentivized to adopt the low-price guarantee to attract more consumers to the store, likely increasing profits at attached convenience stores. Firms have strong incentives to adopt a low-price guarantee on the product their consumers are most price-sensitive about, while earning a profit from products not covered by the guarantee. I estimate that consumers earn about 0.3% more surplus when the low-price guarantee is in place, which suggests that the authorities need not be concerned about, or move to regulate, low-price guarantees. In Appendix B, I also propose an empirical model to examine how low-price guarantees change consumer search behavior and whether consumer search plays an important role in estimating consumer surplus accurately.
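The surplus comparison has a standard closed form in logit-style demand models. As a hedged illustration (a plain multinomial logit with made-up utilities, not the chapter's spatial model), expected per-consumer surplus is the log-sum of exponentiated utilities divided by the price coefficient, so a price cut at one store weakly raises surplus:

```python
import numpy as np

def logit_surplus(v, alpha):
    """Expected per-consumer surplus in a multinomial logit model:
    E[CS] = log(sum(exp(v_j))) / alpha, with alpha the price sensitivity."""
    return float(np.log(np.sum(np.exp(v))) / alpha)

alpha = 2.0                             # hypothetical price coefficient
v_no_lpg = np.array([1.0, 0.8, 0.5])    # hypothetical station utilities
# The guarantee store cuts its price by 0.6 cents ($0.006), raising its
# utility by alpha * 0.006.
v_lpg = v_no_lpg + np.array([alpha * 0.006, 0.0, 0.0])

gain = logit_surplus(v_lpg, alpha) - logit_surplus(v_no_lpg, alpha)
print(gain > 0.0)  # → True: the price cut raises consumer surplus
```

The gain is small because only one option improved and choice probabilities dilute it, which echoes the modest 0.3% surplus increase estimated in the chapter.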

Chapter 3, joint with Gale Boyd, describes work with the pulp, paper, and paperboard (PP&PB) industry to provide a plant-level indicator of energy efficiency for facilities that produce various types of paper products in the United States. Organizations that implement strategic energy management programs undertake a set of activities that, if carried out properly, have the potential to deliver sustained energy savings. Energy performance benchmarking is a key activity of strategic energy management and one way to enable companies to set energy efficiency targets for manufacturing facilities. The opportunity to assess plant energy performance through a comparison with similar plants in its industry is a highly desirable and strategic method of benchmarking for industrial energy managers. However, access to energy performance data for conducting industry benchmarking is usually unavailable to most industrial energy managers. The U.S. Environmental Protection Agency (EPA), through its ENERGY STAR program, seeks to overcome this barrier through the development of manufacturing sector-based plant energy performance indicators (EPIs) that encourage U.S. industries to use energy more efficiently. In developing the EPI tools, consideration is given to the role that performance-based indicators play in motivating change; the steps necessary for indicator development, from engaging with an industry and securing adequate data to the actual application and use of an indicator once complete; and how indicators are employed in EPA's efforts to encourage industries to voluntarily improve their use of energy. The chapter describes the data and statistical methods used to construct the EPI for plants within selected segments of the pulp, paper, and paperboard industry: specifically, pulp mills and integrated paper and paperboard mills. The individual equations are presented, along with instructions for using them as implemented in an associated Microsoft Excel-based spreadsheet tool.
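The benchmarking idea behind an EPI can be sketched in a few lines (entirely synthetic data; the real EPI uses a richer statistical model with many plant characteristics): regress energy use on output, then score each plant by where its residual falls in the industry distribution, so a plant using less energy than predicted for its output scores near 100.

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic plants: output (tons of paper) and energy use (MMBtu), with a
# multiplicative noise term standing in for the spread in plant efficiency.
output = rng.uniform(100.0, 1000.0, 50)
energy = 5.0 * output * np.exp(rng.normal(0.0, 0.2, 50))

# Expected log energy given log output, fitted by least squares.
A = np.column_stack([np.ones_like(output), np.log(output)])
coef, *_ = np.linalg.lstsq(A, np.log(energy), rcond=None)
resid = np.log(energy) - A @ coef

# EPI-style score: share of plants at or above this plant's residual;
# a lower residual means more efficient, hence a higher score.
score = 100.0 * (resid[None, :] >= resid[:, None]).mean(axis=1)
print(score[np.argmin(resid)])  # → 100.0 for the most efficient plant
```

The spreadsheet tool described in the chapter packages exactly this kind of prediction-and-percentile calculation so that an energy manager can score a plant without seeing other firms' confidential data.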

Relevance: 10.00%

Abstract:

Fitting statistical models is computationally challenging when the sample size or the dimension of the dataset is huge. An attractive approach for down-scaling the problem is to first partition the dataset into subsets and then fit using distributed algorithms. The dataset can be partitioned either horizontally (in the sample space) or vertically (in the feature space), and the challenge arises in defining an algorithm with low communication, theoretical guarantees, and excellent practical performance in general settings. For sample-space partitioning, I propose a MEdian Selection Subset AGgregation Estimator (message) algorithm to address these issues. The algorithm applies feature selection in parallel for each subset using a regularized regression or Bayesian variable selection method, calculates the median feature inclusion index, estimates coefficients for the selected features in parallel for each subset, and then averages these estimates. The algorithm is simple, involves minimal communication, scales efficiently in sample size, and has theoretical guarantees. I provide extensive experiments showing excellent performance in feature selection, estimation, prediction, and computation time relative to usual competitors.
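The four steps of the message pipeline can be sketched as follows (synthetic data; a simple marginal-correlation screener stands in for the regularized-regression or Bayesian selection step the thesis actually uses):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, m = 600, 6, 3                   # samples, features, subsets
X = rng.standard_normal((n, p))
beta = np.array([2.0, 0.0, -1.5, 0.0, 0.0, 0.0])  # features 0 and 2 matter
y = X @ beta + rng.standard_normal(n)

subsets = np.array_split(np.arange(n), m)  # horizontal (sample-space) split

def select(Xs, ys, k=2):
    """Stand-in screener: flag the k features most correlated with y."""
    corr = np.abs(Xs.T @ ys)
    incl = np.zeros(Xs.shape[1])
    incl[np.argsort(corr)[-k:]] = 1.0
    return incl

# Steps 1-2: select per subset in parallel, take the median inclusion index.
incl = np.median([select(X[i], y[i]) for i in subsets], axis=0)
keep = np.flatnonzero(incl >= 0.5)

# Steps 3-4: least-squares fit on the kept features per subset, then average.
coefs = [np.linalg.lstsq(X[i][:, keep], y[i], rcond=None)[0] for i in subsets]
beta_hat = np.mean(coefs, axis=0)
print(sorted(keep.tolist()))          # the selected feature indices
```

The only quantities a worker must communicate are its 0/1 inclusion vector and its coefficient estimates on the kept features, which is the source of the algorithm's minimal-communication property.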

While sample-space partitioning is useful for handling datasets with a large sample size, feature-space partitioning is more effective when the data dimension is high. Existing methods for partitioning features, however, are either vulnerable to high correlations or inefficient at reducing the model dimension. In the thesis, I propose a new embarrassingly parallel framework named DECO for distributed variable selection and parameter estimation. In DECO, variables are first partitioned and allocated to m distributed workers. The decorrelated subset data within each worker are then fitted via any algorithm designed for high-dimensional problems. We show that by incorporating the decorrelation step, DECO can achieve consistent variable selection and parameter estimation on each subset with (almost) no assumptions. In addition, the convergence rate is nearly minimax optimal for both sparse and weakly sparse models and does not depend on the partition number m. Extensive numerical experiments are provided to illustrate the performance of the new framework.
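The decorrelation step itself is easy to sketch (synthetic correlated design; plain least squares stands in for whatever high-dimensional fitter each worker runs): premultiplying X and y by (XX^T/p)^(-1/2) makes the rows of the transformed design exactly orthogonal, after which each worker can fit its own block of columns independently.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, m = 60, 80, 4                    # high-dimensional: p > n, m workers
z = rng.standard_normal((n, 1))        # shared factor -> correlated columns
X = 0.7 * z + rng.standard_normal((n, p))
beta = np.zeros(p); beta[3], beta[57] = 3.0, -2.0
y = X @ beta + 0.5 * rng.standard_normal(n)

# Decorrelation: premultiply by (X X^T / p)^(-1/2), via eigendecomposition.
G = X @ X.T / p
w, V = np.linalg.eigh(G)
F = V @ np.diag(1.0 / np.sqrt(w)) @ V.T
Xt, yt = F @ X, F @ y                  # now Xt @ Xt.T equals p * I

# Feature-space partition: each worker fits only its own block of columns.
beta_hat = np.zeros(p)
for cols in np.array_split(np.arange(p), m):
    beta_hat[cols] = np.linalg.lstsq(Xt[:, cols], yt, rcond=None)[0]
print(int(np.argmax(np.abs(beta_hat))))   # index of the strongest signal
```

Because the decorrelated blocks are nearly orthogonal to one another, each worker's local fit is only weakly contaminated by signal living in other workers' blocks, which is what lets the estimates be combined without further iteration.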

For datasets with both large sample sizes and high dimensionality, I propose a new "divide-and-conquer" framework, DEME (DECO-message), leveraging both the DECO and message algorithms. The new framework first partitions the dataset in the sample space into row cubes using message and then partitions the feature space of the cubes using DECO. This procedure is equivalent to partitioning the original data matrix into multiple small blocks, each with a feasible size that can be stored and fitted on a computer in parallel. The results are then synthesized via the DECO and message algorithms in reverse order to produce the final output. The whole framework is extremely scalable.