17 results for computational costs

at Duke University


Relevance:

20.00%

Publisher:

Abstract:

Many consumer durable retailers do not advertise their prices and instead ask consumers to call them for prices. It is easy to see that this practice increases the consumers' cost of learning the prices of products they are considering, yet firms commonly use such practices. Not advertising prices may reduce the firm's advertising costs, but the strategic effects of doing so are not clear. Our objective is to examine the strategic effects of this practice. In particular, how does making price discovery more difficult for consumers affect competing retailers' prices, service decisions, and profits? We develop a model in which a manufacturer sells its product through a high-service retailer and a low-service retailer. Consumers can purchase the retail service at the high-end retailer and purchase the product at the competing low-end retailer. Therefore, the high-end retailer faces a free-riding problem. A retailer first chooses its optimal service level, then chooses its optimal price level, and finally decides whether to advertise its prices. The model results in four structures: (1) both retailers advertise prices, (2) only the low-service retailer advertises price, (3) only the high-service retailer advertises price, and (4) neither retailer advertises price. We find that when a retailer does not advertise its price and makes price discovery more difficult for consumers, the competition between the retailers is less intense. However, the retailer is forced to charge a lower price. In addition, if the competing retailer does advertise its prices, then it enjoys higher profit margins. We identify conditions under which each of the above four structures is an equilibrium and show that a low-service retailer not advertising its price is a more likely outcome than a high-service retailer doing so. We then solve the manufacturer's problem and find that there are several instances when a retailer's advertising decisions differ from what the manufacturer would want. We describe the nature of this channel coordination problem and identify some solutions. © 2010 INFORMS.
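
As a minimal illustration of the equilibrium check in the advertising stage (not the paper's model; the profit numbers are hypothetical placeholders), the Python sketch below tests which of the four advertising structures survives unilateral deviation:

from itertools import product

# Minimal sketch (not the paper's model): given hypothetical stage profits for the
# advertising decision, check which of the four structures is a Nash equilibrium,
# i.e., neither retailer gains by unilaterally switching its advertising choice.

# profits[(high_advertises, low_advertises)] = (high-service profit, low-service profit)
profits = {
    (True, True): (10, 8),
    (True, False): (9, 11),
    (False, True): (12, 7),
    (False, False): (11, 9),
}

def is_equilibrium(adv_high, adv_low):
    pi_high, pi_low = profits[(adv_high, adv_low)]
    deviate_high, _ = profits[(not adv_high, adv_low)]
    _, deviate_low = profits[(adv_high, not adv_low)]
    return pi_high >= deviate_high and pi_low >= deviate_low

for adv_high, adv_low in product([True, False], repeat=2):
    print(f"high advertises={adv_high}, low advertises={adv_low}: "
          f"equilibrium={is_equilibrium(adv_high, adv_low)}")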

Relevance:

20.00%

Publisher:

Abstract:

The research and development costs of 68 randomly selected new drugs were obtained from a survey of 10 pharmaceutical firms. These data were used to estimate the average pre-tax cost of new drug development. The costs of compounds abandoned during testing were linked to the costs of compounds that obtained marketing approval. The estimated average out-of-pocket cost per new drug is 403 million US dollars (2000 dollars). Capitalizing out-of-pocket costs to the point of marketing approval at a real discount rate of 11% yields a total pre-approval cost estimate of 802 million US dollars (2000 dollars). When compared to the results of an earlier study with a similar methodology, total capitalized costs were shown to have increased at an annual rate of 7.4% above general price inflation.
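
A hedged sketch of the capitalization step described above: out-of-pocket outlays incurred before approval are compounded forward to the approval date at the 11% real rate. The yearly spending profile below is invented for illustration; only the discount rate comes from the abstract.

real_rate = 0.11
# (years before approval, out-of-pocket outlay in millions of 2000 dollars) -- hypothetical profile
outlays = [(10, 30), (8, 50), (6, 80), (4, 100), (2, 90), (0, 53)]

out_of_pocket = sum(cost for _, cost in outlays)
# each outlay is carried forward to the approval date at the real discount rate
capitalized = sum(cost * (1 + real_rate) ** years for years, cost in outlays)

print(f"Out-of-pocket total: ${out_of_pocket:.0f}M")
print(f"Capitalized to approval at 11%: ${capitalized:.0f}M")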

Relevance:

20.00%

Publisher:

Abstract:

© 2012 by Oxford University Press. All rights reserved. This article reviews the extensive literature on R&D costs and returns. The first section focuses on R&D costs and the various factors that have affected the trends in real R&D costs over time. The second section considers economic studies on the distribution of returns in pharmaceuticals for different cohorts of new drug introductions. It also reviews the use of these studies to analyze the impact of policy actions on R&D costs and returns. The final section concludes and discusses open questions for further research.

Relevance:

20.00%

Publisher:

Abstract:

We estimate a carbon mitigation cost curve for the U.S. commercial sector based on econometric estimation of the responsiveness of fuel demand and equipment choices to energy price changes. The model econometrically estimates fuel demand conditional on fuel choice, which is characterized by a multinomial logit model. Separate estimation of end uses (e.g., heating, cooking) using the U.S. Commercial Buildings Energy Consumption Survey allows for exceptionally detailed estimation of price responsiveness disaggregated by end use and fuel type. We then construct aggregate long-run elasticities, by fuel type, through a series of simulations; own-price elasticities range from -0.9 for district heat services to -2.9 for fuel oil. The simulations form the basis of a marginal cost curve for carbon mitigation, which suggests that a price of $20 per ton of carbon would result in an 8% reduction in commercial carbon emissions, and a price of $100 per ton would result in a 28% reduction. © 2008 Elsevier B.V. All rights reserved.
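
The sketch below illustrates, with hypothetical coefficients, the two building blocks described above: multinomial-logit fuel-choice probabilities and a simulated own-price elasticity obtained by perturbing one fuel price. It is an illustration of the approach, not the estimated model.

import numpy as np

fuels = ["electricity", "natural_gas", "fuel_oil", "district_heat"]
alpha = np.array([1.0, 0.8, 0.2, 0.1])       # hypothetical fuel-specific constants
beta = -0.5                                   # hypothetical price coefficient

def choice_probs(prices):
    """Multinomial-logit choice probabilities given fuel prices."""
    v = alpha + beta * prices                 # indirect utility per fuel
    ev = np.exp(v - v.max())
    return ev / ev.sum()

prices = np.array([2.0, 1.0, 1.5, 1.2])       # hypothetical price indexes
base = choice_probs(prices)

bumped = prices.copy()
bumped[2] *= 1.01                             # +1% fuel-oil price
elasticity = (choice_probs(bumped)[2] / base[2] - 1) / 0.01
print(f"Simulated own-price elasticity of the fuel-oil choice probability: {elasticity:.2f}")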

Relevance:

20.00%

Publisher:

Abstract:

We report a comprehensive study of the binary systems of the platinum-group metals with the transition metals, using high-throughput first-principles calculations. These computations predict the stability of new compounds in 28 binary systems for which no compounds have been reported experimentally in the literature, as well as a few dozen as-yet unreported compounds in additional systems. Our calculations also identify stable structures at compound compositions that have been previously reported without detailed structural data and indicate that some experimentally reported compounds may actually be unstable at low temperatures. With these results, we construct enhanced structure maps for the binary alloys of platinum-group metals. These maps are much more complete, systematic, and predictive than those based on empirical results alone.
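
A toy version of the zero-temperature stability test underlying such predictions: a binary compound is stable if its formation enthalpy lies on the lower convex hull of enthalpy versus composition. The compositions and enthalpies below are invented placeholders.

phases = {           # composition x_B : formation enthalpy (eV/atom), hypothetical values
    0.00: 0.000,     # pure element A
    0.25: -0.120,
    0.50: -0.300,
    0.75: -0.150,
    1.00: 0.000,     # pure element B
}

def on_lower_hull(x0, e0, others):
    """Stable iff no tie-line between two other phases lies below (x0, e0)."""
    pts = sorted(others.items())
    for i in range(len(pts)):
        for j in range(i + 1, len(pts)):
            (x1, e1), (x2, e2) = pts[i], pts[j]
            if x1 < x0 < x2:
                e_line = e1 + (e2 - e1) * (x0 - x1) / (x2 - x1)
                if e_line < e0:
                    return False
    return True

for x, e in phases.items():
    if 0 < x < 1:
        rest = {xx: ee for xx, ee in phases.items() if xx != x}
        print(f"x_B={x:.2f}: {'stable' if on_lower_hull(x, e, rest) else 'unstable'}")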

Relevance:

20.00%

Publisher:

Abstract:

Proteins are essential components of cells and are crucial for catalyzing reactions, signaling, recognition, motility, recycling, and structural stability. This diversity of function suggests that nature is only scratching the surface of protein functional space. Protein function is determined by structure, which in turn is determined predominantly by amino acid sequence. Protein design aims to explore protein sequence and conformational space to design novel proteins with new or improved function. The vast number of possible protein sequences makes exploring the space a challenging problem.
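
A quick back-of-the-envelope calculation shows why the space is vast: with 20 natural amino acids, a protein of length n has 20^n possible sequences.

for n in (10, 50, 100):
    # log10(20) is about 1.301, so 20**n is roughly 10**(1.301 * n)
    print(f"length {n}: 20^{n} is about 10^{1.301 * n:.0f} sequences")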

Computational structure-based protein design (CSPD) allows for the rational design of proteins. Because of the large search space, CSPD methods must balance search accuracy and modeling simplifications. We have developed algorithms that allow for the accurate and efficient search of protein conformational space. Specifically, we focus on algorithms that maintain provability, account for protein flexibility, and use ensemble-based rankings. We present several novel algorithms for incorporating improved flexibility into CSPD with continuous rotamers. We applied these algorithms to two biomedically important design problems. We designed peptide inhibitors of the cystic fibrosis agonist CAL that were able to restore function of the vital cystic fibrosis protein CFTR. We also designed improved HIV antibodies and nanobodies to combat HIV infections.
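
One classic provable pruning criterion used in this setting is dead-end elimination (DEE), which discards rotamers that can never appear in the global minimum-energy conformation. The sketch below applies the Goldstein DEE condition to a hypothetical two-position energy table; the energies are toy values, not output of any actual design software.

# Goldstein DEE: rotamer r at position i is pruned if some competitor rotamer t is
# guaranteed to do at least as well in every possible context at the other positions.
self_e = {0: [1.0, 2.5, 0.5], 1: [0.0, 1.2]}                       # self_e[pos][rotamer]
pair_e = {(0, 1): {(r, s): 0.1 * (r + 1) * (s + 1)                 # pair_e[(i, j)][(r, s)]
                   for r in range(3) for s in range(2)}}

def pair(i, r, j, s):
    return pair_e[(i, j)][(r, s)] if i < j else pair_e[(j, i)][(s, r)]

def goldstein_prunable(i, r):
    for t in range(len(self_e[i])):
        if t == r:
            continue
        bound = self_e[i][r] - self_e[i][t]
        for j in self_e:
            if j != i:
                bound += min(pair(i, r, j, s) - pair(i, t, j, s)
                             for s in range(len(self_e[j])))
        if bound > 0:      # r can never be part of the global minimum-energy solution
            return True
    return False

for i in self_e:
    for r in range(len(self_e[i])):
        print(f"position {i}, rotamer {r}: pruned={goldstein_prunable(i, r)}")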

Relevance:

20.00%

Publisher:

Abstract:

We conduct the first empirical investigation of common-pool resource users' dynamic and strategic behavior at the micro level using real-world data. Fishermen's strategies in a fully dynamic game account for latent resource dynamics and other players' actions, revealing the profit structure of the fishery. We compare the fishermen's actual and socially optimal exploitation paths under a time-specific vessel allocation policy and find a sizable dynamic externality. Individual fishermen respond to other users by exerting effort above the optimal level early in the season. Congestion is costly instantaneously but is beneficial in the long run because it partially offsets dynamic inefficiencies.
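
A stylized toy model (not the paper's estimated fishery) can illustrate the dynamic externality: below, a myopic effort path, in which each period maximizes current profit while ignoring its effect on future stock, is compared against the effort path that maximizes total season profit. All parameter values are hypothetical.

import itertools

# Each period: harvest = q * effort * stock; stock is drawn down by the harvest;
# per-period profit = price * harvest - convex effort cost.
p, q, c, S0, T = 10.0, 0.02, 1.0, 100.0, 3   # price, catchability, cost, initial stock, periods

def season_profit(efforts):
    stock, total = S0, 0.0
    for e in efforts:
        harvest = q * e * stock
        total += p * harvest - c * e ** 2
        stock -= harvest
    return total

# Myopic path: the per-period first-order condition gives effort = p*q*stock / (2c),
# ignoring the effect of today's harvest on tomorrow's stock (the dynamic externality).
stock, myopic = S0, []
for _ in range(T):
    e = p * q * stock / (2 * c)
    myopic.append(e)
    stock -= q * e * stock

# Season-optimal path: brute-force search over a coarse effort grid.
grid = [0.5 * i for i in range(21)]
best = max(itertools.product(grid, repeat=T), key=season_profit)

print("myopic path :", [round(e, 2) for e in myopic], "-> profit", round(season_profit(myopic), 2))
print("optimal path:", list(best), "-> profit", round(season_profit(best), 2))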

Relevance:

20.00%

Publisher:

Abstract:

Determining how information flows along anatomical brain pathways is a fundamental requirement for understanding how animals perceive their environments, learn, and behave. Attempts to reveal such neural information flow have been made using linear computational methods, but neural interactions are known to be nonlinear. Here, we demonstrate that a dynamic Bayesian network (DBN) inference algorithm we originally developed to infer nonlinear transcriptional regulatory networks from gene expression data collected with microarrays is also successful at inferring nonlinear neural information flow networks from electrophysiology data collected with microelectrode arrays. The inferred networks we recover from the songbird auditory pathway are correctly restricted to a subset of known anatomical paths, are consistent with timing of the system, and reveal both the importance of reciprocal feedback in auditory processing and greater information flow to higher-order auditory areas when birds hear natural as opposed to synthetic sounds. A linear method applied to the same data incorrectly produces networks with information flow to non-neural tissue and over paths known not to exist. To our knowledge, this study represents the first biologically validated demonstration of an algorithm to successfully infer neural information flow networks.
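
The sketch below conveys the core inference step on synthetic data: each channel's lagged parent set is scored with a discrete dynamic Bayesian network criterion (a BIC-penalized log-likelihood on binarized signals), which tolerates nonlinear influences. This illustrates the general idea, not the authors' algorithm or data.

import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
T, n = 500, 4
x = rng.normal(size=(T, n))
x[1:, 1] += np.tanh(2 * x[:-1, 0])              # nonlinear influence: channel 0 -> channel 1
d = (x > np.median(x, axis=0)).astype(int)      # binarize each channel at its median

def bic_score(target, parents):
    child = d[1:, target]
    if parents:
        configs = d[:-1, list(parents)] @ (2 ** np.arange(len(parents)))
    else:
        configs = np.zeros(T - 1, dtype=int)
    ll, n_configs = 0.0, 0
    for cfg in np.unique(configs):
        counts = np.bincount(child[configs == cfg], minlength=2) + 1e-9
        ll += (counts * np.log(counts / counts.sum())).sum()
        n_configs += 1
    return ll - 0.5 * n_configs * np.log(T - 1)   # penalize model complexity

for tgt in range(n):
    others = [j for j in range(n) if j != tgt]
    candidates = [ps for k in range(3) for ps in combinations(others, k)]
    best = max(candidates, key=lambda ps: bic_score(tgt, ps))
    print(f"channel {tgt}: best lagged parent set {best}")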

Relevance:

20.00%

Publisher:

Abstract:

The effectiveness of vaccinating males against the human papillomavirus (HPV) remains a controversial subject. Many existing studies conclude that increasing female coverage is more effective than diverting resources into male vaccination. Recently, several empirical studies on HPV immunization have been published, providing evidence that marginal vaccination costs increase with coverage. In this study, we use a stochastic agent-based modeling framework to revisit the male vaccination debate in light of these new findings. Within this framework, we assess the impact of coverage-dependent marginal costs of vaccine distribution on optimal immunization strategies against HPV. Focusing on the two scenarios of ongoing and new vaccination programs, we analyze different resource allocation policies and their effects on overall disease burden. Our results suggest that if the costs associated with vaccinating males are relatively close to those associated with vaccinating females, then coverage-dependent, increasing marginal costs may favor vaccination strategies that entail immunization of both genders. In particular, this study emphasizes the need for further empirical research on the nature of coverage-dependent vaccination costs.
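
The cost-side intuition can be illustrated with a small sketch: when the marginal cost of delivering vaccine rises with coverage, a fixed budget buys more total person-coverage if it is split across two groups than if it is concentrated in one. The cost curve below is hypothetical.

def coverage_for_budget(budget, c0=10.0, slope=40.0, step=0.001):
    """Coverage reachable with `budget` when marginal cost = c0 + slope * coverage."""
    spent, v = 0.0, 0.0
    while v < 1.0 and spent + (c0 + slope * v) * step <= budget:
        spent += (c0 + slope * v) * step
        v += step
    return v

budget = 20.0
one_group = coverage_for_budget(budget)         # concentrate all funds on one group
per_group = coverage_for_budget(budget / 2)     # split the same budget across two groups
print(f"Single-group coverage: {one_group:.2f}")
print(f"Split budget: {per_group:.2f} per group ({2 * per_group:.2f} total person-coverage)")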

Relevance:

20.00%

Publisher:

Abstract:

The possibility of encouraging the growth of forests as a means of sequestering carbon dioxide has received considerable attention, partly because of evidence that this can be a relatively inexpensive means of combating climate change. But how sensitive are such estimates to specific conditions? We examine the sensitivity of carbon sequestration costs to changes in critical factors, including the nature of management and deforestation regimes, silvicultural species, relative prices, and discount rates. © 2000 Academic Press.
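
A stylized levelized-cost calculation (not the paper's model) shows the kind of sensitivity examined here: the present-value cost per discounted ton sequestered under different discount rates. The planting cost and sequestration rates are invented for illustration only.

upfront_cost = 500.0          # $/ha establishment cost (hypothetical)
annual_cost = 15.0            # $/ha/yr management cost (hypothetical)
annual_tons = 3.0             # tC/ha/yr sequestered over the rotation (hypothetical)
years = 40

for r in (0.03, 0.05, 0.10):
    disc = [(1 + r) ** -t for t in range(1, years + 1)]
    pv_cost = upfront_cost + sum(annual_cost * d for d in disc)
    pv_tons = sum(annual_tons * d for d in disc)
    print(f"discount rate {r:.0%}: ${pv_cost / pv_tons:.1f} per discounted ton")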

Relevance:

20.00%

Publisher:

Abstract:

BACKGROUND: Fitness costs and slower disease progression are associated with a cytolytic T lymphocyte (CTL) escape mutation T242N in Gag in HIV-1-infected individuals carrying HLA-B*57/5801 alleles. However, the impact of different contexts in diverse HIV-1 strains on the fitness costs of the T242N mutation has not been well characterized. To better understand the extent of fitness costs of the T242N mutation and the repair of fitness loss through compensatory amino acids, we investigated its fitness impact in different transmitted/founder (T/F) viruses. RESULTS: The T242N mutation resulted in various levels of fitness loss in four different T/F viruses. However, the fitness costs were significantly compromised by preexisting compensatory amino acids within (isoleucine at position 247) or outside (glutamine at position 219) the CTL epitope. Moreover, the transmitted T242N escape mutant in subject CH131 was as fit as the revertant N242T mutant, and the elimination of the compensatory amino acid I247 in the T/F viral genome resulted in a significant fitness cost, suggesting that the fitness loss caused by the T242N mutation had been fully repaired in the donor at transmission. Analysis of globally circulating HIV-1 sequences in the Los Alamos HIV Sequence Database showed a high prevalence of compensatory amino acids for the T242N mutation and other T-cell escape mutations. CONCLUSIONS: Our results show that the preexisting compensatory amino acids in the majority of circulating HIV-1 strains could significantly compromise the fitness loss due to CTL escape mutations and thus increase challenges for T-cell-based vaccines.
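
The prevalence analysis can be illustrated with a toy count over a hypothetical alignment; the sequences below are placeholders, not Los Alamos database entries.

alignment = {              # subject id -> Gag residues at positions 242 and 247 (hypothetical)
    "seq1": {242: "T", 247: "I"},
    "seq2": {242: "N", 247: "I"},
    "seq3": {242: "T", 247: "M"},
    "seq4": {242: "T", 247: "I"},
}

n_i247 = sum(1 for res in alignment.values() if res[247] == "I")
print(f"I at position 247 in {n_i247}/{len(alignment)} sequences "
      f"({100 * n_i247 / len(alignment):.0f}%)")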

Relevance:

20.00%

Publisher:

Abstract:

Our media is saturated with claims of "facts" made from data. Database research has in the past focused on how to answer queries, but has not devoted much attention to discerning more subtle qualities of the resulting claims, e.g., is a claim "cherry-picking"? This paper proposes a Query Response Surface (QRS)-based framework that models claims based on structured data as parameterized queries. A key insight is that we can learn a lot about a claim by perturbing its parameters and seeing how its conclusion changes. This framework lets us formulate and tackle practical fact-checking tasks, such as reverse-engineering vague claims and countering questionable claims, as computational problems. Within the QRS-based framework, we go one step further and propose a problem, along with efficient algorithms, for finding high-quality claims of a given form from data, i.e., raising good questions, in the first place. This is achieved by using a limited number of high-valued claims to represent high-valued regions of the QRS. Besides the general-purpose high-quality claim finding problem, lead-finding can be tailored towards specific claim quality measures, also defined within the QRS framework. An example of uniqueness-based lead-finding is presented for "one-of-the-few" claims, yielding interpretable high-quality claims and an adjustable mechanism for ranking objects, e.g., NBA players, based on what claims can be made for them. Finally, we study the use of visualization as a powerful way of conveying the results of a large number of claims. An efficient two-stage sampling algorithm is proposed for generating the input of a 2D scatter plot with heatmap, evaluating only a limited amount of data while preserving the two essential visual features, namely outliers and clusters. For all the problems, we present real-world examples and experiments that demonstrate the power of our model, the efficiency of our algorithms, and the usefulness of their results.
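
The parameter-perturbation idea at the heart of the QRS framework can be illustrated with a small sketch: a claim of the form "the average of the last k values exceeds a threshold" is treated as a parameterized query, and the window length k is perturbed to see how robust the conclusion is. The data and threshold are synthetic placeholders.

import random

random.seed(1)
values = [random.gauss(20, 6) for _ in range(82)]       # e.g., points per game
threshold, claimed_k = 25, 5

def claim_holds(k):
    window = values[-k:]
    return sum(window) / len(window) >= threshold

print(f"claim as stated (k={claimed_k}): {claim_holds(claimed_k)}")

# Perturb the window-length parameter and record how the conclusion changes;
# a claim that holds only for a narrow band of k looks cherry-picked.
support = [k for k in range(2, 21) if claim_holds(k)]
print(f"window lengths 2..20 for which the claim holds: {support}")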

Relevance:

20.00%

Publisher:

Abstract:

With increasing recognition of the roles RNA molecules and RNA/protein complexes play in an unexpected variety of biological processes, understanding of RNA structure-function relationships is of high current importance. To make clean biological interpretations from three-dimensional structures, it is imperative to have high-quality, accurate RNA crystal structures available, and the community has thoroughly embraced that goal. However, due to the many degrees of freedom inherent in RNA structure (especially for the backbone), it is a significant challenge to succeed in building accurate experimental models for RNA structures. This chapter describes the tools and techniques our research group and our collaborators have developed over the years to help RNA structural biologists both evaluate and achieve better accuracy. Expert analysis of large, high-resolution, quality-conscious RNA datasets provides the fundamental information that enables automated methods for robust and efficient error diagnosis in validating RNA structures at all resolutions. The even more crucial goal of correcting the diagnosed outliers has steadily developed toward highly effective, computationally based techniques. Automation enables solving complex issues in large RNA structures, but cannot circumvent the need for thoughtful examination of local details, and so we also provide some guidance for interpreting and acting on the results of current structure validation for RNA.
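
As a generic toy illustration of automated outlier diagnosis (not the actual validation machinery described above), the sketch below flags geometric parameters whose z-scores against reference means and standard deviations exceed a cutoff; the reference and measured values are placeholders.

reference = {                    # parameter -> (mean, standard deviation), hypothetical
    "P-O5' bond (Å)": (1.593, 0.010),
    "C1'-N glycosidic bond (Å)": (1.470, 0.012),
    "O3'-P-O5' angle (deg)": (104.0, 1.9),
}

measured = {                     # one residue's measured values, hypothetical
    "P-O5' bond (Å)": 1.640,
    "C1'-N glycosidic bond (Å)": 1.468,
    "O3'-P-O5' angle (deg)": 98.2,
}

CUTOFF = 4.0                     # flag deviations beyond 4 sigma
for name, value in measured.items():
    mean, sd = reference[name]
    z = (value - mean) / sd
    flag = "OUTLIER" if abs(z) > CUTOFF else "ok"
    print(f"{name}: {value:.3f} (z = {z:+.1f}) {flag}")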

Relevance:

20.00%

Publisher:

Abstract:

Transcriptional regulation has been studied intensively in recent decades. One important aspect of this regulation is the interaction between regulatory proteins, such as transcription factors (TF) and nucleosomes, and the genome. Different high-throughput techniques have been invented to map these interactions genome-wide, including ChIP-based methods (ChIP-chip, ChIP-seq, etc.), nuclease digestion methods (DNase-seq, MNase-seq, etc.), and others. However, a single experimental technique often only provides partial and noisy information about the whole picture of protein-DNA interactions. Therefore, the overarching goal of this dissertation is to provide computational developments for jointly modeling different experimental datasets to achieve a holistic inference on the protein-DNA interaction landscape.

We first present a computational framework that can incorporate the protein binding information in MNase-seq data into a thermodynamic model of protein-DNA interaction. We use a correlation-based objective function to model the MNase-seq data and a Markov chain Monte Carlo method to maximize the function. Our results show that the inferred protein-DNA interaction landscape is concordant with the MNase-seq data and provides a mechanistic explanation for the experimentally collected MNase-seq fragments. Our framework is flexible and can easily incorporate other data sources. To demonstrate this flexibility, we use prior distributions to integrate experimentally measured protein concentrations.
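
A hedged sketch of that optimization loop: a Metropolis-style sampler proposes local changes to model parameters and accepts them according to a correlation objective between predicted occupancy and an observed profile. The toy occupancy model and synthetic profile below stand in for the actual thermodynamic model and MNase-seq data.

import numpy as np

rng = np.random.default_rng(0)
L = 200
observed = np.convolve(rng.random(L), np.ones(20) / 20, mode="same")   # synthetic profile

def predicted_occupancy(theta):
    """Toy Boltzmann-style occupancy: softmax of per-site scores, smoothed."""
    w = np.exp(theta - theta.max())
    return np.convolve(w / w.sum(), np.ones(20) / 20, mode="same")

def objective(theta):
    return np.corrcoef(predicted_occupancy(theta), observed)[0, 1]

theta = rng.normal(size=L)
score = objective(theta)
temperature = 0.01
for step in range(5000):
    proposal = theta.copy()
    i = rng.integers(L)
    proposal[i] += rng.normal(scale=0.5)          # local move on one parameter
    new_score = objective(proposal)
    # Metropolis acceptance rule applied to the correlation objective
    if new_score > score or rng.random() < np.exp((new_score - score) / temperature):
        theta, score = proposal, new_score

print(f"final correlation with the observed profile: {score:.3f}")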

We also study the ability of DNase-seq data to position nucleosomes. Traditionally, DNase-seq has only been widely used to identify DNase hypersensitive sites, which tend to be open chromatin regulatory regions devoid of nucleosomes. We reveal for the first time that DNase-seq datasets also contain substantial information about nucleosome translational positioning, and that existing DNase-seq data can be used to infer nucleosome positions with high accuracy. We develop a Bayes-factor-based nucleosome scoring method to position nucleosomes using DNase-seq data. Our approach utilizes several effective strategies to extract nucleosome positioning signals from the noisy DNase-seq data, including jointly modeling data points across the nucleosome body and explicitly modeling the quadratic and oscillatory DNase I digestion pattern on nucleosomes. We show that our DNase-seq-based nucleosome map is highly consistent with previous high-resolution maps. We also show that the oscillatory DNase I digestion pattern is useful in revealing the nucleosome rotational context around TF binding sites.
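
A simplified version of such a Bayes-factor-style score compares, for a 147-bp window, the Poisson likelihood of cut counts under a nucleosome model (a quadratic profile modulated by a roughly 10-bp helical-period oscillation, as described above) against a flat background model. The rate profiles and example counts below are placeholders, not fitted parameters from the method.

import numpy as np

W = 147
x = np.arange(W) - W // 2
base_rate = 2.0
# Nucleosome model: cutting suppressed toward the dyad (quadratic profile), modulated by
# a ~10.3-bp oscillation; background model: flat rate. Both are illustrative choices.
nuc_rate = base_rate * (0.3 + 0.7 * (x / (W // 2)) ** 2) * (1 + 0.3 * np.cos(2 * np.pi * x / 10.3))
bg_rate = np.full(W, base_rate)

def log_score(counts):
    """Poisson log-likelihood ratio of the nucleosome model vs. the flat background,
    standing in for a Bayes factor (the factorial terms cancel and are omitted)."""
    return (counts * np.log(nuc_rate / bg_rate) - (nuc_rate - bg_rate)).sum()

rng = np.random.default_rng(2)
nuc_like_window = rng.poisson(nuc_rate)    # counts simulated from the nucleosome model
open_window = rng.poisson(bg_rate)         # counts simulated from the background model
print(f"score, nucleosome-like window: {log_score(nuc_like_window):+.1f}")
print(f"score, open-chromatin window:  {log_score(open_window):+.1f}")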

Finally, we present a state-space model (SSM) for jointly modeling different kinds of genomic data to provide an accurate view of the protein-DNA interaction landscape. We also provide an efficient expectation-maximization algorithm to learn model parameters from data. We first show in simulation studies that the SSM can effectively recover underlying true protein binding configurations. We then apply the SSM to model real genomic data (both DNase-seq and MNase-seq data). Through incrementally increasing the types of genomic data in the SSM, we show that different data types can contribute complementary information for the inference of protein binding landscape and that the most accurate inference comes from modeling all available datasets.
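
A minimal sketch of the joint-modeling idea, assuming a much simpler model than the dissertation's SSM: a two-state hidden Markov model with Gaussian emissions over two synthetic data tracks, fit with an EM (Baum-Welch) loop.

import numpy as np

rng = np.random.default_rng(3)
T = 300
true_states = (np.sin(np.arange(T) / 15) > 0).astype(int)          # synthetic binding blocks
tracks = np.stack([rng.normal(2.0 * true_states, 1.0),             # track 1: higher when bound
                   rng.normal(-1.5 * true_states, 1.0)], axis=1)   # track 2: lower when bound

def gauss(x, mu, sd):
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

# Initial parameters: transition matrix A and per-state means for each track (unit variances).
A = np.array([[0.9, 0.1], [0.1, 0.9]])
mu = np.array([[0.0, 0.0], [1.0, -1.0]])      # mu[state, track]

for _ in range(30):
    # E-step: scaled forward-backward over both tracks jointly
    emis = gauss(tracks[:, None, :], mu[None, :, :], 1.0).prod(axis=2)   # shape (T, 2)
    alpha = np.zeros((T, 2))
    beta = np.zeros((T, 2))
    alpha[0] = 0.5 * emis[0]
    alpha[0] /= alpha[0].sum()
    for t in range(1, T):
        alpha[t] = emis[t] * (alpha[t - 1] @ A)
        alpha[t] /= alpha[t].sum()
    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (emis[t + 1] * beta[t + 1])
        beta[t] /= beta[t].sum()
    gamma = alpha * beta
    gamma /= gamma.sum(axis=1, keepdims=True)
    # M-step: update state means and the transition matrix from the posteriors
    mu = (gamma.T @ tracks) / gamma.sum(axis=0)[:, None]
    xi = alpha[:-1, :, None] * A[None] * (emis[1:] * beta[1:])[:, None, :]
    xi /= xi.sum(axis=(1, 2), keepdims=True)
    A = xi.sum(axis=0)
    A /= A.sum(axis=1, keepdims=True)

decoded = gamma.argmax(axis=1)
print("recovered-state agreement with truth:",
      round(max((decoded == true_states).mean(), (decoded != true_states).mean()), 2))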

This dissertation provides a foundation for future research by taking a step toward the genome-wide inference of protein-DNA interaction landscape through data integration.