18 results for market analyses at Duke University
Abstract:
Recent empirical findings suggest that the long-run dependence in U.S. stock market volatility is best described by a slowly mean-reverting fractionally integrated process. The present study complements this existing time-series-based evidence by comparing the risk-neutralized option pricing distributions from various ARCH-type formulations. Utilizing a panel data set consisting of newly created exchange-traded long-term equity anticipation securities, or LEAPS, on the Standard and Poor's 500 stock market index with maturities ranging up to three years, we find that the degree of mean reversion in the volatility process implicit in these prices is best described by a Fractionally Integrated EGARCH (FIEGARCH) model. © 1999 Elsevier Science S.A. All rights reserved.
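The FIEGARCH recursions themselves are not given in the abstract; as a minimal sketch of why a fractionally integrated volatility process mean-reverts slowly, the snippet below (with an illustrative long-memory parameter d and GARCH persistence beta, not estimates from the paper) computes the coefficients of the fractional-differencing operator (1 - L)^d and contrasts their hyperbolic decay with the exponential decay of a standard GARCH(1,1).

```python
import numpy as np

def frac_diff_weights(d, n_lags):
    """Coefficients of (1 - L)**d expanded as sum_k pi_k L**k,
    using the recursion pi_0 = 1, pi_k = pi_{k-1} * (k - 1 - d) / k."""
    w = np.empty(n_lags + 1)
    w[0] = 1.0
    for k in range(1, n_lags + 1):
        w[k] = w[k - 1] * (k - 1 - d) / k
    return w

d = 0.45                                       # illustrative long-memory parameter
lags = np.arange(1, 251)
pi_k = np.abs(frac_diff_weights(d, 250))[1:]   # hyperbolic decay, magnitude ~ k**(-1 - d)

beta = 0.9                                     # illustrative GARCH(1,1) persistence
garch_k = beta ** lags                         # exponential decay of GARCH lag weights

for k in (1, 10, 50, 100, 250):
    print(f"lag {k:>3}: fractional weight {pi_k[k - 1]:.4f}  vs  GARCH weight {garch_k[k - 1]:.2e}")
```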
Abstract:
Empirical modeling of high-frequency currency market data reveals substantial evidence for nonnormality, stochastic volatility, and other nonlinearities. This paper investigates whether an equilibrium monetary model can account for nonlinearities in weekly data. The model incorporates time-nonseparable preferences and a transaction cost technology. Simulated sample paths are generated using Marcet's parameterized expectations procedure. The paper also develops a new method for estimation of structural economic models. The method forces the model to match (under a GMM criterion) the score function of a nonparametric estimate of the conditional density of observed data. The estimation uses weekly U.S.-German currency market data, 1975-90. © 1995.
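The paper's monetary model and auxiliary density are far richer than anything that fits here; purely as a sketch of the score-matching idea under heavy simplification (a toy AR(1) "structural" model and a Gaussian AR(1) auxiliary model, both assumptions of this example), the code below fits the auxiliary model to observed data and then chooses the structural parameter so that the auxiliary score, averaged over simulated data, is driven toward zero under a GMM criterion.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def simulate_ar1(rho, shocks, burn=200):
    """Toy stand-in for the structural model: an AR(1) driven by the given shocks."""
    y = np.zeros(shocks.size)
    for t in range(1, shocks.size):
        y[t] = rho * y[t - 1] + shocks[t]
    return y[burn:]

def aux_score(y, phi, sigma2):
    """Per-observation score of a Gaussian AR(1) auxiliary model."""
    u = y[1:] - phi * y[:-1]
    s_phi = u * y[:-1] / sigma2
    s_sig = (u ** 2 - sigma2) / (2 * sigma2 ** 2)
    return np.column_stack([s_phi, s_sig])

# 1) "Observed" data (simulated here from rho = 0.8 purely for the example).
y_obs = simulate_ar1(0.8, rng.standard_normal(1500 + 200))

# 2) Fit the auxiliary model to the observed data (OLS = MLE for a Gaussian AR(1)).
phi_hat = np.sum(y_obs[1:] * y_obs[:-1]) / np.sum(y_obs[:-1] ** 2)
sigma2_hat = np.mean((y_obs[1:] - phi_hat * y_obs[:-1]) ** 2)

# 3) GMM step: pick the structural parameter so the auxiliary score, averaged over
#    a long simulation from the structural model, is close to zero.
shocks_sim = rng.standard_normal(20000 + 200)   # common random numbers across evaluations

def objective(theta):
    g = aux_score(simulate_ar1(theta[0], shocks_sim), phi_hat, sigma2_hat).mean(axis=0)
    return g @ g          # identity weighting; an efficient weight matrix could be used

res = minimize(objective, x0=[0.5], bounds=[(-0.99, 0.99)], method="L-BFGS-B")
print("estimated structural parameter:", res.x[0])   # should be close to 0.8
```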
Abstract:
Consistent with the implications from a simple asymmetric information model for the bid-ask spread, we present empirical evidence that the size of the bid-ask spread in the foreign exchange market is positively related to the underlying exchange rate uncertainty. The estimation results are based on an ordered probit analysis that captures the discreteness in the spread distribution, with the uncertainty of the spot exchange rate being quantified through a GARCH-type model. The data set consists of more than 300,000 continuously recorded Deutschemark/dollar quotes over the period from April 1989 to June 1989. © 1994.
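The tick data are not reproduced here; as a rough sketch of how a GARCH-based volatility estimate can enter an ordered probit for a discrete spread, the code below uses synthetic returns and spread categories and assumes the third-party arch and statsmodels packages. It illustrates the general approach, not the paper's exact specification.

```python
import numpy as np
import pandas as pd
from arch import arch_model
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(1)

# Synthetic percentage returns with a slowly varying volatility level (placeholder data).
n = 2000
true_vol = 1 + 0.5 * np.sin(np.arange(n) / 100)
returns = rng.standard_normal(n) * true_vol

# 1) Quantify exchange rate uncertainty via a constant-mean GARCH(1,1) conditional volatility.
garch_res = arch_model(returns).fit(disp="off")
cond_vol = garch_res.conditional_volatility

# 2) A synthetic discrete spread (in ticks) that widens with volatility, for illustration only.
latent = 1.5 * cond_vol + rng.standard_normal(n)
spread_ticks = np.digitize(latent, bins=[1.5, 3.0])          # categories 0, 1, 2

# 3) Ordered probit of the discrete spread on the estimated volatility.
probit_res = OrderedModel(pd.Series(spread_ticks), pd.DataFrame({"cond_vol": cond_vol}),
                          distr="probit").fit(method="bfgs", disp=False)
print(probit_res.params)   # a positive cond_vol coefficient mirrors the paper's finding
```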
Abstract:
Although it has recently been shown that A/J mice are highly susceptible to Staphylococcus aureus sepsis as compared to C57BL/6J, the specific genes responsible for this differential phenotype are unknown. Using chromosome substitution strains (CSS), we found that loci on chromosomes 8, 11, and 18 influence susceptibility to S. aureus sepsis in A/J mice. We then used two candidate gene selection strategies to identify genes on these three chromosomes associated with S. aureus susceptibility, and targeted genes identified by both gene selection strategies. First, we used whole genome transcription profiling to identify 191 (56 on chr. 8, 100 on chr. 11, and 35 on chr. 18) genes on our three chromosomes of interest that are differentially expressed between S. aureus-infected A/J and C57BL/6J. Second, we identified two significant quantitative trait loci (QTL) for survival post-infection on chr. 18 using N2 backcross mice (F1 [C18A] × C57BL/6J). Ten genes on chr. 18 (March3, Cep120, Chmp1b, Dcp2, Dtwd2, Isoc1, Lman1, Spire1, Tnfaip8, and Seh1l) mapped to the two significant QTL regions and were also identified by the expression array selection strategy. Using real-time PCR, 6 of these 10 genes (Chmp1b, Dtwd2, Isoc1, Lman1, Tnfaip8, and Seh1l) showed significantly different expression levels between S. aureus-infected A/J and C57BL/6J. For two (Tnfaip8 and Seh1l) of these 6 genes, siRNA-mediated knockdown of gene expression in S. aureus-challenged RAW264.7 macrophages induced significant changes in the cytokine response (IL-1β and GM-CSF) compared to negative controls. These cytokine response changes were consistent with those seen in S. aureus-challenged peritoneal macrophages from CSS 18 mice (which contain A/J chromosome 18 but are otherwise C57BL/6J), but not C57BL/6J mice. These findings suggest that two genes, Tnfaip8 and Seh1l, may contribute to susceptibility to S. aureus in A/J mice, and represent promising candidates for human genetic susceptibility studies.
Abstract:
BACKGROUND: West Virginia has the worst oral health in the United States, but the reasons for this are unclear. This pilot study explored the etiology of this disparity using culture-independent analyses to identify bacterial species associated with oral disease. METHODS: Bacteria in subgingival plaque samples from twelve participants in two independent West Virginia dental-related studies were characterized using 16S rRNA gene sequencing and Human Oral Microbe Identification Microarray (HOMIM) analysis. UniFrac analysis was used to characterize phylogenetic differences between bacterial communities obtained from plaque of participants with low or high oral disease, which were further evaluated using clustering and Principal Coordinate Analysis. RESULTS: Statistically different bacterial signatures (P<0.001) were identified in subgingival plaque of individuals with low or high oral disease in West Virginia based on 16S rRNA gene sequencing. Low-disease plaque contained a high frequency of Veillonella and Streptococcus, with a moderate number of Capnocytophaga. High-disease plaque exhibited substantially increased bacterial diversity and included a large proportion of Clostridiales cluster bacteria (Selenomonas, Eubacterium, Dialister). Phylogenetic trees constructed from the 16S rRNA gene sequences revealed that Clostridiales were repeated colonizers in plaque associated with high oral disease, providing evidence that the oral environment is somehow influencing the bacterial signature linked to disease. CONCLUSIONS: Culture-independent analyses identified an atypical bacterial signature associated with high oral disease in West Virginians and provided evidence that the oral environment influenced this signature. Both findings provide insight into the etiology of the oral disparity in West Virginia.
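The sequence data themselves are not included here; as a minimal sketch of the ordination step only, the code below implements Principal Coordinate Analysis (classical multidimensional scaling) with numpy on a small, made-up distance matrix standing in for pairwise UniFrac distances between plaque samples.

```python
import numpy as np

def pcoa(D, n_axes=2):
    """Principal Coordinate Analysis (classical MDS) of a distance matrix D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                  # double-centered Gram matrix
    eigvals, eigvecs = np.linalg.eigh(B)
    order = np.argsort(eigvals)[::-1]            # largest eigenvalues first
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    pos = eigvals > 1e-10                        # keep positive axes only
    coords = eigvecs[:, pos] * np.sqrt(eigvals[pos])
    return coords[:, :n_axes], eigvals

# Toy symmetric matrix standing in for pairwise UniFrac distances (values illustrative only).
D = np.array([[0.0, 0.2, 0.7, 0.8],
              [0.2, 0.0, 0.6, 0.7],
              [0.7, 0.6, 0.0, 0.3],
              [0.8, 0.7, 0.3, 0.0]])
coords, eigvals = pcoa(D)
print(coords)   # samples 1-2 and 3-4 separate along the first coordinate axis
```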
Abstract:
We examined trends in the introduction of new chemical entities (NCEs) worldwide from 1982 through 2003. Although annual introductions of NCEs decreased over time, introductions of high-quality NCEs (that is, global and first-in-class NCEs) increased moderately. Both biotech and orphan products enjoyed tremendous growth, especially for cancer treatment. Country-level analyses for 1993-2003 indicate that U.S. firms overtook their European counterparts in innovative performance, that is, in the introduction of first-in-class, biotech, and orphan products. The United States also became the leading market for first launch.
Abstract:
Policy makers and analysts are often faced with situations where it is unclear whether market-based instruments hold real promise of reducing costs, relative to conventional uniform standards. We develop analytic expressions that can be employed with modest amounts of information to estimate the potential cost savings associated with market-based policies, with an application to the environmental policy realm. These simple formulae can identify instruments that merit more detailed investigation. We illustrate the use of these results with an application to nitrogen oxides control by electric utilities in the United States.
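The paper's analytic expressions are not reproduced in the abstract; as an illustration of the logic with assumed numbers, the sketch below compares two hypothetical emission sources with quadratic abatement costs: a uniform standard splits the required abatement equally, a market-based instrument allocates it so that marginal costs are equalized, and the gap between the two total costs is the potential cost saving.

```python
import numpy as np

# Hypothetical quadratic abatement costs C_i(q) = 0.5 * c_i * q**2 (marginal cost c_i * q).
c = np.array([2.0, 8.0])     # source 2 is four times as costly to abate; illustrative only
Q = 10.0                     # total abatement required

def total_cost(q):
    return np.sum(0.5 * c * q ** 2)

# Uniform standard: every source abates the same amount.
q_uniform = np.array([Q / 2, Q / 2])

# Market-based instrument: abatement allocated so that marginal costs c_i * q_i are equal
# across sources (the cost-effective allocation).
q_market = Q * (1 / c) / np.sum(1 / c)

savings = total_cost(q_uniform) - total_cost(q_market)
print(f"uniform cost : {total_cost(q_uniform):.1f}")
print(f"market cost  : {total_cost(q_market):.1f}")
print(f"cost savings : {savings:.1f} ({100 * savings / total_cost(q_uniform):.0f}%)")
```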
Abstract:
Given the increases in spatial resolution and other improvements in climate modeling capabilities over the last decade since the CMIP3 simulations were completed, CMIP5 provides a unique opportunity to assess scientific understanding of climate variability and change over a range of historical and future conditions. With participation from over 20 modeling groups and more than 40 global models, CMIP5 represents the latest and most ambitious coordinated international climate model intercomparison exercise to date. Observations dating back to 1900 show that twenty-first-century temperatures exhibit the largest spatial extent of record-breaking and much-above-normal mean monthly maximum and minimum temperatures. The 20-yr return value of the annual maximum or minimum daily temperature is one measure of changes in rare temperature extremes.
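As a sketch of the return-value metric mentioned in the last sentence, the code below fits a generalized extreme value (GEV) distribution to a synthetic series of annual maximum daily temperatures and reads off the 20-yr return value as the quantile with a 1-in-20 annual exceedance probability; the data and parameters are placeholders, not CMIP5 output (annual minima would be handled by fitting the negated minima).

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(42)

# Synthetic annual maximum daily temperatures (degC) for one location; purely illustrative.
annual_max = genextreme.rvs(c=-0.1, loc=35.0, scale=2.0, size=100, random_state=rng)

# Fit the GEV distribution to the block (annual) maxima.
shape, loc, scale = genextreme.fit(annual_max)

# The T-year return value is the quantile with annual exceedance probability 1/T.
T = 20
return_value_20yr = genextreme.ppf(1 - 1 / T, shape, loc=loc, scale=scale)
print(f"20-yr return value of annual maximum temperature: {return_value_20yr:.1f} degC")
```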
Abstract:
To maintain a strict balance between demand and supply in the US power systems, the Independent System Operators (ISOs) schedule power plants and determine electricity prices using a market clearing model. This model determines, for each time period and power plant, startup and shutdown times, the amount of power produced, and the provision of spinning and non-spinning power generation reserves. Such a deterministic optimization model takes as input the characteristics of all the generating units, such as their installed generation capacity, ramp rates, minimum up and down time requirements, and marginal production costs, as well as forecasts of intermittent energy such as wind and solar, along with the minimum reserve requirement of the whole system. This reserve requirement is determined based on the likelihood of outages on the supply side and on the levels of forecast error in demand and intermittent generation. With increased installed capacity of intermittent renewable energy, determining the appropriate level of reserve requirements has become harder. Stochastic market clearing models have been proposed as an alternative to deterministic market clearing models. Rather than using fixed reserve targets as an input, stochastic market clearing models take different scenarios of wind power into consideration and determine reserve schedules as output. Using a scaled version of the power generation system of PJM, a regional transmission organization (RTO) that coordinates the movement of wholesale electricity in all or parts of 13 states and the District of Columbia, and wind scenarios generated from BPA (Bonneville Power Administration) data, this paper compares the performance of stochastic and deterministic market clearing models. The two models are compared in their ability to contribute to the affordability, reliability, and sustainability of the electricity system, measured in terms of total operational costs, load shedding, and air emissions. The process of building and testing the models indicates that a fair comparison is difficult to obtain because of the multi-dimensional performance metrics considered here and the difficulty of setting up the model parameters in a way that does not advantage or disadvantage one modeling framework. Along these lines, this study explores the effect that model assumptions such as reserve requirements, value of lost load (VOLL), and wind spillage costs have on the comparison of stochastic versus deterministic market clearing models.
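PJM's actual market clearing problem is a large multi-period mixed-integer program; the sketch below clears a single hour for three hypothetical generators with scipy's linear programming solver, choosing energy and spinning reserve to minimize cost subject to an energy balance, a fixed reserve requirement, and capacity limits. Commitment decisions, ramping, and wind scenarios are omitted, so it only illustrates the structure of the deterministic formulation.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical single-hour system: three generators, one demand, one reserve product.
cap = np.array([100.0, 80.0, 50.0])      # MW capacity
c_energy = np.array([20.0, 35.0, 60.0])  # $/MWh marginal cost of energy
c_reserve = np.array([2.0, 3.0, 5.0])    # $/MW offer for spinning reserve
demand = 150.0                           # MW
reserve_req = 30.0                       # MW fixed reserve requirement (deterministic model)

# Decision vector x = [p1, p2, p3, r1, r2, r3]: energy and reserve per generator.
c = np.concatenate([c_energy, c_reserve])

# Energy balance: p1 + p2 + p3 = demand.
A_eq = np.array([[1, 1, 1, 0, 0, 0]], dtype=float)
b_eq = np.array([demand])

# Capacity: p_i + r_i <= cap_i, and reserve: r1 + r2 + r3 >= reserve_req (written as <=).
A_ub = np.vstack([
    np.hstack([np.eye(3), np.eye(3)]),
    [[0, 0, 0, -1, -1, -1]],
])
b_ub = np.concatenate([cap, [-reserve_req]])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * 6, method="highs")
p, r = res.x[:3], res.x[3:]
print("dispatch (MW):", p.round(1), " reserve (MW):", r.round(1),
      " total cost ($):", round(res.fun, 1))
```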
Abstract:
If and only if each single cue uniquely defines its target, an independence model based on fragment theory can predict the strength of a combined dual cue from the strengths of its single cue components. If the single cues do not each uniquely define their target, no single monotonic function can predict the strength of the dual cue from its components; rather, what matters is the number of possible targets. The probability of generating a target word was .19 for rhyme cues, .14 for category cues, and .97 for rhyme-plus-category dual cues. Moreover, some pairs of cues had probabilities of producing their targets of .03 when used individually and 1.00 when used together, whereas other pairs had moderate probabilities individually and together. The results, which are interpreted in terms of multiple constraints limiting the number of responses, show why rhymes, which play a minimal role in laboratory studies of memory, are common in real-world mnemonics.
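As a quick check on the reported numbers, one simple way to write an independence prediction (an assumption of this example, not necessarily the paper's exact fragment-theory formulation) is that the dual cue succeeds whenever either single cue alone would have; the snippet below shows how far the observed dual-cue probability exceeds that prediction.

```python
p_rhyme, p_category = 0.19, 0.14
p_dual_observed = 0.97

# Simple independence combination rule: the dual cue succeeds if either
# single cue would have succeeded on its own.
p_dual_independent = 1 - (1 - p_rhyme) * (1 - p_category)

print(f"independence prediction: {p_dual_independent:.2f}")   # about 0.30
print(f"observed dual-cue value: {p_dual_observed:.2f}")      # far higher when the two cues
                                                              # jointly pin down a single target
```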
Abstract:
Economic analyses of climate change policies frequently focus on reductions of energy-related carbon dioxide emissions via market-based, economy-wide policies. The current course of environment and energy policy debate in the United States, however, suggests an alternative outcome: sector-based and/or inefficiently designed policies. This paper uses a collection of specialized, sector-based models in conjunction with a computable general equilibrium model of the economy to examine and compare these policies at an aggregate level. We examine the relative cost of different policies designed to achieve the same quantity of emission reductions. We find that excluding a limited number of sectors from an economy-wide policy does not significantly raise costs. Focusing policy solely on the electricity and transportation sectors doubles costs, however, and using non-market policies can raise costs by a factor of ten. These results are driven in part by, and are sensitive to, our modeling of pre-existing tax distortions. Copyright © 2006 by the IAEE. All rights reserved.
Abstract:
Market failures associated with environmental pollution interact with market failures associated with the innovation and diffusion of new technologies. These combined market failures provide a strong rationale for a portfolio of public policies that foster emissions reduction as well as the development and adoption of environmentally beneficial technology. Both theory and empirical evidence suggest that the rate and direction of technological advance are influenced by market and regulatory incentives and can be cost-effectively harnessed through the use of economic-incentive-based policy. In the presence of weak or nonexistent environmental policies, investments in the development and diffusion of new environmentally beneficial technologies are very likely to be less than would be socially desirable. Positive knowledge and adoption spillovers and information problems can further weaken innovation incentives. While environmental technology policy is fraught with difficulties, a long-term view suggests a strategy of experimenting with policy approaches and systematically evaluating their success. © 2005 Elsevier B.V. All rights reserved.
Abstract:
Determination of copy number variants (CNVs) inferred from genome-wide single nucleotide polymorphism arrays has shown increasing utility in genetic variant disease associations. Several CNV detection methods are available, but differences in CNV call thresholds and characteristics exist. We evaluated the relative performance of seven methods: circular binary segmentation, CNVFinder, cnvPartition, gain and loss of DNA, Nexus algorithms, PennCNV, and QuantiSNP. Tested data included real and simulated Illumina HumanHap 550 data from the Singapore Cohort Study of the Risk Factors for Myopia (SCORM) and simulated data from Affymetrix 6.0 and platform-independent distributions. The normalized singleton ratio (NSR) is proposed as a metric for parameter optimization before enacting full analysis. We used 10 SCORM samples to optimize parameter settings for each method and then evaluated method performance at the optimal parameters using 100 SCORM samples. Statistical power, false positive rates, and receiver operating characteristic (ROC) curve residuals were evaluated in simulation studies. Optimal parameters, as determined by NSR and ROC curve residuals, were consistent across datasets. QuantiSNP outperformed the other methods based on ROC curve residuals over most datasets. Nexus Rank and SNPRank have low specificity and high power. Nexus Rank calls oversized CNVs. PennCNV detects among the fewest CNVs.
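The CNV callers themselves are not shown here; as a minimal sketch of how power and false positive rates can be tabulated in a simulation study, the code below scores a set of hypothetical called CNV intervals against simulated true intervals on a per-probe basis (the interval coordinates are made up for illustration).

```python
import numpy as np

n_probes = 1000   # probes along one simulated chromosome

def intervals_to_mask(intervals, n):
    """Convert a list of (start, end) probe-index intervals into a boolean mask."""
    mask = np.zeros(n, dtype=bool)
    for start, end in intervals:
        mask[start:end] = True
    return mask

# Hypothetical simulated truth and the calls from one CNV detection method.
true_cnvs   = [(100, 150), (400, 420), (700, 760)]
called_cnvs = [(95, 155), (405, 415), (820, 840)]

truth = intervals_to_mask(true_cnvs, n_probes)
calls = intervals_to_mask(called_cnvs, n_probes)

tp = np.sum(calls & truth)
fp = np.sum(calls & ~truth)
power = tp / truth.sum()      # per-probe sensitivity ("statistical power")
fpr = fp / (~truth).sum()     # per-probe false positive rate
print(f"power = {power:.2f}, false positive rate = {fpr:.4f}")
```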