14 results for Market Sensing

at Duke University


Relevance: 20.00%

Abstract:

Recent empirical findings suggest that the long-run dependence in U.S. stock market volatility is best described by a slowly mean-reverting fractionally integrated process. The present study complements this existing time-series-based evidence by comparing the risk-neutralized option pricing distributions from various ARCH-type formulations. Utilizing a panel data set consisting of newly created exchange-traded long-term equity anticipation securities, or LEAPS, on the Standard & Poor's 500 stock market index with maturity times ranging up to three years, we find that the degree of mean reversion in the volatility process implicit in these prices is best described by a Fractionally Integrated EGARCH (FIEGARCH) model. © 1999 Elsevier Science S.A. All rights reserved.
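
For readers unfamiliar with the model class, the FIEGARCH specification named in the abstract augments the EGARCH log-variance equation with a fractional differencing operator, so that volatility shocks decay hyperbolically rather than exponentially. A schematic form of the FIEGARCH(1,d,1) dynamics, in generic notation rather than anything quoted from this paper, is:

```latex
% Schematic FIEGARCH(1,d,1) log-variance dynamics (generic notation).
\[
  \ln\sigma_t^{2}
  \;=\; \omega \;+\; \frac{1+\psi L}{1-\phi L}\,(1-L)^{-d}\, g(z_{t-1}),
  \qquad
  g(z) \;=\; \theta z + \gamma\bigl(\lvert z\rvert - \mathrm{E}\lvert z\rvert\bigr),
\]
```

where L is the lag operator and z_t are standardized innovations; d = 0 recovers the ordinary EGARCH model, d = 1 an integrated EGARCH, and 0 < d < 1 the slowly mean-reverting, long-memory case the abstract refers to.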

Relevance: 20.00%

Abstract:

Empirical modeling of high-frequency currency market data reveals substantial evidence for nonnormality, stochastic volatility, and other nonlinearities. This paper investigates whether an equilibrium monetary model can account for nonlinearities in weekly data. The model incorporates time-nonseparable preferences and a transaction cost technology. Simulated sample paths are generated using Marcet's parameterized expectations procedure. The paper also develops a new method for estimation of structural economic models. The method forces the model to match (under a GMM criterion) the score function of a nonparametric estimate of the conditional density of observed data. The estimation uses weekly U.S.-German currency market data, 1975-90. © 1995.
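
The estimation method sketched in the last sentences is in the spirit of score matching under a GMM criterion (efficient method of moments): the score of a fitted auxiliary nonparametric conditional density, averaged over data simulated from the structural model, is driven toward zero. A schematic statement, in generic notation assumed here rather than taken from the paper, is:

```latex
% Score-matching / EMM-style criterion (generic notation).
\[
  \hat{\theta} \;=\; \arg\min_{\theta}\; m_N(\theta)'\,\hat{W}\, m_N(\theta),
  \qquad
  m_N(\theta) \;=\; \frac{1}{N}\sum_{n=1}^{N}
  \frac{\partial}{\partial\eta}\,
  \ln \hat{f}\bigl(\hat{y}_n(\theta)\,\big|\,\hat{x}_n(\theta);\hat{\eta}\bigr),
\]
```

where f-hat is the estimated auxiliary conditional density with parameters eta-hat, {y_n(theta), x_n(theta)} is a long sample path simulated from the structural model at candidate parameter theta, and W-hat is a GMM weighting matrix (the inverse of the score covariance in the efficient case).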

Relevance: 20.00%

Abstract:

Consistent with the implications from a simple asymmetric information model for the bid-ask spread, we present empirical evidence that the size of the bid-ask spread in the foreign exchange market is positively related to the underlying exchange rate uncertainty. The estimation results are based on an ordered probit analysis that captures the discreteness in the spread distribution, with the uncertainty of the spot exchange rate being quantified through a GARCH-type model. The data set consists of more than 300,000 continuously recorded Deutschemark/dollar quotes over the period from April 1989 to June 1989. © 1994.
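
Schematically, and in generic notation assumed here rather than taken from the paper, the empirical design couples an ordered probit for the discrete spread categories with a GARCH(1,1)-type measure of spot-rate uncertainty entering as a regressor:

```latex
% Ordered probit for the discrete spread with a GARCH-type volatility regressor
% (generic notation, illustrative only).
\[
  s_t^{*} = \beta\,\hat{\sigma}_t + \varepsilon_t, \qquad
  \varepsilon_t \sim \mathcal{N}(0,1), \qquad
  s_t = j \;\;\text{if}\;\; \alpha_{j-1} < s_t^{*} \le \alpha_j,
\]
\[
  r_t = \mu + u_t, \qquad
  u_t \mid \mathcal{F}_{t-1} \sim \mathcal{N}(0,\sigma_t^{2}), \qquad
  \sigma_t^{2} = \omega + a\,u_{t-1}^{2} + b\,\sigma_{t-1}^{2},
\]
```

where s_t is the observed spread category with cutpoints alpha_j and sigma-hat_t is the fitted conditional volatility of the spot return r_t. The positive spread-uncertainty relation reported above corresponds to beta > 0.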

Relevance: 20.00%

Abstract:

The authors explore nanoscale sensor processor (nSP) architectures. Their design includes a simple accumulator-based instruction-set architecture, sensors, limited memory, and instruction-fused sensing. Using nSP technology based on optical resonance energy transfer logic helps them decrease the design's size; their smallest design is about the size of the largest-known virus. © 2006 IEEE.
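
To make "accumulator-based instruction-set architecture" and "instruction-fused sensing" concrete, here is a minimal hypothetical interpreter; the opcodes, memory size, and program are illustrative assumptions, not the nSP design described in the paper:

```python
# Minimal sketch of a hypothetical accumulator-based ISA with a fused SENSE op.
# Opcode names and semantics are illustrative, not the nSP specification.

def run(program, sensor_reads, memory_size=16):
    mem = [0] * memory_size          # tiny data memory
    acc = 0                          # single accumulator register
    sensors = iter(sensor_reads)     # models instruction-fused sensing
    pc = 0
    while pc < len(program):
        op, arg = program[pc]
        if op == "SENSE":            # load the next sensor sample into the accumulator
            acc = next(sensors)
        elif op == "LOAD":           # accumulator <- memory[arg]
            acc = mem[arg]
        elif op == "STORE":          # memory[arg] <- accumulator
            mem[arg] = acc
        elif op == "ADD":            # accumulator += memory[arg]
            acc += mem[arg]
        elif op == "SUBI":           # accumulator -= immediate
            acc -= arg
        elif op == "JGZ":            # branch if accumulator > 0
            if acc > 0:
                pc = arg
                continue
        elif op == "HALT":
            break
        pc += 1
    return acc, mem

# Example: accumulate two sensor samples and record whether their sum exceeds 10.
prog = [("SENSE", None), ("STORE", 0), ("SENSE", None), ("ADD", 0),
        ("SUBI", 10), ("JGZ", 7), ("HALT", None), ("STORE", 1), ("HALT", None)]
print(run(prog, sensor_reads=[6, 7]))
```

The sketch only illustrates how a single accumulator register plus a sensor-reading opcode can express simple sense-and-threshold programs.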

Relevance: 20.00%

Abstract:

This study involves two aspects of our investigations of plasmonics-active systems: (i) theoretical and simulation studies and (ii) experimental fabrication of plasmonics-active nanostructures. Two types of nanostructures are selected as the model systems for their unique plasmonic properties: (1) nanoparticles and (2) nanowires on substrate. Special focus is devoted to regions where the electromagnetic field is strongly concentrated by the metallic nanostructures or between nanostructures. The theoretical investigations deal with dimers of nanoparticles and nanoshells using a semi-analytical method based on a multipole expansion (ME) and the finite-element method (FEM) in order to determine the electromagnetic enhancement, especially at the interface areas of two adjacent nanoparticles. The experimental study involves the design of plasmonics-active nanowire arrays on substrates that can provide efficient electromagnetic enhancement in regions around and between the nanostructures. Fabrication of these nanowire structures over large chip-scale areas (from a few millimeters to a few centimeters) is described, together with finite-difference time-domain (FDTD) simulations that estimate the electromagnetic fields between the nanowires. The application of these nanowire chips using surface-enhanced Raman scattering (SERS) for detection of chemicals and labeled DNA molecules is described to illustrate the potential of the plasmonic chips for sensing.

Relevance: 20.00%

Abstract:

Light is a universal signal perceived by organisms, including fungi, in which light regulates common and unique biological processes depending on the species. Previous research has established that conserved proteins, originally called White collar 1 and 2 from the ascomycete Neurospora crassa, regulate UV/blue light sensing. Homologous proteins function in distant relatives of N. crassa, including the basidiomycetes and zygomycetes, which diverged as long as a billion years ago. Here we conducted microarray experiments on the basidiomycete fungus Cryptococcus neoformans to identify light-regulated genes. Surprisingly, only a single gene was induced by light above the commonly used twofold threshold. This gene, HEM15, is predicted to encode a ferrochelatase that catalyses the final step in haem biosynthesis from highly photoreactive porphyrins. The C. neoformans gene complements a Saccharomyces cerevisiae hem15Delta strain and is essential for viability, and the Hem15 protein localizes to mitochondria, three lines of evidence that the gene encodes ferrochelatase. Regulation of HEM15 by light suggests a mechanism by which bwc1/bwc2 mutants are photosensitive and exhibit reduced virulence. We show that ferrochelatase is also light-regulated in a white collar-dependent fashion in N. crassa and the zygomycete Phycomyces blakesleeanus, indicating that ferrochelatase is an ancient target of photoregulation in the fungal kingdom.

Relevance: 20.00%

Abstract:

Previous studies have shown that the isoplanatic distortion due to turbulence and the image of a remote object may be jointly estimated from the 4D mutual intensity across an aperture. This Letter shows that decompressive inference on a 2D slice of the 4D mutual intensity, as measured by a rotational shear interferometer, is sufficient for estimation of sparse objects imaged through turbulence. The 2D slice is processed using an iterative algorithm that alternates between estimating the sparse objects and estimating the turbulence-induced phase screen. This approach may enable new systems that infer object properties through turbulence without exhaustive sampling of coherence functions.
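
As a heavily simplified illustration of the alternating idea (this toy uses a single global phase unknown and a generic random projection as stand-ins for the Letter's turbulence phase screen and rotational-shear measurement, which are far richer), one can interleave a soft-thresholding step for the sparse object with a closed-form phase update:

```python
# Toy alternating estimation of a sparse object and an unknown phase factor.
# Illustration only: the forward model, noise level, and step sizes are
# assumptions chosen to show the structure of the iteration, not the
# algorithm of the Letter.
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 128, 64, 4                        # object length, measurements, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)

x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.uniform(1.0, 2.0, k)
phi_true = 2.1                              # unknown phase (stand-in for turbulence)
y = np.exp(1j * phi_true) * (A @ x_true) + 0.01 * rng.standard_normal(m)

lam = 0.05
step = 1.0 / np.linalg.norm(A, 2) ** 2
x, phi = np.zeros(n), 0.0
for _ in range(300):
    # (1) closed-form phase update given the current object estimate
    ip = np.sum((A @ x) * y)
    if np.abs(ip) > 1e-12:
        phi = np.angle(ip)
    # (2) one proximal-gradient (soft-thresholding) step on the sparse object
    b = np.real(np.exp(-1j * phi) * y)      # de-rotated data
    x = x - step * (A.T @ (A @ x - b))
    x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)

# Note: (x, phi) is identifiable only up to a joint sign flip / pi phase shift.
print("true support     :", np.nonzero(x_true)[0].tolist())
print("estimated support:", np.nonzero(np.abs(x) > 0.1)[0].tolist())
```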

Relevance: 20.00%

Abstract:

Hydrologic research is a very demanding application of fiber-optic distributed temperature sensing (DTS) in terms of precision, accuracy and calibration. The physics behind the most frequently used DTS instruments is considered as it applies to four calibration methods for single-ended DTS installations. The new methods presented are more accurate than the instrument-calibrated data, achieving accuracies on the order of tenths of a degree root mean square error (RMSE) and mean bias. Effects of localized non-uniformities that violate the assumptions of single-ended calibration data are explored and quantified. Experimental design considerations, such as the selection of integration times or the length of the reference sections, are discussed, and the impacts of these considerations on calibrated temperatures are explored in two case studies.
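
For context, single-ended Raman DTS calibration is typically built on the ratio of Stokes to anti-Stokes backscatter intensities; a commonly used schematic form of the temperature equation (with the study's specific parameter-fitting strategies omitted) is:

```latex
% Standard single-ended Raman DTS temperature equation (schematic form).
\[
  T(z) \;=\; \frac{\gamma}{\ln\frac{P_S(z)}{P_{aS}(z)} \;+\; C \;-\; \Delta\alpha\, z},
\]
```

where T(z) is the temperature at distance z along the fiber, P_S and P_aS are the Stokes and anti-Stokes powers, gamma depends on the Raman energy shift, C lumps instrument-specific constants, and Delta-alpha is the differential attenuation between the two bands. The calibration methods compared in the study amount to different ways of determining these parameters from reference sections of known temperature.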

Relevance: 20.00%

Abstract:

Policy makers and analysts are often faced with situations where it is unclear whether market-based instruments hold real promise of reducing costs, relative to conventional uniform standards. We develop analytic expressions that can be employed with modest amounts of information to estimate the potential cost savings associated with market-based policies, with an application to the environmental policy realm. These simple formulae can identify instruments that merit more detailed investigation. We illustrate the use of these results with an application to nitrogen oxides control by electric utilities in the United States.
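
The abstract does not reproduce the formulae, but the intuition can be shown with a standard two-firm textbook calculation (illustrative numbers and quadratic abatement costs assumed here, not the authors' expressions): savings arise because a market-based instrument equalizes marginal abatement costs, whereas a uniform standard forces equal abatement regardless of cost.

```python
# Illustrative two-firm comparison: uniform standard vs. cost-effective
# (market-based) allocation of a fixed total abatement requirement.
# Quadratic abatement costs C_i(q) = 0.5 * c_i * q^2 are an assumption
# for this example, not the functional form used in the paper.

def total_cost(q1, q2, c1, c2):
    return 0.5 * c1 * q1**2 + 0.5 * c2 * q2**2

c1, c2 = 2.0, 8.0      # marginal-cost slopes: firm 1 abates more cheaply
R = 10.0               # required total abatement

# Uniform standard: each firm abates the same amount.
uniform = total_cost(R / 2, R / 2, c1, c2)

# Market-based outcome: marginal costs equalize (c1*q1 = c2*q2, q1 + q2 = R).
q1 = c2 * R / (c1 + c2)
q2 = c1 * R / (c1 + c2)
market = total_cost(q1, q2, c1, c2)

print(f"uniform standard cost : {uniform:.1f}")
print(f"market-based cost     : {market:.1f}")
print(f"estimated cost savings: {uniform - market:.1f} ({100*(uniform-market)/uniform:.0f}%)")
```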

Relevance: 20.00%

Abstract:

This paper introduces the concept of adaptive temporal compressive sensing (CS) for video. We propose a CS algorithm to adapt the compression ratio based on the scene's temporal complexity, computed from the compressed data, without compromising the quality of the reconstructed video. The temporal adaptivity is manifested by manipulating the integration time of the camera, opening the possibility of real-time implementation. The proposed algorithm is a generalized temporal CS approach that can be incorporated with a diverse set of existing hardware systems. © 2013 IEEE.
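
A rough sketch of the control idea follows; the difference-energy proxy for temporal complexity, the thresholds, and the frame counts are hypothetical stand-ins, not the statistic or hardware interface used in the paper.

```python
# Hypothetical controller for adaptive temporal compressive sensing.
# Complexity proxy and thresholds are illustrative assumptions only.
import numpy as np

def temporal_complexity(y_prev, y_curr):
    """Relative change between consecutive compressed snapshots."""
    return np.linalg.norm(y_curr - y_prev) / (np.linalg.norm(y_prev) + 1e-12)

def choose_frames_per_snapshot(complexity, current_T, T_min=4, T_max=32,
                               low=0.05, high=0.25):
    """Code more frames per snapshot (longer integration, higher compression)
    when the scene is quiet; fewer when it changes quickly."""
    if complexity > high:
        return max(T_min, current_T // 2)
    if complexity < low:
        return min(T_max, current_T * 2)
    return current_T

# Toy usage with random stand-ins for compressed measurement vectors.
rng = np.random.default_rng(1)
T = 16
y_prev = rng.standard_normal(256)
for step in range(5):
    y_curr = y_prev + 0.02 * rng.standard_normal(256)   # a quiet scene
    c = temporal_complexity(y_prev, y_curr)
    T = choose_frames_per_snapshot(c, T)
    print(f"step {step}: complexity={c:.3f}, frames per snapshot -> {T}")
    y_prev = y_curr
```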

Relevance: 20.00%

Abstract:

A framework for adaptive and non-adaptive statistical compressive sensing is developed, where a statistical model replaces the standard sparsity model of classical compressive sensing. Within this framework we propose optimal task-specific sensing protocols jointly designed for classification and reconstruction. A two-step adaptive sensing paradigm is developed, where online sensing is applied to detect the signal class in the first step, followed by a reconstruction step adapted to the detected class and the observed samples. The approach is based on information theory, here tailored for Gaussian mixture models (GMMs), where an information-theoretic objective relating the sensed signals to a representation of the specific task of interest is maximized. Experimental results using synthetic signals, Landsat satellite attributes, and natural images of different sizes and with different noise levels show the improvements achieved using the proposed framework when compared to more standard sensing protocols. The underlying formulation can be applied beyond GMMs, at the price of higher mathematical and computational complexity. © 1991-2012 IEEE.
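
As a simplified illustration of the two-step idea under a GMM prior — using plain Gaussian random projections rather than the information-theoretically optimized, task-specific sensing matrices proposed in the paper, and a randomly generated mixture — one can first pick the class that maximizes the likelihood of the compressed measurements and then reconstruct with that class's conditional (Wiener-type) posterior mean:

```python
# Two-step statistical compressive sensing under a GMM prior (illustrative).
# The sensing matrix here is a plain Gaussian projection, NOT the optimized,
# task-specific design proposed in the paper.
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)
n, m, K, sigma = 20, 8, 3, 0.05

# Random GMM prior: class means and (low-rank + ridge) covariances.
mus = [rng.standard_normal(n) for _ in range(K)]
Sigmas = []
for _ in range(K):
    B = rng.standard_normal((n, 3))
    Sigmas.append(B @ B.T + 0.01 * np.eye(n))

# Draw a signal from class 1 and sense it.
k_true = 1
x = rng.multivariate_normal(mus[k_true], Sigmas[k_true])
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
y = Phi @ x + sigma * rng.standard_normal(m)

# Step 1: detect the signal class from the compressed measurements.
def loglik(y, k):
    cov = Phi @ Sigmas[k] @ Phi.T + sigma**2 * np.eye(m)
    return multivariate_normal(mean=Phi @ mus[k], cov=cov).logpdf(y)

k_hat = max(range(K), key=lambda k: loglik(y, k))

# Step 2: class-conditional posterior-mean (Wiener-type) reconstruction.
S = Sigmas[k_hat]
G = S @ Phi.T @ np.linalg.inv(Phi @ S @ Phi.T + sigma**2 * np.eye(m))
x_hat = mus[k_hat] + G @ (y - Phi @ mus[k_hat])

print("detected class:", k_hat, "| true class:", k_true)
print("relative reconstruction error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```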

Relevance: 20.00%

Abstract:

To maintain a strict balance between demand and supply in U.S. power systems, the Independent System Operators (ISOs) schedule power plants and determine electricity prices using a market clearing model. For each time period and power plant, this model determines startup and shutdown times, the amount of power production, and the provisioning of spinning and non-spinning power generation reserves. Such a deterministic optimization model takes as input the characteristics of all generating units, such as installed generation capacity, ramp rates, minimum up and down time requirements, and marginal production costs, as well as forecasts of intermittent energy such as wind and solar, along with the minimum reserve requirement of the whole system. This reserve requirement is determined based on the likelihood of outages on the supply side and on the level of forecast error in demand and intermittent generation. With increased installed capacity of intermittent renewable energy, determining the appropriate level of reserve requirements has become harder. Stochastic market clearing models have been proposed as an alternative to deterministic market clearing models. Rather than using a fixed reserve target as an input, stochastic market clearing models take different scenarios of wind power into consideration and determine the reserve schedule as an output. Using a scaled version of the power generation system of PJM, a regional transmission organization (RTO) that coordinates the movement of wholesale electricity in all or parts of 13 states and the District of Columbia, and wind scenarios generated from BPA (Bonneville Power Administration) data, this paper compares the performance of stochastic and deterministic market clearing models. The two models are compared in their ability to contribute to the affordability, reliability and sustainability of the electricity system, measured in terms of total operational costs, load shedding and air emissions. The process of building and testing the models indicates that a fair comparison is difficult to obtain, because of the multi-dimensional performance metrics considered here and the difficulty of setting the models' parameters in a way that does not advantage or disadvantage one modeling framework. Along these lines, this study explores the effect that model assumptions such as reserve requirements, value of lost load (VOLL) and wind spillage costs have on the comparison of the performance of stochastic versus deterministic market clearing models.
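
To fix ideas about what a fixed reserve requirement does in the deterministic clearing model, here is a drastically simplified, purely illustrative dispatch problem: no unit-commitment binaries, ramping, multiple periods, network constraints, or wind scenarios, and with made-up generator data rather than the scaled PJM system used in the paper.

```python
# Toy deterministic dispatch with a fixed reserve requirement, solved as an LP.
# Generator data, demand, and the reserve target are illustrative only.
from scipy.optimize import linprog

# Three generators: marginal cost ($/MWh) and capacity (MW).
cost = [20.0, 35.0, 60.0]
cap  = [400.0, 300.0, 200.0]
demand, reserve_req = 600.0, 100.0

# Variables: p1, p2, p3 (energy) and r1, r2, r3 (reserve held back).
# Minimize energy cost; reserve is assumed costless here for simplicity.
c = cost + [0.0, 0.0, 0.0]

# Equality: p1 + p2 + p3 = demand.
A_eq = [[1, 1, 1, 0, 0, 0]]
b_eq = [demand]

# Inequalities (linprog uses A_ub @ x <= b_ub):
#   -(r1 + r2 + r3) <= -reserve_req     total reserve meets the requirement
#   p_i + r_i <= cap_i                  energy plus reserve within capacity
A_ub = [[0, 0, 0, -1, -1, -1],
        [1, 0, 0, 1, 0, 0],
        [0, 1, 0, 0, 1, 0],
        [0, 0, 1, 0, 0, 1]]
b_ub = [-reserve_req] + cap

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * 6)
p, r = res.x[:3], res.x[3:]
print("dispatch (MW):", [round(v, 1) for v in p])
print("reserve  (MW):", [round(v, 1) for v in r])
print("energy cost ($/h):", round(res.fun, 1))
```

A stochastic clearing model would replace the fixed reserve_req with explicit wind scenarios and recourse decisions, which is the comparison the abstract describes.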

Relevance: 20.00%

Abstract:

Market failures associated with environmental pollution interact with market failures associated with the innovation and diffusion of new technologies. These combined market failures provide a strong rationale for a portfolio of public policies that foster emissions reduction as well as the development and adoption of environmentally beneficial technology. Both theory and empirical evidence suggest that the rate and direction of technological advance is influenced by market and regulatory incentives, and can be cost-effectively harnessed through the use of economic-incentive based policy. In the presence of weak or nonexistent environmental policies, investments in the development and diffusion of new environmentally beneficial technologies are very likely to be less than would be socially desirable. Positive knowledge and adoption spillovers and information problems can further weaken innovation incentives. While environmental technology policy is fraught with difficulties, a long-term view suggests a strategy of experimenting with policy approaches and systematically evaluating their success. © 2005 Elsevier B.V. All rights reserved.