938 results for implied volatility function models
Abstract:
In applied work economists often seek to relate a given response variable y to some causal parameter mu* associated with it. This parameter usually represents a summarization of the distribution of y based on some explanatory variables, such as a regression function, and treating it as a conditional expectation is central to its identification and estimation. However, the interpretation of mu* as a conditional expectation breaks down if some or all of the explanatory variables are endogenous. This is not a problem when mu* is modelled as a parametric function of explanatory variables because it is well known how instrumental variables techniques can be used to identify and estimate mu*. In contrast, handling endogenous regressors in nonparametric models, where mu* is regarded as fully unknown, presents difficult theoretical and practical challenges. In this paper we consider an endogenous nonparametric model based on a conditional moment restriction. We investigate identification-related properties of this model when the unknown function mu* belongs to a linear space. We also investigate underidentification of mu* along with the identification of its linear functionals. Several examples are provided in order to develop intuition about identification and estimation for endogenous nonparametric regression and related models.
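For concreteness, the setup described here can be written as a conditional moment restriction (a standard formulation of nonparametric instrumental-variables regression; the notation below mirrors the abstract but is not quoted from the paper):

$$ y = \mu^*(x) + u, \qquad E[u \mid W] = 0 \;\; \text{w.p.1}, $$

so that identification of $\mu^*$ on the chosen linear space amounts to solving the integral equation

$$ E[y \mid W = w] = \int \mu^*(x)\, dF_{X \mid W}(x \mid w) \quad \text{for almost all } w. $$

When this equation has multiple solutions in the space, $\mu^*$ is underidentified, although particular linear functionals of $\mu^*$ may still be identified.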
Abstract:
Lovell and Rouse (LR) have recently proposed a modification of the standard DEA model that overcomes the infeasibility problem often encountered in computing super-efficiency. In the LR procedure one appropriately scales up the observed input vector (or scales down the output vector) of the relevant super-efficient firm, thereby usually creating its inefficient surrogate. An alternative procedure proposed in this paper uses the directional distance function introduced by Chambers, Chung, and Färe and the resulting Nerlove-Luenberger (NL) measure of super-efficiency. Because the directional distance function combines features of both an input-oriented and an output-oriented model, it generally leads to a more complete ranking of the observations than either of the oriented models. An added advantage of this approach is that the NL super-efficiency measure is unique and does not depend on any arbitrary choice of a scaling parameter. A data set on international airlines from Coelli, Perelman, and Grifell-Tatjé (2002) is utilized in an illustrative empirical application.
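As a rough illustration of the kind of computation involved, the sketch below sets up a directional-distance super-efficiency score for one firm as a linear program, assuming constant returns to scale and the directional vector g = (x_k, y_k). This is a hypothetical reconstruction, not the authors' code; the firm data are invented, and the reported NL measure in the paper may be a further transformation of the optimal beta.

```python
import numpy as np
from scipy.optimize import linprog

def nl_super_efficiency(X, Y, k):
    """Directional-distance super-efficiency of firm k (rows = firms).

    Sketch under CRS with direction g = (x_k, y_k): firm k is excluded
    from the reference set and we solve
        max beta
        s.t. sum_j lam_j x_j <= x_k - beta * x_k   (inputs)
             sum_j lam_j y_j >= y_k + beta * y_k   (outputs)
             lam_j >= 0, beta free.
    A negative optimum indicates a super-efficient firm.
    """
    n, m = X.shape                       # firms, inputs
    others = [j for j in range(n) if j != k]
    Xo, Yo = X[others].T, Y[others].T    # m x (n-1), s x (n-1)
    xk, yk = X[k], Y[k]

    # Decision vector z = (lam_1..lam_{n-1}, beta); linprog minimizes -beta.
    c = np.zeros(n)
    c[-1] = -1.0

    # Inputs:  Xo @ lam + beta * xk <= xk
    A_in = np.hstack([Xo, xk.reshape(-1, 1)])
    # Outputs: -Yo @ lam + beta * yk <= -yk
    A_out = np.hstack([-Yo, yk.reshape(-1, 1)])
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.concatenate([xk, -yk])
    bounds = [(0, None)] * (n - 1) + [(None, None)]

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return -res.fun if res.success else np.nan

# Toy data: 4 firms, 2 inputs, 1 output (illustrative only).
X = np.array([[2.0, 3.0], [3.0, 2.0], [4.0, 5.0], [2.5, 2.5]])
Y = np.array([[1.0], [1.0], [1.0], [1.2]])
print([round(nl_super_efficiency(X, Y, k), 3) for k in range(4)])
```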
Abstract:
This paper shows that optimal policy and consistent policy outcomes require the use of control-theory and game-theory solution techniques, respectively. While optimal policy and consistent policy often produce different outcomes even in a one-period model, we analyze consistent policy and its outcome in a simple model, finding that the cause of the inconsistency with optimal policy traces to inconsistent targets in the social loss function. As a result, the central bank should adopt a loss function that differs from the social loss function. Carefully designing the central bank's loss function with consistent targets can harmonize optimal and consistent policy. This desirable result emerges from two observations. First, the social loss function reflects a normative process that does not necessarily prove consistent with the structure of the microeconomy. Thus, the social loss function cannot serve directly as the central bank's loss function. Second, an optimal loss function for the central bank must depend on the structure of that microeconomy. In addition, this paper shows that control theory provides a benchmark for institution design in a game-theoretical framework.
Abstract:
Consider a nonparametric regression model Y=mu*(X) + e, where the explanatory variables X are endogenous and e satisfies the conditional moment restriction E[e|W]=0 w.p.1 for instrumental variables W. It is well known that in these models the structural parameter mu* is 'ill-posed' in the sense that the function mapping the data to mu* is not continuous. In this paper, we derive the efficiency bounds for estimating linear functionals E[p(X)mu*(X)] and int_{supp(X)}p(x)mu*(x)dx, where p is a known weight function and supp(X) the support of X, without assuming mu* to be well-posed or even identified.
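In display form, the model and the two linear functionals considered are (restating the abstract's own notation):

$$ Y = \mu^*(X) + e, \qquad E[e \mid W] = 0 \;\; \text{w.p.1}, $$
$$ \theta_1 = E\big[p(X)\,\mu^*(X)\big], \qquad \theta_2 = \int_{\operatorname{supp}(X)} p(x)\,\mu^*(x)\,dx, $$

with $p$ a known weight function; the efficiency bounds are derived for estimating $\theta_1$ and $\theta_2$ without assuming $\mu^*$ to be well-posed or identified.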
Abstract:
A problem frequently encountered in Data Envelopment Analysis (DEA) is that the total number of inputs and outputs included tends to be too large relative to the sample size. One way to counter this problem is to combine several inputs (or outputs) into (meaningful) aggregate variables, thereby reducing the dimension of the input (or output) vector. A direct effect of input aggregation is to reduce the number of constraints. This, in turn, alters the optimal value of the objective function. In this paper, we show how a statistical test proposed by Banker (1993) may be applied to test the validity of a specific way of aggregating several inputs. An empirical application using data from Indian manufacturing for the year 2002-03 is included as an example of the proposed test.
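As a rough illustration of the type of test referred to, the sketch below computes one common form of the Banker (1993) F-type statistic, here used to compare DEA inefficiency under the detailed and the aggregated input specifications. This is a hypothetical reconstruction, not the paper's procedure or data: it assumes Farrell-type scores theta >= 1, exponentially or half-normally distributed inefficiency, and treats the two sets of scores as independent samples.

```python
import numpy as np
from scipy import stats

def banker_f_test(theta_detailed, theta_aggregated, dist="exponential"):
    """Banker (1993)-style F test comparing DEA inefficiency under two
    model specifications (sketch; inefficiency is taken as theta - 1)."""
    u1 = np.asarray(theta_aggregated) - 1.0   # inefficiency, aggregated inputs
    u2 = np.asarray(theta_detailed) - 1.0     # inefficiency, detailed inputs
    n1, n2 = len(u1), len(u2)
    if dist == "exponential":
        f = (u1.sum() / n1) / (u2.sum() / n2)
        df1, df2 = 2 * n1, 2 * n2
    else:  # half-normal
        f = ((u1 ** 2).sum() / n1) / ((u2 ** 2).sum() / n2)
        df1, df2 = n1, n2
    p = 1.0 - stats.f.cdf(f, df1, df2)
    return f, p

# Hypothetical efficiency scores from the two DEA runs (not the paper's data).
theta_det = np.array([1.00, 1.05, 1.20, 1.10, 1.00, 1.35])
theta_agg = np.array([1.00, 1.12, 1.30, 1.18, 1.04, 1.50])
print(banker_f_test(theta_det, theta_agg))
```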
Abstract:
Dynein light chain 1 (DLC1) is a highly conserved and ubiquitously expressed protein that likely has critical cellular functions, as total loss of DLC1 causes embryonic death in Drosophila. Although many interacting proteins and RNAs have been identified, DLC1's function(s) and regulation remain largely unknown. Recently, DLC1 was identified as a physiological substrate of p21-activated kinase 1 (Pak1) from a human mammary cDNA library in a yeast two-hybrid screening assay. Studies in primary human tumors and cell culture indicated that DLC1 can promote mammary cancerous phenotypes and, more importantly, that Ser88 phosphorylation of DLC1 by Pak1 kinase is essential for DLC1's tumorigenic activities. Based on these tissue culture studies, we hypothesized that Ser88 phosphorylation regulates DLC1. To test this hypothesis, we generated two transgenic mouse models, MMTV-DLC1 and MMTV-DLC1-S88A, with mammary-specific expression of the DLC1 and DLC1-S88A cDNAs. Mammary glands from both transgenic lines showed rare tumor incidence, indicating that DLC1 alone may not be sufficient for tumorigenesis in vivo. However, these mice showed a significant alteration of mammary development. Mammary glands from the MMTV-DLC1 mice had hyperbranching and alveolar hyperplasia, with elevated cell proliferation. Intriguingly, these phenotypes were not seen in the mammary glands from the MMTV-S88A mice. Furthermore, while MMTV-DLC1 glands were normal during involution, MMTV-S88A mice showed accelerated mammary involution with increased apoptosis and altered expression of involution-associated genes. Further analysis of the MMTV-S88A glands showed an increased steady-state level of Bim protein, which might be responsible for the early involution. Finally, our in vitro data showed that Ser88 phosphorylation abolished DLC1 dimerization and consequently might disturb its interaction with Bim and destabilize Bim. Collectively, our findings provide in vivo evidence that Ser88 phosphorylation of DLC1 can regulate DLC1's function. In addition, Ser88 phosphorylation might be critical for the DLC1 dimer-monomer transition.
Abstract:
The standard analyses of survival data involve the assumption that survival and censoring are independent. When censoring and survival are related, the phenomenon is known as informative censoring. This paper examines the effects of an informative censoring assumption on the hazard function and the estimated hazard ratio provided by the Cox model. The limiting factor in all analyses of informative censoring is the problem of non-identifiability. Non-identifiability implies that it is impossible to distinguish a situation in which censoring and death are independent from one in which there is dependence. Nevertheless, informative censoring may still be present. Examination of the literature indicates how others have approached the problem and covers the relevant theoretical background. Three models are examined in detail. The first model uses conditionally independent marginal hazards to obtain the unconditional survival function and hazards. The second model is based on the Gumbel Type A method for combining independent marginal distributions into bivariate distributions using a dependency parameter. Finally, a formulation based on a compartmental model is presented and its results described. For the latter two approaches, the resulting hazard is used in the Cox model in a simulation study. The unconditional survival distribution formed from the first model involves dependency, but the crude hazard resulting from this unconditional distribution is identical to the marginal hazard, and inferences based on the hazard are valid. The hazard ratios formed from two distributions following the Gumbel Type A model are biased by a factor dependent on the amount of censoring in the two populations and the strength of the dependency of death and censoring in the two populations. The Cox model estimates this biased hazard ratio. In general, the hazard resulting from the compartmental model is not constant, even if the individual marginal hazards are constant, unless censoring is non-informative. The hazard ratio tends to a specific limit. Methods of evaluating situations in which informative censoring is present are described, and the relative utility of the three models examined is discussed.
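The kind of simulation study mentioned for the latter two approaches could be sketched roughly as follows. This is an illustrative reconstruction only: it induces dependence between death and censoring times with a Gaussian copula rather than the Gumbel Type A form used in the paper, invents all parameter values, and assumes the lifelines CoxPHFitter API for fitting the Cox model.

```python
import numpy as np
import pandas as pd
from scipy.stats import norm
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)

def simulate_arm(n, rate, rho):
    """Exponential death and censoring times made dependent via a
    Gaussian copula with correlation rho (rho = 0 -> independent)."""
    z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n)
    u = norm.cdf(z)                       # dependent uniforms
    t_death = -np.log(u[:, 0]) / rate     # marginal Exp(rate)
    t_cens = -np.log(u[:, 1]) / 0.5       # marginal Exp(0.5)
    time = np.minimum(t_death, t_cens)
    event = (t_death <= t_cens).astype(int)
    return time, event

# Two populations with a true hazard ratio of 2.0, both informatively censored.
t0, e0 = simulate_arm(2000, rate=1.0, rho=0.6)
t1, e1 = simulate_arm(2000, rate=2.0, rho=0.6)
df = pd.DataFrame({
    "time": np.concatenate([t0, t1]),
    "event": np.concatenate([e0, e1]),
    "group": np.concatenate([np.zeros(2000), np.ones(2000)]),
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
print(np.exp(cph.params_["group"]))  # compare with the true ratio of 2.0
```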
Abstract:
The electroencephalogram (EEG) is a physiological time series that measures electrical activity at different locations in the brain and plays an important role in epilepsy research. Exploring the variance and/or volatility may yield insights for seizure prediction, seizure detection and seizure propagation/dynamics. Maximal Overlap Discrete Wavelet Transforms (MODWTs) and ARMA-GARCH models were used to determine variance and volatility characteristics of 66 channels for different states of an epileptic EEG – sleep, awake, sleep-to-awake and seizure. The wavelet variances, changes in wavelet variances and volatility half-lives for the four states were compared for possible differences between seizure and non-seizure channels. The half-lives of two of the three seizure channels were found to be shorter than those of all of the non-seizure channels, based on 95% CIs for the pre-seizure and awake signals. No discernible patterns were found in the wavelet variances of the change points for the different signals.
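As an illustration of the volatility half-life quantity compared above (not the authors' code; the channel is simulated and the arch package's parameter labels alpha[1] and beta[1] are assumed), a GARCH(1,1) half-life can be computed from the fitted persistence alpha + beta:

```python
import numpy as np
from arch import arch_model

rng = np.random.default_rng(1)

# Simulate a GARCH(1,1) series so that the fit is meaningful
# (omega = 0.1, alpha = 0.1, beta = 0.8; true half-life ~ 6.6 periods).
n = 3000
eps = np.empty(n)
sig2 = np.empty(n)
sig2[0] = 0.1 / (1 - 0.1 - 0.8)
eps[0] = np.sqrt(sig2[0]) * rng.standard_normal()
for t in range(1, n):
    sig2[t] = 0.1 + 0.1 * eps[t - 1] ** 2 + 0.8 * sig2[t - 1]
    eps[t] = np.sqrt(sig2[t]) * rng.standard_normal()

# Fit GARCH(1,1) with a constant mean; an ARMA mean could be used instead,
# as in the ARMA-GARCH models of the study.
res = arch_model(eps, mean="Constant", vol="GARCH", p=1, q=1).fit(disp="off")
alpha, beta = res.params["alpha[1]"], res.params["beta[1]"]

# Volatility half-life: periods for a shock to conditional variance to halve.
half_life = np.log(0.5) / np.log(alpha + beta)
print(round(half_life, 2))
```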
Abstract:
Prevalent sampling is an efficient and focused approach to the study of the natural history of disease. Right-censored time-to-event data observed in prospective prevalent cohort studies are often subject to left-truncated sampling. Left-truncated samples are not randomly selected from the population of interest and therefore carry a selection bias. Extensive studies have focused on estimating the unbiased distribution given left-truncated samples. However, in many applications the exact date of disease onset is not observed. For example, in an HIV infection study the exact HIV infection time is not observable; it is only known that infection occurred between two observable dates. Meeting these challenges motivated our study. We propose parametric models to estimate the unbiased distribution of left-truncated, right-censored time-to-event data with uncertain onset times. We first consider data from length-biased sampling, a special case of left-truncated sampling, and then extend the proposed method to general left-truncated sampling. With a parametric model, we construct the full likelihood given a biased sample with unobservable onset of disease. The parameters are estimated by maximizing the constructed likelihood, adjusting for the selection bias and the unobserved exact onset. Simulations are conducted to evaluate the finite-sample performance of the proposed methods. We apply the proposed method to an HIV infection study, estimating the unbiased survival function and covariate coefficients.
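For reference, the standard parametric likelihood for left-truncated, right-censored data with a known onset time has the form below (generic notation, not the authors'; the paper's contribution is to extend such a construction to onsets known only to lie between two observable dates):

$$ L(\theta) = \prod_{i=1}^{n} \frac{f_\theta(t_i)^{\delta_i}\, S_\theta(t_i)^{1-\delta_i}}{S_\theta(a_i)}, $$

where $t_i$ is the event or censoring time measured from onset, $\delta_i$ the event indicator, $a_i$ the left-truncation (delayed entry) time, and $f_\theta$, $S_\theta$ the model density and survival function. Under length-biased sampling with stationary incidence, the per-subject denominator becomes the mean $\mu_\theta = \int_0^\infty S_\theta(u)\,du$ instead of $S_\theta(a_i)$.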
Abstract:
Missense mutations in the p53 tumor-suppressor gene are the most common alterations of p53 in somatic tumors and in patients with Li-Fraumeni syndrome. p53 missense mutations occur in the DNA-binding region and disrupt the ability of p53 to activate transcription. In vitro studies have shown that some p53 missense mutants have gain-of-function or dominant-negative activity. The p53 175 Arg-to-His (p53 R175H) mutation in humans has been shown to have dominant-negative and gain-of-function properties in vitro. This mutation is observed in the germline of individuals with Li-Fraumeni syndrome. To accurately model Li-Fraumeni syndrome and to examine the mechanistic nature of a gain-of-function missense mutation in in vivo tumorigenesis, we generated and characterized a mouse with the corresponding mutation, p53 R172H. p53R172H homozygous and heterozygous mice developed tumor spectra and survival curves similar to those of p53−/− and p53+/− mice, respectively. However, tumors in p53+/R172H mice metastasized to various organs with high frequency, suggesting a gain-of-function phenotype conferred by p53R172H in vivo. Mouse embryonic fibroblasts (MEFs) from p53R172H mice also showed gain-of-function phenotypes in cell proliferation, DNA synthesis, and transformation potential, while cells from p53+/− and p53−/− mice did not. To mechanistically characterize the gain-of-function phenotype of the p53R172H mutant, the role of the p53 family members p63 and p73 was analyzed. Disruption of p63 and p73 by siRNAs in p53−/− MEFs increased transformation potential and reinitiated DNA synthesis to levels observed in p53R172H/R172H cells. Additionally, p63 and p73 were bound and functionally inactivated by p53R172H in metastatic p53R172H tumor-derived cell lines, indicating a role for the p53 family members in the gain-of-function phenotype. This study provides in vivo evidence for the gain-of-function effect of p53 missense mutations and more accurately models Li-Fraumeni syndrome.
Abstract:
The complement system functions as a major effector of both the innate and adaptive immune responses. Activation of the complement cascade by the classical, alternative, or lectin pathway promotes the proteolysis of C3 and C5, thereby generating C3a and C5a. Referred to as anaphylatoxins, the C3a and C5a peptides mediate biological effects upon binding to their respective receptors; C3a binds to the C3a receptor (C3aR) while C5a binds to the C5a receptor (C5aR, CD88). Both C3a and C5a are known for their broad proinflammatory effects. Elevated levels of both peptides have been isolated from patients with a variety of inflammatory diseases such as COPD, asthma, RA, SLE, and sepsis. Recent studies suggest that C5a is a critical component in the acquired neutrophil dysfunction, coagulopathy, and progressive multi-organ dysfunction characteristic of sepsis. The primary hypothesis of this dissertation was that preventing C3a-C3aR and C5a-C5aR mediated pro-inflammatory effects would improve survival in endotoxic, bacteremic, and septic shock. To test this hypothesis, the murine C3aR and C5aR genes were disrupted. Following disruption of both the C3aR and C5aR genes, no abnormalities were identified other than the absence of their respective mRNA and protein. In models of both endotoxic and bacteremic shock, C3aR-deficient mice suffered increased mortality compared with their wild-type littermates. C3aR-deficient mice also had elevated circulating IL-1β levels. Using a model of sepsis, C3aR-deficient mice had a higher circulating concentration of IL-6 and decreased peritoneal inflammatory infiltration. While these results were unexpected, they support an emerging role for C3a in immunomodulation. In contrast, following endotoxic or bacteremic shock, C5aR-deficient mice experienced increased survival, less hemoconcentration, and less thrombocytopenia. It was later determined that C5a-mediated histamine release significantly contributes to host morbidity and mortality in bacteremic shock. These studies provide evidence that C5a functions primarily as a proinflammatory molecule in models of endotoxic and bacteremic shock. In the same models, C3a-C3aR interactions suppress the inflammatory response and protect the host. Collectively, these results present in vivo evidence that C3a and C5a have divergent biological functions.
Abstract:
We introduce two probabilistic, data-driven models that predict a ship's speed and the situations in which a ship is likely to get stuck in ice, based on the joint effect of ice features such as the thickness and concentration of level ice, ice ridges, and rafted ice; ice compression is also considered. To develop the models, two datasets were utilized. First, data from the Automatic Identification System (AIS) on the performance of a selected ship were used. Second, a numerical ice model, HELMI, developed at the Finnish Meteorological Institute, provided information about the ice field. The relations between the ice conditions and ship movements were established using Bayesian learning algorithms. The case study presented in this paper considers a single, unassisted trip of an ice-strengthened bulk carrier between two Finnish ports in the presence of challenging ice conditions, which varied in time and space. The obtained results show good predictive power: on average, 80% accuracy for predicting the ship's speed within specified bins and above 90% for predicting cases where a ship may get stuck in ice. We expect this new approach to facilitate safe and effective route selection in ice-covered waters, where ship performance is reflected in the objective function.
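Purely as an illustrative sketch of this kind of probabilistic prediction (not the authors' Bayesian-learning models, which are trained on AIS tracks and HELMI ice-field output; all feature names, thresholds, and data below are invented), a discretized speed class could be predicted from ice features with a simple naive Bayes classifier:

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)

# Hypothetical ice features: level-ice thickness (m), concentration (0-1),
# ridge density, rafted-ice fraction, compression index.
n = 5000
X = np.column_stack([
    rng.uniform(0.0, 0.8, n),    # level-ice thickness
    rng.uniform(0.0, 1.0, n),    # concentration
    rng.uniform(0.0, 10.0, n),   # ridge density
    rng.uniform(0.0, 0.5, n),    # rafted-ice fraction
    rng.uniform(0.0, 1.0, n),    # compression
])
# Invented rule generating a speed class (0 = stuck, 1 = slow, 2 = fast).
resistance = 2.0 * X[:, 0] + X[:, 1] + 0.1 * X[:, 2] + X[:, 3] + 1.5 * X[:, 4]
y = 2 - np.digitize(resistance + rng.normal(0, 0.3, n), [2.0, 3.2])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = GaussianNB().fit(X_tr, y_tr)
print("classification accuracy:", round(clf.score(X_te, y_te), 3))
```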
Abstract:
Sea surface temperatures and sea-ice extent are the most critical variables to evaluate the Southern Ocean paleoceanographic evolution in relation to the development of the global carbon cycle, atmospheric CO2 variability and ocean-atmosphere circulation. In contrast to the Atlantic and the Indian sectors, the Pacific sector of the Southern Ocean has been insufficiently investigated so far. To fill this information gap we present diatom-based estimates of summer sea surface temperature (SSST) and winter sea-ice concentration (WSI) from 17 sites in the polar South Pacific to study the Last Glacial Maximum (LGM) at the EPILOG time slice (19,000-23,000 cal. years BP). The applied statistical methods are the Imbrie and Kipp Method (IKM) and the Modern Analog Technique (MAT), used to estimate temperature and sea-ice concentration, respectively. Our data display a distinct LGM east-west differentiation in SSST and WSI, with steeper latitudinal temperature gradients and a winter sea-ice edge located consistently north of the Pacific-Antarctic Ridge in the Ross Sea sector. In the eastern sector of our study area, which is governed by the Amundsen Abyssal Plain, the estimates yield weaker latitudinal SSST gradients together with a variable, extended winter sea-ice field. In this sector, sea ice may have sporadically reached the area of the present Subantarctic Front at its maximum LGM expansion. This pattern points to topographic forcing as a major control on frontal system location and sea-ice extent in the western Pacific sector, whereas atmospheric conditions like the Southern Annular Mode and ENSO affected the oceanographic conditions in the eastern Pacific sector. Although it is difficult to depict the location and the physical nature of the frontal systems separating the glacial Southern Ocean water masses into different zones, we found a distinct temperature gradient in latitudes straddled by the modern Southern Subtropical Front. Considering that the glacial temperatures north of this zone are similar to modern values, we suggest that this gradient represents the Glacial Southern Subtropical Front (GSSTF), which delimits the zone of strongest glacial SSST cooling (>4K) to its north. The southern boundary of the zone of maximum cooling is close to the glacial 4°C isotherm. This isotherm, which is in the range of SSST at the modern Antarctic Polar Front (APF), represents a circum-Antarctic feature and marks the northern edge of the glacial Antarctic Circumpolar Current (ACC). We also assume that a glacial front was established at the average northern winter sea-ice edge, comparable with the modern Southern Antarctic Circumpolar Current Front (SACCF). During the glacial, this front would have been located in the area of the modern APF. The northward deflection of colder-than-modern surface waters along the South American continent leads to a significant cooling of the glacial Humboldt Current surface waters (4-8K), which affects the temperature regimes as far north as tropical latitudes. The glacial reduction of ACC temperatures may also have resulted in significant cooling in the Atlantic and Indian sectors of the Southern Ocean, thus enhancing the thermal differentiation of the Southern Ocean and Antarctic continental cooling. Comparison with numerical simulations of last-glacial temperature and sea ice shows that the majority of modern models overestimate summer and winter sea-ice cover and that only a few models reproduce our temperature data reasonably well.
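To make concrete what the Modern Analog Technique does (a generic sketch under the usual squared chord distance, not the authors' implementation; the reference data below are random), the estimate for a fossil sample is an average of the environmental values attached to its k most similar modern assemblages:

```python
import numpy as np

def squared_chord(p, q):
    """Squared chord distance between two relative-abundance vectors."""
    return np.sum((np.sqrt(p) - np.sqrt(q)) ** 2)

def mat_estimate(fossil, modern_assemblages, modern_env, k=5):
    """Modern Analog Technique: average the environmental variable
    (e.g. summer SST or winter sea-ice concentration) over the k modern
    samples most similar to the fossil diatom assemblage."""
    d = np.array([squared_chord(fossil, m) for m in modern_assemblages])
    nearest = np.argsort(d)[:k]
    return modern_env[nearest].mean(), d[nearest]

# Toy reference set: 100 modern samples, 20 diatom taxa (rows sum to 1).
rng = np.random.default_rng(3)
modern = rng.dirichlet(np.ones(20), size=100)
sst = rng.uniform(-1.0, 12.0, size=100)      # invented summer SSTs (deg C)
fossil = rng.dirichlet(np.ones(20))
est, dists = mat_estimate(fossil, modern, sst, k=5)
print(round(est, 2), np.round(dists, 3))
```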
Abstract:
Ozone stomatal fluxes were modeled for a 3-year period following different approaches for a commercial variety of durum wheat (Triticum durum Desf. cv. Camacho) at the phenological stage of anthesis. All models performed in the same range, although not all of them afforded equally significant results. Nevertheless, all of them suggest that stomatal conductance accounts for the largest share of ozone deposition fluxes. A new modeling approach was tested, based on a 3-D architectural model of the wheat canopy, and fairly accurate results were obtained. Plant species-specific measurements, as well as measurements of stomatal conductance and environmental parameters, were required. The method proposed for calculating ozone stomatal fluxes (FO3_3-D) from experimental gs data, modeling them as a function of certain environmental parameters in conjunction with the YPLANT model, seems adequate: it provides realistic estimates of the canopy FO3_3-D and integrates, rather than neglects, the contribution of the lower leaves relative to the flag leaf, although further development of this model is needed.
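A minimal version of the leaf-level flux calculation implied here, assuming the usual diffusivity scaling from water vapour to ozone (the factor of roughly 0.613 is a standard assumption, not a value taken from the paper):

```python
def stomatal_o3_flux(gs_h2o, o3_conc, d_ratio=0.613):
    """Stomatal ozone flux (nmol m-2 s-1) from stomatal conductance to
    water vapour gs_h2o (mol m-2 s-1) and ozone concentration at leaf
    level o3_conc (nmol mol-1, i.e. ppb), using gs_O3 = d_ratio * gs_H2O."""
    return d_ratio * gs_h2o * o3_conc

# Example (invented values): gs = 0.3 mol m-2 s-1, 45 ppb ozone at the flag leaf.
print(stomatal_o3_flux(0.3, 45.0))   # ~8.3 nmol O3 m-2 s-1
```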
Abstract:
Independent Component Analysis is a Blind Source Separation method that aims to find the pure source signals mixed together in unknown proportions in the observed signals under study. It does this by searching for factors which are mutually statistically independent. It can thus be classified among the latent-variable-based methods. As with other methods based on latent variables, a careful investigation has to be carried out to find out which factors are significant and which are not. Therefore, it is important to have a validation procedure available to decide on the optimal number of independent components to include in the final model. This can be made complicated by the fact that two consecutive models may differ in the order and signs of similarly-indexed ICs. Moreover, the structure of the extracted sources can change as a function of the number of factors calculated. Two methods for determining the optimal number of ICs are proposed in this article and applied to simulated and real datasets to demonstrate their performance.
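The order-and-sign ambiguity mentioned above can be seen directly by comparing two FastICA models of different sizes; the sketch below (generic scikit-learn usage on synthetic signals, not the validation procedures proposed in this article) matches components across the two models by maximum absolute correlation:

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(4)

# Mix three known sources into six observed signals.
t = np.linspace(0, 8, 2000)
S = np.column_stack([np.sin(2 * t), np.sign(np.cos(3 * t)), rng.laplace(size=2000)])
A = rng.normal(size=(3, 6))
X = S @ A

def extract(n_components):
    ica = FastICA(n_components=n_components, random_state=0, max_iter=1000)
    return ica.fit_transform(X)          # estimated sources, one per column

S2, S3 = extract(2), extract(3)

# Correlate each IC of the 2-component model with each IC of the 3-component
# model; signs and ordering need not agree, hence the absolute value.
corr = np.abs(np.corrcoef(S2.T, S3.T)[:2, 2:])
print(np.round(corr, 2))
print("best match for each 2-model IC:", corr.argmax(axis=1))
```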