942 results for equilibrium asset pricing models with latent variables
Abstract:
Celtis sinensis is an introduced plant species in the southeastern region of Queensland that has had a destructive effect on indigenous plant communities, and its pollen has been identified as an allergen source. Pollen belonging to C. sinensis was sampled during a 5-year (June 1994-May 1999) atmospheric pollen-monitoring programme in Brisbane, Australia, using a Burkard 7-day spore trap. The seasonal incidence of airborne C. sinensis pollen (CsP) in Brisbane occurred over a brief period each year during spring (August-September), while peak concentrations were restricted to the beginning of September. Individual CsP seasons were heterogeneous, with daily counts within the range 1-10 grains m⁻³ on no more than 60 sampling days; however, smaller airborne concentrations of CsP were recorded outside each season. Correlation coefficients were significant each year for temperature (p<0.05) and relative humidity (p>0.05). A significant relationship (r²=0.81, p=0.036) was established between the total CsP count and pre-seasonal average maximum temperature; however, periods of precipitation (>2 mm) were shown to significantly lower the daily concentrations of CsP in the atmosphere. Given the environmental and clinical significance of CsP and its prevalence in the atmosphere of Brisbane, a clinical population-based study is required to further understand the pollen's importance as a seasonal sensitizing source in this region.
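The statistical analysis described above (correlation of daily pollen counts with weather variables, and a regression of total seasonal count on pre-seasonal average maximum temperature) can be illustrated with a minimal sketch; the data below are synthetic placeholders, not the Brisbane record.

```python
# Minimal sketch of the kind of analysis described above, on synthetic data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical daily values for one pollen season.
daily_temp = rng.normal(22, 3, size=60)                                        # daily max temperature (deg C)
daily_csp = np.maximum(0, 0.8 * daily_temp - 12 + rng.normal(0, 2, size=60))  # grains per cubic metre

r, p = stats.pearsonr(daily_temp, daily_csp)
print(f"daily correlation: r={r:.2f}, p={p:.3f}")

# Hypothetical seasonal totals vs. pre-seasonal average maximum temperature (5 seasons).
pre_seasonal_temp = np.array([20.1, 21.3, 19.8, 22.0, 20.9])
total_csp = np.array([310.0, 420.0, 280.0, 510.0, 390.0])
res = stats.linregress(pre_seasonal_temp, total_csp)
print(f"seasonal regression: r^2={res.rvalue**2:.2f}, p={res.pvalue:.3f}")
```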
Abstract:
In this paper we propose a range of dynamic data envelopment analysis (DEA) models which allow information on costs of adjustment to be incorporated into the DEA framework. We first specify a basic dynamic DEA model predicated on a number of simplifying assumptions. We then outline a number of extensions to this model to accommodate asymmetric adjustment costs, non-static output quantities, non-static input prices, non-static costs of adjustment, technological change, quasi-fixed inputs and investment budget constraints. The new dynamic DEA models provide valuable extra information relative to the standard static DEA models: they identify an optimal path of adjustment for the input quantities, and provide a measure of the potential cost savings that result from recognising the costs of adjusting input quantities towards the optimal point. The new models are illustrated using data relating to a chain of 35 retail department stores in Chile. The empirical results illustrate the wealth of information that can be derived from these models, and clearly show that static models overstate potential cost savings when adjustment costs are non-zero.
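As a point of reference for the dynamic extensions, the static cost-minimisation DEA problem that such models build on can be written as a small linear programme. The sketch below is not the authors' dynamic formulation: it uses made-up data for four hypothetical stores, assumes constant returns to scale, and relies on scipy's LP solver.

```python
# Static cost-minimisation DEA (constant returns to scale) as an LP:
#   min_{x, lam} w'x  s.t.  X'lam <= x,  Y'lam >= y_o,  lam >= 0.
# This is the static building block that dynamic DEA models extend with adjustment costs.
import numpy as np
from scipy.optimize import linprog

X = np.array([[5.0, 3.0], [4.0, 5.0], [6.0, 2.0], [5.5, 3.5]])  # inputs (K stores x n inputs)
Y = np.array([[10.0], [11.0], [9.0], [12.0]])                   # outputs (K stores x m outputs)
w = np.array([2.0, 1.5])                                        # input prices for the evaluated store
o = 0                                                           # index of the evaluated store

K, n = X.shape
m = Y.shape[1]
# Decision vector: [x_1..x_n, lam_1..lam_K]; only the inputs are priced in the objective.
c = np.concatenate([w, np.zeros(K)])
# sum_k lam_k * X[k, i] - x_i <= 0 for each input i.
A_inputs = np.hstack([-np.eye(n), X.T])
# -sum_k lam_k * Y[k, j] <= -y_o[j] for each output j.
A_outputs = np.hstack([np.zeros((m, n)), -Y.T])
A_ub = np.vstack([A_inputs, A_outputs])
b_ub = np.concatenate([np.zeros(n), -Y[o]])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + K), method="highs")
print("minimum cost:", res.fun, "observed cost:", w @ X[o])
print("cost efficiency:", res.fun / (w @ X[o]))
```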
Abstract:
Quantitatively predicting mass transport rates for chemical mixtures in porous materials is important in applications of materials such as adsorbents, membranes, and catalysts. Because directly assessing mixture transport experimentally is challenging, theoretical models that can predict mixture diffusion coefficients using only single-component information would have many uses. One such model was proposed by Skoulidas, Sholl, and Krishna (Langmuir, 2003, 19, 7977), and applications of this model to a variety of chemical mixtures in nanoporous materials have yielded promising results. In this paper, we examine the accuracy of this model for predicting mixture diffusion coefficients in materials that exhibit a heterogeneous distribution of local binding energies. To do so, single-component and binary mixture diffusion coefficients are computed using kinetic Monte Carlo for a two-dimensional lattice model over a wide range of lattice occupancies and compositions. The approach suggested by Skoulidas, Sholl, and Krishna is found to be accurate in situations where the spatial distribution of binding site energies is relatively homogeneous, but is considerably less accurate for strongly heterogeneous energy distributions.
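The lattice calculations described above can be sketched as a small Metropolis-style lattice-gas Monte Carlo simulation: particles hop between nearest-neighbour sites of a 2D lattice with randomly assigned binding energies, and a self-diffusion coefficient is estimated from the mean-squared displacement. This is a generic stand-in under assumed parameters (lattice size, disorder strength, temperature), not the specific kinetic Monte Carlo model of the paper.

```python
# Generic illustration: lattice-gas Monte Carlo on a 2D lattice with a
# heterogeneous distribution of binding-site energies; the self-diffusion
# coefficient is estimated from the mean-squared displacement of tagged particles.
import numpy as np

rng = np.random.default_rng(1)
L, n_part, steps, beta = 32, 200, 200_000, 1.0
site_energy = rng.normal(0.0, 1.0, size=(L, L))        # assumed binding-energy disorder
occupied = np.zeros((L, L), dtype=bool)
pos = np.array([divmod(i, L) for i in rng.choice(L * L, n_part, replace=False)])
occupied[pos[:, 0], pos[:, 1]] = True
start = pos.astype(float)
unwrapped = start.copy()
moves = np.array([(1, 0), (-1, 0), (0, 1), (0, -1)])

for _ in range(steps):
    i = rng.integers(n_part)
    d = moves[rng.integers(4)]
    new = (pos[i] + d) % L
    if occupied[new[0], new[1]]:
        continue                                        # site exclusion: one particle per site
    dE = site_energy[new[0], new[1]] - site_energy[pos[i, 0], pos[i, 1]]
    if dE <= 0 or rng.random() < np.exp(-beta * dE):    # Metropolis acceptance of the hop
        occupied[pos[i, 0], pos[i, 1]] = False
        occupied[new[0], new[1]] = True
        pos[i] = new
        unwrapped[i] += d                               # displacement without periodic wrapping

msd = np.mean(np.sum((unwrapped - start) ** 2, axis=1))
t = steps / n_part                                      # time in attempted moves per particle
print("estimated self-diffusion coefficient:", msd / (4 * t))
```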
Abstract:
Deformable models are an attractive approach to recognizing objects, such as handwritten characters, that have considerable within-class variability. However, there are severe search problems associated with fitting the models to data, which could be reduced if a better starting point for the search were available. We show that by training a neural network to predict how a deformable model should be instantiated from an input image, such improved starting points can be obtained. This method has been implemented for a system that recognizes handwritten digits using deformable models, and the results show that the search time can be significantly reduced without compromising recognition performance. © 1997 Academic Press.
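A minimal sketch of the idea: a generic multilayer perceptron regresses deformable-model control-point coordinates directly from pixel intensities, and its prediction initialises the model-fitting search. The data, network size, and eight-control-point parameterisation below are placeholders, not the system described above.

```python
# Toy illustration: regress deformable-model parameters (control points)
# from image pixels to obtain a good starting point for model fitting.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
n, side, n_ctrl = 500, 16, 8
images = rng.random((n, side * side))       # stand-in for 16x16 digit images
control_pts = rng.random((n, 2 * n_ctrl))   # stand-in for fitted (x, y) control points

net = MLPRegressor(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
net.fit(images[:400], control_pts[:400])
initial_guess = net.predict(images[400:])   # used to initialise the deformable-model search
```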
Abstract:
The Q parameter scales differently with the noise power for the signal-noise and the noise-noise beating terms in scalar and vector models. Some procedures for including noise in the scalar model substantially underestimate the Q parameter.
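For reference, a standard textbook form of these quantities (a sketch of the usual definitions, not necessarily the exact ones used in the paper) is

  Q = (I_1 - I_0) / (\sigma_1 + \sigma_0),  with  \sigma_i^2 = \sigma_{\mathrm{s\text{-}sp},i}^2 + \sigma_{\mathrm{sp\text{-}sp}}^2,

where the signal-spontaneous beat-noise variance scales linearly with the noise (ASE) power, \sigma_{\mathrm{s\text{-}sp},i}^2 \propto P_i P_{\mathrm{ASE}}, while the spontaneous-spontaneous term scales quadratically, \sigma_{\mathrm{sp\text{-}sp}}^2 \propto P_{\mathrm{ASE}}^2. This difference in scaling is why the treatment of noise polarisation in scalar versus vector models matters for the computed Q.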
Abstract:
A study has been made of drugs acting at 5-HT receptors on animal models of anxiety. An elevated X-maze was used as a model of anxiety for rats and the actions of various ligands for the 5-HT receptor, and its subtypes, were examined in this model. 5-HT agonists, with varying affinities for the 5-HT receptor subtypes, were demonstrated to have anxiogenic-like activity. The 5-HT2 receptor antagonists ritanserin and ketanserin exhibited an anxiolytic-like profile. The new putative anxiolytics ipsapirone and buspirone, which are believed to be selective for 5-HT1 receptors, were also examined. The former had an anxiolytic profile whilst the latter was without effect. Antagonism studies showed the anxiogenic response to 8-hydroxy-2-(di-n-propylamino)tetralin (8-OH-DPAT) to be antagonised by ipsapirone, pindolol, alprenolol and para-chlorophenylalanine, but not by diazepam, ritanserin, metoprolol, ICI 118,551 or buspirone. To confirm some of the results obtained in the elevated X-maze, the Social Interaction Test of anxiety was used. Results in this test mirrored the effects seen with the 5-HT agonists, ipsapirone and pindolol, whilst the 5-HT2 receptor antagonists were without effect. Studies using operant conflict models of anxiety produced marginal and varying results which appear to be in agreement with recent criticisms of such models. Finally, lesions of the dorsal raphe nucleus (DRN) were performed in order to investigate the mechanisms involved in the production of the anxiogenic response to 8-OH-DPAT. Overall the results lend support to the involvement of 5-HT, and more precisely 5-HT1, receptors in the manifestation of anxiety in such animal models.
Abstract:
The increasing intensity of global competition has led organizations to utilize various types of performance measurement tools for improving the quality of their products and services. Data envelopment analysis (DEA) is a methodology for evaluating and measuring the relative efficiencies of a set of decision making units (DMUs) that use multiple inputs to produce multiple outputs. In conventional DEA with input and/or output ratios, all data are assumed to take the form of crisp numbers. However, the observed values of data in real-world problems are sometimes expressed as interval ratios. In this paper, we propose two new models: general and multiplicative non-parametric ratio models for DEA problems with interval data. The contributions of this paper are fourfold: (1) we consider input and output data expressed as interval ratios in DEA; (2) we address the gap in the DEA literature for problems not suitable or difficult to model with crisp values; (3) we propose two new DEA models for evaluating the relative efficiencies of DMUs with interval ratios; and (4) we present a case study involving 20 banks with three interval ratios, where the traditional indicators are mostly financial ratios, to demonstrate the applicability and efficacy of the proposed models. © 2011 Elsevier Inc.
Abstract:
Emrouznejad et al. (2010) proposed a Semi-Oriented Radial Measure (SORM) model for assessing the efficiency of Decision Making Units (DMUs) by Data Envelopment Analysis (DEA) with negative data. This paper provides a necessary and sufficient condition for boundedness of the input- and output-oriented SORM models.
Abstract:
In this article we evaluate the most widely used spread decomposition models using Exchange Traded Funds (ETFs). These funds are an example of a basket security and allow the diversification of private information, causing these securities to have lower adverse selection costs than individual securities. We use this feature as a criterion for evaluating spread decomposition models. Comparisons of adverse selection costs for ETFs and control securities obtained from spread decomposition models show that only the Glosten-Harris (1988) and the Madhavan-Richardson-Roomans (1997) models provide estimates of the spread that are consistent with the diversification of private information in a basket security. Our results are robust even after controlling for the stock exchange. © 2011 Copyright Taylor and Francis Group, LLC.
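For context, the Glosten-Harris (1988) trade-indicator specification is commonly written as follows (a textbook sketch; the article's exact implementation may differ):

  \Delta P_t = c_0\,\Delta Q_t + c_1\,\Delta(Q_t V_t) + z_0\,Q_t + z_1\,Q_t V_t + \varepsilon_t,

where Q_t is the buy/sell trade indicator and V_t the trade volume. The z-terms capture the permanent, adverse-selection component of the spread and the c-terms the transitory order-processing component, so a basket security with diversified private information should yield a smaller estimated adverse-selection component than comparable individual securities.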
Abstract:
In this paper we briefly present the second fundamental theorem of asset pricing. The proof draws on the propositions used in the proof of the Dalang-Morton-Willinger theorem.
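For reference, one standard statement of the theorem in the finite discrete-time setting of the Dalang-Morton-Willinger theorem: an arbitrage-free market with discounted price process S is complete (every contingent claim is replicable) if and only if the equivalent martingale measure is unique, i.e.

  \#\{\,Q \sim P : S \text{ is a } Q\text{-martingale}\,\} = 1.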
Abstract:
Many modern applications fall into the category of "large-scale" statistical problems, in which both the number of observations n and the number of features or parameters p may be large. Many existing methods focus on point estimation, despite the continued relevance of uncertainty quantification in the sciences, where the number of parameters to estimate often exceeds the sample size even after the huge increases in the value of n typically seen in many fields. Thus, the tendency in some areas of industry to dispense with traditional statistical analysis on the grounds that "n = all" is of little relevance outside of certain narrow applications. The main result of the Big Data revolution in most fields has instead been to make computation much harder without reducing the importance of uncertainty quantification. Bayesian methods excel at uncertainty quantification, but often scale poorly relative to alternatives. This conflict between the statistical advantages of Bayesian procedures and their substantial computational disadvantages is perhaps the greatest challenge facing modern Bayesian statistics, and is the primary motivation for the work presented here.
Two general strategies for scaling Bayesian inference are considered. The first is the development of methods that lend themselves to faster computation, and the second is design and characterization of computational algorithms that scale better in n or p. In the first instance, the focus is on joint inference outside of the standard problem of multivariate continuous data that has been a major focus of previous theoretical work in this area. In the second area, we pursue strategies for improving the speed of Markov chain Monte Carlo algorithms, and characterizing their performance in large-scale settings. Throughout, the focus is on rigorous theoretical evaluation combined with empirical demonstrations of performance and concordance with the theory.
One topic we consider is modeling the joint distribution of multivariate categorical data, often summarized in a contingency table. Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. In Chapter 2, we derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions.
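For concreteness, the reduced-rank (PARAFAC-type) latent structure representation referred to above writes the joint probability mass function of p categorical variables as

  P(Y_1 = c_1, \dots, Y_p = c_p) = \sum_{h=1}^{k} \nu_h \prod_{j=1}^{p} \lambda^{(j)}_{h c_j},

a nonnegative rank-k factorization of the probability tensor in which \nu_h are the latent class weights and \lambda^{(j)}_{h\cdot} the class-specific marginal distributions; the collapsed Tucker decompositions of Chapter 2 generalize this form toward the more flexible Tucker structure.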
Latent class models for the joint distribution of multivariate categorical data, such as the PARAFAC decomposition, play an important role in the analysis of population structure. In this context, the number of latent classes is interpreted as the number of genetically distinct subpopulations of an organism, an important factor in the analysis of evolutionary processes and conservation status. Existing methods focus on point estimates of the number of subpopulations and lack robust uncertainty quantification. Moreover, whether the number of latent classes in these models is even an identified parameter is an open question. In Chapter 3, we show that when the model is properly specified, the correct number of subpopulations can be recovered almost surely. We then propose an alternative method for estimating the number of latent subpopulations that provides good quantification of uncertainty, and provide a simple procedure for verifying that the proposed method is consistent for the number of subpopulations. The performance of the model in estimating the number of subpopulations and other common population structure inference problems is assessed in simulations and a real data application.
In contingency table analysis, sparse data is frequently encountered for even modest numbers of variables, resulting in non-existence of maximum likelihood estimates. A common solution is to obtain regularized estimates of the parameters of a log-linear model. Bayesian methods provide a coherent approach to regularization, but are often computationally intensive. Conjugate priors ease computational demands, but the conjugate Diaconis--Ylvisaker priors for the parameters of log-linear models do not give rise to closed form credible regions, complicating posterior inference. In Chapter 4 we derive the optimal Gaussian approximation to the posterior for log-linear models with Diaconis--Ylvisaker priors, and provide convergence rate and finite-sample bounds for the Kullback-Leibler divergence between the exact posterior and the optimal Gaussian approximation. We demonstrate empirically in simulations and a real data application that the approximation is highly accurate, even in relatively small samples. The proposed approximation provides a computationally scalable and principled approach to regularized estimation and approximate Bayesian inference for log-linear models.
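If "optimal" is understood in the Kullback-Leibler sense (an assumption here about the exact criterion), the best Gaussian approximation \hat q minimizing KL(p \,\|\, q) over all Gaussians q simply matches the posterior's first two moments,

  \hat q = N(\mu^*, \Sigma^*), \quad \mu^* = \mathbb{E}_p[\theta], \quad \Sigma^* = \mathrm{Cov}_p(\theta),

and the bounds described above control the divergence between the exact posterior p and this \hat q as a function of the sample size.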
Another challenging and somewhat non-standard joint modeling problem is inference on tail dependence in stochastic processes. In applications where extreme dependence is of interest, data are almost always time-indexed. Existing methods for inference and modeling in this setting often cluster extreme events or choose window sizes with the goal of preserving temporal information. In Chapter 5, we propose an alternative paradigm for inference on tail dependence in stochastic processes with arbitrary temporal dependence structure in the extremes, based on the idea that the information on strength of tail dependence and the temporal structure in this dependence are both encoded in waiting times between exceedances of high thresholds. We construct a class of time-indexed stochastic processes with tail dependence obtained by endowing the support points in de Haan's spectral representation of max-stable processes with velocities and lifetimes. We extend Smith's model to these max-stable velocity processes and obtain the distribution of waiting times between extreme events at multiple locations. Motivated by this result, a new definition of tail dependence is proposed that is a function of the distribution of waiting times between threshold exceedances, and an inferential framework is constructed for estimating the strength of extremal dependence and quantifying uncertainty in this paradigm. The method is applied to climatological, financial, and electrophysiology data.
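The spectral representation referred to above can be written, in one standard form, as

  Z(s) = \max_{i \ge 1} \zeta_i\, W_i(s),

where \{\zeta_i\} are the points of a Poisson process on (0, \infty) with intensity \zeta^{-2}\,d\zeta and the W_i are i.i.d. nonnegative processes with \mathbb{E}[W_i(s)] = 1; Chapter 5 endows these support points with velocities and lifetimes to obtain time-indexed max-stable velocity processes whose waiting times between threshold exceedances carry the tail-dependence information.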
The remainder of this thesis focuses on posterior computation by Markov chain Monte Carlo (MCMC), the dominant paradigm for posterior computation in Bayesian analysis. It has long been common to control computation time by making approximations to the Markov transition kernel, but comparatively little attention has been paid to convergence and estimation error in the resulting approximating Markov chains. In Chapter 6, we propose a framework for assessing when to use approximations in MCMC algorithms, and how much error in the transition kernel should be tolerated to obtain optimal estimation performance with respect to a specified loss function and computational budget. The results require only ergodicity of the exact kernel and control of the kernel approximation accuracy. The theoretical framework is applied to approximations based on random subsets of data, low-rank approximations of Gaussian processes, and a novel approximating Markov chain for discrete mixture models.
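A minimal sketch of the random-subset idea (generic, not the specific algorithms analysed in Chapter 6): a Metropolis-Hastings chain whose transition kernel replaces the full-data log-likelihood with a rescaled minibatch estimate, trading error in the kernel for cheaper iterations. The model, subset size, and proposal scale are illustrative assumptions.

```python
# Approximate Metropolis-Hastings for a Gaussian mean, with the log-likelihood
# evaluated on a random subset of the data and rescaled (an approximate kernel).
import numpy as np

rng = np.random.default_rng(3)
data = rng.normal(2.0, 1.0, size=100_000)
n, m = data.size, 1_000                       # full sample size and subset size

def subset_loglik(theta):
    idx = rng.choice(n, size=m, replace=False)
    return (n / m) * np.sum(-0.5 * (data[idx] - theta) ** 2)

theta, samples = 0.0, []
cur = subset_loglik(theta)
for _ in range(5_000):
    prop = theta + 0.01 * rng.normal()
    new = subset_loglik(prop)
    if np.log(rng.random()) < new - cur:      # flat prior; accept/reject on noisy estimates
        theta, cur = prop, new
    samples.append(theta)

print("approximate posterior mean:", np.mean(samples[1_000:]))
```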
Data augmentation Gibbs samplers are arguably the most popular class of algorithm for approximately sampling from the posterior distribution for the parameters of generalized linear models. The truncated normal and Pólya-Gamma data augmentation samplers are standard examples for probit and logit links, respectively. Motivated by an important problem in quantitative advertising, in Chapter 7 we consider the application of these algorithms to modeling rare events. We show that when the sample size is large but the observed number of successes is small, these data augmentation samplers mix very slowly, with a spectral gap that converges to zero at a rate at least proportional to the reciprocal of the square root of the sample size, up to a log factor. In simulation studies, moderate sample sizes result in high autocorrelations and small effective sample sizes. Similar empirical results are observed for related data augmentation samplers for multinomial logit and probit models. When applied to a real quantitative advertising dataset, the data augmentation samplers mix very poorly. Conversely, Hamiltonian Monte Carlo and a type of independence-chain Metropolis algorithm show good mixing on the same dataset.
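As a concrete reference point for the probit case, a minimal Albert-Chib-style truncated normal data augmentation Gibbs sampler looks like the sketch below. The flat prior on the coefficients and the synthetic rare-event data are assumptions for illustration; chains of this kind, applied to data with very few successes among many trials, are what the chapter shows to mix slowly.

```python
# Truncated normal data augmentation (Albert-Chib) Gibbs sampler for probit
# regression with a flat prior on beta; synthetic rare-event data.
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(4)
n, p = 5_000, 2
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([-2.5, 0.5])                  # intercept chosen so successes are rare
y = (X @ beta_true + rng.normal(size=n) > 0).astype(int)

XtX_inv = np.linalg.inv(X.T @ X)
chol = np.linalg.cholesky(XtX_inv)
beta = np.zeros(p)
draws = []
for _ in range(2_000):
    mu = X @ beta
    # z_i | beta, y_i ~ N(mu_i, 1) truncated to (0, inf) if y_i = 1, (-inf, 0) if y_i = 0.
    lo = np.where(y == 1, -mu, -np.inf)            # bounds in standardized units
    hi = np.where(y == 1, np.inf, -mu)
    z = mu + truncnorm.rvs(lo, hi, size=n, random_state=rng)
    # beta | z ~ N((X'X)^{-1} X'z, (X'X)^{-1}) under a flat prior.
    beta = XtX_inv @ (X.T @ z) + chol @ rng.normal(size=p)
    draws.append(beta.copy())
```

Monitoring the autocorrelation of the intercept draws in a run like this is one way to see the slow mixing described above.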