944 results for Variable pricing model


Relevance: 30.00%

Publisher:

Abstract:

In examining bank cost efficiency, the inclusion of banks' risk-taking is very important. In this paper we depart from the standard modeling approach and view risk as intimately related to the technology. Thus, instead of controlling for risk by treating it as a covariate in the standard cost function, we argue that the technology differs with risk, meaning that the parameters of the parametric cost function change with risk in a fully flexible manner. This is accomplished by viewing the parameters of the cost function as nonparametric functions of risk. We also control for country-specific effects in a fully flexible manner by using them, along with the risk variable, as arguments of the nonparametric functions. The resulting cost function is therefore semiparametric, and the standard parametric model becomes a special case of our semiparametric model. We apply this modeling approach to banks in the EU countries. European financial integration is seen as a stepping stone toward a competitive single EU market that promotes efficiency and increases consumer welfare, and it has changed the risk profile of European banks. In particular, financial integration allows more risk diversification and permits banks to use more advanced risk management instruments and systems; at the same time, however, it has increased the probability of systemic risk and has increased the risk of contagion, changing its nature and scope. Consequently, bank risk is an important issue to investigate.
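The core idea of letting the cost-function parameters vary smoothly with risk can be illustrated with a local kernel-weighted least-squares sketch in Python. This is a minimal local-constant smooth-coefficient estimator, not the authors' estimator; the toy data, the Gaussian kernel, and the bandwidth are all assumptions for illustration.

```python
import numpy as np

def smooth_coefficients(X, y, z, z0, h=0.5):
    """Estimate beta(z0) in the model y_i = X_i' beta(z_i) + e_i
    by weighted least squares, with Gaussian kernel weights
    centred at z0 (a local-constant smooth-coefficient estimator)."""
    w = np.exp(-0.5 * ((z - z0) / h) ** 2)   # kernel weights
    W = np.diag(w)
    # Weighted normal equations: beta = (X'WX)^{-1} X'Wy
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

# Toy data: a "cost function" whose coefficients drift with a risk measure z
rng = np.random.default_rng(0)
n = 500
z = rng.uniform(0, 1, n)                             # risk variable
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.column_stack([1 + z, 0.5 + 0.8 * z])  # beta(z)
y = (X * beta_true).sum(axis=1) + 0.1 * rng.normal(size=n)

for z0 in (0.2, 0.5, 0.8):
    print(z0, smooth_coefficients(X, y, z, z0))
```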

Relevance: 30.00%

Publisher:

Abstract:

This paper proposes a semiparametric smooth-coefficient stochastic production frontier model in which all the coefficients are expressed as unknown functions of environmental factors. The inefficiency term is multiplicatively decomposed into a scaling function of the environmental factors and a standard truncated-normal random variable. A testing procedure is suggested for the relevance of the environmental factors. A Monte Carlo study shows plausible finite-sample behavior of the proposed estimation and inference procedures. An empirical example is given in which both the semiparametric and standard parametric models are estimated and the results compared.
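A minimal simulation sketch of the error structure described above, assuming a single environmental variable z, an exponential scaling function, and illustrative parameter values (all assumptions for the sketch, not taken from the paper):

```python
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(1)
n = 1000
z = rng.uniform(-1, 1, n)          # environmental factor

# Smooth frontier coefficients, e.g. beta0(z) and beta1(z)
x = rng.normal(size=n)
frontier = (1.0 + 0.3 * z) + (0.6 + 0.2 * z) * x

# Symmetric noise v, and inefficiency u = g(z) * u*, where u* is a
# standard truncated-normal draw (truncated at zero from below)
v = 0.1 * rng.normal(size=n)
mu, sigma = 0.0, 1.0
u_star = truncnorm.rvs((0 - mu) / sigma, np.inf, loc=mu,
                       scale=sigma, size=n, random_state=rng)
g = np.exp(0.5 * z)                # scaling function of z (assumed form)
u = g * u_star

y = frontier + v - u               # observed output below the frontier
print(y[:5])
```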

Relevance: 30.00%

Publisher:

Abstract:

In a Data Envelopment Analysis (DEA) model, some of the weights used to compute the efficiency of a unit can have zero or negligible value despite the importance of the corresponding input or output. This paper offers an approach to preventing inputs and outputs from being ignored in the DEA assessment in a multiple-input, multiple-output VRS environment, building on an approach introduced in Allen and Thanassoulis (2004) for single-input, multiple-output CRS cases. The proposed method is based on the idea of introducing unobserved DMUs, created by adjusting the input and output levels of certain observed relatively efficient DMUs in a manner which reflects a combination of technical information and the decision maker's value judgements. In contrast to many alternative techniques used to constrain weights and/or improve envelopment in DEA, this approach allows one to impose local information on production trade-offs that is in line with the general VRS technology. The suggested procedure is illustrated using real data.
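For context, a minimal input-oriented VRS (BCC) DEA efficiency computation in Python. This is the standard envelopment formulation, not the authors' extended method with unobserved DMUs; the toy data and names are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

def vrs_efficiency(X, Y, j0):
    """Input-oriented VRS efficiency of DMU j0.
    X: (n, m) inputs, Y: (n, s) outputs.
    min theta  s.t.  sum_j lam_j x_j <= theta x_0,
                     sum_j lam_j y_j >= y_0,
                     sum_j lam_j = 1,  lam >= 0."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.r_[1.0, np.zeros(n)]                   # minimise theta
    # Input rows:  -theta * x_0 + X' lam <= 0
    A_in = np.hstack([-X[j0].reshape(m, 1), X.T])
    # Output rows: -Y' lam <= -y_0
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(m), -Y[j0]]
    A_eq = np.r_[0.0, np.ones(n)].reshape(1, -1)  # VRS: sum lam = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(None, None)] + [(0, None)] * n,
                  method="highs")
    return res.fun

X = np.array([[2.0, 3.0], [4.0, 2.0], [3.0, 5.0]])  # toy inputs
Y = np.array([[1.0], [2.0], [1.5]])                  # toy outputs
print([round(vrs_efficiency(X, Y, j), 3) for j in range(3)])
```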

Relevance: 30.00%

Publisher:

Abstract:

Projection of a high-dimensional dataset onto a two-dimensional space is a useful tool for visualising structures and relationships in the dataset. However, a single two-dimensional visualisation may not display all the intrinsic structure, so hierarchical/multi-level visualisation methods have been used to extract a more detailed understanding of the data. Here we propose a multi-level Gaussian process latent variable model (MLGPLVM). MLGPLVM works by segmenting data (with, e.g., K-means, a Gaussian mixture model or interactive clustering) in the visualisation space and then fitting a visualisation model to each subset. To measure the quality of multi-level visualisation (with respect to parent and child models), metrics such as trustworthiness, continuity, mean relative rank errors, visualisation distance distortion and the negative log-likelihood per point are used. We evaluate the MLGPLVM approach on the 'Oil Flow' dataset and a dataset of protein electrostatic potentials for the 'Major Histocompatibility Complex (MHC) class I' of humans. In both cases, visual observation and the quantitative quality measures show better visualisation at the lower levels.
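A minimal sketch of the multi-level idea in Python, using K-means to segment the visualisation space and, as a stand-in for a GPLVM child model, a PCA projection fitted to each subset (the clustering step follows the description above; PCA replaces the GPLVM purely to keep the example short and self-contained):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
data = rng.normal(size=(300, 10))        # toy high-dimensional dataset

# Parent model: a single 2-D projection of all points
parent = PCA(n_components=2).fit(data)
top_level = parent.transform(data)

# Segment the visualisation space, then fit a child model per cluster
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(top_level)
children = {}
for k in np.unique(labels):
    subset = data[labels == k]
    children[k] = PCA(n_components=2).fit(subset)  # child visualisation model

# Each child gives a more detailed 2-D view of its own subset
detail_view = children[0].transform(data[labels == 0])
print(detail_view[:3])
```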

Relevance: 30.00%

Publisher:

Abstract:

This paper presents a causal explanation of formative variables that unpacks and clarifies the generally accepted idea that formative indicators are 'causes' of the focal formative variable. In doing so, we explore the recent paper by Diamantopoulos and Temme (AMS Review, 3(3), 160-171, 2013) and show that those authors misunderstand the stance of Lee, Cadogan, and Chamberlain (AMS Review, 3(1), 3-17, 2013; see also Cadogan, Lee, and Chamberlain, AMS Review, 3(1), 38-49, 2013). By drawing on the multiple ways in which one can interpret the idea of causality within the MIMIC model, we then demonstrate why the continued defense of the MIMIC model as a tool to validate formative indicators and to identify formative variables in structural models is misguided. We also present unambiguous recommendations on how formative variables can be modelled in lieu of the formative MIMIC model.

Relevance: 30.00%

Publisher:

Abstract:

The quantitative analysis of receptor-mediated effects is based on experimental concentration-response data in which the independent variable, the concentration of a receptor ligand, is linked with a dependent variable, the biological response. The steps between the drug-receptor interaction and the subsequent biological effect are to some extent unknown. The shape of the curve fitted to the experimental data may give some insights into the nature of the concentration-receptor-response (C-R-R) mechanism. It can be evaluated by non-linear regression analysis of the experimental data points of the independent and dependent variables, which can be considered a history of the interaction between the drug and the receptors. However, this information is not enough to evaluate such important parameters of the mechanism as the dissociation constant (affinity) and efficacy. There are two ways to provide more detailed information about the C-R-R mechanism: (i) an experimental way for obtaining data with new or
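As an illustration of the non-linear regression step described above, here is a Hill-equation fit to concentration-response data in Python. The Hill model and the toy data are assumptions for the sketch, not the authors' specific C-R-R model:

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(c, emax, ec50, n):
    """Sigmoidal concentration-response (Hill) curve."""
    return emax * c**n / (ec50**n + c**n)

# Toy concentration-response data (response as % of maximum)
conc = np.array([1e-9, 1e-8, 1e-7, 1e-6, 1e-5, 1e-4])
resp = np.array([2.0, 10.0, 35.0, 72.0, 93.0, 99.0])

# Non-linear least-squares fit of Emax, EC50 and the Hill slope
popt, pcov = curve_fit(hill, conc, resp, p0=[100.0, 1e-7, 1.0])
emax, ec50, n = popt
print(f"Emax={emax:.1f}, EC50={ec50:.2e}, slope={n:.2f}")
```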

Relevance: 30.00%

Publisher:

Abstract:

We use the GN-model to assess the reach of Nyquist-WDM 100/200 Gbit/s PM-QPSK/16QAM signals over low-loss, large-core-area fibre using extended-range, variable-gain hybrid Raman-EDFAs. Transmission over 5000/1500 km is possible for a wide range of amplifier span lengths.
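For context, the GN-model treats nonlinear interference as additive Gaussian noise that grows with the cube of the launch power, which yields a simple per-span SNR and a closed-form optimal launch power. A minimal sketch with illustrative coefficients (the ASE and NLI values below are placeholders, not the parameters of the fibre or amplifiers studied here):

```python
import numpy as np

# Per-span noise contributions (placeholder values, in mW)
p_ase = 0.003   # ASE noise power added per span
eta = 0.02      # NLI coefficient: P_NLI = eta * P**3 per span

def snr_after(n_spans, p_launch):
    """GN-model SNR: signal power over accumulated ASE plus NLI noise."""
    noise = n_spans * (p_ase + eta * p_launch**3)
    return p_launch / noise

# Optimum where d(SNR)/dP = 0: P_opt = (P_ASE / (2 * eta))**(1/3)
p_opt = (p_ase / (2 * eta)) ** (1 / 3)
print(f"P_opt = {p_opt:.3f} mW")
print(f"SNR over 50 spans at P_opt: {10 * np.log10(snr_after(50, p_opt)):.1f} dB")
```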

Relevance: 30.00%

Publisher:

Abstract:

2000 Mathematics Subject Classification: 60J80, 62P05.

Relevance: 30.00%

Publisher:

Abstract:

JEL Classification: G21, L13.

Relevance: 30.00%

Publisher:

Abstract:

This paper explores the sharing of value in business transactions. Although the terminology of value is increasingly used in marketing (in concepts such as value-based selling and pricing) as well as in purchasing (value-based purchasing), the definition of the term is still vague. In order to better understand the definition of value, the authors argue that it is important to understand the sharing of value in general, and the role of power in the sharing of value in particular. The aim of this paper is to add to this debate, which requires us to critique the current models. The key process that the analysis of power helps to explain is the division of the available revenue stream flowing up the chain from the buyer's customers. If the buyer and supplier do not cooperate, then power will be key in the sharing of that money flow. If buyers and suppliers fully cooperate, they may be able to reduce their costs and/or increase the quality of the sales offering the buyer makes to its customers.

Relevance: 30.00%

Publisher:

Abstract:

The use of the multiple indicators, multiple causes model to operationalize formative variables (the formative MIMIC model) is advocated in the methodological literature. Yet, contrary to popular belief, the formative MIMIC model does not provide a valid method of integrating formative variables into empirical studies, and we recommend discarding it from formative-variable modelling. Our arguments rest on the following observations. First, much of the formative variable literature appears to conceptualize a causal structure between the formative variable and its indicators which can be tested or estimated. We demonstrate that this assumption is illogical, that a formative variable is simply a researcher-defined composite of sub-dimensions, and that such tests and estimates are unnecessary. Second, despite this, researchers often use the formative MIMIC model as a means to include formative variables in their models and to estimate the magnitude of linkages between formative variables and their indicators. However, the formative MIMIC model cannot provide this information, since it is simply a model in which a common factor is predicted by some exogenous variables; the model does not integrate a formative variable within it. Empirical results from such studies need reassessing, since their interpretation may lead to inaccurate theoretical insights and the development of untested recommendations to managers. Finally, the use of the formative MIMIC model can foster fuzzy conceptualizations of variables, particularly since it can erroneously encourage the view that a single focal variable is measured with both formative and reflective indicators. We explain these interlinked arguments in more detail and provide a set of recommendations for researchers to consider when dealing with formative variables.

Relevance: 30.00%

Publisher:

Abstract:

This research examines evolving issues in applied computer science, bringing economic and business analyses to bear as well. There are two main areas. The first is internetwork communications as embodied by the Internet. The goal of this research is to devise an efficient pricing, prioritization, and incentivization plan that could realistically be implemented on the existing infrastructure. Criteria include practical and economic efficiency and proper incentives for both users and providers. Background information on the evolution and functional operation of the Internet is given, and the relevant literature is surveyed and analyzed. Economic analysis is performed on the incentive implications of the current pricing structure and organization. The problems are identified, and minimally disruptive solutions are proposed for all levels of implementation, down to the lowest-level protocol. Practical issues are considered and performance analyses are carried out. The second area of research is mass-market software engineering and how it differs from classical software engineering. Software life-cycle revenues are analyzed, and implications for software pricing and timing are derived. A profit-maximizing methodology is developed to select, or defer, the development of software features for inclusion in a given release. An iterative model of the stages of the software development process is developed, taking into account new communications capabilities as well as profitability.

Relevance: 30.00%

Publisher:

Abstract:

Liquidity is an important attribute of an asset that investors would like to take into consideration when making investment decisions. However, previous empirical evidence on whether liquidity is a determinant of stock returns is not unanimous. This dissertation provides a comprehensive study of the role of liquidity in asset pricing, using the Fama-French (1993) three-factor model and the Kraus and Litzenberger (1976) three-moment CAPM for risk adjustment. The relationship between liquidity and well-known determinants of stock returns, such as size and book-to-market, is also investigated. The study examines liquidity and asset pricing issues for both intertemporal and cross-sectional data. The results indicate the existence of a liquidity premium: less liquid stocks demand a higher rate of return than more liquid stocks. More specifically, a drop of 1 percent in liquidity is associated with a higher rate of return of about 2 to 3 basis points per month. Further investigation reveals that neither the Fama-French three-factor model nor the three-moment CAPM captures the liquidity premium. Finally, the results show that well-known determinants of stock returns such as size and book-to-market do not serve as proxies for liquidity. Overall, this dissertation shows that a liquidity premium exists in the stock market and that liquidity is a distinct effect, not driven by non-market factors, market factors, or other stock characteristics.

Relevance: 30.00%

Publisher:

Abstract:

In this dissertation, I investigate three related topics in asset pricing: consumption-based asset pricing under long-run risks and fat tails, the pricing of VIX (CBOE Volatility Index) options, and the market price of risk embedded in stock returns and stock options. These three topics are explored fully in Chapters II through IV; Chapter V summarizes the main conclusions. In Chapter II, I explore the effects of fat tails on the equilibrium implications of the long-run risks model of asset pricing by introducing innovations with a dampened power law into the consumption and dividend growth processes. I estimate the structural parameters of the proposed model by maximum likelihood. I find that the stochastic volatility model with fat tails can, without resorting to high risk aversion, generate an implied risk premium, an expected risk-free rate, and volatilities comparable to the magnitudes observed in the data. In Chapter III, I examine the pricing performance of VIX option models. The contention that simpler is better is supported by the empirical evidence from actual VIX option market data. I find that no model has small pricing errors over the entire range of strike prices and times to expiration. In general, Whaley's Black-like option model produces the best overall results, supporting the simpler-is-better contention. However, the Whaley model does underprice/overprice out-of-the-money call/put VIX options, which is contrary to the behavior of stock index option pricing models. In Chapter IV, I explore risk pricing through a model of time-changed Lévy processes based on the joint evidence from individual stock options and the underlying stocks. I specify a pricing kernel that prices idiosyncratic and systematic risks. This approach to examining risk premia on stocks deviates from existing studies. The empirical results show that the market pays positive premia for idiosyncratic and market jump-diffusion risk and for idiosyncratic volatility risk. However, there is no consensus on the premium for market volatility risk; it can be positive or negative. The positive premium on idiosyncratic risk runs contrary to the implications of traditional capital asset pricing theory.
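Whaley's approach prices VIX options by applying Black's (1976) formula to VIX futures. A minimal sketch of the standard Black-76 call pricer (the sample inputs are illustrative, not from the dissertation's data):

```python
import numpy as np
from scipy.stats import norm

def black76_call(f, k, t, r, sigma):
    """Black (1976) price of a European call on a futures price f."""
    d1 = (np.log(f / k) + 0.5 * sigma**2 * t) / (sigma * np.sqrt(t))
    d2 = d1 - sigma * np.sqrt(t)
    return np.exp(-r * t) * (f * norm.cdf(d1) - k * norm.cdf(d2))

# Illustrative inputs: VIX futures at 18, strike 20, 3 months to expiry
print(black76_call(f=18.0, k=20.0, t=0.25, r=0.02, sigma=0.9))
```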

Relevance: 30.00%

Publisher:

Abstract:

Road pricing has emerged as an effective means of managing road traffic demand while simultaneously raising additional revenue for transportation agencies. Research on the factors that govern travel decisions has shown that user preferences may be a function of the demographic characteristics of the individuals and the perceived trip attributes. However, it is not clear which trip attributes are actually considered in the travel decision-making process, how these attributes are perceived by travelers, and how the set of trip attributes changes as a function of the time of day or from day to day. In this study, operational Intelligent Transportation Systems (ITS) archives are mined and the aggregated preferences for a priced system are extracted at a fine time-aggregation level for an extended number of days. The resulting information is related to corresponding time-varying trip attributes such as travel time, travel time reliability, charged toll, and other parameters. The time-varying user preferences and trip attributes are linked by means of a binary choice (logit) model with a utility function linear in the trip attributes. The trip attribute weights in the utility function are then estimated dynamically for each time of day by means of an adaptive, limited-memory discrete Kalman filter (ALMF). The relationship between traveler choices and travel time is assessed using different rules to capture the logic that best represents traveler perception and the effect of real-time information on the observed preferences. The impact of travel time reliability on traveler choices is investigated under its multiple definitions. Based on the results, it can be concluded that the ALMF algorithm allows robust estimation of time-varying weights in the utility function at fine time-aggregation levels. The high correlations among the trip attributes severely constrain the simultaneous estimation of their weights in the utility function. Despite the data limitations, it is found that the ALMF algorithm can provide stable estimates of the choice parameters for some periods of the day. Finally, the daily variation of the user sensitivities for different periods of the day is found to resemble a well-defined normal distribution.
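A minimal sketch of the estimation idea in Python: the utility weights follow a random walk, and an extended Kalman filter updates them against the observed choice share. This is a simplified EKF, not the adaptive, limited-memory variant used in the study; all parameter values are placeholders.

```python
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def ekf_logit_step(beta, P, x, y, Q, R):
    """One EKF update of time-varying binary-logit weights beta.
    State: beta_t = beta_{t-1} + w,  w ~ N(0, Q)   (random walk)
    Obs:   y_t = sigmoid(x_t' beta_t) + noise, var R (choice share)."""
    beta_pred = beta                    # random-walk prediction
    P_pred = P + Q
    h = sigmoid(x @ beta_pred)          # predicted choice share
    H = h * (1 - h) * x                 # Jacobian of the logit link
    S = H @ P_pred @ H + R              # innovation variance (scalar)
    K = P_pred @ H / S                  # Kalman gain
    beta_new = beta_pred + K * (y - h)
    P_new = P_pred - np.outer(K, H) @ P_pred
    return beta_new, P_new

# Toy run: attributes = [toll, travel time], fixed "true" weights
rng = np.random.default_rng(3)
beta, P = np.zeros(2), np.eye(2)
Q, R = 1e-3 * np.eye(2), 0.01
for t in range(200):
    x = rng.normal(size=2)                       # observed trip attributes
    share = sigmoid(x @ np.array([-1.0, -0.5]))  # "true" choice share
    beta, P = ekf_logit_step(beta, P, x, share + 0.05 * rng.normal(), Q, R)
print(beta)   # should approach roughly [-1.0, -0.5]
```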