944 results for Variable pricing model
Abstract:
Traditional vegetation mapping methods rely on high-cost, labour-intensive aerial photography interpretation. This approach can be subjective and is limited by factors such as the extent of remnant vegetation and the differing scale and quality of aerial photography over time. An alternative approach is proposed which integrates a data model, a statistical model and an ecological model, using sophisticated Geographic Information Systems (GIS) techniques and rule-based systems to support fine-scale vegetation community modelling. This approach is based on a more realistic representation of vegetation patterns, with transitional gradients from one vegetation community to another. Arbitrary, and often unrealistic, sharp boundaries can then be imposed on the model by the application of statistical methods. This GIS-integrated multivariate approach is applied to the problem of vegetation mapping in the complex vegetation communities of the Innisfail Lowlands in the Wet Tropics bioregion of northeastern Australia. The paper presents the full cycle of this vegetation modelling approach, including site sampling, variable selection, model selection, model implementation, internal model assessment, assessment of model predictions, integration of discrete vegetation community models to generate a composite pre-clearing vegetation map, model validation against an independent data set, and scale assessment of model predictions. An accurate pre-clearing vegetation map of the Innisfail Lowlands was generated (r² = 0.83) through GIS integration of 28 separate statistical models. This modelling approach has good potential for wider application, including provision of vital information for conservation planning and management; a scientific basis for rehabilitation of disturbed and cleared areas; and a viable method for producing adequate vegetation maps for conservation and forestry planning in poorly studied areas. (c) 2006 Elsevier B.V. All rights reserved.
Abstract:
Analysis of the equity premium puzzle has focused on private sector capital markets. The object of this paper is to consider the welfare and policy implications of each of the broad classes of explanations of the equity premium puzzle. As would be expected, the greater the deviation from the first-best outcome implied by a given explanation of the equity premium puzzle, the more interventionist are the implied policy conclusions. Nevertheless, even explanations of the equity premium puzzle consistent with a general consumption-based asset pricing model have important welfare and policy implications.
Abstract:
The most liquid stocks in the IBOVESPA index reflect the behavior of stocks in general, as well as the relationship of macroeconomic variables to their behavior, and are among the most traded in the Brazilian capital market. It can thus be understood that factors affecting the most liquid companies are reflected in the behavior of macroeconomic variables, and that the reverse also holds: swings in macroeconomic factors, such as the IPCA, GDP, the SELIC rate, and the exchange rate, also affect the most liquid stocks. This study analyzes the relationship between macroeconomic variables and the behavior of the most liquid stocks in the IBOVESPA index, corroborating studies that seek to understand the influence of macroeconomic factors on stock prices and contributing empirically to the formation of investment portfolios. The study covers the period from 2008 to 2014. The results indicate that portfolios built to protect invested capital should contain assets negatively correlated with the variables studied, which makes it possible to compose a portfolio with reduced risk.
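The portfolio conclusion above hinges on measuring the correlation between an asset's returns and a macro series. A minimal sketch with numpy, using invented numbers rather than actual IBOVESPA or IPCA data:

```python
import numpy as np

# Hypothetical monthly series: a stock's returns and an inflation index.
# Values are illustrative only, not real market data.
stock_returns = np.array([0.02, -0.01, 0.03, -0.02, 0.01, -0.03, 0.02, -0.01])
inflation     = np.array([0.4,   0.6,  0.3,   0.7,  0.5,   0.8,  0.4,   0.6])

# Pearson correlation: a negative value suggests the asset may help hedge
# a portfolio against movements in the macro variable.
corr = np.corrcoef(stock_returns, inflation)[0, 1]
print(round(corr, 3))
```

With these toy numbers the correlation is strongly negative, which is exactly the property the abstract recommends seeking when composing a reduced-risk portfolio.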
Abstract:
This study examined the predictive merits of selected cognitive and noncognitive variables on the national Registry exam pass rate using 2008 graduates (n = 175) from community college radiography programs in Florida. The independent variables included two GPAs, final grades in five radiography courses, self-efficacy, and social support. The dependent variable was the first-attempt result on the national Registry exam. The design was a retrospective predictive study that relied on academic data collected from participants using the self-report method, and on perceptions of students' success on the national Registry exam collected through a questionnaire developed and piloted in the study. Using Pearson product-moment correlation analysis, all independent variables except self-efficacy and social support correlated with success on the national Registry exam (p < .01). The strongest predictor of national Registry exam success was the end-of-program GPA, r = .550, p < .001. The GPAs and scores for self-efficacy and social support were entered into a logistic regression analysis to produce a prediction model. The end-of-program GPA (p = .015) emerged as a significant variable. This model predicted 44% of the students who failed the national Registry exam and 97.3% of those who passed, explaining 45.8% of the variance. A second model included the final grades for the radiography courses, self-efficacy, and social support. Three courses significantly predicted national Registry exam success: Radiographic Exposures, p < .001; Radiologic Physics, p = .014; and Radiation Safety & Protection, p = .044, explaining 56.8% of the variance. This model predicted 64% of the students who failed the national Registry exam and 96% of those who passed. The findings support the use of in-program data as accurate predictors of success on the national Registry exam.
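As a rough illustration of the kind of logistic-regression prediction model described above, the sketch below fits pass/fail outcomes against a simulated end-of-program GPA. The data, coefficients, and sample size are invented, not the study's:

```python
import numpy as np

# Simulate hypothetical graduates: pass probability rises with GPA.
rng = np.random.default_rng(0)
gpa = rng.uniform(2.0, 4.0, size=200)
p_pass = 1.0 / (1.0 + np.exp(-(4.0 * (gpa - 2.8))))
passed = (rng.random(200) < p_pass).astype(float)

# Fit intercept + slope by plain gradient descent on the logistic log-loss.
X = np.column_stack([np.ones_like(gpa), gpa])
w = np.zeros(2)
for _ in range(5000):
    pred = 1.0 / (1.0 + np.exp(-X @ w))
    w -= 0.01 * X.T @ (pred - passed) / len(gpa)

# A positive slope means higher GPA implies higher odds of passing,
# mirroring the study's finding that end-of-program GPA predicts success.
print("slope on GPA:", round(w[1], 2))
```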
Abstract:
My dissertation investigates the financial linkages and transmission of economic shocks between the US and the smallest emerging markets (frontier markets). The first chapter sets up an empirical model that examines the impact of US market returns and conditional volatility on the returns and conditional volatilities of twenty-one frontier markets. The model is estimated via maximum likelihood, utilizes the GARCH model of errors, and is applied to daily country data from MSCI Barra. We find limited but statistically significant exposure of frontier markets to shocks from the US. Our results suggest that it is not the lagged US market returns that have an impact; rather, it is the expected US market returns that influence frontier market returns. The second chapter sets up an empirical time-varying parameter (TVP) model to explore the time variation in the impact of mean US returns on mean frontier market returns. The model utilizes the Kalman filter algorithm as well as the GARCH model of errors and is applied to daily country data from MSCI Barra. The TVP model detects statistically significant time variation in the impact of US returns, and a low but statistically and quantitatively important impact of US market conditional volatility. The third chapter studies the risk-return relationship in twenty frontier country stock markets by setting up an international version of the intertemporal capital asset pricing model. The systematic risk in this model comes from the covariance of frontier market stock index returns with world returns. Both the systematic risk and the risk premium are time-varying in our model. We also incorporate own-country variances as additional determinants of frontier country returns. Our results suggest a statistically significant impact of both world and own-country risk in explaining frontier country returns. Time variation in the world risk premium is also found to be statistically significant for most frontier market returns. However, own-country risk is found to be quantitatively more important.
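A minimal sketch of the GARCH(1,1) error recursion that the chapters above rely on, with illustrative parameter values rather than estimates from the dissertation:

```python
import numpy as np

# GARCH(1,1) conditional variance:
#   sigma2_t = omega + alpha * e_{t-1}^2 + beta * sigma2_{t-1}
omega, alpha, beta = 0.05, 0.10, 0.85   # alpha + beta < 1 -> covariance stationary

rng = np.random.default_rng(1)
n = 1000
e = np.zeros(n)
sigma2 = np.zeros(n)
sigma2[0] = omega / (1 - alpha - beta)  # start at the unconditional variance

for t in range(1, n):
    sigma2[t] = omega + alpha * e[t - 1] ** 2 + beta * sigma2[t - 1]
    e[t] = np.sqrt(sigma2[t]) * rng.standard_normal()

# The sample variance of the simulated errors should sit near the
# unconditional variance omega / (1 - alpha - beta).
print(round(e.var(), 2))
```

The same recursion, embedded in a likelihood, is what "estimated via maximum likelihood, utilizes the GARCH model of errors" refers to in practice.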
Abstract:
For the last three decades, the Capital Asset Pricing Model (CAPM) has been the dominant model for calculating expected return. In the early 1990s, Fama and French (1992) developed the Fama and French three-factor model by adding two additional factors to the CAPM. However, even with these models, estimates of the expected return have been found to be inaccurate (Elton, 1999; Fama & French, 1997). Botosan (1997) introduced a new approach to estimating the expected return. This approach employs an equity valuation model to calculate the internal rate of return (IRR), often called the "implied cost of equity capital," as a proxy for the expected return. The approach has been gaining popularity among researchers. A critical review of the literature will help inform hospitality researchers about the issue and encourage them to implement the new approach in their own studies.
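The implied-cost-of-equity idea can be shown in miniature: solve for the discount rate that equates today's price with a forecast dividend stream plus a terminal value. The price, dividends, and growth rate below are hypothetical, and the specific valuation form (dividends with Gordon terminal value) is just one common choice, not necessarily Botosan's exact model:

```python
def implied_cost_of_equity(price, dividends, g, lo=0.03, hi=1.0):
    """Bisect on r (search bracket: lo must exceed g) so that the present
    value of the forecast dividends plus terminal value equals the price."""
    def pv(r):
        v = sum(d / (1 + r) ** t for t, d in enumerate(dividends, start=1))
        # Gordon terminal value at the last forecast year, discounted back.
        v += dividends[-1] * (1 + g) / (r - g) / (1 + r) ** len(dividends)
        return v
    for _ in range(100):
        mid = (lo + hi) / 2
        if pv(mid) > price:   # PV too high -> discount rate too low
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Hypothetical inputs: price 50, three forecast dividends, 2% terminal growth.
r = implied_cost_of_equity(price=50.0, dividends=[2.0, 2.1, 2.2], g=0.02)
print(f"implied cost of equity: {r:.2%}")
```

The resulting internal rate of return serves as the market-implied proxy for the expected return, in place of a CAPM or three-factor estimate.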
Abstract:
In the finance literature, many economic theories and models have been proposed to explain and estimate the relationship between risk and return. Assuming risk aversion and rational behavior on the part of the investor, models are developed that are supposed to help in forming efficient portfolios: portfolios that maximize the expected rate of return for a given level of risk, or minimize risk for a given expected rate of return. One of the most widely used models for forming these efficient portfolios is Sharpe's Capital Asset Pricing Model (CAPM). In the development of this model it is assumed that investors have homogeneous expectations about the future probability distribution of the rates of return; that is, every investor assumes the same values for the parameters of that distribution. Likewise, homogeneity of financial volatility is commonly assumed, where volatility is taken as investment risk and is usually measured by the variance of the rates of return (typically, the square root of the variance is used to define financial volatility). Furthermore, it is often assumed that the data generating process consists of independent and identically distributed random variables, which again implies that financial volatility is measured from homogeneous time series with stationary parameters. In this dissertation, we investigate these assumptions of homogeneity among market agents. We provide evidence of heterogeneity in market participants' information, objectives, and expectations about the parameters of the probability distribution of prices, as shown by the differences in the empirical distributions corresponding to different time scales, which in this study are associated with different classes of investors. We also demonstrate that the statistical properties of the underlying data generating processes, including the volatility of the rates of return, are quite heterogeneous.
In other words, we provide empirical evidence against the traditional views about homogeneity using non-parametric wavelet analysis on trading data. The results show heterogeneity of financial volatility at different time scales, and time scale is one of the most important dimensions along which trading behavior differs. In fact, we conclude that heterogeneity, as posited by the Heterogeneous Markets Hypothesis, is the norm and not the exception.
Abstract:
Synchronous machines, widely used in energy generation systems, require constant voltage and frequency to deliver good power quality. However, under large load variations it is difficult to keep the outputs at nominal values due to parametric uncertainties, nonlinearities and coupling among variables. We therefore propose applying the Dual Mode Adaptive Robust Controller (DMARC) in the field flux control loop, replacing the traditional PI controller. The DMARC links a Model Reference Adaptive Controller (MRAC) and a Variable Structure Model Reference Adaptive Controller (VS-MRAC), incorporating the transient performance advantages of the VS-MRAC and the steady-state properties of the MRAC. Simulation results are included to corroborate the theoretical studies.
Abstract:
This dissertation extends the empirical industrial organization literature with two essays on strategic decisions of firms in imperfectly competitive markets and one essay on how inertia in consumer choice can result in significant welfare losses. Using data from the airline industry I study a well-known puzzle in the literature whereby incumbent firms decrease fares when Southwest Airlines emerges as a potential entrant, but is not (yet) competing directly. In the first essay I describe this so-called Southwest Effect and use reduced-form analysis to offer possible explanations for why firms may choose to forgo profits today rather than wait until Southwest operates the route. The analysis suggests that incumbent firms are attempting to signal to Southwest that entry is unprofitable so as to deter its entry. The second essay develops this theme by extending a classic model from the IO literature, limit pricing, to a dynamic setting. Calibrations indicate the price cuts observed in the data can be captured by a dynamic limit pricing model. The third essay looks at another concentrated industry, mobile telecoms, and studies how inertia in choice (be it inattention or switching costs) can lead to consumers being on poorly matched cellphone plans and how a simple policy proposal can have a considerable effect on welfare.
Abstract:
This thesis presents a dividend version of the Capital Asset Pricing Model (CAPM). According to the model developed here, an equilibrium relation exists between dividend yield and systematic risk. This relation is linear and negative, and it can be derived in a world with or without taxes. One application of the model is in estimating the theoretical value of a common share using the net discount rate. Overall, the empirical test indicates an observable agreement between the model's major implications and the facts.
THE COSTS OF RAISING EQUITY RATIO FOR BANKS: Evidence from publicly listed banks operating in Finland
Abstract:
The solvency rate of banks differs from that of other corporations: the equity ratio of a bank is lower than in firms in other fields of business. Yet a functioning banking industry has a huge impact on the whole of society. The equity ratio of a bank needs to be higher, because that makes the banking industry more stable as the probability of banks failing decreases. If a bank fails, the government compensates the deposits, since it has granted the bank's depositors deposit insurance; in the last resort, the payment comes from taxpayers. Economic debate has long concentrated on the costs of raising the equity ratio. It has been a common belief that raising the equity ratio increases banks' funding costs at the same pace, and that these costs are passed on to banks' customers as higher service charges. Despite this common belief, the actual reaction of funding costs to a higher equity ratio has been studied only a little in Europe, and no study has been conducted in Finland. Before it can be determined whether the greater stability of the banking industry brought about by higher equity levels compensates for the extra funding costs, the actual increase in funding costs must be calculated. Currently the banking industry is governed by complex and heavy regulation, and maintaining such a complex system inflicts major costs in itself. This research leans on the Modigliani and Miller theorem, which shows that a firm's financing structure is irrelevant to its funding costs. In addition, this research follows the calculations of Miller, Yang and Marcheggiano (2012) and Vale (2011), who calculate funding costs after the doubling of specific banks' equity ratios.
The Finnish banks studied in this research are Nordea and Danske Bank, because they are the two largest banks operating in Finland and both have a company form that enables the calculations. To calculate the costs of halving their leverage, this study used the Capital Asset Pricing Model. Halving the leverage of Danske Bank raised its funding costs by 16 to 257 basis points, depending on the method of assessment. For Nordea, the increase in funding costs was 11 to 186 basis points when its leverage was halved. Based on the results of this study, doubling the equity ratio does not increase a bank's funding costs one-for-one; in fact the increase is quite modest. More solvent banks would increase the stability of the banking industry enormously, while the increase in funding costs is low. If the costs of bank regulation exceed the increase in funding costs after a higher equity ratio, this can be considered a better way of stabilizing the banking industry than heavy regulation.
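The mechanics behind a halved-leverage calculation can be sketched as follows: unlever the equity beta, relever it at half the debt-to-equity ratio (Modigliani-Miller, no taxes), and compare CAPM costs of equity. All figures below are hypothetical, not Nordea's or Danske Bank's actual numbers:

```python
def capm(rf, beta, mrp):
    """Expected return on equity via the Capital Asset Pricing Model."""
    return rf + beta * mrp

rf, mrp = 0.02, 0.05          # risk-free rate, market risk premium (assumed)
beta_e, d_to_e = 1.4, 19.0    # levered equity beta at a 5% equity ratio (assumed)

# Unlever, then relever at half the debt-to-equity ratio (M&M, no taxes):
#   beta_equity = beta_asset * (1 + D/E)
beta_asset = beta_e / (1 + d_to_e)
beta_half = beta_asset * (1 + d_to_e / 2)

cost_before = capm(rf, beta_e, mrp)
cost_after = capm(rf, beta_half, mrp)
print(f"cost of equity: {cost_before:.2%} -> {cost_after:.2%}")
```

The required return on equity falls as leverage falls, which is why doubling the equity ratio raises total funding costs far less than one-for-one: the extra equity is cheaper equity.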
Abstract:
Investors value the special attributes of monetary assets (e.g., exchangeability, liquidity, and safety) and pay a premium for holding them in the form of a lower rate of return. The user cost of holding monetary assets can be measured approximately by the difference between the returns on illiquid risky assets and those on safer liquid assets. A more appropriate measure should adjust this difference for the differential risk of the assets in question. We investigate the impact that time non-separable preferences have on the estimation of the risk-adjusted user cost of money. Using U.K. data from 1965Q1 to 2011Q1, we estimate a habit-based asset pricing model with money in the utility function and find that the risk adjustment for risky monetary assets is negligible. Thus, researchers can dispense with risk-adjusting the user cost of money when constructing monetary aggregate indexes.
Abstract:
Master's dissertation in Economic and Business Sciences, Unidade de Ciências Económicas e Empresariais, Univ. do Algarve, 1996
Abstract:
To tackle challenges in circuit-level and system-level VLSI and embedded system design, this dissertation proposes several novel algorithms for finding efficient solutions. At the circuit level, a new reliability-driven minimum-cost Steiner routing and layer assignment scheme is proposed, along with the first transceiver insertion algorithmic framework for optical interconnect. At the system level, a reliability-driven task scheduling scheme for multiprocessor real-time embedded systems is proposed, which optimizes system energy consumption under stochastic fault occurrences. Embedded system design is also widely used in the smart home area for improving health, wellbeing and quality of life. The proposed scheduling scheme for multiprocessor embedded systems is therefore extended to handle energy consumption scheduling for smart homes: the extended scheme arranges the operation of household appliances to minimize a customer's monetary expense under a time-varying pricing model.
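A toy version of the household-appliance scheduling idea: given a time-varying price per hour, run each load in its cheapest allowed hours. The `schedule` helper and all numbers are invented for illustration, and the sketch assumes fully interruptible, concurrently runnable loads (the dissertation's scheme handles richer constraints):

```python
def schedule(prices, appliances):
    """appliances: list of (name, kwh_per_hour, hours_needed).
    Greedily assigns each load to the globally cheapest hours and
    returns (plan, total_cost), allowing loads to run concurrently."""
    total = 0.0
    plan = {}
    cheapest = sorted(range(len(prices)), key=lambda h: prices[h])
    for name, kwh, hours in appliances:
        slots = cheapest[:hours]          # cheapest hours for this load
        plan[name] = sorted(slots)
        total += sum(prices[h] * kwh for h in slots)
    return plan, total

# Hypothetical hourly prices ($/kWh) and two appliances.
prices = [0.30, 0.28, 0.12, 0.10, 0.11, 0.25, 0.35, 0.40]
plan, cost = schedule(prices, [("washer", 1.0, 2), ("dryer", 2.0, 1)])
print(plan, round(cost, 2))
```

Both loads land in the cheap mid-day hours; shifting the same loads to the most expensive hours would more than triple the bill, which is the saving a time-varying pricing model rewards.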