22 results for Variable pricing model
in Digital Commons at Florida International University
Abstract:
Groundwater systems of different densities are often mathematically modeled to understand and predict environmental behavior such as seawater intrusion or submarine groundwater discharge. Additional data collection may be justified if it will cost-effectively aid in reducing the uncertainty of a model's prediction. The collection of salinity as well as temperature data could aid in reducing predictive uncertainty in a variable-density model. However, before numerical models can be created, rigorous testing of the modeling code needs to be completed. This research documents the benchmark testing of a new modeling code, SEAWAT Version 4. The benchmark problems include various combinations of density-dependent flow resulting from variations in concentration and temperature. The verified code, SEAWAT, was then applied to two different hydrological analyses to explore the capacity of a variable-density model to guide data collection.

The first analysis tested a linear method to guide data collection by quantifying the contribution of different data types and locations toward reducing predictive uncertainty in a nonlinear variable-density flow and transport model. The relative contributions of temperature and concentration measurements, at different locations within a simulated carbonate platform, for predicting movement of the saltwater interface were assessed. Results from the method showed that concentration data had greater worth than temperature data in reducing predictive uncertainty in this case. Results also indicated that a linear method could be used to quantify data worth in a nonlinear model.

The second hydrological analysis utilized a model to identify the transient response of the salinity, temperature, age, and amount of submarine groundwater discharge to changes in tidal ocean stage, seasonal temperature variations, and different types of geology. The model was compared to multiple kinds of data to (1) calibrate and verify the model, and (2) explore the potential for the model to be used to guide the collection of data using techniques such as electromagnetic resistivity, thermal imagery, and seepage meters. Results indicated that the model can be used to give insight into submarine groundwater discharge and to guide data collection.
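For readers unfamiliar with variable-density codes such as SEAWAT, the fluid density that drives the flow is commonly computed from a linearized equation of state in both solute concentration and temperature. The sketch below is illustrative only; the slope values are typical textbook defaults, not parameters taken from this study.

```python
def fluid_density(conc, temp, rho_f=1000.0, c_ref=0.0, t_ref=25.0,
                  drho_dc=0.7, drho_dt=-0.375):
    """Linearized equation of state used by many variable-density codes.

    conc    : solute concentration (kg/m^3)
    temp    : temperature (deg C)
    rho_f   : freshwater density at the reference state (kg/m^3)
    drho_dc : density change per unit concentration (illustrative default)
    drho_dt : density change per deg C (illustrative default, negative above ~4 deg C)
    """
    return rho_f + drho_dc * (conc - c_ref) + drho_dt * (temp - t_ref)

# Seawater-like water (35 kg/m^3 TDS) at the 25 deg C reference is ~1024.5 kg/m^3:
print(fluid_density(35.0, 25.0))
```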
Abstract:
Personality has long been linked to performance. Evolutions in this relationship have brought forward new questions regarding the true nature of how personality impacts performance. Both direct and indirect relationships have been found to be significant. This study further investigated potential indirect relationships by including a mediating variable, mental model formation, in the personality-performance relationship. Undergraduate students were assessed in a 6-week, Time 1 to Time 2 experiment. Conceptualizations of personality included measures of the Big 5 model and self-efficacy, with performance measured by content quiz and overall course scores. Findings showed that the Big 5 personality traits of extraversion and agreeableness positively and significantly impacted commonality with the instructor's mental model. However, commonality with the instructor's mental model did not impact performance. In comparison, commonality with an expert mental model positively and significantly impacted performance for both the content quiz and overall course score. Furthermore, similarity with an expert mental model positively and significantly impacted overall course performance. Hypothesized full mediation of mental model formation for the personality-performance relationship was not supported due to a lack of the direct effect relationships required for mediation. However, a revised conceptualization of the results emerged. Findings from the current study point to the novel and unique role mental models play in the personality-performance relationship. While personality traits do impact mental model formation, accuracy in the mental models formed is critical to performance.
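As a rough illustration of the mediation test described above, a regression-based (Baron and Kenny style) check can be run on hypothetical columns such as extraversion, mental_model_commonality, and course_score; this is a generic sketch, not the analysis actually performed in the study.

```python
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("students.csv")  # hypothetical file with the columns used below

def ols(y, X):
    return sm.OLS(df[y], sm.add_constant(df[X])).fit()

# Step 1: predictor -> outcome (total effect)
total = ols("course_score", ["extraversion"])
# Step 2: predictor -> mediator
a_path = ols("mental_model_commonality", ["extraversion"])
# Step 3: mediator (and predictor) -> outcome
b_path = ols("course_score", ["extraversion", "mental_model_commonality"])

print(total.params, a_path.params, b_path.params, sep="\n")
# Full mediation would require a significant total effect that shrinks once the
# mediator is added -- the condition the study reports was not met.
```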
Abstract:
The standard highway assignment model in the Florida Standard Urban Transportation Modeling Structure (FSUTMS) is based on the equilibrium traffic assignment method. This method involves running several iterations of all-or-nothing capacity-restraint assignment with an adjustment of travel time to reflect delays encountered in the associated iteration. The iterative link time adjustment process is accomplished through the Bureau of Public Roads (BPR) volume-delay equation. Since FSUTMS' traffic assignment procedure outputs daily volumes and the input capacities are given in hourly volumes, it is necessary to convert the hourly capacities to their daily equivalents when computing the volume-to-capacity ratios used in the BPR function. The conversion is accomplished by dividing the hourly capacity by a factor called the peak-to-daily ratio, also referred to as CONFAC in FSUTMS. The ratio is computed as the highest hourly volume of a day divided by the corresponding total daily volume.

While several studies have indicated that CONFAC is a decreasing function of the level of congestion, a constant value is used for each facility type in the current version of FSUTMS. This ignores the different congestion levels associated with each roadway and is believed to be one of the culprits of traffic assignment errors. Traffic count data from across the state of Florida were used to calibrate CONFACs as a function of a congestion measure using the weighted least squares method. The calibrated functions were then implemented in FSUTMS through a procedure that takes advantage of the iterative nature of FSUTMS' equilibrium assignment method.

The assignment results based on constant and variable CONFACs were then compared against the ground counts for three selected networks. It was found that the accuracy of the two assignments was not significantly different, and that the hypothesized improvement in assignment results from the variable CONFAC model was not empirically evident. It was recognized that many other factors beyond the scope and control of this study could contribute to this finding. It was recommended that further studies focus on the use of the variable CONFAC model with recalibrated parameters for the BPR function and/or with other forms of volume-delay functions.
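The mechanics described above can be illustrated with the standard BPR volume-delay form; the alpha and beta values below are the common defaults (0.15 and 4), not the recalibrated parameters the study recommends investigating, and the CONFAC value is purely illustrative.

```python
def bpr_travel_time(free_flow_time, daily_volume, hourly_capacity,
                    confac=0.10, alpha=0.15, beta=4.0):
    """Link travel time from the BPR volume-delay function.

    CONFAC is the peak-to-daily ratio (peak-hour volume / daily volume), so
    dividing the hourly capacity by CONFAC gives a daily-equivalent capacity
    comparable with the daily assigned volume.
    """
    daily_capacity = hourly_capacity / confac
    v_over_c = daily_volume / daily_capacity
    return free_flow_time * (1.0 + alpha * v_over_c ** beta)

# Example: 10-minute free-flow link, 40,000 veh/day, 2,000 veh/h capacity
print(bpr_travel_time(10.0, 40_000, 2_000))   # 10 * (1 + 0.15 * 2**4) = 34 minutes
```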
Abstract:
This study examined the predictive merits of selected cognitive and noncognitive variables on the national Registry exam pass rate using 2008 graduates (n = 175) from community college radiography programs in Florida. The independent variables included two GPAs, final grades in five radiography courses, self-efficacy, and social support. The dependent variable was the first-attempt result on the national Registry exam. The design was a retrospective predictive study that relied on academic data collected from participants using the self-report method and on perceptions of students' success on the national Registry exam collected through a questionnaire developed and piloted in the study. All independent variables except self-efficacy and social support correlated with success on the national Registry exam (p < .01) using the Pearson Product-Moment Correlation analysis. The strongest predictor of national Registry exam success was the end-of-program GPA, r = .550, p < .001. The GPAs and scores for self-efficacy and social support were entered into a logistic regression analysis to produce a prediction model. The end-of-program GPA (p = .015) emerged as a significant variable. This model predicted 44% of the students who failed the national Registry exam and 97.3% of those who passed, explaining 45.8% of the variance. A second model included the final grades for the radiography courses, self-efficacy, and social support. Three courses significantly predicted national Registry exam success: Radiographic Exposures, p < .001; Radiologic Physics, p = .014; and Radiation Safety & Protection, p = .044, explaining 56.8% of the variance. This model predicted 64% of the students who failed the national Registry exam and 96% of those who passed. The findings support the use of in-program data as accurate predictors of success on the national Registry exam.
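A prediction model of the kind described can be reproduced in outline with a logistic regression; the file and column names below are placeholders, not the study's actual data.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical data: one row per graduate, pass_registry coded 1/0.
df = pd.read_csv("radiography_grads.csv")
predictors = ["end_program_gpa", "exposures_grade", "physics_grade",
              "radiation_safety_grade", "self_efficacy", "social_support"]

X = sm.add_constant(df[predictors])
model = sm.Logit(df["pass_registry"], X).fit()
print(model.summary())

# Classification table at a 0.5 cut-off, analogous to the reported
# percent-correct figures for passers and failers.
pred = (model.predict(X) >= 0.5).astype(int)
print(pd.crosstab(df["pass_registry"], pred))
```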
Abstract:
My dissertation investigates the financial linkages and transmission of economic shocks between the US and the smallest emerging markets (frontier markets). The first chapter sets up an empirical model that examines the impact of US market returns and conditional volatility on the returns and conditional volatilities of twenty-one frontier markets. The model is estimated via maximum likelihood, utilizes the GARCH model of errors, and is applied to daily country data from MSCI Barra. We find limited, but statistically significant, exposure of frontier markets to shocks from the US. Our results suggest that it is not the lagged US market returns that have impact; rather, it is the expected US market returns that influence frontier market returns. The second chapter sets up an empirical time-varying parameter (TVP) model to explore the time-variation in the impact of mean US returns on mean frontier market returns. The model utilizes the Kalman filter algorithm as well as the GARCH model of errors and is applied to daily country data from MSCI Barra. The TVP model detects statistically significant time-variation in the impact of US returns and a low, but statistically and quantitatively important, impact of US market conditional volatility. The third chapter studies the risk-return relationship in twenty frontier country stock markets by setting up an international version of the intertemporal capital asset pricing model. The systematic risk in this model comes from the covariance of frontier market stock index returns with world returns. Both the systematic risk and the risk premium are time-varying in our model. We also incorporate own-country variances as additional determinants of frontier country returns. Our results suggest a statistically significant impact of both world and own-country risk in explaining frontier country returns. Time-variation in the world risk premium is also found to be statistically significant for most frontier market returns. However, own-country risk is found to be quantitatively more important.
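The time-varying parameter idea in the second chapter can be sketched with a scalar random-walk coefficient estimated by a Kalman filter; the noise variances and simulated data below are arbitrary illustrative values, not the estimates reported in the dissertation.

```python
import numpy as np

def tvp_beta(y, x, q=1e-4, r=1e-2, beta0=0.0, p0=1.0):
    """Kalman filter for y_t = beta_t * x_t + e_t, with beta_t a random walk.

    q : variance of the random-walk innovation in beta
    r : variance of the observation noise
    Returns the filtered path of beta_t.
    """
    beta, p = beta0, p0
    path = np.empty(len(y))
    for t, (yt, xt) in enumerate(zip(y, x)):
        p_pred = p + q                              # predict state variance
        k = p_pred * xt / (xt * p_pred * xt + r)    # Kalman gain
        beta = beta + k * (yt - xt * beta)          # update state
        p = (1.0 - k * xt) * p_pred                 # update variance
        path[t] = beta
    return path

# Simulated example: frontier returns loading on US returns with a drifting beta
rng = np.random.default_rng(0)
x = rng.normal(0, 1, 1_000)
true_beta = np.cumsum(rng.normal(0, 0.01, 1_000)) + 0.2
y = true_beta * x + rng.normal(0, 0.1, 1_000)
print(tvp_beta(y, x)[-5:])
```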
Abstract:
For the last three decades, the Capital Asset Pricing Model (CAPM) has been a dominant model to calculate expected return. In the early 1990s, Fama and French (1992) developed the Fama and French Three Factor model by adding two additional factors to the CAPM. However, even with these models, it has been found that estimates of the expected return are not accurate (Elton, 1999; Fama & French, 1997). Botosan (1997) introduced a new approach to estimate the expected return. This approach employs an equity valuation model to calculate the internal rate of return (IRR), often called the "implied cost of equity capital," as a proxy for the expected return. This approach has been gaining in popularity among researchers. A critical review of the literature will help inform hospitality researchers regarding the issue and encourage them to implement the new approach in their own studies.
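The implied cost of equity capital approach amounts to solving for the internal rate of return that equates today's price with forecast payoffs. The sketch below uses a simple dividend-plus-terminal-value variant with made-up inputs; Botosan's (1997) actual implementation rests on an equity (residual income) valuation model, so treat this only as an illustration of the root-finding step.

```python
from scipy.optimize import brentq

def implied_cost_of_equity(price, forecast_dividends, terminal_value):
    """Solve for r such that price equals the present value of the forecasts."""
    def pricing_error(r):
        horizon = len(forecast_dividends)
        pv = sum(d / (1 + r) ** (t + 1) for t, d in enumerate(forecast_dividends))
        pv += terminal_value / (1 + r) ** horizon
        return pv - price
    return brentq(pricing_error, 1e-6, 1.0)   # search between ~0% and 100%

# Illustrative numbers only: $50 stock, three years of dividend forecasts,
# $60 terminal value.
print(implied_cost_of_equity(50.0, [1.5, 1.7, 1.9], 60.0))
```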
Abstract:
In the finance literature, many economic theories and models have been proposed to explain and estimate the relationship between risk and return. Assuming risk averseness and rational behavior on the part of the investor, models are developed that are supposed to help in forming efficient portfolios that either maximize (minimize) the expected rate of return (risk) for a given level of risk (rate of return). One of the most used models to form these efficient portfolios is Sharpe's Capital Asset Pricing Model (CAPM). In the development of this model it is assumed that investors have homogeneous expectations about the future probability distribution of the rates of return; that is, every investor assumes the same values of the parameters of the probability distribution. Likewise, homogeneity of financial volatility is commonly assumed, where volatility is taken as investment risk and is usually measured by the variance of the rates of return. Typically the square root of the variance is used to define financial volatility; furthermore, it is also often assumed that the data generating process is made of independent and identically distributed random variables. This again implies that financial volatility is measured from homogeneous time series with stationary parameters. In this dissertation, we investigate the assumptions of homogeneity of market agents and provide evidence for heterogeneity in market participants' information, objectives, and expectations about the parameters of the probability distribution of prices, as given by the differences in the empirical distributions corresponding to different time scales, which in this study are associated with different classes of investors. We also demonstrate that statistical properties of the underlying data generating processes, including the volatility in the rates of return, are quite heterogeneous. In other words, we provide empirical evidence against the traditional views about homogeneity using non-parametric wavelet analysis on trading data. The results show heterogeneity of financial volatility at different time scales, and time scale is one of the most important aspects in which trading behavior differs. In fact, we conclude that heterogeneity as posited by the Heterogeneous Markets Hypothesis is the norm and not the exception.
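The wavelet-based look at heterogeneity across time scales can be sketched with an off-the-shelf discrete wavelet transform: the variance of the detail coefficients at each decomposition level gives a rough breakdown of return volatility by scale. The snippet uses the PyWavelets package on simulated fat-tailed data and is only indicative of the kind of analysis described, not the authors' exact procedure.

```python
import numpy as np
import pywt

rng = np.random.default_rng(1)
returns = rng.standard_t(df=4, size=4_096) * 0.01   # fat-tailed stand-in for trading data

# Multi-level DWT: detail coefficients d1..d5 correspond to progressively
# longer time scales (roughly 2, 4, 8, 16, 32 observation intervals).
coeffs = pywt.wavedec(returns, "db4", level=5)
approx, details = coeffs[0], coeffs[1:]

for level, d in enumerate(reversed(details), start=1):
    scale = 2 ** level
    print(f"scale ~{scale:>2} obs : wavelet variance {np.var(d):.2e}")
```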
Abstract:
This research examines evolving issues in applied computer science and applies economic and business analyses as well. There are two main areas. The first is internetwork communications as embodied by the Internet. The goal of the research is to devise an efficient pricing, prioritization, and incentivization plan that could be realistically implemented on the existing infrastructure. Criteria include practical and economic efficiency and proper incentives for both users and providers. Background information on the evolution and functional operation of the Internet is given, and relevant literature is surveyed and analyzed. Economic analysis is performed on the incentive implications of the current pricing structure and organization. The problems are identified, and minimally disruptive solutions are proposed for all levels of implementation down to the lowest-level protocol. Practical issues are considered and performance analyses are done. The second area of research is mass market software engineering and how this differs from classical software engineering. Software life-cycle revenues are analyzed and software pricing and timing implications are derived. A profit-maximizing methodology is developed to select or defer the development of software features for inclusion in a given release. An iterative model of the stages of the software development process is developed, taking into account new communications capabilities as well as profitability.
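The release-planning problem in the second area resembles a budget-constrained selection: choose the subset of candidate features whose expected profit contribution is highest without exceeding the development capacity for the release, and defer the rest. The knapsack-style sketch below, with made-up feature data, is only one way to cast that decision and is not taken from the dissertation.

```python
from itertools import combinations

# (name, expected incremental profit, development cost in person-weeks)
features = [("search", 120, 6), ("sync", 90, 5), ("export", 60, 3),
            ("themes", 30, 2), ("plugins", 150, 9)]
capacity = 12  # person-weeks available for this release

best_profit, best_set = 0, ()
for r in range(len(features) + 1):
    for subset in combinations(features, r):
        cost = sum(f[2] for f in subset)
        profit = sum(f[1] for f in subset)
        if cost <= capacity and profit > best_profit:
            best_profit, best_set = profit, subset

print(best_profit, [f[0] for f in best_set])
# Features left out of best_set would be deferred to a later release.
```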
Abstract:
Liquidity is an important attribute of an asset that investors would like to take into consideration when making investment decisions. However, the previous empirical evidence on whether liquidity is a determinant of stock returns is not unanimous. This dissertation provides a comprehensive study of the role of liquidity in asset pricing, using the Fama-French (1993) three-factor model and the Kraus and Litzenberger (1976) three-moment CAPM as models for risk adjustment. The relationship between liquidity and well-known determinants of stock returns, such as size and book-to-market, is also investigated. This study examines the liquidity and asset pricing issues for both intertemporal as well as cross-sectional data.

The results indicate the existence of a liquidity premium, i.e., less liquid stocks would demand a higher rate of return than more liquid stocks. More specifically, a drop of 1 percent in liquidity is associated with a higher rate of return of about 2 to 3 basis points per month. Further investigation reveals that neither the Fama-French three-factor model nor the three-moment CAPM captures the liquidity premium. Finally, the results show that well-known determinants of stock returns such as size and book-to-market do not serve as proxies for liquidity.

Overall, this dissertation shows that a liquidity premium exists in the stock market and that liquidity is a distinct effect that is not influenced by the presence of market factors, non-market factors, and other stock characteristics.
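Risk adjustment of the kind described, with a liquidity variable added alongside the Fama-French factors, can be outlined with a time-series regression; the file and column names below are hypothetical, and this is a generic sketch rather than the dissertation's specification.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical monthly data: portfolio excess return, FF factors, liquidity measure
df = pd.read_csv("monthly_factors.csv")

X = sm.add_constant(df[["mkt_rf", "smb", "hml", "illiquidity"]])
fit = sm.OLS(df["portfolio_excess_ret"], X).fit()
print(fit.params)

# A significant coefficient on the (il)liquidity variable after controlling for
# the three factors is the kind of evidence read as a liquidity premium not
# captured by the Fama-French model.
```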
Abstract:
In this dissertation, I investigate three related topics in asset pricing: consumption-based asset pricing under long-run risks and fat tails, the pricing of VIX (CBOE Volatility Index) options, and the market price of risk embedded in stock returns and stock options. These three topics are fully explored in Chapters II through IV; Chapter V summarizes the main conclusions. In Chapter II, I explore the effects of fat tails on the equilibrium implications of the long-run risks model of asset pricing by introducing innovations with a dampened power law to the consumption and dividend growth processes. I estimate the structural parameters of the proposed model by maximum likelihood. I find that the stochastic volatility model with fat tails can, without resorting to high risk aversion, generate an implied risk premium, expected risk-free rate, and volatilities comparable to the magnitudes observed in the data. In Chapter III, I examine the pricing performance of VIX option models. The contention that simpler is better is supported by the empirical evidence using actual VIX option market data. I find that no model has small pricing errors over the entire range of strike prices and times to expiration. In general, Whaley's Black-like option model produces the best overall results, supporting the simpler-is-better contention. However, the Whaley model does underprice out-of-the-money call VIX options and overprice out-of-the-money put VIX options, which is contrary to the behavior of stock index option pricing models. In Chapter IV, I explore risk pricing through a model of time-changed Lévy processes based on the joint evidence from individual stock options and underlying stocks. I specify a pricing kernel that prices idiosyncratic and systematic risks. This approach to examining risk premia on stocks deviates from existing studies. The empirical results show that the market pays positive premia for idiosyncratic and market jump-diffusion risk, and for idiosyncratic volatility risk. However, there is no consensus on the premium for market volatility risk; it can be positive or negative. The positive premium on idiosyncratic risk runs contrary to the implications of traditional capital asset pricing theory.
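Whaley's Black-like approach prices VIX options by applying Black's (1976) futures-option formula with the VIX futures price as the underlying. A minimal sketch for a call, with purely illustrative inputs:

```python
from math import exp, log, sqrt
from scipy.stats import norm

def black76_call(futures, strike, vol, t, r):
    """Black (1976) price of a European call on a futures contract."""
    d1 = (log(futures / strike) + 0.5 * vol ** 2 * t) / (vol * sqrt(t))
    d2 = d1 - vol * sqrt(t)
    return exp(-r * t) * (futures * norm.cdf(d1) - strike * norm.cdf(d2))

# Illustrative inputs: VIX futures at 18, strike 20, 90% vol-of-vol, 1 month to expiry
print(black76_call(futures=18.0, strike=20.0, vol=0.90, t=1 / 12, r=0.02))
```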
Abstract:
Road pricing has emerged as an effective means of managing road traffic demand while simultaneously raising additional revenues for transportation agencies. Research on the factors that govern travel decisions has shown that user preferences may be a function of the demographic characteristics of the individuals and the perceived trip attributes. However, it is not clear which trip attributes are actually considered in the travel decision-making process, how these attributes are perceived by travelers, and how the set of trip attributes changes as a function of the time of day or from day to day. In this study, operational Intelligent Transportation Systems (ITS) archives are mined and the aggregated preferences for a priced system are extracted at a fine time aggregation level for an extended number of days. The resulting information is related to corresponding time-varying trip attributes such as travel time, travel time reliability, charged toll, and other parameters. The time-varying user preferences and trip attributes are linked together by means of a binary choice model (Logit) with a linear utility function on trip attributes. The trip attribute weights in the utility function are then dynamically estimated for each time of day by means of an adaptive, limited-memory discrete Kalman filter (ALMF). The relationship between traveler choices and travel time is assessed using different rules to capture the logic that best represents traveler perception and the effect of real-time information on the observed preferences. The impact of travel time reliability on traveler choices is investigated considering its multiple definitions. Based on the results, it can be concluded that the ALMF algorithm allows a robust estimation of time-varying weights in the utility function at fine time aggregation levels. The high correlations among the trip attributes severely constrain the simultaneous estimation of their weights in the utility function. Despite the data limitations, it is found that the ALMF algorithm can provide stable estimates of the choice parameters for some periods of the day. Finally, it is found that the daily variation of the user sensitivities for different periods of the day resembles a well-defined normal distribution.
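The choice model underlying the study is a binary logit with a linear utility in the trip attributes; the sketch below shows how, for assumed weights, the probability of choosing the priced alternative would be computed from travel-time savings, a reliability measure, and the toll. The weights and the function name are placeholders, not the paper's estimates.

```python
from math import exp

def prob_choose_toll_lane(time_saved_min, reliability_gain, toll_dollars,
                          w_time=0.12, w_rel=0.08, w_toll=-0.45, asc=-0.5):
    """Binary logit probability with a linear utility in trip attributes.

    Utility of the priced lane relative to the free lane:
        V = asc + w_time * time_saved + w_rel * reliability_gain + w_toll * toll
    """
    v = asc + w_time * time_saved_min + w_rel * reliability_gain + w_toll * toll_dollars
    return 1.0 / (1.0 + exp(-v))

# Example: 10 minutes saved, moderate reliability gain, $2 toll
print(prob_choose_toll_lane(10.0, 3.0, 2.0))
```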
Abstract:
Menu engineering is a methodology to classify menu items by their contribution margin and popularity. The process discounts the importance of food cost percentage, recognizing that operators deposit cash, not percentages. The authors raise the issue that strict application of the principles of menu engineering may result in an erroneous evaluation of a menu item, and also may be of little use without considering the variable portion of labor. They describe an enhancement to the process by considering labor.
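Menu engineering classifies each item on two axes: contribution margin (selling price less food cost, here optionally less variable labor, which is the enhancement the authors propose) and popularity relative to a benchmark share. A small sketch with made-up menu data follows; the 70% popularity benchmark is the commonly used rule of thumb, not necessarily the authors' figure.

```python
# (name, units sold, selling price, food cost, variable labor cost)
menu = [("Steak", 80, 28.0, 11.0, 3.0), ("Pasta", 220, 16.0, 4.0, 1.5),
        ("Salmon", 60, 24.0, 9.0, 2.5), ("Burger", 240, 14.0, 5.0, 1.0)]

total_sold = sum(m[1] for m in menu)
benchmark_share = 0.70 * (1.0 / len(menu))        # common popularity rule of thumb
margins = [price - food - labor for _, _, price, food, labor in menu]
avg_margin = sum(cm * item[1] for cm, item in zip(margins, menu)) / total_sold

for (name, sold, *_), cm in zip(menu, margins):
    popular = sold / total_sold >= benchmark_share
    profitable = cm >= avg_margin
    label = {(True, True): "Star", (True, False): "Plowhorse",
             (False, True): "Puzzle", (False, False): "Dog"}[(popular, profitable)]
    print(f"{name:<8} CM=${cm:5.2f} share={sold / total_sold:.0%} -> {label}")
```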
Abstract:
Barry Reece and Rhonda Brandt use a human relations perspective to explain behavior at work. Following a review of the six components of their model, the author presents research to illustrate how it can be used by managers to help them understand why food safety violations occur in restaurants. An additional variable not included in the model is discussed, and recommendations for managers are made.
Abstract:
A plethora of recent literature on asset pricing provides empirical evidence on the importance of liquidity, governance, and adverse selection of equity in the pricing of assets, together with more traditional factors such as market beta and the Fama-French factors. However, the literature has usually stressed that these factors are priced individually. In this dissertation we argue that these factors may be related to each other, hence not only individual but also joint tests of their significance are called for.

In the three related essays, we examine the liquidity premium in the context of the finer three-digit SIC industry classification, the joint importance of liquidity and governance factors, as well as governance and adverse selection. Recent studies by Core, Guay and Rusticus (2006) and Ben-Rephael, Kadan and Wohl (2010) find that governance and liquidity premiums have been dwindling in the last few years. One reason could be that liquidity is very unevenly distributed across industries. This could affect the interpretation of prior liquidity studies. Thus, in the first chapter we analyze the relation between industry clustering and liquidity risk following a finer industry classification suggested by Johnson, Moorman and Sorescu (2009). In the second chapter, we examine the dwindling influence of the governance factor when taken simultaneously with liquidity. We argue that this happens because governance characteristics are potentially a proxy for information asymmetry that may be better captured by the market liquidity of a company's shares. Hence, we jointly examine both factors, namely governance and liquidity, in a series of standard asset pricing tests. Our results reconfirm the importance of governance and liquidity in explaining stock returns, thus independently corroborating the findings of Amihud (2002) and Gompers, Ishii and Metrick (2003). Moreover, governance is not subsumed by liquidity. Lastly, we analyze the relation between governance and adverse selection, and again corroborate previous findings of a priced governance factor. Furthermore, we ascertain the importance of microstructure measures in asset pricing by employing Huang and Stoll's (1997) method to extract an adverse selection variable and finding evidence for its explanatory power in four-factor regressions.
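A joint test of the kind described can be illustrated with Fama-MacBeth style cross-sectional regressions of monthly returns on governance, liquidity, and adverse-selection characteristics; the column names and data layout below are hypothetical, and the sketch is generic rather than the essays' exact procedure.

```python
import numpy as np
import pandas as pd

# Hypothetical long panel: one row per stock-month with the characteristics.
panel = pd.read_csv("stock_month_panel.csv")
chars = ["governance_index", "illiquidity", "adverse_selection"]

coefs = []
for _, month in panel.groupby("month"):
    # Cross-sectional regression of excess returns on the characteristics
    X = np.column_stack([np.ones(len(month)), month[chars].to_numpy()])
    b, *_ = np.linalg.lstsq(X, month["excess_ret"].to_numpy(), rcond=None)
    coefs.append(b)

coefs = np.array(coefs)                    # months x (1 + number of characteristics)
mean = coefs.mean(axis=0)
t_stat = mean / (coefs.std(axis=0, ddof=1) / np.sqrt(len(coefs)))
print(pd.DataFrame({"premium": mean, "t": t_stat}, index=["const"] + chars))
```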