871 results for Variable pricing model
Abstract:
Road pricing has emerged as an effective means of managing road traffic demand while simultaneously raising additional revenue for transportation agencies. Research on the factors that govern travel decisions has shown that user preferences may be a function of the demographic characteristics of the individuals and the perceived trip attributes. However, it is not clear which trip attributes are actually considered in the travel decision-making process, how these attributes are perceived by travelers, and how the set of trip attributes changes as a function of the time of day or from day to day. In this study, operational Intelligent Transportation Systems (ITS) archives are mined and the aggregated preferences for a priced system are extracted at a fine time aggregation level for an extended number of days. The resulting information is related to corresponding time-varying trip attributes such as travel time, travel time reliability, charged toll, and other parameters. The time-varying user preferences and trip attributes are linked together by means of a binary choice (Logit) model with a linear utility function on trip attributes. The trip attribute weights in the utility function are then dynamically estimated for each time of day by means of an adaptive, limited-memory discrete Kalman filter (ALMF). The relationship between traveler choices and travel time is assessed using different rules to capture the logic that best represents traveler perception and the effect of real-time information on the observed preferences. The impact of travel time reliability on traveler choices is investigated considering its multiple definitions. It can be concluded from the results that the ALMF algorithm allows a robust estimation of time-varying weights in the utility function at fine time aggregation levels. The high correlations among the trip attributes severely constrain the simultaneous estimation of their weights in the utility function. Despite the data limitations, it is found that the ALMF algorithm can provide stable estimates of the choice parameters for some periods of the day. Finally, it is found that the daily variation of the user sensitivities for different periods of the day resembles a well-defined normal distribution.
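As a hedged illustration (not the study's ALMF implementation), the sketch below shows how time-varying logit weights can be updated recursively with a Kalman-type filter; it uses a plain extended Kalman filter with a random-walk state rather than the adaptive, limited-memory variant, and all attribute values and noise settings are assumptions.

```python
import numpy as np

def logistic(u):
    return 1.0 / (1.0 + np.exp(-u))

def ekf_logit_step(beta, P, x, share_obs, Q, R):
    """One extended-Kalman-filter update of time-varying logit weights.

    beta : current weight estimate (utility coefficients)
    P    : weight covariance
    x    : trip-attribute differences (e.g., time saving, reliability, toll)
    share_obs : observed share choosing the priced alternative
    Q, R : process and observation noise settings (tuning assumptions)
    """
    # Random-walk prediction for slowly varying weights
    beta_pred = beta
    P_pred = P + Q

    # Predicted choice share from the linear-in-attributes logit
    p = logistic(x @ beta_pred)

    # Linearized observation model: dp/dbeta = p(1-p) * x
    H = p * (1.0 - p) * x

    # Standard Kalman gain and measurement update
    S = H @ P_pred @ H + R
    K = P_pred @ H / S
    beta_new = beta_pred + K * (share_obs - p)
    P_new = P_pred - np.outer(K, H) @ P_pred
    return beta_new, P_new

# Example: three attributes (travel-time saving, reliability gain, toll), all hypothetical
beta, P = np.zeros(3), np.eye(3)
x = np.array([10.0, 5.0, -1.5])
beta, P = ekf_logit_step(beta, P, x, share_obs=0.35, Q=0.01 * np.eye(3), R=0.05)
print(beta)
```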
Abstract:
Menu engineering is a methodology for classifying menu items by their contribution margin and popularity. The process discounts the importance of food cost percentage, recognizing that operators deposit cash, not percentages. The authors raise the issue that strict application of the principles of menu engineering may result in an erroneous evaluation of a menu item and may be of little use without considering the variable portion of labor. They describe an enhancement to the process that incorporates labor.
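As a hedged illustration of the underlying methodology (not the authors' enhancement itself), the toy sketch below classifies menu items by contribution margin and popularity and shows where a variable-labor deduction would enter; the items, prices, and the 70% popularity-rule threshold are illustrative assumptions.

```python
# Illustrative menu-engineering classification (names and numbers are made up).
# Contribution margin (CM) = selling price - food cost; the enhancement discussed
# above would also subtract the variable portion of labor per item, as done here.
items = [
    # (name, price, food_cost, variable_labor, units_sold)
    ("Grilled salmon",  24.00,  9.50, 2.00, 180),
    ("House burger",    14.00,  4.00, 1.20, 420),
    ("Lobster ravioli", 27.00, 12.50, 3.50,  60),
]

total_sold = sum(u for *_, u in items)
cms = [(name, price - food - labor, units)
       for name, price, food, labor, units in items]
avg_cm = sum(cm * u for _, cm, u in cms) / total_sold
pop_threshold = (1.0 / len(items)) * 0.70 * total_sold  # assumed 70% popularity rule

for name, cm, units in cms:
    high_cm, popular = cm >= avg_cm, units >= pop_threshold
    label = {(True, True): "Star", (False, True): "Plowhorse",
             (True, False): "Puzzle", (False, False): "Dog"}[(high_cm, popular)]
    print(f"{name}: CM=${cm:.2f}, sold={units} -> {label}")
```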
Abstract:
Barry Reece and Rhonda Brandt use a human relations perspective to explain behavior at work. Following a review of the six components of their model, the author presents research to illustrate how the model can be used by managers to help them understand why food safety violations occur in restaurants. An additional variable not included in the model is discussed, and recommendations for managers are made.
Abstract:
A plethora of recent literature on asset pricing provides ample empirical evidence of the importance of liquidity, governance, and adverse selection in equity pricing, together with more traditional factors such as market beta and the Fama-French factors. However, the literature has usually stressed that these factors are priced individually. In this dissertation we argue that these factors may be related to each other, hence not only individual but also joint tests of their significance are called for. In three related essays, we examine the liquidity premium in the context of the finer three-digit SIC industry classification, the joint importance of liquidity and governance factors, and the relation between governance and adverse selection. Recent studies by Core, Guay and Rusticus (2006) and Ben-Rephael, Kadan and Wohl (2010) find that governance and liquidity premiums have been dwindling in the last few years. One reason could be that liquidity is very unevenly distributed across industries, which could affect the interpretation of prior liquidity studies. Thus, in the first chapter we analyze the relation between industry clustering and liquidity risk following the finer industry classification suggested by Johnson, Moorman and Sorescu (2009). In the second chapter, we examine the dwindling influence of the governance factor when it is taken simultaneously with liquidity. We argue that this happens because governance characteristics are potentially a proxy for information asymmetry that may be better captured by the market liquidity of a company's shares. Hence, we jointly examine both factors, governance and liquidity, in a series of standard asset pricing tests. Our results reconfirm the importance of governance and liquidity in explaining stock returns, thus independently corroborating the findings of Amihud (2002) and Gompers, Ishii and Metrick (2003). Moreover, governance is not subsumed by liquidity. Lastly, we analyze the relation between governance and adverse selection, and again corroborate previous findings of a priced governance factor. Furthermore, we ascertain the importance of microstructure measures in asset pricing by employing Huang and Stoll's (1997) method to extract an adverse selection variable and finding evidence for its explanatory power in four-factor regressions.
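As a hedged illustration (not the dissertation's code), a time-series version of such a four-factor test might regress portfolio excess returns on the market, size, and value factors plus a liquidity or adverse-selection factor; the data file and column names below are hypothetical.

```python
# Sketch of a time-series factor regression with an added liquidity factor.
# A cross-sectional pricing test (e.g., Fama-MacBeth) would follow in practice.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("factor_returns.csv")          # hypothetical monthly data
y = df["port_ret"] - df["rf"]                   # portfolio excess return
X = sm.add_constant(df[["mkt_rf", "smb", "hml", "liq"]])

model = sm.OLS(y, X).fit(cov_type="HAC", cov_kwds={"maxlags": 6})
print(model.summary())                          # a significant 'liq' loading suggests
                                                # exposure to the liquidity factor
```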
Abstract:
Ensemble stream modeling and data cleaning are sensor information processing systems that have different training and testing methods by which their goals are cross-validated. This research examines a mechanism that seeks to extract novel patterns by generating ensembles from data. The main goal of label-less stream processing is to process the sensed events so as to eliminate uncorrelated noise and choose the most likely model without overfitting, thus obtaining higher model confidence. Higher quality streams can be realized by combining many short streams into an ensemble that has the desired quality. The framework for the investigation is an existing data mining tool. First, to accommodate feature extraction for events such as bush or natural forest fires, we take the burnt area (BA*), a sensed ground truth obtained from logs, as our target variable. Even though this is an obvious model choice, the results are disappointing, for two reasons: first, the histogram of fire activity is highly skewed; second, the measured sensor parameters are highly correlated. Since using non-descriptive features does not yield good results, we resort to temporal features. By doing so we carefully eliminate the averaging effects; the resulting histogram is more satisfactory, and conceptual knowledge is learned from the sensor streams. Second is the process of feature induction by cross-validating attributes with single or multi-target variables to minimize training error. We use the F-measure score, which combines precision and recall, to determine the false alarm rate of fire events. The multi-target data-cleaning trees use the information purity of the target leaf nodes to learn higher-order features. A sensitive variance measure such as the F-test is performed at each node's split to select the best attribute. The ensemble stream model approach proved to improve when complicated features are used with a simpler tree classifier. The ensemble framework for data cleaning and the enhancements to quantify quality of fitness (30% spatial, 10% temporal, and 90% mobility reduction) of sensor data led to the formation of streams for sensor-enabled applications, which further motivates the novelty of stream quality labeling and its importance in handling the vast amounts of real-time mobile streams generated today.
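As a hedged illustration (not the study's implementation), the F-measure used to score fire-event detection combines precision and recall as in the sketch below; the labels and predictions are made up.

```python
# Minimal F-measure sketch for scoring fire-event detection in stream windows.
def f_measure(y_true, y_pred, beta=1.0):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # false alarms
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # missed fires
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# Example: 1 = fire event in a stream window, 0 = no event (hypothetical data)
print(f_measure([1, 0, 1, 1, 0, 0], [1, 0, 0, 1, 1, 0]))   # F1 ~ 0.667
```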
Abstract:
Variable Speed Limit (VSL) strategies identify and disseminate dynamic speed limits that are determined to be appropriate based on prevailing traffic conditions, road surface conditions, and weather conditions. This dissertation develops and evaluates a shockwave-based VSL system that uses a heuristic switching-logic controller with specified thresholds of prevailing traffic flow conditions. The system aims to improve operations and mobility at critical bottlenecks. Before traffic breakdown occurs, the proposed VSL's goal is to prevent or postpone breakdown by decreasing the inflow and achieving a uniform distribution of speed and flow. After breakdown occurs, the VSL system aims to dampen traffic congestion by reducing the inflow to the congested area and increasing the bottleneck capacity by deactivating the VSL at the head of the congested area. The shockwave-based VSL system pushes the VSL location upstream as the congested area propagates upstream. In addition to testing the system using infrastructure detector-based data, this dissertation investigates the performance of the shockwave-based VSL system when Connected Vehicle trajectory data are used as input. Since field Connected Vehicle data are not available, as part of this research, Vehicle-to-Infrastructure communication is modeled in a microscopic simulation to obtain individual vehicle trajectories. In this system, a wavelet transform is used to analyze aggregated individual vehicle speed data to determine the locations of congestion. The currently recommended calibration procedures for simulation models are generally based on capacity, volume, and system-performance values and do not specifically examine traffic breakdown characteristics. However, since the proposed VSL strategies are countermeasures to the impacts of breakdown conditions, considering breakdown characteristics in the calibration procedure is important for a reliable assessment. Several enhancements were proposed in this study to account for breakdown characteristics at bottleneck locations in the calibration process. Finally, the performance of the shockwave-based VSL system is compared to that of VSL systems with different fixed VSL message sign locations, using the calibrated microscopic model. The results show that the shockwave-based VSL outperforms fixed-location VSL systems and can considerably decrease the maximum back of queue and the duration of breakdown while increasing the average speed during breakdown.
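As a hedged illustration (not the dissertation's controller), a heuristic, threshold-based switching logic for one VSL sign might look like the sketch below; the thresholds, posted speeds, and detector fields are illustrative assumptions.

```python
# Toy threshold-based VSL switching logic for a single sign location.
def select_speed_limit(detector, base_limit_mph=70,
                       occ_warn=0.15, occ_congested=0.25, speed_breakdown_mph=45):
    """Pick a speed limit for one VSL sign from upstream detector measurements."""
    occ = detector["occupancy"]        # fraction of time the loop is occupied
    speed = detector["speed_mph"]      # smoothed mean speed at the detector

    if speed < speed_breakdown_mph or occ > occ_congested:
        return 40                      # congestion: reduce inflow to the bottleneck
    if occ > occ_warn:
        return 55                      # near capacity: smooth speeds to delay breakdown
    return base_limit_mph              # free flow: keep the base limit

# Example: one hypothetical upstream detector reading
print(select_speed_limit({"occupancy": 0.18, "speed_mph": 58}))   # -> 55
```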
Abstract:
In this dissertation, I investigate three related topics in asset pricing: consumption-based asset pricing under long-run risks and fat tails, the pricing of VIX (CBOE Volatility Index) options, and the market price of risk embedded in stock returns and stock options. These three topics are explored in Chapters II through IV; Chapter V summarizes the main conclusions. In Chapter II, I explore the effects of fat tails on the equilibrium implications of the long-run risks model of asset pricing by introducing innovations with a dampened power law into the consumption and dividend growth processes. I estimate the structural parameters of the proposed model by maximum likelihood. I find that the stochastic volatility model with fat tails can, without resorting to high risk aversion, generate an implied risk premium, expected risk-free rate, and volatilities comparable in magnitude to those observed in the data. In Chapter III, I examine the pricing performance of VIX option models. The contention that simpler is better is supported by empirical evidence from actual VIX option market data. I find that no model has small pricing errors over the entire range of strike prices and times to expiration. In general, Whaley's Black-like option model produces the best overall results, supporting the simpler-is-better contention. However, the Whaley model does under/overprice out-of-the-money call/put VIX options, which is contrary to the behavior of stock index option pricing models. In Chapter IV, I explore risk pricing through a model of time-changed Lévy processes based on joint evidence from individual stock options and the underlying stocks. I specify a pricing kernel that prices idiosyncratic and systematic risks. This approach to examining risk premia on stocks deviates from existing studies. The empirical results show that the market pays positive premia for idiosyncratic and market jump-diffusion risk and for idiosyncratic volatility risk. However, there is no consensus on the premium for market volatility risk, which can be positive or negative. The positive premium on idiosyncratic risk runs contrary to the implications of traditional capital asset pricing theory.
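As a hedged illustration (not the dissertation's code), Whaley-style VIX option pricing applies a Black (1976)-type formula with the VIX futures price as the underlying; the sketch below implements that formula with purely illustrative inputs.

```python
# Black (1976)-style call price of the kind used in a Black-like VIX option model.
from math import log, sqrt, exp, erf

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black76_call(F, K, T, r, sigma):
    """Call on a futures price F with strike K, maturity T (years), rate r, vol sigma."""
    d1 = (log(F / K) + 0.5 * sigma**2 * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return exp(-r * T) * (F * norm_cdf(d1) - K * norm_cdf(d2))

# Example: VIX futures at 18, strike 20, one month to expiry, 2% rate, 80% vol (all assumed)
print(black76_call(F=18.0, K=20.0, T=1/12, r=0.02, sigma=0.80))
```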
Abstract:
Recent discussion regarding whether the noise that limits 2AFC discrimination performance is fixed or variable has focused either on describing experimental methods that presumably dissociate the effects of response mean and variance or on reanalyzing a published data set with the aim of determining how to settle the question through goodness-of-fit statistics. This paper illustrates that the question cannot be settled by fitting models to data and assessing goodness-of-fit, because data on detection and discrimination performance can be fitted indistinguishably by models that assume either type of noise when each is coupled with a convenient form of the transducer function. Thus, success or failure at fitting a transducer model merely illustrates the capability (or lack thereof) of some particular combination of transducer function and variance function to account for the data; it cannot disclose the nature of the noise. We also comment on some of the issues that have been raised in a recent exchange on the topic, namely, the existence of additional constraints for the models, the presence of asymmetric asymptotes, the likelihood of history-dependent noise, and the potential of certain experimental methods to dissociate the effects of response mean and variance.
Abstract:
The transducer function mu for contrast perception describes the nonlinear mapping of stimulus contrast onto an internal response. Under a signal detection theory approach, the transducer model of contrast perception states that the internal response elicited by a stimulus of contrast c is a random variable with mean mu(c). Using this approach, we derive the formal relations between the transducer function, the threshold-versus-contrast (TvC) function, and the psychometric functions for contrast detection and discrimination in 2AFC tasks. We show that the mathematical form of the TvC function is determined only by mu, and that the psychometric functions for detection and discrimination have a common mathematical form with common parameters emanating from, and only from, the transducer function mu and the form of the distribution of the internal responses. We discuss the theoretical and practical implications of these relations, which have a bearing on the tenability of certain mathematical forms for the psychometric function and on the suitability of empirical approaches to model validation. We also present the results of a comprehensive test of these relations using two alternative forms of the transducer model: a three-parameter version that renders logistic psychometric functions and a five-parameter version using Foley's variant of the Naka-Rushton equation as the transducer function. Our results support the validity of the formal relations implied by the general transducer model, and the two versions that were contrasted account for our data equally well.
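As a hedged illustration (not reproduced from the paper), under the common simplifying assumptions of unit-variance Gaussian internal responses and a difference decision rule, the 2AFC psychometric and TvC functions are fixed by mu as follows:

```latex
% Illustrative relations only, assuming unit-variance Gaussian internal responses
% and a difference decision rule.
\[
  \Psi_c(\Delta c) = \Phi\!\left(\frac{\mu(c+\Delta c)-\mu(c)}{\sqrt{2}}\right),
  \qquad
  \Psi_0(\Delta c) = \Phi\!\left(\frac{\mu(\Delta c)}{\sqrt{2}}\right) \quad (\mu(0)=0),
\]
\[
  \Delta c_{\pi}(c) = \mu^{-1}\!\left(\mu(c) + \sqrt{2}\,\Phi^{-1}(\pi)\right) - c,
\]
% where \Phi is the standard normal CDF, \Psi_c is the 2AFC psychometric function at
% pedestal contrast c, and \Delta c_{\pi}(c) is the TvC function at criterion level \pi.
```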
Abstract:
We study theoretically the effect of a new type of blocklike positional disorder on the effective electromagnetic properties of one-dimensional chains of resonant, high-permittivity dielectric particles, where particles are arranged into perfectly well-ordered blocks whose relative position is a random variable. This creates a finite order correlation length that mimics the situation encountered in metamaterials fabricated through self-assembled techniques, whose structures often display short-range order between near neighbors but long-range disorder, due to stacking defects. Using a spectral theory approach combined with a principal component statistical analysis, we study, in the long-wavelength regime, the evolution of the electromagnetic response when the composite filling fraction and the block size are changed. Modifications in key features of the resonant response (amplitude, width, etc.) are investigated, showing a regime transition for a filling fraction around 50%.
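As a rough illustration only (not code from the paper), the toy sketch below generates a one-dimensional chain with the kind of blocklike positional disorder described above, with perfectly ordered blocks shifted as rigid units by random offsets; all dimensions are arbitrary assumptions.

```python
# Toy generator of 'blocklike' positional disorder in a 1-D particle chain.
import numpy as np

rng = np.random.default_rng(0)

def block_disordered_chain(n_blocks=20, particles_per_block=5,
                           pitch=1.0, block_gap=2.0, jitter=0.6):
    positions = []
    block_start = 0.0
    for _ in range(n_blocks):
        offset = rng.uniform(-jitter, jitter)      # random shift of the whole block
        local = np.arange(particles_per_block) * pitch
        positions.append(block_start + offset + local)
        block_start += particles_per_block * pitch + block_gap
    return np.concatenate(positions)

chain = block_disordered_chain()
print(chain[:10])   # first two blocks: ordered internally, randomly shifted as units
```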
Abstract:
The standard difference model of two-alternative forced-choice (2AFC) tasks implies that performance should be the same when the target is presented in the first or the second interval. Empirical data often show “interval bias” in that percentage correct differs significantly when the signal is presented in the first or the second interval. We present an extension of the standard difference model that accounts for interval bias by incorporating an indifference zone around the null value of the decision variable. Analytical predictions are derived which reveal how interval bias may occur when data generated by the guessing model are analyzed as prescribed by the standard difference model. Parameter estimation methods and goodness-of-fit testing approaches for the guessing model are also developed and presented. A simulation study is included whose results show that the parameters of the guessing model can be estimated accurately. Finally, the guessing model is tested empirically in a 2AFC detection procedure in which guesses were explicitly recorded. The results support the guessing model and indicate that interval bias is not observed when guesses are separated out.
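As a toy illustration (not the authors' code), a simulation can show how a guessing rule with an indifference zone produces interval bias; the d', zone half-width, and guessing probability below are arbitrary assumptions.

```python
# Toy guessing-model simulation: when the decision variable |x2 - x1| falls inside
# an indifference zone of half-width delta, the observer guesses interval 1 with
# probability g; outside the zone the standard difference rule applies.
import numpy as np

rng = np.random.default_rng(1)

def percent_correct(signal_interval, d_prime=1.0, delta=0.8, g=0.7, n=200_000):
    x1 = rng.normal(d_prime if signal_interval == 1 else 0.0, 1.0, n)
    x2 = rng.normal(d_prime if signal_interval == 2 else 0.0, 1.0, n)
    d = x2 - x1
    resp2 = np.where(np.abs(d) < delta, rng.random(n) > g, d > 0)  # guess inside zone
    correct = resp2 == (signal_interval == 2)
    return correct.mean()

print("signal in interval 1:", percent_correct(1))   # higher, since guesses favor interval 1
print("signal in interval 2:", percent_correct(2))   # lower -> interval bias
```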
Abstract:
This dissertation contributes to the rapidly growing empirical research area in the field of operations management. It contains two essays, tackling two different sets of operations management questions which are motivated by and built on field data sets from two very different industries --- air cargo logistics and retailing.
The first essay, based on a data set obtained from a world-leading third-party logistics company, develops a novel and general Bayesian hierarchical learning framework for estimating customers' spillover learning, that is, customers' learning about the quality of a service (or product) from their previous experiences with similar yet not identical services. We then apply our model to the data set to study how customers' experiences from shipping on a particular route affect their future decisions about shipping not only on that route, but also on other routes serviced by the same logistics company. We find that customers indeed borrow experiences from similar but different services to update the quality beliefs that determine their future purchase decisions. Service quality beliefs also have a significant impact on future purchasing decisions. Moreover, customers are risk averse; they are averse not only to experience variability but also to belief uncertainty (i.e., customers' uncertainty about their own beliefs). Finally, belief uncertainty affects customers' utilities more than experience variability does. A simplified sketch of spillover belief updating follows below.
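As a loose illustration only (far simpler than the essay's Bayesian hierarchical framework), the sketch below shows the flavor of spillover belief updating, where an experience on one route shifts the quality belief for a similar route by a discounted amount; the similarity weighting and all numbers are assumptions.

```python
# Toy normal-normal belief update with a similarity discount for spillover learning.
def update_belief(mean, var, observation, obs_var, similarity=1.0):
    """Bayesian update of a quality belief; similarity in [0, 1] discounts spillover."""
    effective_obs_var = obs_var / similarity if similarity > 0 else float("inf")
    precision = 1.0 / var + 1.0 / effective_obs_var
    new_var = 1.0 / precision
    new_mean = new_var * (mean / var + observation / effective_obs_var)
    return new_mean, new_var

belief_A = (0.5, 1.0)      # prior mean and variance of service quality on route A
belief_B = (0.5, 1.0)      # prior belief for a similar route B

obs = 0.9                  # a good shipping experience on route A (hypothetical scale)
belief_A = update_belief(*belief_A, obs, obs_var=0.5, similarity=1.0)
belief_B = update_belief(*belief_B, obs, obs_var=0.5, similarity=0.4)  # spillover
print(belief_A, belief_B)  # route B moves toward the observation, but less than A
```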
The second essay is based on a data set obtained from a large Chinese supermarket chain, which contains sales as well as both wholesale and retail prices of unpackaged perishable vegetables. Recognizing the special characteristics of this particular product category, we develop a structural estimation model in a discrete-continuous choice framework. Building on this framework, we then study an optimization model for joint pricing and inventory management of multiple products, which aims at improving the company's profit from direct sales while reducing food waste and thus improving social welfare.
Collectively, the studies in this dissertation provide useful modeling ideas, decision tools, insights, and guidance for firms to utilize vast sales and operations data to devise more effective business strategies.
Abstract:
The increasing nationwide interest in intelligent transportation systems (ITS) and the need for more efficient transportation have led to the expanding use of variable message sign (VMS) technology. VMS panels are substantially heavier than flat-panel aluminum signs and have a larger depth (the dimension parallel to the direction of traffic). The additional weight and depth can have a significant effect on the aerodynamic forces and inertial loads transmitted to the support structure. The wind-induced drag forces and the response of VMS structures are not well understood. Minimum design requirements for VMS structures are contained in the American Association of State Highway and Transportation Officials Standard Specification for Structural Supports for Highway Signs, Luminaires, and Traffic Signals (AASHTO Specification). However, the Specification does not take into account the prismatic geometry of VMS or the complex interaction of the applied aerodynamic forces with the support structure. In view of the lack of code guidance and the limited research performed so far, targeted experimentation and large-scale testing were conducted at the Florida International University (FIU) Wall of Wind (WOW) to provide reliable drag coefficients and investigate the aerodynamic instability of VMS. A comprehensive range of VMS geometries was tested in turbulence representative of the high-frequency end of the spectrum in a simulated suburban atmospheric boundary layer. The mean normal, lateral, and vertical lift force coefficients, in addition to the twisting moment coefficient and eccentricity ratio, were determined from the measured data for each model. Wind tunnel testing confirmed that the drag on a prismatic VMS is smaller than the value of 1.7 suggested in the current AASHTO Specification (2013). An alternative to the AASHTO Specification code value is presented in the form of a design matrix. Testing and analysis also indicated that vortex shedding oscillations and galloping instability could be significant for VMS with a large depth ratio attached to a structure with a low natural frequency. The effect of corner modification was investigated by testing models with chamfered and rounded corners. Results demonstrated an additional decrease in the drag coefficient but a possible Reynolds number dependency for the rounded corner configuration.
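For orientation (not taken from the study), the drag coefficient enters the quasi-static design wind load roughly as F = 0.5 * rho * V^2 * Cd * A; all numbers in the sketch below are illustrative, not design values.

```python
# Back-of-the-envelope drag load on a VMS panel for two candidate drag coefficients.
def drag_force(wind_speed_ms, area_m2, cd, air_density=1.225):
    """Quasi-static drag force in newtons."""
    return 0.5 * air_density * wind_speed_ms**2 * cd * area_m2

area = 3.0 * 6.0                      # hypothetical 3 m x 6 m VMS face
v = 40.0                              # hypothetical design gust speed, m/s
print("Cd = 1.7 :", drag_force(v, area, 1.7), "N")   # current AASHTO value
print("Cd = 1.4 :", drag_force(v, area, 1.4), "N")   # a hypothetical lower coefficient
```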
Abstract:
Dissolution of non-aqueous phase liquids (NAPLs) or gases into groundwater is a key process, both for contamination problems originating from organic liquid sources and for dissolution trapping in geological storage of CO2. Dissolution in natural systems will typically involve both high and low NAPL saturations and a wide range of pore water flow velocities within the same source zone. To correctly predict dissolution in such complex systems, and as the NAPL saturations change over time, models must be capable of predicting dissolution under a range of saturations and flow conditions. To provide data to test and validate such models, an experiment was conducted in a two-dimensional sand tank, where the dissolution of a spatially variable, 5 x 5 cm² DNAPL tetrachloroethene source was carefully measured using x-ray attenuation techniques at a resolution of 0.2 x 0.2 cm². By continuously measuring the NAPL saturations, the temporal evolution of DNAPL mass loss by dissolution to groundwater could be measured at each pixel. Next, a general dissolution and solute transport code was written, and several published rate-limited (RL) dissolution models and a local equilibrium (LE) approach were tested against the experimental data. It was found that none of the models could adequately predict the observed dissolution pattern, particularly in the zones of higher NAPL saturation. Combining these models with a model for NAPL pool dissolution produced qualitatively better agreement with the experimental data, but the total matching error was not significantly improved. A sensitivity study of commonly used fitting parameters further showed that several combinations of these parameters could produce equally good fits to the experimental observations. The results indicate that common empirical model formulations for RL dissolution may be inadequate in complex, variable-saturation NAPL source zones, and that further model development and testing are desirable.
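As a hedged illustration (not one of the specific models tested), rate-limited dissolution formulations generally share a linear-driving-force term of the form dM/dt = -k_la (C_s - C) V; the toy sketch below steps such a term forward in a single cell with purely illustrative parameters and no advection.

```python
# Toy rate-limited (linear driving force) dissolution step in one cell.
def dissolve_cell(napl_mass, c_water, k_la=1e-3, c_sat=200.0, cell_vol=8e-6, dt=60.0):
    """Advance NAPL mass (g) and aqueous concentration (g/m^3) in one cell by dt seconds."""
    rate = k_la * (c_sat - c_water)           # g/(m^3 s), driving force toward solubility
    dissolved = min(rate * cell_vol * dt, napl_mass)
    napl_mass -= dissolved
    c_water += dissolved / cell_vol           # advection out of the cell is ignored here
    return napl_mass, min(c_water, c_sat)

m, c = 0.5, 0.0                               # hypothetical initial NAPL mass and concentration
for _ in range(10):
    m, c = dissolve_cell(m, c)
print(m, c)
```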
Abstract:
Stroke is a prevalent disorder with immense socioeconomic impact, and a variety of chronic neurological deficits result from it. In particular, sensorimotor deficits are a significant barrier to achieving post-stroke independence. Unfortunately, the majority of pre-clinical studies that show improved outcomes in animal stroke models have failed in clinical trials. Pre-clinical studies using non-human primate (NHP) stroke models prior to initiating human trials are a potential step toward improving translation from animal studies to clinical trials. Robotic assessment tools represent a quantitative, reliable, and reproducible means to assess reaching behaviour following stroke in both humans and NHPs. We investigated the use of robotic technology to assess sensorimotor impairments in NHPs following middle cerebral artery occlusion (MCAO). Two cynomolgus macaques underwent transient MCAO for 90 minutes. Approximately 1.5 years following the procedure, these NHPs and two non-stroke control monkeys were trained in a reaching task with both arms in the KINARM exoskeleton, a robot that permits elbow and shoulder movements in the horizontal plane. The task required the NHPs to make reaching movements from a centrally positioned start target to 1 of 8 peripheral targets uniformly distributed around the first target. To characterize sensorimotor deficits, we analyzed four movement parameters: reaction time, movement time (MT), initial direction error (IDE), and number of speed maxima. We hypothesized that performance on these measures during the neurobehavioural task would be reduced in the paretic limb of the NHPs following MCAO compared to controls. Reaching movements made with the non-affected limbs of control and experimental NHPs showed bell-shaped velocity profiles, whereas reaching movements made with the affected limbs were highly variable. We found distinctive patterns in MT, IDE, and number of speed peaks between control and experimental monkeys and between the limbs of the NHPs with MCAO. NHPs with MCAO demonstrated more speed peaks, longer MTs, and greater IDE in their paretic limb compared to controls. These initial results qualitatively match the performance of human stroke subjects, suggesting that robotic neurobehavioural assessment in NHPs with stroke is feasible and could have translational relevance for subsequent human studies. Further studies will be necessary to replicate and expand on these preliminary findings.
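As an illustration only (not the study's analysis code), the sketch below computes two of the movement parameters named above, the number of speed maxima and the initial direction error, from a made-up sampled hand path; the sampling rate, peak threshold, and trajectory are assumptions.

```python
# Toy computation of speed-peak count and initial direction error (IDE) for a 2-D reach.
import numpy as np

def speed_peaks(x, y, dt=0.01, min_peak=0.05):
    v = np.hypot(np.diff(x), np.diff(y)) / dt
    return sum(1 for i in range(1, len(v) - 1)
               if v[i] > v[i - 1] and v[i] > v[i + 1] and v[i] > min_peak)

def initial_direction_error(x, y, target_xy, onset_idx=5):
    """Angle (deg) between the initial movement vector and the straight line to the target."""
    move = np.array([x[onset_idx] - x[0], y[onset_idx] - y[0]])
    to_target = np.array(target_xy) - np.array([x[0], y[0]])
    cosang = move @ to_target / (np.linalg.norm(move) * np.linalg.norm(to_target))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

t = np.linspace(0, 1, 101)
x, y = 0.10 * t + 0.01 * np.sin(8 * np.pi * t), 0.10 * t   # path with corrective submovements
print(speed_peaks(x, y), initial_direction_error(x, y, target_xy=(0.10, 0.10)))
```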