871 results for Variable pricing model


Relevance: 30.00%

Abstract:

Adjoint methods have proven to be an efficient way of calculating the gradient of an objective function with respect to a shape parameter for optimisation, with a computational cost nearly independent of the number of the design variables [1]. The approach in this paper links the adjoint surface sensitivities (gradient of objective function with respect to the surface movement) with the parametric design velocities (movement of the surface due to a CAD parameter perturbation) in order to compute the gradient of the objective function with respect to CAD variables.
For a successful implementation of shape optimisation strategies in practical industrial cases, the choice of design variables or parameterisation scheme used for the model to be optimised plays a vital role. Where the goal is to base the optimisation on a CAD model, the choices are to use a NURBS geometry generated from CAD modelling software, where the positions of the NURBS control points are the optimisation variables [2], or to use the feature-based CAD model with all of its construction history to preserve the design intent [3]. The main advantage of using the feature-based model is that the optimised model produced can be used directly in downstream applications, including manufacturing and process planning.
This paper presents an approach for optimization based on the feature based CAD model, which uses CAD parameters defining the features in the model geometry as the design variables. In order to capture the CAD surface movement with respect to the change in design variable, the “Parametric Design Velocity” is calculated, which is defined as the movement of the CAD model boundary in the normal direction due to a change in the parameter value.
The approach presented here for calculating the design velocities represents an advancement, in both capability and robustness, over that described by Robinson et al. [3]. The process can be easily integrated into most industrial optimisation workflows and is immune to the topology and labelling issues highlighted in other CAD-based optimisation processes. It considers every continuous ("real value") parameter type as an optimisation variable, and it can be adapted to work with any CAD modelling software, as long as it has an API which provides access to the values of the parameters which control the model shape and allows the model geometry to be exported. To calculate the movement of the boundary, the methodology employs finite differences on the shape of the 3D CAD models before and after the parameter perturbation. The implementation procedure involves calculating the geometric movement along a normal direction between two discrete representations of the original and perturbed geometries respectively. Parametric design velocities can then be directly linked with adjoint surface sensitivities to extract the gradients used in a gradient-based optimisation algorithm.
A flow optimisation problem is presented in which the power dissipation of the flow in an automotive air duct is reduced by changing the parameters of the CAD geometry created in CATIA V5. The flow sensitivities are computed with the continuous adjoint method for laminar and turbulent flow [4] and are combined with the parametric design velocities to compute the cost function gradients. A line-search algorithm is then used to update the design variables and proceed with the optimisation process.
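The chain rule linking adjoint surface sensitivities to CAD parameters can be illustrated with a minimal sketch; every array here (surface points, normals, sensitivities, the perturbed geometry) is a synthetic stand-in, not output of the actual CAD or adjoint tools:

```python
import numpy as np

# Synthetic stand-ins for a discretised CAD surface: points, unit normals,
# and the same points after perturbing one CAD parameter p by dp.
np.random.seed(0)
n_pts = 100
points = np.random.rand(n_pts, 3)
normals = np.random.rand(n_pts, 3)
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
dp = 1e-3
perturbed = points + 0.01 * dp * normals  # stand-in for the re-exported geometry

# Parametric design velocity: normal component of boundary movement per unit dp.
design_velocity = np.einsum('ij,ij->i', perturbed - points, normals) / dp

# Adjoint surface sensitivities dJ/dn (stand-in values) and per-point area weights.
sensitivity = np.random.rand(n_pts)
area = np.full(n_pts, 1.0 / n_pts)

# Chain rule: dJ/dp is the surface integral of sensitivity * design velocity.
dJ_dp = np.sum(sensitivity * design_velocity * area)
```

One such gradient is computed per CAD parameter, so the adjoint solve is done once and only the (cheap) design-velocity finite differences repeat per parameter.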

Relevance: 30.00%

Abstract:

This paper examines assumptions about future prices used in real estate applications of DCF models. We confirm both the widespread reliance on an ad hoc rule of increasing period-zero capitalization rates by 50 to 100 basis points to obtain terminal capitalization rates and the inability of this rule to project future real estate pricing. To understand how investors form expectations about future prices, we model the spread between the contemporaneous period-zero going-in and terminal capitalization rates, and the spread between terminal rates assigned in period zero and going-in rates assigned in period N. Our regression results confirm statistical relationships between the terminal and next-holding-period going-in capitalization rate spread and the period-zero discount rate, although other economically significant variables are statistically insignificant. Linking terminal capitalization rates by assumption to going-in capitalization rates implies investors view future real estate pricing with myopic expectations. We discuss alternative specifications devoid of such linkage that align more closely with a rational expectations view of future real estate pricing.
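The ad hoc rule the paper tests is easy to state numerically; the figures below are illustrative, not drawn from the paper's data:

```python
# Illustrative DCF terminal-value step using the ad hoc rule of adding
# 75 bp to the period-zero going-in capitalization rate.
going_in_cap_rate = 0.055                        # period-zero cap rate
terminal_cap_rate = going_in_cap_rate + 0.0075   # ad hoc 50-100 bp rule
noi_after_sale_year = 1_150_000                  # projected NOI for year N+1

# Terminal (reversion) value capitalizes that NOI at the terminal rate.
terminal_value = noi_after_sale_year / terminal_cap_rate
print(round(terminal_value))  # → 18400000
```

Because the terminal rate is pinned to the going-in rate by assumption, the projected exit price moves one-for-one with today's pricing, which is the myopic-expectations feature the paper criticizes.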

Relevance: 30.00%

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-08

Relevance: 30.00%

Abstract:

One of the important problems in machine learning is determining the complexity of the model to be learned. Too much complexity leads to overfitting, which amounts to finding structures that do not actually exist in the data, while too little complexity leads to underfitting, meaning that the model's expressiveness is insufficient to capture all of the structure present in the data. For some probabilistic models, model complexity takes the form of one or more hidden variables whose role is to explain the generative process of the data. Various approaches exist for identifying the appropriate number of hidden variables in a model. This thesis focuses on Bayesian nonparametric methods for determining the number of hidden variables to use as well as their dimensionality. The popularization of Bayesian nonparametric statistics within the machine learning community is fairly recent. Their main appeal is that they offer highly flexible models whose complexity adjusts in proportion to the amount of available data. In recent years, research on Bayesian nonparametric learning methods has focused on three main aspects: the construction of new models, the development of inference algorithms, and applications. This thesis presents our contributions to these three research topics in the context of learning latent variable models. First, we introduce the Pitman-Yor process mixture of Gaussians, a model for learning infinite mixtures of Gaussians. We also present an inference algorithm for discovering the hidden components of the model, which we evaluate on two concrete robotics applications.
Our results demonstrate that the proposed approach outperforms classical learning approaches in both performance and flexibility. Second, we propose the extended cascading Indian buffet process, a model serving as a prior probability distribution over the space of directed acyclic graphs. In the context of Bayesian networks, this prior makes it possible to identify both the presence of hidden variables and the network structure among them. A Markov chain Monte Carlo inference algorithm is used for evaluation on structure identification and density estimation problems. Finally, we propose the Indian chefs process, a model more general than the extended cascading Indian buffet process, for learning graphs and orders. The advantage of the new model is that it admits connections between observable variables and takes the order of the variables into account. We present a reversible-jump Markov chain Monte Carlo inference algorithm for jointly learning graphs and orders. The evaluation is carried out on density estimation and independence testing problems. This is the first Bayesian nonparametric model capable of learning Bayesian networks with a completely arbitrary structure.
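The clustering behaviour behind a Pitman-Yor mixture can be sketched through its Chinese restaurant representation; this minimal sampler (parameter values assumed) shows how the number of components grows with the data rather than being fixed in advance:

```python
import random

def pitman_yor_assignments(n, alpha=1.0, d=0.5, seed=0):
    """Sample cluster assignments from the Pitman-Yor Chinese restaurant
    process: customer i joins existing table k with probability
    (n_k - d)/(i + alpha), or a new table with probability
    (alpha + d*K)/(i + alpha), where K is the current number of tables."""
    rng = random.Random(seed)
    counts = []        # customers per table
    assignments = []
    for i in range(n):
        weights = [c - d for c in counts] + [alpha + d * len(counts)]
        r = rng.random() * (i + alpha)   # the weights sum to i + alpha
        k, acc = 0, 0.0
        for k, w in enumerate(weights):
            acc += w
            if r < acc:
                break
        if k == len(counts):
            counts.append(1)             # open a new table (new mixture component)
        else:
            counts[k] += 1
        assignments.append(k)
    return assignments, counts

assignments, counts = pitman_yor_assignments(100)
```

In the full mixture model each table carries its own Gaussian parameters; the discount d > 0 gives the power-law growth in the number of components that distinguishes the Pitman-Yor process from the Dirichlet process (d = 0).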

Relevance: 30.00%

Abstract:

The equity (renta variable) market in Colombia is still developing, as is investors' confidence when selecting optimal investment portfolios, namely those that maximize expected returns at minimum risk. This research therefore explores in depth the sectors that make up the stock market and determines which are more profitable than others, using the model proposed by Harry Markowitz and taking into account Sharpe's extensions to the theory through the Sharpe ratio and betas. The sectors that make up the Colombian equity market include Financials, Materials, Energy, Consumer Staples, Services and Industrials, all of which follow the same bearish trend as the COLCAP index, which has posted negative returns in recent years. This research thus gives readers and investors tools that apply the Markowitz model to identify, from historical data, the sectors in which investment is recommended and those which, given the trend, should be avoided. It should be noted, however, that this research is based on historical data, trends and mathematical calculations that may differ from current reality, since economic, political or social circumstances can affect the returns of the stocks and sectors in which individuals choose to invest.
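A minimal sketch of the Markowitz/Sharpe machinery the study applies, with illustrative numbers rather than actual Colombian sector data:

```python
import numpy as np

# Hypothetical annualized expected returns and covariance for three sectors
# (illustrative values, not COLCAP statistics).
mu = np.array([0.08, 0.05, 0.11])
cov = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.02, 0.00],
                [0.00, 0.00, 0.09]])
rf = 0.03  # risk-free rate

# Tangency (maximum Sharpe ratio) portfolio: weights proportional to
# inv(Sigma) @ (mu - rf), then normalized to sum to one.
w = np.linalg.solve(cov, mu - rf)
w /= w.sum()

port_ret = w @ mu
port_vol = np.sqrt(w @ cov @ w)
sharpe = (port_ret - rf) / port_vol
```

Repeating this per sector (or per candidate portfolio) and comparing Sharpe ratios is exactly the ranking exercise the research describes.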

Relevance: 30.00%

Abstract:

The purpose of the study was to explore how a public IT services transferor organization, composed of autonomous entities, can effectively develop and organize its data center cost recovery mechanisms in a fair manner. The lack of a well-defined model for charges and a cost recovery scheme can cause various problems; for example, one entity may end up subsidizing the costs of another. Transfer pricing is in the best interest of each autonomous entity in a CCA. While transfer pricing plays a pivotal role in the price setting of services and intangible assets, TCE focuses on the arrangement at the boundary between entities. TCE is concerned with the cost, autonomy and cooperation issues of an organization: the theory addresses the factors that influence intra-firm transaction costs and attempts to expose the problems involved in determining the charges or prices of transactions. This study was carried out as a single case study in a public organization. The organization intended to transfer the IT services of its own affiliated public entities and was in the process of establishing a municipal joint data center. Nine semi-structured interviews, including two pilot interviews, were conducted with experts and managers of the case company and its affiliated entities. The purpose of these interviews was to explore the charging and pricing issues of intra-firm transactions. In order to process and summarize the findings, this study employed qualitative techniques with multiple methods of data collection. By reviewing TCE theory and a sample of the transfer pricing literature, the study created an IT services pricing framework as a conceptual tool for illustrating the structure of transferring costs. Antecedents and consequences of the transfer price based on TCE were developed, and an explanatory fair charging model was eventually developed and suggested.
The findings of the study suggest that a chargeback system is an inappropriate scheme for an organization with affiliated autonomous entities. The main contribution of the study is the application of TP methodologies in the public sphere without consideration of tax issues.

Relevance: 30.00%

Abstract:

Observations of the Caspian Sea during August-September 1995 are used to develop a three-dimensional numerical model for calculating temperature and currents. This period was chosen because of its extensive set of observational data, including surface temperature observations. Data from the meteorological buoy network on the Caspian Sea are combined with routine observations at first-order synoptic stations around the lake to obtain hourly values of the wind stress and pressure fields. The initial temperature distribution as a function of depth and horizontal coordinates is derived from ship cruises. The model has variable grid resolution and horizontal smoothing which filters out small-scale vertical motion. The hydrodynamic model of the Caspian Sea has 6 vertical levels and a uniform horizontal grid size of 50 km. The model is driven with surface fluxes of heat and momentum derived from the meteorological observations. The model was able to reproduce all of the basic features of the thermal structure of the Caspian Sea: the larger-scale circulation patterns tend to be cyclonic, with cyclonic circulation within each sub-basin. The results agree with observations.
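The wind stress driving such a model is typically derived from buoy winds via a bulk formula; a minimal sketch (drag coefficient and wind components assumed) is:

```python
import math

def wind_stress(u10, v10, rho_air=1.2, cd=1.3e-3):
    """Bulk formula for surface wind stress: tau = rho_air * Cd * |U| * U,
    where U is the 10 m wind vector. Returns (tau_x, tau_y) in N/m^2."""
    speed = math.hypot(u10, v10)
    return rho_air * cd * speed * u10, rho_air * cd * speed * v10

# Example: a 10 m/s wind blowing toward the northeast (components in m/s).
taux, tauy = wind_stress(8.0, 6.0)
```

Applied at each grid point every hour, such stresses provide the momentum flux that, together with the surface heat flux, forces the circulation model.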

Relevance: 30.00%

Abstract:

The financial crisis of 2007-2008 led to extraordinary government intervention in firms and markets. The scope and depth of government action rivaled that of the Great Depression. Many traded markets experienced dramatic declines in liquidity, leading to the persistence of conditions normally assumed to be promptly removed via the actions of profit-seeking arbitrageurs. These extreme events motivate the three essays in this work. The first essay seeks, and fails to find, evidence of investor behavior consistent with the broad 'Too Big To Fail' policies enacted during the crisis by government agents. Only in limited circumstances, where government guarantees such as deposit insurance or U.S. Treasury lending lines already existed, did investors impart a premium to the debt security prices of firms under stress. The second essay introduces the Inflation Indexed Swap Basis (IIS Basis), examining the large differences between cash and derivative markets based upon future U.S. inflation as measured by the Consumer Price Index (CPI). It reports the consistently positive value of this measure, as well as the very large positive values it reached in the fourth quarter of 2008 after Lehman Brothers went bankrupt. It concludes that the IIS Basis continues to exist due to limitations in market liquidity and hedging alternatives. The third essay explores the methodology of performing debt-based event studies utilizing credit default swaps (CDS). It provides practical implementation advice to researchers to address limited source data and/or small target firm sample sizes.
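The IIS Basis can be illustrated with stylised numbers (assumed, not the essay's data): the cash-market breakeven inflation rate is the nominal Treasury yield minus the TIPS real yield, and the basis is the inflation swap rate minus that breakeven:

```python
# Stylised market rates for a common maturity (illustrative values only).
nominal_yield = 0.035   # nominal Treasury yield
real_yield = 0.012      # TIPS real yield
swap_rate = 0.026       # zero-coupon inflation swap rate

# Cash-market implied inflation (breakeven) and the IIS Basis.
breakeven = nominal_yield - real_yield   # 2.3% implied by the bond market
iis_basis = swap_rate - breakeven        # positive basis, as the essay reports
```

In frictionless markets the two implied inflation rates should coincide; a persistently positive basis therefore signals the liquidity and hedging limits the essay identifies.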

Relevance: 30.00%

Abstract:

In this dissertation I quantify residential behavioral responses to interventions designed to reduce electricity demand at different periods of the day. In the first chapter, I examine the effect of information provision coupled with bimonthly billing, monthly billing and in-home displays, as well as a time-of-use (TOU) pricing scheme, measuring consumption over each month of the Irish Consumer Behavior Trial. I find that time-of-use pricing with real-time usage information reduces electricity usage by up to 8.7 percent during peak times at the start of the trial, but the effect decays over the first three months, after which the in-home display group is indistinguishable from the monthly treatment group. The monthly and bimonthly billing treatments are not found to be statistically different from one another. These findings suggest that increasing billing reports to the monthly level may be more cost effective than providing in-home displays for electricity generators who wish to decrease expenses and consumption. In the following chapter, I examine the response of residential households after exposure to time-of-use tariffs at different hours of the day. I find that these treatments reduce electricity consumption during peak hours by almost four percent, significantly lowering demand. Within the model, I find evidence of overall conservation in electricity use. In addition, weekday peak reductions appear to carry over to the weekend, when peak pricing is not present, suggesting changes in consumer habit. The final chapter of my dissertation imposes a system-wide time-of-use plan to analyze the potential reduction in carbon emissions from load shifting, based on the Ireland and Northern Ireland Single Electricity Market. I find that CO2 emissions savings are highest during the winter months, when load demand is highest and dirtier power plants are scheduled to meet peak demand. TOU pricing allows usage to shift from peak to off-peak periods, and this shifted load can be met with cleaner and cheaper electricity from imports, high-efficiency gas units and hydro units.
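The load-shifting incentive of a TOU tariff can be sketched with assumed prices and a hypothetical daily usage profile:

```python
# Hypothetical tariff ($/kWh) and one household's daily usage (kWh) per period;
# all numbers are illustrative, not the trial's actual rates.
flat_rate = 0.18
tou = {'peak': 0.32, 'day': 0.14, 'night': 0.09}
usage = {'peak': 6.0, 'day': 10.0, 'night': 8.0}

flat_bill = flat_rate * sum(usage.values())
tou_bill = sum(tou[p] * usage[p] for p in usage)

# Shift 2 kWh from peak to night, mimicking the habit change the trial observed.
shifted = dict(usage, peak=usage['peak'] - 2, night=usage['night'] + 2)
shifted_bill = sum(tou[p] * shifted[p] for p in shifted)
```

The same shift that lowers the household's bill also moves load away from the dirtier peaking plants, which is the emissions channel the final chapter quantifies.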

Relevance: 30.00%

Abstract:

The share of variable renewable energy in electricity generation has grown exponentially during recent decades, and with the heightened pursuit of environmental targets, the trend is set to continue at an increased pace. The two most important resources, wind and insolation, both bear the burden of intermittency, creating a need for regulation and posing a threat to grid stability. One possibility for dealing with the imbalance between demand and generation is to store electricity temporarily, which was addressed in this thesis by implementing a dynamic model of adiabatic compressed air energy storage (CAES) with the Apros dynamic simulation software. Based on a literature review, the existing models were found insufficient for studying transient situations because of their simplifications, and despite its importance, the investigation of part-load operation has not yet been possible with satisfactory precision. As a key result of the thesis, the cycle efficiency at the design point was simulated to be 58.7%, which correlated well with the literature and was validated through analytical calculations. The performance at part load was validated against models shown in the literature, showing good correlation. By introducing wind resource and electricity demand data to the model, grid operation of the CAES plant was studied. In order to enable dynamic operation, start-up and shutdown sequences were approximated in a dynamic environment for, as far as is known, the first time, and a user component for the compressor variable guide vanes (VGV) was implemented. Even in its current state, the modularly designed model offers a framework for numerous studies. The validity of the model is limited by the accuracy of the VGV correlations at part load; in addition, the implementation of heat losses in the thermal energy storage is necessary to enable longer simulations. More extensive use of forecasts is an important target of development if the system operation is to be optimised in the future.
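As a rough illustration of the adiabatic CAES cycle (not the Apros model, which is far more detailed), a single-stage ideal-gas sketch with assumed component efficiencies gives a crude upper-bound round-trip efficiency, since it ignores multi-stage compression, throttling and storage heat losses:

```python
# Single-stage ideal-gas sketch of adiabatic CAES (all parameters assumed):
# compression heats the air, the heat is stored and returned before expansion,
# so round-trip efficiency ~ turbine work / compressor work.
cp, gamma = 1005.0, 1.4        # J/(kg K) and heat-capacity ratio for air
T_in, ratio = 293.0, 10.0      # inlet temperature (K), pressure ratio
eta_c, eta_t = 0.85, 0.88      # isentropic efficiencies
eta_store = 0.95               # fraction of compression heat recovered

# Compression: ideal temperature rise and actual specific work (J/kg).
dT_ideal = T_in * (ratio ** ((gamma - 1) / gamma) - 1)
w_comp = cp * dT_ideal / eta_c

# Turbine inlet temperature after reheating from the thermal store.
T_turbine_in = T_in + eta_store * (w_comp / cp)

# Expansion back to ambient pressure.
w_turb = eta_t * cp * T_turbine_in * (1 - ratio ** (-(gamma - 1) / gamma))

round_trip = w_turb / w_comp
```

The simulated 58.7% quoted above sits below what this sketch yields, precisely because the full model resolves the losses this sketch omits.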

Relevance: 30.00%

Abstract:

This dissertation mainly focuses on coordinated pricing and inventory management problems; the related background is provided in Chapter 1. Several periodic-review models are then discussed in Chapters 2, 3, 4 and 5, respectively. Chapter 2 analyzes a deterministic single-product model in which a price adjustment cost is incurred if the current selling price is changed from that of the previous period. We develop exact algorithms for the problem under different conditions and find that the computational complexity varies significantly with the cost structure. Moreover, our numerical study indicates that dynamic pricing strategies may outperform static pricing strategies even when the price adjustment cost accounts for a significant portion of the total profit. Chapter 3 develops a single-product model in which the demand of a period depends not only on the current selling price but also on past prices through the so-called reference price. Strongly polynomial time algorithms are designed for the case without a fixed ordering cost, and a heuristic is proposed for the general case together with an error bound estimation. Moreover, we illustrate through numerical studies that incorporating the reference price effect into coordinated pricing and inventory models can have a significant impact on firms' profits. Chapter 4 discusses the stochastic version of the model in Chapter 3 when customers are loss averse. It extends the associated results developed in the literature and proves that the reference-price-dependent base-stock policy is optimal under certain conditions. Instead of dealing with specific problems, Chapter 5 establishes the preservation of supermodularity in a class of optimization problems.
This property and its extensions include several existing results in the literature as special cases, and provide powerful tools, as we illustrate through their applications to several operations problems: the stochastic two-product model with cross-price effects, the two-stage inventory control model, and the self-financing model.
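The reference price mechanism of Chapter 3 can be sketched with assumed functional forms (a linear demand and an exponentially smoothed reference price; all parameter values are illustrative):

```python
def update_reference(r_prev, p_prev, alpha=0.7):
    """Memory-based reference price: r_t = alpha*r_{t-1} + (1-alpha)*p_{t-1}."""
    return alpha * r_prev + (1 - alpha) * p_prev

def demand(p, r, a=100.0, b=2.0, c=1.5):
    """Linear demand with a reference effect: d = a - b*p + c*(r - p).
    Pricing below the reference (r > p) is perceived as a gain and lifts demand."""
    return a - b * p + c * (r - p)

# Simulate three periods of prices against an initial reference of 20.
r, prices = 20.0, [20.0, 24.0, 22.0]
demands = []
for p in prices:
    demands.append(demand(p, r))
    r = update_reference(r, p)
```

Because today's price moves tomorrow's reference, the pricing and ordering decisions become coupled across periods, which is what makes the coordinated problem hard.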

Relevance: 30.00%

Abstract:

The aims of this thesis were to evaluate types of wave channels and wave currents, and the effect of some parameters on them, and to identify and compare types of wave makers in laboratory conditions. In this study, the design and construction of a two-dimensional channel (flume) and wave maker for experiments on marine buoys, marine structures and energy conversion systems were also investigated. The physical relation between the pump and pumping and the design of current generation in the flume were evaluated. The calculations for the steel structure and the glass channel walls, as well as the equations of wave maker plate motion, motor power and wave absorption (coastal slope), were carried out. A servo motor was then designed and applied to drive the wave maker plate. A ball screw linear actuator was used to improve the movement mechanism of the equipment and to convert rotary motion into linear motion. A Programmable Logic Controller (PLC) was used to control the wave maker system. The study also reviews the types of ocean energy and energy conversion systems. In another part of this research, wave energy systems, in particular the Oscillating Water Column (OWC), were examined, and a sample model was designed and tested in the hydraulic channel at the Sheikh Bahaii building of Azad University, Science and Research Branch. The dimensions of the designed flume were 16 × 1.98 × 0.57 m, giving it the ability to produce regular waves, as well as irregular waves with small changes to the control system. The wave-making ability of the designed channel was evaluated, and the results showed that the calculations for the designed flume were correct. The mean error between our results and the theoretical calculations was 7%, which indicates good agreement.
By evaluating the designed OWC model and modifying some parts of the system, a larger version of this model can be used to design an energy conversion system model. The results showed that the best form for the chamber at the system exit was an angle of zero degrees (0°) for the lower moving part and forty-five degrees (45°) for the front wall, with the front wall advanced to twice the wave height.
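For a piston-type paddle such as the one described, linear wavemaker theory gives the wave height-to-stroke ratio H/S = 2(cosh 2kh - 1)/(sinh 2kh + 2kh); a small sketch with illustrative values for a flume of this scale:

```python
import math

def piston_wavemaker_ratio(k, h):
    """Linear wavemaker theory height-to-stroke ratio for a piston paddle:
    H/S = 2*(cosh(2kh) - 1) / (sinh(2kh) + 2kh),
    where k is the wavenumber (rad/m) and h the water depth (m)."""
    kh2 = 2 * k * h
    return 2 * (math.cosh(kh2) - 1) / (math.sinh(kh2) + kh2)

# Example: k = 2.0 rad/m in h = 0.5 m of water (illustrative flume values).
ratio = piston_wavemaker_ratio(2.0, 0.5)
```

The ratio tends to 2 in deep water and to kh in shallow water, so the required paddle stroke for a target wave height follows directly from the dispersion relation.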

Relevance: 30.00%

Abstract:

This Ph.D. thesis contains four essays in mathematical finance, focusing on pricing Asian options (Chapter 4), pricing futures and futures options (Chapters 5 and 6) and time-dependent volatility in futures options (Chapter 7). In Chapter 4, the applicability of the Albrecher et al. (2005) comonotonicity approach is investigated in the context of various benchmark models for equities and commodities. Instead of the classical Levy models of Albrecher et al. (2005), the focus is on the Heston stochastic volatility model, the constant elasticity of variance (CEV) model and the Schwartz (1997) two-factor model. It is shown that the method delivers rather tight upper bounds for the prices of Asian options in these models and, as a by-product, delivers super-hedging strategies which can be easily implemented. In Chapter 5, two types of three-factor models are studied for valuing commodity futures contracts, both of which allow volatility to be stochastic. Both models have closed-form solutions for the futures contract price. However, it is shown that Model 2 is better than Model 1 theoretically and also performs very well empirically. Moreover, Model 2 can easily be implemented in practice. In comparison with the Schwartz (1997) two-factor model, it is shown that Model 2 has its own unique advantages; hence, it is also a good choice for pricing commodity futures contracts. Furthermore, if these two models are used at the same time, a more accurate price for commodity futures contracts can be obtained in most situations. In Chapter 6, the applicability of the asymptotic approach developed in Fouque et al. (2000b) is investigated for pricing commodity futures options in a Schwartz (1997) multi-factor model, featuring both stochastic convenience yield and stochastic volatility.
It is shown that the zero-order term in the expansion coincides with the Schwartz (1997) two-factor term with averaged volatility, and an explicit expression for the first-order correction term is provided. With empirical data from the natural gas futures market, it is also demonstrated that a significantly better calibration can be achieved by using the correction term, as compared with the standard Schwartz (1997) two-factor expression, at virtually no extra effort. In Chapter 7, a new pricing formula is derived for futures options in the Schwartz (1997) two-factor model with time-dependent spot volatility. The pricing formula can also be used to recover the time-dependent spot volatility from futures option prices observed in the market. Furthermore, the limitations of the method used to find the time-dependent spot volatility are explained, and it is shown how to verify its accuracy.
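In the Black (1976) framework that futures option formulas of this kind build on, a deterministic time-dependent volatility enters only through its root-mean-square average over the option's life; a sketch with illustrative inputs (a simplification, not the thesis's actual Schwartz two-factor formula):

```python
import math

def black76_call(F, K, T, sigma_avg, r=0.0):
    """Black (1976) futures call. With a time-dependent volatility sigma(t),
    sigma_avg**2 should be the time average of sigma(t)**2 over [0, T]."""
    # Standard normal CDF via the error function (no external dependencies).
    N = lambda x: 0.5 * (1 + math.erf(x / math.sqrt(2)))
    d1 = (math.log(F / K) + 0.5 * sigma_avg**2 * T) / (sigma_avg * math.sqrt(T))
    d2 = d1 - sigma_avg * math.sqrt(T)
    return math.exp(-r * T) * (F * N(d1) - K * N(d2))

# At-the-money example: futures at 100, one year to expiry, 30% RMS volatility.
price = black76_call(F=100.0, K=100.0, T=1.0, sigma_avg=0.3)
```

Inverting such a formula maturity by maturity is the basic idea behind recovering a time-dependent spot volatility from quoted futures option prices.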

Relevance: 30.00%

Abstract:

In this paper, we use the approximation of shallow water waves (Margaritondo G 2005 Eur. J. Phys. 26 401) to understand the behaviour of a tsunami in water of variable depth. We deduce the shallow water wave equation and the continuity equation that must be satisfied when a wave encounters a discontinuity in the sea depth. A short explanation of how the tsunami hit the west coast of India is given, based on the refraction phenomenon. Our procedure also includes a simple numerical calculation suitable for undergraduate students in physics and engineering.
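The kind of simple numerical calculation the paper advocates might look like this, combining the shallow water phase speed c = sqrt(g h) with Green's law for the amplitude change as the depth shallows (depths and amplitude are illustrative):

```python
import math

g = 9.81  # gravitational acceleration, m/s^2

def shallow_water_speed(h):
    """Phase speed of a shallow water wave: c = sqrt(g*h)."""
    return math.sqrt(g * h)

def greens_law_amplitude(a1, h1, h2):
    """Green's law: amplitude scales with depth as a2 = a1 * (h1/h2)**0.25."""
    return a1 * (h1 / h2) ** 0.25

# A 0.5 m tsunami in 4000 m of open ocean approaching 10 m coastal depth.
c_deep = shallow_water_speed(4000.0)               # ~198 m/s (~713 km/h)
a_coast = greens_law_amplitude(0.5, 4000.0, 10.0)  # ~2.2 m near shore
```

The slowdown of the wave front in shallower water is also what bends the wave crests toward the coast, the refraction effect used to explain the impact on the west coast of India.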

Relevance: 30.00%

Abstract:

Mental stress is known to disrupt the execution of motor performance and can lead to decrements in its quality; however, individuals show significant differences in how fast and how well they can perform a skilled task, according to how well they manage stress and emotion. The purpose of this study was to advance our understanding of how the brain modulates emotional reactivity under different motivational states to achieve differential performance in a target shooting task that requires precise visuomotor coordination. In order to study the interactions between emotion-regulatory brain areas (i.e. the ventral striatum, amygdala and prefrontal cortex) and the autonomic nervous system, reward and punishment interventions were employed, and the resulting behavioral and physiological responses were contrasted to observe the changes in shooting performance (i.e. shooting accuracy and stability of aim) and neurocognitive processes (i.e. cognitive load and reserve) during the shooting task. Thirty-five participants, aged 18 to 38 years, from the Reserve Officers' Training Corps (ROTC) at the University of Maryland were recruited to take 30 shots at a bullseye target in three different experimental conditions. In the reward condition, $1 was added to their total balance for every 10-point shot. In the punishment condition, $1 was deducted from their total balance if they did not hit the 10-point area. In the neutral condition, no money was added or deducted. In the reward condition, which was reportedly the most enjoyable and least stressful of the conditions, heart rate variability was found to be positively related to shooting scores, inversely related to variability in shooting performance, and positively related to alpha power (i.e. less activation) in the left temporal region. In the punishment (and most stressful) condition, an increase in sympathetic response (i.e. increased LF/HF ratio) was positively related to jerking movements as well as to the variability of shot placement on the target. This, coupled with error-monitoring activity in the anterior cingulate cortex, suggests that evaluation of self-efficacy might be driving arousal regulation, thus affecting shooting performance. Better performers showed variable, increasing high-alpha power in the temporal region during the aiming period leading up to the shot, which could indicate an adaptive strategy of engagement. They also showed lower coherence during hit shots than during missed shots, coupled with reduced jerking movements and better precision and accuracy. Frontal asymmetry measures revealed a possible influence of the prefrontal lobe in driving this effect in the reward and neutral conditions. The possible interactions, the reasons behind these findings, and their implications are discussed.