916 results for Options (Finance) -- Mathematical models
Abstract:
Given the significance of econometric models in the foreign exchange market, the purpose of this research is to take a closer look at some important issues in this area. The research covers exchange rate pass-through into import prices, liquidity risk and expected returns in the currency market, and common risk factors in currency markets. Firstly, given the importance of exchange rate pass-through in financial economics, the first empirical chapter studies the degree of exchange rate pass-through into import prices in emerging economies and developed countries, using panel evidence for comparison over the period 1970-2009. Pooled mean group estimation (PMGE) is used to investigate the short-run coefficients and error variance. In general, the results show that import prices are affected positively, though incompletely, by the exchange rate. Secondly, the following study addresses the question of whether there is a relationship between cross-sectional differences in foreign exchange returns and the sensitivities of those returns to fluctuations in liquidity, known as liquidity betas, using a unique dataset of weekly order flow. Finally, the last study follows Lustig, Roussanov and Verdelhan (2011), who show that the large co-movement among exchange rates of different currencies supports a risk-based view of exchange rate determination, and takes up the identification of a slope factor in exchange rate changes. The study first constructs monthly portfolios of currencies sorted on the basis of their forward discounts: the lowest interest rate currencies are contained in the first portfolio and the highest interest rate currencies in the last. Comparing the first and last portfolios, the results show that portfolios with higher forward discounts tend to contain currencies with higher real interest rates overall, despite some fluctuation.
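The portfolio construction in the last study is mechanical enough to sketch in code. The following Python fragment is a minimal sketch on invented data (all figures and column names are hypothetical, not the thesis's dataset); it sorts a monthly currency panel into six portfolios by forward discount, mirroring the Lustig, Roussanov and Verdelhan (2011) design:

```python
import numpy as np
import pandas as pd

# Hypothetical monthly panel: one row per (month, currency) with the
# forward discount (log forward minus log spot) and the realised
# excess return of holding that currency over the month.
rng = np.random.default_rng(0)
months = pd.period_range("2000-01", periods=120, freq="M")
currencies = [f"CUR{i}" for i in range(20)]
panel = pd.DataFrame(
    [(m, c, rng.normal(0.002, 0.01), rng.normal(0.0, 0.02))
     for m in months for c in currencies],
    columns=["month", "currency", "fwd_discount", "excess_ret"],
)

N_PORTFOLIOS = 6  # as in Lustig, Roussanov and Verdelhan (2011)

def sort_into_portfolios(df: pd.DataFrame) -> pd.Series:
    """Average excess return per portfolio for one month's cross-section."""
    # Rank currencies by forward discount: portfolio 0 holds the lowest
    # (lowest interest rate) currencies, the last holds the highest.
    bins = pd.qcut(df["fwd_discount"].rank(method="first"),
                   N_PORTFOLIOS, labels=False)
    return df.groupby(bins)["excess_ret"].mean()

portfolio_returns = panel.groupby("month").apply(sort_into_portfolios)
print(portfolio_returns.mean())  # mean excess return per portfolio
```

On real data, a monotone spread in mean returns between the first and last portfolios is what motivates the slope factor.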
Abstract:
This paper aims to broaden the applicability of real options as a project valuation methodology and a key element in investment decision making. Traditional project valuation methodologies are described, and their gaps, chief among them the handling of uncertainty, are presented. A parallel is drawn between financial options and the real options that can be used for valuation, using the binomial tree method. Finally, a case study in the construction sector shows a project valuation using expansion and waiting options.
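The binomial tree method named above is standard enough to illustrate. The Python sketch below values a stylised option to expand a project on a Cox-Ross-Rubinstein lattice; all project figures are invented for illustration and are not taken from the paper's case study:

```python
import numpy as np

def expansion_option_value(V0, sigma, r, T, steps, k, I):
    """Value a project carrying an option to expand at maturity.

    V0    : present value of the base project's cash flows
    sigma : volatility of the project value
    r     : risk-free rate (continuous compounding)
    T     : time to the expansion decision (years)
    k, I  : expansion scales the project value by k at an extra cost I
    """
    dt = T / steps
    u = np.exp(sigma * np.sqrt(dt))      # up factor
    d = 1.0 / u                          # down factor
    p = (np.exp(r * dt) - d) / (u - d)   # risk-neutral up probability
    disc = np.exp(-r * dt)

    # Project values at the terminal nodes of the lattice
    j = np.arange(steps + 1)
    V = V0 * u**j * d**(steps - j)

    # At maturity, take the better of continuing as-is or expanding
    W = np.maximum(V, k * V - I)

    # Roll the option value back through the tree
    for _ in range(steps):
        W = disc * (p * W[1:] + (1 - p) * W[:-1])
    return W[0]

# Illustrative numbers only
print(expansion_option_value(V0=100.0, sigma=0.3, r=0.05, T=2.0,
                             steps=200, k=1.5, I=60.0))
```

A waiting option fits the same lattice: at each node one compares the payoff from investing immediately with the discounted expected value of deferring the decision.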
Abstract:
The release of ultrafine particles (UFP) from laser printers and office equipment was analyzed using a particle counter with high time resolution (FMPS, Fast Mobility Particle Sizer), together with appropriate mathematical models. Measurements were carried out in a 1 m³ chamber, a 24 m³ chamber and an office. The time-dependent emission rates were calculated for these environments using a deconvolution model, after which the total amount of emitted particles was calculated. The total amounts of released particles were found to be independent of the environmental parameters and were therefore, in principle, appropriate for comparing different printers. On the basis of the time-dependent emission rates, “initial burst” emitters and constant emitters could also be distinguished. For an “initial burst” emitter, comparison with other devices is generally affected by strong variation between individual measurements. When conducting exposure assessments for UFP in an office, the spatial distribution of the particles also has to be considered; in this work, it was predicted on a case-by-case basis using CFD simulation.
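The abstract does not spell out the deconvolution model, but models of this kind typically rest on a well-mixed mass balance; a minimal form, with all symbols assumed here, is:

```latex
% Well-mixed chamber mass balance: C concentration, V chamber volume,
% E(t) emission rate, \lambda combined ventilation and deposition loss
\frac{\mathrm{d}C(t)}{\mathrm{d}t} = \frac{E(t)}{V} - \lambda\, C(t)
\quad\Longrightarrow\quad
E(t) = V\left(\frac{\mathrm{d}C(t)}{\mathrm{d}t} + \lambda\, C(t)\right)
```

The total amount of emitted particles then follows by integrating E(t) over the duration of the print job.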
Abstract:
Experiments were undertaken to study the drying kinetics of moist cylindrical food particulates during fluidised bed drying. Cylindrical particles were prepared from green beans with three different length:diameter ratios, 3:1, 2:1 and 1:1. A batch fluidised bed dryer connected to a heat pump system was used for the experimentation; the heat pump and fluidised bed combination was used to increase overall energy efficiency and achieve higher drying rates. Drying kinetics were evaluated with non-dimensional moisture at three drying temperatures of 30, 40 and 50 °C. Numerous mathematical models can be used to describe drying kinetics, ranging from analytical models with simplified assumptions to empirical models built by regression from experimental data. Empirical models are commonly used for various food materials because of their simpler approach; however, problems with accuracy limit their application. Some limitations of empirical models can be reduced by using semi-empirical models based on the heat and mass transfer of the drying operation, one such method being the quasi-stationary approach. In this study, a modified quasi-stationary approach was used to model the drying kinetics of the cylindrical food particles at the three drying temperatures.
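For reference, the non-dimensional moisture mentioned above is conventionally the moisture ratio; a minimal semi-empirical thin-layer form, a generic stand-in rather than the modified quasi-stationary model used in this study, reads:

```latex
% Moisture ratio: M(t) moisture content, M_0 initial, M_e equilibrium
\mathrm{MR}(t) = \frac{M(t) - M_e}{M_0 - M_e},
\qquad
\mathrm{MR}(t) = e^{-kt} \quad \text{(Lewis/Newton form, } k \text{ fitted per temperature)}
```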
Abstract:
The healing process for bone fractures is sensitive to the mechanical stability and blood supply at the fracture site. Most currently available mechanobiological algorithms of bone healing are based solely on mechanical stimuli, while the explicit analysis of revascularization and its influence on the healing process has not been thoroughly investigated in the literature. In this paper, revascularization was described by two separate processes: angiogenesis and nutrition supply. Mathematical models for angiogenesis and nutrition supply have been proposed and integrated into an existing fuzzy algorithm of fracture healing. The computational algorithm of fracture healing, consisting of stress analysis, analyses of angiogenesis and nutrient supply, and tissue differentiation, has been tested against animal experimental results published previously. The simulation results showed that, for small and medium-sized fracture gaps, the nutrient supply is sufficient for bone healing, whereas for a large fracture gap, non-union may be induced either by deficient nutrient supply or by inadequate mechanical conditions. The comparisons with experimental results demonstrated that the improved computational algorithm is able to simulate a broad spectrum of fracture healing cases and to predict and explain delayed unions and non-unions induced by large gap sizes and different mechanical conditions. The new algorithm will allow the simulation of more realistic clinical fracture healing cases with various fracture gaps and geometries, and may be helpful for optimising implants and methods of fracture fixation.
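The abstract does not give the angiogenesis and nutrient-supply equations themselves; as a hedged illustration, models of this kind often take a reaction-diffusion form such as the following, with all symbols assumed here:

```latex
% Nutrient concentration c(x,t): D diffusion coefficient,
% q cellular consumption rate, s(x,t) source term coupled to the
% advancing vascular network
\frac{\partial c}{\partial t} = D\,\nabla^2 c - q\,c + s(x,t)
```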
Abstract:
In sport and exercise biomechanics, forward dynamics analyses or simulations have frequently been used in attempts to establish optimal techniques for performance of a wide range of motor activities. However, the accuracy and validity of these simulations are largely dependent on the complexity of the mathematical model used to represent the neuromusculoskeletal system. It could be argued that complex mathematical models are superior to simple mathematical models, as they enable basic mechanical insights to be made and individual-specific optimal movement solutions to be identified. Contrary to some claims in the literature, however, we suggest that it is currently not possible to identify the complete optimal solution for a given motor activity. For a complete optimization of human motion, dynamical systems theory implies that mathematical models must incorporate a much wider range of organismic, environmental and task constraints. These ideas encapsulate why sports medicine specialists need to adopt more individualized clinical assessment procedures when interpreting why performers' movement patterns may differ.
Abstract:
Modern Engineering Asset Management (EAM) requires accurate assessment of current asset health and prediction of future asset health condition. Appropriate mathematical models that are capable of estimating times to failure and the probability of failure in the future are essential in EAM. In most real-life situations, the lifetime of an engineering asset is influenced and/or indicated by different factors, termed covariates. Hazard prediction with covariates is an elemental notion in reliability theory: it estimates the tendency of an engineering asset to fail instantaneously beyond the current time, given that it has survived up to the current time. A number of statistical covariate-based hazard models have been developed; however, none of them explicitly incorporates both external and internal covariates into one model. This paper introduces a novel covariate-based hazard model, named the Explicit Hazard Model (EHM), to address this concern. Both the semi-parametric and non-parametric forms of this model are presented in the paper. The major purpose of this paper is to illustrate the theoretical development of EHM. Owing to page limitations, a case study with reliability field data is presented in the application part of this study.
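For background on the hazard notion the paper builds on (the generic textbook forms, not EHM itself): the hazard is the instantaneous failure rate conditional on survival, and the classic semi-parametric covariate-based model is Cox's proportional hazards:

```latex
% Hazard: instantaneous failure rate given survival to time t
h(t) = \lim_{\Delta t \to 0}
       \frac{\Pr(t \le T < t + \Delta t \mid T \ge t)}{\Delta t}
% Cox proportional hazards: h_0 baseline hazard, z covariate vector
h(t \mid \mathbf{z}) = h_0(t)\,\exp\!\bigl(\boldsymbol{\beta}^{\top}\mathbf{z}\bigr)
```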
Abstract:
The driving task requires sustained attention over prolonged periods and can be performed in highly predictable or repetitive environments. Such conditions can create drowsiness or hypovigilance and impair the ability to react to critical events. Identifying vigilance decrement in monotonous conditions has been a major subject of research, but no research to date has attempted to predict this vigilance decrement. This pilot study aims to show that vigilance decrements due to monotonous tasks can be predicted through mathematical modelling. A short vigilance task sensitive to brief lapses of vigilance, the Sustained Attention to Response Task, is used to assess participants’ performance; the task models the driver’s ability to cope with unpredicted events by performing the expected action. A Hidden Markov Model (HMM) is proposed to predict participants’ hypovigilance: the driver’s vigilance evolution is modelled as a hidden state and is correlated with an observable variable, the participant’s reaction time. The experiment shows that the monotony of the task can lead to a marked vigilance decline in less than five minutes, and that this impairment can be predicted four minutes in advance with 86% accuracy using HMMs. Mathematical models such as HMMs can thus efficiently predict hypovigilance through surrogate measures. The presented model could lead to an in-vehicle device that detects driver hypovigilance in advance and warns the driver accordingly, offering the potential to enhance road safety and prevent road crashes.
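The filtering idea behind the HMM is compact enough to sketch. The following Python fragment runs a forward-algorithm update over two hidden vigilance states, with discretised reaction times as observations; all probabilities are illustrative, not the study's fitted parameters:

```python
import numpy as np

# Hidden states: 0 = vigilant, 1 = hypovigilant (illustrative parameters)
A = np.array([[0.95, 0.05],    # state transition probabilities
              [0.10, 0.90]])
# Observation model: reaction times binned as fast / medium / slow
B = np.array([[0.70, 0.25, 0.05],   # vigilant -> mostly fast responses
              [0.10, 0.40, 0.50]])  # hypovigilant -> mostly slow
pi = np.array([0.9, 0.1])           # initial state distribution

def forward_filter(obs):
    """P(state at time t | observations up to t), for each t."""
    alpha = pi * B[:, obs[0]]
    alpha /= alpha.sum()
    beliefs = [alpha]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]   # predict, then weight by likelihood
        alpha /= alpha.sum()            # normalise to avoid underflow
        beliefs.append(alpha)
    return np.array(beliefs)

# A participant drifting from fast to slow reaction times
observations = [0, 0, 1, 0, 1, 2, 1, 2, 2, 2]
for t, b in enumerate(forward_filter(observations)):
    print(f"t={t}: P(hypovigilant) = {b[1]:.2f}")
```

An in-vehicle device of the kind envisaged would raise a warning once the filtered probability of hypovigilance crosses a chosen threshold.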
Abstract:
Modern computer graphics systems are able to construct renderings of such high quality that viewers are deceived into regarding the images as coming from a photographic source. Large amounts of computing resources are expended in this rendering process, using complex mathematical models of lighting and shading. However, psychophysical experiments have revealed that viewers attend only to certain informative regions within a presented image, and that these visually important regions contain low-level visual feature differences that attract the viewer's attention. This thesis presents a new approach to image synthesis that exploits these experimental findings by modulating the spatial quality of image regions according to their visual importance, reaping efficiency gains without sacrificing much of the perceived quality of the image. Two tasks must be undertaken to achieve this goal: firstly, the design of an appropriate region-based model of visual importance, and secondly, the modification of progressive rendering techniques to effect an importance-based rendering approach. A rule-based fuzzy logic model is presented that computes, using spatial feature differences, the relative visual importance of regions in an image. This model improves upon previous work by incorporating threshold effects induced by global feature difference distributions and by using texture concentration measures. A modified approach to progressive ray-tracing is also presented, in which the visual importance model guides the progressive refinement of an image. In addition, the concept of visual importance has been incorporated into supersampling, texture mapping and computer animation techniques. Experimental results are presented, illustrating the efficiency gains achieved by this method of progressive rendering. The visual importance-based rendering approach is expected to have applications in the entertainment industry, where image fidelity may be sacrificed for efficiency as long as the overall visual impression of the scene is maintained, and different aspects of the approach should find many other applications in image compression, image retrieval, progressive data transmission and active robotic vision.
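As a hedged sketch of the importance-based refinement idea, and not the thesis's actual renderer, the following Python fragment allocates a fixed ray budget across image regions in proportion to invented visual-importance scores:

```python
import numpy as np

def allocate_samples(importance, total_rays):
    """Distribute a ray budget over regions in proportion to importance."""
    w = np.asarray(importance, dtype=float)
    w = w / w.sum()
    rays = np.floor(w * total_rays).astype(int)
    # Hand the rounding remainder to the most important regions
    for i in np.argsort(-w)[: total_rays - rays.sum()]:
        rays[i] += 1
    return rays

# Four regions scored by a (hypothetical) visual-importance model
print(allocate_samples([0.9, 0.4, 0.1, 0.05], total_rays=1000))
```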
Abstract:
The analysis of investment in the electric power industry has been the subject of intensive research for many years. The efficient generation and distribution of electrical energy is a difficult task involving the operation of a complex network of facilities, often located over very large geographical regions. Electric power utilities have made use of an enormous range of mathematical models. Some models address time spans which last for a fraction of a second, such as those that deal with lightning strikes on transmission lines, while at the other end of the scale there are models which address time horizons of ten or twenty years; these usually involve long range planning issues. This thesis addresses the optimal long term capacity expansion of an interconnected power system. The aim of this study has been to derive a new long term planning model which recognises the regional differences that exist in energy demand and in the construction and operation of power plant and transmission line equipment. Perhaps the most innovative feature of the new model is the direct inclusion of regional energy demand curves in nonlinear form, which results in a nonlinear capacity expansion model. After a review of the relevant literature, the thesis first develops a model for the optimal operation of a power grid. This model directly incorporates regional demand curves and is a nonlinear programming problem containing both integer and continuous variables. A solution algorithm is developed based upon a resource decomposition scheme that separates the integer variables from the continuous ones. The decomposition of the operating problem leads to an iterative scheme which employs a mixed integer programming problem, known as the master, to generate trial operating configurations. The optimum operating conditions of each trial configuration are found using a smooth nonlinear programming model, and the dual vector recovered from this model is subsequently used by the master to generate the next trial configuration. The solution algorithm progresses until lower and upper bounds converge. A range of numerical experiments are conducted and these experiments are included in the discussion. Using the operating model as a basis, a regional capacity expansion model is then developed; it determines the type, location and capacity of the additional power plants and transmission lines required to meet predicted electricity demands. A generalised resource decomposition scheme, similar to that used to solve the operating problem, is employed. The solution algorithm is used to solve a range of test problems and the results of these numerical experiments are reported. Finally, the expansion problem is applied to the Queensland electricity grid in Australia.
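The decomposition scheme alternates a configuration-proposing master with a continuous operating subproblem until lower and upper bounds meet. The Python toy below mimics that loop on an invented dispatch problem, with plain enumeration standing in for the master MIP and a closed-form merit-order dispatch standing in for the nonlinear subproblem; it illustrates the bound-tracking structure only:

```python
import itertools

# Invented data: candidate plants with build cost, capacity and fuel cost
plants = [
    {"name": "coal",  "build": 500.0, "cap": 80.0, "fuel": 2.0},
    {"name": "gas",   "build": 300.0, "cap": 60.0, "fuel": 5.0},
    {"name": "hydro", "build": 700.0, "cap": 50.0, "fuel": 0.5},
]
DEMAND = 100.0

def dispatch_cost(config):
    """Subproblem: cheapest-first (merit order) dispatch for one trial
    configuration; returns None if built capacity cannot cover demand."""
    built = [p for p, on in zip(plants, config) if on]
    if sum(p["cap"] for p in built) < DEMAND:
        return None
    remaining, cost = DEMAND, 0.0
    for p in sorted(built, key=lambda p: p["fuel"]):
        q = min(p["cap"], remaining)
        cost += q * p["fuel"]
        remaining -= q
    return cost

# 'Master': propose configurations in order of build cost; the build cost
# alone is a valid lower bound because operating cost is non-negative.
configs = sorted(itertools.product([0, 1], repeat=len(plants)),
                 key=lambda c: sum(p["build"] for p, on in zip(plants, c) if on))
upper, best = float("inf"), None
for c in configs:
    build = sum(p["build"] for p, on in zip(plants, c) if on)
    if build >= upper:            # lower bound meets upper bound: stop
        break
    op = dispatch_cost(c)         # solve the operating subproblem
    if op is not None and build + op < upper:
        upper, best = build + op, c
print("best configuration:", best, "total cost:", upper)
```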
Abstract:
Experiments were undertaken to study the drying kinetics of moist food particulates of different shapes during heat pump assisted fluidised bed drying. Three geometrical shapes, parallelepipeds, cylinders and spheres, were prepared from potatoes (aspect ratios 1:1, 2:1 and 3:1), cut beans (length:diameter ratios 1:1, 2:1 and 3:1) and peas respectively. A batch fluidised bed dryer connected to a heat pump system was used for the experimentation; the heat pump and fluidised bed combination was used to increase overall energy efficiency and achieve higher drying rates. Drying kinetics were evaluated with non-dimensional moisture at three drying temperatures of 30, 40 and 50 °C. Owing to the complex hydrodynamics of fluidised beds, drying kinetics are dryer- or material-specific. Numerous mathematical models can be used to describe drying kinetics, ranging from analytical models with simplified assumptions to empirical models built by regression from experimental data. Empirical models are commonly used for various food materials because of their simpler approach; however, problems with accuracy limit their application. Some limitations of empirical models can be reduced by using semi-empirical models based on the heat and mass transfer of the drying operation, one such method being the quasi-stationary approach. In this study, a modified quasi-stationary approach was used to model the drying kinetics of the cylindrical food particles at the three drying temperatures.