4 results for Orthogonal Cutting Model
in AMS Tesi di Laurea - Alm@DL - Università di Bologna
Abstract:
In this work I derive closed-form pricing formulas for volatility-based options by suitably approximating the risk-neutral density function of the volatility process. I exploit and adapt the idea behind popular techniques already employed in the context of equity options, such as Edgeworth and Gram-Charlier expansions: the density of the underlying process is approximated as a sum of particular polynomials weighted by a kernel, typically a Gaussian distribution. I propose instead a Gamma kernel, to adapt the methodology to the context of volatility options. Closed-form pricing formulas for vanilla VIX options are derived, and their accuracy is tested for the Heston model (1993) as well as for the jump-diffusion SVJJ model proposed by Duffie et al. (2000).
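Schematically, the expansion idea can be sketched as follows; the notation is illustrative and may differ from the thesis's exact construction. With a Gaussian kernel one recovers Gram-Charlier-type expansions; with a Gamma kernel, the polynomials orthogonal with respect to the weight are the generalized Laguerre polynomials:

```latex
% Density approximation with a Gamma kernel (illustrative notation):
% L_n^{(\alpha-1)} are generalized Laguerre polynomials, orthogonal
% with respect to the Gamma weight w.
\[
f_V(v) \approx w(v;\alpha,\beta)\sum_{n=0}^{N} c_n\, L_n^{(\alpha-1)}(\beta v),
\qquad
w(v;\alpha,\beta) = \frac{\beta^{\alpha} v^{\alpha-1} e^{-\beta v}}{\Gamma(\alpha)},
\]
% A call on the volatility index then prices term by term:
\[
C(K,T) = e^{-rT}\!\int_0^\infty (v-K)^+ f_V(v)\,dv
\;\approx\; e^{-rT}\sum_{n=0}^{N} c_n \int_K^\infty (v-K)\,
w(v;\alpha,\beta)\, L_n^{(\alpha-1)}(\beta v)\, dv .
\]
```

Each term-wise integral reduces to incomplete Gamma functions, which is what yields a closed form; the coefficients $c_n$ would be matched to the model-implied moments (Heston or SVJJ).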
Abstract:
This thesis focuses on finding the optimum block cutting dimensions, in terms of environmental and economic factors, by using a 3D algorithm for a limestone quarry in Foggia, Italy. The main environmental concerns of quarrying operations are energy consumption, material waste, and pollution; the main economic concerns are block recovery, selling prices, and production costs. Fractures adversely affect the block recovery ratio, so block production can be optimized with a fracture model. In this research, the waste volume produced by quarrying was minimised to increase the recovery ratio and ensure economic benefits. SlabCutOpt, a software developed at DICAM–University of Bologna for block cutting optimization, tests different cutting angles on the x-y-z planes to offer alternative cutting solutions; it tests several block sizes and outputs the optimal result for each entry. Using SlabCutOpt, ten different block dimensions were analysed, and the results indicated the maximum number of non-intersecting blocks for each dimension. Block number 1, with dimensions 1 m x 1 m x 1 m, had the highest recovery ratio, 43%, and the highest total Relative Money Value (RMV), 22829. It also had the lowest waste volume for the total bench, 3953.25 m³. For cutting the total bench volume of 6932.25 m³, the diamond wire cutter had the lowest dust emission value, 24 m³, for the block with dimensions 2 m x 2 m x 2 m. When compared with the Eco-Label standards, block dimensions with surface area values lower than 15 m² were found to meet the label's natural-resource waste criterion, as the threshold requires a minimum recovery of 25% [1]. Taking the relative production costs together with the Eco-Label threshold into account, the research recommends selecting blocks with a surface area between 6 m² and 14 m².
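SlabCutOpt's actual algorithm is not reproduced here; as a toy illustration of the underlying counting problem (maximising non-intersecting blocks around fractures), the sketch below places grid-aligned blocks in a bench and counts those not cut by any fracture. Fractures are modelled, purely for illustration, as axis-aligned planes, and all dimensions are made-up example values.

```python
from itertools import product

def count_blocks(bench, block, fractures):
    """Count grid-aligned blocks of size `block` fitting in a bench of
    size `bench` that are not cut by any fracture. A fracture is an
    axis-aligned plane (axis, coordinate) -- a gross simplification of
    a real fracture model. Returns (block_count, recovery_ratio)."""
    nx, ny, nz = (int(bench[i] // block[i]) for i in range(3))
    count = 0
    for ix, iy, iz in product(range(nx), range(ny), range(nz)):
        origin = (ix * block[0], iy * block[1], iz * block[2])
        # a block is lost if any fracture plane passes strictly through it
        cut = any(origin[axis] < c < origin[axis] + block[axis]
                  for axis, c in fractures)
        if not cut:
            count += 1
    block_vol = block[0] * block[1] * block[2]
    bench_vol = bench[0] * bench[1] * bench[2]
    return count, count * block_vol / bench_vol

# Made-up example: a 10 m x 10 m x 5 m bench, 1 m cubes, two fractures
# (one at x = 3.5 m, one at z = 2.5 m).
blocks, recovery = count_blocks((10, 10, 5), (1, 1, 1),
                                [(0, 3.5), (2, 2.5)])
```

With these example numbers, 360 of the 500 candidate cubes survive (recovery ratio 0.72); an optimizer like SlabCutOpt would additionally search over cutting angles and block sizes.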
Abstract:
In Beyond-5G technologies, Terahertz communications will be used: frequency bands between 100 GHz and 10 THz will be exploited in order to achieve higher throughput and lower latency. These frequency bands suffer from several impairments, and phase noise is thought to be one of the most significant. Orthogonal Chirp Division Multiplexing (OCDM) might be used in Beyond-5G communications thanks to its robustness to multipath fading, in which it outperforms Orthogonal Frequency Division Multiplexing (OFDM) systems. The aim of this thesis is to find a suitable model for describing phase noise in Terahertz communications and to study the performance of an OCDM system affected by this impairment. A simple compensation scheme is then introduced, and the improvement it provides is analysed. The thesis is organized as follows: the first chapter introduces Terahertz communications and Beyond 5G, the second chapter studies phase noise, the third chapter analyses OCDM, and the fourth chapter presents numerical results.
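As a minimal sketch of the OCDM signal model, the code below builds a discrete Fresnel transform (DFnT) matrix — the chirp-domain analogue of OFDM's DFT — and applies a Wiener (random-walk) phase noise, a standard oscillator model, to the transmitted samples. The phase convention shown (an even-N form) is one common choice; the thesis's exact model, parameters, and compensation scheme may differ.

```python
import cmath
import math
import random

def dfnt(N):
    """Discrete Fresnel transform matrix (one common even-N phase convention)."""
    c = cmath.exp(-1j * math.pi / 4) / math.sqrt(N)
    return [[c * cmath.exp(1j * math.pi * (m - n) ** 2 / N)
             for n in range(N)] for m in range(N)]

def hermitian(M):
    return [[M[n][m].conjugate() for n in range(len(M))] for m in range(len(M))]

def matvec(M, x):
    return [sum(M[m][n] * x[n] for n in range(len(x))) for m in range(len(M))]

def wiener_phase_noise(N, sigma, rng):
    """Multiplicative Wiener (cumulative random-walk) phase noise samples."""
    phi, out = 0.0, []
    for _ in range(N):
        phi += rng.gauss(0.0, sigma)
        out.append(cmath.exp(1j * phi))
    return out

# OCDM: modulate with Phi^H (onto chirps), apply phase noise, demodulate with Phi.
rng = random.Random(0)
N = 8
Phi = dfnt(N)
x = [complex(rng.choice((-1, 1)), 0) for _ in range(N)]  # BPSK symbols
s = matvec(hermitian(Phi), x)                            # time-domain samples
r = [si * pn for si, pn in zip(s, wiener_phase_noise(N, 0.05, rng))]
y = matvec(Phi, r)                                       # distorted symbol estimates
```

Because the DFnT matrix is unitary, the noiseless round trip recovers the symbols exactly; comparing `y` against `x` isolates the degradation due to phase noise alone.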
Abstract:
Natural Language Processing (NLP) has seen tremendous improvements over the last few years. Transformer architectures achieve impressive results in almost any NLP task, such as Text Classification, Machine Translation, and Language Generation. Over time, transformers have continued to improve thanks to larger corpora and bigger networks, reaching hundreds of billions of parameters. Training and deploying such large models has become prohibitively expensive, so that only large high-tech companies can afford to train them. Therefore, a lot of research has been dedicated to reducing model size. In this thesis, we investigate the effects of Vocabulary Transfer and Knowledge Distillation for compressing large Language Models. The goal is to combine these two methodologies to further compress models without significant loss of performance. In particular, we designed different combination strategies and conducted a series of experiments on different vertical domains (medical, legal, news) and downstream tasks (Text Classification and Named Entity Recognition). Four methods involving Vocabulary Transfer (VIPI), with and without a Masked Language Modelling (MLM) step and with and without Knowledge Distillation, are compared against a baseline that assigns random vectors to new elements of the vocabulary. Results indicate that VIPI effectively transfers information from the original vocabulary and that the MLM step is beneficial. Vocabulary transfer and knowledge distillation are also orthogonal to one another and may be applied jointly; applying knowledge distillation first and vocabulary transfer afterwards is recommended. Finally, model performance under vocabulary transfer does not always show a consistent trend as the vocabulary size is reduced; hence, the vocabulary size should be selected empirically by evaluating on the downstream task, similarly to hyperparameter tuning.
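The two ingredients can be sketched as follows; this is a generic illustration of the ideas, not the thesis's VIPI procedure or training setup, and all names and toy values are made up. The first function initializes a new-vocabulary token's embedding from the old-vocabulary pieces it decomposes into (falling back to random initialization — the baseline — when nothing can be transferred); the second is a Hinton-style soft-target distillation loss.

```python
import math

def transfer_embedding(new_token, old_embeddings, old_tokenize):
    """Initialize a new-vocabulary token embedding as the mean of the
    old-vocabulary embeddings of its sub-pieces (a common vocabulary-
    transfer heuristic; the thesis's VIPI method may differ in detail).
    Returns None when no piece is known, signalling the caller to use
    random initialization (the baseline)."""
    pieces = [p for p in old_tokenize(new_token) if p in old_embeddings]
    if not pieces:
        return None
    dim = len(old_embeddings[pieces[0]])
    return [sum(old_embeddings[p][i] for p in pieces) / len(pieces)
            for i in range(dim)]

def distill_loss(student_logits, teacher_logits, T=2.0):
    """Knowledge-distillation loss: KL(teacher || student) computed on
    temperature-softened distributions, scaled by T^2."""
    def softened(z):
        m = max(z)
        e = [math.exp((v - m) / T) for v in z]
        s = sum(e)
        return [v / s for v in e]
    p, q = softened(teacher_logits), softened(student_logits)
    return T * T * sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Toy example: the new token "cardio" splits into old pieces "card" + "io".
old_emb = {"card": [1.0, 0.0], "io": [0.0, 1.0]}
vec = transfer_embedding("cardio", old_emb, lambda t: ["card", "io"])
# vec == [0.5, 0.5]
```

Because the two losses act on different parts of the model (the embedding matrix vs. the output distribution), the sketch also illustrates why the thesis can treat vocabulary transfer and knowledge distillation as orthogonal, composable steps.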