71 results for "Restorable load estimation"
Abstract:
This study investigates futures market efficiency and optimal hedge ratio estimation. First, cointegration between spot and futures prices is studied using the Johansen method with two different model specifications. If the prices are found to be cointegrated, restrictions on the cointegrating vector and adjustment coefficients are imposed to test for unbiasedness, weak exogeneity and the prediction hypothesis. Second, optimal hedge ratios are estimated using static OLS and time-varying DVEC and CCC models. In-sample and out-of-sample results for one-, two- and five-period-ahead horizons are reported. The futures used in the thesis are the RTS index, the EUR/RUB exchange rate and Brent oil, traded on Futures and Options on RTS (FORTS). For the in-sample period, data points were acquired from the start of trading of each futures contract: the RTS index from August 2005, the EUR/RUB exchange rate from March 2009 and Brent oil from October 2008, lasting until the end of May 2011. The out-of-sample period covers the start of June 2011 until the end of December 2011. Our results indicate that all three spot-futures asset pairs are cointegrated. We found the RTS index futures to be an unbiased predictor of the spot price, found mixed evidence for the exchange rate, and found that unbiasedness was not supported for Brent oil futures. Weak exogeneity results for all pairs indicated that the spot price leads the price discovery process. The prediction hypothesis, i.e. joint unbiasedness and weak exogeneity of futures, was rejected for all asset pairs. Variance reduction results varied between assets, in-sample in the range of 40-85 percent and out-of-sample in the range of 40-96 percent. Differences between the models were small, except for Brent oil, for which OLS clearly dominated. Out-of-sample results indicated exceptionally high variance reduction for the RTS index, approximately 95 percent.
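The static OLS hedge ratio mentioned above is simply the slope from regressing spot returns on futures returns, and hedging effectiveness is the variance reduction of the hedged position. A minimal sketch on synthetic data (illustrative only, not the thesis's price series):

```python
import numpy as np

rng = np.random.default_rng(0)
futures = rng.normal(0.0, 0.02, 500)                 # synthetic futures returns
spot = 0.9 * futures + rng.normal(0.0, 0.005, 500)   # correlated spot returns

# Minimum-variance hedge ratio h* = Cov(spot, futures) / Var(futures),
# which equals the OLS slope of spot returns on futures returns.
h = np.cov(spot, futures)[0, 1] / np.var(futures)

# Hedging effectiveness: relative variance reduction of the hedged portfolio.
hedged = spot - h * futures
reduction = 1.0 - np.var(hedged) / np.var(spot)
```

Time-varying models such as DVEC or CCC replace the constant covariance above with a conditional covariance estimated period by period.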
Abstract:
The target company of this study is a large machinery company which is, inter alia, engaged in energy and pulp engineering, procurement and construction management (EPCM) supply business. The main objective of this study was to develop the target company's cost estimation by providing more accurate, reliable and up-to-date information through the enterprise resource planning (ERP) system. Another objective was to find cost-effective methods of collecting total cost of ownership information to support more informed supplier selection decision making. This study is primarily action-oriented, but also constructive, and it can be divided into two sections: a theoretical literature review and an empirical study on the abovementioned part of the target company's business. The development of information collection is, in addition to the literature review, based on nearly 30 qualitative interviews with employees at various organizational units, functions and levels of the target company. At the core of the development was making the initial data more accurate, reliable and available, a necessary prerequisite for informed use of the information. Development suggestions and paths were presented in order to regain confidence in the ERP system as an information source by reorganizing the work breakdown structure and by complementing mere cost information with quantitative, technical and scope information. Several methods of using the information ever more effectively were also discussed. While implementation of the development suggestions was beyond the scope of this study, the work was taken forward in a test environment and among interest groups.
Abstract:
Bone strain plays a major role as the activation signal for the bone (re)modeling process, which is vital for keeping bones healthy. Maintaining high bone mineral density reduces the chances of fracture in the event of an accident. Numerous studies have shown that bones can be strengthened with physical exercise. Several hypotheses have asserted that a stronger osteogenic (bone-producing) effect results from dynamic exercise than from static exercise. These previous studies are based on short-term empirical research, which provides the motivation for justifying the experimental results with a solid mathematical background. The computer simulation techniques utilized in this work allow for non-invasive bone strain estimation during physical activity at any bone site within the human skeleton. All models presented in the study are three-dimensional and actuated by muscle models to replicate the real conditions accurately. The objective of this work is to determine and present loading-induced bone strain values resulting from physical activity. It includes a comparison of strain resulting from four different gym exercises (knee flexion, knee extension, leg press, and squat) and walking, with the results reported for walking and jogging obtained from in-vivo measurements described in the literature. The objective is realized primarily by carrying out flexible multibody dynamics computer simulations. The dissertation combines the knowledge of finite element analysis and multibody simulations with experimental data and information available from the medical literature. Measured subject-specific motion data were coupled with forward dynamics simulation to provide natural skeletal movement. Bone geometries were defined using a reverse engineering approach based on medical imaging techniques. Both computed tomography and magnetic resonance imaging were utilized to explore modeling differences.
The predicted tibia bone strains during walking show good agreement with in-vivo studies found in the literature. Strain measurements were not available for the gym exercises; therefore, those strain results could not be validated. However, the values seem reasonable when compared with available in-vivo strain measurements for walking and running. The results can be used for the design of exercise equipment aimed at strengthening the bones as well as the muscles during a workout. Clinical applications in post-fracture recovery exercise programs could also be targeted. In addition, the methodology introduced in this study can be applied to investigate the effect of weightlessness on astronauts, who often suffer bone loss after long periods spent in outer space.
Abstract:
Parameter estimation still remains a challenge in many important applications. There is a need to develop methods that exploit the growing capabilities of modern computational systems. Owing to this fact, various kinds of Evolutionary Algorithms are becoming an especially promising field of research. The main aim of this thesis is to explore theoretical aspects of a specific member of the Evolutionary Algorithms class, the Differential Evolution (DE) method, and to implement this algorithm as code capable of solving a large range of problems. Matlab, a numerical computing environment provided by MathWorks Inc., has been utilized for this purpose. Our implementation empirically demonstrates the benefits of stochastic optimizers over deterministic optimizers in the case of stochastic and chaotic problems. Furthermore, the advanced features of Differential Evolution are discussed and taken into account in the Matlab realization. Test "toy case" examples are presented in order to show the advantages and disadvantages of the additional aspects involved in extensions of the basic algorithm. Another aim of this thesis is to apply the DE approach to the parameter estimation problem of a system exhibiting chaotic behavior, where the well-known Lorenz system with a specific set of parameter values is taken as an example. Finally, the DE approach for estimation of chaotic dynamics is compared with the Ensemble prediction and parameter estimation system (EPPES) approach, which was recently proposed as a possible solution for similar problems.
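The basic DE scheme referred to above (DE/rand/1 mutation, binomial crossover, greedy selection) can be sketched as follows; this is a generic toy implementation in Python, not the thesis's Matlab code:

```python
import numpy as np

def differential_evolution(f, bounds, pop_size=30, F=0.8, CR=0.9,
                           generations=200, seed=0):
    """Basic DE/rand/1/bin: mutation with a scaled difference vector,
    binomial crossover, and greedy one-to-one selection."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    dim = lo.size
    pop = lo + rng.random((pop_size, dim)) * (hi - lo)
    cost = np.array([f(x) for x in pop])
    for _ in range(generations):
        for i in range(pop_size):
            candidates = [j for j in range(pop_size) if j != i]
            a, b, c = rng.choice(candidates, size=3, replace=False)
            mutant = np.clip(pop[a] + F * (pop[b] - pop[c]), lo, hi)
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True   # guarantee one mutant gene
            trial = np.where(cross, mutant, pop[i])
            f_trial = f(trial)
            if f_trial <= cost[i]:            # greedy selection
                pop[i], cost[i] = trial, f_trial
    best = int(np.argmin(cost))
    return pop[best], cost[best]

# Toy case: minimize the 3-D sphere function.
x_best, f_best = differential_evolution(lambda x: float(np.sum(x * x)),
                                        bounds=[(-5.0, 5.0)] * 3)
```

Because selection compares only function values, the same loop applies unchanged to noisy or chaotic objectives, which is the setting exploited in the thesis.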
Abstract:
To obtain the desirable accuracy of a robot, two techniques are available. The first option would be to make the robot match the nominal mathematical model. In other words, the manufacturing and assembling tolerances of every part would be extremely tight so that all of the various parameters would match the “design” or “nominal” values as closely as possible. This method can satisfy most accuracy requirements, but the cost increases dramatically as the accuracy requirement increases. Alternatively, a more cost-effective solution is to build a manipulator with relaxed manufacturing and assembling tolerances. By modifying the mathematical model in the controller, the actual errors of the robot can be compensated. This is the essence of robot calibration. Simply put, robot calibration is the process of defining an appropriate error model and then identifying the various parameter errors that make the error model match the robot as closely as possible. This work focuses on kinematic calibration of a 10-degree-of-freedom (DOF) redundant serial-parallel hybrid robot. The robot consists of a 4-DOF serial mechanism and a 6-DOF hexapod parallel manipulator. The redundant 4-DOF serial structure is used to enlarge the workspace, and the 6-DOF hexapod manipulator is used to provide high load capability and stiffness for the whole structure. The main objective of the study is to develop a suitable calibration method to improve the accuracy of the redundant serial-parallel hybrid robot. To this end, a Denavit–Hartenberg (DH) hybrid error model and a Product-of-Exponentials (POE) error model are developed for error modeling of the proposed robot. Furthermore, two kinds of global optimization methods, i.e. the differential evolution (DE) algorithm and the Markov chain Monte Carlo (MCMC) algorithm, are employed to identify the parameter errors of the derived error models.
A measurement method based on a 3-2-1 wire-based pose estimation system is proposed and implemented in a SolidWorks environment to simulate real experimental validation. Numerical simulations and SolidWorks prototype-model validations are carried out on the hybrid robot to verify the effectiveness, accuracy and robustness of the calibration algorithms.
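The identification step of calibration amounts to fitting parameter errors so that the error model reproduces the measured poses. As an illustration only, here is a toy linear least-squares identification of link-length errors for a planar two-link arm; the actual thesis uses DH/POE error models of the hybrid robot with DE or MCMC identification:

```python
import numpy as np

# Toy calibration: a planar 2R arm with nominal link lengths carrying
# unknown errors; identify the errors from measured tip positions.
l_nom = np.array([0.5, 0.3])                      # nominal lengths (illustrative)
l_true = l_nom + np.array([0.004, -0.002])        # the "actual" robot

def tip(lengths, q):
    """Forward kinematics of the planar 2R arm for joint angles q (n x 2)."""
    x = lengths[0] * np.cos(q[:, 0]) + lengths[1] * np.cos(q[:, 0] + q[:, 1])
    y = lengths[0] * np.sin(q[:, 0]) + lengths[1] * np.sin(q[:, 0] + q[:, 1])
    return np.column_stack([x, y])

rng = np.random.default_rng(0)
q = rng.uniform(-np.pi, np.pi, (50, 2))           # measurement configurations
measured = tip(l_true, q)

# Error model: residual = Jacobian (w.r.t. the link lengths) times the
# length errors; solve for the errors by linear least squares.
J = np.column_stack([
    np.concatenate([np.cos(q[:, 0]), np.sin(q[:, 0])]),
    np.concatenate([np.cos(q.sum(1)), np.sin(q.sum(1))]),
])
res = (measured - tip(l_nom, q)).T.reshape(-1)    # stacked x then y residuals
dl, *_ = np.linalg.lstsq(J, res, rcond=None)      # identified length errors
```

In the thesis the error model is nonlinear in the parameters, which is why global optimizers (DE, MCMC) replace this closed-form least-squares step.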
Abstract:
This thesis presents a set of methods and models for the estimation of iron and slag flows in the blast furnace hearth and taphole. The main focus is on predicting taphole flow patterns and estimating the effects of various taphole conditions on the drainage behavior of the blast furnace hearth. All models were based on a general understanding of the typical tap cycle of an industrial blast furnace. Some of the models were evaluated on short-term process data from the reference furnace. A computational fluid dynamics (CFD) model was built and applied to simulate the complicated hearth flows and thus to predict the regions of the hearth exposed to erosion under various operating conditions. Key boundary variables of the CFD model were provided by a simplified drainage model based on first principles. By examining the evolution of liquid outflow rates measured from the furnace studied, the drainage model was improved to include the effects of taphole diameter and length. The estimated slag delays showed good agreement with the observed ones. The liquid flows in the taphole were further studied using two different models, and the results of both models indicated that separated flow of iron and slag is more likely to occur in the taphole when the liquid outflow rates are comparable during tapping. The drainage process was simulated with an integrated model based on an overall balance analysis: the high in-furnace overpressure can compensate for the resistances induced by the liquid flows in the hearth and through the taphole. Finally, a multiphase CFD model including interfacial forces between immiscible liquids was developed, and both the actual iron-slag system and a laboratory-scale water-oil system were simulated. The model was demonstrated to be a useful tool for simulating hearth flows and for gaining understanding of the complex phenomena in the drainage of the blast furnace.
Abstract:
State-of-the-art predictions of atmospheric states rely on large-scale numerical models of chaotic systems. This dissertation studies numerical methods for state and parameter estimation in such systems. The motivation comes from weather and climate models, and a methodological perspective is adopted. The dissertation comprises three sections: state estimation, parameter estimation and chemical data assimilation with real atmospheric satellite data. In the state estimation part of this dissertation, a new filtering technique based on a combination of ensemble and variational Kalman filtering approaches is presented, tested and discussed. This new filter is developed for large-scale Kalman filtering applications. In the parameter estimation part, three different techniques for parameter estimation in chaotic systems are considered. The methods are studied using the parameterized Lorenz 95 system, which is a benchmark model for data assimilation. In addition, a dilemma related to the uniqueness of weather and climate model closure parameters is discussed. In the data-oriented part of this dissertation, data from the Global Ozone Monitoring by Occultation of Stars (GOMOS) satellite instrument are considered, and an alternative algorithm to retrieve atmospheric parameters from the measurements is presented. The validation study presents the first global comparisons between two unique satellite-borne datasets of vertical profiles of nitrogen trioxide (NO3), retrieved using the GOMOS and Stratospheric Aerosol and Gas Experiment III (SAGE III) satellite instruments. The GOMOS NO3 observations are also considered in a chemical state estimation study in order to retrieve stratospheric temperature profiles. The main methodological result of this dissertation is the calculation of likelihoods via Kalman filtering outputs. The concept has previously been used together with stochastic differential equations and in time series analysis.
In this work, the concept is applied to chaotic dynamical systems and used together with Markov chain Monte Carlo (MCMC) methods for statistical analysis. In particular, this methodology is advocated for use in numerical weather prediction (NWP) and climate model applications. In addition, the concept is shown to be useful for estimating filter-specific parameters, such as those of the model error covariance matrix.
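For a linear-Gaussian state-space model, the Kalman filter yields the marginal likelihood exactly from its one-step prediction errors; this likelihood is the quantity an MCMC chain over the parameters would evaluate. A scalar toy sketch (illustrative only, far from an NWP-scale filter):

```python
import numpy as np

def kf_log_likelihood(y, a, q, r, m0=0.0, p0=1.0):
    """Log-likelihood of y under x_k = a*x_{k-1} + N(0, q),
    y_k = x_k + N(0, r), accumulated from one-step prediction errors."""
    m, p, ll = m0, p0, 0.0
    for yk in y:
        m_pred, p_pred = a * m, a * a * p + q        # predict
        s = p_pred + r                               # innovation variance
        v = yk - m_pred                              # innovation
        ll += -0.5 * (np.log(2.0 * np.pi * s) + v * v / s)
        k = p_pred / s                               # Kalman gain
        m, p = m_pred + k * v, (1.0 - k) * p_pred    # update
    return ll

# Simulate data with a = 0.9 and check that the likelihood, evaluated on a
# coarse grid as an MCMC sampler over 'a' would evaluate it, peaks at the
# true value.
rng = np.random.default_rng(1)
x, ys = 0.0, []
for _ in range(400):
    x = 0.9 * x + rng.normal(0.0, 0.5)
    ys.append(x + rng.normal(0.0, 0.3))
best_a = max([0.5, 0.7, 0.9, 1.1],
             key=lambda a: kf_log_likelihood(ys, a, 0.25, 0.09))
```

For chaotic systems the filter is only approximate, so the resulting likelihood is approximate as well; that is precisely the setting studied in the dissertation.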
Abstract:
The recent emergence of low-cost RGB-D sensors has brought new opportunities for robotics by providing affordable devices that deliver synchronized images with both color and depth information. In this thesis, recent work on pose estimation utilizing RGB-D sensors is reviewed. In addition, a pose recognition system for rigid objects using RGB-D data is implemented. The implementation uses half-edge primitives extracted from the RGB-D images for pose estimation. The system is based on the probabilistic object representation framework by Detry et al., which utilizes Nonparametric Belief Propagation for pose inference. Experiments are performed on household objects to evaluate the performance and robustness of the system.
Abstract:
Eutrophication caused by anthropogenic nutrient pollution has become one of the most severe threats to water bodies. Nutrients enter water bodies from atmospheric precipitation, industrial and domestic wastewaters and surface runoff from agricultural and forest areas. As point pollution has been significantly reduced in developed countries in recent decades, agricultural non-point sources have been increasingly identified as the largest source of nutrient loading in water bodies. In this study, Lake Säkylän Pyhäjärvi and its catchment are studied as an example of a long-term, voluntary-based, co-operative model of lake and catchment management. Lake Pyhäjärvi is located in the centre of an intensive agricultural area in southwestern Finland. More than 20 professional fishermen operate in the lake area, and the lake is used as a drinking water source and for various recreational activities. Lake Pyhäjärvi is a good example of a large and shallow lake that suffers from eutrophication and is subject to measures to improve this undesired state under changing conditions. Climate change is one of the most important challenges faced by Lake Pyhäjärvi and other water bodies. The results show that climatic variation affects the amounts of runoff and nutrient loading and their timing during the year. The findings from the study area concerning warm winters and their influences on nutrient loading are in accordance with the IPCC scenarios of future climate change. In addition to nutrient reduction measures, the restoration of food chains (biomanipulation) is a key method in water quality management. The food-web structure in Lake Pyhäjärvi has, however, become disturbed due to mild winters, short ice cover and low fish catch. Ice cover that enables winter seining is extremely important to the water quality and ecosystem of Lake Pyhäjärvi, as the vendace stock is one of the key factors affecting the food web and the state of the lake. 
New methods for the reduction of nutrient loading and the treatment of runoff waters from agriculture, such as sand filters, were tested in field conditions. The results confirm that the filter technique is an applicable method for nutrient reduction, but further development is needed. The ability of sand filters to absorb nutrients can be improved with nutrient binding compounds, such as lime. Long-term hydrological, chemical and biological research and monitoring data on Lake Pyhäjärvi and its catchment provide a basis for water protection measures and improve our understanding of the complicated physical, chemical and biological interactions between the terrestrial and aquatic realms. In addition to measurements carried out in field conditions, Lake Pyhäjärvi and its catchment were studied using various modelling methods. In the calibration and validation of models, long-term and wide-ranging time series data proved to be valuable. Collaboration between researchers, modellers and local water managers further improves the reliability and usefulness of models. Lake Pyhäjärvi and its catchment can also be regarded as a good research laboratory from the point of view of the Baltic Sea. The main problem in both of them is eutrophication caused by excess nutrients, and nutrient loading has to be reduced – especially from agriculture. Mitigation measures are also similar in both cases.
Abstract:
The power rating of wind turbines is constantly increasing; however, keeping the voltage rating at the low-voltage level results in high kilo-ampere currents. An alternative for increasing the power levels without raising the voltage level is provided by multiphase machines. Multiphase machines are used for instance in ship propulsion systems, aerospace applications, electric vehicles, and other high-power applications including wind energy conversion systems. A machine model in an appropriate reference frame is required in order to design an efficient control for the electric drive. Modeling of multiphase machines poses a challenge because of the mutual couplings between the phases. Mutual couplings degrade the drive performance unless they are properly considered. In certain multiphase machines there is also a problem of high current harmonics, which are easily generated because of the small current-path impedance of the harmonic components. However, multiphase machines provide special characteristics compared with their three-phase counterparts: they have better fault tolerance and are thus more robust. In addition, the controlled power can be divided among more inverter legs by increasing the number of phases. Moreover, the torque pulsation can be decreased and the harmonic frequency of the torque ripple increased by an appropriate multiphase configuration. By increasing the number of phases it is also possible to obtain more torque per RMS ampere for the same volume, and thus increase the power density. In this doctoral thesis, a decoupled d–q model of double-star permanent-magnet (PM) synchronous machines is derived based on inductance matrix diagonalization. The double-star machine is a special type of multiphase machine. Its armature consists of two three-phase winding sets, which are commonly displaced by 30 electrical degrees. In this study, the displacement angle between the sets is considered a parameter.
The diagonalization of the inductance matrix results in a simplified model structure in which the mutual couplings between the reference frames are eliminated. Moreover, the current harmonics are mapped into a reference frame in which they can be easily controlled. The work also presents methods to determine the machine inductances by finite-element analysis and by voltage-source inverters on-site. The derived model is validated by experimental results obtained with an example double-star interior PM (IPM) synchronous machine with the winding sets displaced by 30 electrical degrees. The derived transformation and, consequently, the decoupled d–q machine model are shown to capture the behavior of an actual machine with acceptable accuracy. Thus, the proposed model is suitable for the model-based control design of electric drives consisting of double-star IPM synchronous machines.
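The decoupling rests on the fact that a real symmetric inductance matrix is diagonalized by an orthogonal transformation, which eliminates the mutual-coupling terms in the transformed variables. A generic numeric sketch with illustrative values (not the machine's actual inductances or its specific transformation matrix):

```python
import numpy as np

# Toy symmetric inductance matrix: self-inductance L_s on the diagonal,
# mutual coupling M between every pair of phases (values illustrative only).
L_s, M = 1.0e-3, 0.3e-3
L = np.full((4, 4), M) + np.eye(4) * (L_s - M)

# A real symmetric matrix is diagonalized by its orthonormal eigenvectors:
# T.T @ L @ T is diagonal, so in the transformed variables the mutual
# couplings between the new reference-frame quantities are eliminated.
eigvals, T = np.linalg.eigh(L)
L_dec = T.T @ L @ T

off_diag = L_dec - np.diag(np.diag(L_dec))
decoupled = np.allclose(off_diag, 0.0)   # couplings vanish in the new frame
```

In the machine model the transformation additionally separates the harmonic components into their own reference frame, which is what makes them easy to control.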
Abstract:
In this doctoral dissertation, low-voltage direct current (LVDC) distribution system stability, supply security and power quality are evaluated by computational modelling and by measurements on an LVDC research platform. Computational models for LVDC network analysis are developed. Time-domain simulation models are implemented in the PSCAD/EMTDC simulation environment. The PSCAD/EMTDC models of the LVDC network are applied to transient behaviour and power quality studies. The LVDC network power loss model is developed in a MATLAB environment and is capable of fast estimation of the network and component power losses. The model integrates analytical equations that describe the power loss mechanisms of the network components with power flow calculations. For the LVDC network research platform, a monitoring and control software solution is developed. The solution is used to deliver measurement data for verification of the developed models and for analysis of the modelling results. In the work, the power loss mechanisms of the LVDC network components and their main dependencies are described. The energy loss distribution of the LVDC network components is presented. Power quality measurements and current spectra are provided, and harmonic pollution on the DC network is analysed. The transient behaviour of the network is verified through time-domain simulations. DC capacitor guidelines for an LVDC power distribution network are introduced. The power loss analysis results show that one of the main optimisation targets for an LVDC power distribution network should be the reduction of no-load losses and the efficiency improvement of converters at partial loads. Low-frequency spectra of the network voltages and currents are shown, and harmonic propagation is analysed. Power quality at the LVDC network point of common coupling (PCC) is discussed. The power quality standard requirements are shown to be met by the LVDC network.
The network behaviour during transients is analysed by time-domain simulations. The network is shown to be transient-stable during large-scale disturbances. Measurement results from the LVDC research platform confirming this are presented in the work.
Abstract:
The aim of this work is to apply approximate Bayesian computation in combination with Markov chain Monte Carlo methods in order to estimate the parameters of tuberculosis transmission. The methods are applied to San Francisco data, and the results are compared with the outcomes of previous works. Moreover, a methodological idea aimed at reducing computational time is described. Although this approach is shown to work appropriately, further analysis is needed to understand and test its behaviour in different cases. Suggestions for its further enhancement are described in the corresponding chapter.
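The ABC-MCMC idea can be sketched generically: propose a parameter, simulate data from the model, and accept the move only when a simulated summary statistic falls within a tolerance of the observed one. A Gaussian toy example with a flat prior (not the tuberculosis transmission model):

```python
import numpy as np

def abc_mcmc(observed_stat, simulate, n_iter=5000, tol=0.2,
             step=0.5, theta0=0.0, seed=0):
    """ABC-MCMC with a flat prior and symmetric random-walk proposal:
    a move is accepted only when the simulated summary statistic lands
    within tol of the observed one."""
    rng = np.random.default_rng(seed)
    theta, chain = theta0, []
    for _ in range(n_iter):
        prop = theta + rng.normal(0.0, step)
        if abs(simulate(prop, rng) - observed_stat) < tol:
            theta = prop   # accept; otherwise keep the current state
        chain.append(theta)
    return np.array(chain)

# Toy problem: infer the mean of a Gaussian from a sample-mean summary.
rng = np.random.default_rng(42)
observed = rng.normal(2.0, 1.0, 100).mean()
simulate = lambda th, r: r.normal(th, 1.0, 100).mean()
chain = abc_mcmc(observed, simulate, theta0=observed)
```

Each likelihood-free acceptance check costs one full simulation of the model, which is why ideas for reducing computational time, such as the one described in the work, matter in practice.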
Abstract:
Epithelial ovarian cancer (EOC) is usually diagnosed at an advanced stage. The prognosis depends highly on the amount of residual tumor left at surgery. In patients with extensive disease, neoadjuvant chemotherapy (NACT) is used to diminish the tumor load before debulking surgery. New non-invasive methods are needed to preoperatively evaluate disease dissemination and operability. [18F]FDG PET/CT (positron emission tomography/computed tomography) is a promising method for cancer diagnostics and staging, and biomarker profiles during treatment can predict a patient's outcome. This prospective study included 41 EOC patients, 21 treated with primary surgery and 20 with NACT and interval surgery. The performance of preoperative contrast-enhanced PET/CT (PET/ceCT) was compared with that of diagnostic CT (ceCT). Perioperative visual estimation of tumor spread was studied in primary and interval surgery. The profile of the serum marker HE4 (human epididymis protein 4) during primary chemotherapy was evaluated. In primary surgery, surgical findings were found to form an adequate reference standard for the imaging studies. After NACT, the sensitivity of visual estimation of cancer dissemination was significantly worse. Preoperative PET/ceCT was more effective than ceCT alone in detecting extra-abdominal disease spread. The high number of supradiaphragmatic lymph node metastases detected by PET/ceCT at the time of diagnosis brings new insight into EOC spread patterns. The sensitivity of both PET/CT and ceCT remained modest in the intra-abdominal areas important to operability. The HE4 profile was in concordance with the CA125 profile during primary chemotherapy. Its role in the evaluation of EOC chemotherapy response will be clarified in further studies.
Abstract:
Today's constantly tightening emission limits and the threat of climate change are driving forces in developing power plant technology in a more energy-efficient and environmentally friendly direction. Improving internal combustion engine technology is an important part of this development, but even existing engines could be run more energy-efficiently with the help of a battery bank and an intelligent control system. This thesis uses simulations to investigate whether the energy efficiency of an offshore service vessel can be improved by modifying its power production with the aid of a mean-power estimator and a battery bank.
Abstract:
The objective of this Master's thesis is to develop a model which estimates net working capital (NWC) monthly over a one-year period. The study is conducted as constructive research using a case study. The estimation model is designed for the needs of one case company operating in project business. The net working capital components should be linked together by an automatic model and estimated individually, including advanced components of NWC such as percentage-of-completion (POC) receivables. The net working capital estimation model of this study contains three parts: an output template, an input template and a calculation model. The output template receives estimate values automatically from the input template and the calculation model. Estimate values of the more stable NWC components are entered manually into the input template. The calculation model derives estimate values for the major components automatically from the company's systems using historical data and existing plans. A precondition for the functionality of the estimation calculation is that sales are estimated over a one-year period, because sales are linked to all NWC components.
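The three-part structure described above can be sketched as a simple roll-up in which manually entered and automatically calculated component estimates are summed month by month; all component names and figures below are hypothetical, not the case company's data:

```python
# Hypothetical monthly NWC roll-up mirroring the three-part structure:
# manually entered estimates (input template) and system-driven estimates
# (calculation model) combine into the monthly output (output template).
manual_inputs = {                # stable components, entered by hand
    "inventories": [120, 118, 121],
    "other_receivables": [15, 15, 16],
}
calculated = {                   # major components, derived from sales plans
    "trade_receivables": [200, 210, 205],
    "poc_receivables": [80, 85, 90],
    "trade_payables": [-150, -155, -148],   # liabilities reduce NWC
}

def monthly_nwc(*component_groups):
    """Output template: sum every component series month by month."""
    months = len(next(iter(component_groups[0].values())))
    totals = [0] * months
    for group in component_groups:
        for series in group.values():
            for m, value in enumerate(series):
                totals[m] += value
    return totals

nwc = monthly_nwc(manual_inputs, calculated)
```

Because every component series is driven by the same sales estimate, the one-year sales plan is the common input that keeps the templates consistent.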