Abstract:
The goal of this work was to develop the cost estimation process for the projects of the engineering unit under study, so that the unit's management would in the future have more accurate cost information at its disposal. To make this possible, the unit's working practices, the cost structures of its projects, and the cost attributes first had to be identified. This was accomplished by examining the cost history data of past projects and by interviewing experts. The work resulted in a cost estimation process and model compatible with the target unit's other processes. The cost estimation method and model are based on cost attributes, which are defined separately for the environment under study. The cost attributes are found by studying historical data, that is, by analyzing completed projects, their cost structures, and the factors that have driven the costs. Weights and weight ranges must then be defined for the cost attributes. The accuracy of the estimation model can be improved by calibrating it. The Goal-Question-Metric (GQM) method was used as the framework of the study.
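To make the attribute-weight idea concrete, here is a minimal sketch (not from the thesis; all attribute names and figures are hypothetical) of an estimate built from weighted cost attributes, where the weight ranges yield an uncertainty interval around the point estimate:

```python
# Hypothetical cost attributes: value, weight (EUR per unit of the
# attribute), and the weight's plausible range from calibration.
attributes = {
    "design_hours":  (120, 95.0,   (85.0, 110.0)),
    "review_rounds": (3,   1500.0, (1000.0, 2200.0)),
    "interfaces":    (5,   800.0,  (500.0, 1200.0)),
}

estimate = sum(value * weight for value, weight, _ in attributes.values())
low = sum(value * lo for value, _, (lo, _) in attributes.values())
high = sum(value * hi for value, _, (_, hi) in attributes.values())
print(f"estimate: {estimate:.0f} EUR (range {low:.0f} to {high:.0f} EUR)")
```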
Abstract:
In mathematical modeling, the estimation of model parameters is one of the most common problems. The goal is to find parameters that fit the measurements as well as possible. There is always error in the measurements, which introduces uncertainty into the model estimates. In Bayesian statistics, all unknown quantities are represented as probability distributions. If there is prior knowledge about the parameters, it can be formulated as a prior distribution. Bayes' rule combines the prior and the measurements into a posterior distribution. Mathematical models are typically nonlinear, so producing statistics for them requires efficient sampling algorithms. This thesis introduces the Metropolis-Hastings (MH) and Adaptive Metropolis (AM) algorithms as well as Gibbs sampling, along with different ways to represent prior distributions. The main issue is measurement error estimation and how to obtain prior knowledge of the variance or covariance. Variance and covariance sampling is combined with the algorithms above. Examples of the hyperprior models are applied to the estimation of model parameters and error in a case with outliers.
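As an illustration of the sampling machinery, the following is a minimal random-walk Metropolis-Hastings sketch in Python (not code from the thesis; the linear model, prior, and noise level are assumptions chosen for brevity):

```python
import numpy as np

def metropolis_hastings(log_post, theta0, n_samples=10000, step=0.5):
    """Random-walk Metropolis-Hastings for a scalar parameter."""
    rng = np.random.default_rng(0)
    theta = theta0
    chain = np.empty(n_samples)
    for i in range(n_samples):
        proposal = theta + step * rng.standard_normal()
        # Accept with probability min(1, posterior ratio).
        if np.log(rng.uniform()) < log_post(proposal) - log_post(theta):
            theta = proposal
        chain[i] = theta
    return chain

# Hypothetical example: y = theta * x + noise, Gaussian prior on theta.
x = np.linspace(0.0, 1.0, 20)
y = 2.0 * x + 0.1 * np.random.default_rng(1).standard_normal(20)
log_post = lambda t: (-0.5 * np.sum((y - t * x) ** 2) / 0.1**2  # likelihood
                      - 0.5 * t**2 / 10.0)                      # prior
chain = metropolis_hastings(log_post, theta0=0.0)
print(chain.mean(), chain.std())  # posterior summary for theta
```

The Adaptive Metropolis variant differs mainly in tuning the proposal step from the history of the chain rather than fixing it in advance.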
Abstract:
Cost estimation is an important but challenging process when designing a new product or a feature of one, verifying the prices quoted by suppliers, or planning cost-saving actions for existing products. It is even more challenging when the product is highly modular rather than a bulk product. In general, cost estimation techniques can be divided into two main groups, qualitative and quantitative, which can be further classified into more detailed methods. Qualitative techniques are generally preferable when comparing alternatives, and quantitative techniques when cost relationships can be found. The main objective of this thesis was to develop a method for estimating the costs of internally manufactured and commercial elevator landing doors. Because of the challenging product structure, the proposed cost estimation framework is developed on three different levels based on the past cost information available. The framework combines features from both qualitative and quantitative cost estimation techniques. The starting point for the whole cost estimation process is an unambiguous, hierarchical product structure, so that the product can be broken down into controllable parts that are easier to handle. Those parts can then be compared to past cost knowledge of similar parts to create cost estimates that are as accurate as possible.
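As a toy illustration of the analogy-based level of such a framework (not the thesis's actual model; part dimensions and costs are invented), a part from the product structure can be matched to the most similar past part with a known cost:

```python
# Past cost knowledge: (width_mm, height_mm, material) -> realized cost (EUR).
past_parts = {
    (900, 2100, "steel"): 310.0,
    (1000, 2100, "steel"): 335.0,
    (900, 2100, "glass"): 520.0,
}

def estimate_part_cost(width, height, material):
    """Cost of the most similar past part with the same material."""
    candidates = [(abs(w - width) + abs(h - height), cost)
                  for (w, h, m), cost in past_parts.items() if m == material]
    if not candidates:
        raise ValueError("no past cost data for this material")
    return min(candidates)[1]

# Summing part-level estimates over the hierarchical product structure
# gives the product-level estimate.
print(estimate_part_cost(950, 2100, "steel"))
```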
Abstract:
Sensor-based robot control allows manipulation in dynamic environments with uncertainties. Vision is a versatile, low-cost sensory modality, but its low sample rate, high sensor delay, and uncertain measurements limit its usability, especially in strongly dynamic environments. Force is a complementary sensory modality that allows accurate measurement of local object shape when a tooltip is in contact with the object. In multimodal sensor fusion, several sensors measuring different modalities are combined to give a more accurate estimate of the environment. As force and vision are fundamentally different sensory modalities that do not share a common representation, combining the information from these sensors is not straightforward. In this thesis, methods for fusing proprioception, force, and vision together are proposed. By making assumptions about object shape and modeling the uncertainties of the sensors, the measurements can be fused in an extended Kalman filter. The fusion of force and visual measurements makes it possible to estimate the pose of a moving target with an end-effector-mounted moving camera at high rate and accuracy. The proposed approach takes the latency of the vision system into account explicitly in order to provide high-sample-rate estimates. The estimates also allow a smooth transition from vision-based motion control to force control. The velocity of the end-effector can be controlled by estimating the distance to the target by vision and determining a velocity profile that gives rapid approach with minimal force overshoot. Experiments with a 5-degree-of-freedom parallel hydraulic manipulator and a 6-degree-of-freedom serial manipulator show that integrating several sensor modalities can increase the accuracy of the measurements significantly.
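The following is a minimal sketch of the fusion idea (a linear Kalman filter on a 1-D position, not the thesis's extended filter or its sensor models; sample rates and noise levels are assumptions): a high-rate, low-noise contact measurement is fused with a low-rate, noisier vision measurement of the same quantity.

```python
import numpy as np

dt = 0.01
F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity motion model
Q = np.diag([1e-6, 1e-4])               # process noise
H = np.array([[1.0, 0.0]])              # both sensors observe position
R_force, R_vision = 1e-6, 1e-2          # contact far less noisy than vision

rng = np.random.default_rng(0)
true_pos = 0.5 * dt * np.arange(1000)   # target moving at 0.5 m/s
z_force = true_pos + 1e-3 * rng.standard_normal(1000)
z_vision = true_pos + 0.1 * rng.standard_normal(1000)

x, P = np.zeros(2), np.eye(2)           # state: [position, velocity]

def update(z, R):
    global x, P
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T / S                     # Kalman gain (scalar measurement)
    x = x + (K * (z - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P

for k in range(1000):
    x, P = F @ x, F @ P @ F.T + Q       # predict
    update(z_force[k], R_force)         # force/contact: every step
    if k % 33 == 0:                     # vision: ~30x lower sample rate
        update(z_vision[k], R_vision)

print(x)                                # fused position/velocity estimate
```

Handling the vision latency explicitly, as the thesis does, would additionally require applying the delayed measurement to the state the filter had at capture time.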
Abstract:
In the current economic situation, companies try to reduce their expenses. One solution is to improve the energy efficiency of their processes. It is known that pumping applications account for 20 to 50% of the energy usage in certain industrial plant operations, and some studies have shown that 30 to 50% of the energy consumed by pump systems could be saved by changing the pump or the flow control method. The aim of this thesis is to create a mobile measurement system that can calculate the operating point of a pump drive. This information can be used to determine the efficiency of the pump drive's operation and to develop a solution that brings the pump's efficiency as close as possible to its maximum, allowing a great reduction in the pump drive's life cycle cost. In the first part of the thesis, a brief introduction to the operation of pump drives is given, the methods that can be used in the project are presented, and the platforms available for implementing the project are reviewed. In the second part, the components of the project are presented, with a detailed description of each created component. Finally, the results of laboratory tests are presented, compared, and analyzed, the operation of the created system is assessed, and suggestions for future development are given.
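As a sketch of what calculating the operating point can mean in practice (not the system built in the thesis; the curve coefficients are invented), the operating point is the intersection of the pump's QH curve with the system curve:

```python
import numpy as np

# Pump curve from the datasheet: H = a - b * Q^2 (head in m, flow Q in l/s).
a, b = 40.0, 0.02
# System curve: H = H_static + k * Q^2.
H_static, k = 10.0, 0.01

# Intersection: a - b*Q^2 = H_static + k*Q^2  =>  Q = sqrt((a - H_static)/(b + k))
Q_op = np.sqrt((a - H_static) / (b + k))
H_op = a - b * Q_op**2
print(f"operating point: Q = {Q_op:.1f} l/s, H = {H_op:.1f} m")
```

With a variable-speed drive, the pump-curve coefficients would additionally be scaled with the affinity laws before solving for the intersection.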
Abstract:
The aim of this master's thesis is to develop an algorithm for calculating the cable network of the CHGRES heat and power station. The algorithm covers the key factors that influence cable network reliability, and, based on it, an economically and technically optimal solution for modernizing the cable system was obtained. The condition of the existing cable lines shows that replacement is necessary; otherwise faults would occur, costing the company not only money but also its reputation. As a solution, single-core XLPE cables are more profitable than the other cable types considered in this work. Moreover, the dependence of the short-circuit current on the number of 10/110 kV transformers connected in parallel between the main grid and the 10 kV busbar under consideration is presented, together with how it affects the final decision. Furthermore, the company's losses on the power (capacity) market due to a fault situation are presented; these losses are comparable to the investment required to replace the existing cable system.
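The transformer dependence can be illustrated with a back-of-the-envelope sketch (not the thesis's calculation; the grid strength and transformer data are assumptions) of how the short-circuit current at the 10 kV busbar grows as supply transformers are paralleled:

```python
import math

U_n = 10e3            # nominal busbar voltage (V)
c = 1.1               # voltage factor for maximum short-circuit current
S_grid = 1500e6       # short-circuit power of the feeding grid (VA)
S_T, u_k = 40e6, 0.11 # transformer rating (VA) and relative impedance

Z_grid = c * U_n**2 / S_grid        # grid impedance referred to 10 kV
Z_T = u_k * U_n**2 / S_T            # impedance of one transformer

for n in (1, 2, 3):                 # transformers in parallel
    Z_total = Z_grid + Z_T / n      # parallel units divide the impedance
    I_sc = c * U_n / (math.sqrt(3) * Z_total)
    print(f"{n} transformer(s): I_sc = {I_sc / 1e3:.1f} kA")
```

The higher fault current with more parallel units is what ties the transformer configuration to the cable choice, since the cables must withstand it.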
Abstract:
Mathematical models often contain parameters that need to be calibrated from measured data. The emergence of efficient Markov Chain Monte Carlo (MCMC) methods has made the Bayesian approach a standard tool in quantifying the uncertainty in the parameters. With MCMC, the parameter estimation problem can be solved in a fully statistical manner, and the whole distribution of the parameters can be explored, instead of obtaining point estimates and using, e.g., Gaussian approximations. In this thesis, MCMC methods are applied to parameter estimation problems in chemical reaction engineering, population ecology, and climate modeling. Motivated by the climate model experiments, the methods are developed further to make them more suitable for problems where the model is computationally intensive. After the parameters are estimated, one can start to use the model for various tasks. Two such tasks are studied in this thesis: optimal design of experiments, where the task is to design the next measurements so that the parameter uncertainty is minimized, and model-based optimization, where a model-based quantity, such as the product yield in a chemical reaction model, is optimized. In this thesis, novel ways to perform these tasks are developed, based on the output of MCMC parameter estimation. A separate topic is dynamical state estimation, where the task is to estimate the dynamically changing model state, instead of static parameters. For example, in numerical weather prediction, an estimate of the state of the atmosphere must constantly be updated based on the recently obtained measurements. In this thesis, a novel hybrid state estimation method is developed, which combines elements from deterministic and random sampling methods.
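As a small illustration of model-based optimization on top of MCMC output (not from the thesis; the yield model is invented and the posterior samples are Gaussian stand-ins for an actual chain), the model response can be averaged over the posterior samples at each candidate design point:

```python
import numpy as np

def yield_model(temperature, theta):
    """Hypothetical yield model: peak location/width set by parameters."""
    return np.exp(-theta[0] * (temperature - theta[1]) ** 2)

# Stand-in for an MCMC chain of (width, peak temperature) samples.
chain = np.random.default_rng(0).normal([0.01, 340.0], [0.002, 5.0], (5000, 2))
temperatures = np.linspace(300, 380, 81)

# Expected yield over the posterior, evaluated for each candidate design.
expected_yield = np.array(
    [yield_model(T, chain.T).mean() for T in temperatures])
best_T = temperatures[expected_yield.argmax()]
print(f"temperature maximizing expected yield: {best_T:.0f} K")
```

Optimizing the posterior average, rather than the model at a single point estimate, is what makes the design robust to the remaining parameter uncertainty.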
Abstract:
Machine learning provides tools for the automated construction of predictive models in data-intensive areas of engineering and science. The family of regularized kernel methods has in recent years become one of the mainstream approaches to machine learning, due to a number of advantages the methods share. The approach provides theoretically well-founded solutions to the problems of under- and overfitting, allows learning from structured data, and has been empirically demonstrated to yield high predictive performance on a wide range of application domains. Historically, the problems of classification and regression have received the majority of attention in the field. In this thesis we focus on another type of learning problem, that of learning to rank. In learning to rank, the aim is to learn, from a set of past observations, a ranking function that can order new objects according to how well they match some underlying criterion of goodness. As an important special case of the setting, we recover the bipartite ranking problem, corresponding to maximizing the area under the ROC curve (AUC) in binary classification. Ranking applications appear in a large variety of settings; examples encountered in this thesis include document retrieval in web search, recommender systems, information extraction, and automated parsing of natural language. We consider the pairwise approach to learning to rank, where ranking models are learned by minimizing the expected probability of ranking any two randomly drawn test examples incorrectly. The development of computationally efficient kernel methods based on this approach has in the past proven to be challenging. Moreover, it is not clear which techniques for estimating the predictive performance of learned models are the most reliable in the ranking setting, and how these techniques can be implemented efficiently. The contributions of this thesis are as follows. First, we develop RankRLS, a computationally efficient kernel method for learning to rank that is based on minimizing a regularized pairwise least-squares loss. In addition to training methods, we introduce a variety of algorithms for tasks such as model selection, multi-output learning, and cross-validation, based on computational shortcuts from matrix algebra. Second, we improve the fastest known training method for the linear version of the RankSVM algorithm, which is one of the most well-established methods for learning to rank. Third, we study the combination of the empirical kernel map and reduced set approximation, which allows the large-scale training of kernel machines using linear solvers, and propose computationally efficient solutions for cross-validation when using the approach. Next, we explore the problem of reliable cross-validation when using AUC as a performance criterion, through an extensive simulation study. We demonstrate that the proposed leave-pair-out cross-validation approach leads to more reliable performance estimation than commonly used alternative approaches. Finally, we present a case study on applying machine learning to information extraction from biomedical literature, which combines several of the approaches considered in the thesis. The thesis is divided into two parts: Part I provides the background for the research work and summarizes the most central results, while Part II consists of the five original research articles that are the main contribution of this thesis.
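To illustrate the leave-pair-out idea, here is a naive sketch (not the thesis's matrix-algebra shortcuts, which avoid refitting for every pair; a regularized linear least-squares scorer stands in for the kernel learner): each positive-negative pair is held out in turn, and the AUC estimate is the fraction of held-out pairs the refit model ranks correctly.

```python
import numpy as np
from itertools import product

def fit_linear(X, y, lam=1.0):
    """Regularized least-squares scorer (stand-in for the kernel learner)."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def leave_pair_out_auc(X, y, lam=1.0):
    """Fraction of held-out positive-negative pairs ranked correctly."""
    pos, neg = np.flatnonzero(y == 1), np.flatnonzero(y == 0)
    correct = 0
    for i, j in product(pos, neg):
        keep = np.setdiff1d(np.arange(len(y)), [i, j])
        w = fit_linear(X[keep], y[keep].astype(float), lam)
        correct += X[i] @ w > X[j] @ w   # positive scored above negative
    return correct / (len(pos) * len(neg))

rng = np.random.default_rng(0)
X = rng.standard_normal((40, 5))
y = (X[:, 0] + 0.5 * rng.standard_normal(40) > 0).astype(int)
print(leave_pair_out_auc(X, y))
```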
Abstract:
This study investigates futures market efficiency and optimal hedge ratio estimation. First, cointegration between spot and futures prices is studied using the Johansen method with two different model specifications. If the prices are found to be cointegrated, restrictions on the cointegrating vector and adjustment coefficients are imposed to test for unbiasedness, weak exogeneity, and the prediction hypothesis. Second, optimal hedge ratios are estimated using static OLS and the time-varying DVEC and CCC models, and in-sample and out-of-sample results for one, two, and five periods ahead are reported. The futures used in this thesis are the RTS index, the EUR/RUB exchange rate, and Brent oil, traded on the Futures and Options on RTS (FORTS) exchange. For the in-sample period, data were acquired from the start of trading of each futures contract (the RTS index from August 2005, the EUR/RUB exchange rate from March 2009, and Brent oil from October 2008) until the end of May 2011; the out-of-sample period covers June through December 2011. Our results indicate that all three spot-futures pairs are cointegrated. We found RTS index futures to be an unbiased predictor of the spot price, with mixed evidence for the exchange rate; for Brent oil futures, unbiasedness was not supported. Weak exogeneity results for all pairs indicated that the spot price leads the price discovery process. The prediction hypothesis, that is, unbiasedness together with weak exogeneity of futures, was rejected for all asset pairs. Variance reduction results varied between assets, in the range of 40-85 percent in-sample and 40-96 percent out-of-sample. Differences between the models were small, except for Brent oil, where OLS clearly dominated. Out-of-sample results indicated exceptionally high variance reduction for the RTS index, approximately 95 percent.
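For the static case, the OLS hedge ratio and its in-sample variance reduction reduce to a few lines (a sketch with synthetic return series, not the thesis's data):

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.standard_normal(500) * 0.02            # futures returns
s = 0.9 * f + rng.standard_normal(500) * 0.01  # correlated spot returns

# OLS hedge ratio: h* = Cov(spot, futures) / Var(futures)
h = np.cov(s, f)[0, 1] / np.var(f, ddof=1)

hedged = s - h * f                             # hedged portfolio returns
reduction = 1 - np.var(hedged, ddof=1) / np.var(s, ddof=1)
print(f"h* = {h:.2f}, variance reduction = {reduction:.0%}")
```

The DVEC and CCC models generalize this by letting the covariance and variance, and hence the ratio, vary over time.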
Abstract:
The target company of this study is a large machinery company which is, inter alia, engaged in the energy and pulp engineering, procurement, and construction management (EPCM) supply business. The main objective of this study was to develop the target company's cost estimation by providing more accurate, reliable, and up-to-date information through the enterprise resource planning (ERP) system. Another objective was to find cost-effective methods of collecting total cost of ownership information to support more informed supplier selection decisions. This study is primarily action-oriented, but also constructive, and it can be divided into two sections: a theoretical literature review and an empirical study of the abovementioned part of the target company's business. In addition to the literature review, the development of information collection is based on nearly 30 qualitative interviews with employees at various organizational units, functions, and levels of the target company. At the core of the development was making the initial data more accurate, reliable, and available, a necessary prerequisite for informed use of the information. Development suggestions and paths were presented for regaining confidence in the ERP system as an information source, by reorganizing the work breakdown structure and by complementing cost information with quantitative, technical, and scope information. Several methods for using the information more effectively were also discussed. While implementing the development suggestions was beyond the scope of this study, the work was carried forward in a test environment and among interest groups.
Abstract:
Bone strain plays a major role as the activation signal for the bone (re)modeling process, which is vital for keeping bones healthy. Maintaining high bone mineral density reduces the chance of fracture in the event of an accident. Numerous studies have shown that bones can be strengthened with physical exercise, and several hypotheses assert that dynamic exercise has a stronger osteogenic (bone-producing) effect than static exercise. These previous studies are based on short-term empirical research, which motivates backing the experimental results with a solid mathematical foundation. The computer simulation techniques utilized in this work allow non-invasive bone strain estimation during physical activity at any bone site within the human skeleton. All models presented in the study are three-dimensional and actuated by muscle models to replicate real conditions accurately. The objective of this work is to determine and present loading-induced bone strain values resulting from physical activity. It includes a comparison of the strain resulting from four different gym exercises (knee flexion, knee extension, leg press, and squat) and walking with the results reported for walking and jogging obtained from in-vivo measurements described in the literature. The objective is realized primarily by carrying out flexible multibody dynamics computer simulations. The dissertation combines knowledge of finite element analysis and multibody simulation with experimental data and information available in the medical literature. Measured subject-specific motion data were coupled with forward dynamics simulation to produce natural skeletal movement. Bone geometries were defined using a reverse engineering approach based on medical imaging techniques; both computed tomography and magnetic resonance imaging were utilized to explore modeling differences. The predicted tibia bone strains during walking show good agreement with in-vivo studies found in the literature. Strain measurements were not available for the gym exercises, so those strain results could not be validated; however, the values seem reasonable when compared to the available in-vivo strain measurements for walking and running. The results can be used in the design of exercise equipment aimed at strengthening the bones as well as the muscles during a workout. Clinical applications in post-fracture recovery exercise programs could also be targeted. In addition, the methodology introduced in this study can be applied to investigate the effect of weightlessness on astronauts, who often suffer bone loss after long periods spent in space.