989 results for Calculation-based
Abstract:
The aim of this work was to describe and implement a calculation method for sawing-batch-specific profitability at a sawmill, and to build a calculation model to support the method. After the basic concepts of sawing, the thesis presents the sawmill's production process, described on the basis of literature and expert interviews. Next, the benefits and effects expected of the calculation method were surveyed. The theory of cost accounting was examined from literature sources with this particular calculation method in mind. In addition, the calculation and information systems used at the Uimaharju sawmill and relevant to the calculation were presented. At present, the sawmill has no method for calculating batch-specific results. With small changes to the sawmill's information system and process machinery, a sawing batch can be carried through the process so that production data can be assigned to it at every stage. Using the data obtained from the different stages, the products the batch yielded and the production resources consumed in making them can be determined accurately. Production data and cost data are fed into the calculation model, which returns the financial result of the sawing batch. As a follow-up action, further research into the automatic collection of production data is proposed, in order to eliminate manual work and errors. With relatively small investments, the production data for every sawing batch can be collected fully automatically. In addition, the calculation model developed in this work should be replaced by an application that makes better use of the existing information systems and removes the manual work phase from the calculation.
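As an illustration of the kind of model described, the sketch below nets batch revenue against resource costs; the cost categories, rates and function names are invented for illustration, not the sawmill's actual model.

```python
# Illustrative sketch only: batch profitability as product revenue minus
# allocated processing costs. All cost categories and rates are hypothetical.

def batch_result(products, resources, unit_costs):
    """products: list of (volume_m3, price_per_m3) produced by the batch.
    resources: dict of resource -> amount consumed by the batch.
    unit_costs: dict of resource -> cost per unit consumed."""
    revenue = sum(volume * price for volume, price in products)
    cost = sum(amount * unit_costs[res] for res, amount in resources.items())
    return revenue - cost

# Example: one sawing batch (sawn goods plus by-products)
products = [(120.0, 210.0), (35.0, 95.0)]
resources = {"sawing_h": 6.5, "drying_h": 40.0, "raw_logs_m3": 260.0}
unit_costs = {"sawing_h": 450.0, "drying_h": 35.0, "raw_logs_m3": 55.0}
print(f"Batch result: {batch_result(products, resources, unit_costs):.2f} EUR")
```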
Abstract:
Over the past decade, high-speed motor technology has been applied increasingly often in the medium and large power range. In particular, applications involving gas movement and compression appear to be the most important area in which high-speed machines are used. Manufacturing the induction motor rotor core from a single piece of steel makes an extremely rigid rotor construction possible for the high-speed motor. In a mechanical sense, the solid rotor may be the best possible rotor construction. Unfortunately, the electromagnetic properties of a solid rotor are poorer than those of the traditional laminated rotor of an induction motor. This thesis analyses methods for improving the electromagnetic properties of a solid-rotor induction machine. The slip of the solid rotor is reduced notably if the rotor is axially slitted. The slitting patterns of the solid rotor are examined, and it is shown how the slitting parameters affect the produced torque. Methods for decreasing the harmonic eddy currents on the surface of the rotor are also examined; the motivation is to improve the efficiency of the motor towards the efficiency standard of a laminated-rotor induction motor. These research tasks are carried out using finite element analysis. An analytical calculation method for solid rotors, based on the multi-layer transfer-matrix method, is developed especially for the calculation of axially slitted solid rotors equipped with well-conducting end rings. The calculation results are verified by finite element analysis and laboratory measurements. Prototype motors of 250–300 kW at 140 Hz were tested to verify the results. Utilization factor data are given for several other prototypes, the largest of which delivers 1000 kW at 12000 min⁻¹.
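Since rotor slip is central to the discussion, a minimal sketch of the slip calculation for an induction machine follows; the 140 Hz supply matches the prototypes above, while the pole-pair count and measured rotor speed are assumed values for illustration.

```python
# Minimal sketch: per-unit slip of an induction machine.
# n_s = 60 * f / p (rpm), s = (n_s - n) / n_s.

def synchronous_speed_rpm(f_supply_hz: float, pole_pairs: int) -> float:
    return 60.0 * f_supply_hz / pole_pairs

def slip(n_sync_rpm: float, n_rotor_rpm: float) -> float:
    return (n_sync_rpm - n_rotor_rpm) / n_sync_rpm

n_s = synchronous_speed_rpm(140.0, pole_pairs=1)   # 8400 rpm for a 2-pole machine
print(f"slip = {slip(n_s, 8300.0):.4f}")           # assumed rotor speed 8300 rpm
```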
Abstract:
This thesis analyses the calculation performed by the FanSave and PumpSave energy saving tools. With these programs, the energy consumption of variable speed drive control of fans and pumps can be compared with other control methods. FanSave handles centrifugal and axial fans, while PumpSave deals with centrifugal pumps. The programs can also be used to choose a suitable frequency converter from the ABB range. As initial values, the programs need information about the equipment, such as the flow rate and efficiencies. Operating time is an important factor in calculating annual energy consumption; the required inputs are its length and profile. The basic theory of fans and pumps is introduced, without detailed dimensioning instructions. FanSave and PumpSave cover various flow control methods, which are introduced in the thesis in terms of their operating principles and suitability. The squirrel cage motor and the frequency converter are also introduced because of their close connection to fan and pump drives. The second part of the thesis compares the results calculated by FanSave and PumpSave with calculations based on performance curves. Laboratory tests were also made with a centrifugal fan, an axial fan and a centrifugal pump. With the results of this thesis, the calculation in these programs can be adjusted to be more accurate, and some new features can be added.
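The comparison between variable speed drive control and other control methods rests on the affinity laws, under which shaft power scales roughly with the cube of the flow ratio. The sketch below shows that estimate for an assumed duty profile; it is not FanSave's or PumpSave's internal calculation, and the nominal data are placeholders.

```python
# Sketch of the affinity-law energy estimate behind VSD comparisons.
# Power scales ~ (flow ratio)**3; duty profile and nominal power are examples.

P_NOMINAL_KW = 75.0            # assumed shaft power at 100 % flow
HOURS_PER_YEAR = 8760.0

# Duty profile: fraction of nominal flow -> fraction of operating time
duty_profile = {1.0: 0.2, 0.8: 0.4, 0.6: 0.3, 0.4: 0.1}

def annual_energy_vsd_kwh(p_nom_kw, profile, hours=HOURS_PER_YEAR):
    return sum(p_nom_kw * q**3 * share * hours for q, share in profile.items())

print(f"VSD: {annual_energy_vsd_kwh(P_NOMINAL_KW, duty_profile):,.0f} kWh/a")
```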
Abstract:
In this work we present the formulas for the calculation of exact three-center electron sharing indices (3c-ESI) and introduce two new approximate expressions for correlated wave functions. The 3c-ESI uses the third-order density, the diagonal of the third-order reduced density matrix, but the approximations suggested in this work only involve natural orbitals and occupancies. In addition, the first calculations of 3c-ESI using Valdemoro's, Nakatsuji's and Mazziotti's approximations for the third-order reduced density matrix are also presented for comparison. Our results on a test set of molecules, including 32 3c-ESI values, prove that the new approximation based on the cubic root of natural occupancies performs best, yielding absolute errors below 0.07 and an average absolute error of 0.015. Furthermore, this approximation seems to be rather insensitive to the amount of electron correlation present in the system. This newly developed methodology provides a computationally inexpensive method to calculate 3c-ESI from correlated wave functions and opens new avenues to approximate high-order reduced density matrices in other contexts, such as the contracted Schrödinger equation and the anti-Hermitian contracted Schrödinger equation.
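A heavily hedged sketch of what a cubic-root natural-occupancy approximation can look like: each occupancy enters as n^(1/3) alongside atomic overlap matrices in the natural-orbital basis. The exact index convention and normalisation of the published 3c-ESI may well differ from this guess; the matrices below are synthetic, only there to make the snippet run.

```python
import numpy as np

# Hedged sketch: approximate a three-center index from natural occupancies
# n_occ and atomic overlap matrices S_A, S_B, S_C (natural-orbital basis),
# replacing each third-order occupancy by its cubic root. The published
# formula's convention and prefactors may differ.

def esi_3c_approx(n_occ, S_A, S_B, S_C):
    W = np.diag(np.cbrt(n_occ))          # cubic root of each occupancy
    return np.trace(W @ S_A @ W @ S_B @ W @ S_C)

# Tiny synthetic example (2 orbitals, made-up overlaps):
n = np.array([1.9, 0.1])
S = np.array([[0.5, 0.1], [0.1, 0.5]])
print(esi_3c_approx(n, S, S, S))
```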
Abstract:
The aim of this project is to produce application software, based on Matlab, to calculate the radioelectrical coverage by surface wave of broadcast radio stations in the Medium Wave (MW) band anywhere in the world. In addition, given the locations of a transmitting and a receiving station, the software should be able to calculate the electric field strength that the receiver should observe at that specific site. In the case of several transmitters, the program should check for the existence of inter-symbol interference and calculate the field strength accordingly. The application should ask for the configuration parameters of the transmitting station through a Graphical User Interface (GUI) and display the resulting coverage on a map of the area under study. For the development of this project, several conductivity databases of different countries have been used, together with a high-resolution elevation database (GLOBE). To calculate the field strength due to ground wave propagation, the ITU GRWAVE program has been used, integrated into a Matlab interface so that it can be called by the application.
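A hedged sketch of the multi-transmitter logic described above: combining field strengths and flagging possible inter-symbol interference from path-delay differences. The root-sum-square combination and the delay threshold are assumptions made for illustration; they are not taken from the actual application or from ITU GRWAVE.

```python
import math

C = 3.0e8          # propagation speed, m/s
GUARD_S = 1.0e-3   # assumed tolerable delay spread between signals, seconds

def combine_fields_uVm(fields):
    # Root-sum-square combination of co-channel field strengths (assumption).
    return math.sqrt(sum(e * e for e in fields))

def isi_flag(distances_m):
    # Flag ISI when the spread of path delays exceeds the guard threshold.
    delays = [d / C for d in distances_m]
    return (max(delays) - min(delays)) > GUARD_S

fields = [350.0, 120.0]                       # example field strengths, uV/m
print(combine_fields_uVm(fields), isi_flag([80e3, 420e3]))
```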
Abstract:
The purpose of this thesis was to define how product carbon footprint analysis and its results can be used in a company's internal development as well as in customer and interest group guidance, and how these factors are related to corporate social responsibility. A cradle-to-gate carbon footprint was calculated for three products: Torino Whole grain barley, Torino Pearl barley, and Elovena Barley grit & oat bran, all made of Finnish barley. The carbon footprint of the Elovena product was used to determine the carbon footprints of porridge portions cooked in an industrial kitchen. The basic calculation data was collected from several sources. Most of the data originated from Raisio Group's contractual farmers and Raisio Group's cultivation, processing and packaging specialists. Data from national and European literature and database sources was also used. The electricity consumption for the porridge portions' carbon footprint calculations was determined with practical measurements. The carbon footprint calculations were conducted according to the ISO 14044 standard, and the PAS 2050 guide was also applied. A consequential functional unit was applied in the porridge portions' carbon footprint calculations. Most of the emissions from the barley products' life cycle originate from primary production. The nitrous oxide emissions from cultivated soil and the use and production of nitrogenous fertilisers contribute over 50% of the products' carbon footprint. Torino Pearl barley has the highest carbon footprint due to the lowest processing yield. Reductions in the products' carbon footprint can be achieved through developments in cultivation and grain processing. The carbon footprint of a porridge portion can be reduced by using domestically produced plant-based ingredients and by making the best possible use of the kettle. Carbon footprint calculation can be used to identify possible improvement points related to corporate environmental responsibility. Several improvement actions are also related to economic and social responsibility through better raw material utilization and expense reductions.
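As a hedged illustration of the structure of such a cradle-to-gate calculation (activity data multiplied by emission factors, then summed), consider the sketch below; all quantities are invented placeholders, not Raisio Group's data.

```python
# Sketch of an activity-data x emission-factor footprint sum.
# All values below are illustrative placeholders.

activities = {                    # amount per kg of product
    "N_fertiliser_kg": 0.025,
    "field_N2O_kg": 0.0009,
    "grain_drying_kwh": 0.12,
    "milling_kwh": 0.08,
}
emission_factors = {              # kg CO2e per unit of activity
    "N_fertiliser_kg": 3.6,
    "field_N2O_kg": 298.0,        # GWP100 of nitrous oxide
    "grain_drying_kwh": 0.25,
    "milling_kwh": 0.25,
}

footprint = sum(a * emission_factors[k] for k, a in activities.items())
print(f"{footprint:.3f} kg CO2e per kg of product")
```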
Abstract:
Efficient design and operation of water and wastewater treatment systems are largely based on mathematical calculations. This even applies to training in the treatment systems. Therefore, it is necessary that calculation procedures are developed and computerised a priori for such applications to ensure effectiveness. This work was aimed at developing calculation procedures for the gas stripping, depth filtration, ion exchange, chemical precipitation, and ozonation wastewater treatment technologies, in order to include them in ED-WAVE, a portable computer-based tool used in design, operations and training in wastewater treatment. The work involved a comprehensive online and offline study of research work and literature, and the application of practical case studies, to generate ED-WAVE-compatible representations of the treatment technologies, which were then uploaded into the tool.
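As a small example of the kind of procedure computerised here, the sketch below evaluates the stripping factor used in air stripping design, R = H·Qg/Ql with a dimensionless Henry's constant; the values are illustrative, and the actual ED-WAVE procedures are far more extensive.

```python
# Sketch of one design quantity from gas (air) stripping:
# stripping factor R = H * (Qg / Ql), dimensionless Henry's constant H.

def stripping_factor(H_dimensionless, q_gas_m3s, q_liquid_m3s):
    return H_dimensionless * q_gas_m3s / q_liquid_m3s

# Illustrative values only (H ~ 0.2-0.4 is typical for volatile organics)
R = stripping_factor(0.24, q_gas_m3s=0.5, q_liquid_m3s=0.02)
print(f"stripping factor R = {R:.1f}  (R > 1 needed for effective stripping)")
```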
Abstract:
This paper presents software, developed in the Delphi programming language, to compute a reservoir's annual regulated active storage based on the sequent-peak algorithm. Mathematical models used for that purpose generally require extended hydrological series, and the analysis of those series is usually performed with spreadsheets or graphical representations. On that basis, software for calculating reservoir active capacity was developed. An example calculation is shown using 30 years (1977 to 2009) of monthly mean flow records from the Corrente River, in the São Francisco River Basin, Brazil. As an additional tool, an interface was developed to support water resources management, helping to manipulate the data and to highlight information of interest to the user. Moreover, with that interface, irrigation districts where water consumption is higher can be analyzed as a function of specific seasonal water demand situations. Practical application shows that the program performs the calculation originally proposed. It was designed to keep information organized and retrievable at any time, and to simulate seasonal water demands throughout the year, contributing to studies concerning reservoir projects. With this functionality, the program is an important tool for decision making in water resources management.
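The sequent-peak algorithm named above is compact enough to sketch directly: the cumulative deficit K_t = max(0, K_{t-1} + demand_t − inflow_t), and the required active storage is max(K_t). The inflow and demand series below are invented examples, not the Corrente River data.

```python
# Compact sketch of the sequent-peak algorithm.

def sequent_peak(inflows, demands):
    k, k_max = 0.0, 0.0
    for q_in, d in zip(inflows, demands):
        k = max(0.0, k + d - q_in)       # deficit grows when demand > inflow
        k_max = max(k_max, k)
    return k_max                          # required active storage

inflows = [42, 38, 30, 18, 9, 6, 5, 7, 12, 20, 31, 40]   # hm3/month, example
demands = [20] * 12                                       # constant example demand
print(f"required active storage: {sequent_peak(inflows, demands)} hm3")
```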
Abstract:
The use of intensity-modulated radiotherapy (IMRT) has increased extensively in modern radiotherapy (RT) treatments over the past two decades. Radiation dose distributions can be delivered with higher conformality with IMRT than with conventional 3D-conformal radiotherapy (3D-CRT). Higher conformality and target coverage increase the probability of tumour control and decrease normal tissue complications. The primary goal of this work is to improve and evaluate the accuracy, efficiency and delivery techniques of RT treatments using IMRT. This study evaluated the dosimetric limitations and possibilities of IMRT in small volumes (treatments of head-and-neck, prostate and lung cancer) and large volumes (primitive neuroectodermal tumours). The dose coverage of target volumes and the sparing of critical organs were increased with IMRT compared to 3D-CRT. The developed split-field IMRT technique was found to be a safe and accurate method in craniospinal irradiations. By using IMRT for simultaneous integrated boosting of biologically defined target volumes of localized prostate cancer, high doses were achievable with only a small increase in treatment complexity. Biological plan optimization increased the probability of uncomplicated control on average by 28% compared to standard IMRT delivery. Unfortunately, IMRT also has some drawbacks. In IMRT, the beam modulation is realized by splitting a large radiation field into small apertures; the smaller the beam apertures, the larger the rebuild-up and rebuild-down effects at tissue interfaces. The limitations of using IMRT with small apertures in the treatment of small lung tumours were investigated with dosimetric film measurements. The results confirmed that the peripheral doses of small lung tumours decreased as the effective field size was decreased. The studied calculation algorithms were not able to model this dose deficiency accurately. The use of small sliding-window apertures of 2 mm and 4 mm decreased the tumour peripheral dose by 6% compared to a 3D-CRT treatment plan. A direct aperture based optimization (DABO) technique was examined as a solution for decreasing treatment complexity. The DABO IMRT technique achieved treatment plans equivalent to those of conventional fluence-based IMRT optimization techniques in concave head-and-neck target volumes. With DABO, the effective field sizes were increased and the number of MUs was reduced by a factor of two. The optimality of a treatment plan and the therapeutic ratio can be further enhanced by using dose painting based on regional radiosensitivities imaged with functional imaging methods.
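The "probability of uncomplicated control" mentioned above is commonly defined as P+ = TCP × (1 − NTCP). The sketch below assumes that definition; the TCP and NTCP numbers are placeholders, not values from the study, whose exact biological models are not given in the abstract.

```python
# Hedged sketch: one common definition of the probability of
# uncomplicated control, P+ = TCP * (1 - NTCP). Numbers are placeholders.

def p_uncomplicated(tcp: float, ntcp: float) -> float:
    return tcp * (1.0 - ntcp)

plan_std = p_uncomplicated(tcp=0.70, ntcp=0.10)
plan_bio = p_uncomplicated(tcp=0.85, ntcp=0.05)
print(f"standard IMRT P+ = {plan_std:.2f}, biologically optimised P+ = {plan_bio:.2f}")
```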
Abstract:
Modern machine structures are often fabricated by welding. From a fatigue point of view, structural details, and especially welded details, are the most prone to fatigue damage and failure. Design against fatigue requires information on the fatigue resistance of a structure's critical details and on the stress loads that act on each detail. Even though dynamic simulation of flexible bodies is already an established method for analyzing structures, obtaining the stress history of a structural detail during dynamic simulation is a challenging task, especially when the detail has a complex geometry. In particular, analyzing the stress history of every structural detail within a single finite element model can be overwhelming, since the number of nodal degrees of freedom needed in the model may require an impractical amount of computational effort. The purpose of computer simulation is to reduce the number of prototypes and speed up the product development process. Also, to take operator influence into account, real-time models, i.e. simplified and computationally efficient models, are required. This in turn requires stress computation to be efficient if it is to be performed during dynamic simulation. The research revisits the theoretical background of multibody dynamic simulation and the finite element method to find suitable parts for a new approach to efficient stress calculation. This study proposes that the problem of stress calculation during dynamic simulation can be greatly simplified by combining the floating frame of reference formulation with modal superposition and a sub-modeling approach. In practice, the proposed approach can be used to efficiently generate the fatigue assessment stress history of a structural detail during or after dynamic simulation. Numerical examples are presented to demonstrate the proposed approach in practice. The results show that the approach is applicable and can be used as proposed.
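The combination of the floating frame of reference formulation, modal superposition and sub-modeling implies that the stress history at a detail can be recovered as a weighted sum of precomputed modal stresses, sigma(t) = sum_i q_i(t) * sigma_i. A minimal sketch of that superposition step follows, with random placeholder data rather than results from the thesis.

```python
import numpy as np

# Sketch of modal stress recovery: stress history at a detail as a
# weighted sum of precomputed modal stresses. Placeholder data only.

n_modes, n_steps = 4, 1000
rng = np.random.default_rng(0)
modal_stress = rng.normal(size=n_modes)     # stress of each mode at the detail (MPa)
q = rng.normal(size=(n_steps, n_modes))     # modal coordinates from the simulation

stress_history = q @ modal_stress           # one stress value per time step
print(stress_history.shape, f"peak {stress_history.max():.1f} MPa")
```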
Virtual Testing of Active Magnetic Bearing Systems based on Design Guidelines given by the Standards
Abstract:
Active magnetic bearings offer many advantages that have brought new applications to industry. However, as with all new technology, active magnetic bearings also have downsides, one of which is the low level of standardization. This thesis mainly studies the ISO 14839 standard, and more specifically its system verification methods. These verification methods are applied in a practical test on an existing active magnetic bearing system. The system is simulated in Matlab using a rotor-bearing dynamics toolbox, but this study does not include the exact simulation code or a direct algebraic calculation. The study does, however, provide proof that the standardized verification methods can be applied to practical problems.
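One verification step of this kind evaluates the peak of the sensitivity function and classifies it into zones. In the sketch below, the zone limits (about 9.5, 12 and 14 dB) and zone descriptions are the commonly quoted ISO 14839-3 values; treat them as assumptions to be checked against the standard, and the frequency-response data is a placeholder.

```python
import numpy as np

# Hedged sketch of a sensitivity-peak zone check in the style of
# ISO 14839-3. Limits and labels are commonly quoted values (verify
# against the standard before use).

def sensitivity_zone(peak_db: float) -> str:
    if peak_db < 9.5:  return "A (newly commissioned)"
    if peak_db < 12.0: return "B (unrestricted long-term operation)"
    if peak_db < 14.0: return "C (restricted operation)"
    return "D (not acceptable)"

# Example: peak of |S(jw)| from a simulated frequency response (placeholder)
S_mag = np.array([0.8, 1.4, 2.6, 1.9, 1.1])
peak_db = 20.0 * np.log10(S_mag.max())
print(f"peak sensitivity {peak_db:.1f} dB -> zone {sensitivity_zone(peak_db)}")
```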
Abstract:
The objective of the thesis is to create a value based pricing model for marine engines and to study the feasibility of implementing such a model in the sales organization of a specific segment of the case company's marine division. Different pricing strategies, the concept of "value", and how perceptions of value can be influenced through value based marketing are presented as the theoretical background for the value based pricing model. Forbis and Mehta's Economic Value to Customer (EVC) was selected as the framework for the value based pricing model for marine engines. The EVC model is based on calculating and comparing the life-cycle costs of the reference product and competing products, thus showing the quantifiable value of the company's own product compared to the competition. In the applied part of the thesis, the components of the EVC model are identified for a marine diesel engine, the components are explained, and an example calculation created in Excel is presented. When examining the possibilities of implementing in practice a value based pricing strategy based on the EVC model, it was found that the lack of precise information on competing products is the single biggest obstacle to using EVC exactly as presented in the literature. It was also found that the necessary communication channels are sometimes missing, and that some clients and product end-users are simply not interested in spending time studying the life-cycle costs of the product. Information on the company's own products is, however, sufficient, and the sales force is able to communicate with sufficiently high executive levels in the client organizations. It is therefore suggested to focus on quantifying and communicating the company's own value proposition. The dynamic nature of the business environment (variance in the applications in which engines are installed, different clients, competition, end-clients, etc.) also means that a separate EVC calculation should be created for each project. This is demanding in terms of resources, so it is suggested to concentrate on selected projects and buyers, and on clients where the necessary communication channels to the right levels in the customer organization are available. Finally, it should be highlighted that, as the literature suggests, implementing a value based pricing strategy is not possible unless the whole business approach is value based.
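A minimal sketch of the EVC comparison described above: the economically justified price ceiling for the focal product is the reference product's total life-cycle cost minus the focal product's non-price life-cycle costs. All figures are invented placeholders, not engine data from the case company.

```python
# Sketch of an Economic Value to Customer (EVC) comparison.
# All monetary figures are invented placeholders.

def evc(ref_price, ref_operating_costs, own_operating_costs):
    """Maximum economically justified price for the focal product."""
    ref_lifecycle = ref_price + ref_operating_costs
    return ref_lifecycle - own_operating_costs

# Example over an assumed engine lifetime (EUR):
price_ceiling = evc(ref_price=2_000_000,
                    ref_operating_costs=9_500_000,   # fuel, maintenance, downtime
                    own_operating_costs=8_800_000)
print(f"EVC (price ceiling): {price_ceiling:,.0f} EUR")
```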
Abstract:
Software is a key component in many of the devices and products that we use every day. Most customers demand not only that their devices function as expected but also that the software be of high quality: reliable, fault tolerant, efficient, and so on. In short, it is not enough that a calculator gives the correct result of a calculation; we want the result instantly, in the right form, with minimal use of battery, etc. One of the key aspects of succeeding in today's industry is delivering high quality. In most software development projects, high-quality software is achieved by rigorous testing and good quality assurance practices. However, today, customers are asking for these high quality software products at an ever-increasing pace, which leaves companies with less time for development. Software testing is an expensive activity, because it requires much manual work. Testing, debugging, and verification are estimated to consume 50 to 75 per cent of the total development cost of complex software projects. Further, the most expensive software defects are those which have to be fixed after the product is released. One of the main challenges in software development is reducing the cost and time of software testing without sacrificing the quality of the developed software. It is often not enough to demonstrate that a piece of software is functioning correctly; many other aspects of the software, such as performance, security, scalability, and usability, also need to be verified. Testing these aspects of the software is traditionally referred to as non-functional testing. One of the major challenges with non-functional testing is that it is usually carried out at the end of the software development process, when most of the functionality is implemented. This is due to the fact that non-functional aspects, such as performance or security, apply to the software as a whole. In this thesis, we study the use of model-based testing. We present approaches to automatically generate tests from behavioral models for solving some of these challenges, and we show that model-based testing is applicable not only to functional testing but also to non-functional testing. In its simplest form, performance testing is performed by executing multiple test sequences at once while observing the software in terms of responsiveness and stability rather than output. The main contribution of the thesis is a coherent model-based testing approach for testing functional and performance related issues in software systems. We show how we go from system models, expressed in the Unified Modeling Language, to test cases and back to models again. The system requirements are traced throughout the entire testing process; requirements traceability facilitates finding faults in the design and implementation of the software. In the research field of model-based testing, many newly proposed approaches suffer from poor tool support or the lack of it. Therefore, the second contribution of this thesis is proper tool support for the proposed approach, integrated with leading industry tools. We offer independent tools, tools that are integrated with other industry leading tools, and complete tool-chains when necessary. Many model-based testing approaches proposed by the research community suffer from poor empirical validation in an industrial context. In order to demonstrate the applicability of our proposed approach, we apply our research to several systems, including industrial ones.
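To make the idea of generating tests from behavioral models concrete, here is a stripped-down sketch using a toy state machine and transition coverage as the test selection criterion. The thesis itself works from UML models with industrial tool support; this only illustrates the principle, and the model is invented.

```python
from collections import deque

# Illustrative sketch: derive test sequences from a behavioural model
# (a toy state machine), aiming at transition coverage.

transitions = {                     # state -> {event: next_state}
    "Idle":    {"start": "Running"},
    "Running": {"pause": "Paused", "stop": "Idle"},
    "Paused":  {"resume": "Running", "stop": "Idle"},
}

def tests_for_transition_coverage(initial="Idle"):
    tests = []
    for src, events in transitions.items():
        for event in events:
            # BFS for a path from the initial state to src, then append event
            queue, seen = deque([(initial, [])]), {initial}
            while queue:
                state, path = queue.popleft()
                if state == src:
                    tests.append(path + [event])
                    break
                for ev, nxt in transitions[state].items():
                    if nxt not in seen:
                        seen.add(nxt)
                        queue.append((nxt, path + [ev]))
    return tests

for t in tests_for_transition_coverage():
    print(" -> ".join(t))
```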
Abstract:
This note develops general model-free adjustment procedures for the calculation of unbiased volatility loss functions based on practically feasible realized volatility benchmarks. The procedures, which exploit the recent asymptotic distributional results in Barndorff-Nielsen and Shephard (2002a), are both easy to implement and highly accurate in empirically realistic situations. On properly accounting for the measurement errors in the volatility forecast evaluations reported in Andersen, Bollerslev, Diebold and Labys (2003), the adjustments result in markedly higher estimates for the true degree of return-volatility predictability.
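A hedged sketch of the measurement-error logic behind such adjustments: by the Barndorff-Nielsen and Shephard (2002a) asymptotics, realized volatility estimates the integrated variance with error variance of roughly 2·IQ/n, and IQ can be estimated by the realized quarticity RQ = (n/3)·Σr⁴. Subtracting the estimated error variance from a squared-error loss then removes the benchmark-noise bias. This is a simplification of the note's procedures; the data and forecast below are synthetic.

```python
import numpy as np

# Hedged sketch: adjust a squared-error volatility loss for the sampling
# error of the realized-volatility benchmark, Var(RV - IV) ~ 2*IQ/n with
# IQ estimated by realized quarticity RQ = (n/3) * sum(r**4).

def rv_and_error_var(intraday_returns):
    r = np.asarray(intraday_returns)
    n = r.size
    rv = np.sum(r**2)                    # realized variance
    rq = (n / 3.0) * np.sum(r**4)        # realized quarticity
    return rv, 2.0 * rq / n              # RV and estimated error variance

rng = np.random.default_rng(1)
r = rng.normal(scale=0.001, size=288)    # synthetic 5-minute returns
forecast = 2.8e-4                        # assumed variance forecast
rv, err_var = rv_and_error_var(r)
raw_loss = (forecast - rv) ** 2
adjusted_loss = raw_loss - err_var       # approximately unbiased for (forecast - IV)^2
print(rv, err_var, raw_loss, adjusted_loss)
```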
Abstract:
The mathematical formulation of empirically developed formulas for the calculation of the resonant frequency of a thick-substrate (h ≥ 0.0815λ) microstrip antenna has been analyzed. With the use of tunnel-based artificial neural networks (ANNs), the resonant frequencies of antennas with h satisfying the thick-substrate condition are calculated and compared with existing experimental results, and also with simulation results obtained using the IE3D software package. The artificial neural network results are in very good agreement with the experimental results.
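For context, a sketch of the classical closed-form estimate that ANN results for patch antennas are usually compared against: the TM10 resonant frequency of a rectangular patch with Hammerstad-type effective permittivity and length extension. This is the thin-substrate textbook model, not the tunnel-based ANN of the paper, and the patch dimensions are arbitrary examples.

```python
import math

# Textbook estimate of the TM10 resonant frequency of a rectangular
# microstrip patch: f_r = c / (2 * L_eff * sqrt(eps_eff)).

C = 2.998e8   # speed of light, m/s

def resonant_frequency_hz(W, L, h, eps_r):
    # Hammerstad effective permittivity and fringing-length extension
    eps_eff = (eps_r + 1) / 2 + (eps_r - 1) / 2 * (1 + 12 * h / W) ** -0.5
    dL = 0.412 * h * ((eps_eff + 0.3) * (W / h + 0.264)
                      / ((eps_eff - 0.258) * (W / h + 0.8)))
    return C / (2 * (L + 2 * dL) * math.sqrt(eps_eff))

# Example: 40 mm x 30 mm patch on a 3 mm substrate with eps_r = 2.33
print(f"{resonant_frequency_hz(0.04, 0.03, 0.003, 2.33) / 1e9:.2f} GHz")
```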