152 results for Collocation methods
Abstract:
To obtain the desired accuracy of a robot, there are two techniques available. The first option would be to make the robot match the nominal mathematical model. In other words, the manufacturing and assembly tolerances of every part would be extremely tight so that all of the various parameters would match the “design” or “nominal” values as closely as possible. This method can satisfy most accuracy requirements, but the cost increases dramatically as the accuracy requirement increases. Alternatively, a more cost-effective solution is to build a manipulator with relaxed manufacturing and assembly tolerances. By modifying the mathematical model in the controller, the actual errors of the robot can be compensated for. This is the essence of robot calibration. Simply put, robot calibration is the process of defining an appropriate error model and then identifying the various parameter errors that make the error model match the robot as closely as possible. This work focuses on the kinematic calibration of a 10 degree-of-freedom (DOF) redundant serial-parallel hybrid robot. The robot consists of a 4-DOF serial mechanism and a 6-DOF hexapod parallel manipulator. The redundant 4-DOF serial structure is used to enlarge the workspace, and the 6-DOF hexapod manipulator provides high load capability and stiffness for the whole structure. The main objective of the study is to develop a suitable calibration method to improve the accuracy of the redundant serial-parallel hybrid robot. To this end, a Denavit–Hartenberg (DH) hybrid error model and a Product-of-Exponentials (POE) error model are developed for error modeling of the proposed robot. Furthermore, two global optimization methods, the differential evolution (DE) algorithm and the Markov chain Monte Carlo (MCMC) algorithm, are employed to identify the parameter errors of the derived error model. A measurement method based on a 3-2-1 wire-based pose estimation system is proposed and implemented in a SolidWorks environment to simulate the real experimental validations. Numerical simulations and SolidWorks prototype-model validations are carried out on the hybrid robot to verify the effectiveness, accuracy and robustness of the calibration algorithms.
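As an illustration of the identification step described above, the following is a minimal sketch (not the thesis code) of using differential evolution to recover kinematic parameter errors from pose measurements; the `fk` function, the 7-parameter vector, and all numerical values are invented stand-ins for the actual DH/POE error model.

```python
# Hypothetical sketch: identifying kinematic parameter errors with differential evolution.
import numpy as np
from scipy.optimize import differential_evolution

def fk(q, params):
    # Placeholder forward kinematics returning a tool position for a joint configuration q.
    return np.array([np.sum(params[:3] * np.cos(q)),
                     np.sum(params[3:6] * np.sin(q)),
                     params[6]])

rng = np.random.default_rng(0)
true_params = rng.normal(size=7)                           # "actual" (unknown) robot parameters
nominal_params = true_params + 0.05 * rng.normal(size=7)   # nominal design values with small errors
joints = rng.uniform(-1.0, 1.0, size=(30, 3))              # measured joint configurations
measured = np.array([fk(q, true_params) for q in joints])  # measured tool positions

def cost(delta):
    # Sum of squared pose residuals between measurements and the corrected model.
    p = nominal_params + delta
    predicted = np.array([fk(q, p) for q in joints])
    return float(np.sum((predicted - measured) ** 2))

bounds = [(-0.2, 0.2)] * 7                                 # search box for the parameter errors
result = differential_evolution(cost, bounds, seed=1, maxiter=300)
print("identified parameter errors:", result.x)
print("true parameter errors:      ", true_params - nominal_params)
```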
Abstract:
During a possible loss-of-coolant accident in BWRs, a large amount of steam will be released from the reactor pressure vessel into the suppression pool. The steam condenses in the suppression pool, causing dynamic and structural loads on the pool. The formation and break-up of bubbles can be measured by visual observation using a suitable pattern recognition algorithm. The aim of this study was to improve the preliminary pattern recognition algorithm, developed by Vesa Tanskanen in his doctoral dissertation, by using MATLAB. Video material from the PPOOLEX test facility, recorded during thermal stratification and mixing experiments, was used as a reference in the development of the algorithm. The developed algorithm consists of two parts: the pattern recognition of the bubbles and the analysis of the recognized bubble images. The bubble recognition works well, but some errors appear due to the complex structure of the pool. The results of the image analysis were reasonable. The volume and the surface area of the bubbles were not evaluated. Chugging frequencies calculated using the FFT fitted well with the oscillation frequencies measured in the experiments. The pattern recognition algorithm works in the conditions it is designed for; if the measurement configuration is changed, some modifications will be needed. Numerous improvements are proposed for the future 3D equipment.
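As a toy illustration of the FFT step mentioned above, the sketch below estimates a dominant chugging frequency from a synthetic per-frame signal (e.g., a bubble interface position); the 25 fps frame rate and the 1.8 Hz oscillation are assumptions, not PPOOLEX data.

```python
# Toy sketch of FFT-based chugging-frequency estimation from a per-frame signal.
import numpy as np

fps = 25.0                                    # assumed camera frame rate
t = np.arange(0, 20.0, 1.0 / fps)             # 20 seconds of frames
rng = np.random.default_rng(0)
signal = np.sin(2 * np.pi * 1.8 * t) + 0.3 * rng.normal(size=t.size)  # synthetic interface motion

spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
freqs = np.fft.rfftfreq(t.size, d=1.0 / fps)
print(f"dominant frequency: {freqs[np.argmax(spectrum)]:.2f} Hz")
```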
Abstract:
Multiple sclerosis (MS) is a chronic immune-mediated inflammatory disorder of the central nervous system. MS is the most common disabling central nervous system (CNS) disease of young adults in the Western world. In Finland, the prevalence of MS ranges between 1/1000 and 2/1000 in different areas. Fabry disease (FD) is a rare hereditary metabolic disease caused by a mutation in the single gene coding for the α-galactosidase A (alpha-gal A) enzyme. It leads to multi-organ pathology, including cerebrovascular disease. Currently there are 44 patients with diagnosed FD in Finland. Magnetic resonance imaging (MRI) is commonly used in the diagnostics and follow-up of these diseases. Disease activity can be demonstrated by the occurrence of new or Gadolinium (Gd)-enhancing lesions in routine studies. Diffusion-weighted imaging (DWI) and diffusion tensor imaging (DTI) are advanced MR sequences which can reveal pathology in brain regions that appear normal on conventional MR images in several CNS diseases. The main focus of this study was to determine whether whole-brain apparent diffusion coefficient (ADC) analysis can be used to demonstrate MS disease activity. MS patients were investigated before and after delivery and before and after initiation of disease-modifying treatment (DMT). In FD, DTI was used to reveal possible microstructural alterations at early time points, when extensive signs of cerebrovascular disease are not yet visible in conventional MR sequences. Our clinical and MRI findings at 1.5 T indicated that post-partum activation of the disease is an early and common phenomenon amongst mothers with MS. MRI seems to be a more sensitive method for assessing MS disease activity than the recording of relapses. However, whole-brain ADC histogram analysis is of limited value in the follow-up of inflammatory conditions in a pregnancy-related setting, because the pregnancy-related physiological effects on ADC overwhelm the alterations in ADC associated with MS pathology in brain tissue areas that appear normal on conventional MRI sequences. DTI reveals signs of microstructural damage in the brain white matter of FD patients before an extensive white matter lesion load can be observed on conventional MR scans. DTI could offer a valuable tool for monitoring the possible effects of enzyme replacement therapy in FD.
Abstract:
The purpose of this work was to develop an analytical separation method for studying and analyzing the reaction products formed between an oxidizing agent used in a certain manufacturing process and the solvent. In addition, the aim was to investigate the safety of the process conditions. The literature part discusses various organic peroxides, their uses, and the considerations related to their use. It also reviews the most common analytical methods that have been used for the analysis of different peroxides. These analytical methods have most often been applied to liquid samples; gas and solid samples have been analyzed less frequently. In the experimental part, an identification method for peroxide compounds was developed on the basis of the literature, and the process samples were examined. Iodometric titration and an HPLC-UV-MS method were selected as the analytical methods. In addition, test strips suitable for peroxide measurement were used. The study showed that, based on the iodometric titration and the test strips, the samples contained small amounts of peroxides one week after the peroxide addition. According to the HPLC-UV-MS analyses, the analysis of the samples was interfered with by cellulose, which was found in every sample.
Abstract:
Statistical analyses of measurements that can be described by statistical models are essential in astronomy and in scientific inquiry in general. The sensitivity of such analyses, modelling approaches, and the consequent predictions is sometimes highly dependent on the exact techniques applied, and improvements therein can result in significantly better understanding of the observed system of interest. In particular, optimising the sensitivity of statistical techniques in detecting the faint signatures of low-mass planets orbiting nearby stars is, together with improvements in instrumentation, essential in estimating the properties of the population of such planets and in the race to detect Earth analogs, i.e. planets that could support liquid water and, perhaps, life on their surfaces. We review the developments in Bayesian statistical techniques applicable to the detection of planets orbiting nearby stars and to astronomical data analysis problems in general. We also discuss these techniques and demonstrate their usefulness by using various examples and detailed descriptions of the respective mathematics involved. We demonstrate the practical aspects of Bayesian statistical techniques by describing several algorithms and numerical techniques, as well as theoretical constructions, for the estimation of model parameters and for hypothesis testing. We also apply these algorithms to Doppler measurements of nearby stars to show how they can be used in practice to obtain as much information from the noisy data as possible. Bayesian statistical techniques are powerful tools for analysing and interpreting noisy data and should be preferred in practice whenever computational limitations are not too restrictive.
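To illustrate the kind of Bayesian parameter estimation discussed, here is a minimal random-walk Metropolis sketch for a one-planet radial-velocity model; the data, priors, proposal widths, and initial guess (placed near a presumed periodogram peak) are illustrative assumptions, not the techniques or datasets used in the work.

```python
# Illustrative random-walk Metropolis sampler for a one-planet radial-velocity model.
import numpy as np

rng = np.random.default_rng(42)
t = np.sort(rng.uniform(0, 200, 60))                       # observation epochs [days]
true_K, true_P = 3.0, 37.0                                 # semi-amplitude [m/s], period [days]
rv = true_K * np.sin(2 * np.pi * t / true_P) + rng.normal(0, 1.0, t.size)  # noisy velocities

def log_posterior(theta):
    K, P = theta
    if not (0.0 < K < 20.0 and 1.0 < P < 200.0):           # flat priors inside the box
        return -np.inf
    model = K * np.sin(2 * np.pi * t / P)
    return -0.5 * np.sum((rv - model) ** 2)                # Gaussian likelihood, sigma = 1 m/s

theta = np.array([2.0, 36.0])                              # start near a presumed periodogram peak
logp = log_posterior(theta)
samples = []
for _ in range(20000):
    proposal = theta + rng.normal(0.0, [0.1, 0.2])         # random-walk proposal
    logp_new = log_posterior(proposal)
    if np.log(rng.uniform()) < logp_new - logp:            # Metropolis acceptance rule
        theta, logp = proposal, logp_new
    samples.append(theta.copy())

samples = np.array(samples[5000:])                         # discard burn-in
print("posterior means (K, P):", samples.mean(axis=0))
```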
Abstract:
The papermaking industry has been continuously developing intelligent solutions to characterize the raw materials it uses, to control the manufacturing process in a robust way, and to guarantee the desired quality of the end product. Thanks to much-improved imaging techniques and image-based analysis methods, it has become possible to look inside the manufacturing pipeline and propose more effective alternatives to human expertise. This study focuses on the development of image analysis methods for the pulping process of papermaking. Pulping starts with disintegrating the wood and forming the fiber suspension, which is subsequently bleached, mixed with additives and chemicals, and finally dried and shipped to the papermaking mills. At each stage of the process it is important to analyze the properties of the raw material to guarantee the product quality. In order to evaluate the properties of fibers, the main component of the pulp suspension, a framework for fiber characterization based on microscopic images is proposed in this thesis as the first contribution. The framework allows the computation of fiber length and curl index, which correlate well with the ground-truth values. The bubble detection method, the second contribution, was developed in order to estimate the gas volume at the delignification stage of the pulping process based on high-resolution in-line imaging. The gas volume was estimated accurately and the solution enabled just-in-time process termination, whereas the accurate estimation of bubble size categories remained challenging. As the third contribution of the study, optical flow computation was studied and the methods were successfully applied to pulp flow velocity estimation based on double-exposed images. Finally, a framework for classifying dirt particles in dried pulp sheets, including semisynthetic ground-truth generation, feature selection, and a performance comparison of state-of-the-art classification techniques, was proposed as the fourth contribution. The framework was successfully tested on semisynthetic and real-world pulp sheet images. These four contributions assist in developing integrated, factory-level, vision-based process control.
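As a small illustration of the fiber characterization idea (the first contribution), the sketch below computes a contour length and a curl index for a synthetic fiber centerline; the curl-index formula (contour length over end-to-end distance, minus one) is a common definition, and the centerline is invented, so this is a sketch rather than the thesis framework itself.

```python
# Toy sketch of fiber length and curl-index computation on a synthetic centerline.
import numpy as np

s = np.linspace(0.0, 3.0, 100)
fiber = np.column_stack([s, 0.2 * np.sin(s)])              # gently curved centerline [mm]

segment_lengths = np.linalg.norm(np.diff(fiber, axis=0), axis=1)
contour_length = segment_lengths.sum()                     # length measured along the fiber
end_to_end = np.linalg.norm(fiber[-1] - fiber[0])          # straight-line distance between ends
curl_index = contour_length / end_to_end - 1.0
print(f"fiber length: {contour_length:.3f} mm, curl index: {curl_index:.3f}")
```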
Abstract:
Stochastic approximation methods for stochastic optimization are considered. The main stochastic approximation methods are reviewed: the stochastic quasi-gradient (SQG) algorithm, the Kiefer-Wolfowitz algorithm with adaptive rules, and the simultaneous perturbation stochastic approximation (SPSA) algorithm. A model and a solution for the retailer's profit optimization problem are suggested, and an application of the SQG algorithm to optimization problems with objective functions given in the form of an ordinary differential equation is considered.
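A minimal SPSA sketch follows, using the commonly quoted gain exponents 0.602 and 0.101 and a noisy quadratic test function; it illustrates the algorithm class reviewed, not the retailer's profit model from the abstract.

```python
# Minimal SPSA sketch with gain sequences a_k = a / k^0.602 and c_k = c / k^0.101.
import numpy as np

rng = np.random.default_rng(0)

def noisy_objective(x):
    # Quadratic with minimizer (2, -1), observed with additive noise.
    return np.sum((x - np.array([2.0, -1.0])) ** 2) + 0.01 * rng.normal()

x = np.zeros(2)
a, c = 0.1, 0.1
for k in range(1, 501):
    ak = a / k ** 0.602
    ck = c / k ** 0.101
    delta = rng.choice([-1.0, 1.0], size=x.size)           # Rademacher perturbation
    y_plus = noisy_objective(x + ck * delta)
    y_minus = noisy_objective(x - ck * delta)
    g_hat = (y_plus - y_minus) / (2.0 * ck * delta)        # simultaneous-perturbation gradient estimate
    x = x - ak * g_hat                                     # stochastic-approximation update

print("SPSA estimate of the minimizer:", x)
```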
Abstract:
Protein engineering aims to improve the properties of enzymes and affinity reagents through genetic changes. Typical engineered properties are affinity, specificity, stability, expression, and solubility. Because proteins are complex biomolecules, the effects of specific genetic changes are seldom predictable. Consequently, a popular strategy in protein engineering is to create a library of genetic variants of the target molecule and subject the population to a selection process that sorts the variants by the desired property. This technique, called directed evolution, is a central tool for tailoring protein-based products used in a wide range of applications from laundry detergents to anti-cancer drugs. New methods are continuously needed to generate larger gene repertoires and compatible selection platforms in order to shorten the development timeline for new biochemicals. In the first study of this thesis, primer extension mutagenesis was revisited to establish higher-quality gene variant libraries in Escherichia coli cells. In the second study, recombination was explored as a method to expand the number of screenable enzyme variants. A selection platform was developed to improve antigen-binding fragment (Fab) display on filamentous phages in the third article, and in the fourth study, novel design concepts were tested with two differentially randomized recombinant antibody libraries. Finally, in the last study, the performance of the same antibody repertoire was compared in phage display selections as a genetic fusion to different phage capsid proteins and in different antibody formats, Fab vs. single-chain variable fragment (scFv), in order to find the most suitable display platform for the library at hand. As a result of these studies, a novel gene library construction method, termed selective rolling circle amplification (sRCA), was developed. The method increases the mutagenesis frequency to close to 100% in the final library and the number of transformants over 100-fold compared with traditional primer extension mutagenesis. In the second study, Cre/loxP recombination was found to be an appropriate tool for resolving the DNA concatemer resulting from error-prone RCA (epRCA) mutagenesis into monomeric circular DNA units for higher-efficiency transformation into E. coli. Library selections against antigens of various sizes in the fourth study demonstrated that diversity placed closer to the antigen-binding site of antibodies supports the generation of antibodies against haptens and peptides, whereas diversity at more peripheral locations is better suited for targeting proteins. The conclusion from a comparison of the display formats was that the truncated capsid protein three (p3Δ) of the filamentous phage was superior to the full-length p3 and protein nine (p9) in obtaining a high number of uniquely specific clones. Especially for digoxigenin, a difficult hapten target, the antibody repertoire as scFv-p3Δ provided the clones with the highest binding affinity. This thesis on the construction, design, and selection of gene variant libraries contributes to the practical know-how in directed evolution and contains useful information for scientists in the field to support their undertakings.
Abstract:
This study focused on identifying various system boundaries and evaluating methods of estimating the energy performance of biogas production. First, the output-input ratio method used for evaluating energy performance within the system boundaries was reviewed. Secondly, ways to assess the efficiency of biogas use and the parasitic energy demand were investigated. Thirdly, an approach for comparing biogas production to other energy production methods was evaluated. Data from an existing biogas plant, located in Finland, was used for the evaluation of the methods. The results indicate that calculating and comparing the output-input ratios (Rpr1, Rpr2, Rut, Rpl and Rsy) can be used in evaluating the performance of a biogas production system. In addition, the parasitic energy demand calculations (w) and the efficiency of utilizing the produced biogas (η) provide detailed information on the energy performance of the biogas plant. Furthermore, Rf and the energy output in relation to the total solid mass of the feedstock (FO/TS) are useful in comparing biogas production with other energy recovery technologies. In conclusion, for biogas plants to be comparable, it is essential that their energy performance be calculated in a more consistent manner in the future.
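For illustration only, the following back-of-the-envelope sketch computes ratio-style indicators of the kind listed above; the figures and the exact indicator definitions are assumptions, not values or formulas taken from the studied Finnish plant.

```python
# Back-of-the-envelope sketch of ratio-style energy-performance indicators (assumed data).
biogas_energy_out = 3500.0    # MWh/a, energy content of the produced biogas (assumed)
parasitic_energy = 420.0      # MWh/a, energy consumed by the plant itself (assumed)
external_energy_in = 900.0    # MWh/a, external energy input to the production chain (assumed)
utilized_energy = 2800.0      # MWh/a, biogas energy actually converted to heat/electricity (assumed)

output_input_ratio = biogas_energy_out / (parasitic_energy + external_energy_in)
parasitic_demand_w = parasitic_energy / biogas_energy_out
utilization_efficiency = utilized_energy / biogas_energy_out

print(f"output-input ratio: {output_input_ratio:.2f}")
print(f"parasitic energy demand w: {parasitic_demand_w:.2%}")
print(f"biogas utilization efficiency: {utilization_efficiency:.2%}")
```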
Abstract:
In today's logistics environment, there is a tremendous need for accurate cost information and cost allocation. Companies searching for a proper solution often come across activity-based costing (ABC) or one of its variations, which utilizes cost drivers to allocate the costs of activities to cost objects. In order to allocate the costs accurately and reliably, the selection of appropriate cost drivers is essential for reaping the benefits of the costing system. The purpose of this study is to validate the transportation cost drivers of a Finnish wholesaler company and ultimately select the best possible driver alternatives for the company. The use of cost driver combinations as an alternative is also studied. The study is conducted as a part of the case company's applied ABC project, using statistical research as the main research method, supported by a theoretical, literature-based method. The main research tools featured in the study include simple and multiple regression analyses, which, together with the literature- and observation-based practicality analysis, form the basis for the advanced methods. The results suggest that the most appropriate cost driver alternatives are the delivery drops and the internal delivery weight. The use of cost driver combinations is not recommended, as it does not provide substantially better results while increasing the measurement costs, complexity and load of use at the same time. The use of internal freight cost drivers is also questionable, as the results indicate a weakening trend in their cost allocation capability towards the end of the period. Therefore, more research on internal freight cost drivers should be conducted before taking them into use.
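The regression step can be pictured with a short sketch like the one below, which fits an ordinary least squares model of transportation cost on two candidate drivers; the data are synthetic stand-ins, not the case company's figures.

```python
# Sketch of a cost-driver validation step: OLS of weekly transportation cost on two drivers.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
drops = rng.poisson(40, size=52).astype(float)             # weekly delivery drops
weight = rng.gamma(5.0, 200.0, size=52)                    # weekly delivery weight [kg]
cost = 15.0 * drops + 0.08 * weight + rng.normal(0.0, 50.0, size=52)

X = sm.add_constant(np.column_stack([drops, weight]))
fit = sm.OLS(cost, X).fit()
print(fit.params)                                          # intercept and per-driver unit costs
print("R-squared:", round(fit.rsquared, 3))
```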
Abstract:
Today, lean philosophy has gathered a lot of popularity and interest in many industries. This customer-oriented philosophy helps to understand the customer's value creation, which can be used to improve efficiency. A comprehensive study of lean and lean methods in the service industry was carried out in this research. In the theoretical part, lean philosophy is studied at different levels, which helps to understand its diversity. To support lean, this research also presents the basic concepts of process management. Lastly, the theoretical part presents a development model to support process development in a systematic way. The empirical part of the study was performed by carrying out experimental measurements during the service center's product return process and by analyzing these data. The measurements were used to map out factors that have a negative influence on the process flow. Several development propositions were discussed to remove these factors. Problems mainly occur due to challenges in controlling customers and due to the lack of responsibility and continuous improvement at the operational level. The development propositions concern such factors as changes in the service center's physical environment, standardization of work tasks, and training. These factors will remove waste from the product return process and support the idea of continuous improvement.
Abstract:
The purpose of this study was to explore the software development methods and quality assurance practices used by the South Korean software industry. Empirical data was collected by conducting a survey that focused on three main parts: software life cycle models and methods, software quality assurance including quality standards, and the strengths and weaknesses of the South Korean software industry. The results of the completed survey showed that the use of agile methods slightly surpasses the use of traditional software development methods. The survey also revealed an interesting result: almost half of the South Korean companies do not use any software quality assurance plan in their projects. Regarding the state of the South Korean software industry, a large number of the respondents thought that, despite the weaknesses, the status of software development in South Korea will improve in the future.
Abstract:
The purpose of this thesis was to study the design of demand forecasting processes and the management of demand. In the literature review, different processes were examined, along with forecasting methods and techniques. The role of the bullwhip effect in the supply chain was also identified, together with how to manage it through information sharing. In the empirical part of the study, the current situation and challenges in the case company are first described. After that, a new way to handle demand is introduced through target budget creation, and it is shown how information sharing for five products and a few customers would bring benefits to the company. A new S&OP process and an organization for it were also created within this study.
Abstract:
In today's world, because of the rapid advancement in technology and business, requirements are not clear and they change continuously during the development process. These changing requirements make software development very difficult. Using traditional software development methods, such as the waterfall method, is not a good option, as traditional methods are not flexible with respect to changing requirements, and the software can end up late and over budget. To develop high-quality software that satisfies the customer, organizations can use software development methods, such as agile methods, which are flexible to requirement changes at any stage of the development process. Agile methods are iterative and incremental methods that can accelerate the delivery of the initial business value through continuous planning and feedback, with close communication between the customer and the developers. The main purpose of this thesis is to identify the problems in traditional software development and to show how agile methods reduce those problems. The study also examines the different success factors of agile methods, the success rate of agile projects, and a comparison between traditional and agile software development.
Abstract:
Presentation at "Soome-ugri keelte andmebaasid ja e-leksikograafia" (Finno-Ugric language databases and e-lexicography) at Eesti Keele Instituut (Institute of the Estonian Language) in Tallinn on 18 November 2014.