129 results for Modal methods


Relevance:

20.00%

Publisher:

Abstract:

Frequency converters are widely used in industry to enable better controllability and efficiency of variable-speed AC motor drives. Despite these advantages, certain challenges concerning the inverter and motor interfacing have been present for decades. As insulated gate bipolar transistors entered the market, the inverter output voltage transition rate increased significantly compared with their predecessors. Inverters operate based on pulse width modulation of the output voltage, and the steep voltage edge fed by the inverter produces a motor terminal overvoltage. The overvoltage causes extra stress to the motor insulation, which may lead to a premature motor failure. The overvoltage is not generated by the inverter alone, but by the combined effect of the motor cable length and the impedance mismatch between the cable and the motor. Many solutions have been shown to limit the overvoltage, and the mainstream products focus on passive filters. This doctoral thesis studies an alternative methodology for motor overvoltage reduction. The focus is on minimizing the physical and electrical dimensions of the passive filter, or better yet, on operating without any filter. This is achieved by additional inverter control and modulation. The studied methods are implemented on different inverter topologies, varying in nominal voltage and current.

For two-level inverters, the studied method is termed active du/dt. It consists of a small output LC filter, which is controlled by an independent modulator. The overvoltage is limited by a reduced voltage transition rate. For multilevel inverters, an overvoltage mitigation method operating without a passive filter, called edge modulation, is implemented. The method uses the capability of the inverter to produce two switching operations in the same direction to cancel the oscillating voltages of opposite phases. For parallel inverters, two methods are studied. Both are intended for two-level inverters, but the first uses individual motor cables from each inverter while the other topology applies output inductors. The overvoltage is reduced by interleaving the switching operations to produce a similar oscillation accumulation as with edge modulation. The implementation of these methods is discussed in detail, and the necessary modifications to the inverter control system are presented. Each method is experimentally verified by operating industrial frequency converters with the modified control. All the methods are found feasible, and they provide sufficient overvoltage protection. The limitations and challenges brought about by the methods are discussed.
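
The overvoltage mechanism described in the abstract follows transmission-line theory: the steep pulse edge reflects at the cable-motor interface, and with a strong impedance mismatch the first voltage peak at the motor terminals can approach twice the DC-link voltage. A minimal sketch of this reflected-wave estimate (all parameter values are illustrative assumptions, not taken from the thesis):

    # Reflected-wave estimate of motor terminal overvoltage (illustrative).
    Z_cable = 80.0     # cable surge impedance, ohms (assumed)
    Z_motor = 2000.0   # motor high-frequency impedance, ohms (assumed)
    V_dc = 560.0       # DC-link voltage, volts (assumed)

    # Reflection coefficient at the motor terminals.
    gamma = (Z_motor - Z_cable) / (Z_motor + Z_cable)

    # Worst-case first peak for a pulse edge shorter than twice the cable
    # propagation delay, ignoring attenuation.
    V_peak = V_dc * (1.0 + gamma)
    print(f"reflection coefficient: {gamma:.2f}, peak voltage: {V_peak:.0f} V")

Reducing the voltage transition rate, as the active du/dt method does, keeps the edge long compared with the cable round-trip time and thus suppresses this peak.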

Relevance:

20.00%

Publisher:

Abstract:

Knowledge of the behaviour of cellulose, hemicelluloses, and lignin during wood and pulp processing is essential for understanding and controlling the processes. Determination of the monosaccharide composition gives information about the structural polysaccharide composition of wood material and helps when determining the quality of fibrous products. In addition, monitoring of the acidic degradation products gives information on the extent of degradation of lignin and polysaccharides. This work describes two capillary electrophoretic methods developed for the analysis of monosaccharides and for the determination of aliphatic carboxylic acids in alkaline oxidation solutions of lignin and wood.

Capillary electrophoresis (CE), in its many variants, is an alternative separation technique to chromatographic methods. In capillary zone electrophoresis (CZE), the fused silica capillary is filled with an electrolyte solution, and an applied voltage generates a field across the capillary. The movement of the ions under the electric field is determined by the charge and hydrodynamic radius of the ions. Carbohydrates contain hydroxyl groups that are ionised only under strongly alkaline conditions. After ionisation, the structures are suitable for electrophoretic analysis and identification through either indirect UV detection or electrochemical detection. The current work presents a new capillary zone electrophoretic method, relying on an in-capillary reaction and direct UV detection at a wavelength of 270 nm. The method has been used for the simultaneous separation of neutral carbohydrates, including mono- and disaccharides and sugar alcohols. The in-capillary reaction produces negatively charged and UV-absorbing compounds. The optimised method was applied to real samples. The methodology is fast since no sample preparation other than dilution is required.

A new method for aliphatic carboxylic acids in highly alkaline process liquids was also developed. The goal was to enable the simultaneous analysis of the dicarboxylic acids, hydroxy acids, and volatile acids that are oxidation and degradation products of lignin and wood polysaccharides. The CZE method was applied to three process cases. First, the fate of lignin under alkaline oxidation conditions was monitored by determining the level of carboxylic acids in the process solutions. In the second application, the degradation of spruce wood under alkaline and catalysed alkaline oxidation was compared by determining carboxylic acids in the process solutions. In addition, the effectiveness of membrane filtration and preparative liquid chromatography in the enrichment of hydroxy acids from black liquor was evaluated by analysing the effluents with capillary electrophoresis.
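
As the abstract notes, electrophoretic migration depends on the charge and hydrodynamic radius of the ion. A minimal sketch of the underlying Stokes-drag relation, with illustrative values (not from the thesis) and electroosmotic flow ignored:

    import math

    q = 1.602e-19    # net ion charge, C (assumed: one elementary charge)
    eta = 1.0e-3     # electrolyte viscosity, Pa*s (roughly water at 20 C)
    r = 0.4e-9       # hydrodynamic radius, m (assumed)
    E = 30e3 / 0.5   # field, V/m (assumed: 30 kV over a 0.5 m capillary)

    mu = q / (6 * math.pi * eta * r)   # electrophoretic mobility, m^2/(V*s)
    v = mu * E                         # migration velocity, m/s
    print(f"mobility: {mu:.2e} m^2/(V*s), velocity: {v:.2e} m/s")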

Relevance:

20.00%

Publisher:

Abstract:

Rapid ongoing evolution of multiprocessors will lead to systems with hundreds of processing cores integrated on a single chip. An emerging challenge is the implementation of reliable and efficient interconnections between these cores as well as the other components in such systems. Network-on-Chip is an interconnection approach intended to solve the performance bottleneck caused by traditional, poorly scalable communication structures such as buses. However, a large on-chip network involves issues related to, for instance, congestion and system control. Additionally, faults can cause problems in multiprocessor systems; these can be transient faults, permanent manufacturing faults, or faults that appear due to aging. To solve the emerging traffic management and controllability issues, and to maintain system operation regardless of faults, a monitoring system is needed. The monitoring system should be dynamically applicable to various purposes, and it should fully cover the system under observation. In a large multiprocessor the distances between components can be relatively long; therefore, the system should be designed so that the amount of energy-inefficient long-distance communication is minimized.

This thesis presents a dynamically clustered distributed monitoring structure. The monitoring is distributed so that no centralized control is required for basic tasks such as traffic management and task mapping. To enable extensive analysis of different Network-on-Chip architectures, an in-house SystemC-based simulation environment was implemented. It allows transaction-level analysis without time-consuming circuit-level implementations during the early design phases of novel architectures and features. The presented analysis shows that the dynamically clustered monitoring structure can be efficiently utilized for traffic management in faulty and congested Network-on-Chip-based multiprocessor systems. The monitoring structure can also be successfully applied for task mapping purposes. Furthermore, the analysis shows that the presented in-house simulation environment is a flexible and practical tool for extensive Network-on-Chip architecture analysis.

Relevance:

20.00%

Publisher:

Abstract:

The purpose of this thesis is twofold. The first and major part is devoted to sensitivity analysis of various discrete optimization problems, while the second part addresses methods for calculating measures of solution stability and for solving multicriteria discrete optimization problems. Despite numerous approaches to stability analysis of discrete optimization problems, two major directions can be singled out: quantitative and qualitative. Qualitative sensitivity analysis is conducted for multicriteria discrete optimization problems with minisum, minimax, and minimin partial criteria. The main results obtained here are necessary and sufficient conditions for different stability types of optimal solutions (or of a set of optimal solutions) of the considered problems.

Within the quantitative direction, various measures of solution stability are investigated. A formula for a quantitative characteristic called the stability radius is obtained for the generalized equilibrium situation invariant to changes of game parameters in the case of the Hölder metric. The quality of a problem solution can also be described in terms of robustness analysis. In this work the concepts of accuracy and robustness tolerances are presented for a strategic game with a finite number of players where the initial coefficients (costs) of the linear payoff functions are subject to perturbations. Investigation of the stability radius also aims to devise methods for its calculation. A new metaheuristic approach is derived for calculating the stability radius of an optimal solution to the shortest path problem. The main advantage of the developed method is that it is potentially applicable to calculating the stability radii of NP-hard problems.

The last chapter of the thesis focuses on deriving innovative methods, based on an interactive optimization approach, for solving multicriteria combinatorial optimization problems. The key idea of the proposed approach is to utilize a parameterized achievement scalarizing function for solution calculation and to direct the interactive procedure by changing the weighting coefficients of this function. In order to illustrate the introduced ideas, a decision-making process is simulated for a three-objective median location problem. The concepts, models, and ideas collected and analyzed in this thesis create a relevant foundation for developing more complicated and integrated models of postoptimal analysis and for solving the most computationally challenging problems related to it.
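
The interactive procedure described above is built around a parameterized achievement scalarizing function. A minimal sketch of the standard augmented (Wierzbicki-type) form; the thesis' exact formulation may differ, and all data below are illustrative:

    # Achievement scalarizing function: minimizing it over the feasible set
    # yields a (weakly) Pareto optimal solution near the reference point.
    def achievement(f, ref, w, rho=1e-4):
        terms = [wi * (fi - ri) for fi, ri, wi in zip(f, ref, w)]
        return max(terms) + rho * sum(terms)  # augmentation term breaks ties

    ref = [10.0, 5.0, 8.0]   # decision maker's aspiration levels (assumed)
    w = [1.0, 2.0, 0.5]      # weighting coefficients steering the search
    # Two candidate objective vectors of a three-objective problem:
    print(achievement([12.0, 4.0, 9.0], ref, w))
    print(achievement([11.0, 6.0, 7.5], ref, w))

Changing the weighting coefficients w between iterations is what directs the interactive procedure toward the decision maker's preferences.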

Relevance:

20.00%

Publisher:

Abstract:

To obtain the desired accuracy of a robot, two techniques are available. The first option is to make the robot match the nominal mathematical model; in other words, the manufacturing and assembly tolerances of every part are made extremely tight so that all of the various parameters match the “design” or “nominal” values as closely as possible. This method can satisfy most accuracy requirements, but the cost increases dramatically as the accuracy requirement tightens. Alternatively, a more cost-effective solution is to build a manipulator with relaxed manufacturing and assembly tolerances and to compensate the actual errors of the robot by modifying the mathematical model in the controller. This is the essence of robot calibration. Simply put, robot calibration is the process of defining an appropriate error model and then identifying the various parameter errors that make the error model match the robot as closely as possible.

This work focuses on kinematic calibration of a 10 degree-of-freedom (DOF) redundant serial-parallel hybrid robot. The robot consists of a 4-DOF serial mechanism and a 6-DOF hexapod parallel manipulator. The redundant 4-DOF serial structure is used to enlarge the workspace, and the 6-DOF hexapod manipulator provides high load capability and stiffness for the whole structure. The main objective of the study is to develop a suitable calibration method to improve the accuracy of the redundant serial-parallel hybrid robot. To this end, a Denavit–Hartenberg (DH) hybrid error model and a Product-of-Exponentials (POE) error model are developed for error modeling of the proposed robot. Furthermore, two kinds of global optimization methods, the differential evolution (DE) algorithm and the Markov chain Monte Carlo (MCMC) algorithm, are employed to identify the parameter errors of the derived error model. A measurement method based on a 3-2-1 wire-based pose estimation system is proposed and implemented in a Solidworks environment to simulate real experimental validations. Numerical simulations and Solidworks prototype-model validations are carried out on the hybrid robot to verify the effectiveness, accuracy, and robustness of the calibration algorithms.
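
The parameter identification step can be illustrated with differential evolution. The sketch below recovers assumed parameter errors of a hypothetical linearized error model from simulated pose measurements; it is a stand-in for the thesis' DH/POE models, not the actual implementation:

    import numpy as np
    from scipy.optimize import differential_evolution

    rng = np.random.default_rng(0)
    true_err = np.array([0.5e-3, -0.2e-3, 0.8e-3])   # "unknown" errors (assumed)

    def predicted_pose(q, err):
        # Hypothetical error model: pose = nominal(q) + J(q) @ err
        J = np.stack([q, q**2, np.ones_like(q)], axis=1)
        return q + J @ err

    q_meas = rng.uniform(0.0, 1.0, size=20)          # sampled configurations
    pose_meas = predicted_pose(q_meas, true_err)     # simulated measurements

    def cost(err):                                   # sum of squared residuals
        return np.sum((predicted_pose(q_meas, err) - pose_meas) ** 2)

    res = differential_evolution(cost, bounds=[(-2e-3, 2e-3)] * 3, seed=1)
    print(res.x)                                     # recovered parameter errors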

Relevance:

20.00%

Publisher:

Abstract:

During a possible loss-of-coolant accident in a BWR, a large amount of steam is released from the reactor pressure vessel to the suppression pool. The steam condenses in the suppression pool, causing dynamic and structural loads on the pool. The formation and break-up of bubbles can be measured by visual observation using a suitable pattern recognition algorithm. The aim of this study was to improve, using MATLAB, the preliminary pattern recognition algorithm developed by Vesa Tanskanen in his doctoral dissertation. Video material from the PPOOLEX test facility, recorded during thermal stratification and mixing experiments, was used as a reference in the development of the algorithm. The developed algorithm consists of two parts: the pattern recognition of the bubbles and the analysis of the recognized bubble images. The bubble recognition works well, although some errors appear due to the complex structure of the pool. The results of the image analysis were reasonable; the volume and the surface area of the bubbles were not evaluated. Chugging frequencies calculated using the FFT agreed well with the oscillation frequencies measured in the experiments. The pattern recognition algorithm works in the conditions it was designed for; if the measurement configuration is changed, some modifications will be needed. Numerous improvements are proposed for the future 3D equipment.
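
The chugging-frequency step can be sketched as follows: take the per-frame bubble size produced by the pattern recognition, remove the mean, and locate the dominant FFT peak. The signal below is synthetic (an assumed 2 Hz oscillation), not PPOOLEX data:

    import numpy as np

    fs = 50.0                          # camera frame rate, Hz (assumed)
    t = np.arange(0, 10, 1 / fs)
    rng = np.random.default_rng(0)
    area = 1.0 + 0.3 * np.sin(2 * np.pi * 2.0 * t) + 0.05 * rng.normal(size=t.size)

    spectrum = np.abs(np.fft.rfft(area - area.mean()))
    freqs = np.fft.rfftfreq(area.size, d=1 / fs)
    print(f"dominant frequency: {freqs[spectrum.argmax()]:.2f} Hz")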

Relevance:

20.00%

Publisher:

Abstract:

Multiple sclerosis (MS) is a chronic immune-mediated inflammatory disorder of the central nervous system (CNS). MS is the most common disabling CNS disease of young adults in the Western world. In Finland, the prevalence of MS ranges between 1/1000 and 2/1000 in different areas. Fabry disease (FD) is a rare hereditary metabolic disease caused by a mutation in the single gene coding for the α-galactosidase A (alpha-gal A) enzyme. It leads to multi-organ pathology, including cerebrovascular disease. Currently there are 44 patients with diagnosed FD in Finland. Magnetic resonance imaging (MRI) is commonly used in the diagnostics and follow-up of these diseases; disease activity can be demonstrated by the occurrence of new or gadolinium (Gd)-enhancing lesions in routine studies. Diffusion-weighted imaging (DWI) and diffusion tensor imaging (DTI) are advanced MR sequences that, in several CNS diseases, can reveal pathology in brain regions that appear normal on conventional MR images.

The main focus of this study was to determine whether whole-brain apparent diffusion coefficient (ADC) analysis can be used to demonstrate MS disease activity. MS patients were investigated before and after delivery and before and after the initiation of disease-modifying treatment (DMT). In FD, DTI was used to reveal possible microstructural alterations at early timepoints, when extensive signs of cerebrovascular disease are not yet visible in conventional MR sequences. Our clinical and MRI findings at 1.5 T indicated that post-partum activation of the disease is an early and common phenomenon among mothers with MS. MRI seems to be a more sensitive method for assessing MS disease activity than the recording of relapses. However, whole-brain ADC histogram analysis is of limited value in the follow-up of inflammatory conditions in a pregnancy-related setting, because the pregnancy-related physiological effects on ADC overwhelm the ADC alterations associated with MS pathology in brain tissue that appears normal on conventional MRI sequences. DTI reveals signs of microstructural damage in the brain white matter of FD patients before an extensive white matter lesion load can be observed on conventional MR scans. DTI could thus offer a valuable tool for monitoring the possible effects of enzyme replacement therapy in FD.
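
The ADC values underlying the whole-brain histogram analysis come from a simple mono-exponential signal model. A minimal sketch with illustrative numbers (not patient data):

    import numpy as np

    b0, b1 = 0.0, 1000.0   # b-values, s/mm^2 (a typical DWI protocol)
    S0, S1 = 1.0, 0.45     # signals without/with diffusion weighting (assumed)

    # From S = S0 * exp(-b * ADC):
    adc = np.log(S0 / S1) / (b1 - b0)
    print(f"ADC: {adc:.2e} mm^2/s")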

Relevance:

20.00%

Publisher:

Abstract:

The aim of this work was to develop an analytical separation method for studying and analysing the reaction products formed between an oxidizing agent and a solvent used in a certain manufacturing process. In addition, the aim was to investigate the safety of the process conditions. The literature part discusses various organic peroxides, their uses, and the considerations related to their handling. It also reviews the most common analytical methods that have been used for analysing different peroxides. These methods have most often been applied to liquid samples; gas and solid samples have been analysed less frequently. In the experimental part, an identification method for peroxide compounds was developed on the basis of the literature, and the process samples were analysed. Iodometric titration and an HPLC-UV-MS method were chosen as the analytical methods. In addition, test strips suitable for peroxide measurement were used. The study showed, on the basis of the iodometric titration and the test strips, that the samples contained small amounts of peroxides one week after the peroxide addition. According to the HPLC-UV-MS analyses, the analysis of the samples was interfered with by cellulose, which was found in every sample.
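
The iodometric titration mentioned above quantifies peroxides through the iodine they liberate from potassium iodide, with the iodine then titrated with thiosulfate. A minimal sketch of the standard back-calculation (a generic relation with assumed figures, not the thesis' data):

    V_thio = 1.8e-3    # thiosulfate volume at the endpoint, L (assumed)
    c_thio = 0.01      # thiosulfate concentration, mol/L (assumed)
    m_sample = 0.005   # sample mass, kg (assumed)

    # Two thiosulfate ions (two electrons) per peroxide group.
    n_peroxide = V_thio * c_thio / 2           # mol of peroxide groups
    pv = 1000 * V_thio * c_thio / m_sample     # peroxide value, meq/kg
    print(f"{n_peroxide:.1e} mol peroxide, PV = {pv:.1f} meq/kg")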

Relevance:

20.00%

Publisher:

Abstract:

Statistical analyses of measurements that can be described by statistical models are of essence in astronomy and in scientific inquiry in general. The sensitivity of such analyses, modelling approaches, and the consequent predictions is sometimes highly dependent on the exact techniques applied, and improvements therein can result in significantly better understanding of the observed system of interest. In particular, optimising the sensitivity of statistical techniques in detecting the faint signatures of low-mass planets orbiting nearby stars is, together with improvements in instrumentation, essential for estimating the properties of the population of such planets and in the race to detect Earth analogs, i.e. planets that could support liquid water and, perhaps, life on their surfaces. We review the developments in Bayesian statistical techniques applicable to the detection of planets orbiting nearby stars and to astronomical data analysis problems in general. We discuss these techniques and demonstrate their usefulness through various examples and detailed descriptions of the respective mathematics involved. We demonstrate the practical aspects of Bayesian statistical techniques by describing several algorithms and numerical techniques, as well as theoretical constructions, for the estimation of model parameters and for hypothesis testing. We also apply these algorithms to Doppler measurements of nearby stars to show how they can be used in practice to extract as much information as possible from noisy data. Bayesian statistical techniques are powerful tools for analysing and interpreting noisy data and should be preferred in practice whenever computational limitations are not too restrictive.
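
A core construction in such analyses is the posterior density of the model parameters. A minimal sketch of an (unnormalized) log-posterior for a circular-orbit radial velocity model with flat priors on a bounded support; the data are synthetic and the model is far simpler than those used in the thesis:

    import numpy as np

    rng = np.random.default_rng(42)
    t = np.linspace(0.0, 200.0, 60)                    # observation epochs, days
    rv = 12.0 * np.sin(2 * np.pi * t / 35.0 + 0.4) + rng.normal(0, 3.0, t.size)

    def log_posterior(theta):
        K, P, phi, sigma = theta                       # amplitude, period, phase, noise
        if K <= 0 or P <= 0 or sigma <= 0:
            return -np.inf                             # outside the prior support
        model = K * np.sin(2 * np.pi * t / P + phi)
        return -0.5 * np.sum((rv - model) ** 2 / sigma**2
                             + np.log(2 * np.pi * sigma**2))

    print(log_posterior([12.0, 35.0, 0.4, 3.0]))       # near-truth parameters
    print(log_posterior([5.0, 80.0, 0.0, 3.0]))        # poor fit, lower value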

Relevance:

20.00%

Publisher:

Abstract:

The papermaking industry has been continuously developing intelligent solutions to characterize the raw materials it uses, to control the manufacturing process in a robust way, and to guarantee the desired quality of the end product. Based on much-improved imaging techniques and image-based analysis methods, it has become possible to look inside the manufacturing pipeline and propose more effective alternatives to human expertise. This study is focused on the development of image analysis methods for the pulping process of papermaking. Pulping starts with wood disintegration and forming the fiber suspension, which is subsequently bleached, mixed with additives and chemicals, and finally dried and shipped to the papermaking mills. At each stage of the process it is important to analyze the properties of the raw material to guarantee the product quality.

In order to evaluate the properties of fibers, the main component of the pulp suspension, a framework for fiber characterization based on microscopic images is proposed in this thesis as the first contribution. The framework allows computation of fiber length and curl index, correlating well with the ground truth values. The bubble detection method, the second contribution, was developed in order to estimate the gas volume at the delignification stage of the pulping process based on high-resolution in-line imaging. The gas volume was estimated accurately and the solution enabled just-in-time process termination, whereas the accurate estimation of bubble size categories remained challenging. As the third contribution of the study, optical flow computation was studied, and the methods were successfully applied to pulp flow velocity estimation based on double-exposed images. Finally, a framework for classifying dirt particles in dried pulp sheets, including semisynthetic ground truth generation, feature selection, and a performance comparison of state-of-the-art classification techniques, was proposed as the fourth contribution. The framework was successfully tested on semisynthetic and real-world pulp sheet images. These four contributions assist in developing integrated, factory-level, vision-based process control.
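
The fiber curl computation mentioned as the first contribution can be sketched with the common definition of curl index, contour length over end-to-end distance minus one; the thesis' exact formulation may differ, and the centerline below is synthetic:

    import numpy as np

    # A traced fiber centerline: a quarter-circle arc (assumed pixel coordinates).
    s = np.linspace(0.0, np.pi / 2, 50)
    xy = np.stack([100 * np.cos(s), 100 * np.sin(s)], axis=1)

    contour = np.sum(np.linalg.norm(np.diff(xy, axis=0), axis=1))  # fiber length
    chord = np.linalg.norm(xy[-1] - xy[0])                         # end-to-end distance
    curl_index = contour / chord - 1.0
    print(f"length: {contour:.1f} px, curl index: {curl_index:.3f}")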

Relevance:

20.00%

Publisher:

Abstract:

Stochastic approximation methods for stochastic optimization are considered. The main methods of stochastic approximation are reviewed: the stochastic quasi-gradient (SQG) algorithm, the Kiefer-Wolfowitz algorithm with adaptive rules, and the simultaneous perturbation stochastic approximation (SPSA) algorithm. A model and a solution for the retailer's profit optimization problem are suggested, and an application of the SQG algorithm to optimization problems whose objective functions are given in the form of an ordinary differential equation is considered.
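
Of the reviewed methods, SPSA is easy to illustrate: it estimates the gradient from just two noisy function evaluations per iteration, regardless of dimension. A minimal sketch on a noisy quadratic, with the standard gain schedules and illustrative constants:

    import numpy as np

    rng = np.random.default_rng(3)

    def f(x):                                   # noisy objective, minimum at [1, 1]
        return np.sum((x - 1.0) ** 2) + 0.01 * rng.normal()

    x = np.zeros(2)
    for k in range(1, 501):
        a_k = 0.1 / k ** 0.602                  # step-size schedule
        c_k = 0.1 / k ** 0.101                  # perturbation-size schedule
        delta = rng.choice([-1.0, 1.0], size=x.size)   # Rademacher directions
        ghat = (f(x + c_k * delta) - f(x - c_k * delta)) / (2 * c_k * delta)
        x = x - a_k * ghat
    print(x)                                    # approaches [1, 1]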

Relevance:

20.00%

Publisher:

Abstract:

Protein engineering aims to improve the properties of enzymes and affinity reagents by genetic changes. Typical engineered properties are affinity, specificity, stability, expression, and solubility. Because proteins are complex biomolecules, the effects of specific genetic changes are seldom predictable. Consequently, a popular strategy in protein engineering is to create a library of genetic variants of the target molecule and subject the population to a selection process that sorts the variants by the desired property. This technique, called directed evolution, is a central tool for refining protein-based products used in a wide range of applications, from laundry detergents to anti-cancer drugs. New methods are continuously needed to generate larger gene repertoires and compatible selection platforms in order to shorten the development timeline for new biochemicals.

In the first study of this thesis, primer extension mutagenesis was revisited to establish higher-quality gene variant libraries in Escherichia coli cells. In the second study, recombination was explored as a method to expand the number of screenable enzyme variants. A selection platform was developed to improve antigen binding fragment (Fab) display on filamentous phages in the third article, and in the fourth study novel design concepts were tested with two differentially randomized recombinant antibody libraries. Finally, in the last study, the performance of the same antibody repertoire was compared in phage display selections as a genetic fusion to different phage capsid proteins and in different antibody formats, Fab vs. single-chain variable fragment (ScFv), in order to find the most suitable display platform for the library at hand.

As a result of the studies, a novel gene library construction method, termed selective rolling circle amplification (sRCA), was developed. The method increases the mutagenesis frequency to close to 100% in the final library and the number of transformants over 100-fold compared with traditional primer extension mutagenesis. In the second study, Cre/loxP recombination was found to be an appropriate tool for resolving the DNA concatemer resulting from error-prone RCA (epRCA) mutagenesis into monomeric circular DNA units for higher-efficiency transformation into E. coli. Library selections against antigens of various sizes in the fourth study demonstrated that diversity placed closer to the antigen binding site of antibodies supports the generation of antibodies against haptens and peptides, whereas diversity at more peripheral locations is better suited for targeting proteins. The conclusion from the comparison of display formats was that the truncated capsid protein three (p3Δ) of filamentous phage was superior to the full-length p3 and protein nine (p9) in obtaining a high number of uniquely specific clones. Especially for digoxigenin, a difficult hapten target, the antibody repertoire as ScFv-p3Δ provided the clones with the highest binding affinity. This thesis on the construction, design, and selection of gene variant libraries contributes to the practical know-how of directed evolution and contains useful information to support scientists in the field in their undertakings.

Relevance:

20.00%

Publisher:

Abstract:

This study focused on identifying various system boundaries and evaluating methods for estimating the energy performance of biogas production. First, the output-input ratio method used for evaluating energy performance at different system boundaries was reviewed. Secondly, ways to assess the efficiency of biogas use and the parasitic energy demand were investigated. Thirdly, an approach for comparing biogas production to other energy production methods was evaluated. Data from an existing biogas plant, located in Finland, was used for the evaluation of the methods. The results indicate that calculating and comparing the output-input ratios (Rpr1, Rpr2, Rut, Rpl and Rsy) can be used in evaluating the performance of a biogas production system. In addition, the parasitic energy demand calculation (w) and the efficiency of utilizing the produced biogas (η) provide detailed information on the energy performance of the biogas plant. Furthermore, Rf and the energy output in relation to the total solid mass of the feedstock (FO/TS) are useful in comparing biogas production with other energy recovery technologies. In conclusion, for the comparability of biogas plants it is essential that their energy performance be calculated in a more consistent manner in the future.
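
The ratio calculations can be sketched directly. The boundary definitions (Rpr, Rut, Rpl, Rsy) are specific to the thesis, so the sketch below only shows the generic pattern with assumed figures, not the plant's actual data:

    E_biogas = 3200.0     # energy content of produced biogas, MWh/a (assumed)
    E_input = 800.0       # energy input across one chosen boundary, MWh/a (assumed)
    E_parasitic = 450.0   # plant's own electricity and heat use, MWh/a (assumed)
    E_utilized = 2600.0   # biogas energy actually utilized, MWh/a (assumed)

    R = E_biogas / E_input       # an output-input ratio for this boundary
    w = E_parasitic / E_biogas   # parasitic energy demand
    eta = E_utilized / E_biogas  # efficiency of utilizing the produced biogas
    print(f"R = {R:.2f}, w = {w:.2f}, eta = {eta:.2f}")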

Relevance:

20.00%

Publisher:

Abstract:

In today's logistics environment, there is a tremendous need for accurate cost information and cost allocation. Companies searching for a proper solution often come across activity-based costing (ABC) or one of its variations, which utilizes cost drivers to allocate the costs of activities to cost objects. To allocate the costs accurately and reliably, the selection of appropriate cost drivers is essential for realizing the benefits of the costing system. The purpose of this study is to validate the transportation cost drivers of a Finnish wholesaler company and ultimately to select the best possible driver alternatives for the company. The use of cost driver combinations as an alternative is also studied. The study was conducted as part of the case company's applied ABC project, using statistical research as the main research method supported by a theoretical, literature-based method. The main research tools featured in the study are simple and multiple regression analyses, which, together with the literature- and observation-based practicality analysis, form the basis for the advanced methods.

The results suggest that the most appropriate cost driver alternatives are delivery drops and internal delivery weight. The use of cost driver combinations is not recommended, as they do not provide substantially better results while simultaneously increasing measurement costs, complexity, and the effort of use. The use of internal freight cost drivers is also questionable, as the results indicate a weakening trend in their cost allocation capability towards the end of the period. Therefore, more research on internal freight cost drivers should be conducted before taking them into use.
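
The driver validation can be sketched as a least-squares fit of monthly transportation cost against a candidate driver, judged by the coefficient of determination. The data below are synthetic stand-ins for the case company's figures:

    import numpy as np

    rng = np.random.default_rng(7)
    drops = rng.uniform(50, 400, size=36)                   # monthly delivery drops
    cost = 120.0 * drops + 2000 + rng.normal(0, 1500, 36)   # monthly transport cost

    # Simple linear regression via least squares, plus R^2.
    X = np.stack([drops, np.ones_like(drops)], axis=1)
    beta, *_ = np.linalg.lstsq(X, cost, rcond=None)
    resid = cost - X @ beta
    r2 = 1.0 - resid @ resid / np.sum((cost - cost.mean()) ** 2)
    print(f"cost per drop: {beta[0]:.1f}, R^2 = {r2:.3f}")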

Relevance:

20.00%

Publisher:

Abstract:

Today, lean philosophy has gathered a great deal of popularity and interest in many industries. This customer-oriented philosophy helps in understanding the customer's value creation, which can be used to improve efficiency. In this research, a comprehensive study of lean and lean methods in the service industry was conducted. In the theoretical part, lean philosophy is studied at different levels, which helps in understanding its diversity. To support lean, this research also presents the basic concepts of process management. Lastly, the theoretical part presents a development model to support process development in a systematic way. The empirical part of the study consisted of experimental measurements carried out during the service center's product return process and of the analysis of this data. The measurements were used to map out factors that have a negative influence on the process flow. Several development propositions were discussed for removing these factors. Problems mainly occur due to challenges in controlling customers and due to a lack of responsibility and continuous improvement at the operational level. The development propositions concern factors such as changes in the service center's physical environment, the standardization of work tasks, and training. These measures will remove waste from the product return process and support the idea of continuous improvement.