24 results for Fermi-density-distribution function with two parameters
Abstract:
It has long been known that amino acids are the building blocks of proteins and govern their folding into specific three-dimensional structures. However, the details of this process are still unknown and represent one of the main problems in structural bioinformatics, a highly active research area focused on the prediction of three-dimensional structure and its relationship to protein function. The protein structure prediction procedure encompasses several steps, from searches and analyses of sequences and structures, through sequence alignment, to the creation of the structural model. Careful evaluation and analysis ultimately result in a hypothetical structure, which can be used to study biological phenomena in, for example, research at the molecular level, biotechnology and especially drug discovery and development. In this thesis, the structures of five proteins were modeled with template-based methods, which use proteins with known structures (templates) to model related or structurally similar proteins. The resulting models were an important asset for the interpretation and explanation of biological phenomena, such as amino acids and interaction networks that are essential for the function and/or ligand specificity of the studied proteins. The five proteins represent different case studies with their own challenges, such as varying template availability, which resulted in different structure prediction processes. This thesis presents the techniques and considerations that should be taken into account in the modeling procedure to overcome limitations and produce a hypothetical yet reliable three-dimensional structure. As each project shows, the reliability is highly dependent on the extensive incorporation of experimental data or known literature, and although experimental verification of in silico results is always desirable to increase reliability, the presented projects show that experimental studies can also greatly benefit from structural models. With the help of in silico studies, experiments can be targeted and precisely designed, thereby saving both money and time. As the programs used in structural bioinformatics are constantly improved and the range of templates increases through structural genomics efforts, the mutual benefits between in silico and experimental studies become even more prominent. Hence, reliable models of protein three-dimensional structures, achieved through careful planning and thoughtful execution, are and will continue to be valuable and indispensable sources of structural information to be combined with functional data.
Abstract:
The rotational speed of high-speed electric machines is over 15 000 rpm. These machines are compact in size relative to their power rating. As a consequence, the heat fluxes are high and the adequacy of cooling becomes an important design criterion. In high-speed machines, the air gap between the stator and rotor is a narrow flow channel. The cooling air is produced with a fan and the flow is then directed to the air gap. The flow in the gap does not provide sufficient cooling for the stator end windings, and therefore additional cooling is required. This study investigates the heat transfer and flow fields around the coil end windings when cooling jets are used. As a result, a new assembly is introduced for the cooling jets, with the benefits of a reduced number of hot spots, a lower pressure drop, and hence a lower power requirement for the cooling fan. The information gained can also be applied to improve the cooling of electric machines through geometry modifications. The objective of the research is to determine the locations of the hot spots and the pressure losses induced by different jet alternatives. Several possibilities for arranging the extra cooling are considered. In the suggested approach, cooling is provided by a row of air jets. The air jets have three main tasks: to cool the coils effectively by direct impingement, to increase and cool down the flow that enters the coil end space through the air gap, and to ensure the correct distribution of the flow by forming an air curtain with additional jets. One important aim of this study is to arrange the cooling jets in such a manner that hot spots are largely avoided. This enables a higher power density in high-speed motors. The cooling system can also be applied to ordinary electric machines when efficient cooling is needed. The numerical calculations have been performed using commercial Computational Fluid Dynamics software. Two geometries have been generated: cylindrical for the studied machine and Cartesian for the experimental model. The main parameters include the positions, arrangements and number of jets, the jet diameters, and the jet velocities. The investigated cases have been tested with two widely used turbulence models and a computational grid of over 500 000 cells. The experimental tests were made using a simplified model of the end winding space with cooling jets. In the experiments, emphasis was placed on flow visualisation. The computational analysis shows good agreement with the experimental results. Modelling of the cooling jet arrangement also enables a better understanding of the complex heat transfer in the end winding space.
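As a rough, illustrative companion to the pressure-drop and fan-power considerations above, the sketch below (not from the thesis; all numerical values are placeholders) computes the jet Reynolds number and the ideal fan power implied by a given nozzle size, jet velocity and pressure drop.

```python
import math

# Minimal sketch with illustrative placeholder values (not from the thesis):
# jet Reynolds number and the ideal fan power implied by a flow rate and pressure drop.
rho = 1.2        # air density [kg/m^3], assumed
mu = 1.8e-5      # dynamic viscosity of air [Pa*s], assumed
d_jet = 0.01     # nozzle diameter [m], placeholder
v_jet = 30.0     # jet exit velocity [m/s], placeholder
n_jets = 12      # number of jets in the row, placeholder
dp = 800.0       # pressure drop across the jet nozzles [Pa], placeholder

re_jet = rho * v_jet * d_jet / mu                      # jet Reynolds number
q_total = n_jets * v_jet * math.pi * d_jet ** 2 / 4    # total volumetric flow [m^3/s]
p_fan_ideal = dp * q_total                             # ideal fan power, losses neglected [W]

print(f"Re_jet = {re_jet:.0f}, Q = {q_total * 1000:.1f} l/s, P_fan ~ {p_fan_ideal:.1f} W")
```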
Abstract:
This thesis is concerned with state and parameter estimation in state space models. The estimation of states and parameters is an important task when mathematical modeling is applied to many different application areas, such as global positioning systems, target tracking, navigation, brain imaging, the spread of infectious diseases, biological processes, telecommunications, audio signal processing, stochastic optimal control, machine learning, and physical systems. In Bayesian settings, the estimation of states or parameters amounts to computation of the posterior probability density function. Except for a very restricted class of models, it is impossible to compute this density function in closed form. Hence, we need approximation methods. A state estimation problem involves estimating the states (latent variables) that are not directly observed in the output of the system. In this thesis, we use the Kalman filter, extended Kalman filter, Gauss–Hermite filters, and particle filters to estimate the states based on available measurements. Among these filters, particle filters are numerical methods for approximating the filtering distributions of non-linear non-Gaussian state space models via Monte Carlo. The performance of a particle filter depends heavily on the chosen importance distribution; an inappropriate choice of importance distribution can, for instance, lead to failure of convergence of the particle filter algorithm. In this thesis, we analyze the theoretical Lᵖ particle filter convergence with general importance distributions, where p ≥ 2 is an integer. A parameter estimation problem is concerned with inferring the model parameters from measurements. For high-dimensional complex models, estimation of parameters can be done by Markov chain Monte Carlo (MCMC) methods. In its operation, the MCMC method requires the unnormalized posterior distribution of the parameters and a proposal distribution. In this thesis, we show how the posterior density function of the parameters of a state space model can be computed by filtering-based methods, where the states are integrated out. This type of computation is then applied to estimate the parameters of stochastic differential equations. Furthermore, we compute the partial derivatives of the log-posterior density function and use the hybrid Monte Carlo and scaled conjugate gradient methods to infer the parameters of stochastic differential equations. The computational efficiency of MCMC methods is highly dependent on the chosen proposal distribution. A commonly used proposal distribution is Gaussian. With this kind of proposal, the covariance matrix must be well tuned; adaptive MCMC methods can be used to tune it. In this thesis, we propose a new way of updating the covariance matrix using the variational Bayesian adaptive Kalman filter algorithm.
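To make the role of the importance distribution concrete, the following is a minimal bootstrap particle filter sketch for a generic scalar state space model. It is not the code of the thesis; the model functions, noise levels and particle count are illustrative assumptions. In the bootstrap filter the importance distribution is simply the state transition density, which is exactly the kind of default choice whose limitations motivate analysing convergence for general importance distributions.

```python
import numpy as np

def bootstrap_particle_filter(ys, f, h, q_std, r_std, n_particles=500, x0_std=1.0, rng=None):
    """Minimal bootstrap particle filter for x_k = f(x_{k-1}) + q_k, y_k = h(x_k) + r_k."""
    rng = np.random.default_rng() if rng is None else rng
    particles = rng.normal(0.0, x0_std, n_particles)      # initial particle cloud
    means = []
    for y in ys:
        # Propagate through the dynamics: the importance distribution is the transition density.
        particles = f(particles) + rng.normal(0.0, q_std, n_particles)
        # Weight each particle by the measurement likelihood.
        w = np.exp(-0.5 * ((y - h(particles)) / r_std) ** 2)
        w /= w.sum()
        means.append(np.sum(w * particles))                # filtering mean estimate
        # Multinomial resampling to counteract weight degeneracy.
        particles = particles[rng.choice(n_particles, n_particles, p=w)]
    return np.array(means)

# Toy usage with an illustrative nonlinear scalar model.
rng = np.random.default_rng(0)
f = lambda x: 0.9 * x + np.sin(x)
h = lambda x: x ** 2 / 5.0
x, ys = 0.0, []
for _ in range(50):
    x = f(x) + rng.normal(0.0, 0.3)
    ys.append(h(x) + rng.normal(0.0, 0.5))
estimates = bootstrap_particle_filter(np.array(ys), f, h, q_std=0.3, r_std=0.5, rng=rng)
```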
Abstract:
Since the years preceding the Second World War, aircraft tracking has been a core interest of both military and non-military aviation. Over the subsequent years, both the technology and the configuration of radars have allowed users to deploy them in numerous fields, such as over-the-horizon radar, ballistic missile early warning systems and forward scatter fences. The last of these was arranged in a bistatic configuration. The bistatic radar has continuously re-emerged over the last eighty years for its intriguing capabilities and challenging configuration and formulation. The bistatic radar arrangement is used as the basis of all the analyses presented in this work. The aircraft tracking method based on VHF Doppler-only information, developed in the first part of this study, relies solely on Doppler frequency readings in relation to the time instants of their appearance. The corresponding inverse problem is solved by utilising a multistatic radar scenario with two receivers and one transmitter and using their frequency readings as the basis for aircraft trajectory estimation. The quality of the resulting trajectory is then compared with ground-truth information based on ADS-B data. The second part of the study deals with the development of a method for instantaneous Doppler curve extraction from a VHF time-frequency representation of the transmitted signal, with a three-receiver, one-transmitter configuration, based on a priori knowledge of the probability density function of the first-order derivative of the Doppler shift, and on a system of blocks for identifying, classifying and predicting the Doppler signal. The extraction capabilities of this set-up are tested with a recorded TV signal and simulated synthetic spectrograms. Further analyses are devoted to more comprehensive testing of the capabilities of the extraction method. Besides testing the method, classification of aircraft is performed on the extracted Bistatic Radar Cross Section profiles and the correlation between them for different types of aircraft. In order to properly estimate the profiles, the ADS-B aircraft location information is adjusted based on the extracted Doppler frequency and then used for Bistatic Radar Cross Section estimation. The classification is based on seven types of aircraft grouped by their size into three classes.
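For reference, the bistatic Doppler shift underlying the Doppler-only trajectory estimation can be written in the standard form below (a textbook bistatic radar relation, not a formula quoted from the thesis): the shift is proportional to the rate of change of the total transmitter-target-receiver path length,

```latex
f_D(t) = -\frac{1}{\lambda}\,\frac{\mathrm{d}}{\mathrm{d}t}\bigl(R_T(t) + R_R(t)\bigr),
```

where R_T is the transmitter-to-target range, R_R the target-to-receiver range and λ the carrier wavelength. Each receiver contributes one such Doppler history, and the estimated trajectory is the one whose ranges reproduce all measured histories simultaneously.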
Abstract:
The new EPR reactor concept is designed to cope with accidents in which the reactor core melts and the melt penetrates the pressure vessel. Inside the containment, an area has been designed where the melt is passively collected, retained and cooled. A so-called core catcher, built from cast iron elements, is placed in this area and flooded with water. The decay heat produced by the corium is transferred to the water, from which it is removed through the containment heat removal system. A large part of the heat is removed from the corium to the water above it, but to enhance heat transfer, water-filled cooling channels have also been placed underneath the core catcher. To verify the operation of the core catcher, the Volley test facility has been built at Lappeenranta University of Technology for this purpose. The facility consists of two full-scale cooling channels made of cast iron. The decay heat produced by the corium is simulated in the facility with electrical resistance heaters. This work describes how the simulations were carried out and compares the obtained values with the measurement results. The work focuses on the theory and mechanisms of heat transfer from the core catcher to the cooling channels. Three different correlations for heat transfer coefficients in pool boiling are presented. These correlations are particularly suitable for cases where only a few measured parameters are known. The second part of the work is the simulation of the Volley 04 experiments. First, the simulation approach was validated by comparing the results with those Volley 04 and 05 experiments in which the test could be run to a steady state and in which the behaviour of the coolant in the cooling channel was also recorded on video. The results of these simulations are very similar to the measurement results. At higher heating powers, water hammer occurred in the experiments, breaking the windows that allowed video recording. For this reason, in some of the Volley 04 experiments the windows were covered with metal plates. Some experiments had to be interrupted because of high thermal stresses in the facility. Simulations of such tests are not simple to perform: there is no visual observation of the water level, and there is no exact information on the steady-state coolant temperatures either, although some assumptions can be made on the basis of the Volley 05 experiments performed with the same parameters. The measurement results from those Volley 04 and 05 experiments that were video recorded and could be run to a steady state gave temperature values very similar to the simulations. Extrapolation of the interrupted experiments to a steady state did not succeed very well. The experiments had to be interrupted so far before thermal-hydraulic equilibrium that the steady-state boundary conditions could not be predicted, and in the absence of video recording no additional information on the water level was obtained. From the results, mainly order-of-magnitude estimates can be given of the temperatures at the measurement points. These temperatures are, however, clearly below the melting temperature of the cast iron used in the core catcher. Based on the simulations it can therefore be said that the cooling channel structures will not melt as long as there is even a small coolant flow in them and no more than a few adjacent channels are completely dry.
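The abstract does not state which three pool boiling correlations were used; as a representative example of the kind of correlation that requires only a few measured parameters (wall superheat and fluid properties), the classical Rohsenow nucleate pool boiling correlation can be written as

```latex
q'' = \mu_l\, h_{fg} \left[ \frac{g\,(\rho_l - \rho_v)}{\sigma} \right]^{1/2}
      \left( \frac{c_{p,l}\,\Delta T_e}{C_{sf}\, h_{fg}\, \mathrm{Pr}_l^{\,n}} \right)^{3},
```

where ΔT_e is the wall superheat, C_sf a surface-fluid constant and n an empirical exponent; the heat transfer coefficient then follows as h = q''/ΔT_e.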
Abstract:
The ultimate goal of any research in the mechanism/kinematic/design area may be called predictive design, i.e. the optimisation of mechanism proportions in the design stage without requiring extensive life and wear testing. This is an ambitious goal and can be realised through development and refinement of numerical (computational) technology in order to facilitate the design analysis and optimisation of complex mechanisms, mechanical components and systems. As a part of the systematic design methodology, this thesis concentrates on kinematic synthesis (kinematic design and analysis) methods in the mechanism synthesis process. The main task of kinematic design is to find all possible solutions, in the form of structural parameters, that accomplish the desired requirements of motion. The main formulations of kinematic design can be broadly divided into exact synthesis and approximate synthesis formulations. The exact synthesis formulation is based on solving n linear or nonlinear equations in n variables, and the solutions are obtained by adopting closed-form classical or modern algebraic solution methods or by using numerical solution methods based on polynomial continuation or homotopy. The approximate synthesis formulation is based on minimising the approximation error by direct optimisation. The main drawbacks of the exact synthesis formulation are: (ia) limitations on the number of design specifications and (iia) failure in handling design constraints, especially inequality constraints. The main drawbacks of the approximate synthesis formulations are: (ib) it is difficult to choose a proper initial linkage and (iib) it is hard to find more than one solution. Recent formulations for solving the approximate synthesis problem adopt polynomial continuation, providing several solutions, but they cannot handle inequality constraints. Based on practical design needs, the mixed exact-approximate position synthesis with two exact and an unlimited number of approximate positions has also been developed. The solution space is presented as a ground pivot map, but the pole between the exact positions cannot be selected as a ground pivot. In this thesis the exact synthesis problem of planar mechanisms is solved by generating all possible solutions for the optimisation process, including solutions in positive-dimensional solution sets, within inequality constraints on the structural parameters. Through the literature research it is first shown that the algebraic and numerical solution methods used in the research area of computational kinematics are capable of solving non-parametric algebraic systems of n equations in n variables but cannot handle the singularities associated with positive-dimensional solution sets. In this thesis the problem of positive-dimensional solution sets is solved by adopting the main principles from the mathematical research area of algebraic geometry for solving parametric algebraic systems of n equations and at least n+1 variables (parametric in the mathematical sense that all parameter values for which the system is solvable, including the degenerate cases, are considered). Adopting the developed solution method for solving the dyadic equations in direct polynomial form with two to three precision points, it has been algebraically proved and numerically demonstrated that the map of the ground pivots is ambiguous and that the singularities associated with positive-dimensional solution sets can be resolved.
The positive-dimensional solution sets associated with the poles might contain physically meaningful solutions in the form of optimal defect-free mechanisms. Traditionally, the mechanism optimisation of hydraulically driven boom mechanisms is done at an early stage of the design process. This results in optimal component design rather than optimal system-level design. Modern mechanism optimisation at the system level demands integration of kinematic design methods with mechanical system simulation techniques. In this thesis a new kinematic design method for hydraulically driven boom mechanisms is developed and integrated into mechanical system simulation techniques. The developed kinematic design method is based on the combination of the two-precision-point formulation and the optimisation (with mathematical programming techniques or with optimisation methods based on probability and statistics) of substructures using criteria calculated from the system-level response of multidegree-of-freedom mechanisms. For example, by adopting the mixed exact-approximate position synthesis in direct optimisation (using mathematical programming techniques) with two exact positions and an unlimited number of approximate positions, the drawbacks (ia)-(iib) have been eliminated. The design principles of the developed method are based on the design-tree approach to mechanical systems, and the design method is, in principle, capable of capturing the interrelationship between kinematic and dynamic synthesis simultaneously when the developed kinematic design method is integrated with the mechanical system simulation techniques.
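As background for the precision-point formulations discussed above, the planar dyad synthesis problem is commonly written in the textbook standard (complex-number) form shown below; this is the generic standard-form dyad equation, not an equation reproduced from the thesis:

```latex
\mathbf{W}\left(e^{i\beta_j} - 1\right) + \mathbf{Z}\left(e^{i\alpha_j} - 1\right) = \boldsymbol{\delta}_j,
\qquad j = 2,\dots,n,
```

where W and Z are the unknown dyad vectors, β_j and α_j the link rotations from the first to the j-th precision position, and δ_j the corresponding displacement of the precision point. Each additional precision position adds one complex equation, which is why the number of exactly satisfiable positions is limited and why mixed exact-approximate formulations become attractive.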
Abstract:
The present study was done with two different servo systems. In the first, a servo-hydraulic system was identified and then controlled by a fuzzy gain-scheduling controller. For the second servo system, an electro-magnetic linear motor, the suppression of mechanical vibration and the position tracking of a reference model were studied using a neural network and an adaptive backstepping controller, respectively. The following are short descriptions of the research methods. Electro-Hydraulic Servo Systems (EHSS) are commonly used in industry. These kinds of systems are nonlinear in nature and their dynamic equations have several unknown parameters. System identification is a prerequisite for the analysis of a dynamic system. One of the most promising novel evolutionary algorithms for solving global optimization problems is Differential Evolution (DE). In this study, the DE algorithm is proposed for handling nonlinear constraint functions with boundary limits of variables in order to find the best parameters of a servo-hydraulic system with a flexible load. DE guarantees fast convergence and accurate solutions regardless of the initial values of the parameters. The control of hydraulic servo systems has been the focus of intense research over the past decades. These kinds of systems are nonlinear in nature and generally difficult to control, since changing system parameters under fixed gains can cause overshoot or even loss of system stability. The highly non-linear behaviour of these devices makes them ideal subjects for applying different types of sophisticated controllers. The study is concerned with a second-order model reference for positioning control of a flexible-load servo-hydraulic system using fuzzy gain scheduling. In the present research, acceleration feedback was used to compensate for the lack of damping in the hydraulic system. To compare the results, a P controller with feed-forward acceleration and different gains in extension and retraction was used. The design procedure for the controller and the experimental results are discussed. The results suggest that the fuzzy gain-scheduling controller decreases the position reference tracking error. The second part of the research was done on a Permanent Magnet Linear Synchronous Motor (PMLSM). In this study, a recurrent neural network compensator for suppressing mechanical vibration in a PMLSM with a flexible load is studied. The linear motor is controlled by a conventional PI velocity controller, and the vibration of the flexible mechanism is suppressed by using a hybrid recurrent neural network. The differential evolution strategy and the Kalman filter method are used to avoid the local minimum problem and to estimate the states of the system, respectively. The proposed control method is first designed using a non-linear simulation model built in Matlab Simulink and then implemented in a practical test rig. The proposed method works satisfactorily and suppresses the vibration successfully. In the last part of the research, a nonlinear load control method is developed and implemented for a PMLSM with a flexible load. The purpose of the controller is to track the flexible load to the desired position reference as fast as possible and without excessive oscillation. The control method is based on an adaptive backstepping algorithm whose stability is ensured by the Lyapunov stability theorem. The states of the system needed in the controller are estimated using the Kalman filter.
The proposed controller is implemented and tested in a linear motor test drive, and the responses are presented.
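As an illustration of how a DE-based parameter identification of the kind described above can be set up, the sketch below implements a generic differential evolution loop that fits model parameters to measured data by minimising a sum-of-squares cost within box constraints. The cost function, bounds and DE settings are illustrative assumptions, not the identification problem of the thesis.

```python
import numpy as np

def differential_evolution(cost, bounds, n_pop=30, n_gen=200, F=0.7, CR=0.9, rng=None):
    """Minimal DE/rand/1/bin sketch: minimise cost(theta) within box bounds."""
    rng = np.random.default_rng() if rng is None else rng
    lo, hi = np.array(bounds).T
    dim = len(bounds)
    pop = lo + rng.random((n_pop, dim)) * (hi - lo)         # random initial population
    costs = np.array([cost(p) for p in pop])
    for _ in range(n_gen):
        for i in range(n_pop):
            a, b, c = pop[rng.choice(n_pop, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)        # mutation + bound handling
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True                  # ensure at least one mutated gene
            trial = np.where(cross, mutant, pop[i])
            c_trial = cost(trial)
            if c_trial <= costs[i]:                          # greedy selection
                pop[i], costs[i] = trial, c_trial
    best = np.argmin(costs)
    return pop[best], costs[best]

# Illustrative usage: fit the gain and time constant of a first-order step response.
t = np.linspace(0.0, 5.0, 100)
y_meas = 2.0 * (1 - np.exp(-t / 0.8)) + 0.02 * np.random.default_rng(1).normal(size=t.size)
cost = lambda p: np.sum((p[0] * (1 - np.exp(-t / p[1])) - y_meas) ** 2)
theta_best, cost_best = differential_evolution(cost, bounds=[(0.1, 10.0), (0.05, 5.0)])
```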
Abstract:
This thesis describes the development of advanced silicon radiation detectors and their characterization by simulations, used in the search for elementary particles at the European Organization for Nuclear Research, CERN. Silicon particle detectors will face extremely harsh radiation in the proposed upgrade of the Large Hadron Collider, the future high-energy physics experiment Super-LHC. The increase in the maximal fluence and the beam luminosity, up to 10^16 neq/cm^2 and 10^35 cm^-2 s^-1, will require detectors with a dramatic improvement in radiation hardness, since such a fluence is far beyond the operational limits of present silicon detectors. The main goals of detector development concentrate on minimizing the radiation degradation. This study contributes mainly to the device engineering technology for developing more radiation hard particle detectors with better characteristics. Defect engineering technology is also discussed. In the region nearest the beam in Super-LHC, the only detector choice is 3D detectors, the alternative being to replace other types of detectors every two years. The interest in 3D silicon detectors is continuously growing because of their many advantages compared to conventional planar detectors: the devices can be fully depleted at low bias voltages, the speed of the charge collection is high, and the collection distances are about one order of magnitude shorter than those of planar-technology strip and pixel detectors with electrodes limited to the detector surface. The 3D detectors also exhibit high radiation tolerance, and thus the ability of the silicon detectors to operate after irradiation is increased. Two parameters, the full depletion voltage and the electric field distribution, are discussed in more detail in this study. The full depletion of the detector is important because only the depleted area in the detector is active for particle tracking. Similarly, a high electric field in the detector makes the detector volume sensitive, while low-field areas are insensitive to particles. This study shows the simulation results for the full depletion voltage and the electric field distribution for various types of 3D detectors. First, a 3D detector with an n-type substrate and partially penetrating p-type electrodes is studied. A detector of this type has a low electric field on the pixel side and it suffers from type inversion. Next, the substrate is changed to p-type and detectors having electrodes with a single doping type and with dual doping types are examined. The electric field profile in a dual-column 3D Si detector is more uniform than that in a single-type-column 3D detector. The dual-column detectors are the best in radiation hardness because of their low depletion voltages and short drift distances.
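For context, the full depletion voltage of a conventional planar detector is usually estimated with the standard one-dimensional relation below (a textbook relation, not a result of the thesis); in a 3D detector the depletion grows laterally between the column electrodes, so the relevant distance is the much smaller inter-electrode spacing, which is why the depletion voltages remain low even after heavy irradiation:

```latex
V_{fd} \approx \frac{q\,\lvert N_{\mathrm{eff}}\rvert\, d^{2}}{2\,\varepsilon_0 \varepsilon_{\mathrm{Si}}},
```

where q is the elementary charge, N_eff the effective doping concentration, d the distance to be depleted (the wafer thickness for a planar device, roughly the electrode spacing for a 3D device) and ε_0 ε_Si the permittivity of silicon.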
Abstract:
The aim of this work was to identify the factors affecting the service life of the carrier belt used for paper winding in a belt-supported slitter-winder, and to find solutions to the problems related to too short a service life. The work first examined the operation of a belt-supported slitter-winder, and in more detail its winding section and the parameters of the winding process. The factors affecting the service life of the carrier belt were examined on the basis of the literature and of material available at the target company. Next, ideas for the structure and materials of the carrier belt were presented and compared with each other and with the current belt structure. A two-dimensional finite element model simulating the current carrier belt was created. According to this model, the uneven belt wear in the direction of the winder rolls is caused by differences in the stresses in the thickness direction of the belt. The running of the belt was evaluated experimentally with two different types of belts. The running of the belt was most affected by the pretension force; when it was reduced, the difference in peripheral speed between the loading roll simulating the paper roll and the belt roll decreased. A possible connection was found between the data obtained as output from the winding process and the belt geometry. Polyethylene foams are, in terms of their friction, promising materials for the friction coating of the belt. On the other hand, the mechanical properties of polyethylene foams are not well known, and they have not been used in applications requiring wear resistance. The wear mechanism of the outer surface of the belt is not fully understood.
Abstract:
This master's thesis is focused on optimizing the parameters of a distribution transformer with respect to a low voltage direct current (LVDC) distribution system. The transformer is one of the main parts of an LVDC distribution system. It is studied from several viewpoints, such as its capability to filter the harmonics caused by the rectifier, its losses, and its short-circuit current limiting. Determining the available short-circuit currents is one of the most important aspects of designing power distribution systems. Short circuits and their effects must be considered when selecting electrical equipment, circuit protection and other devices.
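As a small illustration of the short-circuit current limiting aspect mentioned above, the sketch below estimates the prospective secondary short-circuit current of a transformer from its rating and per-unit short-circuit impedance, assuming an infinitely strong supply; the numerical values are placeholders, not parameters from the thesis.

```python
import math

# Illustrative transformer data (placeholders, not from the thesis).
s_n = 100e3   # rated apparent power [VA]
u_n = 400.0   # rated secondary line-to-line voltage [V]
z_k = 0.04    # per-unit short-circuit impedance (e.g. 4 %)

i_rated = s_n / (math.sqrt(3) * u_n)   # rated secondary current [A]
i_sc = i_rated / z_k                   # prospective short-circuit current, infinite bus assumed [A]

print(f"I_rated = {i_rated:.0f} A, I_sc ~ {i_sc:.0f} A")
```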
Abstract:
It is a well-known phenomenon that the constant amplitude fatigue limit of a large component is lower than the fatigue limit of a small specimen made of the same material. In notched components the opposite occurs: the fatigue limit defined as the maximum stress at the notch is higher than that achieved with smooth specimens. These two effects have been taken into account in most design handbooks with the help of experimental formulas or design curves. The basic idea of this study is that the size effect can mainly be explained by the statistical size effect. A component subjected to an alternating load can be assumed to form a sample of initiated cracks at the end of the crack initiation phase. The size of the sample depends on the size of the specimen in question. The main objective of this study is to develop a statistical model for the estimation of this kind of size effect. It was shown that the size of a sample of initiated cracks should be based on the stressed surface area of the specimen. In the case of a varying stress distribution, an effective stress area must be calculated. It is based on the decreasing probability of equally sized initiated cracks at lower stress levels. If the distribution function of the parent population of cracks is known, the distribution of the maximum crack size in a sample can be defined. This makes it possible to calculate an estimate of the largest expected crack for any sample size. An estimate of the fatigue limit can then be calculated with the help of linear elastic fracture mechanics. In notched components another source of size effect has to be taken into account. If we consider two specimens of similar shape but different size, it can be seen that the stress gradient in the smaller specimen is steeper. If there is an initiated crack in both of them, the stress intensity factor at the crack in the larger specimen is higher. The second goal of this thesis is to create a calculation method for this factor, which is called the geometric size effect. The proposed method for the calculation of the geometric size effect is also based on the use of linear elastic fracture mechanics. It is possible to calculate an accurate value of the stress intensity factor in a non-linear stress field using weight functions. The calculated stress intensity factor values at the initiated crack can be compared to the corresponding stress intensity factor due to constant stress. The notch size effect is calculated as the ratio of these stress intensity factors. The presented methods were tested against experimental results taken from three German doctoral theses. Two candidates for the parent population of initiated cracks were found: the Weibull distribution and the log-normal distribution. Both of them can be used successfully for the prediction of the statistical size effect for smooth specimens. In the case of notched components, the geometric size effect due to the stress gradient has to be combined with the statistical size effect. The proposed method gives good results as long as the notch in question is blunt enough. For very sharp notches, with a stress concentration factor of about 5 or higher, the method does not give sufficient results. It was shown that the plastic portion of the strain becomes quite high at the root of such notches. The use of linear elastic fracture mechanics therefore becomes questionable.
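The core of the statistical argument can be summarised with two standard relations, written here in generic textbook form rather than copied from the thesis: the distribution of the largest initiated crack in a sample of n cracks, and the fatigue limit implied by that crack through linear elastic fracture mechanics,

```latex
F_{a_{\max}}(a) = \bigl[F(a)\bigr]^{n},
\qquad
\Delta\sigma_{th} \approx \frac{\Delta K_{th}}{Y\sqrt{\pi\, a_{\max}}},
```

where F(a) is the parent distribution of initiated crack sizes (e.g. Weibull or log-normal), n the number of cracks in the sample (proportional to the effective stressed surface area), ΔK_th the threshold stress intensity factor range and Y the crack geometry factor. A larger stressed area gives a larger n, hence a larger expected maximum crack and a lower predicted fatigue limit, which is the statistical size effect.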
Abstract:
Deregulation of the electricity sector opened electricity sale and production to competitive forces, while in the network business, electricity transmission and distribution, natural monopoly positions were recognised. Deregulation was accompanied by efficiency-oriented thinking throughout the electricity supply industry. For electricity distribution this meant a transition from a public service towards a profit-driven business guided by economic regulation. Regulation is the primary means to enforce societal and other goals in the regulated monopoly sector. The design of economic regulation is concerned with two main attributes: the end-customer price and the quality of electricity distribution services. Regulation limits the costs of the regulated company but also defines the desired quality of monopoly services. The characteristics of the regulatory framework and the incentives it provides are therefore decisive for the electricity distribution sector. Regulation is not a static factor; changes in regulatory practices cause discontinuity points, which in turn generate risks. A variety of social and environmental concerns, together with technological advancements, have emphasised the relevance of quality regulation, which is expected to lead to the large-scale replacement of overhead lines with underground cables. The electricity network construction activity is therefore currently witnessing revolutionary changes in its competitive landscape. In a business characterised by high statutory involvement and a high level of sunk costs, recognising and understanding the regulatory risks becomes a key success factor. As a response, electricity distribution companies have turned to outsourcing to attain efficiency and quality goals. This doctoral thesis addresses the impacts of regulatory risks on electricity network construction, which is a commonly outsourced activity in the electricity distribution network sector. The chosen research approach is characterised as action-analytical research, on account of the fact that regulatory risks depend greatly on the individual nature of the regulatory regime applied in the electricity distribution sector. The main contribution of this doctoral thesis is to develop a concept for recognising and managing the business risks stemming from economic regulation. The degree of outsourcing in the sector is expected to increase in the years to come. The results of the research provide new knowledge for managing regulatory risks when outsourcing services.
Abstract:
In this thesis, three experiments with atomic hydrogen (H) at low temperatures (T < 1 K) are presented. The experiments were carried out with two-dimensional (2D) and three-dimensional (3D) H gas, and with H atoms trapped in a solid H2 matrix. The main focus of this work is on interatomic interactions, which have certain specific features in the three systems considered. A common feature is the very high density of atomic hydrogen; the systems are close to quantum degeneracy. Short-range interactions in collisions between atoms are important in gaseous H. The system of H in H2 differs dramatically because the atoms remain fixed in the H2 lattice and the properties are governed by long-range interactions with the solid matrix and with other H atoms. The main tools in our studies were the methods of magnetic resonance, with electron spin resonance (ESR) at 128 GHz being used as the principal detection method. For the first time in experiments with H in high magnetic fields and at low temperatures, we combined ESR and NMR to perform electron-nuclear double resonance (ENDOR) as well as coherent two-photon spectroscopy. This allowed us to distinguish between different types of interactions in the magnetic resonance spectra. The experiments with 2D H gas utilized the thermal compression method in a homogeneous magnetic field, developed in our laboratory. In this work, methods were developed for direct studies of 3D H at high density, and for creating high-density samples of H in H2. We measured magnetic resonance line shifts due to collisions in the 2D and 3D H gases. First we observed that the cold collision shift in 2D H gas composed of atoms in a single hyperfine state is much smaller than predicted by the mean-field theory. This motivated us to carry out similar experiments with 3D H. In 3D H the cold collision shift was found to be an order of magnitude smaller for atoms in a single hyperfine state than for a mixture of atoms in two different hyperfine states. The collisional shifts were found to be in fair agreement with the theory, which takes into account the symmetrization of the wave functions of the colliding atoms. The origin of the small shift in the 2D H gas composed of single hyperfine state atoms is not yet understood. The measurement of the shift in 3D H provides an experimental determination of the difference of the scattering lengths of ground state atoms. The experiment with H atoms captured in an H2 matrix at temperatures below 1 K originated from our work with H gas. We found that samples of H in H2 were formed during recombination of gas phase H, enabling sample preparation at temperatures below 0.5 K. Alternatively, we created the samples by electron impact dissociation of H2 molecules in situ in the solid. By the latter method we reached the highest densities of H atoms reported so far, 3.5(5)×10^19 cm^-3. The H atoms were found to be stable for weeks at temperatures below 0.5 K. The observation of dipolar interaction effects provides a verification of the density measurement. Our results point to two different sites for H atoms in the H2 lattice. The steady-state nuclear polarizations of the atoms were found to be non-thermal. The possibility of further increasing the impurity H density is considered. At higher densities and lower temperatures it might be possible to observe phenomena related to quantum degeneracy in the solid.
Abstract:
The purpose of this work was to examine, using a new imaging technique, the dispersion of oxygen gas into a medium-consistency pulp suspension in a laboratory mixer. The aim was to examine the homogeneity of the resulting dispersion from four different imaging points on the lid and side of the mixer. At the same time, the power consumption of the mixer and the connection between the power consumption and the resulting dispersion were also examined. One purpose of the work was also to assess the possibilities of the new imaging technique in applications of this type, as the work is part of the PulpVision project, in which new machine vision applications for the pulp and paper industry are being developed. The experimental part of the work consisted of mixing tests, in which the bubble size distribution formed in pine and birch suspensions was examined from four imaging points at two mixer speeds. In addition to the mixing tests, power consumption tests were carried out, in which the power consumption of the mixer was examined as a function of the filling degree of the mixer with birch and pine suspensions and with water. Based on the results, the area of the bubbles observed in the birch suspension was about half of the area of the bubbles observed in the pine suspension. When the rotational speed of the mixer rotor was halved, the size of the oxygen bubbles dispersed into the suspension increased considerably. Observed through the four imaging ports, the smallest bubbles were found to occur in the lower part of the mixer. The power consumption with the pine suspension was observed to increase by one fifth when the filling degree of the mixer increased by 10 %, whereas the increase in power consumption with the birch suspension was only half of this. The imaging equipment was found to be sufficient for this kind of application, especially when a pulsed laser is used as the light source.