51 results for Spectral method with domain decomposition
in Doria (National Library of Finland DSpace Services) - National Library of Finland, Finland
Abstract:
The ultimate goal of any research in the mechanism/kinematic/design area may be called predictive design, i.e. the optimisation of mechanism proportions in the design stage without requiring extensive life and wear testing. This is an ambitious goal and can be realised through the development and refinement of numerical (computational) technology to facilitate the design analysis and optimisation of complex mechanisms, mechanical components and systems. As part of a systematic design methodology, this thesis concentrates on kinematic synthesis (kinematic design and analysis) methods in the mechanism synthesis process. The main task of kinematic design is to find all possible solutions, in the form of structural parameters, that accomplish the desired requirements of motion. The main formulations of kinematic design can be broadly divided into exact synthesis and approximate synthesis formulations. The exact synthesis formulation is based on solving n linear or nonlinear equations in n variables, and the solutions are obtained by adopting closed-form classical or modern algebraic solution methods, or by using numerical solution methods based on polynomial continuation or homotopy. The approximate synthesis formulation is based on minimising the approximation error by direct optimisation. The main drawbacks of the exact synthesis formulation are: (ia) limitations on the number of design specifications and (iia) failure in handling design constraints, especially inequality constraints. The main drawbacks of approximate synthesis formulations are: (ib) it is difficult to choose a proper initial linkage and (iib) it is hard to find more than one solution. Recent formulations for solving the approximate synthesis problem adopt polynomial continuation, providing several solutions, but they cannot handle inequality constraints.
Based on practical design needs, a mixed exact-approximate position synthesis with two exact and an unlimited number of approximate positions has also been developed. The solution space is presented as a ground pivot map, but the pole between the exact positions cannot be selected as a ground pivot. In this thesis the exact synthesis problem of planar mechanisms is solved by generating all possible solutions for the optimisation process, including solutions in positive-dimensional solution sets, within inequality constraints on the structural parameters. Through the literature research it is first shown that the algebraic and numerical solution methods used in the research area of computational kinematics are capable of solving non-parametric algebraic systems of n equations in n variables but cannot handle the singularities associated with positive-dimensional solution sets. In this thesis the problem of positive-dimensional solution sets is solved by adopting the main principles from the mathematical research area of algebraic geometry for solving parametric algebraic systems of n equations in at least n+1 variables (parametric in the mathematical sense that all parameter values for which the system is solvable are considered, including the degenerate cases). By adopting the developed solution method to solve the dyadic equations in direct polynomial form for two to three precision points, it has been algebraically proved and numerically demonstrated that the map of the ground pivots is ambiguous and that the singularities associated with positive-dimensional solution sets can be resolved. The positive-dimensional solution sets associated with the poles may contain physically meaningful solutions in the form of optimal defect-free mechanisms. Traditionally the mechanism optimisation of hydraulically driven boom mechanisms is done at an early stage of the design process. This results in optimal component design rather than optimal system-level design.
Modern mechanism optimisation at the system level demands integration of kinematic design methods with mechanical system simulation techniques. In this thesis a new kinematic design method for hydraulically driven boom mechanisms is developed and integrated with mechanical system simulation techniques. The developed kinematic design method is based on combinations of the two-precision-point formulation and on optimisation (with mathematical programming techniques or with optimisation methods based on probability and statistics) of substructures using criteria calculated from the system-level response of multi-degree-of-freedom mechanisms. For example, by adopting the mixed exact-approximate position synthesis in direct optimisation (using mathematical programming techniques) with two exact positions and an unlimited number of approximate positions, the drawbacks (ia)-(iib) have been eliminated. The design principles of the developed method are based on the design-tree approach to mechanical systems, and the design method is, in principle, capable of capturing the interrelationship between kinematic and dynamic synthesis simultaneously when the developed kinematic design method is integrated with the mechanical system simulation techniques.
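The exact-synthesis step described above amounts to solving a square system of n equations in n structural unknowns by algebraic or numerical means. A minimal numerical sketch of that idea, using a Newton iteration on a toy two-precision-point problem (all angles and targets are hypothetical illustration data, not the dyadic equations of the thesis):

```python
import numpy as np

# Toy exact synthesis: find a crank length a and mounting phase phi so
# that the crank tip x-coordinate a*cos(theta_i + phi) reaches two
# prescribed targets -- two equations in two unknowns, solved by Newton.
thetas = np.array([0.3, 0.9])          # prescribed crank angles (rad)
targets = np.array([0.9, 0.5])         # desired x-coordinates

def residual(x):
    a, phi = x
    return a * np.cos(thetas + phi) - targets

def jacobian(x):
    a, phi = x
    return np.column_stack([np.cos(thetas + phi),
                            -a * np.sin(thetas + phi)])

x = np.array([1.0, 0.0])               # initial guess
for _ in range(20):                    # Newton step: x <- x - J^-1 r
    x = x - np.linalg.solve(jacobian(x), residual(x))

a, phi = x                             # structural parameters found
```

A homotopy or polynomial-continuation solver would instead track all solution branches of such a system, which is what allows the methods above to enumerate every candidate linkage rather than the single root a local Newton iteration finds.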
Abstract:
Fatigue life assessment of welded structures is commonly based on the nominal stress method, but more flexible and accurate methods have been introduced. In general, the assessment accuracy improves as more localized information about the weld is incorporated. The structural hot spot stress method includes the influence of macro-geometric effects and structural discontinuities on the design stress but excludes the local features of the weld. In this thesis, the limitations of the structural hot spot stress method are discussed, and a modified structural stress method with improved accuracy is developed and verified for selected welded details. The fatigue life of structures in the as-welded state consists mainly of crack growth from pre-existing cracks or defects. The crack growth rate depends on the crack geometry and the stress state on the crack face plane. This means that the stress level and the shape of the stress distribution along the assumed crack path govern the total fatigue life. In many structural details the stress distribution is similar, and adequate fatigue life estimates can be obtained simply by adjusting the stress level based on a single stress value, i.e., the structural hot spot stress. There are, however, cases for which the structural stress approach is less appropriate because the stress distribution differs significantly from the more common cases. Plate edge attachments and plates on elastic foundations are examples of structures with this type of stress distribution. The influence of fillet weld size and weld load variation on the stress distribution is another central topic of this thesis. Structural hot spot stress determination is generally based on a procedure that involves extrapolation of plate surface stresses.
Other possibilities for determining the structural hot spot stress are to extrapolate stresses through the thickness at the weld toe, or to use Dong's method, which includes through-thickness extrapolation at some distance from the weld toe. Both of these latter methods are less sensitive to the FE mesh used. Structural stress based on surface extrapolation is sensitive to the extrapolation points selected and to the FE mesh used near these points. Rules for proper meshing, however, are well defined and not difficult to apply. To improve the accuracy of the traditional structural hot spot stress, a multi-linear stress distribution is introduced. The magnitude of the weld toe stress after linearization depends on the weld size, weld load and plate thickness. Simple equations have been derived by comparing assessment results based on the local linear stress distribution with LEFM-based calculations. The proposed method is called the modified structural stress method (MSHS), since the structural hot spot stress (SHS) value is corrected using information on weld size and weld load. The correction procedure is verified using fatigue test results found in the literature. In addition, a test case was conducted comparing the proposed method with other local fatigue assessment methods.
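The surface-extrapolation procedure mentioned above can be sketched in a few lines: plate surface stresses are read from the FE model at fixed distances in front of the weld toe and extrapolated linearly to the toe. The 0.4t / 1.0t read-out points follow common practice of the fine-mesh hot spot procedure (e.g. IIW-style recommendations), and the stress values are hypothetical FE results, not data from the thesis:

```python
# Linear surface extrapolation of the structural hot spot stress.
# Stresses are read at 0.4t and 1.0t from the weld toe (t = plate
# thickness) and the line through them is evaluated at the toe (x = 0).
def hot_spot_stress(s_04t, s_10t):
    # slope of the line through (0.4t, s_04t) and (1.0t, s_10t),
    # evaluated at x = 0; equivalent to 1.67*s_04t - 0.67*s_10t
    return s_04t + 0.4 / (1.0 - 0.4) * (s_04t - s_10t)

sigma_hs = hot_spot_stress(120.0, 100.0)   # MPa, illustrative values
```

The MSHS correction described in the abstract would then adjust this value further using the weld size and weld load, which the plain extrapolation ignores.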
Abstract:
The purpose of this study was to investigate some important features of granular flows and suspension flows by computational simulation methods. Granular materials have been considered an independent state of matter because of their complex behaviour. They sometimes behave like a solid, sometimes like a fluid, and sometimes contain both phases in equilibrium. Computer simulation of dense shear granular flows of monodisperse, spherical particles shows that the collisional model of contacts yields coexistence of solid and fluid phases, while the frictional model represents a uniform flow of the fluid phase. A comparison between the stress signals from the simulations and experiments revealed, however, that the collisional model gives a proper match with the experimental evidence. Although the effect of gravity is found to be important in the sedimentation of the solid part, the stick-slip behaviour associated with the collisional model looks more similar to that of the experiments. Mathematical formulations based on the kinetic theory have been derived for moderate solid volume fractions under the assumption of flow homogeneity. In order to perform simulations that can provide such an ideal flow, unbounded granular shear flows were simulated; in this way, homogeneous flow properties could be achieved at moderate solid volume fractions. A new algorithm, namely the nonequilibrium approach, was introduced to show the features of self-diffusion in granular flows. Using this algorithm, a one-way flow can be extracted from the entire flow, which not only provides a straightforward calculation of the self-diffusion coefficient but can also qualitatively determine the deviation of self-diffusion from the linear law in some regions near the wall in bounded flows.
Nevertheless, the average lateral self-diffusion coefficient calculated by the aforementioned method showed good agreement with the predictions of the kinetic theory formulation. In continuation of the computer simulation of shear granular flows, numerical and theoretical investigations were carried out on mass transfer and particle interactions in particulate flows. In this context, the boundary element method and its combination with the spectral method, using the special capabilities of wavelets, were introduced as efficient numerical methods for solving the governing equations of mass transfer in particulate flows. A theoretical formulation of fluid dispersivity in suspension flows revealed that the fluid dispersivity depends on the fluid properties and particle parameters as well as on the fluid-particle and particle-particle interactions.
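A self-diffusion coefficient of the kind discussed above is commonly estimated from particle trajectories via the Einstein relation, MSD(t) ≈ 2·d·D·t in d spatial dimensions. A minimal sketch on synthetic random-walk trajectories (stand-ins for the granular-flow particle data; the thesis's nonequilibrium one-way-flow extraction is not reproduced here):

```python
import numpy as np

# Estimate D from the slope of the ensemble-averaged mean-square
# displacement of 2-D random walks whose true coefficient is D = 0.5.
rng = np.random.default_rng(0)
n_particles, n_steps, dt = 500, 2000, 1e-3
steps = rng.normal(0.0, np.sqrt(dt), size=(n_particles, n_steps, 2))
positions = np.cumsum(steps, axis=1)             # particle trajectories

disp2 = np.sum((positions - positions[:, :1]) ** 2, axis=2)
msd = disp2.mean(axis=0)                         # ensemble-averaged MSD
t = np.arange(1, n_steps + 1) * dt
D = np.polyfit(t, msd, 1)[0] / (2 * 2)           # slope / (2*d), d = 2
```

For the bounded flows mentioned above, the same fit applied in slabs at different distances from the wall would expose the deviation of the MSD from this linear law.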
Abstract:
Software faults are expensive and cause serious damage, particularly if discovered late or not at all. Some software faults tend to stay hidden. One goal of this thesis is to establish the status quo in the field of software fault elimination, since there are no recent surveys of the whole area. A structural framework is proposed as a basis for this unstructured field, paying attention to compatibility and to how studies can be found. Bug elimination means are surveyed, including bug know-how, defect prevention and prediction, analysis, testing, and fault tolerance. The most common research issues in each area are identified and discussed, along with issues that do not get enough attention. Recommendations are presented for software developers, researchers, and teachers. Only the main lines of research are covered, and the main emphasis is on technical aspects. The survey was done by performing searches in the IEEE, ACM, Elsevier, and Inspec databases. In addition, a systematic search was done of a few well-known related journals over recent time intervals. Some other journals, some conference proceedings and a few books, reports, and Internet articles were investigated, too. The following problems were found, and solutions for them are discussed. That quality assurance is testing only is a common misunderstanding, and many checks are done and some methods applied only in the late testing phase. Many types of static review are almost forgotten, even though they reveal faults that are hard to detect by other means. Other forgotten areas are knowledge of bugs, awareness of continuously repeated bugs, and lightweight means to increase reliability. Compatibility between studies is not always good, which also makes documents harder to understand. Some means, methods, and problems are considered method- or domain-specific when they are not. The field lacks cross-field research.
Abstract:
This literature study investigated advanced oxidation processes (AOPs) for the treatment of chlorophenols. The target compound was selected from the US EPA (United States Environmental Protection Agency) list of substances hazardous to the environment. The AOP methods studied were ozonation at elevated pH, the O3/H2O2 process, photolytic ozonation (O3/UV), the H2O2/UV process and the Fenton process (H2O2 + Fe2+). In AOP treatment, the oxidation of impurities is assumed to be caused mainly by OH radicals. The study examined the effect of parameters influencing the OH radicals, such as pH, temperature and the concentrations of the oxidants and of the compound to be oxidised, on the oxidation process of chlorophenols. The aim of the work was to identify the most efficient AOP treatment for wastewaters containing chlorophenols. The efficiencies of the AOP treatments were compared on the basis of degradation rate constants, half-lives, and the chemical consumption and cost of the oxidant. The Fenton process and ozonation at pH 9 were found to be the most efficient methods for the oxidation of chlorophenols. The oxidation by the Fenton process was faster for 4-CP and 2,4-DCP, whereas ozonation at pH 9 oxidised 2,3,4,6-TeCP and 2,4,6-TCP more rapidly. In terms of cost efficiency, the Fenton process was more efficient than ozonation. Selecting the best method for removing chlorophenols was difficult, since several of the studies had investigated only one method. In addition, the process conditions used in the different studies varied, which complicated the comparison. The final choice of AOP method should therefore be made only after laboratory tests.
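The rate-constant and half-life comparison above rests on pseudo-first-order kinetics, C(t) = C0·exp(−k·t), for which the half-life is t½ = ln 2 / k. A minimal sketch of that comparison (the rate constants below are hypothetical, not values from the reviewed studies):

```python
import math

# Half-life of a pollutant under pseudo-first-order degradation:
# C(t) = C0 * exp(-k t)  =>  t_1/2 = ln 2 / k
def half_life(k):
    """Half-life for rate constant k (same time unit as 1/k)."""
    return math.log(2) / k

# Illustrative k values (1/min) for two hypothetical treatments:
rates = {"Fenton": 0.20, "O3 at pH 9": 0.15}
half_lives = {name: half_life(k) for name, k in rates.items()}
```

The treatment with the larger k has the shorter half-life, which is why the two measures rank the AOP methods identically whenever the kinetics really are first order.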
Abstract:
The oxidation potential of pulsed corona discharge towards aqueous impurities is limited with respect to certain refractory compounds. It may be enhanced by combining the discharge with catalysis/photocatalysis, as developed for homogeneous gas-phase reactions. The objective of the work was to test the hypothesis of oxidation potential enhancement by combining the discharge with TiO2 photocatalysis applied to aqueous solutions of refractory oxalate. Meglumine acridone acetate was included to meet practical needs. Experimental research was undertaken into the oxidation of aqueous solutions under various target pollutant concentrations, pH values and pulse repetition rates, with plain electrodes and with electrodes having TiO2 attached to their surface. The results showed no positive influence of the photocatalyst: the pollutants were oxidized at rates identical within the accuracy of the measurements. Possible explanations for the observed inefficiency include low UV irradiance, the screening effect of water and the generally low oxidation rate of photocatalytic reactions. Further studies might include combining the electric discharge with ozone decomposition/radical formation catalysts.
Abstract:
Wood-based bioprocesses are one of the most promising fields of interest in the circular economy. Expanding the use of wood raw material in sustainable industrial processes is acknowledged on both a global and a regional scale. This thesis concerns the application of a capillary zone electrophoresis (CZE) method with the aim of monitoring wood-based bioprocesses. The range of detectable carbohydrate compounds is expanded to furfural and polydatin in aquatic matrices. The experimental portion was conducted on a laboratory scale with samples imitating process samples. The thesis presents a novel strategy for uncertainty evaluation via in-house validation, focusing on the uncertainty factors of the CZE method. The CZE equipment is sensitive to ambient conditions; therefore, proper validation is essential for robust application. The thesis thus introduces a tool for process monitoring of modern bioprocesses. As a result, it is concluded that the applied CZE method provides additional information on the analysed samples and that the profiling approach is suitable for detecting changes in process samples. The CZE method shows significant potential in process monitoring because of its capability of simultaneously detecting clusters of carbohydrate-related compounds. The clusters can be used as summary terms, indicating process variation and drift.
Abstract:
Rare-earth based upconverting nanoparticles (UCNPs) have attracted much attention due to their unique luminescent properties. The ability to convert multiple photons of lower energy into one photon of higher energy through an upconversion (UC) process offers a wide range of applications for UCNPs. The emission intensities and wavelengths of UCNPs are important performance characteristics, which determine the appropriate applications. However, insufficient intensities still limit the use of UCNPs; in particular, efficient emission of blue and ultraviolet (UV) light via upconversion remains challenging, as these events require three or more near-infrared (NIR) photons. The aim of the study was to enhance the blue and UV upconversion emission intensities of Tm3+ doped NaYF4 nanoparticles and to demonstrate their utility in in vitro diagnostics. As the distance between the sensitizer and the activator significantly affects the energy transfer efficiency, different strategies were explored to change the local symmetry around the doped lanthanides. One important strategy is the intentional co-doping of active (participating in energy transfer) or passive (not participating in energy transfer) impurities into the host matrix. The roles of doped passive impurities (K+ and Sc3+) in enhancing the blue and UV upconversion, as well as in producing intense UV upconversion emission through excess sensitization (an active impurity), were studied. Additionally, the effects of both active and passive impurity doping on the morphological and optical performance of UCNPs were investigated. The applicability of UV-emitting UCNPs as an internal light source for glucose sensing in a dry chemistry test strip was demonstrated. The measurements were in agreement with the traditional method based on reflectance measurements using an external UV light source.
The use of UCNPs in the glucose test strip offers an alternative detection method with advantages such as control signals for minimizing errors and high penetration of the NIR excitation through the blood sample, which gives more freedom in designing the optical setup. In bioimaging, excitation of the UCNPs in the transparent IR region of the tissue permits measurements that are free of background fluorescence and have a high signal-to-background ratio. In addition, the narrow emission bandwidth of the UCNPs enables multiplexed detection. An array-in-well immunoassay was developed using two different UC emission colours. Differentiation between viral infections and classification of antibody responses were achieved based on both the position and the colour of the signal. The study demonstrates the potential of spectral and spatial multiplexing in imaging-based array-in-well assays.
Abstract:
A supernova (SN) is the explosion of a star at the end of its lifetime. SNe are classified into two types, namely type I and type II, based on their optical spectra. By explosion mechanism they are categorised into core-collapse supernovae (CCSNe) and thermonuclear supernovae. The CCSN group, which includes types IIP, IIn, IIL, IIb, Ib, and Ic, is produced when a massive star with an initial mass of more than 8 M⊙ explodes due to the collapse of its iron core. Thermonuclear SNe, on the other hand, originate from white dwarfs (WDs) made of carbon and oxygen in binary systems. Infrared astronomy covers observations of astronomical objects in infrared radiation. The infrared sky is not completely dark, and it is variable. Observations of SNe in the infrared give different information than optical observations. Data reduction is required to correct the raw data for, for example, unusable pixels and the sky background. In this project, the NOTCam package in IRAF was used for the data reduction. For measuring the magnitudes of the SNe, the aperture photometry method with the Gaia program was used. In this Master's thesis, near-infrared (NIR) observations of three supernovae of type IIn (namely LSQ13zm, SN 2009ip and SN 2011jb), one of type IIb (SN 2012ey), one of type Ic (SN 2012ej) and one of type IIP (SN 2013gd) are studied with emphasis on luminosity and colour evolution. All observations were made with the Nordic Optical Telescope (NOT). We used the classification by Mattila & Meikle (2001) [76], in which SNe are differentiated by their infrared light curves into two groups, namely 'ordinary' and 'slowly declining'. The light curves and colour evolution of these supernovae were obtained in the J, H and Ks bands. In this study, our data, combined with other observations, provide evidence for categorizing LSQ13zm, SN 2012ej and SN 2012ey as part of the ordinary type. We found interesting NIR behaviour of SN 2011jb, which led to its classification as a slowly declining type.
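The aperture-photometry step mentioned above can be sketched as: sum the flux inside a circular aperture around the source, subtract the local sky background estimated from a surrounding annulus, and convert the net flux to a magnitude. The image, aperture radii and zero point below are hypothetical, not the thesis's NOTCam data:

```python
import numpy as np

# Simple circular-aperture photometry with annulus sky subtraction.
def aperture_magnitude(image, x0, y0, r_ap, r_in, r_out, zeropoint):
    yy, xx = np.indices(image.shape)
    r = np.hypot(xx - x0, yy - y0)
    sky = np.median(image[(r >= r_in) & (r < r_out)])  # background/pixel
    flux = np.sum(image[r < r_ap] - sky)               # sky-subtracted
    return zeropoint - 2.5 * np.log10(flux)

img = np.full((51, 51), 10.0)       # flat sky level
img[23:28, 23:28] += 40.0           # fake point source at the centre
m = aperture_magnitude(img, 25, 25, 6.0, 10.0, 15.0, 25.0)
```

The J, H and Ks light curves in the thesis are sequences of such magnitudes over time, with the zero point tied to standard stars in each band.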
Abstract:
In this Master's thesis, the supersonic stator and subsonic rotor of a single-stage turbine are designed, together with the inlet section and the diffuser. The thesis begins with a review of the applications and theory of axial turbines, after which the methods and principles underlying the design are presented. The basic design is carried out with the Traupel method using the WinAxtu 1.1 design program, and the efficiency is additionally evaluated with an Excel-based calculation. The supersonic stator is designed on the basis of the basic design results by applying the method of characteristics to the diverging part of the nozzle and area ratios to the converging part. The rotor centreline is drawn with the Sahlberg method, and the blade shape is determined by combining the A3K7 thickness distribution with the design principles of dense blade cascades. The inlet section is designed to be as smooth as possible according to the geometry data and examples in the literature, and is then modelled with CFD calculations. The diffuser is designed using, where applicable, data presented in the literature, the geometry of the inlet section and CFD calculations. Finally, the design results are compared with results presented in the literature, and the success of the design and possible problem areas are assessed.
Abstract:
Surface roughness is one of the quality criteria of paper. It is measured with devices that physically probe the paper surface and with optical devices. These measurements require laboratory conditions, but faster, directly on-line measurements would be needed in the paper industry. The surface roughness of paper can be expressed as a single roughness value for the whole sample. In this work the sample is divided into significant regions, and a separate roughness value is computed for each region. Several methods have been used for roughness measurement; in this work, a generally accepted statistical method is used in addition to the distance transform. In paper surface roughness measurement there has been a need to divide the analysed sample into regions on the basis of roughness. The region division makes it possible to delineate the clearly rougher areas of the sample. The distance transform produces regions, which are then analysed. These regions are merged into connected regions with various segmentation methods. Algorithms based on the PNN (Pairwise Nearest Neighbor) method and on merging neighbouring regions have been used, and an approach based on splitting and merging regions has also been examined. Validation of segmented images has usually been done by human inspection. The approach of this work is to compare the generally accepted statistical method with the segmentation results; a high correlation between the results indicates successful segmentation. The results of the different experiments have been compared using hypothesis testing. Two sample sets, measured with OptiTopo and with a profilometer, were analysed. The initial parameters of the distance transform, which were varied during the experiments, were the number and location of the seed points. The same parameter changes were made for all algorithms used for region merging. After the distance transform, the correlation was stronger for the samples measured with the profilometer than for those measured with OptiTopo.
For the segmented OptiTopo samples, the correlation improved more strongly than for the profilometer samples. The best correlation was obtained with the results produced by the PNN method.
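The PNN merging idea used above can be sketched in a few lines: repeatedly merge the two closest clusters until the desired number of clusters remains. This sketch uses a simplified merge cost (distance between cluster means) rather than the size-weighted variance increase of the classical PNN algorithm, and the 1-D "roughness values" are hypothetical:

```python
# Pairwise-Nearest-Neighbor-style agglomerative merging of roughness
# values: start with singleton clusters and greedily merge the pair
# whose means are closest, until k clusters remain.
def pnn(values, k):
    clusters = [[v] for v in values]
    while len(clusters) > k:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                ci = sum(clusters[i]) / len(clusters[i])
                cj = sum(clusters[j]) / len(clusters[j])
                d = abs(ci - cj)                 # simplified merge cost
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] = clusters[i] + clusters[j]  # merge the closest pair
        del clusters[j]
    return clusters

groups = pnn([0.1, 0.15, 0.9, 1.0, 0.5], k=2)
```

In the thesis the merged units are image regions produced by the distance transform rather than scalars, but the greedy nearest-pair merging loop is the same.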
Abstract:
The competitiveness comparison is carried out for alternatives producing electricity only. In Finland, construction of CHP (combined heat and power) plants will continue and will cover part of the future power supply deficit, but new condensing power plant capacity will also be needed. The following types of power plants are studied: nuclear power plant, coal-fired condensing power plant, combined cycle gas turbine plant, peat-fired condensing power plant, wood-fired condensing power plant, and wind power plant. The calculations have been made using the annuity method with a real interest rate of 5 % per annum and with a fixed price level as of March 2003. With an annual full-load utilization time of 8000 hours, nuclear electricity would cost 23.7 €/MWh, gas-based electricity 32.3 €/MWh and coal-based electricity 28.1 €/MWh. If the influence of emission trading is taken into account, the advantage of nuclear power is further improved. In order to study the impact of changes in the input data, a sensitivity analysis has been carried out. It reveals that the advantage of nuclear power is quite clear. For example, the cost of nuclear electricity is rather insensitive to changes in the uranium price, whereas for the natural gas alternative the rising trend of the gas price poses the greatest risk.
Abstract:
Active magnetic bearings have recently been intensively developed, because noncontact support offers several advantages over conventional bearings. Due to improved materials, control strategies, and electrical components, the performance and reliability of active magnetic bearings are improving. However, additional bearings, called retainer bearings, still have a vital role in active magnetic bearing applications. The most crucial moment at which the retainer bearings are needed is when the rotor drops from the active magnetic bearings onto the retainer bearings due to component or power failure. Without appropriate knowledge of the retainer bearings, a drop-down can be fatal for an active magnetic bearing supported rotor system. This study introduces a detailed simulation model of a rotor system in order to describe a rotor drop-down onto the retainer bearings. The simulation model couples a finite element model, with component mode synthesis, to detailed bearing models. Electrical components and electromechanical forces are not in the focus of this study. The research reviews the theoretical background of the finite element method with component mode synthesis, which can be used in the dynamic analysis of flexible rotors. The retainer bearings are described using two ball bearing models, which include damping and stiffness properties, the oil film, the inertia of the rolling elements and the friction between the races and the rolling elements. The first bearing model assumes that the cage of the bearing is ideal and holds the balls precisely in their predefined positions. The second bearing model is an extension of the first and describes the behaviour of a cageless bearing; in this model, each ball is described using two degrees of freedom. The models introduced in this study are verified against a corresponding actual structure.
Using the verified bearing models, the effects of the rotor system parameters on its dynamics during emergency stops are examined. As shown in this study, misalignment of the retainer bearings has a significant influence on the behaviour of the rotor system in a drop-down situation. A stability map of the rotor system is presented as a function of the rotational speed of the rotor and the misalignment of the retainer bearings. In addition, the effects of the parameters of the simulation procedure and of the rotor system on the dynamics of the system are studied.
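The stiffness and damping terms of ball bearing models of the kind described above are often built from a Hertzian-type contact law: a nonlinear spring in the ball-race normal direction plus viscous damping, active only while the contact deflection is positive. A minimal sketch with hypothetical coefficients (not the thesis's bearing model):

```python
# Ball-to-race normal contact force: Hertz point contact k*delta^1.5
# plus viscous damping, zero when ball and race are separated.
def contact_force(deflection, velocity, k=1.0e9, c=500.0, exponent=1.5):
    """deflection (m) and approach velocity (m/s) of the contact;
    returns the normal force in N, 0 when out of contact."""
    if deflection <= 0.0:
        return 0.0
    return k * deflection ** exponent + c * velocity

f = contact_force(1e-5, 0.01)   # illustrative deflection and velocity
```

In a drop-down simulation, every ball contributes one such force at each time step, evaluated from its instantaneous position and velocity relative to the races.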
Abstract:
The Tandem-GMAW method is a recent development resulting from improvements in welding methods. The twin-wire method, and subsequently the Tandem method with separate power sources, has gained a notable place in the welding of many types of materials with different joint types. The biggest advantage of the Tandem welding method is the flexibility of choosing two electrodes of different types according to the type of the parent material. This is possible because separate welding parameters can be set for each wire. In this thesis the effect of varying three parameters on the weld bead in the Tandem-GMA welding method is studied. These three parameters are the wire feed rate of the slave wire, the wire feed rate of the master wire, and the voltage difference between the two wires. The results are then compared to study the behaviour of the weld bead as these parameters change.
Abstract:
The purpose of this Master's thesis was to present a method for managing the risks arising from changes to be implemented in a particular software product. The software is used daily by several hundred people, and its trouble-free operation is very important to the customer who owns it. From the viewpoint of the software and its development, a risk is a possibility of loss threatening the stakeholder's objectives, or a property, factor or action related to such a loss. In this work the stakeholder is the company that implemented the current software and is responsible for its further development. A solution matching the company's risk management needs is sought by studying the fundamentals of risk management and two risk management methods intended specifically for software engineering. For the development of risk management it is important that the typical mistakes of software engineering are, as a rule, avoided; awareness of the most common mistakes in risk management is of great benefit when developing one's own risk management. The development organisation's systematic way of implementing software changes is based on the use of a product management application intended for software engineering. In the product management application, a change request is the basic unit of software development work, and risk management actions must be targeted at it. A risk management model matching the company's needs is built by adding a risk management process according to the Riskit method into the systematic handling process of a change request. Risk management according to the resulting model can in practice be carried out in several different ways; based on the evaluations, a diagramming application and a word processor are sufficient tools for its practical implementation. Experience with the new risk management method showed it to be usable. To ensure a smooth introduction of the method, however, risk management actions should initially be targeted at a larger entity than a single change request.