41 results for Data Migration Processes Modeling
Abstract:
The aim of this Master’s thesis is to find out how internal control should be structured in a Finnish retail company in order to fulfil the requirements set out in the Finnish Corporate Governance Code and to add value for the company, as well as to analyse the added value that structured and centrally led internal control can provide for the case company. The underlying theoretical framework of the study stems from the theory of the firm; the agent-principal problem is the primary motivator for internal control. Regulatory requirements determine the thresholds that the internal control of a company must reach. The research was carried out as a case study; methodologically the study is qualitative, and the empirical data were gathered through interviews and participant observation. The gathered data (processes, controls etc.) are used to understand the control environment of the company and to assess the current state of internal control. Identified deficiencies and other points of development are then discussed.
Abstract:
Longitudinal surveys are increasingly used to collect event history data on person-specific processes such as transitions between labour market states. Survey-based event history data pose a number of challenges for statistical analysis. These challenges include survey errors due to sampling, non-response, attrition and measurement. This study deals with non-response, attrition and measurement errors in event history data and the bias they cause in event history analysis. The study also discusses some choices faced by a researcher using longitudinal survey data for event history analysis and demonstrates their effects. These choices include whether a design-based or a model-based approach is taken, which subset of data to use and, if a design-based approach is taken, which weights to use. The study takes advantage of the possibility to use combined longitudinal survey register data. The Finnish subset of the European Community Household Panel (FI ECHP) survey for waves 1–5 was linked at the person level with longitudinal register data. Unemployment spells were used as the study variables of interest. Lastly, a simulation study was conducted in order to assess the statistical properties of the Inverse Probability of Censoring Weighting (IPCW) method in a survey data context. The study shows how combined longitudinal survey register data can be used to analyse and compare the non-response and attrition processes, test the missingness mechanism type and estimate the size of bias due to non-response and attrition. In our empirical analysis, initial non-response turned out to be a more important source of bias than attrition. Reported unemployment spells were subject to seam effects, omissions and, to a lesser extent, overreporting. The use of proxy interviews tended to cause spell omissions. An often-ignored phenomenon, classification error in reported spell outcomes, was also found in the data. Neither the Missing At Random (MAR) assumption about the non-response and attrition mechanisms, nor the classical assumptions about measurement errors, turned out to be valid. Measurement errors in both spell durations and spell outcomes were found to cause bias in estimates from event history models. Low measurement accuracy affected the estimates of the baseline hazard most. The design-based estimates based on data from respondents to all waves of interest, weighted by the last-wave weights, displayed the largest bias. Using all the available data, including the spells of attriters until the time of attrition, helped to reduce attrition bias. Lastly, the simulation study showed that the IPCW correction to design weights reduces bias due to dependent censoring in design-based Kaplan-Meier and Cox proportional hazards model estimators. The study discusses the implications of the results for survey organisations collecting event history data, researchers using surveys for event history analysis, and researchers who develop methods to correct for non-sampling biases in event history data.
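To illustrate the IPCW idea in a survey setting, the following minimal sketch models the probability of remaining uncensored (not attriting) from auxiliary covariates, divides the design weights by that probability, and feeds the corrected weights into a weighted Kaplan-Meier estimator. The data layout, column names and the logistic censoring model are assumptions for the sketch, not the setup used in the thesis.

# Sketch of Inverse Probability of Censoring Weighting (IPCW) for a
# survey-based Kaplan-Meier estimator. Column names and the censoring
# model are illustrative assumptions, not the thesis's actual setup.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def ipcw_weights(df, covariates):
    """Model P(not censored | covariates) and return design weight / P."""
    model = LogisticRegression(max_iter=1000)
    model.fit(df[covariates], 1 - df["censored"])
    p_uncensored = model.predict_proba(df[covariates])[:, 1]
    return df["design_weight"] / np.clip(p_uncensored, 1e-3, None)

def weighted_kaplan_meier(durations, events, weights):
    """Weighted product-limit estimate of the survival function."""
    order = np.argsort(durations)
    t, d, w = durations[order], events[order], weights[order]
    surv, out_t, out_s = 1.0, [], []
    at_risk = w.sum()
    for time in np.unique(t):
        mask = t == time
        deaths = w[mask & (d == 1)].sum()
        if at_risk > 0 and deaths > 0:
            surv *= 1.0 - deaths / at_risk
        at_risk -= w[mask].sum()
        out_t.append(time)
        out_s.append(surv)
    return np.array(out_t), np.array(out_s)

# Hypothetical usage: df holds unemployment spell durations, an event
# indicator, a censoring (attrition) indicator, design weights and
# auxiliary covariates.
# w = ipcw_weights(df, ["age", "sex", "education"])
# times, survival = weighted_kaplan_meier(df["duration"].values,
#                                         df["event"].values, w.values)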
Abstract:
This doctoral thesis examines the application of infrared spectroscopy and multivariate data analysis methods to the monitoring of crystallization processes and the analysis of the crystalline product. Crystallization process research worldwide is currently investigating intensively the application of various measurement techniques for the continuous measurement of crystallization phenomena, both in the liquid phase and in the forming crystals. In addition, product characterization is necessary to ensure product quality. Particularly in pharmaceutical manufacturing, interest in this type of research is driven by the U.S. Food and Drug Administration's (FDA) guidance on process analytical technologies (PAT), which broadly defines the requirements for the measurements needed in drug manufacturing and product characterization to guarantee safe manufacturing processes. Cooling crystallization is a separation method widely used, especially in the pharmaceutical industry, for the purification of a solid crude product. In this method, the solid raw material to be purified is dissolved in a suitable solvent at a relatively high temperature. The solubility of the substance in the solvent decreases as the temperature falls, so when the system is cooled, the solute concentration in the process exceeds the solubility concentration. In such a supersaturated system, new crystals tend to form or existing crystals grow. Supersaturation is one of the most important factors affecting the quality of the crystalline product. In cooling crystallization, the properties of the resulting product can be influenced by, among other things, the choice of solvent, the cooling profile and mixing. In addition, the start-up phase of the crystallization process, i.e. the moment when the first crystals form, affects the product properties. The quality of the crystalline product is defined by the average crystal size, the size and shape distributions, and the purity. In the pharmaceutical industry, it is often required that the product represents a specific polymorphic form, polymorphism referring to the ability of molecules to arrange themselves in the crystal lattice in several different ways. The properties mentioned above affect the further processability of the product, such as its filterability, grindability and tabletability. The polymorphic form also affects many end-use properties of the product, such as the dissolution rate of a drug in the body. The thesis studied the cooling crystallization of sulfathiazole using several solvent mixtures and cooling profiles, and examined the effect of these factors on the quality attributes of the product. Infrared spectroscopy is a method widely applied in chemical research; it measures the spectral changes in the IR region caused by the vibrations of the molecules in the sample under study. In this work, in-process measurements were carried out with an immersion probe placed in situ in the reactor, using Fourier transform infrared (FTIR) spectroscopy based on attenuated total reflection (ATR). Powder samples were measured off-line with FTIR spectroscopy based on diffuse reflectance (DRIFT). With multivariate methods (chemometrics), spectral data comprising several hundreds or even thousands of variables can be refined into qualitative or quantitative information describing the process. The thesis extensively examined the application of different multivariate methods in order to extract as versatile process-descriptive information as possible from the measured spectral data.
As a result of the thesis, a calibration routine is proposed for measuring the solute concentration, and thereby the supersaturation level, during the crystallization process. The development of the calibration routine included methods for assessing the quality of the data, data pre-processing methods, the actual calibration modeling, and model validation. This provides real-time information on the driving force of the crystallization process, which in turn improves the understanding and controllability of the process. The effects of the supersaturation level on the quality of the resulting crystalline product were followed in several crystallization experiments. The thesis also presents a method based on multivariate statistical process monitoring for predicting the moment of spontaneous primary nucleation from the measured spectral data, and possibly for inferring the polymorphic form produced in nucleation. Using the proposed method, it is possible not only to anticipate the formation of crystal nuclei but also to detect possible disturbances in the early stages of the crystallization process. By predicting the resulting polymorphic form, the nucleation of an undesired polymorph can be detected, and the control of the crystallization can possibly be adjusted to obtain the desired polymorphic form. Multivariate methods were also applied to determining batch-to-batch variation of crystallization runs from the measured spectral data. The information obtained from this type of analysis can be exploited in the design and optimization of crystallization processes. The thesis also tested the suitability of IR spectroscopy and various multivariate methods for the rapid determination of the polymorphic composition of the crystalline product. Powder samples could be classified into samples containing different polymorphs using suitable multivariate classification methods. This offers a rapid method for a rough assessment of the polymorphic composition of a powder sample, i.e. which individual polymorph the sample mainly contains. An actual quantitative analysis, i.e. determining how much of each polymorph the sample contains (for example in weight percent), requires a physical calibration set covering all the polymorphs, which can be difficult due to the poor availability of the pure polymorphs.
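As an illustration of this kind of chemometric calibration, the following minimal sketch fits a partial least squares (PLS) regression from preprocessed ATR-FTIR spectra to solute concentration and derives a supersaturation level from the prediction. The preprocessing, data shapes, cross-validation choice and solubility model are assumptions for the sketch, not the routine proposed in the thesis.

# Sketch of a chemometric calibration: PLS regression from ATR-FTIR
# spectra to solute concentration, from which a supersaturation level
# can be derived. Data shapes and model choices are illustrative.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

def calibrate(X, y, max_components=10):
    """Choose the number of PLS latent variables by cross-validation.

    X: (n_samples, n_wavenumbers) calibration spectra
    y: reference solute concentrations
    """
    scores = [cross_val_score(PLSRegression(n_components=k), X, y,
                              cv=5, scoring="r2").mean()
              for k in range(1, max_components + 1)]
    best_k = int(np.argmax(scores)) + 1
    return PLSRegression(n_components=best_k).fit(X, y)

def supersaturation(model, spectrum, solubility_conc):
    """Predicted concentration divided by solubility at the current temperature."""
    conc = np.ravel(model.predict(spectrum.reshape(1, -1)))[0]
    return conc / solubility_conc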
Abstract:
Static process simulation has traditionally been used to model complex processes for various purposes. However, using static process simulators to prepare holistic examinations aimed at improving profit-making capability requires a lot of work, because producing the results requires assessing the applicability of detailed data which may be irrelevant to the objective. The data relevant to the overall assessment gets buried by irrelevant data. Furthermore, the models do not include an examination of maintenance or risk management, and economic examination is often an extra property added to them which could equally be performed with a spreadsheet program. In this work, a process model applicable to holistic economic examinations has been developed. The model is based on the life cycle profit philosophy developed by Hagberg and Henriksson in 1996. The construction of the model has utilized life cycle assessment and life cycle costing methodologies with a view to developing, above all, a model that would be applicable to the economic examination of complete wholes and that would limit the information needed to the aspects essential to the objectives. Life cycle assessment and costing differ from each other in their modeling principles, but features of both methodologies can be used in the development of economic process modeling. Methods applicable to the modeling of complex processes can be examined from the viewpoint of life cycle methodologies, because they too involve the collection and management of large bodies of information and the production of information for the needs of decision-makers. The results of the study show that, on the basis of the principles of life cycle modeling, a process model can be created which may be used to produce holistic efficiency examinations of the profit-making capability of a production line with fewer resources than with traditional methods. The calculations of the model draw to the maximum extent on the information system of the factory, which means that the accuracy of the results can be improved by developing the information systems so that they provide the best possible information for this kind of examination.
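As a minimal sketch of the kind of holistic calculation the life cycle profit philosophy implies, the example below discounts revenues, operating costs, maintenance costs and expected risk-related losses of a production line over its life cycle. All figures and cost categories are placeholders, not values or categories from the thesis.

# Sketch of a life cycle profit (LCP) style calculation: discounted
# revenues minus operating, maintenance and risk-related costs over the
# life of a production line. All figures and categories are illustrative.

def life_cycle_profit(years, revenue, operating_cost, maintenance_cost,
                      expected_risk_loss, discount_rate, investment):
    """Net present value of the production line over its life cycle."""
    npv = -investment
    for t in range(1, years + 1):
        cash_flow = (revenue[t - 1] - operating_cost[t - 1]
                     - maintenance_cost[t - 1] - expected_risk_loss[t - 1])
        npv += cash_flow / (1.0 + discount_rate) ** t
    return npv

# Example with placeholder figures (currency units per year):
lcp = life_cycle_profit(
    years=10,
    revenue=[12.0e6] * 10,
    operating_cost=[7.5e6] * 10,
    maintenance_cost=[0.8e6] * 10,
    expected_risk_loss=[0.3e6] * 10,   # e.g. unavailability losses, failures
    discount_rate=0.08,
    investment=15.0e6,
)
print(f"Life cycle profit (NPV): {lcp / 1e6:.1f} M")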
Abstract:
The purpose of this Master's thesis was to investigate the impacts, needs and benefits related to EDI and to prepare the EDI Gateway module of the Oracle Applications ERP system for production use. Information for the needs assessment was obtained through discussions. New initiatives derived from commercial starting points, developed for business-to-business trade and for the exploitation of internet technology, were examined from an EDI perspective with the future in mind. The most up-to-date information for this thesis was also found on the internet. After this, it was possible to carry out a suitably broad but limited EDI pilot project for creating an EDI concept. The thesis focused more on the effects of EDI on purchasing, and it was decided to apply EDI initially to purchase orders. The benefits of EDI are difficult to measure numerically: a large amount of money or product units must be handled with an EDI partner sufficiently often. In the EDI implementation phase, the main problems are application-related IT problems. The knowledge gained from the investigations and the EDI project can be exploited in further development. Additional measures are needed to create a fully functioning system.
Abstract:
The main objective of this thesis is to examine and model the configuration system and its related processes: when and where is configuration information created in the product development process, and how is it utilized in the order-delivery process? From the information point of view, these two processes are the essential parts of the whole configuration system. The empirical part of the work was done as constructive research inside a company that follows a mass customization approach. Data models and documentation were created for the different development stages of the configuration system. A base data model already existed for the new structures and the relations between them; this model was used as the basis for the later data modeling work. The data models include the different data structures, their key objects and attributes, and the relations between them. The representation of configuration rules for the to-be configuration system was defined as one of the key focus points. Further, it is examined how customer needs and requirements information can be integrated into the product development process. A requirements hierarchy and classification system is presented, and it is shown how individual requirement specifications can be connected to the physical design structure via features by developing the existing base data model further.
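To make the kind of data structures involved more concrete, here is a minimal sketch of configurable items, configuration rules constraining valid combinations, and requirements linked to design features. The class names, attributes and the example rule are illustrative only, not the company's actual data model.

# Sketch of a configuration data model: configurable items, rules that
# constrain valid combinations, and customer requirements linked to
# design features. Names and structures are illustrative only.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class ConfigurableItem:
    name: str
    options: List[str]                                 # allowed attribute values

@dataclass
class ConfigurationRule:
    description: str
    predicate: Callable[[Dict[str, str]], bool]        # constraint on a chosen configuration

@dataclass
class Requirement:
    rid: str
    text: str
    features: List[str] = field(default_factory=list)  # linked design features

def is_valid(configuration: Dict[str, str], rules: List[ConfigurationRule]) -> bool:
    """Check a chosen configuration against all configuration rules."""
    return all(rule.predicate(configuration) for rule in rules)

# Example: voltage and enclosure options with one compatibility rule.
items = [ConfigurableItem("voltage", ["230V", "400V"]),
         ConfigurableItem("enclosure", ["IP20", "IP65"])]
rules = [ConfigurationRule(
    "400V variants require the IP65 enclosure",
    lambda c: not (c["voltage"] == "400V" and c["enclosure"] != "IP65"))]
req = Requirement("R-001", "Suitable for outdoor use", features=["IP65 enclosure"])
print(is_valid({"voltage": "400V", "enclosure": "IP65"}, rules))   # True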
Abstract:
Prediction refers to estimating the future value of an observable quantity. Characteristic of the Bayesian paradigm is that uncertainty about unknown quantities is expressed in the form of probabilities. A Bayesian predictive model is thus a probability distribution over the possible values that an observable, but not yet observed, quantity can take. The articles included in the thesis develop methods that are applied, among other things, to the analysis of chromatographic data in criminal investigations. With the exception of the first article, all the methods are based on Bayesian predictive modeling. The articles mainly consider three types of problems related to chromatographic data: quantification, pairwise matching and clustering. In the first article, a non-parametric model is developed for the measurement error of chromatographic analyses of blood alcohol content. In the second article, a predictive inference method is developed for the comparison of two samples. The method is applied in the third article to the comparison of oil samples in order to identify the polluting source in connection with oil spills. In the fourth article, a predictive model is derived for clustering data of mixed discrete and continuous type, which is applied, among other things, to the classification of amphetamine samples with respect to production batches.
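To illustrate what a Bayesian predictive model means in practice, the following minimal sketch computes the posterior predictive distribution of a new measurement under a conjugate normal model with known measurement variance. It is a generic textbook example under assumed priors and data, not one of the models developed in the articles.

# Sketch of a Bayesian predictive model: the posterior predictive
# distribution of a new observation under a normal likelihood with
# known measurement variance and a conjugate normal prior on the mean.
# Priors and data are illustrative, not taken from the articles.
import numpy as np
from scipy import stats

def posterior_predictive(y, sigma2, mu0, tau2):
    """Return the predictive distribution of a new observation."""
    n = len(y)
    # Posterior of the mean: precision-weighted update of the prior.
    post_prec = 1.0 / tau2 + n / sigma2
    post_var = 1.0 / post_prec
    post_mean = post_var * (mu0 / tau2 + y.sum() / sigma2)
    # Predictive distribution: add the measurement variance back in.
    return stats.norm(loc=post_mean, scale=np.sqrt(post_var + sigma2))

# Example: replicate chromatographic measurements of one quantity.
y = np.array([0.52, 0.55, 0.50, 0.53])
pred = posterior_predictive(y, sigma2=0.02**2, mu0=0.5, tau2=0.1**2)
print(pred.interval(0.95))   # 95% predictive interval for the next measurement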
Abstract:
The theoretical research of the study focused on business process management and business process modeling; the goal was to find a new business process modeling method for an electrical accessories manufacturing enterprise. The focus was to find a few candidate business process modeling methods from which the company could choose the one best suited to its needs. The study was carried out as qualitative research, with an action study and a case study as the most important ways to collect data. In the empirical part of the study, examples of the company's processes modeled with the new method are presented, together with the process modeling process itself. The new way of modeling processes improves especially the visual presentation of the processes, and improves the understanding of how employees should work at the organizational interfaces of a process and at the interfaces between different processes. The result of the study is a new, unified way to model the company's processes, which makes the process models easier to understand and create. This improved readability makes it possible to reduce the costs caused by the unclear old process models.
Abstract:
Presentation at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014
Abstract:
Poster at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014
Abstract:
Acid sulfate (a.s.) soils constitute a major environmental issue. Severe ecological damage results from the considerable amounts of acidity and metals leached by these soils into the recipient watercourses. As even small hot spots may affect large areas of coastal waters, mapping represents a fundamental step in the management and mitigation of a.s. soil environmental risks (i.e. to target strategic areas). Traditional mapping in the field is time-consuming and therefore expensive. More cost-effective complementary techniques thus have to be developed in order to narrow down and define in detail the areas of interest. The primary aim of this thesis was to assess different spatial modeling techniques for a.s. soil mapping, and the characterization of soil properties relevant for a.s. soil environmental risk management, using all available data: soil and water samples, as well as datalayers (e.g. geological and geophysical). Different spatial modeling techniques were applied at catchment or regional scale. Two artificial neural networks were assessed on the Sirppujoki River catchment (c. 440 km2) located in southwestern Finland, while fuzzy logic was assessed on several areas along the Finnish coast. Quaternary geology, aerogeophysics and slope data (derived from a digital elevation model) were utilized as evidential datalayers. The methods also required the use of point datasets (i.e. soil profiles corresponding to known a.s. or non-a.s. soil occurrences) for training and/or validation within the modeling processes. Applying these methods, various maps were generated: probability maps for a.s. soil occurrence, as well as predictive maps for different soil properties (sulfur content, organic matter content and critical sulfide depth). The two assessed artificial neural networks (ANNs) demonstrated good classification abilities for a.s. soil probability mapping at catchment scale. Slightly better results were achieved using a Radial Basis Function (RBF) -based ANN than with a Radial Basis Functional Link Net (RBFLN) method, narrowing down more accurately the most probable areas for a.s. soil occurrence and defining the least probable areas more properly. The RBF-based ANN also demonstrated promising results for the characterization of different soil properties in the most probable a.s. soil areas at catchment scale. Since a.s. soil areas constitute highly productive lands for agricultural purposes, the combination of a probability map with more specific soil property predictive maps offers a valuable toolset to more precisely target strategic areas for subsequent environmental risk management. Notably, the use of laser scanning (i.e. Light Detection And Ranging, LiDAR) data enabled a more precise definition of the a.s. soil probability areas, as well as of the soil property modeling classes for sulfur content and critical sulfide depth. Given suitable training/validation points, ANNs can be trained to yield a more precise modeling of the occurrence of a.s. soils and their properties. By contrast, fuzzy logic represents a simple, fast and objective alternative for carrying out preliminary surveys, at catchment or regional scale, in areas offering a limited amount of data. This method enables delimiting and prioritizing the most probable areas for a.s. soil occurrence, which can be particularly useful in the field. Being easily transferable from area to area, fuzzy logic modeling can be carried out at regional scale; mapping at this scale would be extremely time-consuming through manual assessment.
The use of spatial modeling techniques enables the creation of valid and comparable maps, which represents an important development within the a.s. soil mapping process. The a.s. soil mapping was also assessed using water chemistry data for 24 different catchments along the Finnish coast (in all, covering c. 21,300 km2) which were mapped with different methods (i.e. conventional mapping, fuzzy logic and an artificial neural network). Two a.s. soil related indicators measured in the river water (sulfate content and sulfate/chloride ratio) were compared to the extent of the most probable areas for a.s. soils in the surveyed catchments. High sulfate contents and sulfate/chloride ratios measured in most of the rivers demonstrated the presence of a.s. soils in the corresponding catchments. The calculated extent of the most probable a.s. soil areas is supported by independent data on water chemistry, suggesting that the a.s. soil probability maps created with different methods are reliable and comparable.
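As a minimal sketch of the fuzzy-logic overlay idea described above, each evidential layer (e.g. Quaternary geology, aerogeophysics, slope) is rescaled to a fuzzy membership in [0, 1] and the layers are combined with the fuzzy gamma operator. The membership functions, anchor values and the gamma value are illustrative assumptions, not those used in the thesis.

# Sketch of a fuzzy-logic overlay for acid sulfate soil probability
# mapping: evidential raster layers are rescaled to fuzzy memberships
# in [0, 1] and combined with the fuzzy gamma operator.
import numpy as np

def linear_membership(layer, low, high):
    """Rescale a raster layer to [0, 1] between two anchor values."""
    return np.clip((layer - low) / (high - low), 0.0, 1.0)

def fuzzy_gamma(memberships, gamma=0.8):
    """Combine memberships: (fuzzy algebraic sum)^gamma * (fuzzy product)^(1-gamma)."""
    stack = np.stack(memberships)
    fuzzy_or = 1.0 - np.prod(1.0 - stack, axis=0)   # fuzzy algebraic sum
    fuzzy_and = np.prod(stack, axis=0)              # fuzzy algebraic product
    return fuzzy_or**gamma * fuzzy_and**(1.0 - gamma)

# Example with small placeholder rasters (rows x cols):
fine_sediment = linear_membership(np.random.rand(100, 100), 0.2, 0.8)
low_slope     = 1.0 - linear_membership(np.random.rand(100, 100) * 10, 0.0, 5.0)
conductivity  = linear_membership(np.random.rand(100, 100) * 50, 10.0, 40.0)
as_soil_probability = fuzzy_gamma([fine_sediment, low_slope, conductivity])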
Abstract:
One challenge in data assimilation (DA) methods is how the error covariance of the model state is computed. Ensemble methods have been proposed for producing error covariance estimates, as the error is propagated in time using the non-linear model. Variational methods, on the other hand, use the concepts of control theory, whereby the state estimate is optimized from both the background and the measurements. Numerical optimization schemes are applied which solve the problems of memory storage and huge matrix inversion needed by classical Kalman filter methods. The Variational Ensemble Kalman Filter (VEnKF), as a method inspired by the Variational Kalman Filter (VKF), enjoys the benefits of both ensemble methods and variational methods. It avoids filter inbreeding problems, which emerge when the ensemble spread underestimates the true error covariance; in VEnKF this is tackled by resampling the ensemble every time measurements are available. One advantage of VEnKF over VKF is that it needs neither tangent linear code nor adjoint code. In this thesis, VEnKF has been applied to a two-dimensional shallow water model simulating a dam-break experiment. The model is a public code, with water height measurements recorded at seven stations along the mid-line of the 21.2 m long, 1.4 m wide flume. Because the data were too sparse to assimilate the 30,171-element model state vector, we chose to interpolate the data both in time and in space. The results of the assimilation were compared with those of a pure simulation. We found that the results produced by VEnKF were more realistic, without the numerical artifacts present in the pure simulation. Creating wrapper code for a model and a DA scheme can be challenging, especially when the two were designed independently or are poorly documented. In this thesis we present a non-intrusive approach to coupling the model and a DA scheme: an external program is used to send and receive information between the model and the DA procedure using files. The advantage of this method is that the changes needed in the model code are minimal, only a few lines which facilitate input and output. Apart from making the coupling simple, the approach can be employed even if the two are written in different programming languages, because the communication is not through code. The non-intrusive approach accommodates parallel computing by simply telling the control program to wait until all the processes have ended before the DA procedure is invoked. It is worth mentioning the overhead introduced by the approach, as at every assimilation cycle both the model and the DA procedure have to be initialized. Nonetheless, the method can be an ideal approach for a benchmark platform for testing DA methods. The non-intrusive VEnKF has been applied to the multi-purpose hydrodynamic model COHERENS to assimilate Total Suspended Matter (TSM) in Lake Säkylän Pyhäjärvi. The lake has an area of 154 km2 with an average depth of 5.4 m. Turbidity and chlorophyll-a concentrations from MERIS satellite images for 7 days between May 16 and July 6, 2009 were available. The effect of organic matter was computationally eliminated to obtain the TSM data. Because of the computational demands of both COHERENS and VEnKF, we chose to use a 1 km grid resolution. The results of the VEnKF were compared with the measurements recorded at an automatic station located in the north-western part of the lake; however, due to the sparsity of the TSM data in both time and space, they could not be well matched.
The use of multiple automatic stations with real-time data is important to avoid the time-sparsity problem; with DA, this would, for instance, help in better understanding environmental hazard variables. We found that using a very large ensemble size does not necessarily improve the results, because there is a limit beyond which additional ensemble members add very little to the performance. The successful implementation of the non-intrusive VEnKF and the ensemble size limit for performance lead to the emerging area of Reduced Order Modeling (ROM). To save computational resources, running the full-blown model is avoided in ROM. When ROM is applied together with the non-intrusive DA approach, it might result in a cheaper algorithm that relaxes the computational challenges existing in the fields of modelling and DA.
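A minimal sketch of the non-intrusive, file-based model/DA coupling described in the abstract: an external driver alternates model runs and analysis steps, exchanging the state through files, so that the model code itself only needs a few extra input/output lines. The commands, file names and formats below are assumptions for the sketch, not the actual VEnKF/COHERENS setup.

# Sketch of a non-intrusive model/DA coupling: an external driver runs
# the forecast model and the assimilation step in turn, exchanging the
# state through files. Executable names and file formats are illustrative.
import subprocess

def run_model(state_file, forecast_file, until):
    """Advance the model to the next observation time (external executable)."""
    subprocess.run(["./model", "--init", state_file,
                    "--out", forecast_file, "--until", str(until)], check=True)

def analysis(forecast_file, obs_file, analysis_file):
    """Run the DA step on the forecast and the observations (external program)."""
    subprocess.run(["./assimilate", forecast_file, obs_file, analysis_file],
                   check=True)

def driver(obs_times):
    state_file = "state_0000.dat"
    for k, t in enumerate(obs_times, start=1):
        forecast_file = f"forecast_{k:04d}.dat"
        analysis_file = f"state_{k:04d}.dat"
        run_model(state_file, forecast_file, until=t)                 # model forecast
        analysis(forecast_file, f"obs_{k:04d}.dat", analysis_file)    # DA update
        state_file = analysis_file                                    # restart from analysis

# driver(obs_times=[3600 * h for h in range(1, 8)])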
Abstract:
The safe use of nuclear power plants (NPPs) requires a deep understanding of the functioning of the physical processes and systems involved. Studies on thermal hydraulics have been carried out in various separate effects and integral test facilities at Lappeenranta University of Technology (LUT), either to ensure the functioning of the safety systems of light water reactors (LWR) or to produce validation data for the computer codes used in the safety analyses of NPPs. Several examples of safety studies on the thermal hydraulics of nuclear power plants are discussed. The studies are related to the physical phenomena present in different processes in NPPs, such as rewetting of the fuel rods, emergency core cooling (ECC), natural circulation, small break loss-of-coolant accidents (SBLOCA), non-condensable gas release and transport, and passive safety systems. Studies on both VVER and advanced light water reactor (ALWR) systems are included. The set of cases includes separate effects tests for understanding and modeling a single physical phenomenon, separate effects tests to study the behavior of an NPP component or a single system, and integral tests to study the behavior of the whole system. The following steps can be found in the studies, though not necessarily all in the same study. Experimental studies as such have provided solutions to existing design problems. Experimental data have been created to validate a single model in a computer code. Validated models are used in various transient analyses of scaled facilities or NPPs. Integral test data are used to validate the computer codes as a whole, to see how the implemented models work together in a code. In the final stage, test results from the facilities are transferred to the NPP scale using computer codes. Some of the experiments have confirmed the expected behavior of the system or procedure under study; in some experiments there have been unexpected phenomena that have caused changes to the original design to avoid the recognized problems. This is the main motivation for experimental studies on the thermal hydraulics of NPP safety systems. Naturally, the behavior of new system designs has to be checked with experiments, but so does that of existing designs if they are applied in conditions that differ from those for which they were originally designed. New procedures for existing reactors and new safety-related systems have been developed for new nuclear power plant concepts, and new experiments have been continuously needed.
Abstract:
The software development industry is constantly evolving. The rise of the agile methodologies in the late 1990s and new development tools and technologies require growing attention from everybody working within this industry. Organizations have, however, had a mixture of various processes and different process languages, since a standard software development process language has not been available. A promising process meta-model called Software & Systems Process Engineering Meta-Model (SPEM) 2.0 has been released recently. It is applied by tools such as Eclipse Process Framework Composer, which is designed for implementing and maintaining processes and method content; its aim is to support a broad variety of project types and development styles. This thesis presents the concepts of software processes, models, traditional and agile approaches, method engineering, and software process improvement. Some of the most well-known methodologies (RUP, OpenUP, OpenMethod, XP and Scrum) are also introduced, with a comparison provided between them. The main focus is on the Eclipse Process Framework and SPEM 2.0, their capabilities, usage and modeling. As a proof of concept, I present a case study of modeling OpenMethod with EPF Composer and SPEM 2.0. The results show that the new meta-model and tool have made it possible to easily manage method content, publish versions with customized content, and connect project tools (such as MS Project) with the process content. The software process modeling also acts as a process improvement activity.
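As a highly simplified illustration of the central SPEM 2.0 idea of separating reusable method content (roles, tasks, work products) from the processes that arrange that content for a project, the sketch below models the two sides as plain data structures. It is an illustration only, not the SPEM metamodel, the UMA format, or anything produced by EPF Composer.

# Simplified sketch of the method-content vs. process separation in
# SPEM 2.0. Class names and fields are illustrative, not the metamodel.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Role:
    name: str

@dataclass
class WorkProduct:
    name: str

@dataclass
class Task:                        # reusable method content element
    name: str
    performer: Role
    inputs: List[WorkProduct] = field(default_factory=list)
    outputs: List[WorkProduct] = field(default_factory=list)

@dataclass
class Activity:                    # process element referencing method content
    name: str
    tasks: List[Task] = field(default_factory=list)

@dataclass
class DeliveryProcess:
    name: str
    phases: List[Activity] = field(default_factory=list)

# Example: the same task definition can be reused in several processes.
analyst = Role("Analyst")
spec = WorkProduct("Requirements specification")
elicit = Task("Elicit requirements", analyst, outputs=[spec])
process = DeliveryProcess("Iterative delivery",
                          phases=[Activity("Inception", [elicit])])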