888 results for Web modelling methods


Relevance:

30.00%

Publisher:

Abstract:

This study presents a review of theories of the so-called post-industrial society, and proposes that the concept of the post-industrial society can be used to understand the recent developments of the World Wide Web, often described as Web 2.0 or the social Web. The study combines theories ranging from post-war management science and cultural studies to software development, and tries to build a holistic view of the development of the post-industrial society, and especially of the Internet. The discourse on the emergence of a post-industrial society after the World Wars has addressed the ways in which the growing importance of information, and innovations in digital communications technology, are changing our society. It is furthermore deeply connected with the discourse on the postmodern society, which emphasizes cultural fragmentation, intertextuality, and pluralism. The Internet age is characterized by increasing masses of information that are managed through various technologies. While 1990s Internet technologies often used the network as a traditional broadcasting channel with added interactivity, Web 2.0 technologies are specifically designed to utilize the network model by facilitating communication between various services and devices, and by analyzing the relationships between users and objects in order to produce intelligent insight. The wide adoption of the Internet, and recently of Internet-enabled mobile devices, is furthermore continuously producing new ways of communicating, consuming, and producing. Applications of the social Web, such as social media and social networking services, are permanently changing our traditional social, cultural, and economic practices. The study first presents an overview of the post-industrial society, the Internet, and the concept of Web 2.0.
Then the concept of the social Web is described through an analysis of the term social media, brief histories of the interactive Web and of social networking services, and a description of the "long tail", a concept used to represent the masses of information available on the Web that do not receive mainstream attention. Finally, methods for retrieving and filtering information, modelling social and cultural relationships, and communicating with customers are presented.
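As a toy illustration of the long-tail idea (the distribution and numbers here are invented for demonstration, not taken from the study), item popularity following a Zipf-like power law leaves roughly half of all attention in the tail of niche items outside the top hits:

```python
# Hypothetical illustration of the "long tail": popularity shares drawn from a
# Zipf-like power law, where the many unpopular items jointly rival the few
# hits in total attention.
def zipf_shares(n_items, exponent=1.0):
    weights = [1.0 / (rank ** exponent) for rank in range(1, n_items + 1)]
    total = sum(weights)
    return [w / total for w in weights]

shares = zipf_shares(10000)
head = sum(shares[:100])   # share of attention going to the top 100 "hits"
tail = sum(shares[100:])   # share going to the remaining 9900 niche items
print(f"head: {head:.2f}, tail: {tail:.2f}")  # the tail takes roughly half
```

The point of the sketch is that services which can aggregate the tail (recommenders, search, social filtering) unlock attention that a broadcast channel focused on the head would never reach.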

Relevance:

30.00%

Publisher:

Abstract:

IT systems play an important role in an organization's business. Because an organization's business requirements and strategy change with the surrounding world, the system architecture must adapt to the prevailing situation and to possible changes in the short and long term. The architecture of a modern web application adapts to the organization's business challenges. Windows applications in particular become an administrative problem in an organization, because their maintenance ties up personnel resources and their usage context is limited. For this reason, organizations have begun to look for solutions for replacing Windows applications with web applications. A cost-effective solution is to modernize the user interface of a Windows application into a web application. The goal of this Master's thesis was to produce a reference architecture for Logica Suomi Oy for modernizing the user interface of a Windows application into a web application. The work was carried out in a Proof of Concept project in which Logica's administrator application was modernized. The purpose of the work was to identify widely used architectural patterns and methods that make the modernization feasible, as well as the methods and software that enable the cost-effective development and implementation of a high-quality web application. A secondary goal was to produce the enterprise architecture of the administrator application being modernized. The result of the work is a reference architecture that can be used in software development projects, customer documentation, sales, and marketing. The reference architecture presents modern web technologies with which it is possible to implement a web application whose user experience matches that of a Windows application. In addition, the enterprise architecture of the administrator application was produced; its most important results are the target state of the modernization and the application architecture.
The most important follow-up actions are the creation of a modernization framework based on the reference architecture, and the definition, design, and implementation of the metrics used to evaluate the modernization project. With relevant metrics it can be determined whether the modernized application meets the organization's business requirements and strategy.

Relevance:

30.00%

Publisher:

Abstract:

Potentiometric ion sensors are an important subgroup of electrochemical sensors, very attractive for practical applications due to their small size, portability, low energy consumption, relatively low cost, and the fact that they do not change the sample composition. They are investigated by researchers from many fields of science. The continuous development of this field creates the need for a detailed description of the sensor response and of the electrochemical processes important in the practical applications of ion sensors. The aim of this thesis is to present the existing models available for the description of potentiometric ion sensors, together with their applicability and limitations. This includes the description of the diffusion potential occurring at reference electrodes. The wide range of existing models is discussed, from the most idealised phase-boundary models to the most general models, which include migration. This work concentrates on the advanced modelling of ion sensors, namely the Nernst-Planck-Poisson (NPP) model, which is the most general of the presented models and therefore the most widely applicable. It allows the modelling of the transport processes that occur in ion sensors and generate the potentiometric response. Details of the solution of the NPP model (including the numerical methods used) are shown, and comparisons between NPP and the more idealised models are presented. The applicability of the model to describing the formation of the diffusion potential in the reference electrode, the lower detection limit of both ion-exchanger and neutral-carrier electrodes, and the effect of complexation in the membrane is discussed. The model was applied to the description of both types of electrodes, i.e. electrodes with an inner filling solution and solid-contact electrodes. The NPP model also allows electrochemical methods other than potentiometry to be described.
The application of this model in Electrochemical Impedance Spectroscopy is discussed and a possible use in chronopotentiometry is indicated. By combining the NPP model with evolutionary algorithms, namely the Hierarchical Genetic Strategy (HGS), a novel method facilitating the design of ion sensors was created. It is described in detail in this thesis and its possible applications in the field of ion sensors are indicated. Finally, some interesting effects occurring in ion sensors (i.e. the overshoot response and the influence of anionic sites), as well as possible applications of NPP in biochemistry, are described.
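For orientation, the core of the NPP model in its standard one-dimensional form (textbook notation, not quoted from the thesis itself) couples the Nernst-Planck flux equation with a continuity equation and the Poisson equation:

```latex
% Nernst-Planck flux of species i (diffusion + migration):
J_i(x,t) = -D_i \left( \frac{\partial c_i}{\partial x}
         + \frac{z_i F}{R T}\, c_i \,\frac{\partial \varphi}{\partial x} \right)

% Mass balance (continuity):
\frac{\partial c_i}{\partial t} = -\frac{\partial J_i}{\partial x}

% Poisson equation linking the potential to the space charge:
\frac{\partial^2 \varphi}{\partial x^2} = -\frac{F}{\varepsilon} \sum_i z_i c_i
```

Here $D_i$, $c_i$ and $z_i$ are the diffusion coefficient, concentration and charge number of species $i$, $\varphi$ is the electric potential, and $\varepsilon$ the permittivity of the medium. The idealised phase-boundary models mentioned above can be seen as limits of this system in which migration and space charge are neglected.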

Relevance:

30.00%

Publisher:

Abstract:

Statistical analyses of measurements that can be described by statistical models are essential in astronomy and in scientific inquiry in general. The sensitivity of such analyses and modelling approaches, and of the consequent predictions, sometimes depends strongly on the exact techniques applied, and improvements therein can result in a significantly better understanding of the observed system of interest. In particular, optimising the sensitivity of statistical techniques in detecting the faint signatures of low-mass planets orbiting nearby stars is, together with improvements in instrumentation, essential for estimating the properties of the population of such planets, and in the race to detect Earth analogues, i.e. planets that could support liquid water and, perhaps, life on their surfaces. We review the developments in Bayesian statistical techniques applicable to the detection of planets orbiting nearby stars and to astronomical data-analysis problems in general. We discuss these techniques and demonstrate their usefulness through various examples and detailed descriptions of the mathematics involved. We demonstrate the practical aspects of Bayesian statistical techniques by describing several algorithms and numerical techniques, as well as theoretical constructions, for the estimation of model parameters and for hypothesis testing. We also apply these algorithms to Doppler measurements of nearby stars to show how they can be used in practice to obtain as much information as possible from noisy data. Bayesian statistical techniques are powerful tools for analysing and interpreting noisy data and should be preferred in practice whenever computational limitations are not too restrictive.
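A minimal sketch of the kind of Bayesian parameter estimation discussed here (toy data and a plain grid posterior; the thesis uses far more capable algorithms): a sinusoidal radial-velocity signal is recovered from noisy Doppler-style measurements by evaluating a Gaussian likelihood over a uniform prior grid of orbital periods.

```python
import numpy as np

# Toy radial-velocity data: a 12-day, 5 m/s sinusoid plus Gaussian noise.
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 100.0, 60))          # observation epochs [days]
true_period, amplitude, sigma = 12.0, 5.0, 1.0    # assumed toy parameters
v = amplitude * np.sin(2 * np.pi * t / true_period) + rng.normal(0, sigma, t.size)

# Grid posterior over the period with a uniform prior.
periods = np.linspace(5.0, 30.0, 2001)
log_post = np.empty_like(periods)
for i, p in enumerate(periods):
    model = amplitude * np.sin(2 * np.pi * t / p)
    log_post[i] = -0.5 * np.sum((v - model) ** 2) / sigma**2  # Gaussian log-likelihood

post = np.exp(log_post - log_post.max())
post /= post.sum()                                # discrete posterior over the grid
best = periods[np.argmax(post)]
print(f"MAP period: {best:.2f} days")
```

Real analyses replace the grid with posterior samplings (e.g. Markov chain Monte Carlo) and marginalise over amplitude, phase and noise parameters, but the logic of weighing models by their likelihood under a prior is the same.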

Relevance:

30.00%

Publisher:

Abstract:

This paper presents a kinematic study of robotic biped locomotion systems. The main purpose is to determine the kinematic characteristics and the system performance during walking. For that objective, the prescribed motion of the biped is completely characterised in terms of five locomotion variables: step length, hip height, maximum hip ripple, maximum foot clearance and link lengths. In this work, we propose four methods to quantitatively measure the performance of the walking robot: energy analysis, perturbation analysis, low-pass frequency response and the locomobility measure. These performance measures are discussed and compared in determining the robustness and effectiveness of the resulting locomotion.

Relevance:

30.00%

Publisher:

Abstract:

The increasing complexity of the controller systems applied in modern passenger cars requires adequate simulation tools. The toolset FASIM_C++, described in the following, uses complex vehicle models for three-dimensional vehicle dynamics simulation. The structure of the implemented dynamic models and the generation of the equations of motion using the method of kinematic differentials are explained briefly. After a short introduction to methods of event handling, several vehicle models and applications such as controller development, roll-over simulation and real-time simulation are explained. Finally, some simulation results are presented.

Relevance:

30.00%

Publisher:

Abstract:

A web service is a software system that provides a machine-processable interface to other machines over the network using Internet protocols. Web services are increasingly used in industry to automate tasks and to offer services to a wider audience. The REST architectural style aims at producing scalable and extensible web services using technologies that play well with the existing tools and infrastructure of the web. It provides a uniform set of operations that can be used to invoke the CRUD interface (create, retrieve, update and delete) of a web service. The stateless behavior of the service interface requires that every request to a resource is independent of the previous ones, which facilitates scalability. Automated systems, e.g. hotel reservation systems, provide advanced scenarios for stateful services that require a certain sequence of requests to be followed in order to fulfill the service goals. Designing and developing such services for advanced scenarios under REST constraints requires rigorous approaches capable of creating web services that can be trusted for their behavior. Systems that can be trusted for their behavior can be termed dependable systems. This thesis presents an integrated design, analysis and validation approach that helps the service developer to create dependable and stateful REST web services. The main contribution of this thesis is a novel model-driven methodology for designing behavioral REST web service interfaces and their compositions. The behavioral interfaces provide information on which methods can be invoked on a service and on the pre- and post-conditions of these methods. The methodology uses the Unified Modeling Language (UML) as the modeling language; UML has a wide user base and mature tools that are continuously evolving.
We have used the UML class diagram and the UML state machine diagram, with additional design constraints, to provide resource and behavioral models, respectively, for designing REST web service interfaces. These service design models serve as a specification document, and the information presented in them has manifold applications. The service design models also contain information about the time and domain requirements of the service, which helps in requirement traceability, an important part of our approach. Requirement traceability helps capture faults in the design models and in other elements of the software development environment by tracing the unfulfilled requirements of the service back and forth. Information about service actors is also included in the design models; it is required for authenticating service requests by authorized actors, since not all types of users have access to all resources. In addition, following our design approach, the service developer can ensure that the designed web service interfaces are REST compliant. The second contribution of this thesis is the consistency analysis of the behavioral REST interfaces. To overcome inconsistency problems and design errors in our service models, we have used semantic technologies. The REST interfaces are represented in the web ontology language OWL 2, so that they can be part of the semantic web. These interfaces are used with OWL 2 reasoners to check for unsatisfiable concepts, which result in implementations that fail. This work is fully automated thanks to the implemented translation tool and the existing OWL 2 reasoners. The third contribution of this thesis is the verification and validation of REST web services. We have used model-checking techniques with the UPPAAL model checker for this purpose.
The timed automata of the UML-based service design models are generated with our transformation tool and verified for basic characteristics such as deadlock freedom, liveness, reachability and safety. The implementation of a web service is tested using a black-box testing approach. Test cases are generated from the UPPAAL timed automata, and using the online testing tool UPPAAL TRON, the service implementation is validated at runtime against its specification. Requirement traceability is also addressed in our validation approach, with which we can see which service goals are met and trace the unfulfilled service goals back to faults in the design models. A final contribution of the thesis is an implementation of behavioral REST interfaces and service monitors from the service design models. The partial code generation tool creates code skeletons of REST web services with method pre- and post-conditions. The pre-conditions of methods constrain the user to invoke the stateful REST service under the right conditions, and the post-conditions constrain the service developer to implement the right functionality. The details of the methods can be inserted manually by the developer as required. We do not target complete automation because we focus only on the interface aspects of the web service. The applicability of the approach is demonstrated with a pedagogical example of a hotel room booking service and a relatively complex worked example of a holiday booking service taken from an industrial context. The former example presents a simple explanation of the approach, and the latter shows how stateful and timed web services offering complex scenarios and involving other web services can be constructed using our approach.

Relevance:

30.00%

Publisher:

Abstract:

The purpose of this work is to obtain a better understanding of the possible effects of applying ultrasound to the mixing of fluid media. The research covers both Newtonian and non-Newtonian fluids. The application of ultrasound to liquids is modelled in the COMSOL Multiphysics software, where the influence of the ultrasound is introduced through a wave equation. Turbulence in the Newtonian fluid is modelled with the k-ε model, while the modelling of ultrasound-assisted mixing in non-Newtonian fluids is based on the power law. To verify the modelling results, two experimental methods are used: Particle Image Velocimetry and measurements of mixing time. Particle Image Velocimetry captures the velocity flow field continuously and presents a detailed depiction of the liquid dynamics. The second means of verification is the comparison of the mixing time required to reach homogeneity. Experimentally, the mixing time is determined by conductivity measurements; in the modelling part, it is obtained with a dedicated COMSOL Multiphysics module, the transport of diluted species. Both the experimental and the modelling parts show a similar radial mechanism of fluid flow under ultrasound: fluid moves from the horn tip to the bottom and returns along the walls. The velocity profiles from the model and the experiments agree in the case of the Newtonian fluid; in the case of the non-Newtonian fluid they do not. A development track for the modelling of ultrasound-assisted mixing is presented in the thesis.
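For reference, the power law mentioned above is the standard Ostwald-de Waele model relating shear stress to shear rate (textbook form, not reproduced from the thesis):

```latex
\tau = K \dot{\gamma}^{\,n},
\qquad
\mu_{\mathrm{app}} = K \dot{\gamma}^{\,n-1}
```

where $\tau$ is the shear stress, $\dot{\gamma}$ the shear rate, $K$ the consistency index and $n$ the flow behaviour index: $n < 1$ gives a shear-thinning fluid, $n = 1$ recovers a Newtonian fluid with viscosity $K$, and $n > 1$ a shear-thickening fluid. The strong dependence of the apparent viscosity $\mu_{\mathrm{app}}$ on the local shear rate is one reason the non-Newtonian velocity profiles are harder to reproduce in the model.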

Relevance:

30.00%

Publisher:

Abstract:

Context: Web services have been gaining popularity due to the success of service-oriented architecture and cloud computing. Web services offer service developers a tremendous opportunity to publish their services and applications beyond the boundaries of their organization or company. To fully exploit these opportunities, however, an efficient discovery mechanism is needed, and Web service discovery mechanisms have therefore attracted considerable attention in Semantic Web research. Yet there have been no literature surveys that systematically map the existing research, so the overall impact of these research efforts and the level of maturity of their results are still unclear. This thesis aims at providing an overview of the current state of research into Web service discovery mechanisms using a systematic mapping study. The work is based on papers published from 2004 to 2013 and elaborates various aspects of the analysed literature, including a classification in terms of the architectures, frameworks and methods used for Web service discovery. Objective: The objective of this work is to summarize the current knowledge available on Web service discovery mechanisms and to systematically identify and analyse the currently published research in order to identify the different approaches presented. Method: A systematic mapping study has been employed to assess the various Web service discovery approaches presented in the literature. Systematic mapping studies are useful for categorizing and summarizing the level of maturity of a research area. Results: The results indicate that numerous approaches are consistently being researched and published in this field. In terms of where this research is published, conferences are the major publishing arena, as 48% of the selected papers were published at conferences, illustrating the level of maturity of the research topic.
Additionally, the 52 selected papers are categorized into two broad segments, namely functional and non-functional approaches, taking into consideration architectural aspects and information retrieval approaches, semantic matching, syntactic matching, behaviour-based matching, as well as QoS and other constraints.

Relevance:

30.00%

Publisher:

Abstract:

Human activity recognition in everyday environments is a critical but challenging task in Ambient Intelligence applications to achieve proper Ambient Assisted Living, and key challenges still remain to be dealt with to realize robust methods. One of the major limitations of Ambient Intelligence systems today is the lack of semantic models of the activities in the environment, so that the system can recognize the specific activity being performed by the user(s) and act accordingly. In this context, this thesis addresses the general problem of knowledge representation in Smart Spaces. The main objective is to develop knowledge-based models, equipped with semantics, to learn, infer and monitor human behaviours in Smart Spaces. Moreover, some aspects of this problem clearly have a high degree of uncertainty, and therefore the developed models must be equipped with mechanisms to manage this type of information. A fuzzy ontology and a semantic hybrid system are presented to allow the modelling and recognition of a set of complex real-life scenarios where vagueness and uncertainty are inherent to the human nature of the users that perform them. The handling of uncertain, incomplete and vague data (i.e., missing sensor readings and activity execution variations, since human behaviour is non-deterministic) is approached for the first time through a fuzzy ontology validated in real-time settings within a hybrid data-driven and knowledge-based architecture. The semantics of activities, sub-activities and real-time object interaction are taken into consideration. The proposed framework consists of two main modules: the low-level sub-activity recognizer and the high-level activity recognizer. The first module detects sub-activities (i.e., actions or basic activities) and takes its input data directly from a depth sensor (Kinect).
The main contribution of this thesis tackles the second component of the hybrid system, which lies on top of the previous one at a higher level of abstraction; it acquires its input from the first module's output and executes ontological inference to provide users, activities and their influence on the environment with semantics. This component is thus knowledge-based, and a fuzzy ontology was designed to model the high-level activities. Since activity recognition requires context-awareness and the ability to discriminate among activities in different environments, the semantic framework allows the modelling of common-sense knowledge in the form of a rule-based system that supports expressions close to natural language, in the form of fuzzy linguistic labels. The advantages of the framework have been evaluated on a challenging new public dataset, CAD-120, achieving an accuracy of 90.1% and 91.1% for low- and high-level activities, respectively. This entails an improvement over both entirely data-driven approaches and merely ontology-based approaches. As an added value, so that the system is sufficiently simple and flexible to be managed by non-expert users, and thus facilitates the transfer of research to industry, a development framework was built, composed of a programming toolbox, a hybrid crisp and fuzzy architecture, and graphical models to represent and configure human behaviour in Smart Spaces, in order to give the framework more usability in the final application. As a result, human behaviour recognition can help assist people with special needs, for example in healthcare, independent elderly living, remote rehabilitation monitoring, industrial process guideline control, and many other cases. This thesis shows use cases in these areas.

Relevance:

30.00%

Publisher:

Abstract:

Successful management of rivers requires an understanding of the fluvial processes that govern them. This, in turn, cannot be achieved without a means of quantifying their geomorphology and hydrology and the spatio-temporal interactions between them, that is, their hydromorphology. For a long time it has been laborious and time-consuming to measure river topography, especially in the submerged part of the channel. The measurement of the flow field has been challenging as well, and hence such measurements have long been sparse in natural environments. Technological advances in the field of remote sensing in recent years have opened up new possibilities for capturing synoptic information on river environments. This thesis presents new developments in the fluvial remote sensing of both topography and water flow. A set of close-range remote sensing methods is employed to construct a high-resolution unified empirical hydromorphological model, that is, the river channel and floodplain topography together with the three-dimensional areal flow field. Empirical as well as hydraulic-theory-based optical remote sensing methods are tested and evaluated using normal colour aerial photographs and sonar calibration and reference measurements on a rocky-bed sub-Arctic river. The empirical optical bathymetry model is developed further by the introduction of a deep-water radiance parameter estimation algorithm that extends the field of application of the model to shallow streams. The effect of this parameter on the model is also assessed in a study of a sandy-bed sub-Arctic river using close-range high-resolution aerial photography, presenting one of the first examples of fluvial bathymetry modelling from unmanned aerial vehicles (UAVs). Further close-range remote sensing methods are added to complete the topography, integrating the river bed with the floodplain to create a seamless high-resolution topography.
Boat-, cart- and backpack-based mobile laser scanning (MLS) is used to measure the topography of the dry part of the channel at high resolution and accuracy. Multitemporal MLS is evaluated, along with UAV-based photogrammetry, against terrestrial laser scanning reference data, and merged with UAV-based bathymetry to create a two-year series of seamless digital terrain models. These allow the evaluation of the methodology for conducting high-resolution change analysis of the entire channel. The remote-sensing-based model of hydromorphology is completed by a new methodology for mapping the flow field in 3D. An acoustic Doppler current profiler (ADCP) is deployed on a remote-controlled boat with a survey-grade global navigation satellite system (GNSS) receiver, allowing the positioning of the areally sampled 3D flow vectors in 3D space as a point cloud; its interpolation into a 3D matrix allows a quantitative volumetric flow analysis. Multitemporal areal 3D flow field data show the evolution of the flow field during a snow-melt flood event. The combination of the underwater and dry topography with the flow field yields a complete model of river hydromorphology at the reach scale.
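The step from a scattered point cloud of flow vectors to a volumetric flow matrix can be sketched as follows (inverse-distance weighting is an assumed, simple choice for illustration, not necessarily the interpolation method used in the thesis):

```python
import numpy as np

# Simplified sketch: interpolating scattered, GNSS-positioned ADCP flow
# vectors onto a regular 3D grid node by inverse-distance weighting,
# one building block of a volumetric (3D-matrix) flow analysis.
def idw_vector(points, vectors, query, power=2.0, eps=1e-12):
    """Inverse-distance-weighted 3D flow vector at one grid node."""
    d = np.linalg.norm(points - query, axis=1)   # distances to all samples
    w = 1.0 / (d + eps) ** power                 # closer samples weigh more
    w /= w.sum()
    return w @ vectors                           # weighted average of samples

rng = np.random.default_rng(1)
pts = rng.uniform(0, 10, (200, 3))               # sampled positions (x, y, z) [m]
vec = np.tile([0.8, 0.1, 0.0], (200, 1))         # a uniform 0.8 m/s downstream flow
v = idw_vector(pts, vec, np.array([5.0, 5.0, 1.0]))
print(v)                                          # ≈ [0.8, 0.1, 0.0]
```

Repeating this for every node of a regular grid yields the 3D matrix on which quantitative volumetric analyses, such as discharge through a cross-section, can be computed.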

Relevance:

30.00%

Publisher:

Abstract:

Rough turning is an important method of manufacturing cylinder-symmetric parts. Thus far, increasing the level of automation in rough turning has involved process monitoring methods or adaptive turning control methods that aim to keep the process conditions constant. However, in order to improve process safety, quality and efficiency, adaptive turning control should be transformed into an intelligent machining system that optimizes the cutting values to match the process conditions, or that actively seeks to improve them. In this study, primary and secondary chatter and chip formation are studied to understand how to measure the effect of these phenomena on the process conditions and how to avoid undesired cutting conditions. The concept of the cutting state is used to address the combination of these phenomena and the current use of the power capacity of the lathe. The measures of the phenomena are not based on physical quantities; instead, their severity is modelled against expert opinion. Based on the concept of the cutting state, an expert-system-style fuzzy control system capable of optimizing the cutting process was created. Important aspects of the system include the capability to adapt to several cutting phenomena appearing at once, even when those phenomena would potentially require conflicting control actions.
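An illustrative sketch of an expert-system-style fuzzy controller (the membership functions, rules and output values below are invented for demonstration; the study models the actual severities against expert opinion):

```python
# Invented example: fuzzy severities of chatter and chip formation combined
# into one cutting-state score that scales the feed rate.
def tri(x, a, b, c):
    """Triangular membership function on [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def cutting_state(chatter, chip_severity):
    # rule 1: IF chatter is high OR chip form is poor THEN state is bad
    bad = max(tri(chatter, 0.4, 1.0, 1.6), tri(chip_severity, 0.4, 1.0, 1.6))
    # rule 2: IF both measures are low THEN state is good
    good = min(tri(chatter, -0.6, 0.0, 0.6), tri(chip_severity, -0.6, 0.0, 0.6))
    # defuzzify to a feed-rate multiplier: 0.5 (back off) .. 1.2 (push harder)
    return (0.5 * bad + 1.2 * good) / (bad + good + 1e-12)

print(cutting_state(0.1, 0.1))  # mild conditions: multiplier near 1.2
print(cutting_state(0.9, 0.2))  # strong chatter: multiplier near 0.5
```

Because both rules fire with graded strength, the controller degrades smoothly between "push" and "back off" even when the phenomena pull in opposite directions, which is the conflict-resolution property highlighted in the abstract.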

Relevance:

30.00%

Publisher:

Abstract:

In today's complex and volatile business environment, companies that are able to turn the operational data they produce into data warehouses can gain a significant competitive advantage. Using predictive analytics to anticipate future trends enables companies to identify the key factors that differentiate them from their competitors. Using predictive analytics as part of the decision-making process enables more agile, real-time decision making. The purpose of this Master's thesis is to assemble a theoretical framework for analytics modelling from the perspective of a business end user, and to apply this modelling process to the case company of the thesis. The theoretical model was used in modelling customer relationships and in identifying predictive factors for sales forecasting. The work was carried out for a Finnish wholesaler of industrial filters with business operations in Finland, Russia and the Baltic countries. This study is a quantitative case study in which the most important data collection method was the transaction data of the case company. The data was obtained from the company's ERP system.

Relevance:

30.00%

Publisher:

Abstract:

Human beings have always strived to preserve their memories and spread their ideas. In the beginning this was always done through human interpretation, such as telling stories and creating sculptures. Later, technological progress made it possible to create a recording of a phenomenon: first as an analogue recording on a physical object, and later digitally, as a sequence of bits to be interpreted by a computer. By the end of the 20th century, technological advances had made it feasible to distribute media content over a computer network instead of on physical objects, thus enabling the concept of digital media distribution. Many digital media distribution systems already exist, and their continued, and in many cases increasing, usage indicates a high interest in their future enhancement and enrichment. By looking at these digital media distribution systems, we have identified three main areas of possible improvement: the network structure and its coordination, the transport of content over the network, and the encoding used for the content. In this thesis, our aim is to show that improvements in performance, efficiency and availability can be made in conjunction with improvements in software quality and reliability through the use of formal methods: mathematical approaches to reasoning about software that allow us to prove its correctness, together with other desirable properties. We envision a complete media distribution system based on a distributed architecture, such as peer-to-peer networking, in which the different parts of the system have been formally modelled and verified. Starting with the network itself, we show how it can be formally constructed and modularised in the Event-B formalism, such that the modelling of one node is separated from the modelling of the network itself. We also show how the piece selection algorithm in the BitTorrent peer-to-peer transfer protocol can be adapted for on-demand media streaming, and how this adaptation can be modelled in Event-B.
Furthermore, we show how modelling one peer in Event-B can give results similar to simulating an entire network of peers. Going further, we introduce a formal specification language for content transfer algorithms and show that having such a language can make these algorithms easier to understand. We also show how generating Event-B code from this language can result in less complexity compared to creating the models from written specifications. We also consider the decoding part of a media distribution system by showing how video decoding can be done in parallel. This is based on formally defined dependencies between frames and blocks in a video sequence; we have shown that this step, too, can be performed in a way that is mathematically proven correct. The modelling and proving in this thesis is, for the most part, tool-based. This demonstrates the advance of formal methods as well as their increased reliability, and thus advocates their more widespread usage in the future.
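The streaming adaptation of piece selection can be sketched as follows (the priority scheme below is a simplified, invented stand-in for the algorithm modelled in Event-B): pieces immediately ahead of the playback position are fetched strictly in order to meet their deadlines, while pieces further out fall back to BitTorrent's rarest-first heuristic.

```python
# Hypothetical sketch of BitTorrent-style piece selection adapted for
# on-demand streaming: deadline-driven near the playhead, rarest-first beyond.
def next_piece(have, playback_pos, window, rarity, urgent=2):
    """Pick the next piece index to request, or None if the window is done."""
    candidates = [
        p for p in range(playback_pos, min(playback_pos + window, len(rarity)))
        if p not in have
    ]
    if not candidates:
        return None
    # pieces within `urgent` of the playhead go strictly in order; the rest
    # are ranked by how few peers hold them, as in stock BitTorrent
    return min(candidates,
               key=lambda p: (0, p) if p - playback_pos < urgent
                             else (1, rarity[p]))

rarity = [3, 1, 4, 1, 5, 9, 2, 6]              # peers holding each piece
print(next_piece({0, 2}, 0, 6, rarity))        # → 1 (urgent: right at the playhead)
print(next_piece({0, 1, 2}, 0, 6, rarity))     # → 3 (past the urgent zone: rarest)
```

Formal modelling of such a selector would state and prove properties like "an urgent piece is never skipped in favour of a rare one", which is exactly the kind of invariant Event-B refinement is suited to.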

Relevance:

30.00%

Publisher:

Abstract:

The theoretical part of the thesis examined process re-engineering, process modelling and the construction of process metrics. The goal of the work was to redesign the organization's certification process. To reach this goal, the current and the new process had to be modelled, and a set of metrics had to be built that would give the organization valuable information on how efficiently the new process works. The work was carried out as participatory action research. The author of the thesis had worked in the target organization for several years and could therefore draw on his own knowledge both in modelling the current process and in designing the new one. The result of the work is a new certification process that is leaner and more efficient than its predecessor. A new metrics system was built with which the organization's management can follow the efficiency of the process stakeholders and the development of product quality. As a by-product, the organization received detailed process descriptions that can be used as training material when recruiting new personnel and as an informative tool when presenting the process to official certification bodies.