904 results for "encoding of measurement streams"
Abstract:
PURPOSE: All kinds of blood manipulation aim to increase total hemoglobin mass (tHb-mass). To establish tHb-mass as an effective screening parameter for detecting blood doping, knowledge of its normal variation over time is necessary. The aim of the present study, therefore, was to determine the intraindividual variance of tHb-mass in elite athletes during a training year spanning off-season, training, and racing seasons at sea level. METHODS: tHb-mass and hemoglobin concentration ([Hb]) were determined in 24 endurance athletes five times during a year and were compared with a control group (n = 6). An analysis of covariance was used to test the effects of training phases, age, gender, competition level, body mass, and training volume. Three error models were tested, based on 1) a total percentage error of measurement, 2) the combination of a typical percentage error (TE) of analytical origin with an absolute SD of biological origin, and 3) between-subject and within-subject variance components as obtained by an analysis of variance. RESULTS: In addition to the expected influence of performance status, the main results were that the effects of training volume (P = 0.20) and training phases (P = 0.81) on tHb-mass were not significant. We found that within-subject variation mainly has an analytical origin (TE approximately 1.4%) with a very small SD (7.5 g) of biological origin. CONCLUSION: tHb-mass shows very low individual oscillation during a training year (<6%), below the changes in tHb-mass expected from erythropoietin (EPO) application or blood infusion (approximately 10%). The high stability of tHb-mass over a period of 1 year suggests that it should be included in an athlete's biological passport and analyzed by recently developed probabilistic inference techniques that define subject-based reference ranges.
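The abstract's second error model adds an analytical component (a typical percentage error, TE) and a biological component (an absolute SD) in quadrature. A minimal sketch of that combination, assuming the reported TE of about 1.4% and biological SD of 7.5 g; the baseline tHb-mass is invented for illustration:

```python
# Sketch of error model 2: analytical TE (percentage) and biological SD
# (absolute, in g) combined in quadrature into one within-subject SD.
# TE = 1.4% and SD = 7.5 g are from the abstract; the 900 g baseline is invented.
import math

def within_subject_sd(thb_mass_g: float, te_percent: float = 1.4,
                      biological_sd_g: float = 7.5) -> float:
    """Total within-subject SD: analytical and biological parts add in quadrature."""
    analytical_sd_g = thb_mass_g * te_percent / 100.0
    return math.sqrt(analytical_sd_g ** 2 + biological_sd_g ** 2)

baseline = 900.0                  # hypothetical tHb-mass of an elite athlete, in g
sd = within_subject_sd(baseline)  # about 15 g for a 900 g baseline
change = 0.10 * baseline          # ~10% change expected from EPO or blood infusion
print(f"within-subject SD: {sd:.1f} g; z-score of a 10% change: {change / sd:.1f}")
```

On these assumptions a 10% change sits roughly six within-subject SDs above normal variation, which is the sense in which the abstract calls the expected doping effect well above the individual oscillation.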
Abstract:
The impact of Information Technology (IT) is clear to company stakeholders today; measuring the value IT generates, however, is a real challenge. Chief Information Officers (CIOs) explain the absence of solid IT business plans and of a clear mid- and long-term vision by a lack of time and resources, but also by a lack of involvement of senior business management (e.g., CEOs and CFOs). Unable to measure the economic value of the information system precisely, the CIO faces a logic of management by costs, with permanent pressure to cut and justify IT spending and investments. Conversely, a measure of the economic value of IT would give CIOs and senior business management the material to genuinely assess the maturity and contribution of their information system and thereby facilitate decision making. The objective of this thesis is to assess the alignment of IT with business strategy, the quality of performance measurement of the information system, and the organization and positioning of the IT function within the company. These three key elements of IT governance were measured through two successive survey waves, conducted in 2000/2001 (CIOs) and 2002/2003 (CIOs and CEOs) in French-speaking Europe (French-speaking Switzerland (Suisse Romande), France, Belgium, and Luxembourg).
Abstract:
Using the PCL:SV, the study assesses 18 subjects charged with offences including robbery with intimidation, homicide, serial rape, murder, bodily harm, and sexual harassment. Descriptive results of the administered PCL:SV are presented: scores, means, and the distribution of the measure.
Abstract:
Given the important role of the shoulder sensorimotor system in shoulder stability, its assessment appears of interest. Force platform monitoring of the centre of pressure (CoP) in upper-limb weight-bearing positions is of interest as it allows integration of all aspects of shoulder sensorimotor control. This study aimed to determine the feasibility and reliability of shoulder sensorimotor control assessment by force platform. Forty-five healthy subjects performed two sessions of CoP measurement using a Win-Posturo(®) Medicapteurs force platform in an upper-limb weight-bearing position, with the lower limbs resting on a table up to either the anterior superior iliac spines (P1) or the upper patellar poles (P2). Four different conditions were tested in each position in random order: eyes open or eyes closed with the trunk supported by both hands, and eyes open with the trunk supported on the dominant or non-dominant side. P1 reliability values were globally moderate to high for CoP length, CoP velocity and CoP standard deviation (SD), with standard errors of measurement ranging from 6.0% to 26.5%, except for CoP area. P2 reliability values were globally low and not clinically acceptable. Our results suggest that shoulder sensorimotor control assessment by force platform is feasible and has good reliability in upper-limb weight-bearing positions when the lower limbs are resting on a table up to the anterior superior iliac spines. CoP length, CoP velocity and CoP SD appear to be the most reliable variables.
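For illustration, a hedged sketch of how the CoP variables named above (path length, mean velocity, SD of displacement) are conventionally computed from a sampled CoP trajectory; the sampling rate, units, and simulated trace are assumptions, not data or code from this study:

```python
# Conventional posturography summaries of a CoP trace: total path length,
# mean velocity, and SD of displacement. The 100 Hz rate and the random-walk
# trace are illustrative assumptions only.
import numpy as np

def cop_variables(xy: np.ndarray, fs: float) -> dict:
    """xy: (n_samples, 2) CoP coordinates in mm; fs: sampling rate in Hz."""
    steps = np.diff(xy, axis=0)                        # per-sample displacements
    path_length = np.linalg.norm(steps, axis=1).sum()  # total CoP length, mm
    duration = (len(xy) - 1) / fs
    return {
        "length_mm": path_length,
        "velocity_mm_s": path_length / duration,       # mean CoP velocity
        "sd_mm": xy.std(axis=0).mean(),                # mean SD over x and y
    }

rng = np.random.default_rng(0)
trace = np.cumsum(rng.normal(0, 0.2, size=(3000, 2)), axis=0)  # fake 30 s @ 100 Hz
print(cop_variables(trace, fs=100.0))
```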
Abstract:
Deduction allows us to draw consequences from previous knowledge. Deductive reasoning can be applied to several types of problem, for example, conditional, syllogistic, and relational. It has been assumed that the same cognitive operations underlie solutions to them all; however, this hypothesis remains to be tested empirically. We used event-related fMRI, in the same group of subjects, to compare reasoning-related activity associated with conditional and syllogistic deductive problems. Furthermore, we assessed reasoning-related activity for the two main stages of deduction, namely encoding of premises and their integration. Encoding syllogistic premises for reasoning was associated with greater activation of BA 44/45 than encoding them for literal recall. During integration, left fronto-lateral cortex (BA 44/45, 6) and the basal ganglia were activated by both conditional and syllogistic reasoning. Integration of syllogistic problems was additionally associated with activation of left parietal (BA 7) and left ventro-lateral frontal cortex (BA 47). This difference suggests a dissociation between conditional and syllogistic reasoning at the integration stage. Our findings indicate that the integration of conditional and syllogistic premises is carried out by different, though partly overlapping, sets of anatomical regions and, by inference, by different cognitive processes. The involvement of BA 44/45 during both encoding (syllogisms) and premise integration (syllogisms and conditionals) suggests a central role for syntactic manipulation and formal/linguistic representations in deductive reasoning.
Abstract:
We did a subject-level meta-analysis of the changes (Δ) in blood pressure (BP) observed 3 and 6 months after renal denervation (RDN) at 10 European centers. Recruited patients (n=109; 46.8% women; mean age 58.2 years) had essential hypertension confirmed by ambulatory BP. From baseline to 6 months, treatment score declined slightly from 4.7 to 4.4 drugs per day. Systolic/diastolic BP fell by 17.6/7.1 mm Hg for office BP, and by 5.9/3.5, 6.2/3.4, and 4.4/2.5 mm Hg for 24-h, daytime and nighttime BP (P≤0.03 for all). In 47 patients with 3- and 6-month ambulatory measurements, systolic BP did not change between these two time points (P≥0.08). Normalization was a systolic BP of <140 mm Hg on office measurement or <130 mm Hg on 24-h monitoring, and improvement was a fall of ≥10 mm Hg, irrespective of measurement technique. For office BP, at 6 months, normalization, improvement or no decrease occurred in 22.9, 59.6 and 22.9% of patients, respectively; for 24-h BP, these proportions were 14.7, 31.2 and 34.9%, respectively. Higher baseline BP predicted a greater BP fall at follow-up; higher baseline serum creatinine was associated with a lower probability of improvement of 24-h BP (odds ratio for a 20-μmol l(-1) increase, 0.60; P=0.05) and a higher probability of experiencing no BP decrease (OR, 1.66; P=0.01). In conclusion, BP responses to RDN include regression to the mean and remain to be consolidated in randomized trials based on ambulatory BP monitoring. For now, RDN should remain the last resort in patients in whom all other ways to control BP have failed, and it must be used cautiously in patients with renal impairment.
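The response categories above follow explicit rules, so they can be stated as a short sketch. This is an illustration assuming only what the abstract states (normalization: systolic BP <140 mm Hg office or <130 mm Hg on 24-h monitoring; improvement: a fall of ≥10 mm Hg), not the meta-analysis code, and the patient values are invented:

```python
# Response classification per the definitions quoted in the abstract.
# Thresholds: office <140 mm Hg, 24-h ambulatory <130 mm Hg; improvement
# is a systolic fall of at least 10 mm Hg regardless of technique.
def classify_response(baseline_sbp: float, followup_sbp: float,
                      method: str = "office") -> str:
    threshold = 140.0 if method == "office" else 130.0
    if followup_sbp < threshold:
        return "normalization"
    if baseline_sbp - followup_sbp >= 10.0:
        return "improvement"
    return "no decrease"

print(classify_response(168.0, 150.0, "office"))  # improvement (fell 18 mm Hg)
print(classify_response(152.0, 128.0, "24h"))     # normalization (<130 on 24-h)
```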
Abstract:
A review of nearly three decades of cross-cultural research shows that this domain still has to address several issues: the biases of data collection and sampling methods, the lack of clear and consensual definitions of constructs and variables, and measurement invariance issues that seriously limit the comparability of results across cultures. Indeed, a large majority of the existing studies are still based on the anthropological model, which compares two cultures and mainly uses convenience samples of university students. This paper stresses the need to incorporate a larger variety of regions and cultures in research designs, the necessity to theorize and identify a larger set of variables in order to describe a human environment, and the importance of overcoming methodological weaknesses to improve the comparability of measurement results. Cross-cultural psychology is at the next crossroads in its development, and researchers can certainly make major contributions to this domain if they can address these weaknesses and challenges.
Roadway Lighting and Safety: Phase II – Monitoring Quality, Durability and Efficiency, November 2011
Abstract:
This Phase II project follows a previous project titled Strategies to Address Nighttime Crashes at Rural, Unsignalized Intersections. Based on the results of the previous study, the Iowa Highway Research Board (IHRB) indicated interest in pursuing further research to address the quality of lighting, rather than just the presence of light, with respect to safety. The research team supplemented the literature review from the previous study, specifically addressing lighting level in terms of measurement, the relationship between light levels and safety, and lamp durability and efficiency. The Center for Transportation Research and Education (CTRE) teamed with a national research leader in roadway lighting, the Virginia Tech Transportation Institute (VTTI), to collect the data. An integral instrument in the data collection effort was the creation of the Roadway Monitoring System (RMS). The RMS allowed the research team to collect lighting data and approach information for each rural intersection identified in the previous phase. After data cleanup, the final data set contained illuminance data for 101 lighted intersections (of 137 lighted intersections in the first study). Data analysis included a robust statistical analysis based on Bayesian techniques. Average illuminance, average glare, and average uniformity ratio values were used to classify the quality of lighting at the intersections.
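As a hedged illustration of the lighting-quality summary mentioned above: the sketch below computes average illuminance and the common average-to-minimum uniformity ratio for a grid of measurements. The grid values are invented, and the study's actual Bayesian classification is not reproduced here:

```python
# Summary statistics commonly used to describe intersection lighting:
# average illuminance over a measurement grid and the avg:min uniformity
# ratio (lower means more uniform light). Values are illustrative.
def lighting_summary(illuminance_lux: list[float]) -> dict:
    avg = sum(illuminance_lux) / len(illuminance_lux)
    uniformity = avg / min(illuminance_lux)
    return {"avg_lux": round(avg, 2), "uniformity_ratio": round(uniformity, 2)}

# Hypothetical horizontal-illuminance grid (lux) measured across one intersection
grid = [12.4, 9.8, 7.1, 15.0, 11.2, 6.5, 13.3, 10.0]
print(lighting_summary(grid))
```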
Abstract:
Executive Summary The unifying theme of this thesis is the pursuit of satisfactory ways to quantify the risk-reward trade-off in financial economics: first in the context of a general asset pricing model, then across models, and finally across country borders. The guiding principle in that pursuit was to seek innovative solutions by combining ideas from different fields of economics and of broader scientific research. For example, in the first part of this thesis we sought a fruitful application of strong existence results in utility theory to topics in asset pricing. In the second part we apply an idea from fuzzy set theory to the optimal portfolio selection problem, while the third part is, to the best of our knowledge, the first empirical application of some general results on asset pricing in incomplete markets to the important topic of measuring financial integration. While the first two parts of this thesis effectively combine well-known ways to quantify risk-reward trade-offs, the third can be viewed as an empirical verification of the usefulness of the so-called "good deal bounds" theory in designing risk-sensitive pricing bounds. Chapter 1 develops a discrete-time asset pricing model based on a novel ordinally equivalent representation of recursive utility. To the best of our knowledge, we are the first to use a member of a novel class of recursive utility generators to construct a representative-agent model that addresses some long-standing issues in asset pricing. Applying strong representation results allows us to show that the model features countercyclical risk premia, for both consumption and financial risk, together with a low and procyclical risk-free rate. As the recursive utility used nests the well-known time-state separable utility as a special case, all results nest the corresponding ones from the standard model and thus shed light on its well-known shortcomings. The empirical investigation undertaken to support these theoretical results, however, showed that as long as one resorts to econometric methods based on approximating conditional moments with unconditional ones, it is not possible to distinguish the model we propose from the standard one. Chapter 2 is joint work with Sergei Sontchik. There we provide theoretical and empirical motivation for the aggregation of performance measures. The main idea is that, just as it makes sense to apply several performance measures ex post, it also makes sense to base optimal portfolio selection on ex-ante maximization of as many performance measures as desired. We thus offer a concrete algorithm for optimal portfolio selection via ex-ante optimization, over different horizons, of several risk-return trade-offs simultaneously. An empirical application of that algorithm, using seven popular performance measures, suggests that the realized returns feature better distributional characteristics than the realized returns of portfolio strategies that are optimal with respect to a single performance measure. When comparing the distributions of realized returns we used two partial risk-reward orderings: first- and second-order stochastic dominance.
We first used the Kolmogorov-Smirnov test to determine whether the two distributions are indeed different, which, combined with a visual inspection, allowed us to demonstrate that the proposed way of aggregating performance measures leads to realized portfolio returns that first-order stochastically dominate those resulting from optimization with respect to, for example, the Treynor ratio or Jensen's alpha alone. We checked for second-order stochastic dominance via pointwise comparison of the so-called absolute Lorenz curve, that is, the sequence of expected shortfalls over a range of quantiles. Since the plot of the absolute Lorenz curve for the aggregated performance measures was above the one corresponding to each individual measure, we were tempted to conclude that the algorithm we propose leads to a portfolio return distribution that second-order stochastically dominates those of virtually all performance measures considered. Chapter 3 proposes a measure of financial integration based on recent advances in asset pricing in incomplete markets. Given a base market (a set of traded assets) and an index of another market, we propose to measure financial integration through time by the size of the spread between the pricing bounds of the market index relative to the base market: the bigger the spread around country index A, viewed from market B, the less integrated markets A and B are. We investigate the presence of structural breaks in the size of the spread for EMU member-country indices before and after the introduction of the Euro. We find evidence that both the level and the volatility of our financial integration measure increased after the introduction of the Euro. That counterintuitive result suggests an inherent weakness in any attempt to measure financial integration independently of economic fundamentals. Nevertheless, the results about the bounds on the risk-free rate appear plausible from the viewpoint of existing economic theory about the impact of integration on interest rates.
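The two distributional checks described above are straightforward to sketch. The following is a minimal illustration under invented return data, not the thesis code: a two-sample Kolmogorov-Smirnov test, and a pointwise comparison of absolute Lorenz curves (cumulative expected shortfalls over a quantile grid) as a check for second-order stochastic dominance:

```python
# KS test for distributional difference, then pointwise absolute Lorenz curve
# comparison for second-order stochastic dominance. Return samples are fake.
import numpy as np
from scipy import stats

def absolute_lorenz(returns: np.ndarray, quantiles: np.ndarray) -> np.ndarray:
    """L(p): integrated quantile function, i.e. mean contribution of the worst
    p-fraction of returns. A second-order dominates B iff L_A >= L_B for all p."""
    sorted_r = np.sort(returns)
    n = len(sorted_r)
    return np.array([sorted_r[: max(1, int(p * n))].sum() / n for p in quantiles])

rng = np.random.default_rng(1)
agg = rng.normal(0.08, 0.10, 5000)     # returns from an aggregated strategy (fake)
single = rng.normal(0.05, 0.12, 5000)  # returns from a single-measure strategy (fake)

print(stats.ks_2samp(agg, single))     # are the two distributions different at all?
p_grid = np.linspace(0.05, 1.0, 20)
dominates = np.all(absolute_lorenz(agg, p_grid) >= absolute_lorenz(single, p_grid))
print("second-order dominance (pointwise Lorenz):", dominates)
```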
Abstract:
Stream channel erosion in the deep loess soils region of western Iowa causes severe damage along hundreds of miles of streams in twenty-two counties. The goal of this project was to develop information, systems, and procedures for use in making resource allocation decisions related to the protection of transportation facilities and farmland from damage caused by stream channel erosion. Section one of this report provides an introduction. Section two presents an assessment of stream channel conditions from aerial and field reconnaissance conducted in 1993 and 1994, and a classification of the streams based on a six-stage model of stream channel evolution; it also discusses a Geographic Information System developed to store and analyze data on stream conditions and affected infrastructure and to assist in planning stabilization measures. Section three evaluates two methods for predicting the extent of channel degradation. Section four estimates the costs of damage from stream channel erosion from the time of channelization until 1992: damage to highway bridges represents the highest cost, followed by railroad bridges and right-of-way, with loss of agricultural land third. Section four also estimates the costs of future channel erosion on western Iowa streams and presents a procedure to estimate the benefits and costs of implementing stream stabilization measures. The final section, section five, presents information on the development of the organizational structure and administrative procedures being used to plan, coordinate, and implement stream stabilization projects and programs in western Iowa.
Abstract:
Man's never-ending search for better materials and construction methods and for techniques of analysis and design has overcome most of the early difficulties of bridge building. Scour of the stream bed, however, has remained a major cause of bridge failures ever since man learned to place piers and abutments in the stream in order to cross wide rivers. Considering the overall complexity of field conditions, it is not surprising that no generally accepted principles (not even rules of thumb) for the prediction of scour around bridge piers and abutments have evolved from field experience alone. The flow of individual streams exhibits manifold variation, and great disparity exists among different rivers. The alignment, cross section, discharge, and slope of a stream must all be correlated with the scour phenomenon, and this in turn must be correlated with the characteristics of the bed material, ranging from clays and fine silts to gravels and boulders. Finally, the effect of the shape of the obstruction itself, the pier or abutment, must be assessed. Since several of these factors are likely to vary with time to some degree, and since the scour phenomenon itself is inherently unsteady, sorting out the influence of each of the various factors is virtually impossible from field evidence alone. The experimental approach was chosen as the investigative method for this study, but with due recognition of the importance of field measurements and with the realization that the results must be interpreted so as to be compatible with present-day theories of fluid mechanics and sediment transportation. This approach was chosen because, on the one hand, the factors affecting the scour phenomenon can be controlled in the laboratory to an extent that is not possible in the field, and, on the other hand, the model technique can be used to circumvent the present inadequate understanding of the phenomenon of the movement of sediment by flowing water. In order to obtain optimum results from the laboratory study, the program was arranged at the outset to include a related set of variables in each of the several phases into which the whole problem was divided: (1) geometry of piers and abutments, (2) hydraulics of the stream, (3) characteristics of the sediment, and (4) geometry of channel shape and alignment.
Abstract:
The operating environment of companies has undergone major changes in recent years, and change will continue at an ever faster pace. Companies must develop their performance in order to satisfy the needs of their stakeholders. Performance development is based on measurement results and the conclusions drawn from them, and measuring and analyzing waste is part of a company's performance. The metrics used to measure performance must be reliable, and it must be possible to exploit the measurement results effectively. In the confectionery industry, waste volumes have grown continuously, which prompted this study to seek the best way to measure waste. A second objective was to determine how the generation of waste could be reduced. Achieving these objectives required establishing how waste is measured today, where it arises, and how it is reacted to. The company involved in the study has a wide range of confectionery products, both sugar confectionery and chocolate. Production lines of the same type were selected from each factory, and two different products from these lines were chosen for examination. The study found that waste is handled and measured very differently at each factory. Waste is measured exactly as described in the literature, relative to sales or to kilograms produced, so there was no need to change the measurement method, which had been the study's main objective. The most important task was therefore to locate the points where waste arises. Owing to lack of time, it was not possible to identify all the causes of waste or all the places where it arises. The biggest cause of waste is poor quality, which stems from a lack of competence and training; causes may also lie in the raw materials or recipes. The study drew on Philip Crosby's book 'Quality is Free' ('Laatu on ilmaista'), whose 14-point program could be a good way to raise product quality in the company studied as well. It is important to specify the rules for recording waste, to check the recipes, and to raise the level of competence through training. Such a project needs the support of the whole organization and of top management. All the millions of euros that are thrown away could instead show up in the bottom line; that will not happen overnight but requires continuous work toward the goal.
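The two ways of measuring waste mentioned above, relative to kilograms produced and relative to sales, amount to simple ratios; a minimal sketch with invented figures:

```python
# Waste measured two ways, as found in the study and the literature:
# as a share of kilograms produced and as a share of sales value.
# All numbers below are invented for illustration.
def waste_ratios(waste_kg: float, produced_kg: float,
                 waste_value_eur: float, sales_eur: float) -> dict:
    return {
        "waste_per_produced_pct": 100.0 * waste_kg / produced_kg,
        "waste_per_sales_pct": 100.0 * waste_value_eur / sales_eur,
    }

print(waste_ratios(waste_kg=12_000, produced_kg=480_000,
                   waste_value_eur=30_000, sales_eur=2_400_000))
```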
Abstract:
Revised environmental legislation demands ever more systematic management of environmental information from energy production plants. The obligations of the LCP and waste incineration decrees have set new requirements for emission monitoring, for the quality assurance of the measurement systems used for it, and for the reporting of emission data. These reforms have considerably increased the time plants spend processing environmental information. A plant's operating conditions are defined in the environmental permit granted by the environmental authority, which is the single most important factor steering plant operation. In addition, many operators want to improve their environmental management through voluntary environmental management systems. This master's thesis describes DNAecoDiary, a browser-based Metso Automation application developed for recording and managing the environmental information of energy production plants. The work is limited to plants operating in Finland that are subject to the LCP and/or waste incineration decree. The application ensures efficient management of deviations, fault notices, events related to emission-measurement devices, and other information related to environmental monitoring. Basic data on environmental events are stored in the application, along with, in particular, the users' experience-based knowledge of those events; files and images can also be attached to a monitoring entry. The application and the information collected with it can be used to solve problems at hand at the plant, to verify environmental events, and to prepare environmental reports. A customer needs survey was conducted to support the development work, and application features were devised on its basis. This thesis presents the fundamentals of environmental information management, explains the operating principles of DNAecoDiary, and gives examples of its use. The final content of the application is defined according to each customer's environmental permit and self-monitoring needs. The application runs independently or as part of Metso Automation's wider emission-management and reporting application suite.
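As a hedged illustration of the kind of monitoring entry described above (basic event data, users' experience notes, and attached files); the field names are assumptions for illustration, not Metso Automation's actual DNAecoDiary schema:

```python
# Illustrative record for an environmental monitoring diary entry:
# basic event data, operator experience notes, and file attachments.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class MonitoringEntry:
    timestamp: datetime
    plant: str
    category: str              # e.g. deviation, fault notice, emission-analyzer event
    description: str           # basic data about the environmental event
    operator_notes: str = ""   # experience-based knowledge captured from users
    attachments: list[str] = field(default_factory=list)  # file/image paths

entry = MonitoringEntry(
    timestamp=datetime(2011, 3, 7, 14, 30),
    plant="Unit 2 boiler",
    category="emission-analyzer event",
    description="SO2 analyzer drift detected during a weekly quality-assurance check.",
    operator_notes="Recalibrated against reference gas; drift within tolerance.",
    attachments=["qa_check_2011-03-07.pdf"],
)
print(entry.category, "-", entry.description)
```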
Abstract:
Personnel are one of a company's most important success factors, so it is essential that employees are committed, motivated, and work effectively. The aim of this work is to develop a tool for the successful management of performance that organizations can use to measure the factors underlying work motivation and operational effectiveness. The literature, earlier studies, and interviews revealed five areas that positively affect personnel effectiveness, work motivation, and commitment: goal setting, communication, employees' opportunities to influence, rewarding and motivation, and factors related to training and the work environment. Questions have been collected under each area and divided into two groups: key questions and optional questions. The key questions are recommended for inclusion in every measurement, while the choice of optional questions is left to the organization itself; this gives organizations the opportunity to tailor the tool to their needs and strategy. The empirical material for the study was collected through interviews. Of the participating organizations, two were from the public sector and six from the private sector; the private-sector companies comprised small, medium-sized, and large firms. The interviews gathered information for the implementation and content of the tool. An organization can exploit the measurement results obtained from the SUMO survey in many ways: for example, it can identify development needs in the measured areas and in operational effectiveness, and see how personnel performance can be improved. Using a performance measurement system also has positive effects on the measured areas themselves.