397 results for analysts
Abstract:
The assessment of Latin American long-term economic performance is in urgent need of mobilizing more data to match the pressing demands of growth analysts. We present a systematic comparison of capital goods imports for 20 Latin American countries in 1925. It relies on both the foreign trade data of the importing countries and those of the major exporting countries, the industrialized economies of the time. The quality of the foreign trade figures is tested; a homogeneous estimate of capital goods imports is derived; and its per capita ranking is discussed, shedding new light on Latin American development levels before import substitution.
Abstract:
Medicine counterfeiting is a serious worldwide issue, involving networks of manufacture and distribution that are an integral part of industrialized organized crime. Despite the potentially devastating health repercussions involved, legal sanctions are often inappropriate or simply not applied. The difficulty of agreeing on a definition of counterfeiting, the huge profits made by the counterfeiters, and the complexity of the market are the other main reasons for the extent of the phenomenon. Above all, international cooperation is needed to thwart the spread of counterfeiting. Moreover, effort is urgently required on the legal, enforcement, and scientific levels. Pharmaceutical companies and agencies have developed measures to protect medicines and to allow fast and reliable analysis of suspect products. Several means, essentially based on chromatography and spectroscopy, are now at the disposal of analysts to distinguish genuine from counterfeit products. However, the determination of the components and the use of analytical data for forensic purposes still constitute a challenge. The aim of this review article is therefore to point out the intricacy of medicine counterfeiting so that a better understanding can provide solutions for fighting it more efficiently.
Abstract:
The complexity of the current business world is making corporate disclosure more and more important for information users. These users, including investors, financial analysts, and government authorities, rely on the disclosed information to make their investment decisions, analyze and recommend shares, and draft regulation policies. Moreover, the globalization of capital markets has made it harder for information users to understand the differences in corporate disclosure across countries and across firms. Using a sample of 797 firms from 34 countries, this thesis advances the literature on disclosure by comprehensively illustrating the disclosure determinants originating in firm systems and national systems, based on the multilevel latent variable approach. Under this approach, the overall variation associated with the firm-specific variables is decomposed into two parts, a within-country and a between-country part. Accordingly, the model estimates the latent association between corporate disclosure and information demand at two levels, the within-country and the between-country level. The results indicate that the variables originating from corporate systems are hierarchically correlated with those from the country environment. The information demand factor, indicated by the number of exchange listings and the number of analyst recommendations, can significantly explain the variation of corporate disclosure both within and between countries. The exogenous influences of firm fundamentals (firm size and performance) are exerted indirectly through the information demand factor. Specifically, once the between-country variation in firm variables is taken into account, only the variables of legal systems and economic growth remain significant in explaining the disclosure differences across countries. These findings strongly support the hypothesis that disclosure is a response to both corporate systems and national systems, but that the influence of the latter on disclosure is reflected significantly through that of the former. In addition, the results based on ADR (American Depositary Receipt) firms suggest that the globalization of capital markets is harmonizing the disclosure behavior of cross-boundary listed firms, but it cannot entirely eliminate the national features in disclosure and other firm-specific characteristics.
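A minimal sketch of the within-/between-country variance decomposition this abstract describes, using a random-intercept mixed model in Python's statsmodels (the thesis itself uses a multilevel latent variable approach; the variable names and the synthetic data here are hypothetical):

```python
# Sketch: decomposing disclosure variation into within- and between-country
# parts with a random-intercept model. All variable names are illustrative.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_countries, firms_per_country = 34, 24
country = np.repeat(np.arange(n_countries), firms_per_country)
country_effect = rng.normal(0, 0.8, n_countries)[country]  # between-country part
size = rng.normal(0, 1, country.size)                      # a firm fundamental
disclosure = 0.5 * size + country_effect + rng.normal(0, 1, country.size)

df = pd.DataFrame({"disclosure": disclosure, "size": size, "country": country})
model = smf.mixedlm("disclosure ~ size", df, groups=df["country"]).fit()

between_var = float(model.cov_re.iloc[0, 0])  # country-level (between) variance
within_var = model.scale                      # firm-level (within) variance
icc = between_var / (between_var + within_var)
print(model.summary())
print(f"share of variance between countries (ICC): {icc:.2f}")
```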
Abstract:
Among the types of remote sensing acquisitions, optical images are certainly one of the most widely relied upon data sources for Earth observation. They provide detailed measurements of the electromagnetic radiation reflected or emitted by each pixel in the scene. Through a process termed supervised land-cover classification, this makes it possible to distinguish objects at the surface of our planet automatically yet accurately. In this respect, when producing a land-cover map of the surveyed area, the availability of training examples representative of each thematic class is crucial for the success of the classification procedure. However, in real applications, due to several constraints on the sample collection process, labeled pixels are usually scarce. When analyzing an image for which those key samples are unavailable, a viable solution consists in resorting to the ground truth data of other, previously acquired images. This option is attractive, but several factors such as atmospheric, ground, and acquisition conditions can cause radiometric differences between the images, thereby hindering the transfer of knowledge from one image to another. The goal of this thesis is to supply remote sensing image analysts with suitable processing techniques to ensure a robust portability of classification models across different images. The ultimate purpose is to map the land-cover classes over large spatial and temporal extents with minimal ground information. To overcome, or simply quantify, the observed shifts in the statistical distribution of the spectra of the materials, we study four approaches drawn from the field of machine learning. First, we propose a strategy to intelligently sample the image of interest so as to collect labels only at the most useful pixels. This iterative routine is based on a constant evaluation of how pertinent the initial training data, which actually belong to a different image, are to the new image. Second, an approach to reduce the radiometric differences among the images by projecting the respective pixels into a common new data space is presented. We analyze a kernel-based feature extraction framework suited to such problems, showing that, after this relative normalization, the cross-image generalization abilities of a classifier are greatly increased. Third, we test a new data-driven measure of distance between probability distributions to assess the distortions caused by differences in acquisition geometry affecting series of multi-angle images. We also gauge the portability of classification models through the sequences. In both exercises, the efficacy of classic physically and statistically based normalization methods is discussed. Finally, we explore a new family of approaches based on sparse representations of the samples to reciprocally convert the data space of two images. The projection function bridging the images allows a synthesis of new pixels with more similar characteristics, ultimately facilitating land-cover mapping across images.
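A sketch of the first strategy described above, under the common uncertainty-sampling reading of "collecting labels only at the most useful pixels": start from training data belonging to another image and iteratively query labels where the current classifier is least confident. The data, oracle, and parameters below are synthetic stand-ins, not the thesis's actual routine:

```python
# Sketch: active sampling of a new image starting from another image's labels.
# Pixels may be re-queried across rounds; a pool mask would avoid duplicates.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def active_transfer(X_src, y_src, X_new, oracle, n_rounds=10, batch=20):
    """Grow a training set on the new image by uncertainty sampling."""
    X_train, y_train = X_src.copy(), y_src.copy()
    for _ in range(n_rounds):
        clf = RandomForestClassifier(n_estimators=200, random_state=0)
        clf.fit(X_train, y_train)
        proba = clf.predict_proba(X_new)
        uncertain = np.argsort(proba.max(axis=1))[:batch]  # least confident
        X_train = np.vstack([X_train, X_new[uncertain]])
        y_train = np.concatenate([y_train, oracle(uncertain)])
    # Refit once more so the final batch of labels is used.
    return RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Synthetic stand-in for two radiometrically shifted images:
rng = np.random.default_rng(1)
X_src = rng.normal(0.0, 1.0, (300, 4)); y_src = (X_src[:, 0] > 0).astype(int)
X_new = rng.normal(0.5, 1.2, (5000, 4))                 # shifted spectra
oracle = lambda idx: (X_new[idx, 0] > 0.5).astype(int)  # simulated labeling
clf = active_transfer(X_src, y_src, X_new, oracle)
```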
Abstract:
Whether for investigative or intelligence aims, crime analysts often face the necessity of analysing the spatiotemporal distribution of crimes or of traces left by suspects. This article presents a visualisation methodology supporting recurrent practical analytical tasks such as the detection of crime series or the analysis of traces left by digital devices like mobile phones or GPS devices. The proposed approach has led to the development of a dedicated tool that has proven its effectiveness in real inquiries and intelligence practices. It supports a more fluent visual analysis of the collected data and may provide critical clues to support police operations, as exemplified by the presented case studies.
Abstract:
Visual inspection remains the most frequently applied method for detecting treatment effects in single-case designs. The advantages and limitations of visual inference are here discussed in relation to other procedures for assessing intervention effectiveness. The first part of the paper reviews previous research on visual analysis, paying special attention to the validation of visual analysts' decisions, inter-judge agreement, and false alarm and omission rates. The most relevant factors affecting visual inspection (i.e., effect size, autocorrelation, data variability, and analysts' expertise) are highlighted and incorporated into an empirical simulation study with the aim of providing further evidence about the reliability of visual analysis. Our results concur with previous studies that have reported a relationship between serial dependence and increased Type I error rates. Participants with greater experience appeared to be more conservative and used more consistent criteria when assessing graphed data. Nonetheless, the decisions made by both professionals and students did not sufficiently match the simulated data features, and we also found low intra-judge agreement, suggesting that visual inspection should be complemented by other methods when assessing treatment effectiveness.
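A small simulation in the spirit of the reported link between serial dependence and Type I error rates: generate AB-design data with no true effect under increasing AR(1) autocorrelation and count how often a naive mean-shift criterion flags an "effect". The decision rule and all parameters are illustrative assumptions, not those of the study:

```python
# Sketch: positive autocorrelation inflates false alarms when judging an AB
# single-case graph. The detection criterion below is a hypothetical proxy
# for a visual analyst's judgment.
import numpy as np

def ar1_series(n, phi, rng):
    """AR(1) series with no treatment effect."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal()
    return x

def false_alarm_rate(phi, n_a=10, n_b=10, reps=5000, k=1.0, seed=0):
    rng = np.random.default_rng(seed)
    alarms = 0
    for _ in range(reps):
        x = ar1_series(n_a + n_b, phi, rng)
        a, b = x[:n_a], x[n_a:]
        # "Effect detected" if phase means differ by more than k baseline SDs.
        if abs(b.mean() - a.mean()) > k * a.std(ddof=1):
            alarms += 1
    return alarms / reps

for phi in (0.0, 0.3, 0.6):
    print(f"phi={phi}: false-alarm rate = {false_alarm_rate(phi):.2f}")
```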
Abstract:
Although many larger Iowa cities have staff traffic engineers who have a dedicated interest in safety, smaller jurisdictions do not. Rural agencies and small communities must rely on consultants, if available, or on local staff to identify locations with a high number of crashes and to devise mitigating measures. However, smaller agencies in Iowa have other options for receiving assistance in obtaining and interpreting crash data. These options are addressed in this manual. Many proposed road improvements or alternatives can be evaluated using methods that do not require in-depth engineering analysis. The Iowa Department of Transportation (DOT) supported the development of this manual to provide a tool that assists communities and rural agencies in identifying and analyzing local roadway-related traffic safety concerns. In the past, only a limited number of traffic safety professionals had access to adequate tools and training to evaluate potential safety problems quickly and efficiently and to select possible solutions. Present-day programs and information are much more conducive to the widespread dissemination of crash data, mapping, data comparison, and the selection and comparison of alternatives. Information is available in formats that do not require specialized training to understand and use. This manual describes several methods for reviewing crash data at a given location, identifying possible contributing causes, selecting countermeasures, and conducting economic analyses for the proposed mitigation. The Federal Highway Administration (FHWA) has also developed other analysis tools, which are described in the manual. This manual can also serve as a reference for traffic engineers and other analysts.
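As a hedged illustration of the kind of economic analysis such a manual covers, the sketch below computes a benefit-cost ratio for a proposed countermeasure; the crash costs, reduction factor, and discount rate are hypothetical inputs, not values from the manual:

```python
# Sketch: present-value benefit-cost ratio for a safety countermeasure.
def benefit_cost_ratio(annual_crash_cost, crash_reduction_factor,
                       install_cost, annual_maintenance,
                       service_life_years, discount_rate):
    """PV of crash-cost savings divided by PV of installation and upkeep."""
    pv_benefits = pv_costs = 0.0
    for year in range(1, service_life_years + 1):
        discount = (1 + discount_rate) ** -year
        pv_benefits += annual_crash_cost * crash_reduction_factor * discount
        pv_costs += annual_maintenance * discount
    pv_costs += install_cost  # incurred up front
    return pv_benefits / pv_costs

# Example: a $40,000 treatment expected to cut $60,000/yr crash costs by 25%.
bcr = benefit_cost_ratio(60_000, 0.25, 40_000, 1_000, 10, 0.04)
print(f"B/C ratio: {bcr:.2f}")  # a ratio above 1.0 favors the project
```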
Abstract:
Vehicle-pedestrian crashes are a major concern for highway safety analysts. Research reported by Hunter in 1996 indicated that one-third of the 5,000 vehicle-pedestrian crashes investigated occurred at intersections, and 40 percent of those were at non-controlled intersections (Hunter et al. 1996). Numerous strategies have been implemented in an effort to reduce these accidents, including overhead signs, flashing warning beacons, wider and brighter markings on the street, and advanced crossing signs. More recently, pedestrian-activated, in-street flashing lights at the crosswalk and pedestrian crossing signs in the traffic lane have been investigated. Not all of these strategies are recognized as accepted practices and included in the Manual on Uniform Traffic Control Devices (MUTCD), but the Federal Highway Administration (FHWA) is supportive of experimental applications that may lead to effective technology that helps reduce crashes.
Abstract:
The final year project came to us as an opportunity to get involved in a topic that had seemed attractive throughout our economics degree: statistics and its application to the analysis of economic data, i.e. econometrics.

Moreover, the combination of econometrics and computer science is a very hot topic nowadays, given the Information Technologies boom of the last decades and the consequent exponential increase in the amount of data collected and stored day by day. Data analysts able to deal with Big Data and to extract useful results from it are in high demand these days and, as we understand it, the work they do, although sometimes controversial in terms of ethics, is a clear source of added value both for private corporations and for the public sector. For these reasons, the essence of this project is the study of a statistical instrument suited to the analysis of large datasets and directly related to computer science: Partial Correlation Networks.

The structure of the project has been determined by our objectives as the work developed. First, the characteristics of the studied instrument are explained, from the basic ideas up to the features of the model behind it, with the final goal of presenting the SPACE model as a tool for estimating interconnections between elements in large data sets. Afterwards, an illustrative simulation is performed in order to show the power and efficiency of the model presented. Finally, the model is put into practice by analyzing a relatively large set of real-world data, with the objective of assessing whether the proposed statistical instrument is valid and useful when applied to a real multivariate time series. In short, our main goals are to present the model and to evaluate whether Partial Correlation Network Analysis is an effective, useful instrument that allows finding valuable results in Big Data.

As a result, the findings throughout this project suggest that the Partial Correlation Estimation by Joint Sparse Regression Models approach presented by Peng et al. (2009) works well under the assumption of sparsity of the data. Moreover, partial correlation networks are shown to be a valid tool for representing cross-sectional interconnections between elements in large data sets.

The scope of this project is, however, limited, as there are some sections in which deeper analysis would have been appropriate. Considering intertemporal connections between elements, the choice of the tuning parameter lambda, and a deeper analysis of the results in the real-data application are examples of aspects in which this project could be extended.

To sum up, the analyzed statistical tool has proved to be a very useful instrument for finding the relationships that connect the elements of a large data set. Ultimately, partial correlation networks allow the owner of such a data set to observe and analyze existing linkages that might otherwise have been overlooked.
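A minimal sketch of estimating a sparse partial correlation network in Python. The SPACE joint sparse regression of Peng et al. (2009) is not packaged in scikit-learn, so this uses the closely related graphical lasso as a stand-in sparse precision-matrix estimator, from which partial correlations are derived:

```python
# Sketch: sparse partial correlation network via the graphical lasso
# (a stand-in for SPACE; both exploit sparsity of the precision matrix).
import numpy as np
from sklearn.covariance import GraphicalLassoCV
from sklearn.datasets import make_sparse_spd_matrix

rng = np.random.default_rng(0)
p, n = 15, 400
precision = make_sparse_spd_matrix(p, alpha=0.9, random_state=0)
X = rng.multivariate_normal(np.zeros(p), np.linalg.inv(precision), size=n)

model = GraphicalLassoCV().fit(X)  # cross-validates the tuning parameter
prec = model.precision_

# Partial correlations from the estimated precision matrix:
# rho_ij = -prec_ij / sqrt(prec_ii * prec_jj)
d = np.sqrt(np.diag(prec))
partial_corr = -prec / np.outer(d, d)
np.fill_diagonal(partial_corr, 1.0)

# Network edges: variable pairs with non-negligible partial correlation.
edges = [(i, j, partial_corr[i, j]) for i in range(p) for j in range(i + 1, p)
         if abs(partial_corr[i, j]) > 1e-4]
print(f"{len(edges)} edges recovered among {p} variables")
```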
Abstract:
The aim of this thesis is to develop a valuation model based on the Microsoft Excel spreadsheet program. With the model, analysts doing equity research and investors can determine the fundamental value of a share. The model is developed especially as a tool for small investors. The second aim of the thesis is to apply the developed valuation model to the valuation of the case company, F-Secure, and to use the model to determine whether F-Secure's share is correctly priced on the stock exchange relative to its fundamentals. The theoretical part presents the uses and history of valuation, the stages of the valuation process (strategic analysis, financial statement analysis, forecasting, calculation of company value), the determination of the cost of capital, and the investor's different valuation methods, comprising the models used in discounted cash flow valuation as well as the multiples of relative valuation. The empirical part covers the development of the valuation model and a description of its structure, as well as the valuation process of F-Secure. Although F-Secure's future looks quite bright, the share is currently (23 February 2006) priced higher on the market than would be reasonable given these expectations. The different methods give the share values between EUR 2.25 and EUR 2.97. As the median of the different methods, the developed Excel model sets a target price of EUR 2.29 for the F-Secure share. As a result of the study, the F-Secure share can be considered overvalued, since its price on the stock exchange is EUR 3.05.
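A minimal sketch of the discounted cash flow calculation at the core of such an Excel valuation model. All inputs below are made up for illustration; they are not the actual F-Secure forecasts or parameters from the thesis:

```python
# Sketch: per-share DCF valuation (explicit forecast + Gordon terminal value).
def dcf_value_per_share(fcf_forecasts, terminal_growth, discount_rate,
                        net_debt, shares_outstanding):
    """Equity value per share from forecast free cash flows to the firm."""
    pv = sum(fcf / (1 + discount_rate) ** t
             for t, fcf in enumerate(fcf_forecasts, start=1))
    terminal = (fcf_forecasts[-1] * (1 + terminal_growth)
                / (discount_rate - terminal_growth))
    pv += terminal / (1 + discount_rate) ** len(fcf_forecasts)
    return (pv - net_debt) / shares_outstanding

# Example with hypothetical inputs (EUR millions; 5-year explicit forecast;
# negative net debt means a net cash position):
value = dcf_value_per_share([20, 22, 24, 26, 28],
                            terminal_growth=0.02, discount_rate=0.09,
                            net_debt=-50, shares_outstanding=160)
print(f"estimated fundamental value: EUR {value:.2f} per share")
```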
Abstract:
The research area is human resource reporting and the preparation of a human resource statement. The aim is to determine the financial benefit a company gains from human resource reporting. In the theoretical part I have used a descriptive research method, and in the empirical part a normative one. The empirical part is limited to UPM-Kymmene Oyj's Voikkaa paper mill. Human resource indicators are selected for the case organization, and a suitable model for monitoring them is developed. The indicators serve as a tool for analyzing the current structure, quality, and well-being of the personnel. Personnel competence, working capacity, and motivation play a major role in generating long-term results. The development of competence and well-being at work must be monitored through surveys. The indicators produced by human resource reporting can also be used to monitor brand promises, such as quality and competence development. Human resource reporting produces information for various stakeholders: owners, investors, personnel, customers, banks, analysts, and suppliers.
Abstract:
The study consists of four articles dealing with the innovativeness of Finnish small and medium-sized industrial enterprises (industrial SMEs), its attributes (characteristics), and its indicators. The study examines definitions of innovativeness presented in the literature as well as in interviews with SME managers and with corporate analysts involved in financing decisions on SMEs' development projects. As innovativeness indicators, the financing and evaluation criteria applied to SMEs' development projects are examined from the perspective of both external financiers and SME managers. Particular attention is paid to the qualitative and non-numerical criteria applied in evaluating innovativeness. Both in the literature and in the interviews with ten corporate analysts and the managers of six case companies, the novelty of an innovation is associated with innovativeness. Other important attributes linked to innovativeness were markets, differentiation from other companies, and the creativity of individuals. People-oriented and individual-related perspectives are emphasized in the definitions of innovativeness given by corporate analysts and SME managers, whereas the literature gives more weight to the environment, products, and markets. The corporate analysts considered factors related to the company and its manager important input criteria when evaluating development projects to be financed. The commercial success of products was the most important output factor from the financier's point of view. In the case companies examined, on the other hand, decision-making on development projects and their evaluation is intuitive and may even be unconscious, because the companies' development activity is limited. Small-business managers emphasize financial measures in evaluation, although both numerical and qualitative criteria are applied. The most likely reason for this is the limited financial resources of small companies. Another possible reason for the emphasis on financial factors is that today's ideal manager is understood to be analytical and, among other things, to monitor cash flows. Nevertheless, innovative managers consider the creation of innovations one of the enjoyable sides of life. The innovative case companies are future- and growth-oriented at the strategic level. At the operational level they produce inventions and innovations, yet the companies examined hold few patents. Both the innovative and the less innovative case companies are strongly customer-oriented and specialized in certain products and customers. Customer needs are satisfied by developing products that correspond to them. Consequently, the bulk of the companies' development activity is directed at products or production.
Abstract:
Infrared spectroscopy (FTIR) is a technique of choice for analyzing spray paint specimens (i.e. traces) and reference samples (i.e. cans seized from suspects), thanks to its high discriminating power, its sensitivity, and its many sampling possibilities. The comparison of the spectra is currently carried out visually, but this procedure has limitations, such as the subjectivity of the decision, which depends on the experience and training of the expert. This means that small differences in the relative intensity of two peaks can be perceived differently by experts, even between analysts working in the same laboratory. When it comes to justifying these differences, some will explain them by the analytical technique used, while others will estimate that the observed differences are mostly due to a variability intrinsic to the paint sample and/or its history (for example homogeneity, spraying, or degradation). This work proposes to study statistically the different sources of variability observable in infrared spectra, to identify them, to understand them, and to try to minimize them. The second main goal is to propose a procedure for spectra comparison that is more transparent and yields reproducible answers independent of the expert consulted. The first part of the work focuses on the optimization of the infrared measurement and on the main analytical parameters. The conditions necessary to obtain reproducible spectra that minimize the variation within a sample (intra-variability) are presented. Following that, a procedure of spectral correction is proposed using pretreatments and variable selection methods, in order to minimize the remaining systematic and random errors and to maximize the relevant chemical information. The second part presents a market study of 74 cans of spray paint representative of the Swiss market. The discrimination capabilities of FTIR at the brand and model level are evaluated by means of a visual procedure and compared with various statistical procedures. The lower limits of discrimination are tested on paints of identical brand and model but coming from different production batches. The results showed that the pigment composition was particularly discriminating, because of the corrections and color adjustments made during the manufacturing process. The features associated with spray paint present as traces (graffiti, droplets) were also tested. Three elements were identified and their influence on the resulting infrared spectrum tested: 1) the minimum shaking time necessary to obtain sufficient homogeneity of the paint and, consequently, of the painted surface; 2) the degradation initiated by ultraviolet radiation outdoors; and 3) the contamination from the support when the paint is recovered.
Finally, a population study was performed on 35 graffiti from the Lausanne region, and the results were compared to the market study of spray cans. The last part of this work concentrated on the decision step in the pairwise comparison of spectra. First, current practice among laboratories was surveyed by means of a questionnaire; then a statistical method of comparison was proposed to improve the objectivity and transparency of the decision process. A comparison method based on the correlation between spectra is proposed and then combined with a Bayesian evaluation of the evidence at both the source level and the activity level. Finally, practical examples are presented and the methodology is discussed in order to define the precise role of the expert and of statistics in the overall procedure for paint analysis.
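A minimal sketch of the correlation-based comparison and its Bayesian evaluation described above. The spectra are synthetic, and the normal score distributions stand in for within- and between-source distributions that would, in practice, be estimated from data such as the market study:

```python
# Sketch: correlation score between two pretreated spectra, converted into a
# likelihood ratio. Score-distribution parameters below are hypothetical.
import numpy as np
from scipy.stats import pearsonr, norm

def snv(spectrum):
    """Standard normal variate pretreatment (centre and scale each spectrum)."""
    return (spectrum - spectrum.mean()) / spectrum.std(ddof=1)

def likelihood_ratio(score, within=(0.995, 0.003), between=(0.90, 0.05)):
    """LR = p(score | same source) / p(score | different source)."""
    return norm.pdf(score, *within) / norm.pdf(score, *between)

# Two simulated spectra from the "same" paint (trace vs. reference can):
rng = np.random.default_rng(0)
wavenumbers = np.linspace(600, 4000, 1700)
reference = np.exp(-((wavenumbers - 1730) / 40) ** 2)  # synthetic C=O band
trace = reference * 1.1 + 0.02 + rng.normal(0, 0.005, wavenumbers.size)

score, _ = pearsonr(snv(reference), snv(trace))
print(f"correlation score: {score:.4f}, LR: {likelihood_ratio(score):.1f}")
```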
Abstract:
The objective of this work has been the analysis of the two new accounting documents, the Statement of Changes in Equity (ECPN) and the Cash Flow Statement (EFE), introduced with the 2007 accounting reform [Royal Decree 1514/2007 of 16 November and Royal Decree 1515/2007 for small and medium-sized enterprises]. This analysis has been carried out in order to examine how the information available for making better decisions has improved and increased, both for companies and for external analysts. The work consists of two parts. The first is a theoretical analysis of these two documents. The second is a practical analysis of the companies in the Basic Materials, Industry and Construction sector of the Madrid Stock Exchange: for these companies, a conventional analysis was performed, together with an analysis of the corresponding ECPN and EFE of each company, with the aim of verifying the improvement in the information.
Abstract:
The main objective of the thesis was to examine how analysts and company management assess the value of intangible capital within company value, and whether their assessments differ. The thesis is divided into two parts. In the theoretical part, a conceptual framework was constructed through a literature review in order to make the domain of intangible capital understandable. The empirical part was carried out as thematic interviews. Based on the empirical study, it was found that the measurement and valuation of intangible capital are still at an early stage in business life. Analysts had a limited amount of information at their disposal, so valuing intangible capital separately from the company as a whole was not meaningful. Analysts followed the development of customer capital and formed a subjective view of human capital and infrastructure capital, which together supported their view of the company as a whole. Of its intangible capital, the company measured and reported mainly factors of customer capital. From the company's perspective, the problem with a comprehensive set of intangible capital indicators was found to be that the general requirements set for such indicators are not met. The assessments of analysts and company management concerning the value of intangible capital did not differ significantly. The thesis concludes that, as the measurement and external reporting of intangible capital become more common, knowledge both inside and outside the company will grow, and the valuation of intangible capital will become possible.