742 results for measuring capabilities
Abstract:
This master's thesis addresses the measurement of paper surface roughness, one of the central problems in the study of paper materials. The measurement methods used in the paper industry have many drawbacks, such as inaccuracy and unsuitability for measuring smooth papers, as well as strict laboratory-condition requirements and slowness. The thesis investigates optical-scattering-based methods for determining surface roughness. Machine vision and image-processing techniques were studied on rough paper surfaces. The algorithms used in the study were implemented in Matlab®. The results obtained demonstrate the feasibility of measuring surface roughness by imaging. Of the imaging methods, the one based on fractal dimension agreed best with the traditional method.
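The fractal-dimension approach mentioned in this abstract can be illustrated with a box-counting estimate. The thesis's algorithms were written in Matlab®; the sketch below is a hypothetical pure-Python analogue (not the thesis's code) that estimates the box-counting dimension of a set of pixel coordinates, such as the occupied pixels of a thresholded surface image:

```python
import math

def box_count(points, size):
    """Count the boxes of a given side length occupied by (x, y) points."""
    return len({(x // size, y // size) for x, y in points})

def fractal_dimension(points, sizes=(1, 2, 4, 8, 16)):
    """Estimate the box-counting dimension as the least-squares slope of
    log(count) against log(1/size)."""
    xs = [math.log(1.0 / s) for s in sizes]
    ys = [math.log(box_count(points, s)) for s in sizes]
    n = len(sizes)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
           sum((x - mx) ** 2 for x in xs)

# Sanity check: a filled 64x64 square has dimension 2.
square = [(x, y) for x in range(64) for y in range(64)]
print(round(fractal_dimension(square), 2))  # → 2.0
```

For a real roughness measurement one would apply this to the binarized intensity field of the imaged surface; the box sizes and the fitting range would have to be chosen to match the image resolution.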
Abstract:
Introduction Occupational therapists could play an important role in facilitating driving cessation for ageing drivers. This, however, requires an easy-to-learn, standardised on-road evaluation method. This study therefore investigates whether the use of P-drive could be reliably taught to occupational therapists in a short half-day training session. Method Using the English 26-item version of P-drive, two occupational therapists evaluated the driving ability of 24 home-dwelling drivers aged 70 years or over on a standardised on-road route. Experienced driving instructors' subjective on-road evaluations were then compared with P-drive scores. Results Following a short half-day training session, P-drive showed almost perfect between-rater reliability (ICC(2,1)=0.950, 95% CI 0.889 to 0.978). Reliability was stable across sessions, including the training phase, even though the occupational therapists seemed to become slightly less severe in their ratings with experience. P-drive's score was related to the driving instructors' subjective evaluations of driving skill in a non-linear manner (R²=0.445, p=0.021). Conclusion P-drive is a reliable instrument that can easily be taught to occupational therapists and implemented as a way of standardising the on-road driving test.
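The reliability figure reported above is ICC(2,1): the two-way random-effects, absolute-agreement, single-rater intraclass correlation. As a minimal sketch (not the study's actual analysis code), it can be computed from the two-way ANOVA mean squares:

```python
def icc2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    `ratings` is a list of rows (subjects), each a list of k rater scores."""
    n, k = len(ratings), len(ratings[0])
    grand = sum(sum(r) for r in ratings) / (n * k)
    row_means = [sum(r) / k for r in ratings]
    col_means = [sum(r[j] for r in ratings) / n for j in range(k)]
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)   # subjects
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)   # raters
    ss_total = sum((x - grand) ** 2 for r in ratings for x in r)
    ss_err = ss_total - ss_rows - ss_cols
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Perfectly agreeing raters give ICC(2,1) = 1.0.
print(icc2_1([[1, 1], [2, 2], [3, 3]]))  # → 1.0
```

A constant offset between raters (e.g. one rater systematically stricter) lowers ICC(2,1), because the absolute-agreement form penalises rater bias, which is why it suits an instrument like P-drive whose raw scores must be comparable across raters.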
Abstract:
In the present study, we examined seawater biofiltration in terms of adenosine triphosphate (ATP) and turbidity. A pilot biofilter continuously fed with fresh seawater reduced both turbidity and biological activity measured by ATP. Experiments operated with an empty bed contact time (EBCT) of between 2 and 14 min resulted in cellular ATP removals of 32% to 60% and turbidity removals of 38% to 75%. Analysis of the water from backwashing the biofilter revealed that the first half of the biofilter concentrated around 80% of the active biomass and of the colloidal material that produces turbidity. Reducing the EBCT moved the biological activity from the first part of the biofilter towards the end. Balances of cellular ATP and turbidity between consecutive backwashings indicated that the biological activity generated within the biofilter represented more than 90% of the detached cellular ATP; the trapped ATP, in contrast, was less than 10% of the overall cellular ATP detached during backwashing. Furthermore, the biological activity generated in the biofilter seemed to depend more on the elapsed time than on the volume filtered. In contrast, the turbidity trapped in the biofilter was proportional to the volume filtered, although a slightly higher amount of turbidity was found in the backwashing water, probably due to attrition of the bed medium. Finally, no correlation was found between turbidity and ATP, indicating that the two parameters capture different material. This suggests that turbidity should not be used as a proxy for cell concentration.
Abstract:
The aim of this thesis was to examine the development and current state of the theory of dynamic capabilities. The work also considers possibilities for combining real options reasoning with dynamic capabilities theory. The thesis was carried out as a theoretical literature review. According to dynamic capabilities theory, in a changing business environment firms' competitive advantage rests on the ability to build, integrate and reconfigure resources and capabilities. Firms must be able to sense, absorb and transform knowledge in order to recognise new opportunities and respond to them. The thesis brings out new connections between dynamic capabilities theory and firm behaviour. Real options reasoning helps to identify factors that shape the determination of a firm's boundaries. Suggestions for further research on dynamic capabilities theory are made.
Abstract:
WAP will play an important role in the future as a suitable data-transfer protocol is sought for new mobile services. Although WAP in some sense failed in its first coming, its popularity will certainly grow. WAP's weak popularity was due not so much to the protocol's data-transfer properties as to the immaturity of WAP services. Future services, however, will be more advanced, and WAP's popularity will grow with them. The most recent WAP-based service to be introduced is MMS. As new WAP-based services become common, this also places new requirements on the WAP gateway. The thesis examines different possibilities for measuring the quality of service of WAP services from a mobile terminal. A measurement component for mobile WAP services is also implemented as part of a larger software system. The aim is a measurement component that emulates a real end user as closely as possible.
Abstract:
Social reciprocity may explain certain emerging psychological processes, which are likely to be founded on dyadic relations. Although some indices and statistics have been proposed to measure and make statistical decisions regarding social reciprocity in groups, these were generally developed to identify association patterns rather than to quantify the discrepancies between what each individual addresses to his/her partners and what is received from them in return. Additionally, social researchers are not only interested in measuring groups at the global level, since dyadic and individual measurements are also necessary for a proper description of social interactions. This study is concerned with a new statistic for measuring social reciprocity at the global level and with decomposing it in order to identify those dyads and individuals which account for a significant part of asymmetry in social interactions. In addition to a set of indices some exact analytical results are derived and a way of making statistical decisions is proposed.
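The abstract does not give the statistic's formula. As a loose illustration of the idea, decomposing a global asymmetry measure into dyadic contributions, one can relate the skew-symmetric part of a sociomatrix to the dyad totals; the names and the normalisation below are assumptions for the sketch, not the paper's definitions:

```python
def reciprocity_asymmetry(X):
    """Global asymmetry statistic for a sociomatrix X (X[i][j] = acts i -> j),
    decomposed into per-dyad contributions. Returns (phi, dyads), where phi is
    the share of squared dyad totals attributable to directional imbalance:
    0 for perfectly reciprocal groups, 1 for completely one-directional ones."""
    n = len(X)
    dyads = {}
    num = den = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            d = (X[i][j] - X[j][i]) ** 2   # squared imbalance of the dyad
            s = (X[i][j] + X[j][i]) ** 2   # squared total exchange
            dyads[(i, j)] = d
            num += d
            den += s
    phi = num / den if den else 0.0
    return phi, dyads
```

The per-dyad terms in `dyads` show which pairs drive the global asymmetry; summing them over each individual's dyads gives an individual-level contribution, mirroring the three levels (global, dyadic, individual) the abstract describes.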
Abstract:
We analyze the behavior of complex information in the Fresnel domain, taking into account the limited capability to display complex values of liquid crystal devices when they are used as holographic displays. To do this analysis we study the reconstruction of Fresnel holograms at several distances using the different parts of the complex distribution. We also use the information adjusted with a method that combines two configurations of the devices in an adding architecture. The results of the error analysis show different behavior for the reconstructions when using the different methods. Simulated and experimental results are presented.
Abstract:
In this study, the theoretical part compares different Value at Risk models. Based on that comparison, one model was chosen for the empirical part, which examines whether the model measures market risk accurately. The purpose of this study was to test whether Volatility-weighted Historical Simulation is accurate in measuring market risk and what improvements it brings to market risk measurement compared with traditional Historical Simulation. The volatility-weighted method of Hull and White (1998) was chosen in order to improve the traditional method's capability to measure market risk. We found that results based on Historical Simulation depend on the chosen time period, the confidence level and how the samples are weighted. The findings are that the chosen method cannot be said to be fully reliable in measuring market risk, because the backtesting results vary over the time period of this study.
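Hull and White's volatility weighting can be sketched as follows: each historical return is rescaled by the ratio of the current volatility estimate to the volatility prevailing when the return was observed, and VaR is then read off the empirical quantile of the adjusted losses. This is a minimal illustration assuming a RiskMetrics-style EWMA volatility estimate, not the study's actual implementation:

```python
import math

def ewma_vols(returns, lam=0.94):
    """EWMA volatility per observation, seeded with the first squared return.
    Returns (per-observation vols, current vol)."""
    var = returns[0] ** 2
    vols = []
    for r in returns:
        vols.append(math.sqrt(var))
        var = lam * var + (1 - lam) * r ** 2
    return vols, math.sqrt(var)

def hw_var(returns, alpha=0.99, lam=0.94):
    """Volatility-weighted Historical Simulation VaR (after Hull & White, 1998):
    rescale each past return by sigma_now / sigma_then, then take the
    empirical alpha-quantile of the adjusted losses."""
    vols, sigma_now = ewma_vols(returns, lam)
    adjusted = [r * sigma_now / v for r, v in zip(returns, vols) if v > 0]
    losses = sorted(-r for r in adjusted)
    # simple order-statistic convention for the empirical quantile
    idx = min(len(losses) - 1, int(alpha * len(losses)))
    return losses[idx]
```

Plain Historical Simulation is the special case where every scaling ratio is 1; the weighting lets the VaR react to current volatility instead of treating a calm and a turbulent past observation as equally representative, which is the improvement the study evaluates.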
Abstract:
Recent ALICE@LHC measurements have shown that the odd flow harmonics, in particular the directed flow v1, are weak and dominated by random fluctuations. In this work we propose a new method that makes the measurements more sensitive to flow patterns exhibiting global collective symmetries. We demonstrate how the longitudinal center of mass rapidity fluctuations can be identified, so that the collective flow analysis can be performed in the event-by-event center of mass frame. Such a method can be very effective in separating the flow patterns originating from random fluctuations from those originating from the global symmetry of the initial state.
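The event-by-event center of mass correction described above amounts to estimating the longitudinal rapidity shift of each event and re-expressing particle rapidities relative to it. A schematic sketch, using the energy-weighted mean rapidity as the y_cm estimator (an assumption for illustration, not the paper's exact prescription):

```python
def cm_rapidity(particles):
    """Event-wise longitudinal center of mass rapidity, estimated here as the
    energy-weighted mean rapidity of the particles [(energy, rapidity), ...]."""
    e_tot = sum(e for e, y in particles)
    return sum(e * y for e, y in particles) / e_tot

def shift_to_cm(particles):
    """Re-express rapidities in the event-by-event center of mass frame, so
    that flow harmonics are measured relative to y_cm rather than y = 0."""
    y_cm = cm_rapidity(particles)
    return [(e, y - y_cm) for e, y in particles]
```

After the shift, a rapidity-odd pattern such as directed flow is no longer smeared by event-to-event fluctuations of y_cm, which is what makes the global collective component separable from the random one.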
Abstract:
The functional method is a new test theory using a new scoring method that assumes complexity in the test structure and thus takes into account every correlation between factors and items. The main specificity of the functional method is to model test scores by multiple regression instead of estimating them with simplistic sums of points. To do so, the functional method requires the creation of a hyperspherical measurement space, in which item responses are expressed by their correlation with orthogonal factors. This method has three main qualities. First, measures are expressed in the absolute metric of correlations; therefore, items, scales and persons are expressed in the same measurement space using the same single metric. Second, the factors are systematically orthogonal and error-free, which is optimal for predicting other outcomes. Such predictions can be performed to estimate how one would answer other tests, or even to model one's response strategy if it were perfectly coherent. Third, the functional method provides measures of individuals' response validity (i.e., control indices). Here we propose a standard procedure for identifying whether test results are interpretable and for excluding, on the basis of the control indices, invalid results caused by various response biases.
Abstract:
Organizations acquire resources, skills and technologies in search of the mix of capabilities that will make them winners in a competitive market. These are all important factors for organizations operating in today's business environment. So far, there are no significant studies of organizational capabilities in the field of purchasing and supply management (PSM). The literature review shows that PSM capabilities need to be studied more comprehensively. This study attempts to fill this gap by providing a PSM capability matrix that identifies the key PSM capabilities from two angles: three primary PSM capabilities with nine sub-capabilities are distinguished and, moreover, individual and organizational PSM capabilities are identified and evaluated. The former refers to the PSM capability matrix of this study, which is based on the strategic and operative PSM capabilities that complement the economic ones, while the latter relates to the evaluation of the PSM capabilities, such as the buyer profiles of individual PSM capabilities and the PSM capability map of the organizational ones. This is a constructive case study. The aim is to define what purchasing and supply management capabilities are and how they can be evaluated. The study presents a PSM capability matrix for identifying and evaluating the capabilities, defining capability gaps by comparing the ideal level of PSM capabilities with the realized one. The research questions are investigated with two case organizations. This study argues that PSM capabilities can be classified into three primary categories with nine sub-categories, and that a PSM capability matrix with four evaluation categories can thus be formed. The buyer profiles are moreover identified to reveal the PSM capability gap. The resource-based view (RBV) and the dynamic capabilities view (DCV) are used to define the individual and organizational capabilities. The PSM literature is also used to define the capabilities.
The key findings of this study are (i) the PSM capability matrix for identifying the PSM capabilities, (ii) the evaluation of the capabilities to define PSM capability gaps, and (iii) the buyer profiles for identifying the individual PSM capabilities and defining the organizational ones. Dynamic capabilities are also related to the PSM capability gap: if a gap is identified, the organization can renew its PSM capabilities, thereby creating mutual learning and increasing its organizational capabilities. Only then is there potential for dynamic capabilities. On this basis, the purchasing strategy, purchasing policy and procedures should be identified and implemented dynamically.
Abstract:
In recent years there has been growing interest in composite indicators as an efficient tool of analysis and a method of prioritizing policies. This paper presents a composite index of intermediary determinants of child health using a multivariate statistical approach. The index shows how specific determinants of child health vary across Colombian departments (administrative subdivisions). We used data collected from the 2010 Colombian Demographic and Health Survey (DHS) for 32 departments and the capital city, Bogotá. Adapting the conceptual framework of the Commission on Social Determinants of Health (CSDH), five dimensions related to child health are represented in the index: material circumstances, behavioural factors, psychosocial factors, biological factors and the health system. To generate the variable weights, and taking into account the discrete nature of the data, principal component analysis (PCA) using polychoric correlations was employed in constructing the index. Five principal components were selected by this method, and the index was estimated as a weighted average of the retained components. A hierarchical cluster analysis was also carried out. The results show that the biggest differences in intermediary determinants of child health are associated with health care before and during delivery.
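The final aggregation step, a weighted average of the retained principal components, can be sketched as follows. The component scores and explained-variance weights are assumed inputs here; the polychoric PCA itself is not reproduced:

```python
def composite_index(scores, weights):
    """Composite index as the weighted average of retained principal-component
    scores. `scores` is one row per unit (e.g. department), one column per
    retained component; `weights` are typically proportional to each
    component's explained variance."""
    total = sum(weights)
    return [sum(w * s for w, s in zip(weights, row)) / total for row in scores]
```

In the paper's setting each row would be a department's scores on the five retained components, and the resulting scalar allows departments to be ranked and clustered on the intermediary determinants.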
Abstract:
The RFLP/PCR approach (restriction fragment length polymorphism/polymerase chain reaction) to genotypic mutation analysis described here measures mutations in restriction recognition sequences. Wild-type DNA is restricted before the resistant, mutated sequences are amplified by PCR and cloned. We tested the capacity of this experimental design to isolate a few copies of a mutated sequence of the human c-Ha-ras1 gene from a large excess of wild-type DNA. For this purpose we constructed a 272 bp fragment with 2 mutations in the PvuII recognition sequence 1727-1732 and studied the rescue by RFLP/PCR of a few copies of this 'PvuII mutant standard'. Following amplification with Taq polymerase and cloning into lambda gt10, plaques containing wild-type sequence, PvuII mutant standard or Taq polymerase-induced bp changes were quantitated by hybridization with specific oligonucleotide probes. Our results indicate that 10 PvuII mutant standard copies can be rescued from 10^8 to 10^9 wild-type sequences. Taq polymerase errors originating from unrestricted, residual wild-type DNA were sequence dependent and consisted mostly of transversions originating at G·C bp. In contrast to a doubly mutated 'standard', the capacity to rescue single bp mutations by RFLP/PCR is limited by Taq polymerase errors. We therefore assessed the capacity of our protocol to isolate a G to T transversion mutation at base pair 1698 of the MspI site 1695-1698 of the c-Ha-ras1 gene from excess wild-type ras1 DNA. We found that 100 copies of the mutated ras1 fragment could be readily rescued from 10^8 copies of wild-type DNA.