969 results for Measurement tool
Abstract:
This study concerns performance measurement and management in a collaborative network. Collaboration between companies has increased in recent years due to the turbulent operating environment. The literature shows that there is a need for more comprehensive research on performance measurement in networks and on the use of measurement information in their management. This study examines the development process and uses of a performance measurement system supporting performance management in a collaborative network. There are two main research questions: how to design a performance measurement system for a collaborative network, and how to manage performance in a collaborative network. The work can be characterised as a qualitative single case study. The empirical data was collected in a Finnish collaborative network, which consists of a leading company and a reseller network. The work is based on five research articles applying various research methods. The research questions are examined at the network level and at the level of a single network partner. The study contributes to the earlier literature by producing a new and deeper understanding of network-level performance measurement and management. A three-step process model is presented to support the performance measurement system design process. The process model has been tested in another collaborative network. The study also examines the factors affecting the process of designing the measurement system. The results show that a participatory development style, network culture, and outside facilitators have a positive effect on the design process. The study increases understanding of how to manage performance in a collaborative network and what kinds of uses of performance information can be identified in a collaborative network. The results show that the performance measurement system is an applicable tool for managing the performance of a network. The results reveal that trust and openness increased during the utilisation of the performance measurement system, and operations became more transparent. The study also presents a management model that evaluates the maturity of performance management in a collaborative network. The model is a practical tool that helps to analyse the current stage of performance management in a collaborative network and to develop it further.
Abstract:
Salivary cortisol is an index of plasma free cortisol and is obtained by a noninvasive procedure. We have been using salivary cortisol as a tool for physiological and diagnostic studies, among them the emergence of the circadian rhythm in preterm and term infants. The salivary cortisol circadian rhythm in term and premature infants was established between 8 and 12 postnatal weeks. In the preterm infants the emergence of the circadian rhythm paralleled the onset of the sleep rhythm. We also studied the use of salivary cortisol for screening for Cushing's syndrome (CS) in control and obese outpatients, based on the circadian rhythm and the overnight 1 mg dexamethasone (DEX) suppression test. Salivary cortisol was suppressed to less than 100 ng/dl after 1 mg DEX in control and obese patients. A single salivary cortisol measurement above the 90th percentile of the obese group's values, taken either at 23:00 h or after 1 mg DEX, had sensitivity and specificity of 93% and 93% (23:00 h) and 91% and 94% (after DEX), respectively. The sensitivity improved to 100% when we combined both parameters. We also studied 11 CS children and 21 age-matched children with primary obesity, for whom the sensitivity and specificity of salivary cortisol were 100% and 95% (23:00 h) and 100% and 95% (1 mg DEX), respectively. As in adults, sensitivity and specificity of 100% were obtained by combining the 23:00 h and 1 mg DEX measurements. The measurement of salivary cortisol is a useful tool for physiological studies and for the diagnosis of CS in children and adults on an outpatient basis.
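The gain in sensitivity from combining the 23:00 h and post-DEX criteria follows from calling a patient positive when either measurement exceeds its cut-off. A minimal sketch (Python) with hypothetical values and cut-offs, not the study's data:

```python
# Hypothetical illustration of combining two screening criteria (OR rule).
# All values and cut-offs are made up; they are not the study's data.

def sens_spec(flags, labels):
    """Sensitivity and specificity of boolean test flags against true labels."""
    tp = sum(f and l for f, l in zip(flags, labels))
    fn = sum((not f) and l for f, l in zip(flags, labels))
    tn = sum((not f) and (not l) for f, l in zip(flags, labels))
    fp = sum(f and (not l) for f, l in zip(flags, labels))
    return tp / (tp + fn), tn / (tn + fp)

# label True = Cushing's syndrome, False = primary obesity (hypothetical cohort)
labels       = [True, True, True, False, False, False, False]
cortisol_23h = [350, 280, 90, 60, 120, 80, 70]   # ng/dl at 23:00 h
cortisol_dex = [300, 60, 250, 40, 50, 90, 30]    # ng/dl after 1 mg DEX
cut_23h, cut_dex = 100, 70                       # hypothetical 90th-percentile cut-offs

pos_23h = [v > cut_23h for v in cortisol_23h]
pos_dex = [v > cut_dex for v in cortisol_dex]
pos_both = [a or b for a, b in zip(pos_23h, pos_dex)]  # positive if either criterion is met

for name, flags in [("23:00 h", pos_23h), ("post-DEX", pos_dex), ("combined", pos_both)]:
    se, sp = sens_spec(flags, labels)
    print(f"{name}: sensitivity {se:.2f}, specificity {sp:.2f}")
```

With made-up data like this, the combined rule can only raise sensitivity (at some cost in specificity), which is the direction of the effect reported in the abstract.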
Abstract:
Significant improvements have been noted in heart transplantation with the advent of cyclosporine. However, cyclosporine use is associated with significant side effects, such as chronic renal failure. We were interested in evaluating the incidence of long-term renal dysfunction in heart transplant recipients. Fifty-three heart transplant recipients were enrolled in the study. Forty-three patients completed the entire evaluation and follow-up. Glomerular function (serum creatinine, measured creatinine clearance, and calculated creatinine clearance) and tubular function (urinary retinol-binding protein, uRBP) were re-analyzed after 18 months. At enrollment, the prevalence of renal failure ranged from 37.7% to 54%, according to the criteria used to define it (serum creatinine ≥ 1.5 mg/dL and creatinine clearance < 60 mL/min). Mean serum creatinine was 1.61 ± 1.31 mg/dL (range 0.7 to 9.8 mg/dL), and calculated and measured creatinine clearances were 67.7 ± 25.9 and 61.18 ± 25.04 mL·min⁻¹·(1.73 m²)⁻¹, respectively. Sixteen of the 43 patients who completed the follow-up (37.2%) had tubular dysfunction detected by increased levels of uRBP (median 1.06, range 0.412-6.396 mg/dL). Eleven of the 16 patients (68.7%) with elevated uRBP had poorer renal function after 18 months of follow-up, compared with only eight of the 27 patients (29.6%) with normal uRBP (RR = 3.47, P = 0.0095). Interestingly, cyclosporine trough levels were not different between patients with or without tubular and glomerular dysfunction. Renal function impairment is common after heart transplantation. Tubular dysfunction, assessed by uRBP, correlates with a worsening of glomerular filtration and can be a useful tool for the early detection of renal dysfunction.
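The abstract does not state which equation was used for the calculated creatinine clearance; the Cockcroft-Gault formula is a common choice and is shown below purely as an assumed illustration:

```python
def cockcroft_gault(age_years, weight_kg, serum_creatinine_mg_dl, female):
    """Estimated creatinine clearance (mL/min) by the Cockcroft-Gault formula.

    Shown only as an illustration; the study does not specify its equation.
    """
    crcl = ((140 - age_years) * weight_kg) / (72 * serum_creatinine_mg_dl)
    return crcl * 0.85 if female else crcl

# Hypothetical patient: 55-year-old, 70 kg male with serum creatinine 1.6 mg/dL
print(f"{cockcroft_gault(55, 70, 1.6, female=False):.1f} mL/min")
```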
Abstract:
The goal of this research is to critically analyze current theories and methods of intangible asset evaluation and to develop and test a new methodology based on practical examples from the IT industry. With this goal in mind, the main research questions of this paper are: What are the advantages and disadvantages of current practices for measuring intellectual capital or valuing intangible assets? How can intellectual capital in IT be measured properly? The resulting method exhibits a new, unique approach to IC measurement and potentially an even larger field of application. Although in this particular research I focused my attention on IT (the software and Internet services cluster, to be exact), the logic behind the method is applicable within any industry, since the method is designed to be fully compliant with measurement theory and can therefore be scaled for any application. Building a new method is a difficult and iterative process: in its current iteration the method is a theoretical concept rather than a business tool; even so, the current concept fulfills its purpose as a benchmarking tool for measuring intellectual capital in the IT industry.
Abstract:
This thesis examines how content marketing is used in B2B customer acquisition and how a content marketing performance measurement system is built and utilized in this context. Literature related to performance measurement, branding and buyer behavior is examined in the theoretical part in order to identify the elements that influence content marketing performance measurement design and usage. A qualitative case study was chosen in order to gain a deep understanding of the phenomenon studied. The case company is a Finnish software vendor that operates in B2B markets and has practiced content marketing for approximately two years. In-depth interviews were conducted with three employees from the marketing department. According to the findings, the content marketing performance measurement system's infrastructure is based on the target market's decision-making processes, the company's own customer acquisition process, the marketing automation tool and analytics solutions. The main roles of the content marketing performance measurement system are measuring performance, strategy management, and learning and improvement. Content marketing objectives in the context of customer acquisition are enhancing brand awareness, influencing brand attitude and generating leads. Both non-financial and financial outcomes are assessed by single phase-specific metrics, phase-specific overall KPIs and ratings related to lead involvement.
Abstract:
The growing interest in the usage of dietary fiber in food has created the need to provide precise tools for describing its physical properties. This research examined two dietary fibers, from oats and beets, in variable particle sizes. The application of automated static image analysis for describing the hydration properties and particle size distribution of dietary fiber was analyzed. Conventional tests for water holding capacity (WHC) were conducted. The particles were measured at two points: dry and after water soaking. The highest water holding capacity (7.00 g water/g solid) was achieved by the smaller-sized oat fiber, whereas for beet fiber the water holding capacity was highest (4.20 g water/g solid) at the larger particle size. There was evidence of water absorption increasing with a decrease in particle size for the same fiber source. Very strong correlations were found between particle shape parameters, such as fiber length, straightness and width, and the conventionally measured hydration properties. The regression analysis provided the opportunity to assess whether the automated static image analysis method could be an efficient tool for describing the hydration properties of dietary fiber. The application of the method was validated using a mathematical model that was verified against conventional WHC measurement results.
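A minimal sketch (Python/NumPy, with made-up data) of the kind of multiple linear regression that can relate image-analysis shape parameters to conventionally measured WHC; the predictors and values are illustrative, not the study's model:

```python
import numpy as np

# Hypothetical data: one row per fibre sample, columns = mean particle length (mm),
# straightness (0-1) and width (mm) from automated static image analysis.
shape = np.array([
    [0.42, 0.81, 0.12],
    [0.35, 0.78, 0.10],
    [0.60, 0.70, 0.15],
    [0.28, 0.85, 0.09],
    [0.50, 0.74, 0.14],
])
whc = np.array([6.8, 7.0, 4.3, 7.2, 5.1])  # g water / g solid (made up)

# Ordinary least squares: WHC ~ b0 + b1*length + b2*straightness + b3*width
X = np.column_stack([np.ones(len(whc)), shape])
coef, *_ = np.linalg.lstsq(X, whc, rcond=None)
predicted = X @ coef

print("coefficients:", np.round(coef, 2))
print("predicted WHC:", np.round(predicted, 2))
```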
Abstract:
Main objective: It has not been demonstrated that interventions aimed at controlling or moderating the medication of patients with hypertension can improve their management of the disease. This systematic review evaluates programmes of controlled medication management for hypertension based on measurement of patients' treatment adherence (CMGM). Design: Systematic review. Data sources: MEDLINE, EMBASE, CENTRAL, abstracts from international hypertension conferences, and the bibliographies of relevant articles. Methods: Randomised controlled trials (RCTs) and observational studies (OSs) were assessed by two independent reviewers. Quality was assessed using the Cochrane risk-of-bias tool and rated on a four-level quality scale. A narrative synthesis of the data was performed because of the substantial heterogeneity across studies. Results: 13 studies (8 RCTs, 5 OSs) covering 2150 hypertensive patients were included. Among them, 5 CMGM studies using electronic devices as the sole intervention reported a decrease in blood pressure (BP), which could, however, be explained by measurement bias. Short-term improvement of BP under CMGM within complex interventions was reported in 4 studies of low or moderate quality. In 4 further studies of integrated care of higher quality, it was not possible to distinguish the impact of the CMGM component, which may have been confounded by the drug treatments. Overall, the studies also suggest that regular feedback to the treating physician may be an essential element of effective CMGM, and can easily be provided by a nurse or a pharmacist using appropriate communication tools. Conclusions: No convincing evidence of the effectiveness of CMGM as a health technology has been established, owing to the suboptimal designs of the identified studies and their unsatisfactory methodological quality. Future research should follow approved quality standards and current clinical guidelines for the treatment of hypertension, include specific groups of patients with treatment adherence problems, and consider the clinical and economic outcomes of the care organisation as well as patient-reported outcomes.
Abstract:
Objective To determine overall, test–retest and inter-rater reliability of posture indices among persons with idiopathic scoliosis. Design A reliability study using two raters and two test sessions. Setting Tertiary care paediatric centre. Participants Seventy participants aged between 10 and 20 years with different types of idiopathic scoliosis (Cobb angle 15 to 60°) were recruited from the scoliosis clinic. Main outcome measures Based on the XY co-ordinates of natural reference points (e.g. eyes) as well as markers placed on several anatomical landmarks, 32 angular and linear posture indices taken from digital photographs in the standing position were calculated from a specially developed software program. Generalisability theory served to estimate the reliability and standard error of measurement (SEM) for the overall, test–retest and inter-rater designs. Bland and Altman's method was also used to document agreement between sessions and raters. Results In the random design, dependability coefficients demonstrated a moderate level of reliability for six posture indices (ϕ = 0.51 to 0.72) and a good level of reliability for 26 posture indices out of 32 (ϕ ≥ 0.79). Error attributable to marker placement was negligible for most indices. Limits of agreement and SEM values were larger for shoulder protraction, trunk list, Q angle, cervical lordosis and scoliosis angles. The most reproducible indices were waist angles and knee valgus and varus. Conclusions Posture can be assessed in a global fashion from photographs in persons with idiopathic scoliosis. Despite the good reliability of marker placement, other studies are needed to minimise measurement errors in order to provide a suitable tool for monitoring change in posture over time.
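As an illustration of how one angular posture index can be computed from the XY co-ordinates of two markers (the index name, marker choice and sign convention below are assumptions, not the study's software):

```python
import math

def segment_angle_deg(p1, p2):
    """Angle (degrees) of the line from marker p1 to marker p2,
    measured relative to the horizontal axis of the photograph."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    return math.degrees(math.atan2(dy, dx))

# Hypothetical XY pixel co-ordinates of left and right acromion markers
left_acromion = (412.0, 530.0)
right_acromion = (618.0, 518.0)

shoulder_tilt = segment_angle_deg(left_acromion, right_acromion)
print(f"Shoulder tilt: {shoulder_tilt:.1f} degrees")  # 0 would mean level shoulders
```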
Abstract:
Ultrasonics is a good tool to investigate the elastic properties of crystals. It enables one to determine all the elastic constants, Poisson's ratios, volume compressibility and bulk modulus of crystals from velocity measurements. It also enables one to demonstrate the anisotropy of elastic properties by plotting sections of the surfaces of phase velocity, slowness, group velocity, Young's modulus and linear compressibility along the a-b, b-c and a-c planes. These plots also help one to understand more about phonon amplification and to interpret various phenomena associated with ultrasonic wave propagation, thermal conductivity, phonon transport, etc. The study of nonlinear optical crystals is very important from an application point of view. Hundreds of new NLO materials are synthesized to meet the requirements of various applications. Inorganic, organic and organometallic or semiorganic classes of compounds have been studied for several reasons. Semiorganic compounds have some advantages over their inorganic and organic counterparts with regard to their mechanical properties. High damage resistance, high melting point, good transparency and non-hygroscopicity are some of the basic requirements for a material to be suitable for device fabrication. New NLO materials are being synthesized, and investigation of the mechanical and elastic properties of these crystals is very important to test the suitability of these materials for technological applications.
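As an example of how a single stiffness constant follows from a velocity measurement, the longitudinal velocity along a principal axis gives the corresponding diagonal elastic constant via C = ρv². A minimal sketch with made-up numbers (not values from the thesis):

```python
def elastic_constant_gpa(density_kg_m3, velocity_m_s):
    """Elastic constant C = rho * v^2, returned in GPa.

    For a longitudinal wave along a principal axis this gives the
    corresponding diagonal stiffness constant (e.g. C11)."""
    return density_kg_m3 * velocity_m_s ** 2 / 1e9

# Hypothetical values for an NLO crystal: density 1830 kg/m^3,
# longitudinal velocity 4200 m/s along the a axis
print(f"C11 ~ {elastic_constant_gpa(1830.0, 4200.0):.1f} GPa")
```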
Abstract:
The photoacoustic technique under heat transmission configuration is used to determine the effect of doping on both the thermal and transport properties of p- and n-type GaAs epitaxial layers grown on GaAs substrate by the molecular beam epitaxial method. Analysis of the data is made on the basis of the theoretical model of Rosencwaig and Gersho. Thermal and transport properties of the epitaxial layers are found by fitting the phase of the experimentally obtained photoacoustic signal with that of the theoretical model. It is observed that both the thermal and transport properties, i.e. thermal diffusivity, diffusion coefficient, surface recombination velocity and nonradiative recombination time, depend on the type of doping in the epitaxial layer. The results clearly show that the photoacoustic technique using heat transmission configuration is an excellent tool to study the thermal and transport properties of epitaxial layers under different doping conditions.
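A quantity that recurs in the Rosencwaig-Gersho analysis is the thermal diffusion length μ = √(α/(πf)), which determines whether a layer of thickness l is thermally thin or thick at a given modulation frequency. A minimal sketch with assumed, illustrative values (not the paper's measured parameters):

```python
import math

def thermal_diffusion_length_um(diffusivity_cm2_s, frequency_hz):
    """Thermal diffusion length mu = sqrt(alpha / (pi * f)), in micrometres."""
    mu_cm = math.sqrt(diffusivity_cm2_s / (math.pi * frequency_hz))
    return mu_cm * 1e4

def characteristic_frequency_hz(diffusivity_cm2_s, thickness_um):
    """Frequency at which the layer thickness equals the diffusion length."""
    thickness_cm = thickness_um * 1e-4
    return diffusivity_cm2_s / (math.pi * thickness_cm ** 2)

# Assumed example: GaAs-like thermal diffusivity 0.26 cm^2/s, 2 um epitaxial layer
alpha, thickness = 0.26, 2.0
print(f"mu at 100 Hz: {thermal_diffusion_length_um(alpha, 100):.0f} um")
print(f"characteristic frequency: {characteristic_frequency_hz(alpha, thickness):.2e} Hz")
```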
Abstract:
Current methods and techniques used in designing organisational performance measurement systems do not consider the multiple aspects of business processes or the semantics of data generated during the lifecycle of a product. In this paper, we propose an organisational performance measurement systems design model that is based on the semantics of an organisation, business process and products lifecycle. Organisational performance measurement is examined from academic and practice disciplines. The multi-discipline approach is used as a research tool to explore the weaknesses of current models that are used to design organisational performance measurement systems. This helped in identifying the gaps in research and practice concerning the issues and challenges in designing information systems for measuring the performance of an organisation. The knowledge sources investigated include on-going and completed research project reports; scientific and management literature; and practitioners’ magazines.
Abstract:
Field observations of new particle formation and the subsequent particle growth are typically only possible at a fixed measurement location, and hence do not follow the temporal evolution of an air parcel in a Lagrangian sense. Standard analysis for determining formation and growth rates requires that the time-dependent formation rate and growth rate of the particles are spatially invariant; air parcel advection means that the observed temporal evolution of the particle size distribution at a fixed measurement location may not represent the true evolution if there are spatial variations in the formation and growth rates. Here we present a zero-dimensional aerosol box model coupled with one-dimensional atmospheric flow to describe the impact of advection on the evolution of simulated new particle formation events. Wind speed, particle formation rates and growth rates are input parameters that can vary as a function of time and location, using wind speed to connect location to time. The output simulates measurements at a fixed location; formation and growth rates of the particle mode can then be calculated from the simulated observations at a stationary point for different scenarios and be compared with the ‘true’ input parameters. Hence, we can investigate how spatial variations in the formation and growth rates of new particles would appear in observations of particle number size distributions at a fixed measurement site. We show that the particle size distribution and growth rate at a fixed location is dependent on the formation and growth parameters upwind, even if local conditions do not vary. We also show that different input parameters used may result in very similar simulated measurements. Erroneous interpretation of observations in terms of particle formation and growth rates, and the time span and areal extent of new particle formation, is possible if the spatial effects are not accounted for.
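A minimal sketch of the basic idea, in Python with invented formation and growth fields (not the study's inputs): wind speed maps upwind location to time, so the evolution "observed" at a fixed site reflects the formation and growth rates experienced along each arriving air parcel's trajectory. For brevity the full sectional size distribution is reduced here to a single mode concentration and diameter.

```python
import numpy as np

# Illustrative, assumed input fields (not the paper's parameterisations):
def formation_rate(x_km, t_h):
    """Particle formation rate J (cm^-3 h^-1) as a function of location and time."""
    active = 9.0 <= t_h <= 12.0
    return 100.0 * np.exp(-((x_km - 30.0) / 15.0) ** 2) if active else 0.0

def growth_rate(x_km, t_h):
    """Particle growth rate GR (nm h^-1), here varying only with location."""
    return 3.0 + 0.05 * x_km

wind_speed = 10.0                      # km h^-1, constant for simplicity
dt = 0.1                               # h
arrival_times = np.arange(6.0, 18.0, dt)

# For each arrival time at the fixed site (x = 0 km), integrate a Lagrangian
# air parcel along its trajectory x(t) = wind_speed * (t_arrival - t).
records = []
for t_arrival in arrival_times:
    number, diameter = 0.0, 1.5        # new mode: concentration (cm^-3), diameter (nm)
    for t in np.arange(arrival_times[0], t_arrival, dt):
        x = wind_speed * (t_arrival - t)        # upwind distance of the parcel at time t
        number += formation_rate(x, t) * dt
        if number > 0.0:
            diameter += growth_rate(x, t) * dt  # crude: whole mode grows at the local GR
    records.append((t_arrival, number, diameter))

for t_arrival, number, diameter in records[::20]:
    print(f"t = {t_arrival:4.1f} h   N = {number:7.1f} cm^-3   D = {diameter:5.1f} nm")
```

Even in this reduced form, the "observations" at x = 0 depend on the upwind fields, which is the spatial effect the paper investigates.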
Abstract:
Background: Oxidative modification of low-density lipoprotein (LDL) plays a key role in the pathogenesis of atherosclerosis. LDL(-) is present in the blood plasma of healthy subjects and at higher concentrations in diseases with high cardiovascular risk, such as familial hypercholesterolemia or diabetes. Methods: We developed and validated a sandwich ELISA for LDL(-) in human plasma using two monoclonal antibodies against LDL(-) that do not bind to native LDL, extensively copper-oxidized LDL or malondialdehyde-modified LDL. The characteristics of assay performance, such as the limits of detection and quantification, accuracy, and inter- and intra-assay precision, were evaluated. Linearity, interference and stability tests were also performed. Results: The calibration range of the assay is 0.625-20.0 mU/L at a 1:2000 sample dilution. ELISA validation showed intra- and inter-assay precision and recovery within the required limits for immunoassays. The limits of detection and quantification were 0.423 mU/L and 0.517 mU/L LDL(-), respectively. The intra- and inter-assay coefficients of variation ranged from 9.5% to 11.5% and from 11.3% to 18.9%, respectively. Recovery of LDL(-) ranged from 92.8% to 105.1%. Conclusions: This ELISA represents a very practical tool for measuring LDL(-) in human blood for widespread research and clinical sample use. Clin Chem Lab Med 2008; 46: 1769-75.
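A minimal sketch (Python, with made-up replicate values) of how intra-assay coefficients of variation and recovery are typically computed in this kind of validation; the numbers are illustrative only:

```python
import statistics

def coefficient_of_variation_pct(replicates):
    """Intra-assay CV (%) = SD / mean * 100 for replicate measurements."""
    return statistics.stdev(replicates) / statistics.mean(replicates) * 100

def recovery_pct(measured_spiked, measured_unspiked, added):
    """Recovery (%) = (spiked result - unspiked result) / amount added * 100."""
    return (measured_spiked - measured_unspiked) / added * 100

# Hypothetical replicate readings of one plasma sample (mU/L)
replicates = [4.8, 5.1, 4.6, 5.0, 4.9]
print(f"intra-assay CV: {coefficient_of_variation_pct(replicates):.1f}%")

# Hypothetical spike-recovery experiment (mU/L)
print(f"recovery: {recovery_pct(9.7, 4.9, 5.0):.1f}%")
```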
Abstract:
Direct measurements in the last decades have highlighted a new problem related to the lowering of the Coulomb barrier between the interacting nuclei due to the presence of "electron screening" in laboratory measurements. It has been systematically observed that the presence of the electronic cloud around the interacting ions in measurements of nuclear reaction cross sections at astrophysical energies gives rise to an enhancement of the astrophysical S(E)-factor as lower and lower energies are explored [1]. Moreover, such an effect is at present not well understood, as the value of the screening potential extracted from these measurements is higher than the upper limit of theoretical predictions (the adiabatic limit). On the other hand, the electron screening potential in laboratory measurements is different from that occurring in stellar plasmas; thus the quantity of interest in astrophysics is the so-called "bare nucleus cross section". This quantity can only be extrapolated in direct measurements. These are the reasons that led to a considerable growth of interest in indirect measurement techniques and in particular the Trojan Horse Method (THM) [2,3]. Results concerning bare nucleus cross section measurements will be shown for several cases of astrophysical interest. In those cases the screening potential evaluated by means of the THM will be compared with the adiabatic limit and with results arising from extrapolation of direct measurements.
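The laboratory enhancement is conventionally written f(E) = σ_shielded(E)/σ_bare(E) ≈ exp(πη(E)·Ue/E), with η(E) the Sommerfeld parameter and Ue the electron screening potential. A minimal sketch in Python; the reaction and numbers are assumed examples, not results from the paper:

```python
import math

def enhancement_factor(z1, z2, reduced_mass_amu, energy_kev, ue_kev):
    """Screening enhancement f(E) ~ exp(pi*eta(E) * Ue / E).

    Uses 2*pi*eta = 31.29 * Z1 * Z2 * sqrt(mu/E) with mu in amu and E in keV
    (centre-of-mass energy). Valid for Ue << E."""
    pi_eta = 0.5 * 31.29 * z1 * z2 * math.sqrt(reduced_mass_amu / energy_kev)
    return math.exp(pi_eta * ue_kev / energy_kev)

# Assumed example: d + 3He (Z1*Z2 = 2, mu ~ 1.2 amu), adiabatic-limit Ue ~ 0.12 keV
for e_cm in (5.0, 10.0, 20.0, 50.0):
    print(f"E = {e_cm:5.1f} keV   f = {enhancement_factor(1, 2, 1.2, e_cm, 0.12):.3f}")
```

The enhancement grows rapidly as the energy decreases, which is why the screening correction matters most at the lowest energies probed directly.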
Abstract:
Lead (Pb) poisoning is preventable but continues to be a public health problem in several countries. Measuring Pb in the surface dental enamel (SDE) using microbiopsies is a rapid, safe, and painless procedure. There are different protocols for performing these microbiopsies, but the reliability of dental enamel lead level (DELL) determination depends on biopsy depth (BD). It is established that DELL decrease from the outermost superficial layer to the inner layer of dental enamel. The aim of this study was to determine DELL obtained by two different microbiopsy techniques on SDE, termed protocol I and protocol II. Two consecutive enamel layers were removed from the same subject group (n = 138) for both protocols. Protocol I consisted of a biopsied site with a diameter of 4 mm after the application of 10 µL of HCl for 35 s. Protocol II involved a biopsied site of 1.6 mm diameter after application of 5 µL of HCl for 20 s. The results demonstrated that there were no significant differences in BD and DELL between homologous teeth using protocol I. However, there was a significant difference between DELL in the first and second layers using both protocols. Further, the BD in protocol II overestimated DELL values. In conclusion, SDE analyzed by microbiopsy is a reliable biomarker in protocol I, but the chemical method used to calculate BD in protocol II appeared to be inadequate for the measurement of DELL. Thus, DELL could not be compared among studies that used different methodologies for SDE microbiopsies.
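The chemical approach to estimating biopsy depth converts the mass of a reference element removed (commonly phosphorus) into a depth, given the etched area, the enamel density and the element's mass fraction. The abstract does not give the constants used by either protocol, so the values below are assumed typical figures, for illustration only:

```python
import math

def biopsy_depth_um(phosphorus_ug, site_diameter_mm,
                    enamel_density_ug_mm3=2950.0, p_mass_fraction=0.174):
    """Estimated biopsy depth (um) from the mass of phosphorus removed.

    depth = mass_P / (area * density * P fraction). Density and P mass fraction
    are assumed typical enamel values, not the study's constants."""
    area_mm2 = math.pi * (site_diameter_mm / 2.0) ** 2
    depth_mm = phosphorus_ug / (area_mm2 * enamel_density_ug_mm3 * p_mass_fraction)
    return depth_mm * 1000.0

# Hypothetical example: 3 ug of phosphorus recovered from a 1.6 mm diameter site
print(f"estimated depth: {biopsy_depth_um(3.0, 1.6):.1f} um")
```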