989 results for Metrics Measurement
Abstract:
Metrics estimate the quality of different aspects of software. In particular, cohesion indicates how well the parts of a system hold together. A metric to evaluate class cohesion is important in object-oriented programming because it gives an indication of good class design. There are several proposed metrics for class cohesion, but they have several problems (for instance, low discrimination). In this paper, a new metric to evaluate class cohesion is proposed, called SCOM, which has several relevant features. It has an intuitive and analytical formulation, which is necessary to apply it to large software systems. It is normalized to produce values in the range [0..1], thus yielding meaningful values. It is also more sensitive than those previously reported in the literature. The attributes and methods used to evaluate SCOM are unambiguously stated. SCOM has an analytical threshold, which is a very useful but rare feature in software metrics. We assess the metric with several sample cases, showing that it gives more sensitive values than other well-known cohesion metrics.
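The abstract describes SCOM only at a high level; the sketch below shows one way a normalized, pairwise class-cohesion metric in this spirit can be computed. The connection-intensity and weighting terms are assumptions for illustration, not the exact published SCOM formula:

```python
from itertools import combinations

def class_cohesion(methods, n_attrs):
    """Pairwise cohesion sketch in the spirit of SCOM (illustrative,
    not the exact published definition).  `methods` maps each method
    name to the set of attribute indices it uses; `n_attrs` is the
    total number of attributes in the class."""
    names = list(methods)
    m = len(names)
    if m < 2 or n_attrs == 0:
        return 0.0
    total = 0.0
    for a, b in combinations(names, 2):
        ia, ib = methods[a], methods[b]
        if not ia or not ib:
            continue
        # connection intensity: shared attributes relative to the smaller set
        c = len(ia & ib) / min(len(ia), len(ib))
        # weight: how much of the class's attributes the pair touches
        w = len(ia | ib) / n_attrs
        total += c * w
    pairs = m * (m - 1) / 2
    return total / pairs  # normalized to [0, 1]
```

Averaging over all method pairs keeps the result in [0..1]: two methods sharing every attribute yield 1.0, fully disjoint methods yield 0.0.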
Abstract:
Digital media have contributed to significant disruptions in the business of audience measurement. Television broadcasters have long relied on simple and authoritative measures of who is watching what. The demand for ratings data, as a common currency in transactions involving advertising and program content, will likely remain, but accompanying measurements of audience engagement with media content would also be of value. Today's media environment increasingly includes social media and second-screen use, providing a data trail that affords an opportunity to measure engagement. If the limitations of using social media to indicate audience engagement can be overcome, social media use may allow for quantitative and qualitative measures of engagement. Raw social media data must be contextualized, and it is suggested that tools used by sports analysts be incorporated to do so. Inspired by baseball's sabermetrics, the authors propose Telemetrics in an attempt to separate actual performance from contextual factors. Telemetrics facilitates measuring audience activity in a manner that controls for factors such as time slot, network, and so forth. It potentially allows both descriptive and predictive measures of engagement.
Abstract:
The advantages of using a balanced approach to the measurement of overall organisational performance are well known. We examined the effects of a balanced approach in the more specific domain of measuring innovation effectiveness in 144 small to medium-sized companies in Australia and Thailand. We found no differences in the metrics used by Australian and Thai companies. In line with our hypotheses, we found that SMEs that took a balanced approach were more likely to perceive benefits from implemented innovations than those that used only a financial approach to measurement. The perception of benefits then had a subsequent effect on overall attitudes towards innovation. The study shows the importance of measuring both financial and non-financial indicators of innovation effectiveness within SMEs and discusses ways in which this can be done with limited resources.
Abstract:
Since users became the focus of product/service design in the last decade, the term User eXperience (UX) has been used frequently in the field of Human-Computer Interaction (HCI). Research on UX facilitates a better understanding of the various aspects of the user's interaction with a product or service. Mobile video, as a new and promising service and research field, has attracted great attention. Given the significance of UX to the success of mobile video (Jordan, 2002), many researchers have focused on this area, examining users' expectations, motivations, requirements, and usage contexts. As a result, many influencing factors have been identified (Buchinger, Kriglstein, Brandt & Hlavacs, 2011; Buchinger, Kriglstein & Hlavacs, 2009). However, a general framework for structuring this large number of factors is lacking for specific mobile video services. To measure the user experience of multimedia services such as mobile video, quality of experience (QoE) has recently become a prominent concept. In contrast to the traditional concept of quality of service (QoS), QoE not only involves objectively measuring the delivered service but also takes into account the user's needs and desires when using the service, emphasizing the user's overall acceptance of the service. Many QoE metrics can estimate the user-perceived quality or acceptability of mobile video, but they may not be accurate enough for overall UX prediction due to the complexity of UX. Only a few QoE frameworks have addressed broader aspects of UX for mobile multimedia applications, and these still need to be transformed into practical measures. The challenge of optimizing UX remains adapting to resource constraints (e.g., network conditions, mobile device capabilities, and heterogeneous usage contexts) as well as meeting complex user requirements (e.g., usage purposes and personal preferences).
In this chapter, we examine the existing important UX frameworks, compare their similarities, and discuss some important features that fit the mobile video service. Based on previous research, we propose a simple UX framework for mobile video applications by mapping a variety of influencing factors of UX onto a typical mobile video delivery system. Each component and its factors are explored through comprehensive literature reviews. The proposed framework may benefit the user-centred design of mobile video by taking full account of UX influences, and may help improve mobile video service quality by adjusting the values of certain factors to produce a positive user experience. It may also facilitate related research by locating important issues to study, clarifying research scopes, and setting up proper study procedures. We then review a large body of research on UX measurement, including QoE metrics and QoE frameworks for mobile multimedia. Finally, we discuss how to achieve an optimal quality of user experience by focusing on various aspects of the UX of mobile video. In the conclusion, we suggest some open issues for future study.
Abstract:
Increasing global competition, rapid technological change, advances in manufacturing and information technology, and discerning customers are forcing supply chains to adopt improvement practices that enable them to deliver high-quality products at a lower cost and in a shorter time. A lean initiative is one of the most effective approaches toward achieving this goal. In the lean improvement process, it is critical to measure current and desired performance levels in order to clearly evaluate lean implementation efforts. Many attempts have been made to measure supply chain performance incorporating both quantitative and qualitative measures, but they have failed to provide an effective method of measuring performance improvements in dynamic lean supply chain situations. Appropriate measurement of lean supply chain performance has therefore become imperative. Many lean tools are available for supply chains; however, the effectiveness of a lean tool depends on the type of product and supply chain. One tool may be highly effective for a supply chain involved in high-volume products but ineffective for low-volume products. There is currently no systematic methodology for selecting appropriate lean strategies based on the type of supply chain and market strategy. This thesis develops an effective method to measure supply chain performance consisting of both quantitative and qualitative metrics, and investigates the effects of product types and lean tool selection on supply chain performance. Supply chain performance metrics and the effects of various lean tools on the performance metrics defined in the SCOR framework have been investigated. A lean supply chain model based on the SCOR metric framework is then developed in which non-lean and lean as well as quantitative and qualitative metrics are incorporated into appropriate metrics.
The values of the appropriate metrics are converted into triangular fuzzy numbers using similarity rules and heuristic methods. Data were collected from an apparel manufacturing company for multiple supply chain products, and a fuzzy-based method was then applied to measure the performance improvements in the supply chains. Using the fuzzy TOPSIS method, which chooses an optimum alternative so as to maximise similarity to the positive ideal solution and minimise similarity to the negative ideal solution, the performance of lean and non-lean supply chain situations for three different apparel products was evaluated. To address the research questions related to an effective performance evaluation method and the effects of lean tools on different types of supply chains, a conceptual framework and two hypotheses were investigated. Empirical results show that the implementation of lean tools has significant effects on performance improvements in terms of time, quality and flexibility. The fuzzy TOPSIS-based method developed is able to integrate multiple supply chain metrics into a single performance measure, while the lean supply chain model incorporates qualitative and quantitative metrics. It can therefore effectively measure the improvements in a supply chain after implementing lean tools. It is demonstrated that the product types involved in the supply chain and the ability to select the right lean tools have a significant effect on lean supply chain performance. Future studies could conduct multiple case studies in different contexts.
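The core ranking idea behind TOPSIS (closeness to a positive ideal and distance from a negative ideal) can be sketched in a few lines. The thesis uses a fuzzy variant with triangular fuzzy numbers; the crisp version below, with made-up weights and criteria, only illustrates the mechanics:

```python
import math

def topsis(matrix, weights, benefit):
    """Crisp TOPSIS sketch (the fuzzy variant replaces crisp scores
    with triangular fuzzy numbers).  `matrix` is alternatives x
    criteria; `benefit[j]` is True if higher is better for criterion j.
    Returns a closeness score in [0, 1] per alternative."""
    m, n = len(matrix), len(matrix[0])
    # vector-normalize each criterion column, then apply weights
    norms = [math.sqrt(sum(matrix[i][j] ** 2 for i in range(m))) for j in range(n)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)] for i in range(m)]
    # positive and negative ideal solutions per criterion
    pis = [max(v[i][j] for i in range(m)) if benefit[j]
           else min(v[i][j] for i in range(m)) for j in range(n)]
    nis = [min(v[i][j] for i in range(m)) if benefit[j]
           else max(v[i][j] for i in range(m)) for j in range(n)]
    scores = []
    for i in range(m):
        d_pos = math.sqrt(sum((v[i][j] - pis[j]) ** 2 for j in range(n)))
        d_neg = math.sqrt(sum((v[i][j] - nis[j]) ** 2 for j in range(n)))
        scores.append(d_neg / (d_pos + d_neg))  # closeness to the ideal
    return scores
```

An alternative that dominates on every benefit criterion scores 1.0; one dominated on every criterion scores 0.0, which is what lets multiple metrics collapse into a single performance measure.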
Abstract:
Process compliance measurement is receiving increasing attention in companies due to stricter legal requirements and market pressure for operational excellence. However, metrics to quantify process compliance have only been defined recently. A major criticism is that existing measures appear to be unintuitive. In this paper, we trace this problem back to a more foundational question: which notion of behavioural equivalence is appropriate for discussing compliance? We present a quantification approach based on behavioural profiles, a process abstraction mechanism. Behavioural profiles can be regarded as weaker than existing equivalence notions such as trace equivalence, and they can be calculated efficiently. As a validation, we present an implementation that measures the compliance of logs against a normative process model. This implementation is being evaluated in a case study with an international service provider.
Abstract:
Performance measurement and management (PMM) is a management and research paradox. On one hand, it provides management with many critical, useful, and needed functions. Yet there is evidence that it can adversely affect performance. This paper attempts to resolve this paradox by focusing on the issue of "fit". That is, in today's dynamic and turbulent environment, changes in either the business environment or the business strategy can lead to the need for new or revised measures and metrics. Yet, if these measures and metrics are either not revised or incorrectly revised, we can encounter situations where what the firm wants to achieve (as communicated by its strategy) and what the firm measures and rewards are not synchronised with each other (i.e., there is a lack of "fit"). This situation can adversely affect the firm's ability to compete. The issue of fit is explored using a three-phase Delphi approach. Initially intended to resolve this first paradox, the Delphi study identified another paradox: in a dynamic environment, firms do revise their strategies, yet the PMM system is often not changed. To resolve this second paradox, the paper proposes a new framework which shows that, under certain conditions, the observed metrics "lag" is not only explainable but also desirable. The findings suggest a need to recast the accepted relationship between strategy and the PMM system; the output included the Performance Alignment Matrix, which has utility for managers.
Abstract:
Growing interest in the inference and prediction of network characteristics is justified by their importance for a variety of network-aware applications. One widely adopted strategy for characterizing network conditions relies on active, end-to-end probing of the network. Active end-to-end probing techniques differ in (1) the structural composition of the probes they use (e.g., the number and size of packets, the destinations of the packets, the protocols used), (2) the entity making the measurements (e.g., sender vs. receiver), and (3) the techniques used to combine measurements in order to infer specific metrics of interest. In this paper, we present Periscope: a Linux API that enables the definition of new probing structures and inference techniques from user space through a flexible interface. Periscope requires no support from clients beyond the ability to respond to ICMP ECHO REQUESTs, and is designed to minimize user/kernel crossings and to enforce various constraints (e.g., back-to-back packet transmissions, fine-grained timing measurements). We show how to use Periscope for two different probing purposes: measuring shared packet losses between pairs of endpoints and measuring subpath bandwidth. Results from Internet experiments for both of these goals are also presented.
Abstract:
As many as 20-70% of patients undergoing breast-conserving surgery require repeat surgery due to a close or positive surgical margin diagnosed post-operatively [1]. There are currently no widely accepted tools for intra-operative margin assessment, which is a significant unmet clinical need. Our group has developed a first-generation optical visible spectral imaging platform to image the molecular composition of breast tumor margins and has tested it clinically in 48 patients in a previously published study [2]. The goal of this paper is to report on the performance metrics of the system and to compare them to clinical criteria for intra-operative tumor margin assessment. The system was found to have an average signal-to-noise ratio (SNR) >100 and <15% error in the extraction of optical properties, indicating that there is sufficient SNR to leverage the differences in optical properties between negative and close/positive margins. The probe had a sensing depth of 0.5-2.2 mm over the wavelength range of 450-600 nm, which is consistent with the pathologic criterion for clear margins of 0-2 mm. There was <1% cross-talk between adjacent channels of the multi-channel probe, showing that multiple sites can be measured simultaneously with negligible cross-talk between adjacent sites. Lastly, the system and measurement procedure were found to be reproducible when evaluated with repeated measures, with a low coefficient of variation (<0.11). The only aspect of the system not optimized for intra-operative use was the imaging time. The manuscript includes a discussion of how the speed of the system can be improved to work within the time constraints of an intra-operative setting.
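The reproducibility criterion quoted above (coefficient of variation < 0.11) is a standard dimensionless measure; a minimal sketch of how it would be computed from repeated measurements (the example values are hypothetical, not from the study):

```python
import statistics

def coefficient_of_variation(repeated):
    """CV = standard deviation / mean of repeated measurements.
    The abstract reports CV < 0.11 as evidence of reproducibility."""
    return statistics.stdev(repeated) / statistics.mean(repeated)

# hypothetical repeated readings of the same site
cv = coefficient_of_variation([100.0, 101.0, 99.0])
```

Because CV normalizes spread by the mean, it allows reproducibility to be compared across channels or sites that operate at different absolute signal levels.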
Abstract:
A survey of 320 marketing executives conducted by the CMO Council and released in June 2004 indicated that few high-technology companies (fewer than 20% of the firms interviewed) have developed useful and meaningful measures and metrics for their marketing organizations. However, the research also revealed that companies that established formal, comprehensive measures achieved superior financial results and enjoyed greater CEO confidence in the marketing function. This dissertation provides an overview of the information marketing executives need in order to understand and implement marketing performance measurement (MPM) processes in their organizations. It raises questions for marketing managers in the high-technology industry regarding the demands for greater accountability, the value of measurement for improving marketing processes, initiatives to determine the profitability of marketing investments, and the importance of marketing activities in corporate reporting. The dissertation advocates the implementation of MPM, mapping its measurement benefits for both marketing managers and their companies. The work then explores some general marketing measurement concepts and investigates several approaches to MPM proposed by industry, the academic community, and analysts. Finally, the dissertation describes some practices that every marketing manager in the high-technology industry should consider when adopting MPM. The suggestions are general, but should familiarize the reader with the information needed to enable processes and rigor in their organization with respect to MPM.
Abstract:
AIMS Transcatheter mitral valve replacement (TMVR) is an emerging technology with the potential to treat patients with severe mitral regurgitation who are at excessive risk for surgical mitral valve replacement. Multimodal imaging of the mitral valvular complex and surrounding structures will be an important component of patient selection for TMVR. Our aim was to describe and evaluate a systematic multi-slice computed tomography (MSCT) image analysis methodology that provides measurements relevant to transcatheter mitral valve replacement. METHODS AND RESULTS A systematic, step-by-step measurement methodology is described for structures of the mitral valvular complex, including the mitral valve annulus, left ventricle, left atrium, papillary muscles and left ventricular outflow tract. To evaluate reproducibility, two observers applied this methodology to a retrospective series of 49 cardiac MSCT scans in patients with heart failure and significant mitral regurgitation. For each of 25 geometrical metrics, we evaluated the inter-observer difference and intra-class correlation. The inter-observer difference was below 10% and the intra-class correlation was above 0.81 for measurements of critical importance in the sizing of TMVR devices: the mitral valve annulus diameters, area, perimeter, the inter-trigone distance, and the aorto-mitral angle. CONCLUSIONS MSCT can provide measurements that are important for patient selection and sizing of TMVR devices. These measurements have excellent inter-observer reproducibility in patients with functional mitral regurgitation.
Abstract:
Context: Measurement is crucial to empirical software engineering. Although reliability and validity are two important properties warranting consideration in measurement processes, they may be influenced by random or systematic error (bias) depending on which metric is used. Aim: To check whether the simple subjective metrics used in empirical software engineering studies are prone to bias. Method: Comparison of the reliability of a family of empirical studies on requirements elicitation that explore the same phenomenon using different design types and objective and subjective metrics. Results: The objectively measured variables (experience and knowledge) tend to achieve more reliable results, whereas subjective metrics using Likert scales (expertise and familiarity) tend to be influenced by systematic error or bias. Conclusions: Studies that predominantly use subjectively measured variables, like opinion polls or expert opinion acquisition, are liable to this systematic bias.
Abstract:
The amplification of demand variation up a supply chain, widely termed 'the Bullwhip Effect', is disruptive, costly and something that supply chain management generally seeks to minimise. It was originally attributed to poor system design: deficiencies in policies, organisation structure and delays in material and information flow all lead to sub-optimal reorder point calculation. It has since been attributed to exogenous random factors such as uncertainties in demand, supply and distribution lead time, but these causes are not exclusive, as subsequent academic and operational studies have shown that orders and/or inventories can exhibit significant variability even if customer demand and lead time are deterministic. This increase in the range of possible causes of dynamic behaviour indicates that our understanding of the phenomenon is far from complete. One possible, yet previously unexplored, factor that may influence dynamic behaviour in supply chains is the application and operation of supply chain performance measures. Organisations monitoring and responding to their adopted key performance metrics will make operational changes, and this action may influence the level of dynamics within the supply chain, possibly degrading the performance of the very system the metrics were intended to measure. To explore this, a plausible abstraction of the operational responses to the Supply Chain Council's SCOR® (Supply Chain Operations Reference) model was incorporated into a classic Beer Game distribution representation using the dynamic discrete event simulation software Simul8. During the simulation, the five SCOR Supply Chain Performance Attributes (Reliability, Responsiveness, Flexibility, Cost and Utilisation) were continuously monitored and compared to established targets.
Operational adjustments to the reorder point, transportation modes and production capacity (where appropriate) were made for three independent supply chain roles, and the degree of dynamic behaviour in the supply chain was measured using the ratio of the standard deviation of upstream demand to the standard deviation of downstream demand. Factors employed to build the detailed model include variable retail demand, order transmission, transportation delays, production delays, capacity constraints, demand multipliers and demand-averaging periods. Five dimensions of supply chain performance were monitored independently in three autonomous supply chain roles, and operational settings were adjusted accordingly. The uniqueness of this research stems from the application of the five SCOR performance attributes with modelled operational responses in a dynamic discrete event simulation model. The project makes its primary contribution to knowledge by measuring the impact, on supply chain dynamics, of applying a representative performance measurement system.
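The bullwhip measure described above (ratio of the standard deviation of upstream orders to that of downstream demand) is straightforward to compute; a minimal sketch with hypothetical demand series:

```python
import statistics

def bullwhip_ratio(upstream_orders, downstream_demand):
    """Bullwhip measure as described in the study: std. dev. of
    upstream orders over std. dev. of downstream demand.
    A ratio > 1 indicates amplification up the chain."""
    return statistics.stdev(upstream_orders) / statistics.stdev(downstream_demand)

# hypothetical series: small retail fluctuations amplified upstream
ratio = bullwhip_ratio([8, 14, 6, 12], [10, 11, 9, 10])
```

Computing this ratio at each echelon boundary makes it possible to see where in the chain the SCOR-driven operational responses amplify or dampen variability.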
Abstract:
PURPOSE: To provide a consistent standard for the evaluation of different types of presbyopic correction. SETTING: Eye Clinic, School of Life and Health Sciences, Aston University, Birmingham, United Kingdom. METHODS: The presbyopic corrections examined were accommodating intraocular lenses (IOLs), simultaneous multifocal and monovision contact lenses, and varifocal spectacles. Binocular near visual acuity measured with different optotypes (uppercase letters, lowercase letters, and words) and reading metrics assessed with the Minnesota Near Reading chart (reading acuity, critical print size [CPS], CPS reading speed) were intercorrelated (Pearson product moment correlations) and assessed for concordance (intraclass correlation coefficients [ICC]) and agreement (Bland-Altman analysis) as an indication of clinical usefulness. RESULTS: Nineteen accommodating IOL cases, 40 simultaneous contact lens cases, and 38 varifocal spectacle cases were evaluated. Other than CPS reading speed, all near visual acuity and reading metrics correlated well with each other (r>0.70, P<.001). Near visual acuity measured with uppercase letters was highly concordant (ICC, 0.78) and in close agreement with lowercase letters (+/- 0.17 logMAR). Near word acuity agreed well with reading acuity (+/- 0.16 logMAR), which in turn agreed well with near visual acuity measured with uppercase letters (+/- 0.16 logMAR). Concordance (ICC, 0.18 to 0.46) and agreement (+/- 0.24 to 0.30 logMAR) of CPS with the other near metrics were moderate. CONCLUSION: Measurement of near visual ability in presbyopia should be standardized to include assessment of near visual acuity with logMAR uppercase-letter optotypes, the smallest logMAR print size that maintains maximum reading speed (CPS), and reading speed. J Cataract Refract Surg 2009; 35:1401-1409 (C) 2009 ASCRS and ESCRS
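The Bland-Altman agreement figures quoted above (e.g. +/- 0.16 logMAR) are 95% limits of agreement; a minimal sketch of the computation, with hypothetical paired logMAR readings rather than the study's data:

```python
import statistics

def bland_altman_limits(x, y):
    """Bland-Altman agreement sketch for two paired measures (e.g.
    logMAR acuity from uppercase letters vs. words): returns the mean
    difference (bias) and the 95% limits of agreement,
    bias +/- 1.96 * SD of the paired differences."""
    diffs = [a - b for a, b in zip(x, y)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# hypothetical paired logMAR acuities for the same eyes
bias, limits = bland_altman_limits([0.1, 0.2, 0.3], [0.0, 0.1, 0.2])
```

Unlike a correlation coefficient, which can be high even when one measure is systematically offset from the other, the limits of agreement state directly how far two clinical measures may disagree for an individual patient.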