940 results for Performance metrics


Relevance: 30.00%

Abstract:

Previous multicast research often makes commonly accepted but unverified assumptions about network topologies and group member distributions in simulation studies. In this paper, we propose a framework to systematically evaluate multicast performance for different protocols. We identify a series of metrics and carry out extensive simulation studies on these metrics with different topological models and group member distributions for three case studies. Our simulation results indicate that realistic topology and group membership models are crucial to accurate multicast performance evaluation. These results can guide multicast researchers toward realistic simulations and facilitate the design and development of multicast protocols.

Relevance: 30.00%

Abstract:

INTRODUCTION: Motion metrics have become an important source of information for the assessment of surgical expertise. However, their direct relationship with the different surgical skills has not been fully explored. The purpose of this study is to investigate the relevance of motion-related metrics in the evaluation of basic psychomotor laparoscopic skills, as well as their correlation with the abilities they are intended to measure. METHODS: A framework for task definition and metric analysis is proposed. An explorative survey was first conducted with a board of experts to identify metrics for assessing basic psychomotor skills. Based on the output of that survey, three novel tasks for surgical assessment were designed. A face and construct validation study was performed, with a focus on motion-related metrics. Tasks were performed by 42 participants (16 novices, 22 residents and 4 experts). Movements of the laparoscopic instruments were registered with the TrEndo tracking system and analyzed. RESULTS: Time, path length and depth showed construct validity for all three tasks. Motion smoothness and idle time also showed validity for tasks involving bi-manual coordination and tasks requiring a more tactical approach, respectively. Additionally, motion smoothness and average speed showed high internal consistency, making them the most task-independent of all the metrics analyzed. CONCLUSION: Motion metrics are complementary and valid for assessing basic psychomotor skills, and their relevance depends on the skill being evaluated. A larger clinical implementation, combined with quality performance information, will give more insight into the relevance of the results shown in this study.
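
Most of these metrics have simple kinematic definitions. The sketch below computes time, path length, depth, average speed, idle time and a jerk-based smoothness score from a sampled instrument-tip trajectory; it is a minimal illustration under assumed conventions (z as the insertion axis, a fixed idle-speed threshold, one common dimensionless-jerk formulation), not the exact formulas applied to the TrEndo data.

```python
import numpy as np

def motion_metrics(tip_xyz, dt, idle_speed=5.0):
    """Illustrative motion metrics for a laparoscopic instrument tip.

    tip_xyz    : (N, 3) array of tip positions in mm
    dt         : sampling interval in seconds
    idle_speed : speed (mm/s) below which the instrument counts as idle (assumed threshold)
    """
    steps = np.diff(tip_xyz, axis=0)                      # per-sample displacement
    step_len = np.linalg.norm(steps, axis=1)
    total_time = (len(tip_xyz) - 1) * dt

    path_length = step_len.sum()                          # total distance travelled
    depth = tip_xyz[:, 2].max() - tip_xyz[:, 2].min()     # excursion along the insertion axis
    speed = step_len / dt
    idle_time = dt * np.count_nonzero(speed < idle_speed)

    # Smoothness as a (negative) dimensionless jerk: values closer to 0 are smoother.
    accel = np.gradient(np.gradient(tip_xyz, dt, axis=0), dt, axis=0)
    jerk = np.gradient(accel, dt, axis=0)
    jerk_integral = np.sum(jerk ** 2) * dt                # approximate integral of |jerk|^2
    smoothness = -np.sqrt(jerk_integral * total_time ** 5 / (2 * path_length ** 2))

    return {"time": total_time, "path_length": path_length, "depth": depth,
            "avg_speed": speed.mean(), "idle_time": idle_time,
            "smoothness": smoothness}
```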

Relevance: 30.00%

Abstract:

Three-dimensional kinematic analysis provides a quantitative assessment of upper limb motion and is used as an outcome measure to evaluate movement disorders. The aim of the present study is to present a set of kinematic metrics for quantifying characteristics of movement performance and the functional status of the subject during the execution of the activity of daily living (ADL) of drinking from a glass, to apply these metrics in healthy people and in a population with cervical spinal cord injury (SCI), and to analyze the metrics' ability to discriminate between healthy and pathologic subjects. Nineteen people participated in the study: 7 subjects with metameric level C6 tetraplegia, 4 subjects with metameric level C7 tetraplegia and 8 healthy subjects. The movement was recorded with a photogrammetry system. The ADL of drinking was divided into a series of clearly identifiable phases to facilitate analysis. Metrics describing the time of the reaching phase, the range of motion of the joints analyzed, and characteristics of movement performance such as the efficiency, accuracy and smoothness of the distal segment and inter-joint coordination were obtained. Performance of the drinking task was more variable in people with SCI than in the control group with respect to the measured metrics, and reaching time was longer in the SCI groups. The proposed metrics were able to discriminate between healthy and pathologic subjects; relative deficits in efficiency were larger in people with SCI than in controls. These metrics can provide useful information in a clinical setting about the quality of the movement performed by healthy and SCI people during functional activities.
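
Two of the distal-segment metrics mentioned here, efficiency and smoothness, can be illustrated from the reaching-phase trajectory of a hand marker. The threshold fraction and the phase segmentation below are assumptions for the sketch, not the study's definitions.

```python
import numpy as np
from scipy.signal import find_peaks

def reaching_metrics(hand_xyz, dt, peak_frac=0.1):
    """Illustrative reaching-phase metrics from a hand/wrist marker trajectory.

    hand_xyz  : (N, 3) marker positions for the reaching phase only
    dt        : sampling interval in seconds
    peak_frac : fraction of peak speed above which a velocity peak counts (assumed)
    """
    step_len = np.linalg.norm(np.diff(hand_xyz, axis=0), axis=1)
    speed = step_len / dt

    reach_time = (len(hand_xyz) - 1) * dt
    # Efficiency: straight-line distance divided by the actual path length
    # (1.0 would be a perfectly straight reach).
    efficiency = np.linalg.norm(hand_xyz[-1] - hand_xyz[0]) / step_len.sum()

    # Movement units: local velocity peaks above a fraction of the maximum speed;
    # fewer units indicate a smoother, better coordinated reach.
    peaks, _ = find_peaks(speed, height=peak_frac * speed.max())
    return {"reach_time": reach_time, "efficiency": efficiency,
            "movement_units": len(peaks)}
```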

Relevance: 30.00%

Abstract:

The Iterative Closest Point (ICP) algorithm is commonly used in engineering applications to solve the rigid registration problem for partially overlapping point sets that are pre-aligned with a coarse estimate of their relative positions. This iterative algorithm is applied in many areas, such as medicine for the volumetric reconstruction of tomography data, robotics for reconstructing surfaces or scenes from range sensor information, industrial systems for quality control of manufactured objects, and even biology to study the structure and folding of proteins. One of the algorithm's main problems is its high computational complexity (quadratic in the number of points for the non-optimized original variant) in a context where high-density point sets, acquired by high-resolution scanners, must be processed. Many variants have been proposed in the literature that aim to improve performance, either by reducing the number of points or the required iterations, or by reducing the complexity of the most expensive phase: the nearest-neighbor search. Although they lower the computational cost, some of these variants degrade the final registration precision or shrink the convergence domain, limiting their possible application scenarios. The goal of this work is to reduce the algorithm's computational cost so that a wider range of the computationally demanding problems described above can be addressed. To that end, an experimental and mathematical convergence analysis and validation of point-to-point distance metrics has been performed, focusing on distances with a lower computational cost than the Euclidean distance, which is the de facto standard in implementations of the algorithm. In this analysis, the behavior of the algorithm in different topological spaces, characterized by different metrics, has been studied to assess the convergence, efficacy and cost of the method and to determine which metric offers the best results. Given that the distance calculation accounts for a significant part of the computations performed by the algorithm, any reduction in its cost is expected to have a significant positive effect on the overall performance of the method. As a result, a performance improvement has been achieved by applying these reduced-cost metrics, whose quality in terms of convergence and error has been analyzed and experimentally validated as comparable to the Euclidean distance over a heterogeneous set of objects, scenes and initial configurations.
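
The central idea, swapping the Euclidean distance for cheaper point-to-point metrics in the matching step, can be illustrated with a minimal point-to-point ICP in which the Minkowski order p of the nearest-neighbour search is a parameter (p=1 Manhattan, p=2 Euclidean, p=inf Chebyshev). This is a generic textbook sketch, not the optimized implementation developed in the work; whether the cheaper metrics preserve convergence and accuracy is exactly what the experimental analysis evaluates.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, p=2, max_iter=50, tol=1e-6):
    """Minimal point-to-point ICP over (N, 3) arrays; p selects the Minkowski
    metric of the nearest-neighbour search (1 = Manhattan, 2 = Euclidean,
    np.inf = Chebyshev)."""
    src = source.copy()
    tree = cKDTree(target)
    prev_err = np.inf
    for _ in range(max_iter):
        dists, idx = tree.query(src, p=p)          # closest-point correspondences
        matched = target[idx]

        # Best-fit rigid transform (Kabsch/SVD) for the current correspondences
        mu_s, mu_m = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_s).T @ (matched - mu_m)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:                   # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_m - R @ mu_s

        src = src @ R.T + t                        # apply the incremental transform
        err = dists.mean()
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return src, err
```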

Relevance: 30.00%

Abstract:

Vertical-cavity surface-emitting lasers (VCSELs) and microlenses can be used to implement free-space optical interconnects (FSOIs), which do not suffer from the bandwidth limitations inherent in metallic interconnects. A comprehensive link equation describing the effects of both optical and electrical noise is introduced. We evaluated FSOI performance using the following metrics: the space-bandwidth product (SBP), describing the achievable channel density and aggregate bandwidth, and the carrier-to-noise ratio (CNR), which represents the relative strength of the carrier signal. The mode expansion method (MEM) was used to account for the primary cause of optical noise: laser beam diffraction. While the literature commonly assumes an ideal single-mode laser beam, we consider the experimentally determined multimodal structure of a VCSEL beam in our calculations. The maximum achievable interconnect length and density for a given CNR were found to be significantly reduced when higher-order transverse modes were present in the simulations. However, the simulations demonstrate that free-space optical interconnects remain a suitable solution for the communications bottleneck, despite the adverse effects introduced by transverse modes.

Relevance: 30.00%

Abstract:

The basis of this work was to investigate the relative environmental impacts of various power generators, recognizing that the plants are located in very different environments and that different receptors will experience different impacts. Based on the IChemE sustainability metrics paradigm, we calculated potential environmental indicators (P-EI) that represent the environmental burden of the masses of potential pollutants discharged into different receiving media. However, a P-EI may not be significant in practice, as it may not be expressed at all under different conditions, so to include some measure of receiver significance we developed a methodology that takes into account specific environmental indicators (S-EI) referring to the environmental attributes of a particular site. In this context, we acquired site-specific environmental data for the airsheds and water catchment areas in different locations for a limited number of environmental indicators, such as human health (carcinogenic) effects, atmospheric acidification, photochemical (ozone) smog and eutrophication. The S-EI results from this analysis show that atmospheric acidification has the highest impact value, while health risks due to fly ash emissions are not as significant; this is because many coal power plants in Australia are located in airsheds with low population density. The contribution of coal power plants to photochemical (ozone) smog and eutrophication was not significant. In this study, we considered emission-related data trends to reflect technology performance (e.g., P-EI indicators), while a real sustainability metric can be associated only with the specific environmental conditions of the relevant sites (e.g., S-EI indicators).
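
The potential-indicator (P-EI) side of this approach is essentially a potency-weighted sum of emitted masses per impact category. A sketch for atmospheric acidification is shown below; the potency factors and emission figures are placeholders for illustration, not the values used in the study.

```python
# Illustrative potency factors (tonnes of SO2-equivalent per tonne emitted).
# Placeholder values for the sketch, not the factors used in the study.
ACIDIFICATION_PF = {"SO2": 1.0, "NOx": 0.7, "HCl": 0.88, "NH3": 1.88}

def environmental_burden(emissions_t, potency_factors):
    """Potential environmental indicator for one impact category: a
    potency-weighted sum of emitted masses, expressed in tonnes of the
    reference substance per year."""
    return sum(mass * potency_factors.get(species, 0.0)
               for species, mass in emissions_t.items())

# Example: annual emissions (tonnes) from a hypothetical coal plant
plant = {"SO2": 12_000, "NOx": 9_500, "HCl": 300}
print(environmental_burden(plant, ACIDIFICATION_PF))   # tonnes SO2-eq per year
```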

Relevance: 30.00%

Abstract:

Background: As light-emitting diodes become more common as the light source in low vision aids, the effect of illumination colour temperature on magnifier reading performance was investigated. Methods: Reading ability (maximum reading speed, critical print size, threshold near visual acuity) measured with Radner charts, together with subjective preference, was assessed in 107 participants with visual impairment using three stand magnifiers with light-emitting diode illumination colour temperatures of 2,700 K, 4,500 K and 6,000 K. The results were compared with distance visual acuity, prescribed magnification, age and the primary cause of visual impairment. Results: Reading speed, critical print size and near visual acuity were unaffected by illumination colour temperature (p > 0.05). Reading metrics decreased with worsening acuity and higher levels of prescribed magnification, but acuity was unaffected by age. Each colour temperature was preferred and disliked by a similar number of patients, and preference was unrelated to distance visual acuity, prescribed magnification and age (p > 0.05). Patients had better near acuity (p = 0.002), critical print size (p = 0.034) and maximum reading speed (p < 0.001) with their preferred rather than their least-liked colour temperature illumination, and the improvement from distance to near acuity was also greater (p = 0.004). Conclusion: A range of colour temperature illuminations should be offered to all visually impaired individuals prescribed an optical magnifier for near tasks, to optimise subjective and objective benefits.

Relevance: 30.00%

Abstract:

To what extent does competitive entry create a structural change in key marketing metrics? New players may just be a temporal nuisance to incumbents, but could also fundamentally change the latter's performance evolution, or induce them to permanently alter their spending levels and/or pricing decisions. Similarly, the addition of a new marketing channel could permanently shift shopping preferences, or could just create a short-lived migration from existing channels. The steady-state impact of a given entry or channel addition on various marketing metrics is intrinsically an empirical issue for which we need an appropriate testing procedure. In this study, we introduce a testing sequence that allows for the endogenous determination of potential change (break) locations, thereby accounting for lead and/or lagged effects of the introduction of interest. By not restricting the number of potential breaks to one (as is commonly done in the marketing literature), we quantify the impact of the new entrant(s) while controlling for other events that may have taken place in the market. We illustrate the methodology in the context of the Dutch television advertising market, which was characterized by the entry of several late movers. We find that the steady-state growth of private incumbents' revenues was slowed by the quasi-simultaneous entry of three new players. Contrary to industry observers' expectations, such a slowdown was not experienced in the related markets of print and radio advertising.
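
The flavour of the testing procedure, locating a break endogenously rather than imposing the entry date, can be conveyed with a single-break sup-F (Quandt-type) scan for a mean shift in a metric's time series. The study's actual sequence allows multiple breaks and richer specifications, so this is only a simplified sketch; note also that the sup-F statistic requires non-standard critical values (e.g., Andrews, 1993) rather than the usual F tables.

```python
import numpy as np

def sup_f_break(y, trim=0.15):
    """Scan all admissible break dates for a shift in the mean of series y and
    return (break_index, sup_F). Single-break sketch only."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    lo, hi = int(trim * n), int((1 - trim) * n)
    rss_restricted = np.sum((y - y.mean()) ** 2)       # no-break model: one mean
    best_t, best_f = None, -np.inf
    for t in range(lo, hi):
        rss_split = (np.sum((y[:t] - y[:t].mean()) ** 2)
                     + np.sum((y[t:] - y[t:].mean()) ** 2))
        # Chow-type F statistic for one extra parameter (the post-break mean)
        f_stat = (rss_restricted - rss_split) / (rss_split / (n - 2))
        if f_stat > best_f:
            best_t, best_f = t, f_stat
    return best_t, best_f
```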

Relevance: 30.00%

Abstract:

Since the introduction of the Net Promoter concept there has been a lively and ongoing debate among academics and practitioners about how well the Net Promoter Score (NPS) predicts company growth rates compared with other customer metrics, such as customer satisfaction. We report results from a study using data from customers and firms in the Netherlands on the relationship between different satisfaction and loyalty metrics, as well as the NPS, and sales revenue growth, gross margins and net operating cash flows. We find that all metrics perform equally well in predicting current gross margins and current sales revenue growth, and equally poorly in predicting future sales growth and gross margins as well as current and future net cash flows. The NPS is neither superior nor inferior to the other metrics. Taken together, our study suggests that the predictive capability of customer metrics such as the NPS for future company growth rates is limited. © 2013 Elsevier B.V.
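
For reference, the NPS itself is computed from 0-10 "likelihood to recommend" ratings in the standard way; the calculation below is the conventional definition, not anything specific to this study.

```python
def net_promoter_score(ratings):
    """Net Promoter Score from 0-10 'likelihood to recommend' ratings:
    percentage of promoters (9-10) minus percentage of detractors (0-6)."""
    n = len(ratings)
    promoters = sum(r >= 9 for r in ratings)
    detractors = sum(r <= 6 for r in ratings)
    return 100.0 * (promoters - detractors) / n

print(net_promoter_score([10, 9, 8, 7, 6, 10, 3, 9]))  # 25.0 for this toy sample
```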

Relevance: 30.00%

Abstract:

Purpose ‐ This study provides empirical evidence for the contextuality of marketing performance assessment (MPA) systems. It introduces a taxonomical classification of MPA profiles based on the relative emphasis placed on different dimensions of marketing performance in different companies and business contexts. Design/methodology/approach ‐ The data used in this study (n=1,157) were collected with a web-based questionnaire targeted at top managers in Finnish companies. Two multivariate data analysis techniques were used to address the research questions. First, the dimensions of marketing performance underlying current MPA systems were identified through factor analysis. Second, a taxonomy of marketing performance measurement profiles was created by clustering respondents based on the relative emphasis placed on these dimensions and characterizing the clusters vis-à-vis contextual factors. Findings ‐ The study identifies nine broad dimensions of marketing performance that underlie the MPA systems in use and five MPA profiles typical of companies of varying sizes in varying industries, market life cycle stages, and competitive positions, associated with varying levels of market orientation and business performance. The findings support the previously conceptual notion of contextuality in MPA and provide empirical evidence for the factors that affect MPA systems in practice. Originality/value ‐ The paper presents the first field study of current MPA systems focusing on the combinations of metrics in use. The findings provide empirical support for the contextuality of MPA and form a classification of existing contextual systems suitable for benchmarking purposes. Limited evidence for performance differences between MPA profiles is also provided.
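
The two-step analysis described, extracting the underlying performance dimensions and then clustering respondents on their relative emphasis, can be sketched with scikit-learn. The data below are random placeholders shaped like the abstract's sample; scikit-learn's maximum-likelihood factor analysis stands in for whatever extraction and rotation the study actually used.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# X: respondents x questionnaire items on metric emphasis (placeholder data)
rng = np.random.default_rng(0)
X = rng.normal(size=(1157, 40))

# Step 1: extract underlying dimensions of marketing performance
scores = FactorAnalysis(n_components=9, random_state=0).fit_transform(
    StandardScaler().fit_transform(X))

# Step 2: cluster respondents on their relative emphasis across dimensions
profiles = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(scores)
```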

Relevance: 30.00%

Abstract:

This thesis is a study of the performance management of Complex Event Processing (CEP) systems. CEP systems have characteristics distinct from other well-studied computer systems, such as batch and online transaction processing systems and database-centric applications, and these characteristics introduce new challenges and opportunities for their performance management. The methodologies used to benchmark CEP systems in many performance studies focus on scaling load injection but do not consider the impact of the functional capabilities of CEP systems. This thesis proposes an approach that evaluates the performance of CEP engines' functional behaviours on events and develops a benchmark platform for CEP systems, CEPBen. The CEPBen benchmark platform is developed to explore the fundamental functional performance of event processing systems: filtering, transformation and event pattern detection. It is also designed to provide a flexible environment for exploring new metrics and influential factors for CEP systems and for evaluating their performance. Studies of such factors and new metrics are carried out with the CEPBen benchmark platform on Esper. Different measurement points for response time in the performance management of CEP systems are discussed, and the response time of targeted events is proposed as a metric for quality-of-service evaluation, to be used in combination with the traditional response time of CEP systems. Maximum query load is proposed as a capacity indicator with respect to query complexity, and the number of live objects in memory as a performance indicator with respect to memory management. Query depth is studied as a factor that influences CEP system performance.

Relevance: 30.00%

Abstract:

Premium intraocular lenses (IOLs), such as toric IOLs, multifocal IOLs (MIOLs) and accommodating IOLs (AIOLs), can provide better refractive and visual outcomes than standard monofocal designs, leading to greater post-operative spectacle independence. The principal theme of this thesis is the development of new assessment techniques that can help to improve future premium IOL design. IOLs designed to correct astigmatism form the focus of the first part of the thesis. A novel toric IOL design was devised to decrease the effect of toric rotation on patient visual acuity, but was found to have neither a beneficial nor a detrimental impact on visual acuity retention. IOL tilt, like rotation, may curtail visual performance; however, current IOL tilt measurement techniques require specialist equipment not readily available in most ophthalmological clinics. Thus a new method that applies Pythagoras's theorem to digital images of IOL optic symmetricality in order to calculate tilt was proposed and shown to be both accurate and highly repeatable. A literature review revealed little information on the relationship between IOL tilt, decentration and rotation, so this was examined. A poor correlation between these factors was found, indicating that they occur independently of each other. Next, presbyopia-correcting IOLs were investigated. The light distribution of different MIOLs and an AIOL was assessed using perimetry to establish whether this could be used to inform optimal IOL design. The anticipated differences in threshold sensitivity between IOLs were not, however, found, and perimetry was concluded to be ineffective in mapping the retinal projection of blur. The observed difference between subjective and objective measures of accommodation, arising from the influence of pseudoaccommodative factors, was explored next to establish how much additional objective power would be required to restore the eye's focus with AIOLs. Blur tolerance was found to be the key contributor to the ocular depth of focus, with an approximate dioptric influence of 0.60 D. Our understanding of MIOLs may be limited by the need for subjective defocus curves, which are lengthy and do not permit important additional measures to be undertaken. The use of aberrometry to provide faster, objective defocus curves was therefore examined. Although subjective and objective measures related well, the peaks of the MIOL defocus curve profile were not evident with objective prediction of acuity, indicating a need for further refinement of visual quality metrics based on ocular aberrations. The experiments detailed in the thesis evaluate methods to improve visual performance with toric IOLs. They also investigate new techniques to allow more rapid post-operative assessment of premium IOLs, which could provide greater insight into several aspects of visual quality and thereby help to optimise future IOL design and ultimately enhance patient satisfaction.

Relevance: 30.00%

Abstract:

Video streaming over Transmission Control Protocol (TCP) networks has become a popular and highly demanded service, but its quality assessment in both objective and subjective terms has not been properly addressed. In this paper, a full analytic model of a no-reference objective metric for video quality assessment, namely pause intensity (PI), is presented based on statistical analysis. The model characterizes the video playout buffer behavior in connection with the network performance (throughput) and the video playout rate. This allows for instant quality measurement and control without requiring a reference video. PI specifically addresses the need to assess quality in terms of the continuity of playout of TCP streaming video, which cannot be properly measured by other objective metrics such as peak signal-to-noise ratio, structural similarity, and buffer underrun or pause frequency. The performance of the analytical model is rigorously verified by simulation results and subjective tests using a range of video clips. It is demonstrated that PI is closely correlated with viewers' opinion scores regardless of the vastly different composition of the individual elements, such as pause duration and pause frequency, that jointly constitute this new quality metric. It is also shown that the correlation performance of PI is consistent and content independent. © 2013 IEEE.
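
The playout-buffer behaviour that the model characterizes can be illustrated with a simple buffer simulation driven by a throughput trace. The "pause ratio" returned below is only the crudest combination of pause duration and frequency, standing in for the paper's actual PI formulation, and the start-up threshold is an assumed parameter.

```python
def playout_pauses(throughput_kbps, playout_kbps, startup_kbits=2000.0, dt=1.0):
    """Simulate a TCP video playout buffer and collect pause statistics.

    throughput_kbps : per-interval network throughput trace (kbit/s)
    playout_kbps    : constant video playout (encoding) rate (kbit/s)
    startup_kbits   : data buffered before playout (re)starts (assumed threshold)
    dt              : interval length in seconds
    """
    buffer_kbits, playing, started = 0.0, False, False
    pauses, pause_time = 0, 0.0
    for thr in throughput_kbps:
        buffer_kbits += thr * dt                       # data arriving from the network
        if playing:
            if buffer_kbits < playout_kbps * dt:       # buffer underrun: playback stalls
                playing = False
                pauses += 1
            else:
                buffer_kbits -= playout_kbps * dt      # data consumed by the decoder
        if not playing:
            if started:
                pause_time += dt                       # stalled mid-playout (a pause)
            if buffer_kbits >= startup_kbits:          # rebuffered enough to resume
                playing, started = True, True
    total_time = len(throughput_kbps) * dt
    return {"pauses": pauses, "pause_time": pause_time,
            "pause_ratio": pause_time / total_time}    # stand-in, not the paper's PI
```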

Relevance: 30.00%

Abstract:

X-ray computed tomography (CT) is a non-invasive medical imaging technique that generates cross-sectional images by acquiring attenuation-based projection measurements at multiple angles. Since its first introduction in the 1970s, substantial technical improvements have led to the expanding use of CT in clinical examinations. CT has become an indispensable imaging modality for the diagnosis of a wide array of diseases in both pediatric and adult populations [1, 2]. Currently, approximately 272 million CT examinations are performed annually worldwide, with nearly 85 million of these in the United States alone [3]. Although this trend has decelerated in recent years, CT usage is still expected to increase mainly due to advanced technologies such as multi-energy [4], photon counting [5], and cone-beam CT [6].

Despite the significant clinical benefits, concerns have been raised regarding the population-based radiation dose associated with CT examinations [7]. From 1980 to 2006, the effective dose from medical diagnostic procedures rose six-fold, with CT contributing to almost half of the total dose from medical exposure [8]. For each patient, the risk associated with a single CT examination is likely to be minimal. However, the relatively large population-based radiation level has led to enormous efforts among the community to manage and optimize the CT dose.

As promoted by the international campaigns Image Gently and Image Wisely, exposure to CT radiation should be appropriate and safe [9, 10]. It is thus the community's responsibility to optimize the radiation dose used in CT examinations. The key to dose optimization is to determine the minimum amount of radiation that achieves the targeted image quality [11]. Based on this principle, dose optimization would benefit significantly from effective metrics that characterize the radiation dose and image quality of a CT exam. Moreover, if accurate predictions of radiation dose and image quality were possible before the exam, it would be feasible to personalize the exam by adjusting the scanning parameters to achieve a desired level of image quality. The purpose of this thesis is to design and validate models that prospectively quantify patient-specific radiation dose and task-based image quality. The dual aim of the study is to implement these theoretical models in clinical practice by developing an organ-based dose monitoring system and image-based noise addition software for protocol optimization.

More specifically, Chapter 3 aims to develop an organ dose-prediction method for CT examinations of the body under constant tube current condition. The study effectively modeled the anatomical diversity and complexity using a large number of patient models with representative age, size, and gender distribution. The dependence of organ dose coefficients on patient size and scanner models was further evaluated. Distinct from prior work, these studies use the largest number of patient models to date with representative age, weight percentile, and body mass index (BMI) range.

With effective quantification of organ dose under constant tube current, Chapter 4 aims to extend the organ dose prediction system to tube current modulated (TCM) CT examinations. The prediction, applied to chest and abdominopelvic exams, was achieved by combining a convolution-based estimation technique that quantifies the radiation field, a TCM scheme that emulates modulation profiles from major CT vendors, and a library of computational phantoms with representative sizes, ages, and genders. The prospective quantification model is validated by comparing the predicted organ dose with the dose estimated from Monte Carlo simulations in which the TCM function is explicitly modeled.

Chapter 5 aims to implement the organ dose-estimation framework in clinical practice by developing an organ dose-monitoring program based on commercial software (Dose Watch, GE Healthcare, Waukesha, WI). The first phase of the study focused on body CT examinations, so the patient's major body landmark information was extracted from the scout image in order to match each clinical patient against a computational phantom in the library. The organ dose coefficients were estimated based on the CT protocol and patient size, as reported in Chapter 3. The exam CTDIvol, DLP, and TCM profiles were extracted and used to quantify the radiation field using the convolution technique proposed in Chapter 4.
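
In such a monitoring program, the per-exam organ dose estimate typically reduces to scaling a size- and protocol-specific, CTDIvol-normalized organ dose coefficient by the exam's reported CTDIvol. The coefficients, organs and size grid below are placeholders for illustration, not the values derived in Chapter 3.

```python
import numpy as np

# Placeholder CTDIvol-normalized organ dose coefficients (mGy per mGy of CTDIvol)
# on a coarse patient-size grid (effective diameter in cm). Real values would come
# from the computational phantom library described in Chapter 3.
ORGAN_COEFFS = {
    "liver":  {20: 1.4, 30: 1.1, 40: 0.8},
    "kidney": {20: 1.5, 30: 1.2, 40: 0.9},
}

def estimate_organ_dose(organ, effective_diameter_cm, ctdivol_mgy):
    """Organ dose (mGy) = h(organ, patient size) * CTDIvol, with the coefficient h
    interpolated over patient size -- a simplified sketch of the monitoring step."""
    sizes = sorted(ORGAN_COEFFS[organ])
    coeffs = [ORGAN_COEFFS[organ][s] for s in sizes]
    h = np.interp(effective_diameter_cm, sizes, coeffs)
    return h * ctdivol_mgy

print(estimate_organ_dose("liver", 27.0, ctdivol_mgy=12.0))   # ~14.3 mGy with these placeholders
```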

With effective methods to predict and monitor organ dose in place, Chapter 6 aims to develop and validate improved measurement techniques for image quality assessment. Chapter 6 outlines the method developed to assess and predict quantum noise in clinical body CT images. Compared with previous phantom-based studies, this work accurately assessed the quantum noise in clinical images and further validated the correspondence between phantom-based measurements and the expected clinical image quality as a function of patient size and scanner attributes.

Chapter 7 aims to develop a practical strategy to generate hybrid CT images and assess the impact of dose reduction on diagnostic confidence for the diagnosis of acute pancreatitis. The general strategy is (1) to simulate synthetic CT images at multiple reduced-dose levels from clinical datasets using an image-based noise addition technique; (2) to develop quantitative and observer-based methods to validate the realism of simulated low-dose images; (3) to perform multi-reader observer studies on the low-dose image series to assess the impact of dose reduction on the diagnostic confidence for multiple diagnostic tasks; and (4) to determine the dose operating point for clinical CT examinations based on the minimum diagnostic performance to achieve protocol optimization.
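
A common simplified form of image-based noise addition follows from quantum-noise variance scaling inversely with dose. The sketch below shows only that magnitude calculation and ignores the spatially correlated noise texture a clinical simulation tool would have to reproduce, so it is an illustration under stated assumptions rather than the method developed in the thesis.

```python
import numpy as np

def simulate_reduced_dose(image_hu, sigma_full, dose_fraction, rng=None):
    """Add zero-mean Gaussian noise to a full-dose CT image so that its total
    noise level matches a reduced-dose acquisition.

    image_hu      : 2D/3D CT image (HU) acquired at full dose
    sigma_full    : quantum-noise standard deviation measured at full dose (HU)
    dose_fraction : simulated dose as a fraction of the full dose, e.g. 0.5
    """
    if rng is None:
        rng = np.random.default_rng()
    # Quantum-noise variance scales roughly as 1/dose, so the added noise needs
    # variance sigma_full^2 * (1/f - 1) for the total to reach sigma_full / sqrt(f).
    sigma_add = sigma_full * np.sqrt(1.0 / dose_fraction - 1.0)
    return image_hu + rng.normal(0.0, sigma_add, size=image_hu.shape)
```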

Chapter 8 concludes the thesis with a summary of accomplished work and a discussion about future research.