909 results for performance measurement
Abstract:
This paper reports a meta-analysis examining the relationship between leader-member exchange (LMX) relationship quality and a multidimensional model of work performance (task, citizenship, and counterproductive performance). The results show positive relationships between LMX and task performance (146 samples, ρ = .30) and citizenship performance (97 samples, ρ = .34), and a negative relationship with counterproductive performance (19 samples, ρ = -.24). Of note, there was a positive relationship between LMX and objective task performance (20 samples, ρ = .24). Trust, motivation, empowerment, and job satisfaction mediated the relationship between LMX and both task and citizenship performance, with trust in the leader having the largest effect. There was no difference due to LMX measurement instrument (e.g., LMX7, LMX-MDM). Overall, the relationship between LMX and performance was weaker when (a) measures were obtained from a different source or method and (b) LMX was measured by the follower rather than the leader (with common source- and method-biased effects stronger for leader-rated LMX quality). Finally, there was evidence for LMX leading to task performance but not for reverse or reciprocal directions of effect.
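For readers unfamiliar with how meta-analytic estimates like the ρ values above are produced, the following is a minimal sketch of a sample-size-weighted mean correlation and a Hunter-Schmidt style partition of observed versus sampling-error variance. The study values are illustrative placeholders, not data from the paper, and corrections for measurement artifacts (which distinguish ρ from the observed mean r) are omitted.

```python
# Minimal sketch: sample-size-weighted mean observed correlation, the
# starting point of a Hunter-Schmidt style meta-analysis. The (n, r)
# pairs below are illustrative placeholders, not data from the paper.
studies = [  # (sample size n, observed correlation r)
    (120, 0.28),
    (310, 0.33),
    (85,  0.22),
]

n_total = sum(n for n, _ in studies)
r_bar = sum(n * r for n, r in studies) / n_total                  # weighted mean r
var_r = sum(n * (r - r_bar) ** 2 for n, r in studies) / n_total   # observed variance
var_e = (1 - r_bar ** 2) ** 2 * len(studies) / n_total            # approx. sampling-error variance
var_rho = max(var_r - var_e, 0.0)                                 # residual (true) variance

print(f"weighted mean r = {r_bar:.3f}, residual SD = {var_rho ** 0.5:.3f}")
```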
Abstract:
Liquid-level sensing technologies have attracted great interest, because such measurements are essential to industrial applications such as fuel storage, flood warning, and the biochemical industry. Traditional liquid level sensors are based on electromechanical techniques; however, they suffer from intrinsic safety concerns in explosive environments. In recent years, given that optical fiber sensors have many well-established advantages such as high accuracy, cost-effectiveness, compact size, and ease of multiplexing, several optical fiber liquid level sensors based on different operating principles have been investigated, such as side-polishing the cladding and a portion of the core, using a spiral side-emitting optical fiber, or using silica fiber gratings. The present work proposes a novel and highly sensitive liquid level sensor making use of polymer optical fiber Bragg gratings (POFBGs). The key elements of the system are a set of POFBGs embedded in silicone rubber diaphragms. This is a new development building on the idea of determining liquid level by measuring the pressure at the bottom of a liquid container; however, it has a number of critical advantages. The system features several FBG-based pressure sensors, as described above, placed at different depths. Any sensor above the surface of the liquid will read the same ambient pressure. Sensors below the surface of the liquid will read pressures that increase linearly with depth. The position of the liquid surface can therefore be approximately identified as lying between the first sensor to read an above-ambient pressure and the next higher sensor. This level of precision would not in general be sufficient for most liquid level monitoring applications; however, a much more precise determination of liquid level can be made by applying linear regression to the pressure readings from the sub-surface sensors. There are numerous advantages to this multi-sensor approach. First, linear regression over multiple sensors is inherently more accurate than using a single pressure reading to estimate depth. Second, common-mode temperature-induced wavelength shifts in the individual sensors are automatically compensated. Third, temperature-induced changes in the sensor pressure sensitivity are also compensated. Fourth, the approach provides the possibility to detect and compensate for malfunctioning sensors. Finally, the system is immune to changes in the density of the monitored fluid and even to changes in the effective force of gravity, as might be encountered in an aerospace application. The performance of an individual sensor was characterized and displays a sensitivity of 54 pm/cm, enhanced by more than a factor of 2 compared with a sensor head configuration based on a silica FBG published in the literature, a result of the much lower elastic modulus of POF. Furthermore, the temperature/humidity behavior and measurement resolution were also studied in detail. The proposed configuration also displays a highly linear response, high resolution and good repeatability. The results suggest the new configuration can be a useful tool in many different applications, such as aircraft fuel monitoring, and biochemical and environmental sensing, where accuracy and stability are fundamental. © 2015 Society of Photo-Optical Instrumentation Engineers (SPIE).
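To make the regression idea concrete, here is a minimal sketch of the surface-location logic described above, with hypothetical sensor heights and pressure readings standing in for the wavelength-derived measurements of the actual POFBG system.

```python
import numpy as np

# Sketch of the multi-sensor level estimation described above, under assumed
# illustrative numbers: sensors mounted at known heights read pressure;
# sub-surface readings grow linearly with depth, so a linear fit extrapolated
# back to ambient pressure locates the surface.
sensor_heights = np.array([0.0, 0.2, 0.4, 0.6, 0.8])             # m above container bottom
readings_kpa   = np.array([106.9, 104.9, 103.0, 101.3, 101.3])   # hypothetical readings

ambient = readings_kpa.min()                # sensors above the surface read ambient
submerged = readings_kpa > ambient + 0.05   # small tolerance for sensor noise

# Fit pressure vs. height for submerged sensors: p(h) = ambient + rho*g*(level - h)
slope, intercept = np.polyfit(sensor_heights[submerged], readings_kpa[submerged], 1)
level = (ambient - intercept) / slope       # height where the fit crosses ambient

print(f"estimated liquid level: {level:.2f} m")
```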
Abstract:
External metrology systems are increasingly being integrated with traditional industrial articulated robots, especially in the aerospace industry, to improve their absolute accuracy for precision operations such as drilling, machining and jigless assembly. While most current metrology-assisted robot control systems are limited in their position update rate, such that the robot has to be stopped in order to receive a metrology coordinate update, some recent efforts are directed toward controlling robots using real-time metrology data. The indoor GPS (iGPS) is one of the metrology systems that may be used to provide real-time 6DOF data to a robot controller. Although there is a noteworthy literature dealing with the evaluation of iGPS performance, there is a lack of literature on how well the iGPS performs under dynamic conditions. This paper presents an experimental evaluation of the dynamic measurement performance of the iGPS, tracking the trajectories of an industrial robot. The same experiment is also repeated using a laser tracker. Besides the experimental results presented, this paper also proposes a novel method for dynamic repeatability comparisons of tracking instruments. © 2011 Springer-Verlag London Limited.
Abstract:
Premium intraocular lenses (IOLs) such as toric IOLs, multifocal IOLs (MIOLs) and accommodating IOLs (AIOLs) can provide better refractive and visual outcomes than standard monofocal designs, leading to greater levels of post-operative spectacle independence. The principal theme of this thesis is the development of new assessment techniques that can help to improve future premium IOL design. IOLs designed to correct astigmatism form the focus of the first part of the thesis. A novel toric IOL design was devised to decrease the effect of toric rotation on patient visual acuity, but was found to have neither a beneficial nor a detrimental impact on visual acuity retention. IOL tilt, like rotation, may curtail visual performance; however, current IOL tilt measurement techniques require specialist equipment not readily available in most ophthalmological clinics. Thus a new approach was proposed that applies Pythagoras's theorem to digital images of IOL optic symmetry in order to calculate tilt, and it was shown to be both accurate and highly repeatable. A literature review revealed little information on the relationship between IOL tilt, decentration and rotation, so this was examined. A poor correlation between these factors was found, indicating that they occur independently of each other. Next, presbyopia-correcting IOLs were investigated. The light distribution of different MIOLs and an AIOL was assessed using perimetry, to establish whether this could be used to inform optimal IOL design. Anticipated differences in threshold sensitivity between IOLs were not found, however, so perimetry was concluded to be ineffective in mapping the retinal projection of blur. The observed difference between subjective and objective measures of accommodation, arising from the influence of pseudoaccommodative factors, was explored next to establish how much additional objective power would be required to restore the eye's focus with AIOLs. Blur tolerance was found to be the key contributor to the ocular depth of focus, with an approximate dioptric influence of 0.60 D. Our understanding of MIOLs may be limited by the need for subjective defocus curves, which are lengthy and do not permit important additional measures to be undertaken. The use of aberrometry to provide faster objective defocus curves was therefore examined. Although subjective and objective measures related well, the peaks of the MIOL defocus curve profile were not evident with objective prediction of acuity, indicating a need for further refinement of visual quality metrics based on ocular aberrations. The experiments detailed in the thesis evaluate methods to improve visual performance with toric IOLs. They also investigate new techniques to allow more rapid post-operative assessment of premium IOLs, which could provide greater insight into several aspects of visual quality and thereby optimise future IOL design and ultimately enhance patient satisfaction.
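The abstract does not spell out the image-based tilt formula, but one plausible reading of applying Pythagoras's theorem to a foreshortened optic image is sketched below; the diameter and measured width are hypothetical values, and the actual thesis method may differ.

```python
import math

# Hypothetical sketch of an image-based tilt estimate: a tilted circular IOL
# optic of true diameter d projects to a foreshortened width w in the image.
# Treating w and the out-of-plane extent as the legs of a right triangle with
# hypotenuse d (Pythagoras), the tilt angle follows directly.
d = 6.0   # assumed true optic diameter, mm
w = 5.8   # assumed measured (foreshortened) width in the digital image, mm

out_of_plane = math.sqrt(d ** 2 - w ** 2)   # Pythagoras: d^2 = w^2 + x^2
tilt_deg = math.degrees(math.asin(out_of_plane / d))

print(f"estimated tilt: {tilt_deg:.1f} degrees")
```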
Abstract:
Video streaming via Transmission Control Protocol (TCP) networks has become a popular and highly demanded service, but its quality assessment in both objective and subjective terms has not been properly addressed. In this paper, based on statistical analysis, a full analytic model of a no-reference objective metric, namely pause intensity (PI), for video quality assessment is presented. The model characterizes the video playout buffer behavior in connection with the network performance (throughput) and the video playout rate. This allows for instant quality measurement and control without requiring a reference video. PI specifically addresses the need to assess quality in terms of the continuity of playout of TCP streaming videos, which cannot be properly measured by other objective metrics such as peak signal-to-noise ratio, structural similarity, and buffer underrun or pause frequency. The performance of the analytical model is rigorously verified by simulation results and subjective tests using a range of video clips. It is demonstrated that PI is closely correlated with viewers' opinion scores regardless of the vastly different composition of the individual elements, such as pause duration and pause frequency, that jointly constitute this new quality metric. It is also shown that the correlation performance of PI is consistent and content independent. © 2013 IEEE.
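The exact PI formula is developed in the paper; as background, the following sketch simulates the playout-buffer behavior the model characterizes and reports the raw ingredients named above (pause frequency and pause duration). All parameters are illustrative assumptions.

```python
import random

# Sketch of the playout-buffer behavior that pause intensity (PI) models:
# frames arrive at a throughput-determined rate and are consumed at the
# playout rate; an underrun pauses playout until the buffer refills.
random.seed(1)
playout_rate = 1.0          # frames consumed per tick
buffer_level = 30.0         # startup pre-buffer, frames
paused, pauses, pause_ticks, ticks = False, 0, 0, 2000

for _ in range(ticks):
    buffer_level += random.uniform(0.5, 1.4)   # throughput: frames arriving per tick
    if paused:
        pause_ticks += 1
        if buffer_level >= 30:                 # rebuffer threshold before resuming
            paused = False
    else:
        if buffer_level >= playout_rate:
            buffer_level -= playout_rate
        else:                                  # buffer underrun -> pause
            paused = True
            pauses += 1

print(f"pauses: {pauses}, fraction of time paused: {pause_ticks / ticks:.3f}")
```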
Abstract:
Based on a robust analysis of the existing literature on performance appraisal (PA), this paper makes a case for an integrated framework of the effectiveness of performance appraisal (EPA). To achieve this, it draws on the expanded view of the measurement criteria of EPA, i.e., purposefulness, fairness and accuracy, and identifies their relationships with ratee reactions. The analysis reveals that the expanded view of purposefulness includes more theoretical anchors for the purposes of PA and relates to various aspects of human resource functions, e.g., feedback and goal orientation. The expansion of the PA fairness criterion suggests certain newly established nomological networks that were ignored in the past, e.g., the relationship between distributive fairness and organization-referenced outcomes. Further, refinements in PA accuracy reveal a more comprehensive categorization of rating biases. Coherence among the measurement criteria has resulted in a ratee reactions-based integrated framework, which should be useful for both researchers and practitioners.
Abstract:
This chapter provides information on the use of the Performance Improvement Management software (PIM-DEA). This advanced DEA software enables users to make the best possible analysis of their data, using the latest theoretical developments in Data Envelopment Analysis (DEA). PIM-DEA gives full capacity to assess efficiency and productivity, set targets, identify benchmarks, and much more, allowing users to truly manage the performance of organizational units. PIM-DEA is easy to use and powerful, has an extensive range of the most up-to-date DEA models, and can handle large sets of data.
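As background on what DEA software of this kind computes, below is a minimal sketch of the classic input-oriented CCR efficiency model solved as a linear program. The data and the use of scipy are illustrative assumptions, not PIM-DEA's implementation.

```python
import numpy as np
from scipy.optimize import linprog

# Minimal input-oriented CCR DEA model of the kind DEA software automates.
# For each decision-making unit (DMU) o we solve:
#   min theta  s.t.  sum_j lam_j * x_j <= theta * x_o,
#                    sum_j lam_j * y_j >= y_o,  lam >= 0.
# theta = 1 means the DMU is efficient. Data below are illustrative.
X = np.array([[2.0, 3.0], [4.0, 1.0], [4.0, 4.0]])   # inputs,  one row per DMU
Y = np.array([[1.0],      [1.0],      [1.0]])        # outputs, one row per DMU
n = X.shape[0]

for o in range(n):
    c = np.r_[1.0, np.zeros(n)]                       # decision vector z = [theta, lam]
    A_in = np.hstack([-X[o:o + 1].T, X.T])            # inputs:  X^T lam - theta*x_o <= 0
    A_out = np.hstack([np.zeros((Y.shape[1], 1)), -Y.T])  # outputs: -Y^T lam <= -y_o
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(X.shape[1]), -Y[o]],
                  bounds=[(None, None)] + [(0, None)] * n)
    print(f"DMU {o}: efficiency = {res.fun:.3f}")
```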
Abstract:
Creative sourcing strategies, designed to extract more value from the supply base, have become a competitive, strategic differentiator. To fuel creativity, companies install sourcing teams that can capitalize on the specialized knowledge and expertise of their employees across the company. This article introduces the concept of a team creativity climate (TCC), team members' shared perceptions of their joint policies, procedures, and practices with respect to developing creative sourcing strategies, as a means to address the unique challenges associated with a collective, cross-functional approach to developing value-enhancing sourcing strategies. Using a systematic scale development process that validates the proposed concept, the authors confirm its ability to predict sourcing team performance and suggest some research avenues extending from this concept.
Abstract:
The starting point of the article is the fact that the fundamental task of accounting, and of financial reporting in particular, is to provide information useful for decision making to the stakeholders of an enterprise. For the data created in the accounting representation of economic phenomena to be used as information, the users of financial statements must understand the underlying assumptions of that representation. The first part of the article, setting out from the general concept of measurement, introduces the concepts of measurement and valuation as applied in accounting, describing their interconnections and basic characteristics. Following this, based on the international (IFRS) and Hungarian regulations, the paper sketches the current valuation framework used in financial reporting. The third part of the article analyses the theoretical background of the effective regulation, while also covering the connection between accounting measurement and financial performance (income), and finally it presents and evaluates the main elements of criticism concerning measurement in accounting.
Abstract:
In today's world we are consuming ever more resources whose effects our home, the Earth, is simply no longer able to restore. Alongside many other phenomena, the globalization of the economy, intensifying competition, and the continued spread of the consumer society, and consequently the growing intensity of logistics processes, play a key role in this. The criticism directed at logistics should motivate company professionals to change this, and measuring the carbon footprint of current operations is an essential first step: only an assessment of the present state can serve as a basis for improvement. The aim of the authors' study is to present a practical application of carbon footprint calculation. In the form of a case study, they present the methodology used in the carbon footprint calculation of the Hungarian subsidiary of a large multinational FMCG company. The calculations focus on the company's distribution logistics processes, examining in particular the carbon dioxide emissions of road transport and warehousing, and are built on the most recent conversion factors developed for the Hungarian energy mix. A complex set of key performance indicators usable to capture the key characteristics of the present situation is presented, together with a detailed description of the methodology used to calculate them. The authors hope that such a detailed case study can serve as a model and guide for other companies, encouraging more firms to begin the systematic, scientifically grounded measurement of their carbon dioxide emissions and to initiate sustainable logistics programs.
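As a hedged illustration of the underlying arithmetic (activity data multiplied by a conversion factor), consider the sketch below; the volumes and factors are placeholders, not the Hungarian energy-mix values used in the case study.

```python
# Illustrative sketch of the activity-data x conversion-factor arithmetic used
# in carbon footprint calculations like the one in the case study. The factors
# below are assumed placeholders, not the actual Hungarian energy-mix values.
road_km        = 1_200_000   # annual truck-kilometres in distribution (assumed)
kg_co2_per_km  = 0.85        # assumed emission factor for a loaded truck
warehouse_kwh  = 950_000     # annual warehouse electricity use (assumed)
kg_co2_per_kwh = 0.28        # assumed grid-electricity factor

transport_t = road_km * kg_co2_per_km / 1000        # tonnes CO2
warehouse_t = warehouse_kwh * kg_co2_per_kwh / 1000  # tonnes CO2

print(f"road transport: {transport_t:,.0f} t CO2")
print(f"warehousing:    {warehouse_t:,.0f} t CO2")
print(f"total:          {transport_t + warehouse_t:,.0f} t CO2")
```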
Abstract:
The present dissertation consists of two studies that combine the personnel selection, safety performance, and job performance literatures to answer an important question: are safe workers better workers? Study 1 tested a predictive model of safety performance to examine personality characteristics (conscientiousness and agreeableness) and two novel behavioral constructs (safety orientation and safety judgment) as predictors of safety performance in a sample of forklift loaders/operators (N = 307). Analyses centered on investigating safety orientation as a proximal predictor and determinant of safety performance. Study 2 replicated Study 1 and explored the relationship between safety performance and job performance by testing an integrative model in a sample of machine operators and construction crewmembers (N = 323). Both studies found conscientiousness, agreeableness, and safety orientation to be good predictors of safety performance. While both personality and safety orientation were positively related to safety performance, safety orientation proved to be a more proximal determinant of safety performance. Across studies, results surrounding safety judgment as a predictor of safety performance were inconclusive, suggesting possible issues with the measurement of the construct. Study 2 found a strong relationship between safety performance and job performance. In addition, safety performance served as a mediator between the predictors (conscientiousness, agreeableness and safety orientation) and job performance. Together these findings suggest that safe workers are indeed better workers, challenging previous viewpoints to the contrary. Further, the results implicate the viability of personnel selection as a means of promoting safety in organizations.
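As background on the mediation logic tested in Study 2, the following is a minimal product-of-coefficients sketch on simulated data; the dissertation's actual analyses were more elaborate, and all variable names and effect sizes here are assumptions.

```python
import numpy as np

# Sketch of product-of-coefficients mediation: does safety performance (M)
# carry the effect of a predictor (X) on job performance (Y)? Data simulated.
rng = np.random.default_rng(0)
n = 300
conscientiousness = rng.normal(size=n)
safety_perf = 0.5 * conscientiousness + rng.normal(size=n)          # a path
job_perf = 0.6 * safety_perf + 0.1 * conscientiousness + rng.normal(size=n)

def ols(y, predictors):
    """Return OLS coefficients [intercept, slopes...] via least squares."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

a = ols(safety_perf, [conscientiousness])[1]                 # X -> M
b = ols(job_perf, [conscientiousness, safety_perf])[2]       # M -> Y given X
c_prime = ols(job_perf, [conscientiousness, safety_perf])[1] # direct effect

print(f"indirect effect a*b = {a * b:.3f}, direct effect c' = {c_prime:.3f}")
```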
Abstract:
The gasotransmitter hydrogen sulfide (H2S) is known as an important regulator in several physiological and pathological responses. Among the challenges facing the field is the accurate and reliable measurement of hydrogen sulfide bioavailability. We have reported an approach to discretely measure sulfide and sulfide pools using the monobromobimane (MBB) method coupled with reversed-phase high-performance liquid chromatography (RP-HPLC). The method involves the derivatization of sulfide with excess MBB under precise reaction conditions at room temperature to form sulfide dibimane (SDB). The resultant fluorescent SDB is analyzed by RP-HPLC using fluorescence detection, with a limit of detection for SDB of 2 nM. Care must be taken to avoid conditions that may confound H2S measurement with this method. Overall, RP-HPLC with fluorescence detection of SDB is a useful and powerful tool to measure biological sulfide levels.
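Quantification by such a method ultimately rests on an external calibration curve; the sketch below shows that step with illustrative standard concentrations and peak areas, not values from the paper.

```python
import numpy as np

# Sketch of the external-calibration step typically used to quantify an
# analyte such as sulfide dibimane (SDB) by HPLC with fluorescence detection.
# Standard concentrations and peak areas are illustrative assumptions.
std_conc_nM = np.array([10, 50, 100, 500, 1000])          # SDB standards
peak_area   = np.array([8.2, 41.5, 80.9, 412.0, 830.5])   # fluorescence peak areas

slope, intercept = np.polyfit(std_conc_nM, peak_area, 1)  # linear calibration

sample_area = 265.0                                       # unknown sample
sample_conc = (sample_area - intercept) / slope

print(f"estimated sulfide (as SDB): {sample_conc:.0f} nM")
```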
Abstract:
X-ray computed tomography (CT) is a non-invasive medical imaging technique that generates cross-sectional images by acquiring attenuation-based projection measurements at multiple angles. Since its first introduction in the 1970s, substantial technical improvements have led to the expanding use of CT in clinical examinations. CT has become an indispensable imaging modality for the diagnosis of a wide array of diseases in both pediatric and adult populations [1, 2]. Currently, approximately 272 million CT examinations are performed annually worldwide, with nearly 85 million of these in the United States alone [3]. Although this trend has decelerated in recent years, CT usage is still expected to increase mainly due to advanced technologies such as multi-energy [4], photon counting [5], and cone-beam CT [6].
Despite the significant clinical benefits, concerns have been raised regarding the population-based radiation dose associated with CT examinations [7]. From 1980 to 2006, the effective dose from medical diagnostic procedures rose six-fold, with CT contributing to almost half of the total dose from medical exposure [8]. For each patient, the risk associated with a single CT examination is likely to be minimal. However, the relatively large population-based radiation level has led to enormous efforts among the community to manage and optimize the CT dose.
As promoted by the international campaigns Image Gently and Image Wisely, exposure to CT radiation should be appropriate and safe [9, 10]. It is thus the community's responsibility to optimize the radiation dose of CT examinations. The key to dose optimization is to determine the minimum amount of radiation dose that achieves the targeted image quality [11]. Based on this principle, dose optimization would significantly benefit from effective metrics to characterize radiation dose and image quality for a CT exam. Moreover, if accurate predictions of the radiation dose and image quality were possible before the initiation of the exam, it would be feasible to personalize the exam by adjusting the scanning parameters to achieve a desired level of image quality. The purpose of this thesis is to design and validate models that prospectively quantify patient-specific radiation dose and task-based image quality. The dual aim of the study is to implement the theoretical models into clinical practice by developing an organ-based dose monitoring system and an image-based noise addition software for protocol optimization.
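As a back-of-envelope illustration of this principle, quantum noise scales roughly as the inverse square root of dose, so the minimum dose meeting a noise target can be projected from a single reference scan. The numbers below are illustrative, not the thesis's validated models.

```python
# Back-of-envelope sketch of the dose/image-quality trade-off driving dose
# optimization: quantum noise sigma ~ k / sqrt(dose), so
#   dose_min = dose_ref * (sigma_ref / sigma_target)^2.
# All values are illustrative assumptions.
sigma_ref = 12.0     # HU, noise measured on a reference scan
dose_ref  = 10.0     # mGy (CTDIvol) of the reference scan
sigma_target = 15.0  # HU, maximum acceptable noise for the diagnostic task

dose_min = dose_ref * (sigma_ref / sigma_target) ** 2
print(f"projected minimum dose: {dose_min:.1f} mGy")   # 6.4 mGy
```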
More specifically, Chapter 3 aims to develop an organ dose-prediction method for CT examinations of the body under constant tube current condition. The study effectively modeled the anatomical diversity and complexity using a large number of patient models with representative age, size, and gender distribution. The dependence of organ dose coefficients on patient size and scanner models was further evaluated. Distinct from prior work, these studies use the largest number of patient models to date with representative age, weight percentile, and body mass index (BMI) range.
With effective quantification of organ dose under constant tube current condition, Chapter 4 aims to extend the organ dose prediction system to tube current modulated (TCM) CT examinations. The prediction, applied to chest and abdominopelvic exams, was achieved by combining a convolution-based estimation technique that quantifies the radiation field, a TCM scheme that emulates modulation profiles from major CT vendors, and a library of computational phantoms with representative sizes, ages, and genders. The prospective quantification model is validated by comparing the predicted organ dose with the dose estimated based on Monte Carlo simulations with TCM function explicitly modeled.
Chapter 5 aims to implement the organ dose-estimation framework in clinical practice by developing an organ dose-monitoring program based on commercial software (DoseWatch, GE Healthcare, Waukesha, WI). In the first phase of the study we focused on body CT examinations; the patient's major body landmark information was extracted from the patient scout image in order to match clinical patients against a computational phantom in the library. The organ dose coefficients were estimated based on CT protocol and patient size as reported in Chapter 3. The exam CTDIvol, DLP, and TCM profiles were extracted and used to quantify the radiation field using the convolution technique proposed in Chapter 4.
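A hedged sketch of the per-exam bookkeeping such a monitoring program performs is given below; the exponential size dependence and all coefficients are assumed placeholders, not the values derived in Chapter 3.

```python
import math

# Sketch of per-exam organ-dose bookkeeping: organ dose is approximated as a
# size-dependent coefficient times the exam CTDIvol. The exponential form
# h(d) = A * exp(-B * d) and all numbers here are assumptions for illustration.
ctdi_vol = 8.5           # mGy, reported by the scanner for this exam (assumed)
eff_diameter_cm = 28.0   # patient size matched from the scout image (assumed)

coeffs = {"liver": (2.1, 0.036), "stomach": (2.3, 0.040), "kidneys": (2.0, 0.034)}

for organ, (A, B) in coeffs.items():
    h = A * math.exp(-B * eff_diameter_cm)   # dose coefficient at this size
    print(f"{organ:8s}: {h * ctdi_vol:5.1f} mGy")
```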
With effective methods to predict and monitor organ dose, Chapter 6 aims to develop and validate improved measurement techniques for image quality assessment. It outlines the method that was developed to assess and predict quantum noise in clinical body CT images. Compared with previous phantom-based studies, this study accurately assessed the quantum noise in clinical images and further validated the correspondence between phantom-based measurements and the expected clinical image quality as a function of patient size and scanner attributes.
Chapter 7 aims to develop a practical strategy to generate hybrid CT images and assess the impact of dose reduction on diagnostic confidence for the diagnosis of acute pancreatitis. The general strategy is (1) to simulate synthetic CT images at multiple reduced-dose levels from clinical datasets using an image-based noise addition technique; (2) to develop quantitative and observer-based methods to validate the realism of simulated low-dose images; (3) to perform multi-reader observer studies on the low-dose image series to assess the impact of dose reduction on the diagnostic confidence for multiple diagnostic tasks; and (4) to determine the dose operating point for clinical CT examinations based on the minimum diagnostic performance to achieve protocol optimization.
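Step (1) can be illustrated with a simplified noise-addition sketch: to emulate a scan at dose fraction f, zero-mean noise is injected so the total variance matches the low-dose case. Realistic tools shape the noise texture and account for local attenuation; this white-Gaussian version and its parameters are assumptions, not the thesis's software.

```python
import numpy as np

# Simplified image-based noise addition: if noise variance scales as 1/dose,
# an image at dose fraction f needs added noise with
#   sigma_add = sigma_ref * sqrt(1/f - 1)
# so that sigma_low^2 = sigma_ref^2 / f. White Gaussian noise is a crude
# stand-in for properly shaped CT noise.
def simulate_low_dose(image_hu, sigma_ref, f, rng=np.random.default_rng(0)):
    sigma_add = sigma_ref * np.sqrt(1.0 / f - 1.0)
    return image_hu + rng.normal(0.0, sigma_add, size=image_hu.shape)

image = np.zeros((64, 64))                  # stand-in for a clinical image
half_dose = simulate_low_dose(image, sigma_ref=10.0, f=0.5)
print(f"added noise SD: {half_dose.std():.1f} HU")   # ~10 HU for f = 0.5
```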
Chapter 8 concludes the thesis with a summary of accomplished work and a discussion about future research.
Abstract:
Backscatter communication is an emerging wireless technology that has recently gained increasing attention from both academic and industry circles. The key innovation of the technology is the ability of ultra-low-power devices to utilize nearby existing radio signals to communicate. As there is no need to generate their own energetic radio signal, the devices can benefit from a simple design, are very inexpensive, and are extremely energy efficient compared with traditional wireless communication. These benefits have made backscatter communication a desirable candidate for distributed wireless sensor network applications with energy constraints.
The backscatter channel presents a unique set of challenges. Unlike conventional one-way communication (in which the information source is also the energy source), the backscatter channel experiences strong self-interference and spread-Doppler clutter that mask the information-bearing (modulated) signal scattered from the device. Both of these sources of interference arise from the scattering of the transmitted signal off objects, both stationary and moving, in the environment. Additionally, the measurement of the location of the backscatter device is negatively affected by both the clutter and the modulation of the signal return.
This work proposes a channel coding framework for the backscatter channel consisting of a bi-static transmitter/receiver pair and a quasi-cooperative transponder. It proposes the use of run-length limited coding to mitigate the background self-interference and spread-Doppler clutter with only a small decrease in communication rate. The proposed method applies to both binary phase-shift keying (BPSK) and quadrature amplitude modulation (QAM) schemes and provides an increase in rate of up to a factor of two compared with previous methods.
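To see why run-length limited coding helps against clutter concentrated near DC and low Doppler, the sketch below compares the spectrum of a random BPSK sequence with a crudely run-length-bounded version of it; this illustrates the principle only and is not the paper's code construction.

```python
import numpy as np

# Bounding run lengths forces frequent symbol transitions, which pushes the
# modulated signal's energy away from DC, where self-interference and
# spread-Doppler clutter are concentrated. Illustrative comparison only.
rng = np.random.default_rng(0)
bits = rng.integers(0, 2, 4096)

def bound_runs(b, k=2):
    """Force a transition after k identical symbols (a crude k-constraint)."""
    out, run = [], 0
    for x in b:
        run = run + 1 if out and x == out[-1] else 1
        if run > k:
            x, run = 1 - x, 1
        out.append(int(x))
    return np.array(out)

for name, seq in [("random", bits), ("run-bounded", bound_runs(bits))]:
    s = 2.0 * seq - 1.0                      # BPSK mapping to +/-1
    psd = np.abs(np.fft.rfft(s)) ** 2
    low = psd[: len(psd) // 16].sum() / psd.sum()
    print(f"{name:12s}: fraction of power near DC = {low:.3f}")
```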
Additionally, this work analyzes the use of frequency modulation and bi-phase waveform coding for the transmitted (interrogating) waveform for high-precision range estimation of the transponder location. Compared with previous methods, optimal, lower range sidelobes are achieved. Moreover, since both the transmitted (interrogating) waveform coding and the transponder communication coding result in instantaneous phase modulation of the signal, cross-interference between the localization and communication tasks exists. A phase-discrimination algorithm is proposed that makes it possible to separate the waveform coding from the communication coding upon reception and to achieve localization with an increase in signal energy of up to 3 dB compared with previously reported results.
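For context on the sidelobe benchmark involved, the sketch below computes the aperiodic autocorrelation of the classic Barker-13 bi-phase code, whose peak sidelobe of 1 against a mainlobe of 13 is the textbook example of low range sidelobes; this is generic radar background, not the paper's specific waveform design.

```python
import numpy as np

# Aperiodic autocorrelation of the Barker-13 bi-phase code: mainlobe 13 at
# zero lag, peak sidelobe magnitude 1, i.e. a peak sidelobe level of -22.3 dB.
barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1])

acf = np.correlate(barker13, barker13, mode="full")
mainlobe = acf.max()                                     # 13, at zero lag
sidelobe = np.abs(np.delete(acf, len(acf) // 2)).max()   # 1

print(f"mainlobe {mainlobe}, peak sidelobe {sidelobe}, "
      f"PSL = {20 * np.log10(sidelobe / mainlobe):.1f} dB")
```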
The joint communication-localization framework also enables a low-complexity receiver design because the same radio is used both for localization and communication.
Simulations comparing the performance of different codes corroborate the theoretical results and reveal a possible trade-off between information rate and clutter mitigation, as well as a trade-off among choices of waveform-channel coding pairs. Experimental results from a brassboard microwave system in an indoor environment are also presented and discussed.
Abstract:
Research into the dynamicity of job performance criteria has found evidence suggesting the presence of rank-order changes to job performance scores across time as well as intraindividual trajectories in job performance scores across time. These findings have influenced a large body of research into (a) the dynamicity of the validities of individual differences predictors of job performance and (b) the relationship between individual differences predictors of job performance and intraindividual trajectories of job performance. In the present dissertation, I addressed these issues within the context of the Five Factor Model of personality. The Five Factor Model is arranged hierarchically, with five broad higher-order factors subsuming a number of more narrowly tailored personality facets. Research has debated the relative merits of broad versus narrow traits for predicting job performance, but this entire body of research has addressed the issue from a static perspective, by examining the relative magnitude of the validities of global factors versus their facets. While research along these lines has been enlightening, theoretical perspectives suggest that the validities of global factors versus their facets may differ in their stability across time. Thus, research is needed not only to compare the relative magnitude of the validities of global factors versus their facets at a single point in time, but also to compare the relative stability of those validities across time. Also necessary to advance cumulative knowledge concerning intraindividual performance trajectories is research into broad versus narrow traits for predicting such trajectories. In the present dissertation, I addressed these issues using a four-year longitudinal design. The results indicated that the validities of global conscientiousness were stable across time, while the validities of conscientiousness facets were more likely to fluctuate. However, the validities of emotional stability and extraversion facets were no more likely to fluctuate across time than those of the factors. Finally, while some personality factors and facets predicted performance intercepts (i.e., performance at the first measurement occasion), my results failed to indicate a significant effect of any personality variable on performance growth. Implications for research and practice are discussed.