975 results for performance metrics
Abstract:
Annual company reports rarely distinguish between domestic and export market performance and even more rarely provide information about annual indicators of a specific export venture's performance. In this study, the authors develop and test a new measure for assessing the annual performance of an export venture (the APEV scale). The new measure comprises five dimensions: (1) annual export venture financial performance, (2) annual export venture strategic performance, (3) annual export venture achievement, (4) contribution of the export venture to annual exporting operations, and (5) satisfaction with annual export venture overall performance. The authors use the APEV scale to generate a scorecard of performance in exporting (the PERFEX scorecard) to assess export performance at the corporate level while comparatively evaluating all export ventures of the firm. Both the scale and the scorecard could help disclose export venture performance and could be useful instruments for annual planning, management, monitoring, and improvement of exporting programs.
Abstract:
Purpose – The purpose of this analysis is to show that HR systems are not always designed in ways that consider the well-being of employees. In particular, performance metric methods seem to be designed with organizational goals in mind while focusing less on what employees need and desire. Design/methodology/approach – A literature review and a multiple case-study method were used. Findings – The analysis showed that performance metrics should be reevaluated by executives and HR professionals if they seek to develop socially responsible organizational cultures that care about the well-being of employees. Originality/value – The paper exposes the fact that performance appraisal techniques can be rooted in methodologies that ignore or deemphasize the value of employee well-being. The analysis provides a context in which all HR practices can be questioned in relation to meeting the standards of a social justice agenda in the area of corporate social responsibility.
Abstract:
Today's motivation for autonomous systems research stems from the fact that networked environments have reached a level of complexity and heterogeneity that makes their control and management by human administrators alone more and more difficult. The optimisation of performance metrics for the air traffic management system, as in other networked systems, has become more complex with the increasing number of flights, capacity constraints, environmental factors, and safety regulations. It is anticipated that a new structure of planning layers and the introduction of higher levels of automation will reduce complexity and optimise the performance metrics of the air traffic management system. This paper discusses the complexity of optimising air traffic management performance metrics and proposes a way forward based on higher levels of automation.
Abstract:
The objective of this paper is to provide performance metrics for small-signal stability assessment of a given system architecture. The stability margins are stated using a concept of maximum peak criteria (MPC) derived from the behavior of an impedance-based sensitivity function. For each minor-loop gain defined at every system interface, a single number stating the robustness of stability is provided, based on the computed maximum value of the corresponding sensitivity function. In order to compare various power-architecture solutions in terms of stability, a parameter providing an overall measure of whole-system stability is required. The selected figure of merit is the geometric average of the maximum peak values within the system. It provides a meaningful metric for system comparisons: the best system in terms of robust stability is the one that minimizes this index. In addition, the largest peak value among the system interfaces is reported, thus identifying the weakest point of the system in terms of robustness.
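As a rough illustration of how such a figure of merit can be assembled, the sketch below takes a set of minor-loop gain frequency responses, computes each interface's maximum sensitivity peak, and aggregates them into a geometric average while flagging the largest peak. The sensitivity is taken here as the standard S = 1/(1 + L); the paper's exact impedance-based formulation, the interface names, and the toy loop gains are assumptions for illustration only.

```python
# Hypothetical sketch: aggregate maximum-peak stability metrics across system interfaces.
# Assumes each interface's minor-loop gain L(jw) is available as a complex frequency response;
# the sensitivity is taken as S = 1/(1 + L), which may differ in detail from the paper's
# impedance-based formulation.
import numpy as np

def max_sensitivity_peak(minor_loop_gain):
    """Maximum peak of |S(jw)| = |1 / (1 + L(jw))| over the frequency grid."""
    return float(np.max(np.abs(1.0 / (1.0 + minor_loop_gain))))

def system_stability_index(minor_loop_gains):
    """Geometric average of the per-interface peaks, plus the weakest interface."""
    peaks = {name: max_sensitivity_peak(L) for name, L in minor_loop_gains.items()}
    geo_avg = float(np.exp(np.mean(np.log(list(peaks.values())))))
    weakest = max(peaks, key=peaks.get)          # largest peak = least robust interface
    return geo_avg, weakest, peaks[weakest]

# Toy example: two interfaces with second-order minor-loop gains on a log frequency grid.
w = np.logspace(0, 4, 2000)
s = 1j * w
L_a = 100.0**2 / (s * (s + 2 * 0.7 * 100.0))     # well-damped interface
L_b = 100.0**2 / (s * (s + 2 * 0.2 * 100.0))     # lightly damped, more resonant interface
index, weakest, peak = system_stability_index({"bus A": L_a, "bus B": L_b})
print(f"overall stability index: {index:.2f}; weakest interface: {weakest} (peak {peak:.2f})")
```

The geometric average keeps the index from being dominated by a single well-behaved interface, while the reported maximum still points at the weakest bus.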
Abstract:
Many restaurant organizations have committed a substantial amount of effort to studying the relationship between a firm's performance and its efforts to develop an effective human resources management reward-and-retention system. These studies have produced various metrics for determining the efficacy of restaurant management and human resources management systems. This paper explores the best metrics to use when calculating the overall unit performance of casual restaurant managers. These metrics were identified through an exploratory qualitative case study method that included interviews with executives and a Delphi study. Experts proposed several diverse metrics for measuring management value and performance. These factors seem to represent all stakeholders' interests.
Abstract:
Supported by IEEE 802.15.4 standardization activities, embedded networks have been gaining popularity in recent years. The focus of this paper is to quantify the behavior of key networking metrics of IEEE 802.15.4 beacon-enabled nodes under typical operating conditions, with the inclusion of packet retransmissions. We corrected and extended previous analyses by scrutinizing the assumptions on which the prevalent Markovian modeling is generally based. By means of a comparative study, we singled out which of the assumptions impact each of the performance metrics (throughput, delay, power consumption, collision probability, and packet-discard probability). In particular, we showed that - unlike what is usually assumed - the probability that a node senses the channel busy is not constant across the stages of the backoff procedure and that these differences have a noticeable impact on backoff delay, packet-discard probability, and power consumption. Similarly, we showed that - again contrary to common assumption - the probability of obtaining transmission access to the channel depends on the number of nodes that are simultaneously sensing it. We demonstrated that ignoring this dependence has a significant impact on the calculated values of throughput and collision probability. Circumventing these and other assumptions, we rigorously characterized, through a semianalytical approach, the key metrics in a beacon-enabled IEEE 802.15.4 system with retransmissions.
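To make the per-stage busy-probability point concrete, the toy simulation below runs a heavily simplified slotted CSMA scheme (a single clear-channel assessment, saturated traffic, collisions and acknowledgements not modeled) and tallies how often the channel is found busy at each backoff stage. It is only a sketch of how such a per-stage statistic can be measured; all parameter values are illustrative, and this is neither the paper's semianalytical model nor the full IEEE 802.15.4 beacon-enabled MAC.

```python
# Toy slot-based CSMA simulation (NOT the full IEEE 802.15.4 MAC): measure the probability of
# finding the channel busy, separately for each backoff stage. Parameters are illustrative.
import random
from collections import defaultdict

random.seed(0)
N_NODES, PKT_SLOTS, N_SLOTS = 10, 6, 200_000
MIN_BE, MAX_BE, MAX_BACKOFFS = 3, 5, 4

def new_backoff(be):
    return random.randint(0, 2 ** be - 1)

nodes = [{"be": MIN_BE, "nb": 0, "backoff": new_backoff(MIN_BE), "tx_left": 0}
         for _ in range(N_NODES)]
cca_stats = defaultdict(lambda: [0, 0])          # backoff stage -> [CCA attempts, busy findings]

for _ in range(N_SLOTS):
    channel_busy = any(n["tx_left"] > 0 for n in nodes)
    ready = [i for i, n in enumerate(nodes)      # nodes assessing the channel in this slot
             if n["tx_left"] == 0 and n["backoff"] == 0]
    for i in ready:
        n = nodes[i]
        cca_stats[n["nb"]][0] += 1
        cca_stats[n["nb"]][1] += channel_busy
        if channel_busy:                         # back off again at the next stage
            n["nb"] += 1
            n["be"] = min(n["be"] + 1, MAX_BE)
            if n["nb"] > MAX_BACKOFFS:           # give up on this packet, start a new one
                n["be"], n["nb"] = MIN_BE, 0
            n["backoff"] = new_backoff(n["be"])
        else:                                    # channel idle: transmit (collisions ignored)
            n["tx_left"] = PKT_SLOTS
    for i, n in enumerate(nodes):
        if n["tx_left"] > 0:
            n["tx_left"] -= 1
            if n["tx_left"] == 0:                # transmission finished, queue the next packet
                n["be"], n["nb"] = MIN_BE, 0
                n["backoff"] = new_backoff(MIN_BE)
        elif n["backoff"] > 0 and i not in ready:
            n["backoff"] -= 1

for stage in sorted(cca_stats):
    attempts, busy = cca_stats[stage]
    print(f"backoff stage {stage}: P(channel busy) = {busy / attempts:.3f}  ({attempts} CCAs)")
```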
Abstract:
Performance optimization of a complex computer system requires an understanding of the system's run-time behavior. As software grows in size and complexity, performance optimization becomes an increasingly important part of the product development process. With the use of more powerful processors, energy consumption and heat generation have also become growing problems, especially in small, portable devices. To limit heat and energy problems, performance scaling methods have been developed, which further increase system complexity and the need for performance optimization. In this work, a visualization and analysis tool was developed to make run-time behavior easier to understand. In addition, a performance metric was developed that enables different scaling methods to be compared and evaluated independently of the execution environment, based either on an execution trace or on theoretical analysis. The tool presents a trace collected at run time in an easily understandable form. Using three-dimensional graphics, it shows, among other things, the processes, the processor load, the operation of the scaling methods, and the energy consumption. The tool also produces numerical information for a user-selected part of the execution trace, including several relevant performance figures and statistics. The applicability of the tool was examined by analyzing an execution trace obtained from a real device and a simulation of performance scaling. The effect of the scaling mechanism's parameters on the performance of the simulated device was analyzed.
Abstract:
This Master's thesis was carried out as part of Componenta Cast Components' three-year supply chain development project. The aim of the work was to describe a typical internal supply chain process of the company and to perform a preliminary performance analysis of the logistics process between the foundry and the machine shop. The purpose was also to find development targets in the management of material and information flows between these production units. Suitable analysis methods were selected on the basis of a literature review on logistics, supply chain management, and supply chain performance measurement, as well as on practical experience. These methods were used to describe the order-delivery process and to analyze performance in the company's internal supply chain. As a natural continuation, a pull-type production and material control method that synchronizes the supply chain was developed and put into practice. During the thesis project, tools were also developed for the proper utilization of the introduced method. The project took the first steps toward an integrated internal supply chain. Standardization of the new production and material control method in combination with other methods, as well as further development of the key supply chain metrics, has already begun. By shortening lead times and by means of a synchronized, transparent demand-supply chain, the degree of integration can be raised further. Cross-organizational development and management of the supply chain is a key prerequisite for success.
Abstract:
This Master's thesis examines threaded programming at the upper hierarchy level of parallel programming, focusing in particular on hyper-threading technology. The work examines the advantages and disadvantages of hyper-threading and its effects on parallel algorithms. The goal of the work was to understand the implementation of hyper-threading in the Intel Pentium 4 processor and to make it possible to exploit it where it brings a performance advantage. Performance data were collected and analyzed by running a large set of benchmarks under different conditions (memory handling, compiler settings, environment variables...). Two types of algorithms were examined: matrix operations and sorting. These applications have a regular memory access pattern, which is a double-edged sword: it is an advantage in arithmetic-logic processing, but on the other hand it degrades memory performance. The reason is that modern processors have very good raw performance when processing regular data, whereas the memory architecture is limited by cache sizes and various buffers. When the problem size exceeds a certain limit, the actual performance can drop to a fraction of the peak performance.
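As a small, hedged illustration of that last point, the snippet below times a simple NumPy reduction for growing working sets and reports the effective memory bandwidth; on typical hardware the figure drops once the array no longer fits in the last-level cache. It is not the thesis's benchmark suite, and the chosen sizes and any measured numbers are machine-dependent assumptions.

```python
# Minimal sketch (not the thesis's benchmarks): effective bandwidth of a streaming reduction
# for growing working sets. Expect a drop once the array exceeds the last-level cache size;
# exact numbers depend on the machine, and NumPy/Python overhead blurs fine detail.
import time
import numpy as np

def bandwidth_gbs(n_floats, repeats=20):
    """Best-case effective bandwidth of summing an n_floats-element array, in GB/s."""
    data = np.random.rand(n_floats)              # 8 bytes per element
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        total = float(np.sum(data))              # streams the whole array once
        best = min(best, time.perf_counter() - t0)
    return data.nbytes / best / 1e9

for exp in range(14, 27, 2):                     # working sets from 128 KiB up to 512 MiB
    n = 2 ** exp
    print(f"{n * 8 / 2**20:9.1f} MiB  ->  {bandwidth_gbs(n):6.1f} GB/s")
```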
Abstract:
The objective of the study was to examine performance measurement, metrics, and their design in the wholesale and distribution business. Metrics for critical success factors help a company move toward a common goal. Critical-success-factor metrics are often linked to strategic planning and implementation, and they have similarities with many strategic tools such as the Balanced Scorecard. The research problem can be stated in the form of a question: What are the critical-success-factor metrics (KPIs) that support Oriola KD's long-term goals in measuring suppliers and the product assortment? The study is divided into a literature part and an empirical part. The literature review covers earlier research on strategy, supply chain management, supplier evaluation, and various performance measurement systems. The empirical part proceeds from a current-state analysis to the proposed critical-success-factor metrics, which were developed with the help of a model found in the literature. The outcome of the study is a set of critical-success-factor metrics, developed for the case company's needs, for evaluating suppliers and the product assortment.
Abstract:
The thesis examines the performance persistence of hedge funds using complementary methodologies (namely cross-sectional regressions, quantile portfolio analysis, and the Spearman rank correlation test). In addition, six performance ranking metrics and six different combinations of selection and holding periods are compared. The data are gathered from the HFI and Tremont databases, covering over 14,000 hedge funds, and the time horizon spans January 1996 to December 2007. The results suggest that performance persistence definitely exists among hedge funds and that the strength and existence of persistence vary among fund styles. The persistence depends on the metric and the combination of selection and holding period applied. According to the results, the combination of a 36-month selection and holding period outperforms the other five period combinations in capturing performance persistence within the sample. Furthermore, model-free performance metrics capture persistence more sensitively than model-specific metrics. The study is the first to use MVR as a performance ranking metric, and surprisingly MVR is more sensitive in detecting persistence than the other performance metrics employed.
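A minimal sketch of one of the tests named above: rank funds on a metric over a selection window, rank them again over the subsequent holding window, and check whether the rankings agree via the Spearman rank correlation. The synthetic return data and the simple Sharpe-style ranking metric are placeholders, not the thesis's HFI/Tremont sample, its MVR metric, or its exact period definitions.

```python
# Illustrative Spearman rank persistence test on synthetic fund returns (placeholder data
# and a simple Sharpe-style ranking metric; not the thesis's data set or exact metrics).
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_funds, sel_months, hold_months = 200, 36, 36

# Synthetic monthly returns with a persistent fund-specific skill component.
skill = rng.normal(0.002, 0.003, n_funds)                        # persistent mean return
returns = skill[:, None] + rng.normal(0, 0.02, (n_funds, sel_months + hold_months))

def sharpe_like(r):
    """Mean/volatility ranking metric over the given window (one value per fund)."""
    return r.mean(axis=1) / r.std(axis=1, ddof=1)

selection_score = sharpe_like(returns[:, :sel_months])
holding_score = sharpe_like(returns[:, sel_months:])

rho, p_value = spearmanr(selection_score, holding_score)
print(f"Spearman rho = {rho:.3f}, p = {p_value:.2e}")
# A significantly positive rho suggests the ranking persists from the 36-month
# selection period into the 36-month holding period.
```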
Abstract:
Space weather effects on technological systems originate with energy carried from the Sun to the terrestrial environment by the solar wind. In this study, we present results of modeling of solar corona-heliosphere processes to predict solar wind conditions at the L1 Lagrangian point upstream of Earth. In particular we calculate performance metrics for (1) empirical, (2) hybrid empirical/physics-based, and (3) full physics-based coupled corona-heliosphere models over an 8-year period (1995–2002). L1 measurements of the radial solar wind speed are the primary basis for validation of the coronal and heliosphere models studied, though other solar wind parameters are also considered. The models are from the Center for Integrated Space-Weather Modeling (CISM) which has developed a coupled model of the whole Sun-to-Earth system, from the solar photosphere to the terrestrial thermosphere. Simple point-by-point analysis techniques, such as mean-square-error and correlation coefficients, indicate that the empirical coronal-heliosphere model currently gives the best forecast of solar wind speed at 1 AU. A more detailed analysis shows that errors in the physics-based models are predominantly the result of small timing offsets to solar wind structures and that the large-scale features of the solar wind are actually well modeled. We suggest that additional “tuning” of the coupling between the coronal and heliosphere models could lead to a significant improvement of their accuracy. Furthermore, we note that the physics-based models accurately capture dynamic effects at solar wind stream interaction regions, such as magnetic field compression, flow deflection, and density buildup, which the empirical scheme cannot.
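For illustration, the sketch below applies the point-by-point metrics mentioned above (mean-square error and linear correlation) to a modeled versus observed solar wind speed series and shows how a small timing offset degrades both scores. The series are synthetic placeholders, not CISM model output or actual L1 measurements.

```python
# Sketch of point-by-point skill metrics (MSE and linear correlation) for a modeled vs.
# observed solar wind speed series. The arrays below are synthetic placeholders only.
import numpy as np

rng = np.random.default_rng(1)
hours = np.arange(0, 27 * 24)                                    # one solar rotation, hourly
observed = 450 + 150 * np.sin(2 * np.pi * hours / (27 * 24))     # km/s, idealized stream structure
modeled = np.roll(observed, 12) + rng.normal(0, 30, hours.size)  # 12 h timing offset + noise

def mse(pred, obs):
    return float(np.mean((pred - obs) ** 2))

def correlation(pred, obs):
    return float(np.corrcoef(pred, obs)[0, 1])

print(f"MSE  = {mse(modeled, observed):8.1f} (km/s)^2")
print(f"corr = {correlation(modeled, observed):6.3f}")
# Shifting the model series to undo the timing offset recovers most of the lost skill,
# which is the kind of timing-error effect the detailed analysis in the abstract points to.
best_shift = max(range(-24, 25), key=lambda s: correlation(np.roll(modeled, s), observed))
print(f"best correlation at shift {best_shift} h: "
      f"{correlation(np.roll(modeled, best_shift), observed):.3f}")
```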
Abstract:
Strategic control is defined as the use of qualitative and quantitative tools for the evaluation of strategic organizational performance. Most research in strategic planning has focused on strategy formulation and implementation, but little work has been done on strategic performance evaluation, particularly in the area of cancer research. The objective of this study was to identify strategic control approaches and financial performance metrics used by major cancer centers in the country as an initial step in expanding the theory and practice behind strategic organizational performance. Focusing on hospitals that share a similar mandate and resource constraints was expected to improve measurement precision. The results indicate that most cancer centers use a wide selection of evaluation tools, but sophisticated analytical approaches were less common. In addition, there was evidence that high-performing centers tend to invest more resources in strategic performance analysis than centers showing lower financial results. The conclusions point to the need to incorporate a higher degree of analytical power in order to improve the tracking of strategic performance. This study is one of the first to concentrate on the area of strategic control.
Abstract:
CHARACTERIZATION OF THE COUNT RATE PERFORMANCE AND EVALUATION OF THE EFFECTS OF HIGH COUNT RATES ON MODERN GAMMA CAMERAS. Michael Stephen Silosky, B.S. Supervisory Professor: S. Cheenu Kappadath, Ph.D.
Evaluation of count rate performance (CRP) is an integral component of gamma camera quality assurance, and measurement of system dead time (τ) is important for quantitative SPECT. The CRP of three modern gamma cameras was characterized using established methods (Decay and Dual Source) under a variety of experimental conditions. For the Decay method, input count rate was plotted against observed count rate and fit to the paralyzable detector model (PDM) to estimate τ (Rates method). A novel expression for observed counts as a function of measurement time interval was derived, and the observed counts were fit to this expression to estimate τ (Counts method). Correlation and Bland-Altman analysis were performed to assess agreement in estimates of τ between methods. The dependencies of τ on energy window definition and incident energy spectrum were characterized. The Dual Source method was also used to estimate τ; its agreement with the Decay method under identical conditions was assessed, and the effects of total activity and the ratio of source activities were investigated. Additionally, the effects of count rate on several performance metrics were evaluated. The CRP curves for each system agreed with the PDM at low count rates but deviated substantially at high count rates. Estimates of τ for the paralyzable portion of the CRP curves using the Rates and Counts methods were highly correlated (r=0.999) but showed a small (~6%) difference. No significant difference was observed between the highly correlated estimates of τ obtained using the Decay and Dual Source methods under identical experimental conditions (r=0.996). Estimates of τ increased as a power-law function with decreasing ratio of photopeak counts to total counts and linearly with decreasing spectral effective energy. Dual Source method estimates of τ varied quadratically with the ratio of single-source to combined-source activity and linearly with the total activity used, across a large range. Image uniformity, spatial resolution, and energy resolution degraded linearly with count rate, and image-distorting effects were observed. Guidelines for CRP testing and a possible method for the correction of count rate losses in clinical images are proposed.
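As a brief illustration of the Rates-method idea referred to above, the sketch below fits the paralyzable detector model m = n·exp(−nτ) to input-rate/observed-rate pairs with a nonlinear least-squares fit. The data and the dead-time value are synthetic, invented examples, not measurements from the cameras studied.

```python
# Sketch of estimating dead time tau by fitting the paralyzable detector model
# m = n * exp(-n * tau) to (input rate, observed rate) pairs, in the spirit of the Rates
# variant of the Decay method. Data below are synthetic, not camera measurements.
import numpy as np
from scipy.optimize import curve_fit

def paralyzable(n, tau):
    """Observed count rate m for input rate n under the paralyzable dead-time model."""
    return n * np.exp(-n * tau)

true_tau = 5e-6                                      # 5 microseconds (illustrative value)
input_rate = np.linspace(1e3, 3e5, 60)               # counts per second
rng = np.random.default_rng(2)
observed_rate = paralyzable(input_rate, true_tau) * rng.normal(1.0, 0.01, input_rate.size)

tau_fit, tau_cov = curve_fit(paralyzable, input_rate, observed_rate, p0=[1e-6])
print(f"fitted tau = {tau_fit[0] * 1e6:.2f} us "
      f"(+/- {np.sqrt(tau_cov[0, 0]) * 1e6:.2f} us), true value 5.00 us")
```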
Abstract:
In this paper, a computer-based tool is developed to analyze student performance along a given curriculum. The proposed software makes use of historical data to compute passing/failing probabilities and simulates future student academic performance based on stochastic programming methods (Monte Carlo) according to the specific university regulations. This makes it possible to compute the academic performance rates for the specific subjects of the curriculum for each semester, as well as the overall rates for the set of subjects in the semester, namely the efficiency rate and the success rate. Additionally, we compute the rates for the Bachelor's degree: the graduation rate, measured as the percentage of students who finish as scheduled or take an extra year, and the efficiency rate, measured as the percentage of curriculum credits relative to the credits actually taken. In Spain, these metrics have been defined by the National Quality Evaluation and Accreditation Agency (ANECA). Moreover, the sensitivity of the performance metrics to some of the parameters of the simulator is analyzed using statistical tools (Design of Experiments). The simulator has been adapted to the curriculum characteristics of the Bachelor in Engineering Technologies at the Technical University of Madrid (UPM).
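The snippet below is a toy version of the Monte Carlo idea described above: simulate many students attempting a small set of subjects each semester using historical passing probabilities, and estimate the fraction who graduate within the scheduled length plus one extra year. The curriculum, the probabilities, and the graduation rule are invented for illustration; they are not the UPM curriculum, the ANECA definitions, or the tool's actual regulations.

```python
# Toy Monte Carlo of student progression through a small curriculum. Passing probabilities,
# curriculum layout, and the graduation rule are invented examples, not UPM data.
import random

# subject -> historical passing probability per attempt (illustrative values only)
PASS_PROB = {"Calculus": 0.55, "Physics": 0.60, "Programming": 0.75, "Circuits": 0.65}
MAX_SEMESTERS = 4                # nominal two-semester plan plus one extra year

def semesters_to_finish(rng):
    """Semester in which a simulated student clears all subjects, or None if over the limit."""
    pending = set(PASS_PROB)
    for semester in range(1, MAX_SEMESTERS + 1):
        pending = {s for s in pending if rng.random() > PASS_PROB[s]}   # failed subjects stay pending
        if not pending:
            return semester
    return None

def graduation_rate(n_students=100_000, seed=0):
    rng = random.Random(seed)
    finished = sum(semesters_to_finish(rng) is not None for _ in range(n_students))
    return finished / n_students

print(f"estimated graduation rate within {MAX_SEMESTERS} semesters: {graduation_rate():.1%}")
```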