799 results for Utility-based performance measures


Relevance: 100.00%

Abstract:

Knowledge creation within organizations becomes visible through the proper management of the knowledge of individuals; however, each individual must interact in such a way as to form an organizational knowledge network or system that consolidates firms over the long term in the environment in which they operate. This paper reviews central elements of knowledge management as seen by several authors and from several perspectives, and identifies key points for designing a knowledge management model for a company in the chemical-inputs sector serving the pharmaceutical, cosmetics and food industries in the city of Bogotá.

Relevance: 100.00%

Abstract:

The concept of effectiveness in inter-organizational networks has been researched little, despite its great importance for the development and sustainability of the network. Understanding this concept is essential because, when we speak of a network, we mean a group of more than three organizations that work together to achieve a collective objective that benefits each member of the network. This underlines the importance of evaluating and analysing the “inter-organizational network” phenomenon in more detail, in order to determine which structures, forms of governance, relationships among members and other factors influence the effectiveness and durability of the inter-organizational network. This research is carried out with the aim of proposing an approach to the concept of measuring effectiveness in inter-organizational networks. The work focuses on the compilation of information and on documentary research, carried out in phases to give the reader greater clarity and understanding of what a network is, what an inter-organizational network is, and what effectiveness is. Finally, effectiveness in an inter-organizational network is studied.

Relevance: 100.00%

Abstract:

A traditional method of validating the performance of a flood model when remotely sensed data of the flood extent are available is to compare the predicted flood extent to that observed. The performance measure employed often uses areal pattern-matching to assess the degree to which the two extents overlap. Recently, remote sensing of flood extents using synthetic aperture radar (SAR) and airborne scanning laser altimetry (LIDAR) has made the synoptic measurement of water surface elevations along flood waterlines more straightforward, and this has emphasised the possibility of using alternative performance measures based on height. This paper considers the advantages that can accrue from using a performance measure based on waterline elevations rather than one based on areal patterns of wet and dry pixels. The two measures were compared for their ability to estimate flood inundation uncertainty maps from a set of model runs carried out to span the acceptable model parameter range in a GLUE-based analysis. A 1 in 5-year flood on the Thames in 1992 was used as a test event. As is typical for UK floods, only a single SAR image of observed flood extent was available for model calibration and validation. A simple implementation of a two-dimensional flood model (LISFLOOD-FP) was used to generate model flood extents for comparison with that observed. The performance measure based on height differences of corresponding points along the observed and modelled waterlines was found to be significantly more sensitive to the channel friction parameter than the measure based on areal patterns of flood extent. The former was able to restrict the parameter range of acceptable model runs and hence reduce the number of runs necessary to generate an inundation uncertainty map. As a result, there was less uncertainty in the final flood risk map. The uncertainty analysis included the effects of uncertainties in the observed flood extent as well as in model parameters. The height-based measure was found to be more sensitive when increased heighting accuracy was achieved by requiring that observed waterline heights varied slowly along the reach. The technique allows for the decomposition of the reach into sections, with different effective channel friction parameters used in different sections, which in this case resulted in lower r.m.s. height differences between observed and modelled waterlines than those achieved by runs using a single friction parameter for the whole reach. However, a validation of the modelled inundation uncertainty using the calibration event showed a significant difference between the uncertainty map and the observed flood extent. While this was true for both measures, the difference was especially significant for the height-based one. This is likely to be due to the conceptually simple flood inundation model and the coarse application resolution employed in this case. The increased sensitivity of the height-based measure may place an increased onus on the model developer in the production of a valid model.
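
As an illustration of the height-based measure described above (a minimal sketch, not the authors' exact implementation), the snippet below computes the r.m.s. difference between observed and modelled water-surface elevations at corresponding waterline points; the sample elevations are invented.

```python
import numpy as np

def waterline_rmse(observed_z, modelled_z):
    """R.m.s. height difference (metres) between corresponding observed
    and modelled waterline elevations."""
    observed_z = np.asarray(observed_z, dtype=float)
    modelled_z = np.asarray(modelled_z, dtype=float)
    return np.sqrt(np.mean((modelled_z - observed_z) ** 2))

# Hypothetical waterline elevations (m) sampled along the reach.
obs = [12.31, 12.28, 12.10, 11.95, 11.80]
mod = [12.40, 12.25, 12.05, 12.00, 11.70]
print(f"r.m.s. height difference: {waterline_rmse(obs, mod):.3f} m")
```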

Relevance: 100.00%

Abstract:

The performance benefit when using Grid systems comes from different strategies, among which partitioning the applications into parallel tasks is the most important. However, in most cases the enhancement coming from partitioning is smoothed by the effect of the synchronization overhead, mainly due to the high variability of completion times of the different tasks, which, in turn, is due to the large heterogeneity of Grid nodes. For this reason, it is important to have models which capture the performance of such systems. In this paper we describe a queueing-network-based performance model able to accurately analyze Grid architectures, and we use the model to study a real parallel application executed in a Grid. The proposed model improves the classical modelling techniques and highlights the impact of resource heterogeneity and network latency on the application performance.
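
The paper's own queueing-network model is not reproduced here, but a minimal sketch of the general approach is possible: exact Mean Value Analysis for a simple closed product-form network, with hypothetical per-station service demands standing in for heterogeneous Grid nodes and the network link.

```python
import numpy as np

def mva(service_demands, n_jobs):
    """Exact Mean Value Analysis for a single-class closed product-form
    network of queueing (FCFS) stations.

    service_demands : per-station service demand D_k (seconds)
    n_jobs          : number of concurrently circulating jobs (tasks)
    Returns (throughput, per-station mean queue lengths).
    """
    demands = np.asarray(service_demands, dtype=float)
    queue = np.zeros_like(demands)          # Q_k(0) = 0
    throughput = 0.0
    for n in range(1, n_jobs + 1):
        resp = demands * (1.0 + queue)      # R_k(n) = D_k * (1 + Q_k(n-1))
        throughput = n / resp.sum()         # X(n) = n / sum_k R_k(n)
        queue = throughput * resp           # Q_k(n) = X(n) * R_k(n)
    return throughput, queue

# Hypothetical Grid: three heterogeneous compute nodes plus a network link.
X, Q = mva(service_demands=[0.8, 1.2, 2.0, 0.3], n_jobs=10)
print(f"throughput = {X:.3f} jobs/s, mean queue lengths = {np.round(Q, 2)}")
```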

Relevance: 100.00%

Abstract:

Successful knowledge transfer is an important process which requires continuous improvement in today’s knowledge-intensive economy. However, improving knowledge transfer processes represents a challenge for construction practitioners due to the complexity of knowledge acquisition, codification and sharing. Although knowledge transfer is context based, understanding the critical success factors can lead to improvements in the transfer process. This paper seeks to identify and evaluate the most significant critical factors for improving knowledge transfer processes in Public Private Partnership/Private Finance Initiative (PPP/PFI) projects. Drawing upon a questionnaire survey of 52 construction firms located in the UK, the data are analysed using the Severity Index (SI) and the Coefficient of Variation (COV) to examine and identify these factors in PPP/PFI schemes. The findings suggest that supportive leadership, participation and commitment from the relevant parties, and good communication between the relevant parties are crucial to improving knowledge transfer processes in PFI schemes. Practitioners, managers and researchers can use the findings to efficiently design performance measures for analysing and improving knowledge transfer processes.
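
As a rough sketch of the kind of ranking analysis described (the authors' exact SI formulation is not given here, so this assumes one common construction-management form and a 5-point Likert scale), the snippet below computes a Severity Index and the Coefficient of Variation from hypothetical ratings.

```python
import numpy as np

def severity_index(responses, max_rating=5):
    """Severity Index (%): one common survey formulation, assuming
    ratings on a 1..max_rating Likert scale."""
    r = np.asarray(responses, dtype=float)
    return 100.0 * r.sum() / (max_rating * r.size)

def coefficient_of_variation(responses):
    """Coefficient of Variation = sample standard deviation / mean."""
    r = np.asarray(responses, dtype=float)
    return r.std(ddof=1) / r.mean()

# Hypothetical ratings from 52 respondents for one critical success factor.
ratings = np.random.default_rng(0).integers(3, 6, size=52)
print(f"SI  = {severity_index(ratings):.1f}%")
print(f"COV = {coefficient_of_variation(ratings):.3f}")
```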

Relevance: 100.00%

Abstract:

This study proposes a utility-based framework for the determination of optimal hedge ratios (OHRs) that can allow for the impact of higher moments on hedging decisions. We examine the entire hyperbolic absolute risk aversion family of utilities, which includes quadratic, logarithmic, power, and exponential utility functions. We find that for both moderate and large spot (commodity) exposures, the performance of out-of-sample hedges constructed allowing for nonzero higher moments is better than the performance of the simpler OLS hedge ratio. The picture is, however, not uniform across our seven spot commodities, as there is one instance (cotton) for which the modeling of higher moments decreases welfare out-of-sample relative to the simpler OLS. We support our empirical findings with a theoretical analysis of optimal hedging decisions, and we uncover a novel link between OHRs and the minimax hedge ratio, that is, the ratio which minimizes the largest loss of the hedged position. © 2011 Wiley Periodicals, Inc.
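
For context, the benchmark against which the utility-based hedges are compared is the simple OLS (minimum-variance) hedge ratio; a minimal sketch with invented return series follows.

```python
import numpy as np

def ols_hedge_ratio(spot_returns, futures_returns):
    """Minimum-variance (OLS) hedge ratio: slope of the regression of
    spot returns on futures returns, h* = Cov(s, f) / Var(f)."""
    s = np.asarray(spot_returns, dtype=float)
    f = np.asarray(futures_returns, dtype=float)
    return np.cov(s, f)[0, 1] / np.var(f, ddof=1)

# Hypothetical weekly spot and futures returns for one commodity.
rng = np.random.default_rng(1)
fut = rng.normal(0.0, 0.02, size=260)
spot = 0.9 * fut + rng.normal(0.0, 0.005, size=260)

h = ols_hedge_ratio(spot, fut)
hedged = spot - h * fut                      # return of the hedged position
print(f"h* = {h:.3f}, hedged-position std = {hedged.std(ddof=1):.4f}")
```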

Relevance: 100.00%

Abstract:

Attention Deficit Hyperactivity Disorder (ADHD) and Autism Spectrum Disorder (ASD) are often comorbid and share behavioural-cognitive abnormalities in sustained attention. A key question is whether this shared cognitive phenotype is based on common or different underlying pathophysiologies. To elucidate this question, we compared 20 boys with ADHD to 20 age- and IQ-matched boys with ASD and 20 healthy boys using functional magnetic resonance imaging (fMRI) during a parametrically modulated vigilance task with a progressively increasing load of sustained attention. ADHD and ASD boys had significantly reduced activation relative to controls in bilateral striato–thalamic regions, left dorsolateral prefrontal cortex (DLPFC) and superior parietal cortex. Both groups also displayed significantly increased precuneus activation relative to controls. Precuneus activation was negatively correlated with DLPFC activation, and was progressively more deactivated with increasing attention load in controls, but not in patients, suggesting problems with deactivation of a task-related default mode network in both disorders. However, left DLPFC underactivation was significantly more pronounced in ADHD relative to ASD boys, and was furthermore associated with sustained performance measures that were impaired only in ADHD patients. ASD boys, on the other hand, had disorder-specific enhanced cerebellar activation relative to both ADHD and control boys, presumably reflecting compensation. The findings show that ADHD and ASD boys have both shared and disorder-specific abnormalities in brain function during sustained attention. Shared deficits were in fronto–striato–parietal activation and default mode suppression. Differences were a more severe DLPFC dysfunction in ADHD and a disorder-specific fronto–striato–cerebellar dysregulation in ASD.

Relevance: 100.00%

Abstract:

Purpose – The purpose of this study is to examine the relationship between business-level strategy and organisational performance and to test the applicability of Porter's generic strategies in explaining differences in the performance of organisations. Design/methodology/approach – The study focussed on UK manufacturing firms in the electrical and mechanical engineering sectors. Data were collected with a postal survey instrument from 124 organisations; all respondents were at CEO level. Both objective and subjective measures were used to assess performance. Non-response bias was assessed statistically and was not found to be a major problem affecting this study. Appropriate measures were taken to ensure that common method variance (CMV) did not affect the results, and statistical tests confirmed that it did not. Findings – The results of this study indicate that firms adopting one of the generic strategies, namely cost-leadership or differentiation, perform better than “stuck-in-the-middle” firms which do not have a dominant strategic orientation. The integrated strategy group has lower performance compared with cost-leaders and differentiators in terms of financial performance measures. This provides support for Porter's view that combination strategies are unlikely to be effective in organisations. However, the cost-leadership and differentiation strategies were not strongly correlated with the financial performance measures, indicating the limitations of Porter's generic strategies in explaining performance heterogeneity in organisations. Originality/value – This study makes an important contribution to the literature by identifying some of the gaps in the literature through a systematic literature review and addressing those gaps.

Relevance: 100.00%

Abstract:

Medium-range flood forecasting activities, driven by various meteorological forecasts ranging from high-resolution deterministic forecasts to low-spatial-resolution ensemble prediction systems, share a major challenge in the appropriateness and design of performance measures. In this paper possible limitations of some traditional hydrological and meteorological prediction quality and verification measures are identified. Some simple modifications are applied in order to circumvent the problem of autocorrelation dominating river discharge time-series and in order to create a benchmark model that enables decision makers to evaluate forecast quality and model quality. Although the performance period is quite short, the advantage of a simple cost-loss function as a measure of forecast quality can be demonstrated.
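
The static cost-loss framework alluded to above can be illustrated with a short sketch (not the authors' exact measure): the relative economic value of a forecast compares its expected expense with those of acting on climatology alone and of a perfect forecast, for a user with protection cost C and loss L; the contingency table below is invented.

```python
def relative_value(hits, false_alarms, misses, correct_neg, cost, loss):
    """Relative economic value of a deterministic forecast in the simple
    static cost-loss model: V = 1 for a perfect forecast, V = 0 for a
    forecast no better than acting on climatology alone."""
    n = hits + false_alarms + misses + correct_neg
    p_clim = (hits + misses) / n                  # climatological event frequency
    e_forecast = (hits * cost + false_alarms * cost + misses * loss) / n
    e_climate = min(cost, p_clim * loss)          # always protect vs never protect
    e_perfect = p_clim * cost                     # protect only when the event occurs
    return (e_climate - e_forecast) / (e_climate - e_perfect)

# Hypothetical contingency table for flood-warning days, user with C/L = 0.2.
v = relative_value(hits=18, false_alarms=7, misses=4, correct_neg=336,
                   cost=1.0, loss=5.0)
print(f"relative value V = {v:.2f}")
```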

Relevance: 100.00%

Abstract:

As the calibration and evaluation of flood inundation models are prerequisites for their successful application, there is a clear need to ensure that the performance measures that quantify how well models match the available observations are fit for purpose. This paper evaluates the binary pattern performance measures that are frequently used to compare flood inundation models with observations of flood extent. This evaluation considers whether these measures are able to calibrate and evaluate model predictions in a credible and consistent way, i.e. identifying the underlying model behaviour for a number of different purposes such as comparing models of floods of different magnitudes or on different catchments. Through theoretical examples, it is shown that the binary pattern measures are not consistent for floods of different sizes, such that for the same vertical error in water level, a model of a flood of large magnitude appears to perform better than a model of a smaller magnitude flood. Further, the commonly used Critical Success Index (usually referred to as F(2)) is biased in favour of overprediction of the flood extent, and is also biased towards correctly predicting areas of the domain with smaller topographic gradients. Consequently, it is recommended that future studies consider carefully the implications of reporting conclusions using these performance measures. Additionally, future research should consider whether a more robust and consistent analysis could be achieved by using elevation comparison methods instead.
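
For reference, a minimal sketch of the binary pattern measure discussed above, computed from observed and modelled wet/dry maps (the tiny arrays are invented for illustration):

```python
import numpy as np

def critical_success_index(observed_wet, modelled_wet):
    """Critical Success Index (fit measure F) for flood extent:
    F = A / (A + B + C), where A = wet in both, B = wet only in the
    model (overprediction) and C = wet only in the observation."""
    obs = np.asarray(observed_wet, dtype=bool)
    mod = np.asarray(modelled_wet, dtype=bool)
    a = np.sum(obs & mod)
    b = np.sum(~obs & mod)
    c = np.sum(obs & ~mod)
    return a / (a + b + c)

# Tiny hypothetical 4x4 wet/dry maps (1 = wet).
obs = np.array([[1, 1, 0, 0], [1, 1, 0, 0], [1, 0, 0, 0], [0, 0, 0, 0]])
mod = np.array([[1, 1, 1, 0], [1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 0, 0]])
print(f"F = {critical_success_index(obs, mod):.2f}")
```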

Relevance: 100.00%

Abstract:

Recent growth in brain-computer interface (BCI) research has increased pressure to report improved performance. However, different research groups report performance in different ways. Hence, it is essential that evaluation procedures are valid and reported in sufficient detail. In this chapter we give an overview of available performance measures such as classification accuracy, Cohen's kappa, the information transfer rate (ITR), and the written symbol rate. We show how to distinguish results from chance level using confidence intervals for accuracy or kappa. Furthermore, we point out common pitfalls when moving from offline to online analysis and provide a guide on how to conduct statistical tests on BCI results.
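
A minimal sketch of three of the measures listed above (classification accuracy, Cohen's kappa and the widely used Wolpaw formulation of the ITR); the confusion matrix is invented for illustration.

```python
import numpy as np

def accuracy(confusion):
    """Overall classification accuracy from a confusion matrix."""
    return np.trace(confusion) / confusion.sum()

def cohens_kappa(confusion):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = confusion.sum()
    p_observed = np.trace(confusion) / n
    p_chance = np.sum(confusion.sum(axis=0) * confusion.sum(axis=1)) / n**2
    return (p_observed - p_chance) / (1.0 - p_chance)

def wolpaw_itr(p, n_classes, trials_per_minute):
    """Information transfer rate (bits/min), Wolpaw formulation; assumes
    equiprobable classes, uniform error distribution and 0 < p < 1."""
    bits = (np.log2(n_classes) + p * np.log2(p)
            + (1 - p) * np.log2((1 - p) / (n_classes - 1)))
    return bits * trials_per_minute

# Hypothetical 4-class confusion matrix (rows: true class, columns: predicted).
cm = np.array([[22, 2, 1, 0], [3, 20, 1, 1], [1, 2, 21, 1], [0, 1, 2, 22]])
p = accuracy(cm)
print(f"accuracy = {p:.3f}, kappa = {cohens_kappa(cm):.3f}, "
      f"ITR = {wolpaw_itr(p, n_classes=4, trials_per_minute=12):.1f} bits/min")
```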

Relevance: 100.00%

Abstract:

The purpose of this article is to present a new method to predict the response variable for an observation in a new cluster in a multilevel logistic regression. The central idea is based on the empirical best estimator of the random effect. Two estimation methods for the multilevel model are compared: penalized quasi-likelihood and Gauss-Hermite quadrature. Performance measures for predicting the probability of a new-cluster observation with the multilevel logistic model, in comparison with the usual logistic model, are examined through simulations and an application.
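
The core idea can be illustrated with a short sketch (under invented parameter values, not the authors' estimator): for a brand-new cluster the random intercept is unknown, so one natural prediction integrates the within-cluster probability over the estimated random-effect distribution using Gauss-Hermite quadrature.

```python
import numpy as np

def logistic(eta):
    return 1.0 / (1.0 + np.exp(-eta))

def marginal_probability(linear_predictor, sigma_u, n_nodes=20):
    """Success probability for an observation in a *new* cluster of a
    random-intercept logistic model: the unknown intercept u ~ N(0, sigma_u^2)
    is integrated out with Gauss-Hermite quadrature."""
    nodes, weights = np.polynomial.hermite.hermgauss(n_nodes)
    u = np.sqrt(2.0) * sigma_u * nodes               # change of variable
    return np.sum(weights * logistic(linear_predictor + u)) / np.sqrt(np.pi)

# Hypothetical fixed-effect estimates and a covariate value for the new cluster.
beta0, beta1, sigma_u = -0.5, 1.2, 0.8
eta = beta0 + beta1 * 0.7
print(f"conditional (u = 0):        {logistic(eta):.3f}")
print(f"marginal (u integrated out): {marginal_probability(eta, sigma_u):.3f}")
```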

Relevance: 100.00%

Abstract:

This study evaluates the influence of technological development on social development, based on the productive activities carried out at the Recanto II agro-industrial unit in the municipality of Catolé do Rocha, with the support of the Technological Incubator of Campina Grande (ITCG) and the Programme of Studies and Actions in the Semi-Arid Region of Paraíba of the Federal University of Campina Grande (PEASA/UFCG). Social problems in the semi-arid region arose from adverse environmental conditions and were intensified by the lack of public policies adequate to education and to the development of the region. The problem is expressed by the following question: what is the influence between the technological development and the social development generated by the productive activity of the Campina Grande Incubator, in Campina Grande, Paraíba? The relevance of this work lies in its intention to contribute to the study of the relationship between technological and social development, in an attempt to make intelligible the need for education and technological innovation for social development, especially in less developed countries. It is also useful for the formulation of public policies and technological development strategies, identifying actions and performance measures aimed at diagnosing the contribution of technological incubators to the well-being of society. The theoretical framework comprised an analysis of the role of the rationalities of the various social agents and of their contribution to social interactions, as well as an analysis of the cultural realities of business society, the State and technological incubators. The methodology employed descriptive and explanatory research, field research, documentary and bibliographic research, and a case study. The universe consisted of the people who make up the ITCG, and the sample comprised the director of the Incubator, the UFCG scientist and the community leader of Recanto II; the research subjects coincide with the sample. Data were collected from the literature on the subject and from visits to the field of study, and were treated through qualitative analysis of the phenomenon. The main limitations of the method lay in data collection, data treatment and the use of a case study, with the possibility of a biased understanding of the phenomenon studied. The practical framework presents the research results, evaluating them by comparison with the theoretical framework. The results show that the ITCG acted by articulating the groups and translating the cultural realities of each agent in favour of the development of agro-industrial communities at social risk because of the drought. The ITCG acted as a social literacy agent, teaching the social agents the cultural codes of the other groups. The inequalities that existed between the scientist and the sertanejo, deepened by the shameful history of enslavement of rural workers in the Brazilian Northeast, were minimised by the education provided by the scientist himself. The University contributed to reducing environmental illiteracy, in which the environment was seen as the main rival of the people of the sertão. The State, when acting in partnership with the other social agents, manages to remain the citizens' main instrument for controlling globalisation according to their values and interests. At the ITCG the incubation environment is educational: new entrepreneurs are put in contact with the cultural reality of business, the State and other communities of society, following purely educational models.
The conclusion is that the trajectory of the citizens of Recanto II, from slaves to exporters, was based on two great treasures: the family and religion. The merit of the ITCG was to understand, respect and preserve the cultural reality of Recanto II and to encourage social interaction with other cultural realities capable of generating development in that region. Health, child development, security and housing were achievements of Recanto II. As a suggestion, public policies should reproduce this development environment for all Brazilian citizens at social risk, using the family as the basis for fluid and harmonious social interactions among the various social agents.

Relevance: 100.00%

Abstract:

This research evaluated the quality of the management of Brazilian stock funds over the period from January 1997 to October 2006. The analysis was based on Modern Portfolio Theory measures of performance. In addition, this research evaluated the relevance of the performance measures. The sample of 21 funds was extracted from the 126 largest Brazilian stock funds because they were the only ones with quotas over the whole period. The monthly mean rate of return and the following indexes were calculated: total return, mean monthly return, Jensen Index, Treynor Index, Sharpe Index, Sortino Index, Market Timing and the Mean Quadratic Error. The initial analysis showed that the funds in the sample had different objectives and limitations. To make meaningful comparisons, the ANBID (National Association of Investment Banks) categories were used to classify the funds. The measured results were ranked. The positions of the funds in the rankings based on the mean monthly return and the indexes of Jensen, Treynor, Sortino and Sharpe were similar. All ten ACTIVE funds in this research were above the benchmark (the IBOVESPA index) on these measures. Based on the CAPM, the managers of these funds achieved superior performance, possibly because they processed the available information in a superior way. The six funds in the ANBID INDEXED classification took the first six positions in the ranking based on the Mean Quadratic Error. None of the funds studied showed market timing skill in moving the beta of their portfolios in the right direction to benefit from market movements, at the 5% significance level.
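
As a minimal sketch of the Modern Portfolio Theory measures named above (textbook formulas only; the fund data are invented), the snippet below computes beta, Jensen's alpha and the Treynor, Sharpe and Sortino indexes from monthly returns.

```python
import numpy as np

def performance_measures(fund, market, risk_free):
    """Jensen's alpha, Treynor, Sharpe and Sortino indexes from monthly
    return series (fund, benchmark/market and risk-free rate). The Sortino
    index here uses the risk-free rate as the minimum acceptable return."""
    fund, market, risk_free = map(np.asarray, (fund, market, risk_free))
    excess_fund = fund - risk_free
    excess_mkt = market - risk_free
    beta = np.cov(excess_fund, excess_mkt)[0, 1] / np.var(excess_mkt, ddof=1)
    jensen = excess_fund.mean() - beta * excess_mkt.mean()
    treynor = excess_fund.mean() / beta
    sharpe = excess_fund.mean() / fund.std(ddof=1)
    downside = np.sqrt(np.mean(np.minimum(excess_fund, 0.0) ** 2))
    sortino = excess_fund.mean() / downside
    return {"beta": beta, "jensen": jensen, "treynor": treynor,
            "sharpe": sharpe, "sortino": sortino}

# Hypothetical monthly returns over ten years (120 months).
rng = np.random.default_rng(2)
mkt = rng.normal(0.012, 0.06, size=120)
fund = 0.001 + 0.95 * mkt + rng.normal(0.0, 0.02, size=120)
rf = np.full(120, 0.008)
print(performance_measures(fund, mkt, rf))
```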