957 results for Benchmarking (Administration)
Abstract:
This dissertation sought to understand how the cultural policies of the Regional Government of the Azores, through investment in cultural facilities, influenced the democratisation of access to culture in the region between 1976 and 2008. To that end, the approved government programmes were analysed, followed by a survey of financial data on expenditure and investment in the region's cultural sector. Existing data on visitors to the selected facilities were also collected, specifically the museums under the authority of the regional public administration, the object of this case study. Once organised, these data were analysed and comparative hypotheses were constructed between them in order to summarise the evolution and trends of these figures. To understand developments over the period analysed, the most complete possible collection of the legislation created for the sector at regional level was undertaken, making it possible to analyse that consolidation. The analysis of all the data collected shows that the promotion of measures aimed at greater cultural democratisation in the Azores depends on several factors: strong financial investment in the facilities in question (such as building works and technical equipment); the development of structuring legislation; a stance of cultural decentralisation; the hiring of specialised staff and training of existing staff; and the creation of a regional network of museums. All these actions demonstrate the work of the regional administration, through the implementation of cultural policies, towards greater democratisation of access to culture for the population.
Abstract:
This paper is initial work towards developing a user-centric e-Government benchmarking model. To that end, public service delivery is discussed first, including the transition to online delivery and the need to provide public services through electronic media. Two major e-Government benchmarking methods are critically discussed, and the need to develop a standardised, user-centric benchmarking model is presented. To articulate user requirements in service provision properly, an organisational semiotic method is suggested.
Abstract:
This paper describes benchmark testing of six two-dimensional (2D) hydraulic models (DIVAST, DIVASTTVD, TUFLOW, JFLOW, TRENT and LISFLOOD-FP) in terms of their ability to simulate surface flows in a densely urbanised area. The models are applied to a 1·0 km × 0·4 km urban catchment within the city of Glasgow, Scotland, UK, and are used to simulate a flood event that occurred at this site on 30 July 2002. An identical numerical grid describing the underlying topography is constructed for each model, using a combination of airborne laser altimetry (LiDAR) fused with digital map data, and used to run a benchmark simulation. Two numerical experiments were then conducted to test the response of each model to topographic error and uncertainty over friction parameterisation. While all the models tested produce plausible results, subtle differences between particular groups of codes give considerable insight into both the practice and science of urban hydraulic modelling. In particular, the results show that the terrain data available from modern LiDAR systems are sufficiently accurate and resolved for simulating urban flows, but such data need to be fused with digital map data of building topology and land use to gain maximum benefit from the information contained therein. When such terrain data are available, uncertainty in friction parameters becomes a more dominant factor than topographic error for typical problems. The simulations also show that flows in urban environments are characterised by numerous transitions to supercritical flow and numerical shocks. However, the effects of these are localised and they do not appear to affect overall wave propagation. In contrast, inertia terms are shown to be important in this particular case, but the specific characteristics of the test site may mean that this does not hold more generally.
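The supercritical transitions noted in this abstract are diagnosed in shallow-water modelling via the Froude number Fr = v/√(gh). A minimal sketch of that diagnostic (function names and flow values are illustrative, not taken from any of the six codes tested):

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def froude_number(velocity, depth):
    """Froude number Fr = v / sqrt(g*h) for shallow-water flow."""
    if depth <= 0:
        raise ValueError("depth must be positive")
    return velocity / math.sqrt(G * depth)

def is_supercritical(velocity, depth):
    """Flow is supercritical when Fr > 1 (numerical shocks possible)."""
    return froude_number(velocity, depth) > 1.0

# Shallow, fast flow down an urban street: supercritical
print(is_supercritical(velocity=2.0, depth=0.1))   # True
# Deeper, slower floodplain flow: subcritical
print(is_supercritical(velocity=0.5, depth=1.0))   # False
```

The localised shocks reported in the paper correspond to cells where this ratio crosses 1.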
Abstract:
Routine milk recording data, often covering many years, are available for approximately half the dairy herds of England and Wales. In addition to milk yield and quality, these data include production events that can be used to derive objective Key Performance Indicators (KPIs) describing a herd's fertility and production. Recent developments in information systems give veterinarians and other technical advisers access to these KPIs online. In addition to reviewing individual herd performance, advisers can establish local benchmark groups to demonstrate the relative performance of similar herds in the vicinity. The use of existing milk recording data places no additional demands on farmers' time or resources. These developments could also readily be exploited by universities to introduce veterinary undergraduates to the realities of commercial dairy production.
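A local benchmark group of the kind described can be summarised by a herd's percentile rank on a KPI. A minimal sketch, assuming a simple "per cent of herds at or below this value" definition (the KPI values and ranking rule are hypothetical, not the recording organisation's actual formulae):

```python
def percentile_rank(value, benchmark_group):
    """Per cent of herds in the benchmark group whose KPI is at or below
    this herd's value."""
    if not benchmark_group:
        raise ValueError("benchmark group is empty")
    at_or_below = sum(1 for v in benchmark_group if v <= value)
    return 100.0 * at_or_below / len(benchmark_group)

# Hypothetical calving-interval KPIs (days) for ten nearby herds
group = [365, 370, 380, 385, 390, 395, 400, 410, 420, 430]
print(percentile_rank(385, group))  # 40.0 (lower is better for this KPI)
```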
Abstract:
Background: Selecting the highest quality 3D model of a protein structure from a number of alternatives remains an important challenge in the field of structural bioinformatics. Many Model Quality Assessment Programs (MQAPs) have been developed which adopt various strategies to tackle this problem, ranging from the so-called "true" MQAPs capable of producing a single energy score from a single model, to methods which rely on structural comparisons of multiple models or additional information from meta-servers. However, it is clear that no current method can consistently separate the highest accuracy models from the lowest. In this paper, a number of the top performing MQAP methods are benchmarked in the context of the potential value that they add to protein fold recognition. Two novel methods are also described: ModSSEA, which is based on the alignment of predicted secondary structure elements, and ModFOLD, which combines several true MQAP methods using an artificial neural network. Results: The ModSSEA method is found to be an effective model quality assessment program for ranking multiple models from many servers; however, further accuracy can be gained by using the consensus approach of ModFOLD. The ModFOLD method is shown to significantly outperform the true MQAPs tested and is competitive with methods which make use of clustering or additional information from multiple servers. Several of the true MQAPs are also shown to add value to most individual fold recognition servers by improving model selection, when applied as a post filter to re-rank models. Conclusion: MQAPs should be benchmarked appropriately for the practical context in which they are intended to be used. Clustering based methods are the top performing MQAPs where many models are available from many servers; however, they often do not add value to individual fold recognition servers when limited models are available. Conversely, the true MQAP methods tested can often be used as effective post filters for re-ranking few models from individual fold recognition servers, and further improvements can be achieved using a consensus of these methods.
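A consensus re-ranking of true MQAP scores can be sketched as follows; note this uses a simple mean of z-normalised scores rather than ModFOLD's actual neural-network combination, and all method names and scores are hypothetical:

```python
from statistics import mean, pstdev

def zscores(values):
    """Z-normalise one MQAP's scores so different score scales are comparable."""
    mu, sd = mean(values), pstdev(values)
    if sd == 0:
        return [0.0] * len(values)
    return [(v - mu) / sd for v in values]

def consensus_rerank(models, score_table):
    """score_table: dict of MQAP name -> per-model scores (higher = better).
    Returns model names re-ranked by the mean of per-MQAP z-scores."""
    z = [zscores(score_table[name]) for name in score_table]
    combined = [mean(col) for col in zip(*z)]
    return [m for _, m in sorted(zip(combined, models), reverse=True)]

models = ["model_A", "model_B", "model_C"]
scores = {"mqap1": [0.6, 0.9, 0.3], "mqap2": [0.5, 0.8, 0.4]}
print(consensus_rerank(models, scores))  # ['model_B', 'model_A', 'model_C']
```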
Abstract:
Supplier selection has a great impact on supply chain management. The quality of supplier selection also affects the profitability of organisations working in the supply chain. As suppliers can provide a variety of services and customers demand ever higher quality of service provision, organisations face the challenge of making the right choice of supplier for the right needs. Existing methods for supplier selection, such as data envelopment analysis (DEA) and the analytic hierarchy process (AHP), can automatically select competitive suppliers and decide the winning supplier(s). However, these methods are not capable of determining the right selection criteria, which should be derived from the business strategy. The ontology model described in this paper integrates the strengths of DEA and AHP with new mechanisms which ensure that the right supplier is selected by the right criteria for the right customer needs.
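The AHP step mentioned above derives criteria weights from a pairwise-comparison matrix; a common approximation to the principal eigenvector is the normalised geometric mean of the rows. A minimal sketch with a hypothetical matrix on Saaty's 1-9 scale (the criteria and judgements are illustrative, not from the paper's ontology model):

```python
def ahp_priorities(pairwise):
    """Priority weights from an AHP pairwise-comparison matrix, using the
    normalised geometric mean of rows (eigenvector approximation)."""
    n = len(pairwise)
    geo_means = []
    for row in pairwise:
        product = 1.0
        for a in row:
            product *= a
        geo_means.append(product ** (1.0 / n))
    total = sum(geo_means)
    return [g / total for g in geo_means]

# Criteria: price, quality, delivery reliability (hypothetical judgements)
matrix = [
    [1.0,   3.0,   5.0],   # price vs quality, price vs delivery
    [1/3,   1.0,   3.0],
    [1/5,   1/3,   1.0],
]
weights = ahp_priorities(matrix)
print([round(w, 3) for w in weights])  # price weighted highest
```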
Abstract:
Purpose – The paper addresses the practical problems which emerge when attempting to apply longitudinal approaches to the assessment of property depreciation using valuation-based data. These problems relate to inconsistent valuation regimes and the difficulties in finding appropriate benchmarks. Design/methodology/approach – The paper adopts a case study of seven major office locations around Europe and attempts to determine ten-year rental value depreciation rates based on a longitudinal approach using IPD, CBRE and BNP Paribas datasets. Findings – The depreciation rates range from a 5 per cent PA depreciation rate in Frankfurt to a 2 per cent appreciation rate in Stockholm. The results are discussed in the context of the difficulties in applying this method with inconsistent data. Research limitations/implications – The paper has methodological implications for measuring property investment depreciation and provides an example of the problems in adopting theoretically sound approaches with inconsistent information. Practical implications – Valuations play an important role in performance measurement and cross border investment decision making and, therefore, knowledge of inconsistency of valuation practice aids decision making and informs any application of valuation-based data in the attainment of depreciation rates. Originality/value – The paper provides new insights into the use of property market valuation data in a cross-border context, insights that previously had been anecdotal and unproven in nature.
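A longitudinal depreciation rate of the kind estimated in the paper can be sketched as the annual rental growth of a benchmark (e.g. prime) stock minus that of the ageing subject stock; the figures below are hypothetical, not drawn from the IPD, CBRE or BNP Paribas datasets:

```python
def annual_depreciation_rate(subject_start, subject_end,
                             bench_start, bench_end, years):
    """Relative (longitudinal) depreciation: compound annual rental growth
    of the benchmark stock minus that of the ageing subject stock."""
    subject_growth = (subject_end / subject_start) ** (1 / years) - 1
    bench_growth = (bench_end / bench_start) ** (1 / years) - 1
    return bench_growth - subject_growth

# Hypothetical: over 10 years prime rents grow 20%, the sample's rents 5%
rate = annual_depreciation_rate(100, 105, 100, 120, 10)
print(round(100 * rate, 2))  # ~1.35 per cent PA depreciation
```

A negative result would correspond to appreciation, as reported for Stockholm.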
Abstract:
Herd Companion uses routine milk‐recording records to generate twelve‐month rolling averages that indicate performance trends. This article looks at Herd Somatic Cell Count (SCC) and four other SCC‐related parameters from 252 National Milk Records (NMR) recorded herds to assess how each parameter correlates with the Herd SCC. The analysis provides evidence for the importance of targeting individual cows with high SCC recordings (>200,000 cells/ml and >500,000 cells/ml) and/or individual cows with repeatedly high SCC recordings (chronic high SCC) and/or cows that begin lactation with a high SCC recording (dry period infection) in order to achieve bulk milk Herd SCC below 200,000 cells/ml.
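The cow-level targeting rules described can be sketched as simple threshold flags; the exact definitions of "chronic high SCC" used by Herd Companion are not reproduced here, so the rules below are illustrative only:

```python
def scc_flags(history, threshold=200):
    """Classify a cow from consecutive monthly SCC recordings
    ('000 cells/ml). Thresholds and the two-recording 'chronic'
    rule are illustrative, not NMR's definitions."""
    latest = history[-1]
    return {
        "high": latest > threshold,          # >200,000 cells/ml
        "very_high": latest > 500,           # >500,000 cells/ml
        "chronic": len(history) >= 2 and all(v > threshold
                                             for v in history[-2:]),
    }

print(scc_flags([150, 180, 250]))  # high at last recording, not chronic
print(scc_flags([300, 420, 610]))  # chronic and very high
```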
Abstract:
If secondary structure predictions are to be incorporated into fold recognition methods, an assessment of the effect of specific types of errors in predicted secondary structures on the sensitivity of fold recognition should be carried out. Here, we present a systematic comparison of different secondary structure prediction methods by measuring frequencies of specific types of error. We carry out an evaluation of the effect of specific types of error on secondary structure element alignment (SSEA), a baseline fold recognition method. The results of this evaluation indicate that missing out whole helix or strand elements, or predicting the wrong type of element, is more detrimental than predicting the wrong lengths of elements or overpredicting helix or strand. We also suggest that SSEA scoring is an effective method for assessing accuracy of secondary structure prediction and perhaps may also provide a more appropriate assessment of the “usefulness” and quality of predicted secondary structure, if secondary structure alignments are to be used in fold recognition.
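Comparing secondary structures at the element level, as SSEA does, starts by collapsing a per-residue prediction into elements. A minimal sketch of that step (not the actual SSEA alignment or scoring scheme; the strings are hypothetical):

```python
def ss_elements(ss):
    """Collapse a per-residue secondary-structure string (H=helix,
    E=strand, C=coil) into a list of (type, length) elements."""
    elements = []
    for ch in ss:
        if elements and elements[-1][0] == ch:
            elements[-1][1] += 1
        else:
            elements.append([ch, 1])
    return [(t, n) for t, n in elements]

predicted = ss_elements("CCHHHHCCEEEC")
observed = ss_elements("CCHHHHHCCEEC")
print(predicted)  # [('C', 2), ('H', 4), ('C', 2), ('E', 3), ('C', 1)]

# Element types match here; only lengths differ - per the abstract,
# the less detrimental class of error for fold recognition.
same_types = [t for t, _ in predicted] == [t for t, _ in observed]
print(same_types)  # True
```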
Abstract:
Commercial kitchens often leave a large carbon footprint. A new dataset of energy performance metrics from a leading industrial partner is presented. Categorising these types of buildings is challenging. Electricity use has been analysed using data from automated meter readings (AMR) for the purpose of benchmarking and is discussed in terms of factors such as size and food output. From the analysis, consumption is found to be almost double the previous sector estimate of 6480 million kWh per year. Recommendations are made to further improve the current benchmarks in order to attain robust, reliable and transparent figures, such as the introduction of normalised performance indicators that account for kitchen size (m2) and kWh per thousand-pound turnover.
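The proposed normalised performance indicators can be sketched directly from their stated units (kWh per m2 of kitchen, and kWh per thousand pounds of turnover); the kitchen figures below are hypothetical:

```python
def normalised_performance_indicators(annual_kwh, kitchen_area_m2,
                                      turnover_gbp):
    """NPIs of the kind proposed in the abstract:
    kWh per m2 and kWh per GBP 1,000 of turnover."""
    return {
        "kwh_per_m2": annual_kwh / kitchen_area_m2,
        "kwh_per_k_gbp": annual_kwh / (turnover_gbp / 1000.0),
    }

# Hypothetical kitchen: 150,000 kWh/yr, 120 m2, GBP 900,000 turnover
npi = normalised_performance_indicators(150_000, 120, 900_000)
print(npi)  # {'kwh_per_m2': 1250.0, 'kwh_per_k_gbp': 166.66...}
```

Normalising in this way lets kitchens of very different sizes and outputs be compared on one benchmark scale.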
Abstract:
Commercial kitchens are one of the most profligate users of gas, water and electricity in the UK and can leave a large carbon footprint. It is estimated that the total energy consumption of Britain’s catering industry is in excess of 21,600 million kWh per year. In order to facilitate appropriate energy reduction within licensed restaurants, energy use must be translated into a form that can be compared between kitchens to enable operators to assess how they are improving and to allow rapid identification of facilities which require action. A review of relevant literature is presented and current benchmarking methods are discussed in order to assist in the development and categorisation of benchmarking energy reduction in commercial kitchens. Energy use within UK industry leading brands is discussed for the purpose of benchmarking in terms of factors such as size and output.
Abstract:
We present a benchmark system for global vegetation models. This system provides a quantitative evaluation of multiple simulated vegetation properties, including primary production; seasonal net ecosystem production; vegetation cover, composition and height; fire regime; and runoff. The benchmarks are derived from remotely sensed gridded datasets and site-based observations. The datasets allow comparisons of annual average conditions and seasonal and inter-annual variability, and they allow the impact of spatial and temporal biases in means and variability to be assessed separately. Specifically designed metrics quantify model performance for each process, and are compared to scores based on the temporal or spatial mean value of the observations and a “random” model produced by bootstrap resampling of the observations. The benchmark system is applied to three models: a simple light-use efficiency and water-balance model (the Simple Diagnostic Biosphere Model: SDBM), and the Lund-Potsdam-Jena (LPJ) and Land Processes and eXchanges (LPX) dynamic global vegetation models (DGVMs). SDBM reproduces observed CO2 seasonal cycles, but its simulation of independent measurements of net primary production (NPP) is too high. The two DGVMs show little difference for most benchmarks (including the interannual variability in the growth rate and seasonal cycle of atmospheric CO2), but LPX represents burnt fraction demonstrably more accurately. Benchmarking also identified several weaknesses common to both DGVMs. The benchmarking system provides a quantitative approach for evaluating how adequately processes are represented in a model, identifying errors and biases, tracking improvements in performance through model development, and discriminating among models. Adoption of such a system would do much to improve confidence in terrestrial model predictions of climate change impacts and feedbacks.
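The "random" model baseline described, produced by bootstrap resampling of the observations, can be sketched as follows (the metric and observation values are hypothetical, not the paper's actual benchmark metrics):

```python
import random

def bootstrap_baseline_score(observations, metric,
                             n_resamples=1000, seed=42):
    """Score of a 'random' model: resample the observations with
    replacement and evaluate each resample against the observations.
    A real model should beat the average resample score."""
    rng = random.Random(seed)
    scores = []
    for _ in range(n_resamples):
        resample = [rng.choice(observations) for _ in observations]
        scores.append(metric(resample, observations))
    return sum(scores) / len(scores)

def mean_absolute_error(pred, obs):
    return sum(abs(p - o) for p, o in zip(pred, obs)) / len(obs)

obs = [0.2, 0.5, 0.9, 1.4, 2.0]  # hypothetical gridded NPP values
baseline = bootstrap_baseline_score(obs, mean_absolute_error)
print(round(baseline, 3))  # the MAE a model must beat to show skill
```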