928 results for HEVC Performance Modelling


Relevance: 30.00%

Publisher:

Abstract:

There is a strong drive towards hyperresolution earth system models in order to resolve finer scales of motion in the atmosphere. The problem of obtaining a more realistic representation of terrestrial fluxes of heat and water, however, is not just a problem of moving to hyperresolution grid scales. It is much more a question of a lack of knowledge about the parameterisation of processes at whatever grid scale is being used for a wider modelling problem. Hyperresolution grid scales cannot alone solve the problem of this hyperresolution ignorance. This paper discusses these issues in more detail with specific reference to land surface parameterisations and flood inundation models. The importance of making local hyperresolution model predictions available for evaluation by local stakeholders is stressed. It is expected that this will be a major driving force for improving model performance in the future.

Authors: Keith Beven, Hannah Cloke, Florian Pappenberger, Rob Lamb, Neil Hunter


The complexity of current and emerging architectures provides users with options about how best to use the available resources, but makes predicting performance challenging. In this work a benchmark-driven model is developed for a simple shallow water code on a Cray XE6 system, to explore how deployment choices such as domain decomposition and core affinity affect performance. The resource sharing present in modern multi-core architectures adds various levels of heterogeneity to the system. Shared resources often include cache, memory, network controllers and, in some cases, floating point units (as in the AMD Bulldozer), which means that access times depend on the mapping of application tasks and on each core's location within the system. Heterogeneity further increases with the use of hardware accelerators such as GPUs and the Intel Xeon Phi, where many specialist cores are attached to general-purpose cores. This trend for shared resources and non-uniform cores is expected to continue into the exascale era. The complexity of these systems means that various runtime scenarios are possible, and it has been found that under-populating nodes, altering the domain decomposition and non-standard task-to-core mappings can dramatically alter performance. Finding this out, however, is often a process of trial and error. To better inform this process, a performance model was developed for a simple regular grid-based kernel code, shallow. The code comprises two distinct types of work: loop-based array updates and nearest-neighbour halo exchanges. Separate performance models were developed for each part, both based on a similar methodology. Application-specific benchmarks were run to measure performance for different problem sizes under different execution scenarios. These results were then fed into a performance model that derives resource usage for a given deployment scenario, with interpolation between results as necessary.
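The interpolation step described above can be sketched in a few lines: store per-step benchmark timings for the two work types (array updates and halo exchanges) at a handful of problem sizes, then interpolate to predict an unmeasured size. This is a minimal illustration; the timing numbers below are invented, not measurements from a Cray XE6.

```python
import numpy as np

# Hypothetical benchmark results for one deployment scenario
# (illustrative numbers only, not measured data).
sizes = np.array([256, 512, 1024, 2048])        # grid points per side
compute_t = np.array([0.8, 3.1, 12.5, 50.2])    # ms per timestep, array updates
halo_t = np.array([0.05, 0.09, 0.18, 0.35])     # ms per timestep, halo exchange

def predict_time(n):
    """Interpolate the benchmark tables to predict per-step time for size n."""
    return np.interp(n, sizes, compute_t) + np.interp(n, sizes, halo_t)
```

A separate table would be held for each deployment scenario (decomposition, core mapping, node population), and the model would look up or interpolate within the matching table.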


Despite the generally positive contribution of supply management capabilities to firm performance, their respective routines require more depth of assessment. Using the resource-based view, we examine four routine bundles comprising ostensive and performative aspects of supply management capability: supply management integration, coordinated sourcing, collaboration management and performance assessment. Using structural equation modelling, we measure supply management capability empirically as a second-order latent variable and estimate its effect on a series of financial and operational performance measures. The routines-based approach allows us to demonstrate a different, more fine-grained approach for assessing consistent bundles of homogeneous patterns of activity across firms. The results suggest supply management capability is formed of internally consistent routine bundles, which are significantly related to financial performance, mediated by operational performance. Our results confirm an indirect effect on firm performance for 'core' routines forming the architecture of a supply management capability. Supply management capability primarily improves the operational performance of the business, which is subsequently translated into improved financial performance. The study is significant for practice as it offers a different view of the face-valid rationale of supply management directly influencing firm financial performance. We challenge this assumption, prompting caution when placing too much importance on directly assessing supply management capability using the financial performance of the business.


IntFOLD is an independent web server that integrates our leading methods for structure and function prediction. The server provides a simple unified interface that aims to make complex protein modelling data more accessible to life scientists. The server web interface is designed to be intuitive and integrates a complex set of quantitative data, so that 3D modelling results can be viewed on a single page and interpreted by non-expert modellers at a glance. The only required input to the server is an amino acid sequence for the target protein. Here we describe major performance and user interface updates to the server, which comprises an integrated pipeline of methods for: tertiary structure prediction, global and local 3D model quality assessment, disorder prediction, structural domain prediction, function prediction and modelling of protein-ligand interactions. The server has been independently validated during numerous CASP (Critical Assessment of Techniques for Protein Structure Prediction) experiments, as well as being continuously evaluated by the CAMEO (Continuous Automated Model Evaluation) project. The IntFOLD server is available at: http://www.reading.ac.uk/bioinf/IntFOLD/


When studying hydrological processes with a numerical model, global sensitivity analysis (GSA) is essential if one is to understand the impact of model parameters and model formulation on results. However, different definitions of sensitivity can lead to a difference in the ranking of importance of the different model factors. Here we combine a fuzzy performance function with different methods of calculating global sensitivity to perform a multi-method global sensitivity analysis (MMGSA). We use an application of a finite element subsurface flow model (ESTEL-2D) on a flood inundation event on a floodplain of the River Severn to illustrate this new methodology. We demonstrate the utility of the method for model understanding and show how the prediction of state variables, such as Darcian velocity vectors, can be affected by such a MMGSA. This paper is a first attempt to use GSA with a numerically intensive hydrological model.
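The core idea of a multi-method GSA, that different sensitivity definitions can rank factors differently, can be illustrated with a toy stand-in for the simulator. The sketch below (my own illustration, not the ESTEL-2D methodology) compares a linear-correlation measure with a variance-of-conditional-means measure on a model with one quadratic factor, which the linear measure misses.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model standing in for an expensive simulator (illustrative only):
# strong linear effect of x1, purely quadratic effect of x2, weak x3.
def model(x1, x2, x3):
    return 4.0 * x1 + x2 ** 2 + 0.1 * x3

n = 5000
X = rng.uniform(-1, 1, size=(n, 3))
y = model(X[:, 0], X[:, 1], X[:, 2])

def corr_sensitivity(X, y):
    """Method 1: absolute linear correlation of each factor with the output."""
    return np.array([abs(np.corrcoef(X[:, i], y)[0, 1]) for i in range(X.shape[1])])

def binned_sensitivity(X, y, bins=20):
    """Method 2: variance of binned conditional means, normalised by var(y)."""
    s = []
    for i in range(X.shape[1]):
        idx = np.digitize(X[:, i], np.linspace(-1, 1, bins))
        means = [y[idx == b].mean() for b in np.unique(idx)]
        s.append(np.var(means) / np.var(y))
    return np.array(s)
```

Comparing the two rankings factor by factor is exactly the kind of cross-check a multi-method analysis formalises: the correlation measure assigns x2 almost no importance, while the binned measure detects its quadratic effect.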


This paper uses a novel numerical optimization technique, robust optimization, that is well suited to solving the asset-liability management (ALM) problem for pension schemes. It requires the estimation of fewer stochastic parameters, reduces estimation risk and adopts a prudent approach to asset allocation. This study is the first to apply it to a real-world pension scheme, and the first ALM model of a pension scheme to maximise the Sharpe ratio. We disaggregate pension liabilities into three components: active members, deferred members and pensioners, and transform the optimal asset allocation into the scheme's projected contribution rate. The robust optimization model is extended to include liabilities and used to derive optimal investment policies for the Universities Superannuation Scheme (USS), benchmarked against the Sharpe and Tint, Bayes-Stein, and Black-Litterman models as well as the actual USS investment decisions. Over a 144-month out-of-sample period, robust optimization is superior to the four benchmarks across 20 performance criteria, and has a remarkably stable asset allocation, essentially fix-mix. These conclusions are supported by six robustness checks.
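One simple form of robust Sharpe-ratio maximisation can be sketched as follows: treat each estimated mean return as uncertain within a box, and maximise the Sharpe ratio under the worst admissible means. This is a generic two-asset illustration with made-up numbers, not the paper's model or USS data.

```python
import numpy as np

# Illustrative two-asset example (hypothetical numbers).
mu = np.array([0.08, 0.05])        # estimated mean returns
delta = np.array([0.03, 0.01])     # uncertainty box around each estimate
cov = np.array([[0.04, 0.01],
                [0.01, 0.02]])     # return covariance

def worst_case_sharpe(w):
    """Sharpe ratio under the worst admissible means (mu - delta for long-only w)."""
    ret = w @ (mu - delta)
    vol = np.sqrt(w @ cov @ w)
    return ret / vol

# Grid search over long-only weights summing to one.
grid = np.linspace(0.0, 1.0, 1001)
weights = np.column_stack([grid, 1.0 - grid])
best = weights[np.argmax([worst_case_sharpe(w) for w in weights])]
```

Because the worst case shrinks the more uncertain estimate most, the robust allocation tilts away from the asset with the wider uncertainty box, which is one mechanism behind the stable, prudent allocations the paper reports.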


This study analyses the influence of vegetation structure (i.e. leaf area index and canopy cover) and seasonal background changes on Moderate Resolution Imaging Spectroradiometer (MODIS)-simulated reflectance data in open woodland. Approximately monthly spectral reflectance and transmittance field measurements (May 2011 to October 2013) of cork oak tree leaves (Quercus suber) and of the herbaceous understorey were recorded in the region of Ribatejo, Portugal. The geometric-optical and radiative transfer (GORT) model was used to simulate MODIS response (red, near-infrared) and to calculate vegetation indices, investigating their response to changes in the structure of the overstorey vegetation and to seasonal changes in the understorey using scenarios corresponding to contrasting phenological status (dry season vs. wet season). The performance of the normalized difference vegetation index (NDVI), soil-adjusted vegetation index (SAVI), and enhanced vegetation index (EVI) is discussed. Results showed that SAVI and EVI were very sensitive to the emergence of background vegetation in the wet season compared to NDVI, and that shading effects lead to an opposing trend in the vegetation indices. The information provided by this research can be useful to improve our understanding of the temporal dynamics of vegetation, monitored by vegetation indices.
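The three indices compared in the study have standard definitions, sketched below with the commonly used coefficients (SAVI soil factor L = 0.5; MODIS EVI coefficients G = 2.5, C1 = 6, C2 = 7.5, L = 1). The dry/wet reflectance values are illustrative, not the paper's measurements.

```python
def ndvi(nir, red):
    """Normalized difference vegetation index."""
    return (nir - red) / (nir + red)

def savi(nir, red, L=0.5):
    """Soil-adjusted vegetation index; L dampens the soil background signal."""
    return (1 + L) * (nir - red) / (nir + red + L)

def evi(nir, red, blue):
    """Enhanced vegetation index with the standard MODIS coefficients."""
    return 2.5 * (nir - red) / (nir + 6 * red - 7.5 * blue + 1)

# Hypothetical dry-season vs wet-season band reflectances.
dry = (0.30, 0.12, 0.06)   # nir, red, blue
wet = (0.45, 0.08, 0.04)
```

Because SAVI and EVI include explicit background-correction terms, they respond more strongly than NDVI when green understorey emerges in the wet season, which is the behaviour the study reports.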


New models for estimating bioaccumulation of persistent organic pollutants in the agricultural food chain were developed using recent improvements to plant uptake and cattle transfer models. One model, named AgriSim, was based on K_OW regressions of bioaccumulation in plants and cattle, while the other was a steady-state mechanistic model, AgriCom. The two developed models and the European Union System for the Evaluation of Substances (EUSES), as a benchmark, were applied to four reported food chain (soil/air-grass-cow-milk) scenarios to evaluate the performance of each model simulation against the observed data. The four scenarios considered were as follows: (1) polluted soil and air, (2) polluted soil, (3) highly polluted soil surface and polluted subsurface and (4) polluted soil and air at different mountain elevations. AgriCom reproduced observed milk bioaccumulation well for all four scenarios, as did AgriSim for scenarios 1 and 2, but EUSES only did this for scenario 1. The main causes of the deviation for EUSES and AgriSim were the lack of the soil-air-plant pathway and the ambient air-plant pathway, respectively. Based on the results, it is recommended that the soil-air-plant and ambient air-plant pathways should be calculated separately and that the K_OW regression of the transfer factor to milk used in EUSES be avoided. AgriCom satisfied the recommendations, which led to low residual errors between the simulated and the observed bioaccumulation in the agricultural food chain for the four scenarios considered. It is therefore recommended that this model should be incorporated into regulatory exposure assessment tools. The model uncertainty of the three models should be noted, since the simulated concentration in milk from the 5th to the 95th percentile of the uncertainty analysis often varied over two orders of magnitude. Using a measured value of soil organic carbon content was effective in reducing this uncertainty by one order of magnitude.
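A K_OW-regression food-chain model of the kind described chains simple transfer factors multiplicatively along soil, grass, cow and milk. The sketch below shows only the generic structure; the coefficients A and B and all inputs are hypothetical placeholders, not AgriSim, AgriCom or EUSES values.

```python
# Generic log-linear K_OW regression: log10(TF) = A * log10(Kow) + B.
# A and B are hypothetical placeholders, not fitted model coefficients.
A, B = 0.5, -4.0

def transfer_factor(log_kow):
    """Transfer factor predicted from a chemical's log Kow."""
    return 10 ** (A * log_kow + B)

def milk_concentration(c_soil, tf_soil_grass, tf_feed_milk, intake_kg_per_day):
    """Chain soil -> grass -> cow -> milk as simple multiplications."""
    c_grass = c_soil * tf_soil_grass          # concentration in grass
    daily_dose = c_grass * intake_kg_per_day  # cow's daily intake of pollutant
    return daily_dose * tf_feed_milk          # carried over into milk
```

The mechanistic AgriCom model replaces these single regressions with separate soil-air-plant and ambient air-plant pathways, which is why it tracks the observations across all four scenarios.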


This paper proposes a novel adaptive multiple modelling algorithm for non-linear and non-stationary systems. This simple modelling paradigm comprises K candidate sub-models, which are all linear. With data available in an online fashion, the performance of all candidate sub-models is monitored based on the most recent data window, and the M best sub-models are selected from the K candidates. The weight coefficients of the selected sub-models are adapted via the recursive least squares (RLS) algorithm, while the coefficients of the remaining sub-models are unchanged. These M model predictions are then optimally combined to produce the multi-model output. We propose to minimise the mean square error based on a recent data window, and apply the sum-to-one constraint to the combination parameters, leading to a closed-form solution, so that maximal computational efficiency can be achieved. In addition, at each time step, the model prediction is chosen from either the resultant multiple model or the best sub-model, whichever is better. Simulation results are given in comparison with some typical alternatives, including the linear RLS algorithm and a number of online non-linear approaches, in terms of modelling performance and time consumption.
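Two of the ingredients named above, the RLS update for a linear sub-model and the sum-to-one constrained combination with a closed-form solution, can be sketched directly. This is a minimal illustration of those standard building blocks, not the paper's full selection/switching algorithm.

```python
import numpy as np

class RLSModel:
    """One linear sub-model updated by recursive least squares."""
    def __init__(self, dim):
        self.theta = np.zeros(dim)
        self.P = np.eye(dim) * 1e3   # large initial covariance

    def predict(self, x):
        return self.theta @ x

    def update(self, x, y):
        k = self.P @ x / (1.0 + x @ self.P @ x)      # gain vector
        self.theta += k * (y - self.theta @ x)       # correct by prediction error
        self.P -= np.outer(k, x @ self.P)            # covariance downdate

def combine(preds, y):
    """Sum-to-one least-squares combination weights over a data window.

    preds: (window, M) sub-model predictions; y: (window,) targets.
    Closed form via a Lagrange multiplier for the equality constraint."""
    R = preds.T @ preds + 1e-9 * np.eye(preds.shape[1])  # small ridge for stability
    r = preds.T @ y
    ones = np.ones(preds.shape[1])
    Rinv_r = np.linalg.solve(R, r)
    Rinv_1 = np.linalg.solve(R, ones)
    lam = (ones @ Rinv_r - 1.0) / (ones @ Rinv_1)
    return Rinv_r - lam * Rinv_1
```

The Lagrange-multiplier form keeps the combination a single linear solve per window, which is the source of the computational efficiency the abstract emphasises.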


Species' potential distribution modelling consists of building a representation of the fundamental ecological requirements of a species from biotic and abiotic conditions where the species is known to occur. Such models can be valuable tools to understand the biogeography of species and to support the prediction of its presence/absence considering a particular environment scenario. This paper investigates the use of different supervised machine learning techniques to model the potential distribution of 35 plant species from Latin America. Each technique was able to extract a different representation of the relations between the environmental conditions and the distribution profile of the species. The experimental results highlight the good performance of random tree classifiers, indicating this particular technique as a promising candidate for modelling species' potential distribution.
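The general workflow, learning a presence/absence classifier from environmental conditions at known occurrence sites, can be sketched with a tree-ensemble classifier. The environmental layers and the species response rule below are synthetic inventions for illustration; the code is not the paper's experimental setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic "environmental layers": temperature and rainfall at sample sites.
n = 1000
temp = rng.uniform(0, 40, n)       # degrees C
rain = rng.uniform(0, 3000, n)     # mm per year
# Hypothetical species present only in warm, wet conditions.
present = ((temp > 18) & (rain > 1200)).astype(int)

X = np.column_stack([temp, rain])
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, present)

# Predict presence probability under a new environmental scenario.
prob = clf.predict_proba([[25.0, 2000.0]])[0, 1]
```

Once fitted, the same classifier can be evaluated over a gridded climate scenario to map the species' potential distribution.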


Credit scoring modelling comprises one of the leading formal tools for supporting the granting of credit. Its core objective consists of the generation of a score by means of which potential clients can be listed in the order of the probability of default. A critical factor is whether a credit scoring model is accurate enough to provide correct classification of the client as a good or bad payer. In this context the concept of bootstrap aggregating (bagging) arises. The basic idea is to generate multiple classifiers by obtaining the predicted values from the models fitted to several replicated datasets and then combining them into a single predictive classification in order to improve the classification accuracy. In this paper we propose a new bagging-type variant procedure, which we call poly-bagging, consisting of combining predictors over a succession of resamplings. The study is motivated by credit scoring modelling. The proposed poly-bagging procedure was applied to several artificial datasets and to a real granting-of-credit dataset, up to three successions of resamplings. We observed better classification accuracy for the two-bagged and the three-bagged models for all considered setups. These results give a strong indication that the poly-bagging approach may improve the modelling performance measures, while keeping a flexible and straightforward bagging-type structure that is easy to implement.
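The idea of "combining predictors over a succession of resamplings" can be sketched as bagging applied recursively: a two-bagged model is a bagged ensemble whose base learners are themselves bagged ensembles, each fitted on a resample. The weak base classifier and the credit-score data below are my own illustrative inventions, not the paper's procedure or dataset.

```python
import numpy as np

rng = np.random.default_rng(2)

def fit_stump(X, y):
    """Weak base classifier: threshold feature 0 at the midpoint of class means."""
    t = (X[y == 0, 0].mean() + X[y == 1, 0].mean()) / 2
    return lambda Z: (Z[:, 0] > t).astype(int)

def bag(X, y, fit, B=25):
    """One bagging round: fit on B bootstrap resamples, then majority-vote."""
    models = []
    for _ in range(B):
        idx = rng.integers(0, len(y), len(y))
        models.append(fit(X[idx], y[idx]))
    return lambda Z: (np.mean([m(Z) for m in models], axis=0) > 0.5).astype(int)

def poly_bag(X, y, levels=2, B=25):
    """Succession of resamplings: each level bags the previous fitting procedure."""
    fit = fit_stump
    for _ in range(levels):
        prev = fit
        fit = lambda Xi, yi, f=prev: bag(Xi, yi, f, B)
    return fit(X, y)

# Hypothetical one-feature credit score with noisy good/bad labels.
X_data = rng.normal(size=(200, 1))
y_data = (X_data[:, 0] + 0.3 * rng.normal(size=200) > 0).astype(int)

clf2 = poly_bag(X_data, y_data, levels=2)       # the "two-bagged" model
acc = (clf2(X_data) == y_data).mean()
```

Each extra level multiplies the number of fitted base models by B, so in practice the number of levels is kept small, which matches the paper's use of at most three successions.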


The study reported here is part of a large project for evaluation of the Thermo-Chemical Accumulator (TCA), a technology under development by the Swedish company ClimateWell AB. The studies concentrate on the use of the technology for comfort cooling. This report concentrates on measurements in the laboratory, modelling and system simulation. The TCA is a three-phase absorption heat pump that stores energy in the form of crystallised salt, in this case lithium chloride (LiCl), with water being the other substance. The process requires vacuum conditions, as with standard absorption chillers using LiBr/water. Measurements were carried out in the laboratories at the Solar Energy Research Center SERC at Högskolan Dalarna as well as at ClimateWell AB. The measurements at SERC were performed on prototype version 7:1 and showed that this prototype had several problems resulting in poor and unreliable performance. The main results were that: there was significant corrosion leading to non-condensable gases that in turn caused very poor performance; unwanted crystallisation caused blockages as well as inconsistent behaviour; poor wetting of the heat exchangers resulted in relatively high temperature drops there. A measured thermal COP for cooling of 0.46 was found, which is significantly lower than the theoretical value. These findings resulted in a thorough redesign for the new prototype, called ClimateWell 10 (CW10), which was tested briefly by the authors at ClimateWell. The data collected were limited, but enough to show that the machine worked consistently with no noticeable vacuum problems. They were also sufficient for identifying the main parameters in a simulation model developed for the TRNSYS simulation environment, but not enough to verify the model properly. This model was shown to be able to simulate the dynamic as well as static performance of the CW10, and was then used in a series of system simulations.
A single system model was developed as the basis of the system simulations, consisting of a CW10 machine, 30 m2 of flat plate solar collectors with a backup boiler, and an office in Stockholm with a design cooling load of 50 W/m2, resulting in a 7.5 kW design load for the 150 m2 floor area. Two base cases were defined from this: one for Stockholm using a dry cooler with a design cooling rate of 30 kW, and one for Madrid with a cooling tower with a design cooling rate of 34 kW. A number of parametric studies were performed based on these two base cases. These showed that the temperature lift is a limiting factor for cooling at higher ambient temperatures and for charging with a fixed-temperature source such as district heating. The simulated evacuated tube collector performs only marginally better than a good flat plate collector when considering the gross area, the margin being greater for larger solar fractions. For a 30 m2 collector, solar fractions of 49% and 67% were achieved for the Stockholm and Madrid base cases respectively. The average annual efficiency of the collector in Stockholm (12%) was much lower than that in Madrid (19%). The thermal COP was simulated to be approximately 0.70, but it has not been possible to verify this with measured data. The annual electrical COP was shown to be very dependent on the cooling load, as a large proportion of electricity use is for components that are permanently on. For the cooling loads studied, the annual electrical COP ranged from 2.2 for a 2000 kWh cooling load to 18.0 for a 21000 kWh cooling load. There is, however, potential to reduce the electricity consumption of the machine, which would improve these figures significantly. It was shown that a cooling tower is necessary for the Madrid climate, whereas a dry cooler is sufficient for Stockholm, although a cooling tower does improve performance. The simulation study was only preliminary and has identified a number of areas that are important to study in more depth.
One such area is advanced control strategy, which is necessary to mitigate the weakness of the technology (low temperature lift for cooling) and to optimally use its strength (storage).
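The strong dependence of annual electrical COP on cooling load follows directly from a fixed standby consumption: electricity for permanently-on components dominates at low loads. The sketch below shows that arithmetic with hypothetical machine parameters; the values are not the measured CW10 figures.

```python
# Illustrative only: hypothetical machine parameters, not measured CW10 values.
P_STANDBY_KW = 0.10        # components that are permanently on
E_PER_KWH_COOL = 0.02      # kWh of electricity per kWh of cooling delivered
HOURS_PER_YEAR = 8760

def annual_electrical_cop(cooling_kwh):
    """Annual cooling delivered divided by annual electricity consumed."""
    electricity = P_STANDBY_KW * HOURS_PER_YEAR + E_PER_KWH_COOL * cooling_kwh
    return cooling_kwh / electricity
```

With any such fixed standby term the electrical COP rises steeply with cooling load, reproducing the qualitative behaviour reported in the study.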


Introduction
Performance in cross-country skiing is influenced by the skier's ability to continuously produce propelling forces and force magnitude in relation to the net external forces. A surrogate indicator of the "power supply" in cross-country skiing would be a physiological variable that reflects an important performance-related capability, whereas the body mass itself is an indicator of the "power demand" experienced by the skier. To adequately evaluate an elite skier's performance capability, it is essential to establish the optimal ratio between the physiological variable and body mass. The overall aim of this doctoral thesis was to investigate the importance of body-mass exponent optimization for the evaluation of performance capability in cross-country skiing.

Methods
In total, 83 elite cross-country skiers (56 men and 27 women) volunteered to participate in the four studies. The physiological variables of maximal oxygen uptake (V̇O2max) and oxygen uptake corresponding to a blood-lactate concentration of 4 mmol∙l-1 (V̇O2obla) were determined during treadmill roller skiing using the diagonal-stride technique; mean oxygen uptake (V̇O2dp) and upper-body power output (Ẇ) were determined during double-poling tests using a ski ergometer. Competitive performance data for elite male skiers were collected from two 15-km classical-technique skiing competitions and a 1.25-km sprint prologue; additionally, a 2-km roller-skiing time trial using the double-poling technique was used as an indicator of upper-body performance capability among elite male and female junior skiers. Power-function modelling was used to explain the race and time-trial speeds based on the physiological variables and body mass.
Results
The optimal V̇O2max-to-mass ratios to explain 15-km race speed were V̇O2max divided by body mass raised to the 0.48 and 0.53 power, and these models explained 68% and 69% of the variance in mean skiing speed, respectively; moreover, the 95% confidence intervals (CI) for the body-mass exponents did not include either 0 or 1. For the modelling of race speed in the sprint prologue, body mass failed to contribute to the models based on V̇O2max, V̇O2obla, and V̇O2dp. The upper-body power output-to-body mass ratio that optimally explained time-trial speed was Ẇ ∙ m-0.57, and the model explained 63% of the variance in speed.

Conclusions
The results in this thesis suggest that V̇O2max divided by the square root of body mass should be used as an indicator of performance in 15-km classical-technique races among elite male skiers, rather than the absolute or simple ratio-standard scaled expression. To optimally explain an elite male skier's performance capability in sprint prologues, power-function models based on oxygen-uptake variables expressed absolutely are recommended. Moreover, to evaluate elite junior skiers' performance capabilities in 2-km double-poling roller-skiing time trials, it is recommended that Ẇ divided by the square root of body mass should be used, rather than the absolute or simple ratio-standard scaled expression of power output.
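The body-mass exponent in a power-function model of this form can be estimated by log-linear regression: fitting log v = c0 + c1·log(V̇O2max) + c2·log(m) and recovering the exponent as b = -c2/c1 for the ratio V̇O2max/m^b. The sketch below demonstrates this on synthetic skiers generated with a true exponent of 0.5; it is an illustration of the modelling approach, not the thesis data.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic skiers (illustrative): speed generated as a * VO2max / sqrt(mass).
n = 83
mass = rng.uniform(60, 90, n)                      # kg
vo2max = rng.uniform(4.5, 6.5, n)                  # l/min, absolute
speed = 1.2 * vo2max / np.sqrt(mass) * np.exp(rng.normal(0, 0.02, n))

# Power-function model: log v = c0 + c1*log(VO2max) + c2*log(mass).
A = np.column_stack([np.ones(n), np.log(vo2max), np.log(mass)])
c0, c1, c2 = np.linalg.lstsq(A, np.log(speed), rcond=None)[0]

# Optimal body-mass exponent b for the ratio VO2max / mass**b.
b = -c2 / c1
```

A confidence interval for c2 that excludes both 0 and -c1 corresponds to the thesis finding that the exponent differs from both 0 (absolute expression) and 1 (simple ratio standard).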


We compare three frequently used volatility modelling techniques: GARCH, Markovian switching and cumulative daily volatility models. Our primary goal is to highlight a practical and systematic way to measure the relative effectiveness of these techniques. Evaluation comprises the analysis of the validity of the statistical requirements of the various models and their performance in simple options hedging strategies. The latter puts them to the test in a "real life" application. Though there was not much difference between the three techniques, a tendency in favour of the cumulative daily volatility estimates, based on tick data, seems clear. As the improvement is not very large, the message for the practitioner, given the restricted evidence of our experiment, is that they will probably not lose much by working with the Markovian switching method. This highlights that, in terms of volatility estimation, no clear winner exists among the more sophisticated techniques.
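The cumulative daily volatility estimator favoured above can be sketched directly: sum the squared intraday log-returns from tick data and take the square root, giving the realized volatility for the day. The simulated tick series below is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulate one trading day of tick prices (illustrative geometric random walk).
true_daily_vol = 0.02
n_ticks = 2000
returns = rng.normal(0, true_daily_vol / np.sqrt(n_ticks), n_ticks)
prices = 100 * np.exp(np.cumsum(returns))

def realized_vol(prices):
    """Cumulative daily volatility: root of summed squared intraday log-returns."""
    r = np.diff(np.log(prices))
    return np.sqrt(np.sum(r ** 2))

est = realized_vol(prices)
```

Because it aggregates thousands of intraday observations, this estimator is much less noisy per day than a single squared daily return, which is one reason tick-based estimates tend to perform well in hedging comparisons.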


PEDRINI, Aldomar; WESTPHAL, F. S.; LAMBERT, R. A methodology for building energy modelling and calibration in warm climates. Building and Environment, n. 37, p. 903-912, 2002. Available at: . Accessed: 04 Oct. 2010.