135 results for Performance model

in CentAUR: Central Archive, University of Reading, UK


Relevance: 100.00%

Abstract:

A spectral performance model, designed to simulate the system spectral throughput for each of the 21 channels in the HIRDLS radiometer, is described. This model uses the measured spectral characteristics of each of the components in the optical train, appropriately corrected for their optical environment, to determine the end-to-end spectral throughput profile for each channel. This profile is then combined with the predicted thermal emission from the atmosphere, arising from the height of interest, to establish an in-band (wanted) to out-of-band (unwanted) radiance ratio. The results from the use of the model demonstrate that the instrument-level radiometric requirements will be achieved. The optical arrangement and spectral design requirements for filtering in the HIRDLS instrument are described, together with a presentation of the performance achieved for the complete set of manufactured filters. Compliance of the predicted passband throughput model with the spectral positioning requirements of the instrument is also demonstrated.
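In outline, the calculation this abstract describes multiplies the transmission of every component in the optical train, weights the result by the Planck radiance of the emitting scene, and integrates inside and outside the passband. A minimal Python sketch of that idea follows; the filter and detector profiles, band limits and scene temperature are invented for illustration and are not HIRDLS values:

```python
import math

def planck_radiance(wavelength_um: float, temp_k: float) -> float:
    """Planck spectral radiance B(lambda, T), in arbitrary units."""
    lam = wavelength_um * 1e-6  # metres
    h, c, k = 6.626e-34, 2.998e8, 1.381e-23
    return (2 * h * c**2 / lam**5) / (math.exp(h * c / (lam * k * temp_k)) - 1)

def end_to_end_throughput(component_profiles, wavelength_um):
    """Multiply the transmission of every component in the optical train."""
    t = 1.0
    for profile in component_profiles:
        t *= profile(wavelength_um)
    return t

def in_band_ratio(component_profiles, band, grid, temp_k=250.0):
    """In-band to out-of-band radiance ratio on a uniform wavelength grid."""
    lo, hi = band
    step = grid[1] - grid[0]  # assumes uniform spacing
    in_band = out_band = 0.0
    for lam in grid:
        s = end_to_end_throughput(component_profiles, lam) * planck_radiance(lam, temp_k) * step
        if lo <= lam <= hi:
            in_band += s
        else:
            out_band += s
    return in_band / out_band
```

A throughput of 0.8 in a 9–10 µm passband against a 10⁻³ leakage floor, for example, yields a large in-band/out-of-band ratio, which is the figure the abstract uses to demonstrate radiometric compliance.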

Relevance: 100.00%

Abstract:

A low-temperature model is described for infrared multilayer filters containing PbTe (or another semiconductor) and ZnSe (or another II–VI compound). The model is based on dielectric dispersion with semiconductor carrier dispersion added. It predicts improved performance on cooling, such as would be useful to avoid erroneous signals from optics in spaceflight radiometers. Agreement with measurement is obtained over the temperature range 70–400 K and the wavelength range 2.5–20 µm.

Relevance: 70.00%

Abstract:

The performance benefit when using Grid systems comes from different strategies, among which partitioning the applications into parallel tasks is the most important. However, in most cases the enhancement coming from partitioning is offset by the synchronization overhead, mainly due to the high variability of completion times of the different tasks, which, in turn, is due to the large heterogeneity of Grid nodes. For this reason, it is important to have models which capture the performance of such systems. In this paper we describe a queueing-network-based performance model able to accurately analyze Grid architectures, and we use the model to study a real parallel application executed in a Grid. The proposed model improves on classical modelling techniques and highlights the impact of resource heterogeneity and network latency on application performance.
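The synchronization overhead described here arises because the barrier at the end of a parallel phase waits for the slowest task. The following toy sketch captures only that one effect; it is not the paper's queueing-network model, which is far richer:

```python
def parallel_completion_time(work, speeds):
    """Completion time of a job split evenly across heterogeneous nodes:
    the synchronization barrier waits for the slowest task."""
    per_task = work / len(speeds)
    return max(per_task / s for s in speeds)

def speedup(work, speeds):
    """Speedup relative to running the whole job on the fastest single node."""
    serial = work / max(speeds)
    return serial / parallel_completion_time(work, speeds)
```

With four equal-speed nodes the speedup is 4; replace one node with a node four times slower and the barrier drags the speedup all the way down to 1. That collapse under heterogeneity is the phenomenon the queueing-network model is built to capture.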

Relevance: 70.00%

Abstract:

This study adopts the RBV of the firm in order to identify critical advantage-generating resources and capabilities with strong positive export strategy and performance implications. The proposed export performance model is tested using a structural equation modeling approach on a sample of 356 British exporters. We examine the individual as well as the concurrent (simultaneous) direct and indirect effects of five resource bundles on export performance. We find that four resources/capabilities (managerial, knowledge, planning, and technology) have a significant positive direct effect on export performance, while relational and physical resources exhibit no unique positive effect. We also find that the firm’s export strategy mediates the resource–performance nexus in the case of managerial and knowledge-based resources. The theoretical and methodological grounding of this study contributes to the advancement of export-related research by providing better specification of the nature of the effects – direct or indirect – of particular resource factors on export performance.

Relevance: 60.00%

Abstract:

The complexity of current and emerging architectures provides users with options about how best to use the available resources, but makes predicting performance challenging. In this work a benchmark-driven model is developed for a simple shallow water code on a Cray XE6 system, to explore how deployment choices such as domain decomposition and core affinity affect performance. The resource sharing present in modern multi-core architectures adds various levels of heterogeneity to the system. Shared resources often include caches, memory, network controllers and, in some cases, floating-point units (as in the AMD Bulldozer), which means that access time depends on the mapping of application tasks and on a core's location within the system. Heterogeneity further increases with the use of hardware accelerators such as GPUs and the Intel Xeon Phi, where many specialist cores are attached to general-purpose cores. This trend towards shared resources and non-uniform cores is expected to continue into the exascale era. The complexity of these systems means that various runtime scenarios are possible, and it has been found that under-populating nodes, altering the domain decomposition and using non-standard task-to-core mappings can dramatically alter performance. Finding this out, however, is often a process of trial and error. To better inform this process, a performance model was developed for a simple regular-grid-based kernel code, shallow. The code comprises two distinct types of work: loop-based array updates and nearest-neighbour halo exchanges. Separate performance models were developed for each part, both based on a similar methodology. Application-specific benchmarks were run to measure performance for different problem sizes under different execution scenarios. These results were then fed into a performance model that derives resource usage for a given deployment scenario, interpolating between results as necessary.
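The final step — feeding benchmark measurements into a model that interpolates to predict runtime for a chosen decomposition — might look like the sketch below. The compute + halo-exchange split follows the abstract, but the benchmark tables, function names and halo-size formula are illustrative assumptions, not measurements from the Cray XE6 study:

```python
from bisect import bisect_left

def predict_time(benchmarks, size):
    """Linearly interpolate between measured (size -> time) benchmark points."""
    sizes = sorted(benchmarks)
    if size <= sizes[0]:
        return benchmarks[sizes[0]]
    if size >= sizes[-1]:
        return benchmarks[sizes[-1]]
    i = bisect_left(sizes, size)
    s0, s1 = sizes[i - 1], sizes[i]
    t0, t1 = benchmarks[s0], benchmarks[s1]
    return t0 + (t1 - t0) * (size - s0) / (s1 - s0)

def model_runtime(global_size, decomposition, compute_bench, halo_bench):
    """Per-step runtime = compute on the local subdomain + halo exchange,
    for a px-by-py decomposition of an nx-by-ny grid."""
    nx, ny = global_size
    px, py = decomposition
    local = (nx // px) * (ny // py)          # cells updated per task
    halo = 2 * (nx // px + ny // py)         # four nearest-neighbour edges
    return predict_time(compute_bench, local) + predict_time(halo_bench, halo)
```

Evaluating `model_runtime` for each candidate decomposition and node population then replaces the trial-and-error process the abstract describes.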

Relevance: 40.00%

Abstract:

The climatology of the OPA/ARPEGE-T21 coupled general circulation model (GCM) is presented. The atmosphere GCM has a T21 spectral truncation and the ocean GCM has a 2°×1.5° average resolution. A 50-year climatic simulation is performed using the OASIS coupler, without flux correction techniques. The mean state and seasonal cycle for the last 10 years of the experiment are described and compared to the corresponding uncoupled experiments and to climatology when available. The model reasonably simulates most of the basic features of the observed climate. Energy budgets and transports in the coupled system, of importance for climate studies, are assessed and prove to be within available estimates. After an adjustment phase of a few years, the model stabilizes around a mean state where the tropics are warm and resemble a permanent ENSO, the Southern Ocean warms, and almost no sea ice is left in the Southern Hemisphere. The atmospheric circulation becomes more zonal and symmetric with respect to the equator. Once those systematic errors are established, the model shows little secular drift, the small remaining trends being mainly associated with horizontal physics in the ocean GCM. The stability of the model is shown to be related to qualities already present in the uncoupled GCMs used, namely a balanced radiation budget at the top of the atmosphere and a tight ocean thermocline.

Relevance: 40.00%

Abstract:

Satellite-observed data for flood events have been used to calibrate and validate flood inundation models, providing valuable information on the spatial extent of the flood. Improvements in the resolution of this satellite imagery have enabled indirect remote sensing of water levels by using an underlying LiDAR DEM to extract the water surface elevation at the flood margin. In addition to comparison of the spatial extent, this now allows direct comparison between modelled and observed water surface elevations. Using a 12.5 m ERS-1 image of a flood event in 2006 on the River Dee, North Wales, UK, both of these data types are extracted and each is assessed for its value in the calibration of flood inundation models. A LiDAR-guided snake algorithm is used to extract an outline of the flood from the satellite image. From the extracted outline, a binary grid of wet/dry cells is created at the same resolution as the model; using this, the spatial extent of the modelled and observed flood can be compared with a measure of fit between the two binary patterns of flooding. Water heights are extracted using points at intervals of approximately 100 m along the extracted outline, and Student's t-test is used to compare modelled and observed water surface elevations. A LISFLOOD-FP model of the catchment is set up using LiDAR topographic data resampled to the 12.5 m resolution of the satellite image, and calibration of the friction parameter in the model is undertaken using each of the two approaches. Comparison between the two approaches highlights the sensitivity of the spatial measure of fit to uncertainty in the observed data, and the potential drawbacks of using the spatial extent when parts of the flood are contained by the topography.
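The abstract does not name its measure of fit, so the sketch below shows one standard choice for comparing two binary flood patterns: the number of cells wet in both grids divided by the number wet in either. Treat it as an assumed stand-in, not necessarily the measure used in the paper:

```python
def fit_measure(modelled, observed):
    """F = wet-in-both / (wet-in-both + wet-in-model-only + wet-in-obs-only),
    computed over two binary wet/dry grids of equal shape."""
    a = b = c = 0
    for m_row, o_row in zip(modelled, observed):
        for m, o in zip(m_row, o_row):
            if m and o:
                a += 1       # wet in both
            elif m:
                b += 1       # wet in model only (over-prediction)
            elif o:
                c += 1       # wet in observation only (under-prediction)
    return a / (a + b + c)
```

F = 1 means the modelled and observed extents coincide exactly; the score penalises both over- and under-prediction, and, as the abstract notes, it is sensitive to uncertainty in the observed outline wherever topography constrains the flood edge.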

Relevance: 40.00%

Abstract:

Using mixed logit models to analyse choice data is common but requires ex ante specification of the functional forms of preference distributions. We make the case for greater use of bounded functional forms and propose the use of the marginal likelihood, calculated using Bayesian techniques, as a single measure of model performance across non-nested mixed logit specifications. Using this measure leads to very different rankings of model specifications compared to alternative rule-of-thumb measures. The approach is illustrated using data from a choice experiment regarding GM food types, which provides insights regarding the recent WTO dispute between the EU and the US, Canada and Argentina, and whether labelling and trade regimes should be based on the production process or product composition.
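As a hedged toy illustration of the marginal likelihood as a model-comparison score: for each candidate prior (standing in here for a mixed logit specification's preference distribution), average the likelihood of the data over the prior, and rank specifications by that average. Binomial data stand in for real choice data:

```python
from math import comb

def marginal_likelihood(successes, trials, prior_points):
    """Marginal likelihood of binomial data: the likelihood averaged over
    equally weighted grid points (or draws) from the prior on p."""
    def likelihood(p):
        return comb(trials, successes) * p**successes * (1 - p)**(trials - successes)
    return sum(likelihood(p) for p in prior_points) / len(prior_points)
```

A specification whose prior concentrates mass near the data-supported region scores higher than a diffuse one, because the marginal likelihood automatically penalises prior mass wasted on implausible parameter values; this built-in Occam penalty is what makes it usable across non-nested specifications.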

Relevance: 40.00%

Abstract:

The Danish Eulerian Model (DEM) is a powerful air pollution model, designed to calculate the concentrations of various dangerous species over a large geographical region (e.g. Europe). It takes into account the main physical and chemical processes between these species, the actual meteorological conditions, emissions, etc. This is a huge computational task and requires significant storage and CPU time, so parallel computing is essential for the efficient practical use of the model. Some efficient parallel versions of the model have been created over the past several years. A suitable parallel version of DEM using the Message Passing Interface (MPI) library was implemented on two powerful supercomputers of EPCC, Edinburgh, available via the HPC-Europa programme for transnational access to research infrastructures in the EC: a Sun Fire E15K and an IBM HPCx cluster. Although the implementation is, in principle, the same for both supercomputers, a few modifications had to be made to port the code successfully to the IBM HPCx cluster. Performance analysis and parallel optimization were then carried out, and results from benchmarking experiments are presented in this paper. Another set of experiments was carried out to investigate the sensitivity of the model to variation of some chemical rate constants in the chemical submodel, which required certain modifications of the code. The obtained results will be used for further sensitivity analysis studies using Monte Carlo simulation.

Relevance: 40.00%

Abstract:

The requirement to forecast volcanic ash concentrations was amplified as a response to the 2010 Eyjafjallajökull eruption, when ash safety limits for aviation were introduced in the European area. The ability to provide accurate quantitative forecasts relies to a large extent on the source term, which is the emission of ash as a function of time and height. This study presents source term estimations of the ash emissions from the Eyjafjallajökull eruption derived with an inversion algorithm which constrains modeled ash emissions with satellite observations of volcanic ash. The algorithm is tested with input from two different dispersion models, run on three different meteorological input data sets. The results are robust to which dispersion model and meteorological data are used. Modeled ash concentrations are compared quantitatively to independent measurements from three different research aircraft and one surface measurement station. These comparisons show that the models perform reasonably well in simulating the ash concentrations, and simulations using the source term obtained from the inversion are in overall better agreement with the observations (rank correlation = 0.55, Figure of Merit in Time (FMT) = 25–46%) than simulations using simplified source terms (rank correlation = 0.21, FMT = 20–35%). The vertical structures of the modeled ash clouds mostly agree with lidar observations, and the modeled ash particle size distributions agree reasonably well with observed size distributions. There are occasionally large differences between simulations, but the model mean usually outperforms any individual model. The results emphasize the benefits of using an ensemble-based forecast for improved quantification of uncertainties in future ash crises.
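An inversion that constrains emissions with observations is, at its simplest, a regularised least-squares problem: find emissions x such that the dispersion model's predicted observations Gx match the satellite data y. The sketch below solves a two-parameter version of the Tikhonov normal equations directly; the real algorithm, its a priori terms and its dimensionality are far beyond this toy:

```python
def invert_source_term(G, y, lam=0.0):
    """Tikhonov-regularised least squares x = (G'G + lam*I)^-1 G'y for a
    two-parameter emission vector, solving the 2x2 normal equations directly.
    G is a list of rows [g1, g2]; y is the matching list of observations."""
    a11 = sum(g[0] * g[0] for g in G) + lam
    a12 = sum(g[0] * g[1] for g in G)
    a22 = sum(g[1] * g[1] for g in G) + lam
    b1 = sum(g[0] * yi for g, yi in zip(G, y))
    b2 = sum(g[1] * yi for g, yi in zip(G, y))
    det = a11 * a22 - a12 * a12
    return ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)
```

In the real problem each unknown is an emission rate at one (time, height) bin, each row of G is a dispersion-model run mapping that bin to the satellite pixels, and the regularisation term keeps the under-determined system stable.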

Relevance: 40.00%

Abstract:

The Kalpana Very High Resolution Radiometer (VHRR) water vapour (WV) channel is very similar to the WV channel of the Meteosat Visible and Infrared Radiation Imager (MVIRI) on Meteosat-7, and both satellites observe the Indian subcontinent. Thus it is possible to compare the performance of VHRR and MVIRI in numerical weather prediction (NWP) models. In order to do so, the impact of Kalpana- and Meteosat-7-measured WV radiances was evaluated using analyses and forecasts of moisture, temperature, geopotential and winds, using the European Centre for Medium-range Weather Forecasts (ECMWF) NWP model. Compared with experiments using Meteosat-7, the experiments using Kalpana WV radiances show a similar fit to all observations and produce very similar forecasts.

Relevance: 40.00%

Abstract:

What is the relation between competition and performance? The present research addresses this important multidisciplinary question by conducting a meta-analysis of existing empirical work and by proposing a new conceptual model—the opposing processes model of competition and performance. This model was tested by conducting an additional meta-analysis and 3 new empirical studies. The first meta-analysis revealed that there is no noteworthy relation between competition and performance. The second meta-analysis showed, in accord with the opposing processes model, that the absence of a direct effect is the result of inconsistent mediation via achievement goals: Competition prompts performance-approach goals which, in turn, facilitate performance; and competition also prompts performance-avoidance goals which, in turn, undermine performance. These same direct and mediational findings were also observed in the 3 new empirical studies (using 3 different conceptualizations of competition and attending to numerous control variables). Our findings provide both interpretational clarity regarding past research and conceptual guidance regarding future research on the competition–performance relation.

Relevance: 40.00%

Abstract:

Medium-range flood forecasting activities, driven by meteorological forecasts ranging from high-resolution deterministic forecasts to low-spatial-resolution ensemble prediction systems, share a major challenge: the appropriateness and design of performance measures. In this paper, possible limitations of some traditional hydrological and meteorological prediction quality and verification measures are identified. Some simple modifications are applied in order to circumvent the problem of autocorrelation dominating river discharge time series, and in order to create a benchmark model enabling decision makers to evaluate both forecast quality and model quality. Although the performance period is quite short, the advantage of a simple cost-loss function as a measure of forecast quality can be demonstrated.