6 results for Global model

in Aston University Research Archive


Relevance:

100.00%

Publisher:

Abstract:

This paper develops and tests a learning organization model derived from the HRM and dynamic capability literatures in order to ascertain the model's applicability across divergent global contexts. We define a learning organization as one capable of achieving ongoing strategic renewal, arguing on the basis of dynamic capability theory that the model has three necessary antecedents: HRM focus, developmental orientation and customer-facing remit. Drawing on a sample of nearly 6000 organizations across 15 countries, we show that learning organizations exhibit higher performance than their less learning-inclined counterparts. We also demonstrate that innovation fully mediates the relationship between our conceptualization of the learning organization and organizational performance in 11 of the 15 countries we examined. To the best of our knowledge, this is the first time these questions have been tested in a major cross-global study, and our work contributes to both the HRM and dynamic capability literatures, especially where the focus is the applicability of best-practice parameters across national boundaries.
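The full-mediation claim can be illustrated with a Baron-Kenny style decomposition. The sketch below is hypothetical (it is not the authors' analysis, and the variables `x`, `m`, `y` standing in for the learning-organization measure, innovation, and performance are my own labels):

```python
import numpy as np

def mediation_effects(x, m, y):
    """Baron-Kenny style decomposition (illustrative only):
    a  = effect of X on the mediator M,
    b  = effect of M on Y, controlling for X,
    c' = direct effect of X on Y, controlling for M.
    The indirect (mediated) effect is a*b; full mediation means c' ~ 0."""
    a = np.polyfit(x, m, 1)[0]                       # regress M on X
    design = np.column_stack([np.ones_like(x), x, m])  # regress Y on X and M
    coefs, *_ = np.linalg.lstsq(design, y, rcond=None)
    c_prime, b = coefs[1], coefs[2]
    return a * b, c_prime  # (indirect effect, direct effect)
```

With full mediation, as reported for 11 of the 15 countries, the direct effect c' shrinks towards zero once the mediator enters the model.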

Relevance:

70.00%

Publisher:

Abstract:

The article deals with the CFD modelling of fast pyrolysis of biomass in an Entrained Flow Reactor (EFR). The Lagrangian approach is adopted for the particle tracking, while the flow of the inert gas is treated with the standard Eulerian method for gases. The model includes the thermal degradation of biomass to char with simultaneous evolution of gases and tars from a discrete biomass particle. The chemical reactions are represented using a two-stage, semi-global model. The radial distribution of the pyrolysis products is predicted as well as their effect on the particle properties. The convective heat transfer to the surface of the particle is computed using the Ranz-Marshall correlation.
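The Ranz-Marshall correlation used for the convective heat transfer to the particle surface, Nu = 2 + 0.6 Re^(1/2) Pr^(1/3), is straightforward to encode. The helper below is an illustrative sketch (the function names are my own, not part of the model described):

```python
import math

def ranz_marshall_nusselt(re: float, pr: float) -> float:
    """Ranz-Marshall correlation for a sphere: Nu = 2 + 0.6 Re^0.5 Pr^(1/3).
    The limiting value Nu = 2 corresponds to pure conduction in still gas."""
    return 2.0 + 0.6 * math.sqrt(re) * pr ** (1.0 / 3.0)

def convective_htc(re: float, pr: float, k_gas: float, d_particle: float) -> float:
    """Convective heat transfer coefficient h = Nu * k_gas / d  (W m^-2 K^-1)."""
    return ranz_marshall_nusselt(re, pr) * k_gas / d_particle
```

In a Lagrangian particle-tracking scheme, Re and Pr are evaluated from the local gas properties along each particle trajectory.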

Relevance:

60.00%

Publisher:

Abstract:

The fluid–particle interaction and the impact of different heat transfer conditions on pyrolysis of biomass inside a 150 g/h fluidised bed reactor are modelled. Two biomass particles of different sizes (350 µm and 550 µm in diameter) are injected into the fluidised bed. The different particle sizes result in different heat transfer conditions, because the 350 µm particle is smaller than the sand particles of the reactor (440 µm), while the 550 µm particle is larger. The bed-to-particle heat transfer for both cases is calculated according to the literature. Conductive heat transfer is assumed for the larger biomass particle (550 µm) inside the bed, while biomass–sand contacts for the smaller biomass particle (350 µm) are considered unimportant. The Eulerian approach is used to model the bubbling behaviour of the sand, which is treated as a continuum. Biomass reaction kinetics is modelled according to the literature using a two-stage, semi-global model which takes secondary reactions into account. The particle motion inside the reactor is computed using drag laws dependent on the local volume fraction of each phase. FLUENT 6.2 has been used as the modelling framework, with the whole pyrolysis model incorporated in the form of a User Defined Function (UDF).
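A two-stage, semi-global scheme of the kind used here lumps the products into gas, tar and char, with secondary cracking of the tar. The sketch below shows the structure of such a scheme; the Arrhenius parameters are placeholders for illustration, not the literature values used in the study:

```python
import math

R = 8.314  # universal gas constant, J/(mol K)

def k_arr(A, E, T):
    """Arrhenius rate constant k = A * exp(-E / (R T))."""
    return A * math.exp(-E / (R * T))

# Hypothetical kinetic parameters (A in 1/s, E in J/mol) for illustration only.
PRIMARY = {"gas": (1.3e8, 1.40e5), "tar": (2.0e8, 1.33e5), "char": (1.1e7, 1.21e5)}
SECONDARY = {"gas": (1.0e5, 1.08e5), "char": (1.0e5, 1.08e5)}  # tar -> gas / char

def step(y, T, dt):
    """One explicit Euler step of the two-stage semi-global scheme:
    biomass -> gas/tar/char (primary), tar -> gas/char (secondary).
    y = (biomass, gas, tar, char) mass fractions; total mass is conserved."""
    b, gas, tar, char = y
    kp = {s: k_arr(*PRIMARY[s], T) for s in PRIMARY}
    ks = {s: k_arr(*SECONDARY[s], T) for s in SECONDARY}
    d_b = -(kp["gas"] + kp["tar"] + kp["char"]) * b
    d_gas = kp["gas"] * b + ks["gas"] * tar
    d_tar = kp["tar"] * b - (ks["gas"] + ks["char"]) * tar
    d_char = kp["char"] * b + ks["char"] * tar
    return (b + d_b * dt, gas + d_gas * dt, tar + d_tar * dt, char + d_char * dt)
```

In the CFD setting this integration runs per particle inside the UDF, with T taken from the bed-to-particle heat transfer model rather than held fixed.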

Relevance:

60.00%

Publisher:

Abstract:

In this paper we discuss a fast Bayesian extension to kriging algorithms which has been used successfully for fast, automatic mapping in emergency conditions in the Spatial Interpolation Comparison 2004 (SIC2004) exercise. The application of kriging to automatic mapping raises several issues such as robustness, scalability, speed and parameter estimation. Various ad hoc solutions have been proposed and used extensively but they lack a sound theoretical basis. In this paper we show how observations can be projected onto a representative subset of the data without losing significant information. This allows the complexity of the algorithm to grow as O(nm²), where n is the total number of observations and m is the size of the subset of the observations retained for prediction. The main contribution of this paper is to further extend this projective method through the application of space-limited covariance functions, which can be used as an alternative to the commonly used covariance models. In many real-world applications the correlation between observations essentially vanishes beyond a certain separation distance, so it makes sense to use a covariance model that encompasses this belief, since this leads to sparse covariance matrices for which optimised sparse matrix techniques can be used. In the presence of extreme values we show that space-limited covariance functions offer an additional benefit: they maintain the smoothness locally but at the same time lead to a more robust, and compact, global model. We show the performance of this technique coupled with the sparse extension to the kriging algorithm on synthetic data and outline a number of computational benefits such an approach brings. To test the relevance to automatic mapping we apply the method to the data used in a recent comparison of interpolation techniques (SIC2004) to map the levels of background ambient gamma radiation. © Springer-Verlag 2007.
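Space-limited covariance functions can be illustrated with the classical spherical model, which is exactly zero beyond its range and therefore produces covariance matrices full of structural zeros. This is a generic sketch, not the specific covariance functions developed in the paper:

```python
import numpy as np

def spherical_cov(h, sill=1.0, rng=2.0):
    """Spherical covariance: C(h) = sill * (1 - 1.5 h/a + 0.5 (h/a)^3)
    for h < a, and exactly 0 beyond the range a, so pairs of observations
    separated by more than a contribute structural zeros."""
    h = np.asarray(h, dtype=float)
    c = sill * (1.0 - 1.5 * (h / rng) + 0.5 * (h / rng) ** 3)
    return np.where(h < rng, c, 0.0)

def cov_matrix(x, sill=1.0, rng=2.0):
    """Covariance matrix of 1-D observation locations x. In practice the
    zero pattern is exploited via sparse storage and a spatial index rather
    than by forming the dense distance matrix as done here."""
    h = np.abs(x[:, None] - x[None, :])
    return spherical_cov(h, sill, rng)
```

With a range much smaller than the domain, most entries vanish and optimised sparse solvers can be used for the kriging system.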

Relevance:

60.00%

Publisher:

Abstract:

This thesis introduces a flexible visual data exploration framework which combines advanced projection algorithms from the machine learning domain with visual representation techniques developed in the information visualisation domain to help a user explore and understand large multi-dimensional datasets effectively. The advantage of such a framework over other techniques currently available to domain experts is that the user is directly involved in the data mining process and advanced machine learning algorithms are employed for better projection. A hierarchical visualisation model guided by a domain expert allows the expert to obtain an informed segmentation of the input space. Two other components of this thesis exploit properties of these principled probabilistic projection algorithms to develop a guided mixture of local experts algorithm which provides robust prediction, and a model to estimate feature saliency simultaneously with the training of a projection algorithm. Local models are useful since a single global model cannot capture the full variability of a heterogeneous data space such as the chemical space. Probabilistic hierarchical visualisation techniques provide an effective soft segmentation of an input space by a visualisation hierarchy whose leaf nodes represent different regions of the input space. We use this soft segmentation to develop a guided mixture of local experts (GME) algorithm which is appropriate for the heterogeneous datasets found in chemoinformatics problems. Moreover, in this approach the domain experts are more involved in the model development process, which is suitable for an intuition- and domain-knowledge-driven task such as drug discovery. We also derive a generative topographic mapping (GTM) based data visualisation approach which estimates feature saliency simultaneously with the training of a visualisation model.
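The core step of a mixture of local experts is soft gating: each region's expert contributes to the prediction in proportion to its responsibility for the query point. The sketch below uses simple one-dimensional Gaussian responsibilities as a stand-in for the hierarchical GTM-based segmentation of the thesis:

```python
import numpy as np

def gated_prediction(x, centers, widths, expert_preds):
    """Combine local experts: responsibilities come from Gaussian kernels
    around each region centre; the overall prediction is the
    responsibility-weighted average of the experts' predictions."""
    centers = np.asarray(centers, dtype=float)
    widths = np.asarray(widths, dtype=float)
    r = np.exp(-0.5 * ((x - centers) / widths) ** 2)  # unnormalised gates
    r = r / r.sum()                                    # soft responsibilities
    return float(r @ np.asarray(expert_preds, dtype=float))
```

Near a region centre the corresponding expert dominates; between regions the prediction blends smoothly, which is what makes the combined model robust on heterogeneous data.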

Relevance:

40.00%

Publisher:

Abstract:

How are the image statistics of global image contrast computed? We answered this by using a contrast-matching task for checkerboard configurations of 'battenberg' micro-patterns, where the contrasts and spatial spreads of interdigitated pairs of micro-patterns were adjusted independently. Test stimuli were 20 × 20 arrays with various cluster widths, matched to standard patterns of uniform contrast. When one of the test patterns contained a pattern with much higher contrast than the other, that determined global pattern contrast, as in a max() operation. Crucially, however, the full matching functions had a curious intermediate region where low-contrast additions to one pattern at intermediate contrasts of the other caused a paradoxical reduction in perceived global contrast. None of the following models predicted this: RMS, energy, linear sum, max, Legge and Foley. However, a gain control model incorporating wide-field integration and suppression of nonlinear contrast responses predicted the results with no free parameters. This model was derived from experiments on summation of contrast at threshold, and masking and summation effects in dipper functions. Those experiments were also inconsistent with the failed models above. Thus, we conclude that our contrast gain control model (Meese & Summers, 2007) describes a fundamental operation in human contrast vision.
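The winning account is a contrast gain control model with wide-field integration and suppression. The sketch below captures only the generic form of such a model, pooled excitation divided by a pooled suppressive signal, with hypothetical exponents and saturation constant (the fitted parameter values of Meese & Summers, 2007 are not reproduced here):

```python
import numpy as np

def gain_control_response(contrasts, p=2.4, q=2.0, z=1.0):
    """Generic wide-field contrast gain control (illustrative parameters):
    excitatory contrast responses (c^p) are summed over the whole field and
    divisively suppressed by a pooled nonlinear signal (z + sum of c^q)."""
    c = np.asarray(contrasts, dtype=float)
    return float((c ** p).sum() / (z + (c ** q).sum()))
```

With contrasts expressed as proportions (below 1), an added low-contrast element inflates the suppressive pool (c^q) more than the excitatory sum (c^p), which is one way such a model can produce the paradoxical reduction in perceived global contrast described above.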