16 results for Context models
in Aston University Research Archive
Abstract:
Using the core aspects of five main models of human resource management (HRM), this article investigates the dominant HRM practices in the Indian manufacturing sector. The evaluation is conducted in the context of the recently liberalized economic environment. In response to ever-increasing levels of globalization of business, the article initially highlights the need for more cross-national comparative HRM research. Then it briefly analyzes the five models of HRM (namely, the 'Matching model'; the 'Harvard model'; the 'Contextual model'; the '5-P model'; and the 'European model') and identifies the main research questions emerging from these that could be used to reveal and highlight the HRM practices in different national/regional settings. The findings of the research are based on a questionnaire survey of 137 large Indian firms and 24 in-depth interviews in as many firms. The examination helped not only to present the scenario of HRM practices in the Indian context but also to reveal the logic dictating the presence of such practices. The article contributes to the fields of cross-national HRM and industrial relations research. It also has key messages for policy makers and opens avenues for further research.
Abstract:
The thesis reports a study of the effect upon organisations of co-operative information systems (CIS) incorporating flexible communications, group support and group working technologies. A review of the literature leads to the development of a model of effect based upon co-operative business tasks. CIS have the potential to change how co-operative business tasks are carried out and their principal effect (or performance) may therefore be evaluated by determining to what extent they are being employed to perform these tasks. A significant feature of CIS use identified is the extent to which they may be designed to fulfil particular tasks, or, by contrast, may be applied creatively by users in an emergent fashion to perform tasks. A research instrument is developed using a survey questionnaire to elicit users' judgements of the extent to which a CIS is employed to fulfil a range of co-operative tasks. This research instrument is applied to a longitudinal study of Novell GroupWise introduction at Northamptonshire County Council, during which qualitative as well as quantitative data were gathered. A method of analysis of questionnaire results using principles from fuzzy mathematics and artificial intelligence is developed and demonstrated. Conclusions from the longitudinal study include the importance of early experiences in setting patterns of use for CIS, the persistence of patterns of use over time and the dominance of designed usage of the technology over emergent use.
Abstract:
Most prior new product diffusion (NPD) models do not specifically consider the role of the business model in the process. However, the context of NPD in today's market has been changed dramatically by the introduction of new business models. Through reinterpretation and extension, this paper empirically examines the feasibility of applying Bass-type NPD models to products that are commercialized by different business models. More specifically, the results and analysis of this study consider the subscription business model for service products, the freemium business model for digital products, and a pre-paid and post-paid business model that is widely used by mobile network providers. The paper offers new insights derived from implementing the models in real-life cases. It also highlights three themes for future research.
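The Bass-type dynamics underlying these NPD models can be sketched as a discrete-time simulation; the update below follows the standard Bass equation dN/dt = (p + qN/m)(m − N), with illustrative parameter values that are not taken from the paper.

```python
# Minimal discrete-time simulation of the classic Bass diffusion model:
# dN/dt = (p + q * N/m) * (m - N). Parameter values are illustrative only.
def bass_adopters(p, q, m, periods):
    """Return the cumulative-adopter curve under Bass dynamics."""
    cumulative = [0.0]
    for _ in range(periods):
        n = cumulative[-1]
        new_adopters = (p + q * n / m) * (m - n)  # innovation + imitation
        cumulative.append(n + new_adopters)
    return cumulative

curve = bass_adopters(p=0.03, q=0.38, m=1000.0, periods=40)
```

Different business models (subscription, freemium, pre-paid/post-paid) would enter such a sketch through different values of p, q and m.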
Abstract:
This paper examines the strategic implications of resource allocation models (RAMs). Four interrelated aspects of resource allocation are discussed: degree of centralisation, locus of strategic direction, cross-subsidy, and locus of control. The paper begins with a theoretical overview of these concepts, locating the study in the contexts of both strategic management literature and the university. The concepts are then examined empirically, drawing upon a longitudinal study of three UK universities, Warwick, London School of Economics and Political Science (LSE), and Oxford Brookes. Findings suggest that RAMs are historically and culturally situated within the context of each university and this is associated with different patterns of strategic direction and forms of strategic control. As such, the RAM in use may be less a matter of best practice than one of internal fit. The paper concludes with some implications for theory and practice by discussing the potential trajectories of each type of RAM.
Abstract:
Benchmarking exercises have become increasingly popular within the sphere of regional policy making. However, most exercises are restricted to comparing regions within a particular continental bloc or nation. This article introduces the World Knowledge Competitiveness Index (WKCI), which is one of the very few benchmarking exercises established to compare regions across continents. The article discusses the formulation of the WKCI and analyzes the results of the most recent editions. The results suggest that there are significant variations in the knowledge-based regional economic development models at work across the globe. Further analysis also indicates that Silicon Valley, as the highest ranked WKCI region, holds a unique economic position among the globe's leading regions. However, significant changes in the sources of regional competitiveness are evolving as a result of the emergence of new regional hot spots in Asia. It is concluded that benchmarking is imperative to the learning process of regional policy making.
Abstract:
This preliminary report describes work carried out as part of work package 1.2 of the MUCM research project. The report is split in two parts: the first part (Sections 1 and 2) summarises the state of the art in emulation of computer models, while the second presents some initial work on the emulation of dynamic models. In the first part, we describe the basics of emulation, introduce the notation and put together the key results for the emulation of models with single and multiple outputs, with or without the use of a mean function. In the second part, we present preliminary results on the chaotic Lorenz 63 model. We look at emulation of a single time step, and repeated application of the emulator for sequential prediction. After some design considerations, the emulator is compared with the exact simulator on a number of runs to assess its performance. Several general issues related to emulating dynamic models are raised and discussed. Current work on the larger Lorenz 96 model (40 variables) is presented in the context of dimension reduction, with results to be provided in a follow-up report. The notation used in this report is summarised in the appendix.
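The basics of emulation that the report summarises can be sketched in a few lines: a Gaussian-process emulator with a squared-exponential kernel and zero mean function, conditioned on a handful of simulator runs. The toy one-dimensional "simulator" below is a stand-in for illustration, not the Lorenz models studied in the report.

```python
import numpy as np

# Minimal Gaussian-process emulator sketch (squared-exponential kernel,
# zero mean function). The "simulator" here is a stand-in toy function.
def kernel(a, b, length=0.5, variance=1.0):
    d = a[:, None] - b[None, :]
    return variance * np.exp(-0.5 * (d / length) ** 2)

def emulate(x_train, y_train, x_new, jitter=1e-8):
    """Posterior mean of a zero-mean GP conditioned on simulator runs."""
    K = kernel(x_train, x_train) + jitter * np.eye(len(x_train))
    weights = np.linalg.solve(K, y_train)   # K^{-1} y
    return kernel(x_new, x_train) @ weights

simulator = np.sin                      # toy computer model
x_design = np.linspace(0.0, 3.0, 15)    # design points (simulator runs)
y_design = simulator(x_design)
x_test = np.array([1.3, 2.1])
prediction = emulate(x_design, y_design, x_test)
```

Emulating a dynamic model, as in the report's second part, would amount to fitting such an emulator to a single time step and applying it repeatedly for sequential prediction.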
Abstract:
A recent method for phase equilibria, the AGAPE method, has been used to predict activity coefficients and excess Gibbs energy for binary mixtures with good accuracy. The theory, based on a generalised London potential (GLP), accounts for intermolecular attractive forces. Unlike existing prediction methods, for example UNIFAC, the AGAPE method uses only information derived from accessible experimental data and molecular information for pure components. Presently, the AGAPE method has some limitations, namely that the mixtures must consist of small, non-polar compounds with no hydrogen bonding, at low to moderate pressures and at conditions below the critical conditions of the components. Distinction between vapour-liquid equilibria and gas-liquid solubility is rather arbitrary and it seems reasonable to extend these ideas to solubility. The AGAPE model uses a molecular lattice-based mixing rule. By judicious use of computer programs a methodology was created to examine a body of experimental gas-liquid solubility data for gases such as carbon dioxide, propane, n-butane or sulphur hexafluoride, which all have critical temperatures a little above 298 K, dissolved in benzene, cyclo-hexane and methanol. Within this methodology the value of the GLP as an ab initio combining rule for such solutes in very dilute solutions in a variety of liquids has been tested. Using the GLP as a mixing rule involves the computation of rotationally averaged interactions between the constituent atoms, and new calculations have had to be made to discover the magnitude of the unlike pair interactions. These numbers have been seen as significant in their own right in the context of the behaviour of infinitely-dilute solutions. A method for extending this treatment to "permanent" gases has also been developed.
The findings from the GLP method and from the more general AGAPE approach have been examined in the context of other models for gas-liquid solubility, both "classical" and contemporary, in particular those derived from equations-of-state methods and from reference solvent methods.
Abstract:
The results of an investigation into how stressors interact with the action of serotonergic agents in animal models of anxiety are presented. Water deprivation and restraint both increased plasma corticosterone concentrations and elevated 5-HT turnover. In the elevated X-maze, water deprivation had a duration-dependent "anxiolytic" effect. The effect of restraint was dependent on the duration of restraint and was to inhibit maze exploration. Water-deprivation did not influence the action of diazepam or any 5-HT1A ligand in the X-maze. Restraint switched the "anxiogenic" effect of 8-OH-DPAT to either "anxiolytic" or inactive, depending on the time after the restraint when testing was performed. The Vogel conflict test detected an "anxiolytic" effect of buspirone which was additive with "anxiolytic" effects of pindolol and propranolol. Diazepam and fluoxetine were also active, but 8-OH-DPAT, ipsapirone, gepirone and yohimbine were inactive. In the elevated X-maze, "anxiogenic" responses to picrotoxin, flumazenil, RU 24969, CGS 12066B, fluoxetine and 8-OH-DPAT were detected. Other 5-HT1A ligands were inactive. Diazepam and corticosterone had "anxiolytic" effects. Increasing light intensity did not change behaviour on the elevated X-maze, but was able to reverse the effect of 8-OH-DPAT to an "anxiolytic" action. This effect was attributed to a presynaptic mechanism, because it was abolished by pCPA. The occurrence of different behaviours in different regions of the maze was shown to be susceptible to modulation by "anxiolytic" and "anxiogenic" drugs. These results are discussed in the context of there being at least two separate 5-HT mechanisms which are involved in the control of anxiety.
Abstract:
Formative measurement has seen increasing acceptance in organizational research since the turn of the 21st century. However, in more recent times, a number of criticisms of the formative approach have appeared. Such work argues that formatively-measured constructs are empirically ambiguous and thus flawed in a theory-testing context. The aim of the present paper is to examine the underpinnings of formative measurement theory in light of theories of causality and ontology in measurement in general. In doing so, a thesis is advanced which draws a distinction between reflective, formative, and causal theories of latent variables. This distinction is shown to be advantageous in that it clarifies the ontological status of each type of latent variable, and thus provides advice on appropriate conceptualization and application. The distinction also reconciles in part both recent supportive and critical perspectives on formative measurement. In light of this, advice is given on how most appropriately to model formative composites in theory-testing applications, placing the onus on the researcher to make clear their conceptualization and operationalization.
Abstract:
Context/Motivation - Different modeling techniques have been used to model requirements and decision-making of self-adaptive systems (SASs). Specifically, goal models have been prolific in supporting decision-making depending on partial and total fulfilment of functional (goals) and non-functional requirements (softgoals). Different goal-realization strategies can have different effects on softgoals, which are specified with weighted contribution-links. The final decision about what strategy to use is based, among other reasons, on a utility function that takes into account the weighted sum of the different effects on softgoals. Questions/Problems - One of the main challenges about decision-making in self-adaptive systems is to deal with uncertainty during runtime. New techniques are needed to systematically revise the current model when empirical evidence becomes available from the deployment. Principal ideas/results - In this paper we enrich the decision-making supported by goal models by using Dynamic Decision Networks (DDNs). Goal realization strategies and their impact on softgoals have a correspondence with decision alternatives and conditional probabilities and expected utilities in the DDNs respectively. Our novel approach allows the specification of preferences over the softgoals and supports reasoning about partial satisfaction of softgoals using probabilities. We report results of the application of the approach on two different cases. Our early results suggest the decision-making process of SASs can be improved by using DDNs. © 2013 Springer-Verlag.
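The correspondence described above can be illustrated with a one-step expected-utility calculation: each goal-realization strategy is a decision alternative, its effects on softgoals are satisfaction probabilities, and stakeholder preferences are utility weights. The strategy names, probabilities and weights below are invented for illustration, not taken from the paper.

```python
# One decision step of the goal-model-to-DDN correspondence: strategies are
# decision alternatives, softgoal effects are satisfaction probabilities,
# preferences are utility weights. All names and numbers are illustrative.
strategies = {
    "strategy_A": {"performance": 0.9, "battery_life": 0.4},
    "strategy_B": {"performance": 0.6, "battery_life": 0.8},
}
preferences = {"performance": 5.0, "battery_life": 3.0}  # stakeholder utilities

def expected_utility(satisfaction_probs, utilities):
    """Expected utility = sum over softgoals of P(satisfied) * utility."""
    return sum(p * utilities[goal] for goal, p in satisfaction_probs.items())

best = max(strategies, key=lambda s: expected_utility(strategies[s], preferences))
```

In a full DDN the probabilities would be revised as runtime evidence arrives, so the chosen alternative can change over time.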
Abstract:
Models at runtime can be defined as abstract representations of a system, including its structure and behaviour, which exist in tandem with the given system during the actual execution time of that system. Furthermore, these models should be causally connected to the system being modelled, offering a reflective capability. Significant advances have been made in recent years in applying this concept, most notably in adaptive systems. In this paper we argue that a similar approach can also be used to support the dynamic generation of software artefacts at execution time. An important area where this is relevant is the generation of software mediators to tackle the crucial problem of interoperability in distributed systems. We refer to this approach as emergent middleware, representing a fundamentally new approach to resolving interoperability problems in the complex distributed systems of today. In this context, the runtime models are used to capture meta-information about the underlying networked systems that need to interoperate, including their interfaces and additional knowledge about their associated behaviour. This is supplemented by ontological information to enable semantic reasoning. This paper focuses on this novel use of models at runtime, examining in detail the nature of such runtime models coupled with consideration of the supportive algorithms and tools that extract this knowledge and use it to synthesise the appropriate emergent middleware.
Abstract:
Uncertainty can be defined as the difference between information that is represented in an executing system and the information that is both measurable and available about the system at a certain point in its life-time. A software system can be exposed to multiple sources of uncertainty produced by, for example, ambiguous requirements and unpredictable execution environments. A runtime model is a dynamic knowledge base that abstracts useful information about the system, its operational context and the extent to which the system meets its stakeholders' needs. A software system can successfully operate in multiple dynamic contexts by using runtime models that augment information available at design-time with information monitored at runtime. This chapter explores the role of runtime models as a means to cope with uncertainty. To this end, we introduce a well-suited terminology about models, runtime models and uncertainty and present a state-of-the-art summary on model-based techniques for addressing uncertainty both at development- and runtime. Using a case study about robot systems we discuss how current techniques and the MAPE-K loop can be used together to tackle uncertainty. Furthermore, we propose possible extensions of the MAPE-K loop architecture with runtime models to further handle uncertainty at runtime. The chapter concludes by identifying key challenges, and enabling technologies for using runtime models to address uncertainty, and also identifies closely related research communities that can foster ideas for resolving the challenges raised. © 2014 Springer International Publishing.
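The MAPE-K interplay discussed in the chapter can be sketched minimally: the runtime model plays the role of the shared knowledge base, refreshed by monitoring and consulted by analysis and planning. The robot battery scenario and the 20% threshold below are invented for illustration.

```python
# Minimal MAPE-K loop over a runtime model (the shared knowledge base "K").
# The robot battery scenario and the 20% threshold are illustrative only.
runtime_model = {"battery_pct": 100, "mode": "explore"}  # K: abstracted state

def monitor(sensor_reading):
    runtime_model["battery_pct"] = sensor_reading        # M: refresh the model

def analyse():
    return runtime_model["battery_pct"] < 20             # A: detect a violation

def plan():
    return "recharge" if analyse() else runtime_model["mode"]  # P: choose adaptation

def execute(new_mode):
    runtime_model["mode"] = new_mode                     # E: enact via the model

for reading in (80, 55, 15):   # simulated sensor stream
    monitor(reading)
    execute(plan())
```

Handling uncertainty at runtime would amount to the model also tracking confidence in its own entries, rather than treating each monitored value as exact.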
Abstract:
Latent topics derived by topic models such as Latent Dirichlet Allocation (LDA) are the result of hidden thematic structures which provide further insights into the data. The automatic labelling of such topics derived from social media poses however new challenges since topics may characterise novel events happening in the real world. Existing automatic topic labelling approaches which depend on external knowledge sources become less applicable here since relevant articles/concepts of the extracted topics may not exist in external sources. In this paper we propose to address the problem of automatic labelling of latent topics learned from Twitter as a summarisation problem. We introduce a framework which applies summarisation algorithms to generate topic labels. These algorithms are independent of external sources and only rely on the identification of dominant terms in documents related to the latent topic. We compare the efficiency of existing state-of-the-art summarisation algorithms. Our results suggest that summarisation algorithms generate better topic labels which capture event-related context compared to the top-n terms returned by LDA. © 2014 Association for Computational Linguistics.
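The core idea, labelling a topic from the dominant terms of its associated documents without any external knowledge source, can be sketched as follows. Plain term frequency stands in here for the summarisation algorithms the paper compares, and the tweets are invented.

```python
from collections import Counter

# Label a latent topic by the dominant terms of its associated documents,
# with no external knowledge source. Term frequency stands in for the
# summarisation algorithms compared in the paper; the tweets are invented.
def label_topic(documents, top_n=3, stopwords=frozenset({"the", "a", "as", "in"})):
    counts = Counter(
        token
        for doc in documents
        for token in doc.lower().split()
        if token not in stopwords
    )
    return [term for term, _ in counts.most_common(top_n)]

tweets_for_topic = [
    "train strike disrupts london commuters",
    "london train strike enters second day",
    "commuters stranded as strike halts train services",
]
label = label_topic(tweets_for_topic)
```

A real summarisation algorithm would score terms by more than raw frequency (e.g. by centrality), but the input and output have the same shape.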
Abstract:
In recent years, there has been an increasing interest in learning a distributed representation of word sense. Traditional context clustering based models usually require careful tuning of model parameters, and typically perform worse on infrequent word senses. This paper presents a novel approach which addresses these limitations by first initializing the word sense embeddings through learning sentence-level embeddings from WordNet glosses using a convolutional neural network. The initialized word sense embeddings are used by a context clustering based model to generate the distributed representations of word senses. Our learned representations outperform the publicly available embeddings on 2 out of 4 metrics in the word similarity task, and 6 out of 13 subtasks in the analogical reasoning task.
Abstract:
In recent years, there has been an increasing interest in learning a distributed representation of word sense. Traditional context clustering based models usually require careful tuning of model parameters, and typically perform worse on infrequent word senses. This paper presents a novel approach which addresses these limitations by first initializing the word sense embeddings through learning sentence-level embeddings from WordNet glosses using a convolutional neural network. The initialized word sense embeddings are used by a context clustering based model to generate the distributed representations of word senses. Our learned representations outperform the publicly available embeddings on half of the metrics in the word similarity task, 6 out of 13 subtasks in the analogical reasoning task, and give the best overall accuracy in the word sense effect classification task, which shows the effectiveness of our proposed distributed representation learning model.
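The gloss-based initialization step shared by the two abstracts above can be sketched with toy vectors: a sense embedding starts as the mean of its gloss-token embeddings (mean pooling standing in for the convolutional encoder), and a context is then assigned to its nearest sense. The glosses and 2-d vectors below are invented for illustration.

```python
import numpy as np

# Sketch of gloss-based sense initialization: each sense embedding starts
# from its WordNet-style gloss (mean pooling stands in for the convolutional
# encoder), then a context is assigned to the nearest sense. Toy 2-d vectors
# and glosses are invented for illustration.
word_vecs = {
    "financial": np.array([1.0, 0.0]), "institution": np.array([0.9, 0.1]),
    "money": np.array([0.8, 0.2]), "river": np.array([0.0, 1.0]),
    "sloping": np.array([0.1, 0.9]), "land": np.array([0.2, 0.8]),
}
glosses = {  # invented stand-ins for the glosses of "bank"
    "bank.finance": ["financial", "institution", "money"],
    "bank.river": ["sloping", "land", "river"],
}

def init_sense_embedding(gloss_tokens):
    """Mean of gloss-token vectors, as the starting sense embedding."""
    return np.mean([word_vecs[t] for t in gloss_tokens], axis=0)

sense_vecs = {s: init_sense_embedding(g) for s, g in glosses.items()}

def nearest_sense(context_tokens):
    """Assign a context to its closest initialized sense (cosine similarity)."""
    ctx = np.mean([word_vecs[t] for t in context_tokens], axis=0)
    cos = lambda v: ctx @ v / (np.linalg.norm(ctx) * np.linalg.norm(v))
    return max(sense_vecs, key=lambda s: cos(sense_vecs[s]))

sense = nearest_sense(["money", "institution"])
```

In the papers' approach the clusters would then be refined during training rather than left at their gloss-derived starting points.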