177 results for Competitive landscape
Abstract:
Strategy is a contested concept. The generic literature is characterized by a diverse range of competing theories and alternative perspectives. Traditional models of the competitive strategy of construction firms have tended to focus on exogenous factors. In contrast, the resource-based view of strategic management emphasizes the importance of endogenous factors. The more recently espoused concept of dynamic capabilities extends consideration beyond static resources to focus on the ability of firms to reconfigure their operating routines to enable responses to changing environments. The relevance of the dynamic capabilities framework to the construction sector is investigated through an exploratory case study of a regional contractor. The focus on how firms continuously adapt to changing environments provides new insights into competitive strategy in the construction sector. Strong support is found for the importance of path dependency in shaping strategic choice. The case study further suggests that strategy is a collective endeavour enacted by a loosely defined group of individual actors. Dynamic capabilities are characterized by an empirical elusiveness and as such are best construed as situated practices embedded within a social and physical context.
Abstract:
Competitive Dialogue (CD) is a new contract award procedure of the European Community (EC), set out in Article 29 of the 'Public Sector Directive' 2004/18/EC. Over recent decades projects have become increasingly complex, and the existing EC procedures were no longer suitable for procuring them; the call for a new procedure resulted in CD. This paper describes how the Directive has been implemented into the laws of two member states: the UK and the Netherlands. In order to implement the Directive, both lawmakers have set up a new and distinct piece of legislation. In each case, large parts of the Directive's content have been repeated 'word for word'; only minor parts have been reworded and/or restructured. In the next part of the paper, the CD procedure is examined in different respects. First, an overview is given of the different EC contract award procedures (open, restricted, negotiated, CD) and awarding methods (lowest price and Most Economically Advantageous Tender, MEAT). Second, the applicability of CD is described: among other limitations, CD can only be applied to public contracts for works, supplies, and services, and this scope of application is further restricted by the exclusion of certain contract types. One such exclusion concerns service concessions, which means that PPP contracts set up as service concessions cannot be awarded by CD. The last two parts of the paper pertain to the main features of the CD procedure – from 'contract notice' to 'contract award' – and to its advantages and disadvantages. One advantage is that the dialogue allows the complexity of the project to be disentangled and clarified; others are the stimulation of innovation and creativity. These advantages are set against the procedure's disadvantages, which include high transaction costs and a perceived hindrance of innovation (owing to an ambiguity between transparency and fair competition). It is concluded that all advantages and disadvantages relate to one of three elements: communication, competition, and/or the structure of the procedure. Further research is needed to establish how these elements are related.
Abstract:
Elevated levels of low-density lipoprotein cholesterol (LDL-C) in the plasma are a well-established risk factor for the development of coronary heart disease. Plasma LDL-C levels are in part determined by the rate at which LDL particles are removed from the bloodstream by hepatic uptake. The uptake of LDL by mammalian liver cells occurs mainly via receptor-mediated endocytosis, a process which entails the binding of these particles to specific receptors in specialised areas of the cell surface, the subsequent internalization of the receptor-lipoprotein complex, and ultimately the degradation and release of the ingested lipoproteins' constituent parts. We formulate a mathematical model to study the binding and internalization (endocytosis) of LDL and VLDL particles by hepatocytes in culture. The system of ordinary differential equations, which includes a cholesterol-dependent pit production term representing feedback regulation of surface receptors in response to intracellular cholesterol levels, is analysed using numerical simulations and steady-state analysis. Our numerical results show good agreement with in vitro experimental data describing LDL uptake by cultured hepatocytes following delivery of a single bolus of lipoprotein. Our model is adapted in order to reflect the in vivo situation, in which lipoproteins are continuously delivered to the hepatocyte. In this case, our model suggests that the competition between the LDL and VLDL particles for binding to the pits on the cell surface affects the intracellular cholesterol concentration. In particular, we predict that when there is continuous delivery of low levels of lipoproteins to the cell surface, more VLDL than LDL occupies the pit, since VLDL are better competitors for receptor binding. VLDL have a cholesterol content comparable to LDL particles; however, due to the larger size of VLDL, one pit-bound VLDL particle blocks binding of several LDLs, and there is a resultant drop in the intracellular cholesterol level. When there is continuous delivery of lipoprotein at high levels to the hepatocytes, VLDL particles still out-compete LDL particles for receptor binding, and consequently more VLDL than LDL particles occupy the pit. Although the maximum intracellular cholesterol level is similar for high and low levels of lipoprotein delivery, the maximum is reached more rapidly when the lipoprotein delivery rates are high. The implications of these results for the design of in vitro experiments are discussed.
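As a purely illustrative aid (a minimal sketch, not the authors' model), the fragment below sets up a four-variable ODE system in which LDL and VLDL compete for a shared pool of surface pits, pit production is repressed by intracellular cholesterol, and binding and internalization are lumped into a single step. All rate constants (k_on_L, k_on_V, chol_L, chol_V, k_deg, p_max, K_c) and the delivery rates are hypothetical placeholders, not values from the paper.

```python
from scipy.integrate import solve_ivp

# Illustrative rate constants (hypothetical values, not fitted to any data)
k_on_L, k_on_V = 1.0, 4.0     # pit binding rates; VLDL taken to be the better competitor
chol_L, chol_V = 1.0, 1.2     # cholesterol delivered per internalised particle
k_deg = 0.1                   # turnover of intracellular cholesterol
p_max, K_c = 2.0, 1.0         # pit production, repressed by intracellular cholesterol

def rhs(t, y, L_in, V_in):
    L, V, P, C = y            # free LDL, free VLDL, empty pits, intracellular cholesterol
    bind_L = k_on_L * L * P   # binding and internalisation lumped into a single step
    bind_V = k_on_V * V * P
    dL = L_in - bind_L
    dV = V_in - bind_V
    dP = p_max * K_c / (K_c + C) - bind_L - bind_V   # cholesterol-dependent pit production
    dC = chol_L * bind_L + chol_V * bind_V - k_deg * C
    return [dL, dV, dP, dC]

# Continuous low-level delivery of both lipoproteins to the cell surface
sol = solve_ivp(rhs, (0.0, 50.0), [0.0, 0.0, 1.0, 0.0], args=(0.1, 0.1))
print(sol.y[:, -1])           # approximate long-time levels of [LDL, VLDL, pits, cholesterol]
```

Because the VLDL binding rate is set higher, most of the binding flux in this toy system goes to VLDL; it echoes only the qualitative competition effect, while the quantitative predictions above come from the authors' full model.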
Abstract:
Space applications are challenged by the reliability of parallel computing systems (FPGAs) employed in spacecraft, owing to Single-Event Upsets. The work reported in this paper aims to achieve self-managing systems which are reliable for space applications by applying autonomic computing constructs to parallel computing systems. A novel technique, 'Swarm-Array Computing', inspired by swarm robotics and built on the foundations of autonomic and parallel computing, is proposed as a path to achieve autonomy. The constitution of swarm-array computing, comprising four constituents, namely the computing system, the problem/task, the swarm, and the landscape, is considered. Three approaches that bind these constituents together are proposed. The feasibility of one of the three proposed approaches is validated on the SeSAm multi-agent simulator, with landscapes representing the computing space and problem generated using MATLAB.
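As a loose, purely illustrative sketch (not taken from the paper, and far simpler than the SeSAm experiments), the toy simulation below places task-carrying agents on a grid of cores; when a simulated Single-Event Upset marks a core as faulty, the affected agent migrates its task to a healthy neighbouring core. The grid size, failure probability, and neighbourhood rule are arbitrary assumptions made only to illustrate the kind of self-managing behaviour the approach aims for.

```python
import random

SIZE = 5                                    # 5x5 grid of cores (the "landscape")
healthy = {(r, c): True for r in range(SIZE) for c in range(SIZE)}
agents = {i: (random.randrange(SIZE), random.randrange(SIZE)) for i in range(6)}

def neighbours(pos):
    r, c = pos
    cand = [(r + dr, c + dc) for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))]
    return [p for p in cand if p in healthy]

for step in range(20):
    # inject a random fault (a toy Single-Event Upset) with some probability
    if random.random() < 0.3:
        healthy[random.choice(list(healthy))] = False
    # each agent checks its core and migrates its task if the core has failed
    for agent, pos in list(agents.items()):
        if not healthy[pos]:
            safe = [p for p in neighbours(pos) if healthy[p]]
            if safe:
                agents[agent] = random.choice(safe)
                print(f"step {step}: agent {agent} moved task {pos} -> {agents[agent]}")

print("final agent placement:", agents)
```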
Abstract:
The Learning Landscape project described here, known as RedGloo, has several objectives; among others, it aims to help students make friends and contacts and to join communities based on interests and competencies. RedGloo provides a space where students can support each other with personal, academic, and career development, sharing insights gained from extracurricular activities as well as from their degree programmes. It has shown signs of becoming a learning community comprising several communities of practice.
Abstract:
Many evolutionary algorithm applications involve fitness functions with high time complexity, large dimensionality (and hence typically very many fitness evaluations), or both. In such circumstances there is a pressing need to tune various features of the algorithm well so that performance and time savings are optimized. However, these are precisely the circumstances in which prior tuning is very costly in time and resources; methods are therefore needed that enable fast prior tuning in such cases. We describe a candidate technique for this purpose, in which we model a landscape as a finite state machine inferred from preliminary sampling runs. In prior algorithm-tuning trials, the 'real' landscape can be replaced with the model, enabling extremely fast tuning and saving far more time than was required to infer the model. Preliminary results indicate much promise, though much work remains to establish the conditions under which the technique can be most beneficially used. A main limitation of the method as described here is its restriction to mutation-only algorithms, but there are various ways to address this and other limitations.
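As a rough sketch of the general idea (not the authors' implementation), the fragment below infers a coarse finite-state model from random mutation walks on a cheap stand-in fitness function (OneMax), taking fitness bins as states and empirical transition counts as edges, and then runs mutation-only tuning trials entirely on the model. The tuned parameter (an acceptance temperature), the binning, and all sampling settings are assumptions for illustration only.

```python
import math
import random
from collections import defaultdict

N_BITS, N_STATES = 20, 10

def real_fitness(x):                         # stand-in for an expensive fitness evaluation
    return sum(x)                            # OneMax, for illustration only

def mutate(x, rate=0.05):
    return [(1 - b) if random.random() < rate else b for b in x]

def infer_fsm(n_walks=40, walk_len=60):
    """Preliminary sampling: random mutation walks; states = fitness bins."""
    bin_of = lambda f: min(N_STATES - 1, f * N_STATES // N_BITS)
    trans = defaultdict(lambda: defaultdict(int))
    for _ in range(n_walks):
        x = [random.randint(0, 1) for _ in range(N_BITS)]
        s = bin_of(real_fitness(x))
        for _ in range(walk_len):
            x = mutate(x)
            s2 = bin_of(real_fitness(x))
            trans[s][s2] += 1
            s = s2
    return trans

def surrogate_trial(trans, temperature, steps=300):
    """One mutation-only run replayed on the FSM; no real fitness calls."""
    s = min(trans)                           # start from the worst sampled bin
    for _ in range(steps):
        nxt = trans.get(s)
        if not nxt:
            break
        cand = random.choices(list(nxt), list(nxt.values()))[0]
        # always accept improvements; accept worsening moves with a
        # temperature-dependent probability (Metropolis-style rule)
        if cand >= s or random.random() < math.exp((cand - s) / temperature):
            s = cand
    return s

fsm = infer_fsm()                            # real evaluations happen only here
for temperature in (0.1, 0.5, 2.0):          # cheap tuning trials on the model
    mean = sum(surrogate_trial(fsm, temperature) for _ in range(200)) / 200
    print(f"temperature={temperature}: mean final fitness bin {mean:.2f}")
```

Real fitness evaluations occur only during the sampling phase; every surrogate trial afterwards touches only the transition table, which is where the time saving in prior tuning comes from.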