884 results for Popularity.


Relevance: 10.00%

Abstract:

Challenges in returnable transport equipment (RTE) management continue to heighten as the popularity of its usage grows. Logistics companies are investigating the implementation of radio-frequency identification (RFID) technology to address issues such as loss prevention and stock reduction. However, research in this field is limited and fails to explore in depth the wider network improvements that can be made to optimize the supply chain through efficient RTE management. This paper investigates the nature of RTE network management, building on current research and practices and filling a gap in the literature through the investigation of a product-centric approach in which the paradigms of “intelligent products” and “autonomous objects” are explored. A network-optimizing approach to RTE management is explored, encouraging advanced research development of the RTE paradigm to align academic research with problematic areas in industry. Further research continues with the development of an agent-based software system, ready for application to a real case-study distribution network, producing quantitative results for further analysis. This is pivotal in the endeavor to develop agile support systems that fully utilize an information-centric environment and encourage RTE to be viewed as critical network-optimizing tools rather than costly waste.
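As an illustration of the product-centric, agent-based direction described above, the toy sketch below treats each RTE unit as an "intelligent product" agent that carries its own state, as an RFID-tagged crate would in an information-centric network. All names and figures are invented for illustration; this is not the paper's system.

    import random

    class Crate:
        """An 'intelligent product': each RTE unit carries its own state,
        as an RFID-tagged crate would in an information-centric network."""
        def __init__(self, crate_id, location):
            self.id, self.location, self.trips = crate_id, location, 0
        def move(self, destination):
            self.location = destination
            self.trips += 1

    # Toy network: crates shuttle between a depot and retailers.
    random.seed(1)
    retailers = ["store_A", "store_B", "store_C"]
    fleet = [Crate(i, "depot") for i in range(100)]
    for day in range(30):
        for crate in fleet:
            if crate.location == "depot":
                crate.move(random.choice(retailers))
            elif random.random() < 0.8:      # 20% chance of dwelling at the retailer each day
                crate.move("depot")
    utilisation = sum(c.trips for c in fleet) / (len(fleet) * 30)
    print(f"mean trips per crate-day: {utilisation:.2f}")

Even a skeleton like this yields the kind of quantitative network measure (here, crate utilisation) that the abstract argues should drive RTE management decisions.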

Relevance: 10.00%

Abstract:

Scenario Planning is a strategy tool with growing popularity in both academia and practice. Current approaches to teaching scenario planning are largely based on the existing literature, which utilises scenario planning to develop strategies for the future, primarily considering the assessment of perceived macro-external environmental uncertainties. However, there is a body of literature, hitherto ignored by scenario planning researchers, which suggests that Perceived Environmental Uncertainty (PEU) influences the micro-external or industrial environment as well as the internal environment of the organisation. This paper provides a review of the most dominant theories on the scenario planning process, demonstrates the need to consider PEU theory within scenario planning, and presents how this can be done. The scope of this paper is to enhance the scenario planning process as a tool taught for Strategy Development. A case vignette, developed from published scenarios, demonstrates the potential utilisation of the proposed process.

Relevance: 10.00%

Abstract:

Knowledge has been a subject of interest and inquiry for thousands of years, since at least the time of the ancient Greeks, and no doubt even before that. “What is knowledge” continues to be an important topic of discussion in philosophy. More recently, interest in managing knowledge has grown in step with the perception that, increasingly, we live in a knowledge-based economy. Drucker (1969) is usually credited as being the first to popularize the knowledge-based economy concept, by linking the importance of knowledge with rapid technological change. Karl Wiig coined the term knowledge management (hereafter KM) for a NATO seminar in 1986, and its popularity took off following the publication of Nonaka and Takeuchi’s book “The Knowledge Creating Company” (Nonaka & Takeuchi, 1995). Knowledge creation is in fact just one of many activities involved in KM; others include sharing, retaining, refining, and using knowledge, and there are many such lists of activities (Holsapple & Joshi, 2000; Probst, Raub, & Romhardt, 1999; Skyrme, 1999; Wiig, De Hoog, & Van der Spek, 1997). Both academic and practical interest in KM has continued to increase throughout the last decade. In this article, the different types of knowledge are first outlined, followed by a discussion of the various routes by which knowledge management can be implemented, advocating a process-based route. An explanation follows of how people, processes, and technology need to fit together for effective KM, with some examples of this route in use. Finally, there is a look towards the future.

Relevance: 10.00%

Abstract:

The aim of this project was to carry out an investigation into suitable alternatives to gasoline for use in modern automobiles. Such a fuel would provide the western world with a means of extending natural gasoline resources, and the third world with a way of cutting its dependence on the oil-producing countries for its energy supply. Alcohols, namely methanol and ethanol, provide this solution. They can be used as gasoline extenders or as fuels on their own.

In order to fulfil the aims of the project, a literature study was carried out to investigate methods and costs of producing these fuels. An experimental programme was then set up in which the performance of the alcohols was studied on a conventional engine. The engine used for this purpose was the Fiat 127 930cc four-cylinder engine, chosen because of its popularity in the European countries. The Weber fixed-jet carburettor, designed for use with gasoline, was adapted so that the alcohol fuels and the blends could be used in the most efficient way, mainly to take account of the lower heat content of the alcohols. The adaptation took the form of enlarging the main metering jet. Allowances for the alcohols' lower specific gravity were made during fuel metering.

Owing to the low front-end volatility of methanol and ethanol, it was expected that 'start-up' problems would occur. An experimental programme was set up to determine the temperature range for a minimum required percentage 'take-off' that would ease start-up, since it was determined that a 'take-off' of about 5% v/v liquid in the vapour phase would be sufficient for starting. Additives such as iso-pentane and n-pentane were used to improve the front-end volatility. This proved to be successful.

The lower heat content of the alcohol fuels also meant that a greater charge of fuel would be required. This posed further problems for fuel distribution from the carburettor to the individual cylinders of a multi-cylinder engine. Since it was not possible to modify the existing manifold on the Fiat 127 engine, experimental tests on manifold geometry were carried out using the Ricardo E6 single-cylinder variable-compression engine. Results from these tests showed that the length, shape and cross-sectional area of the manifold play an important part in the distribution of the fuel entering the cylinder, i.e. vapour phase, vapour/small liquid droplet/liquid film phase, vapour/large liquid droplet/liquid film phase, etc.

The solvent properties of the alcohols and their greater electrical conductivity suggested that the materials used in the engine would be prone to chemical attack. In order to determine the type and rate of chemical attack, an experimental programme was set up whereby carburettor and other components were immersed in the alcohols and in blends of alcohol with gasoline. The test fuels were aerated and in some instances kept at temperatures ranging from 50°C to 90°C. Results from these tests suggest that not all materials used in the conventional engine are equally suitable for use with alcohols and alcohol/gasoline blends. Aluminium, for instance, was severely attacked by methanol, causing pitting and pin-holing of the surface.

In general, this experimental programme gave valuable information on the acceptability of substitute fuels. While the long-term effects of alcohol use merit further study, it is clear that methanol and ethanol will be increasingly used in place of gasoline.
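As a rough guide to the scale of the jet enlargement described above: for the same air flow, the required fuel mass flow scales inversely with the stoichiometric air-fuel ratio, and orifice area scales with mass flow divided by the square root of fuel density. The sketch below uses typical handbook property values, not the thesis's measured figures, so the ratios are illustrative only.

    # Rough, illustrative sizing of the enlarged main jet.
    # For the same air flow, fuel mass flow ~ 1 / stoichiometric AFR;
    # through an orifice, mass flow ~ area * sqrt(density), so:
    #   A_alcohol / A_gasoline = (AFR_gas / AFR_alc) * sqrt(rho_gas / rho_alc)
    from math import sqrt

    afr = {"gasoline": 14.7, "methanol": 6.5, "ethanol": 9.0}       # stoichiometric air-fuel ratios
    rho = {"gasoline": 740.0, "methanol": 792.0, "ethanol": 789.0}  # densities, kg/m^3

    for fuel in ("methanol", "ethanol"):
        area_ratio = (afr["gasoline"] / afr[fuel]) * sqrt(rho["gasoline"] / rho[fuel])
        print(f"{fuel}: jet area x{area_ratio:.2f}, diameter x{sqrt(area_ratio):.2f}")
    # methanol: jet area x2.19, diameter x1.48
    # ethanol:  jet area x1.58, diameter x1.26

These figures make clear why neat methanol in particular demands a substantially enlarged jet, and why blends need proportionally smaller corrections.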

Relevance: 10.00%

Abstract:

A number of researchers have investigated the impact of network architecture on the performance of artificial neural networks. Particular attention has been paid to the impact of architectural issues on the performance of the multi-layer perceptron, and to the use of various strategies to attain an optimal network structure. However, there are still perceived limitations with the multi-layer perceptron, and networks that employ a different architecture have gained in popularity in recent years, particularly networks that implement a more localised solution, in which the solution in one area of the problem space has little or no impact on other areas of the space. In this study, we discuss the major architectural issues affecting the performance of a multi-layer perceptron, before moving on to examine in detail the performance of a new localised network, namely the bumptree. The work presented here examines the impact on the performance of artificial neural networks of employing alternatives to the long-established multi-layer perceptron, in particular networks in which each parameter of the final architecture has a localised impact on the problem space being modelled. The alternatives examined are the radial basis function and bumptree neural networks, and the impact of architectural issues on the performance of these networks is examined. Particular attention is paid to the bumptree, with new techniques examined both for developing the bumptree structure and for employing this structure to classify patterns.
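To make the notion of locality concrete, here is a minimal sketch of a radial basis function classifier, one of the two localized alternatives named above (the bumptree structure is thesis-specific and not reproduced here). Each Gaussian hidden unit responds only near its centre, so its output weight shapes the solution in just one region of the problem space. Data, centres and widths are illustrative.

    import numpy as np

    def rbf_features(X, centres, width):
        """Gaussian basis functions: each hidden unit responds only near its
        centre, so its weight mainly affects one region of the input space."""
        d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
        return np.exp(-d2 / (2.0 * width ** 2))

    def fit_rbf(X, y, centres, width, ridge=1e-6):
        """Fit the output weights by regularised least squares (closed form)."""
        Phi = rbf_features(X, centres, width)
        A = Phi.T @ Phi + ridge * np.eye(len(centres))
        return np.linalg.solve(A, Phi.T @ y)

    # Toy two-class problem: points scattered around two centres.
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(3, 0.5, (50, 2))])
    y = np.array([0.0] * 50 + [1.0] * 50)
    centres = np.array([[0.0, 0.0], [3.0, 3.0]])   # in practice, e.g. from k-means
    w = fit_rbf(X, y, centres, width=1.0)
    pred = rbf_features(X, centres, 1.0) @ w > 0.5
    print(f"training accuracy: {(pred == (y > 0.5)).mean():.2f}")

The contrast with the multi-layer perceptron is that here adjusting one hidden unit's weight changes the decision surface only near that unit's centre, whereas an MLP weight update can perturb the fit everywhere.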

Relevance: 10.00%

Abstract:

There has been substantial research into the role of distance learning in education. Despite the rise in the popularity and practice of this form of learning in business, there has not been a parallel increase in the amount of research carried out in this field. An extensive investigation was conducted into the entire distance learning system of a multi-national company, with particular emphasis on the design, implementation and evaluation of the materials. In addition, the performance and attitudes of trainees were examined. The results of a comparative study indicated that trainees using distance learning had significantly higher test scores than trainees using conventional face-to-face training. The influence of trainees' previous distance learning experience, educational background and selected study environment was investigated. Trainees with previous experience of distance learning were more likely to complete the course, and achieved significantly higher test scores, than trainees with no previous experience. The more advanced the educational background of trainees, the greater the likelihood of their completing the course, although there was no significant difference in the test scores achieved. Trainees preferred to use the materials at home, and those opting to study in this environment scored significantly higher than those studying in the office, in the study room at work, or in a combination of environments. The influence of learning styles (Kolb, 1976) was tested. The results indicated that convergers had the greatest completion rates and scored significantly higher than trainees with the assimilator, accommodator and diverger learning styles. The attitudes of the trainees, supervisors and trainers were examined using questionnaire, interview and discussion techniques. The findings highlighted the potential problems of lack of awareness and low motivation, which could prove to be major obstacles to the success of distance learning in business.

Relevance: 10.00%

Abstract:

In the 1960s the benefits of government regulation of technology were believed to outweigh any costs. But recent studies have claimed that regulation has negative effects on innovation, health and consumer choice. This case study on food colours examines such claims. EFFECTS ON HEALTH were measured by allocating a hazard rating to each colour. The negative list of 1925 removed three harmful colours which were rapidly replaced, so the benefits were short-lived. Had a proposed ban been adopted in the 1860s, it would have prevented many years' exposure to hazardous mineral colours. The positive list of 1957 reduced the proportion of harmful coal tar dyes from 54% of the total to 20%. Regulations brought a greater reduction in hazard levels than voluntary trade action. Delays in the introduction of a positive list created a significant hazard burden. EFFECTS ON INNOVATION were assessed from patents and discovery dates. Until the 1950s food colours were adopted from textile colours. The major period of innovation for coal tar colours was between 1856 and 1910, finishing well before regulations were made in 1957, so regulations cannot be blamed for the decline. Regulations appear to have spurred the development of at least one new coal tar dye, and of many new plant colours, creating a new sector of the dye industry. EFFECTS ON CONSUMER CHOICE were assessed by case studies. Coloured milk, for example, was banned despite its popularity. Regulations have restricted choice, but have removed from the market foods that were nutritionally impoverished and poor value for money. Compositional regulations provided health protection because they reduced total exposure to colours from certain staple foods. Restricting colours to a smaller range of foods would be an effective way of coping with problems of quality and imperfect toxicological knowledge today.

Relevance: 10.00%

Abstract:

Despite its increasing popularity, much intercultural training is not developed with the same level of rigour as training in other areas. Further, research on intercultural training has produced inconsistent results about the effectiveness of such training. This PhD thesis develops a rigorous model of intercultural training and applies it to the preparation of British students going on work/study placements in France and Germany. It investigates the reasons for inconsistent training success by looking at the cognitive learning processes in intercultural training, relating them to training goals, and by examining the short- and long-term transfer of intercultural training into real-life encounters with people from other cultures. Two cognitive trainings based on critical incidents were designed for online delivery. The training content relied on cultural practice dimensions from the GLOBE study (House, Hanges, Javidan, Dorfman & Gupta, 2004). Of the two trainings, the 'single-mode training' aimed to develop declarative knowledge, which is necessary to analyse and understand other cultures. The 'concurrent training' aimed to develop declarative and procedural knowledge, which is needed to develop skills for dealing with difficult situations in a culturally appropriate way. Participants (N = 48) were randomly assigned to one of the two training conditions. Declarative learning appeared as a process of steady knowledge increase, while procedural learning involved cognitive re-categorisation rather than knowledge increase. In a negotiation role play with host-country nationals directly after the online training, participants of the concurrent training exhibited a negotiation style showing more initiative than participants of the single-mode training. Comparing the cultural adjustment and performance of training participants during their time abroad with an untrained control group, participants of the concurrent training showed the qualitatively best development in adjustment and performance. Besides intercultural training, multicultural personality traits were assessed and proved to be a powerful predictor of adjustment and, indirectly, of performance abroad.

Relevance: 10.00%

Abstract:

Despite the increasing popularity of research on intercultural preparation and its effectiveness, research on training for inpatriates has not been developed with the same level of rigour as research on training for expatriates. Furthermore, research on intercultural training hardly ever includes the aspect of preparing for the corporate culture of a company. For expatriates coming from headquarters' national culture and equipped with a good knowledge of headquarters' corporate culture, it might be sufficient to address only the national culture of the location abroad. But can the same be said for inpatriates coming from a foreign subsidiary? The qualitative research in my thesis was therefore aimed at finding out whether intercultural training programmes that address only the national culture of the host country are sufficient to prepare inpatriates for working at headquarters. A case study of a German multinational company was conducted in order to find out what kinds of problems and irritations inpatriates at the company's headquarters perceive at work. In order to determine whether the findings are related to the national or the corporate culture, Hall's and Hofstede's approaches to culture were used. The interview analysis produced the following conclusion: although the researched company promotes standardised worldwide corporate guidelines, there are many differences between headquarters and subsidiaries regarding the interpretation and realisation of these guidelines. These differences cause irritation, confusion and problems for the inpatriates. An effective intercultural preparation for inpatriates should therefore be tailor-made and take into account the aspect of corporate culture, as well as the specific roles and functions of inpatriates.

Relevance: 10.00%

Abstract:

In analysing manufacturing systems, for either design or operational reasons, failure to account for the potentially significant dynamics could produce invalid results. There are many analysis techniques that can be used; however, simulation is unique in its ability to assess detailed, dynamic behaviour. The use of simulation to analyse manufacturing systems would therefore seem appropriate, if not essential. Many simulation software products are available, but their ease of use and scope of application vary greatly. This is illustrated at one extreme by simulators, which offer rapid but limited application, and at the other by simulation languages, which are extremely flexible but tedious to code. Given that a typical manufacturing engineer does not possess in-depth programming and simulation skills, the use of simulators over simulation languages would seem the more appropriate choice. Whilst simulators offer ease of use, their limited functionality may preclude their use in many applications. The construction of current simulators makes it difficult to amend or extend the functionality of the system to meet new challenges. Some simulators could even become obsolete as users demand modelling functionality that reflects the latest manufacturing system design and operation concepts. This thesis examines the deficiencies in current simulation tools and considers whether they can be overcome by the application of object-oriented principles. Object-oriented techniques have gained in popularity in recent years and are seen as having the potential to overcome many of the problems traditionally associated with software construction. There are a number of key concepts that are exploited in the work described in this thesis: the use of object-oriented techniques to act as a framework for abstracting engineering concepts into a simulation tool, and the ability to reuse and extend object-oriented software. It is argued that current object-oriented simulation tools are deficient and that, in designing such tools, object-oriented techniques should be used not just for the creation of individual simulation objects but for the creation of the complete software. This results in the ability to construct an easy-to-use simulator that is not limited by its initial functionality. The thesis presents the design of an object-oriented, data-driven simulator which can be freely extended. Discussion and work are focused on discrete parts manufacture. The system developed retains the ease of use typical of data-driven simulators, whilst removing any limitation on its potential range of applications. Reference is made to additions made to the simulator by other developers not involved in the original software development. Particular emphasis is put on the requirements of the manufacturing engineer and the need for the engineer to carry out dynamic evaluations.
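The toy engine below sketches the kind of extensibility argued for (it is not the thesis's actual design): the event-queue engine is generic, and new manufacturing behaviour is added by subclassing Resource rather than by modifying the engine. All names and figures are illustrative.

    import heapq

    class Resource:
        """Base class: new resource types extend this without touching the engine."""
        def __init__(self, name, cycle_time):
            self.name, self.cycle_time = name, cycle_time
        def start(self, sim, part):
            # Schedule completion of this part after one cycle time.
            sim.schedule(sim.now + self.cycle_time, self.finish, part)
        def finish(self, sim, part):
            print(f"{sim.now:6.1f}: {self.name} finished part {part}")

    class Simulator:
        """Tiny discrete event engine: a time-ordered queue of callbacks."""
        def __init__(self):
            self.now, self._queue, self._seq = 0.0, [], 0
        def schedule(self, time, action, *args):
            self._seq += 1                      # tie-breaker keeps heap ordering stable
            heapq.heappush(self._queue, (time, self._seq, action, args))
        def run(self, until):
            while self._queue and self._queue[0][0] <= until:
                self.now, _, action, args = heapq.heappop(self._queue)
                action(self, *args)

    # Data-driven setup: resource definitions could equally come from a file.
    sim = Simulator()
    lathe = Resource("lathe", cycle_time=5.0)
    for i in range(3):
        sim.schedule(i * 2.0, lathe.start, i)   # parts arrive every 2 time units
    sim.run(until=30.0)

The design point is that a user extends the simulator by adding a subclass, so the initial functionality of the tool does not bound its eventual range of applications.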

Relevance: 10.00%

Abstract:

To exploit the popularity of TCP as the still-dominant protocol of choice for transporting data reliably across the heterogeneous Internet, this thesis explores end-to-end performance issues and behaviours of TCP senders when transferring data to wireless end-users. The theme throughout is end-users located specifically within 802.11 WLANs at the edges of the Internet, a largely untapped area of work. To serve the interests of researchers wanting to study the performance of TCP accurately over heterogeneous conditions, this thesis proposes a flexible wired-to-wireless experimental testbed that better reflects conditions in the real world. To exploit the transparent functionalities between TCP in the wired domain and the IEEE 802.11 WLAN protocols, this thesis proposes a more accurate methodology for gauging the transmission and error characteristics of real-world 802.11 WLANs, and aims to correlate any findings with the functionality of fixed TCP senders. To exploit the popularity of Linux as an operating system for many of the Internet's data servers, this thesis studies and evaluates various sender-side TCP congestion control implementations within the recent Linux v2.6. A selection of the implementations is put under systematic testing using real-world wired-to-wireless conditions in order to screen and present viable candidates for further development and usage in the modern-day heterogeneous Internet. Overall, this thesis comprises a set of systematic evaluations of TCP senders over 802.11 WLANs, incorporating measurements in the form of simulations, emulations, and a real-world-like experimental testbed. The goal of the work is to ensure that all aspects concerned are comprehensively investigated in order to establish rules that can help to decide under which circumstances the deployment of TCP is optimal, i.e. a set of paradigms for advancing the state-of-the-art in data transport across the Internet.
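For context on how such sender-side comparisons can be run: on Linux (including the v2.6 kernels studied here), the congestion control algorithm can be selected per socket via the TCP_CONGESTION socket option, so different implementations can be swapped in during testing without changing the system-wide default. A minimal sketch; the algorithm name must be one the local kernel has available.

    import socket

    # Linux-only. Available modules are listed in
    # /proc/sys/net/ipv4/tcp_available_congestion_control.
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, b"cubic")
    algo = s.getsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, 16)
    print(algo.rstrip(b"\x00").decode())   # -> cubic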

Relevance: 10.00%

Abstract:

Time after time… and aspect and mood. Over the last twenty-five years, the study of time, aspect and, to a lesser extent, mood acquisition has enjoyed increasing popularity and a constant widening of its scope. In such a teeming field, what can be the contribution of this book? We believe that it is unique in several respects. First, this volume encompasses studies from different theoretical frameworks: functionalism vs generativism, or function-based vs form-based approaches. It also brings together various sub-fields (first and second language acquisition, child and adult acquisition, bilingualism) that tend to evolve in parallel rather than learn from each other. A further originality is that it focuses on a wide range of typologically different languages, and features less studied languages such as Korean and Bulgarian. Finally, the book gathers some well-established scholars, young researchers, and even research students in a rich inter-generational exchange that ensures the survival but also the renewal and refreshment of the discipline.

The book at a glance

The first part of the volume is devoted to the study of child language acquisition in monolingual, impaired and bilingual acquisition, while the second part focuses on adult learners. In this section, we provide an overview of each chapter. The first study, by Aviya Hacohen, explores the acquisition of compositional telicity in Hebrew L1. Her psycholinguistic approach contributes valuable data to refine theoretical accounts. Through an innovative methodology, she gathers information from adults and children on the influence of definiteness, number, and the mass vs countable distinction on the constitution of a telic interpretation of the verb phrase. She notices that the notion of definiteness is mastered by children as young as 10, while the mass/count distinction does not appear before 10;7. However, this does not entail an adult-like use of telicity. She therefore concludes that, beyond definiteness and noun type, pragmatics may play an important role in the derivation of Hebrew compositional telicity. For the second chapter we move from a Semitic language to a Slavic one. Milena Kuehnast focuses on the acquisition of negative imperatives in Bulgarian, a form that presents the specificity of being grammatical only with the imperfective form of the verb. The study examines how 40 Bulgarian children distributed in two age groups (15 between 2;11 and 3;11, and 25 between 4;00 and 5;00) develop with respect to the acquisition of imperfective viewpoints and the use of imperfective morphology. It shows an evolution in the recourse to expression of force in the use of negative imperatives, as well as the influence of morphological complexity on the successful production of forms. With Yi-An Lin's study, we turn to another type of informant and another framework. Indeed, he studies the production of children suffering from Specific Language Impairment (SLI), a developmental language disorder the causes of which exclude cognitive impairment, psycho-emotional disturbance, and motor-articulatory disorders. Using the Leonard corpus in CLAN, Lin aims to test two competing accounts of SLI (the Agreement and Tense Omission Model [ATOM] and his own Phonetic Form Deficit Model [PFDM]) that conflict on the role attributed to spellout in the impairment.
Spellout is the point at which the Computational System for Human Language (CHL) passes over the most recently derived part of the derivation to the interface components, Phonetic Form (PF) and Logical Form (LF). ATOM claims that SLI sufferers have a deficit in their syntactic representation, while PFDM suggests that the problem only occurs at the spellout level. After studying the corpus from the point of view of tense/agreement marking, case marking, argument movement and auxiliary inversion, Lin finds further support for his model. Olga Gupol, Susan Rothstein and Sharon Armon-Lotem's chapter offers a welcome bridge between child language acquisition and multilingualism. Their study explores the influence of intensive exposure to L2 Hebrew on the development of L1 Russian tense and aspect morphology through an elicited narrative. Their informants are 40 Russian-Hebrew sequential bilingual children distributed in two age groups, 4;0-4;11 and 7;0-8;0. They come to the conclusion that bilingual children anchor their narratives in the perfective, like monolinguals. However, while aware of grammatical aspect, bilinguals lack the full form-function mapping and tend to overgeneralize the imperfective on the principles of simplicity (as imperfectives are the least morphologically marked forms), universality (as the imperfective covers more functions) and interference. Rafael Salaberry opens the second section, on foreign language learners. In his contribution, he reflects on the difficulty L2 learners of Spanish encounter when it comes to distinguishing between iterativity (conveyed with the use of the preterite) and habituality (expressed through the imperfect). He examines in turn the theoretical views that see, on the one hand, habituality as part of grammatical knowledge and iterativity as pragmatic knowledge, and on the other hand both habituality and iterativity as grammatical knowledge. He comes to the conclusion that the use of the preterite as a default past tense marker may explain the impoverished system of aspectual distinctions, not only at beginner but also at advanced levels, which may indicate that the system is differentially represented among L1 and L2 speakers. Acquiring the vast array of functions conveyed by a form is therefore no mean feat, as confirmed by the next study. Based on prototype theory, Kathleen Bardovi-Harlig's chapter focuses on the development of the progressive in L2 English. It opens with an overview of the functions of the progressive in English. Then, a review of acquisition research on the progressive in English and other languages is provided. The bulk of the chapter reports on a longitudinal study of 16 learners of L2 English and shows how their use of the progressive expands from the prototypical uses of process and continuousness to the less prototypical uses of repetition and future. The study concludes that the progressive spreads in interlanguage in accordance with prototype accounts. However, it suggests additional stages, not predicted by the Aspect Hypothesis, in the development from activities and accomplishments, at least for the meaning of repeatedness. A similar theoretical framework is adopted in the following chapter, but it deals with a lesser studied language. Hyun-Jin Kim revisits the claims of the Aspect Hypothesis in relation to the acquisition of L2 Korean by two L1 English learners.
Inspired by studies on L2 Japanese, she focuses on the emergence and spread of the past/perfective marker -ess- and the progressive -ko iss- in the interlanguage of her informants throughout their third and fourth semesters of study. The data, collected through six sessions of conversational interviews and picture description tasks, seem to support the Aspect Hypothesis. Indeed, learners show a strong association between past tense and accomplishments/achievements at the start and a gradual extension to other types; a limited use of the past/perfective marker with states; and an affinity of the progressive with activities/accomplishments and, later, achievements. In addition, -ko iss- moves from progressive to resultative in the specific category of Korean verbs meaning wear/carry. While the previous contributions focus on function, Evgeniya Sergeeva and Jean-Pierre Chevrot's contribution is interested in form. The authors explore the acquisition of verbal morphology in L2 French by 30 instructed native speakers of Russian distributed across low and high proficiency levels. They use an elicitation task for verbs with different models of stem alternation and study how token frequency and base forms influence stem selection. The analysis shows that frequency affects correct production, especially among learners with high proficiency. As for substitution errors, it appears that forms with a simple structure are systematically more frequent than the target forms they replace. When a complex form serves as a substitute, it is more frequent only when it is replacing another complex form. As regards the use of base forms, the 3rd person singular of the present, and to some extent the infinitive, play this role in the corpus. The authors therefore conclude that the processing of surface forms can be influenced positively or negatively by the frequency of the target forms and of other competing stems, and by the proximity of the target stem to a base form. Finally, Martin Howard's contribution takes up the challenge of focusing on the poorer relation of the TAM system. On the basis of L2 French data obtained through sociolinguistic interviews, he studies the expression of futurity, conditional and subjunctive in three groups of university learners with classroom teaching only (two or three years of university teaching) or with a mixture of classroom teaching and naturalistic exposure (two years at university plus one year abroad). An analysis of relative frequencies leads him to suggest a continuum of use going from the futurate present to the conditional with past hypothetic conditional clauses in si, which needs to be confirmed by further studies.

Acknowledgements

The present volume was inspired by the conference Acquisition of Tense – Aspect – Mood in First and Second Language, held on 9th and 10th February 2008 at Aston University (Birmingham, UK), where over 40 delegates from four continents and over a dozen countries met for lively and enjoyable discussions.
This collection of papers was double peer-reviewed by an international scientific committee made up of Kathleen Bardovi-Harlig (Indiana University), Christine Bozier (Lund Universitet), Alex Housen (Vrije Universiteit Brussel), Martin Howard (University College Cork), Florence Myles (Newcastle University), Urszula Paprocka (Catholic University of Lublin), †Clive Perdue (Université Paris 8), Michel Pierrard (Vrije Universiteit Brussel), Rafael Salaberry (University of Texas at Austin), Suzanne Schlyter (Lund Universitet), Richard Towell (Salford University), and Daniel Véronique (Université d'Aix-en-Provence). We are very much indebted to that scientific committee for their insightful input at each step of the project. We are also thankful for the financial support of the Association for French Language Studies through its workshop grant, and to the Aston Modern Languages Research Foundation for funding the proofreading of the manuscript.

Relevance: 10.00%

Abstract:

The thesis contributes to the evolving process of moving the study of complexity from the arena of metaphor to something real and operational. Acknowledging this phenomenon ultimately changes the underlying assumptions made about working environments and leadership: organisations are dynamic, and so should their leaders be. Dynamic leaders are behaviourally complex. Behavioural Complexity is a product of behavioural repertoire (the range of behaviours) and behavioural differentiation (whereby effective leaders apply behaviour appropriate to the demands of the situation). Behavioural Complexity was operationalised using the Competing Values Framework (CVF). The CVF is a measure that captures the extent to which leaders demonstrate four behaviours on four quadrants: Control, Compete, Collaborate and Create, which are argued to be critical to all types of organisational leadership. The results provide evidence to suggest that Behavioural Complexity is an enabler of leadership effectiveness; that Organisational Complexity (captured using a new measure developed in the thesis) moderates the relationship between Behavioural Complexity and leadership effectiveness; and that leadership training supports Behavioural Complexity in contributing to leadership effectiveness. Most definitions of leadership come down to changing people's behaviour. Such definitions have focused much leadership research on how to elicit change in others, when perhaps some of that attention should have been directed at eliciting change in the leaders themselves. It is hoped that this research will provoke interest in the factors that cause behavioural change in leaders, which in turn enable leadership effectiveness, and in doing so contribute to a better understanding of leadership in organisations.

Relevance: 10.00%

Abstract:

The computer simulation of manufacturing systems is commonly carried out using discrete event simulation (DES). Indeed, there appears to be a lack of applications of continuous simulation methods, particularly system dynamics (SD), despite evidence that this technique is suitable for industrial modelling. This paper investigates whether this is due to a decline in the general popularity of SD, or whether modelling of manufacturing systems represents a missed opportunity for SD. On this basis, the paper first gives a review of the concept of SD and fully describes the modelling technique. Following on, a survey of the published applications of SD in the 1990s is made by developing and using a structured classification approach. From this review, observations are made about the application of the SD method and opportunities for future research are suggested.
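To make the contrast with DES concrete: an SD model represents a system as stocks and flows integrated over time rather than as individually scheduled events. Below is a minimal sketch of a one-stock inventory-adjustment model integrated by Euler's method; the structure is standard SD, but the names and values are illustrative and not taken from the paper.

    # Euler integration of a one-stock system dynamics model:
    # a factory adjusts production to close the gap between
    # actual and desired inventory.
    dt, horizon = 0.25, 20.0
    inventory, desired = 100.0, 150.0      # stock and its target
    demand, adjust_time = 10.0, 4.0        # units/time, time units

    t = 0.0
    while t <= horizon:
        production = demand + (desired - inventory) / adjust_time  # inflow
        shipments = demand                                         # outflow
        inventory += dt * (production - shipments)                 # stock update
        t += dt
    print(f"inventory after {horizon} time units: {inventory:.1f}")

The stock closes the gap to its target exponentially, with adjust_time setting the speed of response, which is the kind of aggregate dynamic behaviour SD captures naturally and DES does not aim to express directly.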

Relevance: 10.00%

Abstract:

Magnetoencephalography (MEG), a non-invasive technique for characterizing brain electrical activity, is gaining popularity as a tool for assessing group-level differences between experimental conditions. One method for assessing task-condition effects involves beamforming, where a weighted sum of field measurements is used to estimate activity on a voxel-by-voxel basis. However, this method has been shown to produce inhomogeneous smoothness differences as a function of signal-to-noise across a volumetric image, which can then produce false positives at the group level. Here we describe a novel method for group-level analysis with MEG beamformer images that utilizes the peak locations within each participant's volumetric image to assess group-level effects. We compared our peak-clustering algorithm with SnPM using simulated data and found that our method was immune to artefactual group effects that can arise as a result of inhomogeneous smoothness differences across a volumetric image. We also applied our peak-clustering algorithm to experimental data and found that the regions identified corresponded with task-related regions identified in the literature. These findings suggest that our technique is a robust method for group-level analysis with MEG beamformer images.
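The abstract does not spell out the algorithm, so the following is only a schematic of the general idea, with hypothetical helper names: find local peaks in each participant's beamformer image, pool the peak coordinates across participants, and group nearby peaks so that cluster membership (how many participants contribute a peak to a cluster) can be assessed at the group level.

    import numpy as np

    def local_peaks(img, threshold):
        """Coordinates of voxels that exceed threshold and their local neighbourhood maximum."""
        peaks = []
        for idx in np.argwhere(img > threshold):
            x, y, z = idx
            nb = img[max(x-1, 0):x+2, max(y-1, 0):y+2, max(z-1, 0):z+2]
            if img[x, y, z] >= nb.max():
                peaks.append(idx)
        return np.array(peaks)

    def cluster_peaks(all_peaks, radius):
        """Greedy clustering: peaks (pooled across participants) within
        `radius` voxels of a cluster's mean are grouped together."""
        clusters = []
        for p in np.vstack(all_peaks):
            for c in clusters:
                if np.linalg.norm(p - np.mean(c, axis=0)) <= radius:
                    c.append(p)
                    break
            else:
                clusters.append([p])
        return clusters

    # Usage sketch (participant_images would be one 3-D array per subject):
    # all_peaks = [local_peaks(img, thr) for img in participant_images]
    # big = [c for c in cluster_peaks(all_peaks, radius=2.0) if len(c) >= 3]

Because inference is based on the spatial agreement of discrete peak locations rather than on voxel intensities, such a statistic is insensitive to how smoothness varies across the image, which is the failure mode the abstract attributes to intensity-based group statistics.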