Abstract:
Recent years have seen a rapid increase in SMEs working collaboratively in inter-organizational projects. But what drives the emergence of such projects, and what types of industries breed them the most? To address these questions, this paper extends the long-running literature on the firm and industry antecedents of new venturing and alliance formation to the domain of project-based organization by SMEs. Based on survey data collected among 1,725 small and medium-sized organizations and on longitudinal industry data, we find an overall pattern indicating that IOPV participation is primarily determined by a focal SME's scope of innovative activities, and by the munificence, dynamism and complexity of its environment. Unexpectedly, these variables have different effects on whether SMEs are likely to engage in IOPVs than on how many IOPVs are in their portfolio at a time. Implications for theory development are discussed.
Abstract:
This chapter proposes a conceptual model for the optimal development of the capabilities needed for the contemporary knowledge economy. We commence by outlining key capability requirements of the 21st-century knowledge economy, distinguishing these from those suited to its earlier stages. We then discuss the extent to which higher education currently caters to these requirements, and put forward a new model for effective knowledge economy capability learning. The core of this model is the development of an adaptive and adaptable career identity, created through a reflective process of career self-management that draws upon data from the self and the world of work. In turn, career identity drives the individual's process of skill and knowledge acquisition, including deep disciplinary knowledge. The professional capability learning thus acquired includes disciplinary skill and knowledge sets, generic skills, and also skills for the knowledge economy, including disciplinary agility, social network capability, and enterprise skills. In the final part of this chapter, we envision higher education systems that embrace the model, and suggest steps that could be taken toward making the development of knowledge economy capabilities an integral part of the university experience.
Abstract:
This paper investigates the role of cultural factors as a possible partial explanation of the disparity in project management deployment observed between various studied countries. The topic of culture has received increasing attention in the management literature in general during the last decades, and in the project management literature in particular during the last few years. The globalization of business and the growth of worldwide collaborations among governmental and international organizations continue to drive this interest in national culture. Based on Hofstede's national culture framework, the study hypothesizes and tests the impact of a country's culture and development on PM deployment. Seventy-four countries are selected to conduct a correlation and regression analysis between Hofstede's national culture dimensions and the PM deployment indicator used. The results show the relations of the various national culture dimensions and a development indicator (GDP per capita) to the project management deployment levels of the considered countries.
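The abstract does not give the computation itself; as a hedged illustration only, the core of such a correlation-and-regression analysis can be sketched as below. The variable names and the five data points are invented for illustration, not taken from the study:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def ols_slope_intercept(x, y):
    """Simple least-squares regression of y on x: returns (slope, intercept)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((a - mx) * (b - my) for a, b in zip(x, y)) / \
        sum((a - mx) ** 2 for a in x)
    return slope, my - slope * mx

# Hypothetical example: a Hofstede dimension score per country vs. a
# PM deployment indicator (both series are made-up illustration data).
individualism = [20, 35, 50, 65, 80]
pm_deployment = [1.1, 1.8, 2.6, 3.1, 4.0]
r = pearson_r(individualism, pm_deployment)
slope, intercept = ols_slope_intercept(individualism, pm_deployment)
```

In practice a study like this would use a statistics package and a multivariate regression across all six Hofstede dimensions plus GDP per capita; the sketch only shows the bivariate building block.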
Abstract:
This paper compares and investigates the discrepancies between the motivational drives of project team members and their project environment in collocated and distributed (virtual) project teams. The set of factors, in this context called 'Sense of Ownership', is used as a scale to measure these discrepancies using one-tailed t-tests. These factors are abstracted from theories of motivation, team performance, and team effectiveness, and relate to 'Nature of Work', 'Rewards', and 'Communication'. It was observed that 'virtualness' does not seem to affect the motivational drives of project team members, or the way project environments provide or support those motivational drives, in collocated and distributed projects. At a more specific level, in terms of the motivational drives of the project team ('WANT') and the ability of the project environment to provide or support those factors ('GET'), significant discrepancies were observed in collocated project teams with respect to financial and non-financial rewards, learning opportunities, nature of work, and project-specific communication, while in distributed teams there were significant discrepancies with respect to project-centric communication, followed by financial rewards and nature of work. Further, distributed project environments seem to support team member motivation better than collocated project environments. The study concludes that both collocated and distributed project environments may not be adequately supporting the motivational drives of their project team members, which may be frustrating to them. However, members working in virtual team environments may be less frustrated than their collocated counterparts, as virtual project environments are better aligned with the motivational drives of their team members vis-à-vis collocated project environments.
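The WANT-versus-GET comparison described here is a paired, one-tailed design. A minimal sketch of the underlying t-statistic follows; the Likert-style ratings are illustrative assumptions, not the study's data:

```python
import math

def paired_one_tailed_t(want, get):
    """t-statistic and degrees of freedom for H1: mean(want - get) > 0,
    i.e. the environment under-supplies what team members want."""
    d = [w - g for w, g in zip(want, get)]
    n = len(d)
    mean_d = sum(d) / n
    var_d = sum((x - mean_d) ** 2 for x in d) / (n - 1)  # sample variance
    return mean_d / math.sqrt(var_d / n), n - 1

# Hypothetical ratings for one factor (e.g. financial rewards):
want = [5, 4, 5, 3, 4, 5]   # what members want (made-up data)
get = [3, 3, 4, 2, 3, 4]    # what the environment provides (made-up data)
t_stat, df = paired_one_tailed_t(want, get)
# A discrepancy is 'significant' if t_stat exceeds the one-tailed
# critical value of the t-distribution for df at the chosen alpha.
```

The comparison against a critical value (or the computation of a p-value) would normally be done with a statistics library rather than by hand.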
Abstract:
The opening phrase of the title is from Charles Darwin's notebooks (Schweber 1977). It is a double reminder: firstly, that mainstream evolutionary theory is not just about describing nature but is particularly looking for mechanisms or 'causes'; and secondly, that there will usually be several causes affecting any particular outcome. The second part of the title reflects our concern at the almost universal rejection of the idea that biological mechanisms are sufficient for macroevolutionary changes, a rejection of a cornerstone of Darwinian evolutionary theory. Our primary aim here is to consider ways of making it easier to develop and to test hypotheses about evolution. Formalizing hypotheses can help generate tests. In an absolute sense, some of the discussion by scientists about evolution is little better than the lack of reasoning used by those advocating intelligent design. Our discussion here is in a Popperian framework, where science is defined as that area of study in which it is possible, in principle, to find evidence against hypotheses – they are in principle falsifiable. However, with time, the boundaries of science keep expanding. In the past, some aspects of evolution were outside the current boundaries of falsifiable science, but new techniques and ideas are increasingly expanding those boundaries, and it is appropriate to re-examine some topics. It often appears that over the last few decades there has been an increasingly strong tendency to look first (and only) for a physical cause. This decision is virtually never formally discussed; it is simply assumed that some physical factor 'drives' evolution. It is necessary to examine our assumptions much more carefully: what is meant by physical factors 'driving' evolution, or by an 'explosive radiation'? Our discussion focuses on two of the six mass extinctions, the fifth being the events in the Late Cretaceous, and the sixth starting at least 50,000 years ago (and still ongoing).
Cretaceous/Tertiary boundary; the rise of birds and mammals. We have had a long-term interest (Cooper and Penny 1997) in designing tests to help evaluate whether the processes of microevolution are sufficient to explain macroevolution. The real challenge is to formulate hypotheses in a testable way. For example, the number of lineages of birds and mammals that survive from the Cretaceous to the present is one test. Our first estimate was 22 for birds, and current work is tending to increase this value. This still does not consider lineages that survived into the Tertiary and then went extinct later. Our initial suggestion was probably too narrow in that it lumped four models from Penny and Phillips (2004) into one. This reduction is too simplistic in that we need to know about survival and about ecological and morphological divergences during the Late Cretaceous, and whether crown groups of avian or mammalian orders may have existed back into the Cretaceous. More recently (Penny and Phillips 2004) we have formalized hypotheses about dinosaurs and pterosaurs, with the prediction that interactions between mammals (and ground-feeding birds) and dinosaurs would be most likely to affect the smallest dinosaurs, and similarly that interactions between birds and pterosaurs would particularly affect the smaller pterosaurs. There is now evidence for both classes of interactions, with the smallest dinosaurs and pterosaurs declining first, as predicted. Thus, testable models are now possible. Mass extinction number six: human impacts. On a broad scale, there is a good correlation between the time of human arrival and increased extinctions (Hurles et al. 2003; Martin 2005; Figure 1). However, it is necessary to distinguish different time scales (Penny 2005), and on a finer scale there are still large numbers of possibilities. In Hurles et al. (2003) we mentioned habitat modification (including the use of fire) and introduced plants and animals (including kiore), in addition to direct predation (the 'overkill' hypothesis). We need also to consider the prey switching that occurs in early human societies, as evidenced by the results of Wragg (1995) on the middens of different ages on Henderson Island in the Pitcairn group. In addition, the presence of human-wary or human-adapted animals will affect the distribution in the subfossil record. A better understanding of human impacts world-wide, in conjunction with pre-scientific knowledge, will make it easier to discuss the issues by removing 'blame'. While spontaneous generation was still universally accepted, there was an expectation that animals would simply continue to reappear. New Zealand is one of the very best locations in the world to study many of these issues. Apart from the marine fossil record, some human impact events are extremely recent and the remains less disrupted by time.
Abstract:
Building a Web 2.0 site does not necessarily ensure its success. We aim to better understand what improves the success of a site by drawing insight from biologically inspired design patterns. Web 2.0 sites provide a mechanism for human interaction, enabling powerful intercommunication between massive volumes of users. Early Web 2.0 site providers that were previously dominant are being succeeded by newer sites providing innovative social interaction mechanisms. Understanding which site traits contribute to this success drives research into Web site mechanics, using models to describe the associated social networking behaviour. Some of these models attempt to show how the volume of users provides self-organisation and self-contextualisation of content. One model describing coordinated environments is stigmergy, a term originally describing coordinated insect behaviour. This paper explores how exploiting stigmergy can provide a valuable mechanism for identifying and analysing online user behaviour, specifically when considering that user freedom of choice is restricted by the provided Web site functionality. This will aid us in building better collaborative Web sites, improving the collaborative processes.
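Stigmergy as described here is indirect coordination through traces left in a shared environment: each user's action (a view, a tag, a link) reinforces content that later users are then more likely to encounter. A toy simulation can illustrate the self-organising effect; the reinforcement and evaporation parameters below are arbitrary assumptions for illustration, not values from the paper:

```python
import random

def simulate_stigmergy(n_items=5, steps=2000, evaporation=0.01, seed=1):
    """Users pick a content item with probability proportional to its
    accumulated 'pheromone' (prior attention); each pick deposits more
    pheromone, while all traces slowly evaporate."""
    random.seed(seed)
    pheromone = [1.0] * n_items  # start with equal, small traces
    for _ in range(steps):
        r = random.uniform(0, sum(pheromone))
        acc = 0.0
        for i, p in enumerate(pheromone):
            acc += p
            if r <= acc:
                pheromone[i] += 1.0  # deposit: a visit reinforces the item
                break
        pheromone = [p * (1 - evaporation) for p in pheromone]
    return pheromone

traces = simulate_stigmergy()
```

Starting from identical items, random early attention is amplified by the positive-feedback loop, so a few items end up dominating: a crude analogue of how user volume self-organises content ranking on a Web 2.0 site.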
Abstract:
This report maps the current state of entrepreneurship in Australia using data from the Global Entrepreneurship Monitor (GEM) for the year 2011. Entrepreneurship is regarded as a crucial driver of economic well-being. Entrepreneurial activity in new and established firms drives innovation and creates jobs. Entrepreneurs also fuel competition, thereby contributing indirectly to market and productivity growth and improving the competitiveness of the national economy. Given the economic landscape that exists as a result of the global financial crisis (GFC), it is probably more important than ever for us to understand the effects and drivers of entrepreneurial activity and attitudes in Australia. The central finding of this report is that entrepreneurship is certainly alive and well in Australia. With 10.5 per cent of the adult population involved in setting up a new business or owning a newly founded business, as measured by the total entrepreneurial activity (TEA) rate in 2011, Australia ranks second only to the United States among the innovation-driven (developed) economies. Compared with 2010, the TEA rate has increased by 2.7 percentage points. Furthermore, with regard to the employee entrepreneurial activity (EEA) rate in established firms, Australia ranks above average. According to GEM data, 5 per cent of the adult population is engaged in developing or launching new products, a new business unit, or a subsidiary for their employer. Further analysis of the GEM data also clearly shows that Australia compares well with other major economies in terms of the 'quality' of entrepreneurial activities being pursued. Indeed, it is not only the quantity of entrepreneurs but also the level of their aspirations and business goals that are important drivers of economic growth.
On average, for each business started in Australia driven by the founder's lack of alternatives to generate income from any other source, there are five businesses started because the founders specifically want to take advantage of a business opportunity that they believe will increase their personal income or independence. With respect to innovativeness, 31 per cent of Australian new businesses offer products or services which they consider to be new to customers, or which very few, or in some cases no, other businesses offer. Both these indicators are higher than the average for innovation-driven economies. Somewhat below average is the international orientation of Australian entrepreneurs, with only 12 per cent aiming to have a substantial share of customers from international markets. So what drives this high quantity and quality of entrepreneurship in Australia? The analysis of the data suggests it is a combination of both business opportunities and entrepreneurial skills. Around 50 per cent of the Australian population identify opportunities for a start-up venture and believe that they have the necessary skills to start a business. Furthermore, a large majority of the Australian population report that high media attention for entrepreneurship provides successful role models for prospective entrepreneurs. As a result, 12 per cent of our respondents have expressed the intention to start a business within the next three years. These numbers are all well above average when compared with the other major economies. With regard to gender, the GEM survey shows a high proportion of female entrepreneurs. Approximately 8.4 per cent of adult females are involved in setting up a business or have recently done so. Although this female TEA rate is slightly down from 2010, Australia ranks second among the innovation-driven economies.
This paints a healthy picture of access to entrepreneurial opportunities for Australian women.
Abstract:
Architecture Post Mortem surveys architecture's encounter with death, decline, and ruination following late capitalism. As the world moves closer to an economic abyss that many perceive to be the death of capital, contraction and crisis are no longer mere phases of normal market fluctuations, but rather the irruption of the unconscious of ideology itself. Post mortem is that historical moment wherein architecture's symbolic contract with capital is put on stage, naked to all. Architecture is not irrelevant to fiscal and political contagion, as is commonly believed; it is both the victim and the penetrating analytical agent of the current crisis. As the very apparatus for modernity's guilt and unfulfilled drives (modernity's debt), architecture is that ideological element that functions as a master signifier of its own destruction, ordering all other signifiers and modes of signification beneath it. It is under these conditions that architecture theory has retreated to an "Alamo" of history, a final desert outpost where history has been asked to transcend itself. For architecture's hoped-for utopia always involves an apocalypse. This timely collection of essays reformulates architecture's relation to modernity via the operational death-drive: architecture is but a passage between life and death. This collection includes essays by Kazi K. Ashraf, David Bertolini, Simone Brott, Peggy Deamer, Didem Ekici, Paul Emmons, Donald Kunze, Todd McGowan, Gevork Hartoonian, Nadir Lahiji, Erika Naginski, and Dennis Maher. Contents: Introduction: 'the way things are', Donald Kunze; Driven into the public: the psychic constitution of space, Todd McGowan; Dead or alive in Joburg, Simone Brott; Building in-between the two deaths: a post mortem manifesto, Nadir Lahiji; Kant, Sade, ethics and architecture, David Bertolini; Post mortem: building deconstruction, Kazi K. Ashraf; The slow-fast architecture of love in the ruins, Donald Kunze; Progress: re-building the ruins of architecture, Gevork Hartoonian; Adrian Stokes: surface suicide, Peggy Deamer; A window to the soul: depth in the early modern section drawing, Paul Emmons; Preliminary thoughts on Piranesi and Vico, Erika Naginski; Architectural asceticism and austerity, Didem Ekici; 900 miles to Paradise, and other afterlives of architecture, Dennis Maher; Index.
Abstract:
The ability to estimate asset reliability and the probability of failure is critical to reducing maintenance costs, operation downtime, and safety hazards. Predicting the survival time and the probability of failure at a future time is an indispensable requirement in prognostics and asset health management. In traditional reliability models, the lifetime of an asset is estimated using failure event data alone; however, statistically sufficient failure event data are often difficult to attain in real-life situations due to poor data management, effective preventive maintenance, and the small population of identical assets in use. Condition indicators and operating environment indicators are two types of covariate data that are normally obtained in addition to failure event and suspended data. These data contain significant information about the state and health of an asset. Condition indicators reflect the level of degradation of assets, while operating environment indicators accelerate or decelerate the lifetime of assets. When these data are available, an alternative approach to traditional reliability analysis is the modelling of condition indicators and operating environment indicators, and their failure-generating mechanisms, using a covariate-based hazard model. The literature review indicates that a number of covariate-based hazard models have been developed. All of these existing covariate-based hazard models were developed on the theory of the Proportional Hazards Model (PHM). However, most of these models have not attracted much attention in the field of machinery prognostics. Moreover, due to the prominence of PHM, attempts at developing alternative models have to some extent been stifled, although a number of alternatives to PHM have been suggested. The existing covariate-based hazard models fail to fully utilise all three types of asset health information (failure event data, i.e. observed and/or suspended; condition data; and operating environment data) in a single model for more effective hazard and reliability predictions. In addition, current research shows that condition indicators and operating environment indicators have different characteristics: they are non-homogeneous covariate data. Condition indicators act as response variables (dependent variables), whereas operating environment indicators act as explanatory variables (independent variables). However, these non-homogeneous covariate data were modelled in the same way for hazard prediction in the existing covariate-based hazard models. The related, and yet more imperative, question is how both of these indicators should be effectively modelled and integrated into a covariate-based hazard model. This work presents a new approach to addressing the aforementioned challenges. The new covariate-based hazard model, termed the Explicit Hazard Model (EHM), explicitly and effectively incorporates all three types of available asset health information into the modelling of hazard and reliability predictions, and also drives the relationship between actual asset health and condition measurements as well as operating environment measurements. The theoretical development of the model and its parameter estimation method are demonstrated in this work. EHM assumes that the baseline hazard is a function of both time and condition indicators. Condition indicators provide information about the health condition of an asset; therefore they update and reform the baseline hazard of EHM according to the health state of the asset at a given time t. Some examples of condition indicators are the vibration of rotating machinery, the level of metal particles in engine oil analysis, and wear in a component, to name but a few.
Operating environment indicators in this model are failure accelerators and/or decelerators; they are included in the covariate function of EHM and may increase or decrease the value of the hazard relative to the baseline hazard. These indicators arise from the environment in which an asset operates and have not been explicitly identified by the condition indicators (e.g. loads, environmental stresses, and other dynamically changing environmental factors). While the effects of operating environment indicators could be nought in EHM, condition indicators will always be present, because they are observed and measured for as long as an asset is operational and has survived. EHM has several advantages over the existing covariate-based hazard models. One is that this model utilises three different sources of asset health data (i.e. population characteristics, condition indicators, and operating environment indicators) to effectively predict hazard and reliability. Another is that EHM explicitly investigates the relationship between condition and operating environment indicators associated with the hazard of an asset. Furthermore, the proportionality assumption, from which most covariate-based hazard models suffer, does not exist in EHM. Depending on the sample size of failure/suspension times, EHM is extended into two forms: semi-parametric and non-parametric. The semi-parametric EHM assumes a specified lifetime distribution (i.e. the Weibull distribution) in the form of the baseline hazard. However, in many industrial applications, due to the sparse failure event data of assets, the analysis of such data often involves complex distributional shapes about which little is known. Therefore, to avoid the semi-parametric EHM's restrictive assumption of a specified lifetime distribution for failure event histories, the non-parametric EHM, a distribution-free model, has been developed. The development of EHM into these two forms is another merit of the model.
A case study was conducted using laboratory experiment data to validate the practicality of both the semi-parametric and the non-parametric EHM. The performance of the newly developed models is appraised by comparing their estimates with those of the other existing covariate-based hazard models. The comparison demonstrated that both the semi-parametric and the non-parametric EHM outperform the existing covariate-based hazard models. Future research directions are also identified, regarding the new parameter estimation method in the case of time-dependent covariate effects and missing data, the application of EHM to both repairable and non-repairable systems using field data, and a decision support model linked to the estimated reliability results.
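The abstract does not specify EHM's functional form. As a point of reference, the Proportional Hazards Model that EHM generalises can be sketched with a Weibull baseline: h(t|z) = (β/η)(t/η)^(β-1)·exp(γ·z), with reliability R(t) = exp(-∫₀ᵗ h(u) du). The parameter values below are illustrative assumptions, not EHM itself:

```python
import math

def weibull_ph_hazard(t, beta, eta, gamma, z):
    """Hazard at time t for a Weibull-baseline proportional hazards model:
    h(t|z) = (beta/eta) * (t/eta)**(beta-1) * exp(sum(g_i * z_i))."""
    baseline = (beta / eta) * (t / eta) ** (beta - 1)
    return baseline * math.exp(sum(g * zi for g, zi in zip(gamma, z)))

def reliability(t, beta, eta, gamma, z, steps=1000):
    """R(t) = exp(-integral_0^t h(u|z) du), midpoint-rule approximation.
    Midpoints also avoid evaluating the hazard at u = 0 when beta < 1."""
    if t == 0:
        return 1.0
    du = t / steps
    acc = sum(weibull_ph_hazard((i + 0.5) * du, beta, eta, gamma, z)
              for i in range(steps)) * du
    return math.exp(-acc)

# Illustrative numbers: shape beta=2, scale eta=10, one covariate
# (e.g. an operating-environment load) with coefficient 0.5.
h = weibull_ph_hazard(5.0, 2.0, 10.0, [0.5], [1.0])
R = reliability(5.0, 2.0, 10.0, [0.5], [1.0])
```

In PHM the covariate term scales the baseline multiplicatively (the proportionality assumption the abstract criticises); EHM's point of departure is to let condition indicators enter the baseline itself rather than only the covariate function.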
Abstract:
A firm, as a dynamic, evolving, and quasi-autonomous system of knowledge production and application, develops knowledge management capability (KMC) through strategic learning in order to sustain competitive advantage in a dynamic environment. Knowledge governance mechanisms and knowledge processes connect and interact with each other, forming learning mechanisms that carry out the double-loop learning which drives the genesis and evolution of KMC, modifying operating routines to effect desired performance. This paper reports a study carried out among construction contractors, a type of project-based firm, operating within the dynamic Hong Kong construction market. A multiple-case design was used to incorporate evidence from the literature and interviews, with the help of system dynamics modeling, to visualize the evolution of KMC. The study demonstrates the feasibility of visualizing how a firm's KMC matches its operating environment over time. The findings imply that knowledge management (KM) applications can be better planned and controlled through the evaluation of KM performance over time from a capability perspective.
Abstract:
At present, cycloidal drives are among the most preferred options for compact, high-transmission-ratio speed reduction in mechanical power transmission, especially in robot joints and manipulator applications. Research on the drive-train dynamics of cycloidal drives is not well established. This paper presents a testing rig for cycloidal drives that would produce data for the development of mathematical models and the investigation of drive-train dynamics, further aiding in optimising their design.
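As background to the "high transmission ratio" claim: for a conventional single-stage cycloidal drive, the reduction ratio follows directly from the lobe and pin counts. A minimal sketch of this standard relation (general mechanism knowledge, not from the paper itself):

```python
def cycloidal_reduction_ratio(ring_pins: int, disc_lobes: int) -> float:
    """Reduction ratio of one cycloidal stage: input (eccentric shaft)
    speed over output (disc) speed. With the usual one-tooth difference,
    ring_pins = disc_lobes + 1, the ratio equals the number of disc lobes
    and the output counter-rotates relative to the input."""
    return disc_lobes / (ring_pins - disc_lobes)

# e.g. a disc with 29 lobes running against 30 fixed ring pins
# gives a 29:1 reduction in a single compact stage.
ratio = cycloidal_reduction_ratio(30, 29)
```

This is why a single cycloidal stage can replace several stages of ordinary gearing in robot joints, where compactness matters.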
Abstract:
Australian universities are currently engaging with new governmental policies and regulations that require them to demonstrate enhanced quality and accountability in teaching and research. The development of national academic standards for learning outcomes in higher education is one instance of this drive for excellence. These discipline-specific standards articulate the minimum, or Threshold, Learning Outcomes to be addressed by higher education institutions, so that graduating students can demonstrate their achievement to their institutions, accreditation agencies, and industry recruiters. This impacts not only the design of engineering courses (with particular emphasis on pedagogy and assessment), but also the preparation of academics to engage with these standards and implement them in their day-to-day teaching practice at a micro level. This imperative for enhanced quality and accountability in teaching is also significant at a meso level, for, according to the Australian Bureau of Statistics, about 25 per cent of teachers in Australian universities are aged 55 or above and more than 54 per cent are aged 45 or above (ABS, 2006). A number of institutions have undertaken recruitment drives to regenerate and enrich their academic workforce by appointing capacity-building research professors and increasing the numbers of early- and mid-career academics. This nationally driven agenda for quality and accountability in teaching also permeates the micro level of engineering education, since the demand for enhanced academic standards and learning outcomes requires both strong advocacy for a shift to an authentic, collaborative, outcomes-focused education and the mechanisms to support academics in transforming their professional thinking and practice.
Outcomes-focused education means giving greater attention to the ways in which curriculum design, pedagogy, assessment approaches and teaching activities can most effectively make a positive, verifiable difference to students' learning. Such education is authentic when it is couched firmly in the realities of learning environments, student and academic staff characteristics, and trustworthy educational research. That education will be richer and more efficient when staff work collaboratively, contributing their knowledge, experience and skills to achieve learning outcomes based on agreed objectives. We know that the school or departmental levels of universities are the most effective loci of change in approaches to teaching and learning practices in higher education (Knight & Trowler, 2000). Heads of Schools are increasingly being entrusted with more responsibilities: in addition to setting strategic directions and managing the operational and sometimes financial aspects of their school, they are also expected to lead the development and delivery of teaching, research and other academic activities. Guiding and mentoring individuals and groups of academics is one critical aspect of the Head of School's role, yet Heads do not always have the resources or support to help them mentor staff, especially more junior academics. In summary, the international trend in undergraduate engineering course accreditation towards demonstrating the attainment of graduate attributes poses new challenges in addressing academic staff development needs and the assessment of learning. This paper will give some insights into the conceptual design, implementation and empirical effectiveness to date of a Fellow-In-Residence Engagement (FIRE) program. The program is proposed as a model for achieving better engagement of academics with contemporary issues and effectively enhancing their teaching and assessment practices.
It will also report on the program's collaborative approach to working with Heads of Schools to better support academics, especially early-career ones, by utilizing formal and informal mentoring. Further, the paper will discuss possible factors that may assist the achievement of the intended outcomes of such a model, and will examine its contributions to engendering outcomes-focused thinking in engineering education.
Abstract:
Purpose – The aim of the paper is to describe and explain, using a combination of interviews and content analysis, the social and environmental reporting practices of a major garment export organisation within a developing country. Design/methodology/approach – Senior executives from a major organisation in Bangladesh are interviewed to determine the pressures being exerted on them in terms of their social and environmental performance. These perceptions of pressure are then used to explain – via content analysis – changing social and environmental disclosure practices. Findings – The results show that particular stakeholder groups have, since the early 1990s, placed pressure on the Bangladeshi clothing industry in terms of its social performance. This pressure, which is also directly related to the expectations of the global community, in turn drives the industry's social policies and related disclosure practices. Research limitations/implications – The findings show that, within the context of a developing country, unless we consider managers' perceptions of the social and environmental expectations being imposed upon them by powerful stakeholder groups, we will be unable to understand organisational disclosure practices. Originality/value – This is the first known paper to interview managers from a large organisation in a developing country about changing stakeholder expectations and then link these changing expectations to annual report disclosures across an extended period of analysis.