Abstract:
Here we search for evidence of the existence of a sub-chondritic 142Nd/144Nd reservoir that balances the Nd isotope chemistry of the Earth relative to chondrites. If present, it may reside in the source region of deeply sourced mantle plume material. We suggest that lavas from Hawai’i with coupled elevations in 186Os/188Os and 187Os/188Os, lavas from Iceland that represent mixing of upper mantle and lower mantle components, and lavas from Gough with sub-chondritic 143Nd/144Nd and high 207Pb/206Pb are favorable samples that could reflect mantle sources that have interacted with an Early-Enriched Reservoir (EER) with sub-chondritic 142Nd/144Nd. High-precision Nd isotope analyses of basalts from Hawai’i, Iceland and Gough demonstrate no discernible 142Nd/144Nd deviation from terrestrial standards. These data are consistent with previous high-precision Nd isotope analyses of recent mantle-derived samples and demonstrate that no mantle-derived material to date provides evidence for the existence of an EER in the mantle. We then evaluate mass balance in the Earth with respect to both 142Nd/144Nd and 143Nd/144Nd. The Nd isotope systematics of EERs are modeled for different sizes and times of formation relative to ε143Nd estimates of the reservoirs in the μ142Nd = 0 Earth, where μ142Nd = ((142Nd/144Nd measured / 142Nd/144Nd terrestrial standard) − 1) × 10^6 and the μ142Nd = 0 Earth is the proportion of the silicate Earth with 142Nd/144Nd indistinguishable from the terrestrial standard. The models indicate that it is not possible to balance the Earth with respect to both 142Nd/144Nd and 143Nd/144Nd unless the μ142Nd = 0 Earth has an ε143Nd within error of the present-day Depleted Mid-ocean ridge basalt Mantle source (DMM). The 4567 Myr 142Nd–143Nd isochron for the Earth intersects μ142Nd = 0 at ε143Nd of +8 ± 2, providing a minimum ε143Nd for the μ142Nd = 0 Earth. The high ε143Nd of the μ142Nd = 0 Earth is confirmed by the Nd isotope systematics of Archean mantle-derived rocks, which consistently have positive ε143Nd. If the EER formed early after solar system formation (0–70 Ma), continental crust and DMM can be complementary reservoirs with respect to Nd isotopes, with no requirement for significant additional reservoirs. If the EER formed after 70 Ma, then the μ142Nd = 0 Earth must have a bulk ε143Nd more radiogenic than DMM and additional high-ε143Nd material is required to balance the Nd isotope systematics of the Earth.
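For readers less familiar with the μ and ε notation used above, the following minimal sketch converts measured isotope ratios into μ142Nd (parts per million deviation from the terrestrial standard) and ε143Nd (parts in ten thousand deviation from the chondritic reference). The reference ratios and the sample values are illustrative assumptions, not data from this study.

```python
# Illustrative sketch of the mu/epsilon notation; the reference ratios are
# assumed (JNdi-1-like and CHUR-like values), not taken from the paper.
ND142_144_STANDARD = 1.141838   # assumed terrestrial standard 142Nd/144Nd
ND143_144_CHUR = 0.512638       # assumed chondritic reference 143Nd/144Nd

def mu_142Nd(measured_142_144: float) -> float:
    """Deviation of 142Nd/144Nd from the terrestrial standard, in ppm."""
    return (measured_142_144 / ND142_144_STANDARD - 1.0) * 1e6

def epsilon_143Nd(measured_143_144: float) -> float:
    """Deviation of 143Nd/144Nd from the chondritic reference, in parts per 10^4."""
    return (measured_143_144 / ND143_144_CHUR - 1.0) * 1e4

# A hypothetical sample identical to the standard in 142Nd/144Nd (mu = 0)
# with a mildly radiogenic 143Nd/144Nd (epsilon of roughly +8).
print(mu_142Nd(1.141838))       # -> 0.0
print(epsilon_143Nd(0.513050))  # -> ~ +8
```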
Abstract:
When complex projects go wrong they can go horribly wrong, with severe financial consequences. We are undertaking research to develop leading performance indicators for complex projects: metrics that provide early warning of potential difficulties. The assessment of success of complex projects can be made by a range of stakeholders over different time scales, against different levels of project results: the project’s outputs at the end of the project; the project’s outcomes in the months following project completion; and the project’s impact in the years following completion. We aim to identify leading performance indicators, which may include both success criteria and success factors, and which can be measured by the project team during project delivery to forecast success as assessed by key stakeholders in the days, months and years following the project. The hope is that the leading performance indicators will act as alarm bells, showing whether a project is deviating from plan so that early corrective action can be taken. It may be that different combinations of the leading performance indicators will be appropriate depending on the nature of project complexity. In this paper we develop a new model of project success, whereby success is assessed by different stakeholders over different time frames against different levels of project results. We then relate this to measurements that can be taken during project delivery. A methodology is described to evaluate the early parts of this model, and its implications and limitations are discussed. This paper describes work in progress.
Abstract:
There is increasing agreement that understanding complexity is important for project management because of the difficulties associated with decision-making and goal attainment which appear to stem from complexity. However, the current operational definitions of complex projects, based upon size and budget, have been challenged, and questions have been raised about how complexity can be measured in a robust manner that takes account of structural, dynamic and interaction elements. Thematic analysis of data from 25 in-depth interviews of project managers involved with complex projects, together with an exploration of the literature, reveals a wide range of factors that may contribute to project complexity. We argue that these factors contributing to project complexity may be defined in terms of dimensions, or source characteristics, which are in turn subject to a range of severity factors. In addition to investigating definitions and models of complexity from the literature and in the field, this study also explores the problematic issue of ‘measuring’ or assessing complexity. A research agenda is proposed to further the investigation of phenomena reported in this initial study.
Abstract:
The increasing prevalence of International New Ventures (INVs) during the past twenty years has been highlighted by numerous studies (Knight and Cavusgil, 1996; Moen, 2002). International New Ventures are firms, typically small to medium enterprises, that internationalise within six years of inception (Oviatt and McDougall, 1997). To date there has been no general consensus within the literature on a theoretical framework of internationalisation to explain the internationalisation process of INVs (Madsen and Servais, 1997). However, some researchers have suggested that the innovation diffusion model may provide a suitable theoretical framework (Chetty & Hamilton, 1996; Fan & Phan, 2007). The proposed model was based on existing and well-established innovation diffusion theories drawn from the consumer behaviour and internationalisation literature to explain the internationalisation process of INVs (Lim, Sharkey, and Kim, 1991; Reid, 1981; Robertson, 1971; Rogers, 1962; Wickramasekera and Oczkowski, 2006). The results of this analysis indicated that the synthesised model of export adoption was effective in explaining the internationalisation process of INVs within the Queensland Food and Beverage Industry. Significantly, the results of the analysis also indicated that features of the original I-models developed in the consumer behaviour literature, which had received limited examination within the internationalisation literature, were confirmed. This includes the ability of firms, or specifically decision-makers, to skip stages based on previous experience.
Abstract:
The proliferation of innovative schemes to address climate change at international, national and local levels signals a fundamental shift in the priority and role of the natural environment to society, organizations and individuals. This shift in shared priorities invites academics and practitioners to consider the role of institutions in shaping and constraining responses to climate change at multiple levels of organisations and society. Institutional theory provides an approach to conceptualising and addressing climate change challenges by focusing on the central logics that guide society, organizations and individuals and their material and symbolic relationship to the environment. For example, framing a response to climate change in the form of an emissions trading scheme evidences a practice informed by a capitalist market logic (Friedland and Alford 1991). However, not all responses need necessarily align with a market logic. Indeed, Thornton (2004) identifies six broad societal sectors, each with its own logic (markets, corporations, professions, states, families, religions). Hence, understanding the logics that underpin successful (and unsuccessful) climate change initiatives contributes to revealing how institutions shape and constrain practices, and provides valuable insights for policy makers and organizations. This paper develops models and propositions to consider the construction of, and challenges to, climate change initiatives based on institutional logics (Thornton and Ocasio 2008). We propose that the challenge of understanding and explaining how climate change initiatives are successfully adopted be examined in terms of their institutional logics, and how these logics evolve over time. To achieve this, a multi-level framework of analysis that encompasses society, organizations and individuals is necessary (Friedland and Alford 1991). However, to date most extant studies of institutional logics have tended to emphasize one level over the others (Thornton and Ocasio 2008: 104). In addition, existing studies related to climate change initiatives have largely been descriptive (e.g. Braun 2008) or prescriptive (e.g. Boiral 2006) in terms of the suitability of particular practices. This paper contributes to the literature on logics in two ways. First, it examines multiple levels: the proliferation of the climate change agenda provides a site in which to study how institutional logics are played out across multiple, yet embedded, levels within society through the institutional forums in which change takes place. Second, the paper specifically examines how institutional logics provide society with organising principles (material practices and symbolic constructions) which enable and constrain actions and help define motives and identity. Based on this model, we develop a series of propositions about the conditions required for the successful introduction of climate change initiatives. The paper proceeds as follows. We present a review of literature related to institutional logics and develop a generic model of the process of the operation of institutional logics. We then consider how this applies to key initiatives related to climate change. Finally, we develop a series of propositions which might guide insights into the successful implementation of climate change practices.
Abstract:
Structural health monitoring (SHM) is the term applied to the procedure of monitoring a structure’s performance, assessing its condition and carrying out appropriate retrofitting so that it performs reliably, safely and efficiently. Bridges form an important part of a nation’s infrastructure. They deteriorate with age and changing load patterns, so early detection of damage helps prolong their lives and prevent catastrophic failures. Monitoring of bridges has traditionally been done by means of visual inspection. With recent developments in sensor technology and the availability of advanced computing resources, newer techniques have emerged for SHM. Acoustic emission (AE) is one such technology that is attracting the attention of engineers and researchers around the world. This paper discusses the use of AE technology in health monitoring of bridge structures, with a special focus on the analysis of recorded data. AE waves are stress waves generated by mechanical deformation of material and can be recorded by means of sensors attached to the surface of the structure. Analysis of the AE signals provides vital information regarding the nature of the source of emission. Signal processing of the AE waveform data can be carried out in several ways and is predominantly based on the time and frequency domains. The short-time Fourier transform and wavelet analysis have proved to be superior alternatives to traditional frequency-based analysis in extracting information from recorded waveforms. Some preliminary results of the application of these analysis tools in signal processing of recorded AE data are presented in this paper.
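As a concrete illustration of the time-frequency analysis mentioned above, the sketch below applies a short-time Fourier transform to a synthetic AE-like burst. The sampling rate, burst frequency and decay are hypothetical placeholders rather than parameters from the recorded bridge data.

```python
# Minimal sketch: short-time Fourier transform of a synthetic AE-like burst.
# All signal parameters are assumed for illustration only.
import numpy as np
from scipy.signal import stft

fs = 1_000_000                        # assumed 1 MHz sampling rate
t = np.arange(0, 0.002, 1 / fs)       # 2 ms record

# Synthetic burst: a decaying 150 kHz tone buried in broadband noise.
burst = np.exp(-5000 * t) * np.sin(2 * np.pi * 150e3 * t)
signal = burst + 0.05 * np.random.default_rng(0).standard_normal(t.size)

# The STFT gives the frequency content of the waveform as it evolves in time.
f, seg_times, Zxx = stft(signal, fs=fs, nperseg=256)

# A simple waveform-based feature: the dominant frequency in each time segment.
dominant = f[np.abs(Zxx).argmax(axis=0)]
print(dominant[:5])
```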
Abstract:
In condition-based maintenance (CBM), effective diagnostics and prognostics are essential tools for maintenance engineers to identify imminent faults and to predict the remaining useful life before the components finally fail. This enables remedial actions to be taken in advance and production to be rescheduled if necessary. This paper presents a technique for accurate assessment of the remnant life of machines based on historical failure knowledge embedded in a closed-loop diagnostic and prognostic system. The technique uses the Support Vector Machine (SVM) classifier for both fault diagnosis and evaluation of the health stages of machine degradation. To validate the feasibility of the proposed model, data at five different levels for four typical faults from High Pressure Liquefied Natural Gas (HP-LNG) pumps were used for multi-class fault diagnosis. In addition, two sets of impeller-rub data were analysed and employed to predict the remnant life of the pump based on estimation of its health state. The results obtained were very encouraging and showed that the proposed prognosis system has the potential to be used as an estimation tool for machine remnant life prediction in real-life industrial applications.
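To make the diagnostic step concrete, the sketch below trains a multi-class SVM on synthetic feature vectors standing in for vibration-derived features from four fault classes. The data, feature choices and hyperparameters are placeholder assumptions, not the HP-LNG pump dataset used in the paper.

```python
# Minimal sketch of multi-class fault diagnosis with an SVM; synthetic data only.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Four fault classes, each a cluster of six vibration-derived features
# (e.g. RMS, kurtosis, band energies) around a class-specific mean.
n_per_class, n_features = 100, 6
X = np.vstack([rng.normal(loc=c, scale=0.5, size=(n_per_class, n_features))
               for c in range(4)])
y = np.repeat(np.arange(4), n_per_class)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

# RBF-kernel SVM with feature scaling; scikit-learn handles the multi-class
# case internally via a one-vs-one scheme.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```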
Abstract:
Crash risk is the statistical probability of a crash. Its assessment can be performed through ex post statistical analysis or in real time with on-vehicle systems. These systems can be cooperative. Cooperative Vehicle-Infrastructure Systems (CVIS) are a developing research avenue in the automotive industry worldwide. This paper provides a survey of existing CVIS and methods to assess crash risk with them. It describes the advantages of cooperative systems over non-cooperative systems. A sample of cooperative crash risk assessment systems is analysed to extract vulnerabilities according to three criteria: market penetration, over-reliance on GPS and broadcasting issues. It shows that cooperative risk assessment systems are still in their infancy and require further development to provide their full benefits to road users.
Networks in the shadow of markets and hierarchies: calling the shots in the visual effects industry
Abstract:
The nature and organisation of creative industries and the creative economy has received increased attention in recent academic and policy literatures (Florida 2002; Grabher 2002; Scott 2006a). Constituted as one variant on new economy narratives, creativity, alongside knowledge, has been presented as a key competitive asset. Such industries, ranging from advertising to film and new media, are seen not merely as expanding their scale and scope, but as leading-edge proponents of a more general trend towards new forms of organization and economic coordination (Davis and Scase 2000). The idea of network forms (and the consequent displacement of markets and hierarchies) has been at the heart of attempts to differentiate the field economically and spatially. Across both the discussion of production models and of work/employment relations is the assertion of the enhanced importance of trust and non-market relations in coordinating structures and practices. This reflects an influential view in sociological, management, geography and other literatures that social life is ‘intrinsically networked’ (Sunley 2008: 12) and that we can confidently use the term ‘network society’ to describe contemporary structures and practices (Castells 1996). Our paper is sceptical of the conceptual and empirical foundations of such arguments. We draw on a number of theoretical resources, including institutional theory, global value chain analysis and labour process theory (see Smith and McKinlay 2009), to explore how a more realistic and grounded analysis of the nature of, and limits to, networks can be articulated. Given space constraints, we cannot address all the dimensions of network arguments or evidence. Our focus is on inter- and intra-firm relations and draws on research into a particular creative industry, visual effects, which is a relatively new though increasingly important global production network. Through this examination a different model of the creative industries and creative work emerges: one in which market rules and patterns of hierarchical interaction structure the behaviour of economic actors and remain a central focus of analysis. The next section outlines and unpacks in more detail arguments concerning the role and significance of networks, markets and hierarchies in production models and work organisation in creative industries and the ‘creative economy’.
Abstract:
The topic of library and information science (LIS) education has been under the spotlight in the professional literature in Australia and New Zealand for a number of years. Critical issues of discussion encompass the apparent lack of a core curriculum for the discipline, the perceived gulf between LIS education and LIS practice, and the pressing need for career-long learning and development. One of the central points of debate that emerges repeatedly is the long-standing question about the positioning of the profession: Is LIS a graduate profession of highly skilled individuals valued for their expertise and professionalism, or is it a profession of anyone who works in a library, regardless of their qualifications (LIANZA, 2005)? While Australia and New Zealand do not stand alone in this debate, as similar issues are echoed in many other countries, there are inevitably some local characteristics which warrant exploration. The discussion presented here highlights the historical background to professional training, the specific professional policies and standards that guide LIS education, and some of the challenges facing professional and paraprofessional education, given the changing environment of education in Australia as a whole, with some comparisons made with the New Zealand situation. While library practitioners all too often point the finger at library educators to ‘right the wrongs’, the authors wish to reinforce the idea that the future of effective and relevant LIS education is a matter for all stakeholders in the profession: practitioners and educators, students and staff, employers and employees, with cohesion potentially offered by the professional body.
Abstract:
Growing participation is a key challenge for the viability of sustainability initiatives, many of which require enactment at a local community level in order to be effective. This paper undertakes a review of technology-assisted carpooling in order to understand the challenge of designing participation and to consider how mobile social software and interface design can be brought to bear. It was found that while persuasive technology and social networking approaches have roles to play, the critical factors in the design of carpooling are convenience, ease of use and fit with contingent circumstances, all of which require a use-centred approach to designing a technological system and building participation. Moreover, the reach of technology platform-based global approaches may be limited if they do not cater to local needs. An approach that focuses on iteratively designing technology to support and grow mobile social ridesharing networks in particular locales is proposed. The paper contributes an understanding of HCI approaches in the context of other approaches to designing participation.
Abstract:
In-place digital augmentation enhances the experience of physical spaces through digital technologies that are directly accessible within that space. This can take many forms, e.g. through location-aware applications running on individuals’ portable devices, such as smartphones, or through large static devices, such as public displays, which are located within the augmented space and accessible to everyone. The hypothesis of this study is that in-place digital augmentation, in the context of civic participation, where citizens collaboratively aim at making their community or city a better place, offers significant new benefits, because it allows access to services or information that are currently inaccessible to urban dwellers where and when they are needed: in place. This paper describes our work in progress deploying a public screen to promote civic issues in public urban spaces and to encourage public feedback and discourse via mobile phones.
Abstract:
The driving task requires sustained attention over prolonged periods and can be performed in highly predictable or repetitive environments. Such conditions can create drowsiness or hypovigilance and impair the ability to react to critical events. Identifying vigilance decrement in monotonous conditions has been a major subject of research, but no research to date has attempted to predict this vigilance decrement. This pilot study aims to show that vigilance decrements due to monotonous tasks can be predicted through mathematical modelling. A short vigilance task sensitive to brief lapses of vigilance, the Sustained Attention to Response Task, is used to assess participants’ performance. This task models the driver’s ability to cope with unpredicted events by performing the expected action. A Hidden Markov Model (HMM) is proposed to predict participants’ hypovigilance. The evolution of the driver’s vigilance is modelled as a hidden state and is correlated with an observable variable: the participant’s reaction times. This experiment shows that the monotony of the task can lead to a significant vigilance decline in less than five minutes. This impairment can be predicted four minutes in advance with 86% accuracy using HMMs. The experiment showed that mathematical models such as HMMs can efficiently predict hypovigilance through surrogate measures. The presented model could lead to the development of an in-vehicle device that detects driver hypovigilance in advance and warns the driver accordingly, thus offering the potential to enhance road safety and prevent road crashes.
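The sketch below illustrates the core idea in miniature: a two-state hidden Markov chain (vigilant / hypovigilant) observed only through reaction times, with forward filtering giving the probability of hypovigilance after each response. The transition probabilities and reaction-time distributions are illustrative assumptions, not the parameters fitted in the study.

```python
# Minimal sketch: two-state HMM over vigilance, observed through reaction times.
# All parameter values are assumed for illustration.
import numpy as np

pi = np.array([0.9, 0.1])                    # initial P(vigilant), P(hypovigilant)
A = np.array([[0.95, 0.05],                  # state transition matrix
              [0.10, 0.90]])
means = np.array([0.35, 0.60])               # mean reaction time (s) per state
stds = np.array([0.05, 0.12])                # reaction-time spread per state

def gaussian(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def filter_hypovigilance(reaction_times):
    """Forward filtering: P(state | observations so far) after each response."""
    belief = pi.copy()
    trace = []
    for rt in reaction_times:
        belief = belief @ A                  # predict the next hidden state
        belief *= gaussian(rt, means, stds)  # weight by observation likelihood
        belief /= belief.sum()               # renormalise
        trace.append(belief[1])              # probability of hypovigilance
    return np.array(trace)

# Reaction times drifting upward, as in a monotony-induced vigilance decline.
rts = [0.34, 0.36, 0.38, 0.45, 0.55, 0.62, 0.65]
print(filter_hypovigilance(rts))
```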
Abstract:
The Comprehensive Australian Study of Entrepreneurial Emergence (CAUSEE) is the largest study of new firm formation ever undertaken in Australia. CAUSEE follows the development of several samples of new and emerging firms over time. In this report we focus on the drivers of outcomes, in terms of reaching an operational stage versus terminating the effort, for 493 randomly selected nascent firms whose founders have been comprehensively interviewed on two occasions, 12 months apart. We investigate the outcome effects of three groups of variables: Characteristics of the Venture; Resources Used in the Start-Up Process; and Characteristics of the Start-Up Process Itself.
Abstract:
This paper discusses two different approaches to teaching design and their modes of delivery, and reflects upon their successes and failures. Two small groups of third-year design students were given projects focusing on incorporating daylighting into architectural design, in studios with different design themes. In association with the curriculum, the themes were Digital Tools and Sustainability. Although both studios addressed daylighting, their aims and methodologies differed: the Digital Tools studio aimed to teach how to design for daylighting using a digital tool, whereas the Sustainability studio aimed at using scale modelling as a tool to learn about daylighting and integrate it into design. Positive results for student learning within the university context included the students’ opportunity to learn and practise new skills, using a new tool for designing; the integration of the tutors’ extensive research expertise into their teaching practice; and the students’ construction of their own understanding of knowledge in a student-centred educational environment. This environment created a very positive attitude, in the form of exchanging ideas and collaboration, among the Digital Tools students at the discussion forum. Students in the Sustainability group were enthusiastic about designing and testing various proposals. The problems both studios experienced were mainly related to timing: synchronising with the other groups in their studios and learning a new skill on top of an already complicated process of design learning were the main setbacks.