594 results for Contingency
Abstract:
Building Information Modelling (BIM) is an information technology (IT) enabled approach to managing design data in the AEC/FM (Architecture, Engineering and Construction / Facilities Management) industry. BIM enables improved interdisciplinary collaboration across distributed teams, intelligent documentation and information retrieval, greater consistency in building data, better conflict detection and enhanced facilities management. Despite these apparent benefits, the adoption of BIM in practice has been slow. Workshops with industry focus groups were conducted to identify the needs, concerns and expectations of participants who had implemented BIM or were BIM "ready". Factors inhibiting BIM adoption include lack of training, low business incentives, a perceived lack of rewards, technological concerns, industry fragmentation related to uneven ICT adoption practices, contractual matters and resistance to changing current work practices. Successful BIM usage depends on collective adoption of BIM across the different disciplines and support by the client. The relationship of current work practices to future BIM scenarios was identified as an important strategic concern, as participants believed that BIM cannot be used efficiently with traditional practices and methods. The key to successful implementation is to explore the extent to which current work practices must change. Currently there is a perception that all work practices and processes must change for BIM to be used effectively. It is acknowledged that new roles and responsibilities are emerging and that different parties will lead BIM on different projects. A contingency-based approach to the implementation problem was taken, relying on the integration of a BIM project champion, procurement strategy, team capability analysis, commercial software availability/applicability, and phase decision making and event analysis.
Organizations need to understand: (a) their own work processes and requirements; (b) the range of BIM applications available in the market and their capabilities; (c) the potential benefits of different BIM applications and their roles in different phases of the project lifecycle; and (d) collective supply chain adoption capabilities. A framework is proposed to support organizations' selection of BIM usage strategies that meet their project requirements. Case studies are being conducted to develop the framework. The results of the preliminary design management case study are presented for contractor-led BIM specific to the design-and-construct procurement strategy.
Abstract:
This thesis proposes that contemporary printmaking, at its most significant, marks the present through reconstructing pasts and anticipating futures. It argues this through examples in the field, occurring in contexts beyond the Euramerican (Europe and North America). The arguments revolve around how the practice of a number of significant artists in Japan, Australia and Thailand has generated conceptual and formal innovations in printmaking that transcend local histories and conventions, whilst paradoxically, also building upon them and creating new meanings. The arguments do not portray the relations between contemporary and traditional art as necessarily antagonistic but rather, as productively dialectical. Furthermore, the case studies demonstrate that, in the 1980s and 1990s particularly, the studio practice of these printmakers was informed by other visual arts disciplines and reflected postmodern concerns. Departures from convention witnessed in these countries within the Asia-Pacific region shifted the field of the print into a heterogeneous and hybrid realm. The practitioners concerned (especially in Thailand) produced work that was more readily equated with performance and installation art than with printmaking per se. In Japan, the incursion of photography interrupted the decorative cast of printmaking and delivered it from a straightforward, craft-based aesthetic. In Australia, fixed notions of national identity were challenged by print practitioners through deliberate cultural rapprochements and technical contradictions (speaking across old and new languages). However, time-honoured print methods were not jettisoned by any of the case study artists. Their re-alignment of the fundamental attributes of printmaking, in line with materialist formalism, is a core consideration of my arguments.
The artists selected for in-depth analysis from these three countries are all innovators whose geographical circumstances and creative praxis drew on local traditions whilst absorbing international trends. In their radical revisionism, they acknowledged the specificity of history and place, conditions of contingency and forces of globalisation. The transformational nature of their work during the late twentieth century connects it to the postmodern ethos and to a broader artistic and cultural nexus than has hitherto been recognised in literature on the print. Emerging from former guild-based practices, they ambitiously conceived their work to be part of a continually evolving visual arts vocabulary. I argue in this thesis that artists from the Asia-Pacific region have historically broken with the hermetic and Euramerican focus that has generally characterised the field. Inadequate documentation and access to print activity outside the dominant centres of critical discourse imply that readings of postmodernism have been too limited in their scope of inquiry. Other locations offer complexities of artistic practice where re-alignments of customary boundaries are often the norm. By addressing innovative activity in Japan, Australia and Thailand, this thesis exposes the need for a more inclusive theoretical framework and wider global reach than currently exists for ‘printmaking’.
Abstract:
The aim of this research is to examine the changing nature of risks that face journalists and media workers in the world's difficult, remote and hostile environments, and consider the 'adequacy' of managing hostile environment safety courses that some media organizations require prior to foreign assignments. The study utilizes several creative works and contributions to this area of analysis, which includes a documentary film production, course contributions, an emergency reference handbook, security and incident management reviews and a template for evacuation and contingency planning. The research acknowledges that employers have a 'duty of care' to personnel working in these environments, identifies the necessity for pre-deployment training and support, and provides a solution for organizations that wish to initiate a comprehensive framework to advise, monitor, protect and respond to incidents. Finally, it explores the possible development of a unique and holistic service to facilitate proactive and responsive support, in the form of a new profession of 'Editorial Logistics Officer' or 'Editorial Safety Officer' within media organizations. This area of research is vitally important to the profession, and the intended contribution is to introduce a simple and cost-efficient framework for media organizations that desire to implement pre-deployment training and field-support – as these programs save lives. The complete proactive and responsive services may be several years from implementation. However, this study demonstrates that the facilitation of Managing Hostile Environment (MHE) courses should be the minimum professional standard. These courses have saved lives in the past and they provide journalists with the tools to "cover the story, and not become the story."
Abstract:
In a power network, when a propagating energy wave caused by a disturbance hits a weak link, a reflection appears and some of the energy is transferred across the link. In this work, an analytical descriptive methodology is proposed to study the dynamic stability of a large-scale power system. For this purpose, the electrical indices (angle, or voltage/frequency) measured at different points in the network following a fault are used, and the behaviour of the propagated waves through the lines, nodes and buses is studied. This work introduces a new tool for power system stability analysis based on a descriptive study of electrical measurements. The proposed methodology is also useful for detecting contingency conditions and for synthesising an effective emergency control scheme.
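The reflection behaviour described above follows the textbook rule for a travelling wave meeting a change in characteristic impedance. The snippet below is a generic illustration of that rule, not the paper's actual methodology; the function name and the impedance values are assumptions made for the example.

```python
# Hedged illustration: amplitude reflection/transmission coefficients for a
# travelling wave crossing a boundary between media of impedance z1 and z2.
# A "weak link" corresponds to a large mismatch, which reflects most of the wave.

def reflection_transmission(z1: float, z2: float) -> tuple[float, float]:
    """Return (reflection, transmission) amplitude coefficients at a z1 -> z2 boundary."""
    r = (z2 - z1) / (z2 + z1)   # reflected fraction of the incident amplitude
    t = 2 * z2 / (z2 + z1)      # transmitted fraction (note: 1 + r == t)
    return r, t

# A fivefold impedance mismatch reflects two-thirds of the incident amplitude:
r, t = reflection_transmission(1.0, 5.0)
```

The amplitude continuity condition 1 + r = t holds at the boundary, which gives a quick sanity check on the formula.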
Abstract:
With the increasing complexity of modern-day threats and the growing sophistication of interlinked and interdependent operating environments, Business Continuity Management (BCM) has emerged as a new discipline, offering a strategic approach to safeguarding organisational functions. Of significant interest is the application of BCM frameworks and strategies within critical infrastructure, and in particular the aviation industry. Given the increased focus on security and safety for critical infrastructures, research into the adoption of BCM principles within an airport environment provides valuable management outcomes and opens up a previously neglected area of inquiry. This research used a single case study methodology to identify possible impediments to BCM adoption and implementation by the Brisbane Airport Corporation (BAC). It identified a number of misalignments in the required breadth of focus for a BCM program, identified differing views on the specific roles and responsibilities required during a major disruptive event, and illustrated the complexities of Brisbane Airport that impede the understanding and implementation of effective Business Continuity Management strategies.
Abstract:
This thesis is a problematisation of the teaching of art to young children. To problematise a domain of social endeavour is, in Michel Foucault's terms, to ask how we come to believe that "something ... can and must be thought" (Foucault, 1985:7). The aim is to document what counts (i.e., what is sayable, thinkable, feelable) as proper art teaching in Queensland at this point of historical time. In this sense, the thesis is a departure from more recognisable research on 'more effective' teaching, including critical studies of art teaching and early childhood teaching. It treats 'good teaching' as an effect of moral training made possible through disciplinary discourses organised around certain epistemic rules at a particular place and time. There are four key tasks accomplished within the thesis. The first is to describe an event which is not easily resolved by means of orthodox theories or explanations, either liberal-humanist or critical ones. The second is to indicate how poststructuralist understandings of the self and social practice enable fresh engagements with uneasy pedagogical moments. What follows this discussion is the documentation of an empirical investigation that was made into texts generated by early childhood teachers, artists and parents about what constitutes 'good practice' in art teaching. Twenty-two participants produced text to tell and re-tell the meaning of 'proper' art education, from different subject positions. Rather than attempting to capture 'typical' representations of art education in the early years, a pool of 'exemplary' teachers, artists and parents was chosen, using "purposeful sampling", and from this pool, three videos were filmed and later discussed by the audience of participants.
The fourth aspect of the thesis involves developing a means of analysing these texts in such a way as to allow a 're-description' of the field of art teaching by attempting to foreground the epistemic rules through which such teacher-generated texts come to count as true, i.e., as propriety in art pedagogy. This analysis drew on Donna Haraway's (1995) understanding of 'ironic' categorisation to hold the tensions within the propositions inside the categories of analysis rather than setting these up as discursive oppositions. The analysis is therefore ironic in the sense that Richard Rorty (1989) understands the term to apply to social scientific research. Three 'ironic' categories were argued to inform the discursive construction of 'proper' art teaching. It is argued that a teacher should (a) Teach without teaching; (b) Manufacture the natural; and (c) Train for creativity. These ironic categories work to undo modernist assumptions about theory/practice gaps and finding a 'balance' between oppositional binary terms. They were produced through a discourse theoretical reading of the texts generated by the participants in the study, texts that these same individuals use as a means of discipline and self-training as they work to teach properly. In arguing the usefulness of such approaches to empirical data analysis, the thesis challenges early childhood research in arts education, in relation to its capacity to deal with ambiguity and to acknowledge contradiction in the work of teachers and in their explanations for what they do. It works as a challenge at a range of levels - at the level of theorising, of method and of analysis. In opening up thinking about normalised categories, and questioning traditional Western philosophy and the grand narratives of early childhood art pedagogy, it makes a space for re-thinking art pedagogy as "a game of truth and error" (Foucault, 1985). In doing so, it opens up a space for thinking how art education might be otherwise.
Abstract:
Most statistical methods use hypothesis testing. Analysis of variance, regression, discrete choice models, contingency tables, and other analysis methods commonly used in transportation research share hypothesis testing as the means of making inferences about the population of interest. Despite the fact that hypothesis testing has been a cornerstone of empirical research for many years, various aspects of hypothesis tests are commonly misapplied, misinterpreted, and ignored—by novices and expert researchers alike. At first glance, hypothesis testing appears straightforward: develop the null and alternative hypotheses, compute the test statistic to compare to a standard distribution, estimate the probability of rejecting the null hypothesis, and then make claims about the importance of the finding. This is an oversimplification of the process of hypothesis testing. Hypothesis testing as applied in empirical research is examined here. The reader is assumed to have a basic knowledge of the role of hypothesis testing in various statistical methods. Through the use of an example, the mechanics of hypothesis testing are first reviewed. Then, five precautions surrounding the use and interpretation of hypothesis tests are developed; examples of each are provided to demonstrate how errors are made, and solutions are identified so similar errors can be avoided. Remedies are provided for common errors, and conclusions are drawn on how to use the results of this paper to improve the conduct of empirical research in transportation.
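The mechanics outlined above (state the hypotheses, compute a test statistic, obtain a p-value, decide) can be made concrete with a one-sample two-sided z-test. This is an illustrative stand-in, not the paper's own example; the sample values are assumed.

```python
# Walk-through of the hypothesis-testing steps named above, using a one-sample
# two-sided z-test with known population standard deviation. Data are assumed.
import math

def one_sample_z_test(sample_mean, mu0, sigma, n):
    """Test H0: mu = mu0 vs. H1: mu != mu0 with known sigma; returns (z, p)."""
    z = (sample_mean - mu0) / (sigma / math.sqrt(n))       # test statistic
    phi = 0.5 * (1 + math.erf(abs(z) / math.sqrt(2)))      # standard normal CDF
    p = 2 * (1 - phi)                                      # two-sided p-value
    return z, p

# Hypothesised mean 5.0, observed mean 5.3, known sigma 1.0, n = 100:
z, p = one_sample_z_test(5.3, 5.0, 1.0, 100)
reject = p < 0.05   # rejecting H0 says the data are unlikely under H0;
                    # it does not by itself establish practical importance.
```

The closing comment echoes one of the paper's precautions: a small p-value measures evidence against the null hypothesis, not the magnitude or importance of the effect.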
Abstract:
The Board of Directors (Consiglio di Amministrazione, CdA) is the main governing body of companies. The literature attributes three roles to it: control, strategic direction, and networking with the environment. Previous empirical studies have analysed whether a Board of Directors is active in all three roles at a given moment. This work instead proposes a "contingency" approach and analyses the roles performed by the board as internal conditions (firms in crisis or successful firms) and external conditions (firms in competitive or regulated industries) vary. The empirical investigation was conducted on a sample of 301 large Italian firms. The results support the initial thesis that internal and external conditions affect the role performed by the board. In particular, the results show that the board does not always perform all three roles at the same time, but concentrates on the role or roles that matter most in the situation the firm faces. With respect to internal conditions, in firms in crisis the board is active in all three roles, while in successful firms an orientation towards the strategic function prevails. In firms operating in competitive industries the control role is more pressing, while in regulated industries the networking function prevails.
Abstract:
One of the major challenges in the design of social technologies is the evaluation of their qualities of use and how they are appropriated over time. While the field of HCI abounds in short-term exploratory design and studies of use, relatively little attention has focused on the continuous development of prototypes longitudinally and studies of their emergent use. We ground the exploration and analysis of use in the everyday world, embracing contingency and open-ended use, through the use of a continuously-available exploratory prototype. Through examining use longitudinally, clearer insight can be gained of realistic, non-novelty usage and appropriation into everyday use. This paper sketches out a framework for design that puts a premium on immediate use and evolving the design in response to use and user feedback. While such design practices with continuously developing systems are common in the design of social technologies, they are little documented. We describe our approach and reflect upon its key characteristics, based on our experiences from two case studies. We also present five major patterns of long-term usage which we found useful for design.
Abstract:
Modern society has come to expect electrical energy on demand, while many of the facilities in power systems are aging beyond repair and maintenance. The risk of failure increases as equipment ages, and failure can have serious consequences for the continuity of electricity supply. As the equipment used in high-voltage power networks is very expensive, it may not be economically feasible to purchase and store spares in a warehouse for extended periods of time. On the other hand, there is normally a significant lead time between ordering equipment and receiving it. This situation has created considerable interest in the evaluation and application of probability methods for aging plant and the provision of spares in bulk supply networks, and can be of particular importance for substations. Quantitative adequacy assessment of substation and sub-transmission power systems is generally done using a contingency enumeration approach, which includes the evaluation of contingencies and their classification based on selected failure criteria. The problem is very complex because of the need to model the operation of substation and sub-transmission equipment in detail using network flow evaluation, and to consider multiple levels of component failure. In this thesis a new model associated with aging equipment is developed that combines the standard treatment of random failures with a specific model for aging failures. This technique is applied to include and examine the impact of aging equipment on the system reliability of bulk supply loads and consumers in the distribution network over a defined range of planning years. The power system risk indices depend on many factors, such as the actual physical network configuration and operation, the aging condition of the equipment, and the relevant constraints.
The impact and importance of equipment reliability on power system risk indices in a network with aging facilities contain valuable information for utilities seeking to better understand network performance and the weak links in the system. In this thesis, algorithms are developed to measure the contribution of individual equipment to the power system risk indices, as part of a novel risk analysis tool. A new cost-worth approach was also developed that supports early planning decisions on replacement activities for non-repairable aging components, in order to maintain an economically acceptable level of system reliability. The concepts, techniques and procedures developed in this thesis are illustrated numerically using published test systems. It is believed that the methods and approaches presented substantially improve the accuracy of risk predictions by explicitly considering the effect of equipment entering a period of increased risk of non-repairable failure.
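The idea of combining random failures with an aging failure mode can be sketched as a product of survival functions: an exponential term for constant-hazard random failures and a Weibull wear-out term. This is a simplified illustration under assumed parameters, not the thesis's actual fitted model.

```python
# Sketch of combining a random (constant-hazard) failure mode with a Weibull
# aging (wear-out) mode as a product of survival functions. The rate, shape
# and scale below are assumed values for illustration only.
import math

def survival_probability(t_years, lam_random=0.01, beta=3.0, eta=40.0):
    """P(no failure by year t): exponential random-failure term times a
    Weibull aging term with shape beta > 1 (increasing hazard) and scale eta."""
    random_part = math.exp(-lam_random * t_years)        # random failures
    aging_part = math.exp(-((t_years / eta) ** beta))    # wear-out failures
    return random_part * aging_part

# Survival falls slowly at first, then rapidly as wear-out dominates:
probs = [survival_probability(t) for t in (10, 20, 30, 40)]
```

With beta > 1 the aging hazard grows with time, which is the effect that drives early replacement decisions for non-repairable components.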
Abstract:
A better understanding of Open Source Innovation in Physical Product (OSIP) might allow project managers to mitigate the risks associated with this innovation model and process, while developing the right strategies to maximise OSIP outputs. In the software industry, firms have been highly successful using Open Source Innovation (OSI) strategies. However, OSI in the physical world has not been studied, leading to the research question: what advantages and disadvantages do organisations incur from using OSI in physical products? An exploratory research methodology supported by thirteen semi-structured interviews helped us build a seven-theme framework to categorise the advantage and disadvantage elements linked with the use of OSIP. In addition, the factors affecting advantage and disadvantage elements for firms using OSIP were identified as: the degree of openness of OSIP projects; the time of release of OSIP into the public domain; the use of Open Source Innovation in Software (OSIS) in conjunction with OSIP; project management elements (project oversight, scope and modularity); firms' Corporate Social Responsibility (CSR) values; and the value of the OSIP project to the community. This thesis contributes to innovation theory by identifying the advantage and disadvantage elements of OSIP. From a contingency perspective, it then identifies factors that enhance or decrease advantages, or mitigate or increase disadvantages, of OSIP. Finally, the research clarifies the understanding of OSI by clearly setting OSIP apart from OSIS. The main practical contribution is to provide managers with a framework for better understanding OSIP, as well as a model identifying the contingency factors that increase advantage and decrease disadvantage. Overall, the research allows managers to make informed decisions about when they can use OSIP and how they can develop strategies to make OSIP a viable proposition. In addition, it demonstrates that the advantages identified in OSIS cannot all be transferred to OSIP; OSIP decisions should therefore not be based on OSIS knowledge.
Abstract:
Power systems in many countries are stressed towards their stability limits. If these systems experience unexpected serious contingencies or disturbances, there is a significant risk of instability, which may lead to widespread blackout. Frequency is a reliable indicator that such an instability condition exists in the power system; therefore, under-frequency load shedding (UFLS) is used to stabilise the power system by curtailing some load. In this paper, the SFR-based UFLS model is redeveloped into an optimal load shedding method that sheds the optimal amount of load following a single particular contingency event. The proposed optimal load shedding scheme is then tested on the 39-bus New England test system, where its performance is compared against a random load shedding scheme.
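The basic UFLS mechanism can be illustrated with a toy single-machine frequency simulation: frequency decays after a generation deficit, and a shedding stage fires when it crosses a threshold. All parameters and the single-stage logic below are assumptions for illustration; the paper's SFR-based optimal scheme is more elaborate.

```python
# Toy under-frequency load shedding on a single-machine equivalent (forward
# Euler on the swing equation). All parameter values are assumed, illustrative
# numbers, not those of the paper's SFR-based model.
F0, H, D = 50.0, 4.0, 1.0           # nominal frequency (Hz), inertia (s), load damping (pu)
DEFICIT = 0.2                       # generation deficit after the contingency (pu)
SHED_F, SHED_BLOCK = 49.0, 0.2      # UFLS trip frequency (Hz), load shed by the stage (pu)

def simulate(t_end=20.0, dt=0.01):
    """Return (final frequency, load shed) after one contingency with one UFLS stage."""
    f, shed = F0, 0.0
    for _ in range(int(t_end / dt)):
        imbalance = -DEFICIT + shed + D * (F0 - f) / F0   # per-unit power balance
        f += dt * F0 * imbalance / (2 * H)                # swing equation, Euler step
        if f < SHED_F and shed == 0.0:
            shed = SHED_BLOCK                             # the single UFLS stage fires
    return f, shed

f_final, shed = simulate()   # frequency recovers toward F0 once load is shed
```

Because the stage sheds the full deficit here, frequency recovers toward nominal; an optimal scheme would instead choose the shed amount (and its allocation) to minimise curtailed load while still arresting the decline.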
Abstract:
This paper focuses on information sharing with key suppliers and seeks to explore the factors that might influence its extent and depth. We also investigate how information sharing affects a company's performance with regard to resource usage, output, and flexibility. Drawing from transaction cost and contingency theories, several factors, namely environmental uncertainty, demand uncertainty, dependency, and the product life cycle stage, are proposed to explain the level of information shared with key suppliers. We develop a model in which information sharing mediates the (contingent) factors and company performance. A mail survey was used to collect data from Finnish and Swedish companies. Partial Least Squares analysis was performed separately for each country (n=119, n=102). There was consistent evidence that environmental uncertainty, demand uncertainty and supplier/buyer dependency had explanatory power, whereas no significance was found for the relationship between product life cycle stage and information sharing. The results also confirm previous studies by providing support for a positive relationship between information sharing and performance, where output performance was found to be the most strongly related.
Abstract:
Power system dynamic analysis and security assessment are becoming more significant today due to increases in size and complexity from restructuring, emerging new uncertainties, and the integration of renewable energy sources, distributed generation, and microgrids. Precise modelling of all contributing elements/devices, understanding their interactions in detail, and observing hidden dynamics with existing analysis tools/theorems is difficult, and sometimes impossible. In this chapter, the power system is considered as a continuum, and the propagated electromechanical waves initiated by faults and other random events are studied to provide a new scheme for stability investigation of a large-dimensional system. For this purpose, the electrical indices (such as rotor angle and bus voltage) measured at different points in the network following a fault are used, and the behaviour of the propagated waves through the lines, nodes, and buses is analysed. The impact of weak transmission links on a progressive electromechanical wave is addressed using the energy function concept. It is also emphasised that determining the severity of a disturbance/contingency accurately, without considering the related electromechanical waves, hidden dynamics, and their properties, is not sufficiently secure. Considering these phenomena requires heavy and time-consuming calculation, which is not suitable for online stability assessment problems. However, using a continuum model for a power system reduces the burden of complex calculations.
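The continuum view treats the grid as a medium in which angle disturbances obey a wave equation. A minimal finite-difference sketch of such a wave, with a uniform assumed wave speed and grid (not the chapter's actual model), looks like this:

```python
# Toy "continuum" sketch: a rotor-angle disturbance spreading along a 1-D chain
# of buses as a wave, u_tt = c^2 * u_xx, integrated by leapfrog finite differences.
# The uniform wave speed c, the grid, and the Gaussian bump are assumed values.
import math

def propagate(n=200, steps=120, c=1.0, dx=1.0, dt=0.5):
    """Evolve a Gaussian angle disturbance; return the profile after `steps` steps."""
    u_prev = [math.exp(-((i - n // 2) ** 2) / 20.0) for i in range(n)]
    u = u_prev[:]                        # zero initial velocity
    r2 = (c * dt / dx) ** 2              # squared Courant number (stability needs <= 1)
    for _ in range(steps):
        u_next = u[:]
        for i in range(1, n - 1):        # fixed ends, a stand-in for network boundaries
            u_next[i] = 2 * u[i] - u_prev[i] + r2 * (u[i + 1] - 2 * u[i] + u[i - 1])
        u_prev, u = u, u_next
    return u

# The initial bump splits into two half-amplitude pulses travelling outward,
# the discrete analogue of a disturbance wave propagating away from a fault.
u = propagate()
```

In this picture a weak transmission link would appear as a local change in the effective wave speed, where part of the wave energy is reflected rather than transmitted onward.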