941 results for performance specification
at Queensland University of Technology - ePrints Archive
Abstract:
Executive Summary: The objective of this report was to use the Sydney Opera House as a case study of the application of Building Information Modelling (BIM). The Sydney Opera House is a large, complex building with a very irregular configuration, which makes it a challenging test. A number of key concerns are evident at SOH:
• the building structure is complex, and building service systems - already the major cost of ongoing maintenance - are undergoing technology change, with new computer-based services becoming increasingly important;
• the current “documentation” of the facility comprises several independent, partly overlapping systems and is inadequate to support current and future service requirements;
• the building has reached a milestone age in terms of the condition and maintainability of key public areas and service systems, the functionality of spaces, and longer term strategic management;
• many business functions, such as space or event management, require up-to-date information about the facility that is currently inadequately delivered, and is expensive and time consuming to update and deliver to customers;
• major building upgrades are being planned that will put considerable strain on existing Facilities Portfolio services and their capacity to manage them effectively.
While some of these concerns are unique to the House, many will be common to larger commercial and institutional portfolios. The work described here supported a complementary task which sought to identify whether a building information model – an integrated building database – could be created that would support asset and facility management functions (see Sydney Opera House – FM Exemplar Project, Report Number: 2005-001-C-4 Building Information Modelling for FM at Sydney Opera House), a business strategy that has been well demonstrated. The development of the BIMSS - Open Specification for BIM - has been surprisingly straightforward. The lack of technical difficulties in converting the House’s existing conventions and standards to the new model-based environment can be related to three key factors:
• SOH Facilities Portfolio – the internal group responsible for asset and facility management – already has well-established building and documentation policies in place. The setting of, and adherence to, well thought out operational standards has been based on the need to create an environment that is understood by all users and that addresses the major business needs of the House.
• The second factor is the nature of the IFC Model Specification used to define the BIM protocol. The IFC standard is based on building practice and nomenclature widely used in the construction industries across the globe. For example, the nomenclature of building parts – e.g. ifcWall – corresponds to normal terminology, but extends the traditional drawing environment currently used for design and documentation. This demonstrates that the international IFC model accurately represents local practice for building data representation and management.
• A BIM environment sets up opportunities for innovative processes that can exploit the rich data in the model and improve services and functions for the House: for example, several high-level processes have been identified that could benefit from standardized Building Information Models, such as maintenance processes using engineering data, business processes using scheduling, venue access and security data, and benchmarking processes using building performance data.
The new technology matches business needs for current and new services. The adoption of IFC-compliant applications opens the way for shared building model collaboration and new processes, a significant new focus of the BIM standards. In summary, SOH's current building standards have been successfully drafted for a BIM environment and are confidently expected to be fully developed when BIM is adopted operationally by SOH. These BIM standards and their application to the Opera House are intended as a template for other organisations to adopt for their own procurement and facility management activities. Appendices provide an overview of the IFC Integrated Object Model and an understanding of IFC Model Data.
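The abstract's point about IFC nomenclature can be made concrete with a small illustration. The sketch below is not part of the report: it assumes the open-source ifcopenshell library and a hypothetical model file name, and simply lists the IfcWall entities in an IFC model, the kind of building-part data an FM-oriented BIM would expose.

```python
import ifcopenshell

# Hypothetical file name; any model exported from an IFC-compliant
# authoring tool would do.
model = ifcopenshell.open("venue_model.ifc")

# List every wall entity (the IfcWall nomenclature mentioned above)
# with its global identifier and name, as a simple FM-oriented extraction.
for wall in model.by_type("IfcWall"):
    print(wall.GlobalId, wall.Name)
```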
Abstract:
We evaluate the performance of several specification tests for Markov regime-switching time-series models. We consider the Lagrange multiplier (LM) and dynamic specification tests of Hamilton (1996) and Ljung–Box tests based on both the generalized residual and a standard-normal residual constructed using the Rosenblatt transformation. The size and power of the tests are studied using Monte Carlo experiments. We find that the LM tests have the best size and power properties. The Ljung–Box tests exhibit slight size distortions, though tests based on the Rosenblatt transformation perform better than the generalized residual-based tests. The tests exhibit impressive power to detect both autocorrelation and autoregressive conditional heteroscedasticity (ARCH). The tests are illustrated with a Markov-switching generalized ARCH (GARCH) model fitted to the US dollar–British pound exchange rate, with the finding that both autocorrelation and GARCH effects are needed to adequately fit the data.
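As an informal illustration of the residual-based diagnostics described above, the following sketch (not the authors' code) applies Ljung–Box tests to a placeholder residual series and to its squares using statsmodels; in practice the series would be the generalized or Rosenblatt-transformed residuals from a fitted Markov-switching model.

```python
import numpy as np
from statsmodels.stats.diagnostic import acorr_ljungbox

rng = np.random.default_rng(0)
resid = rng.standard_normal(500)   # placeholder for generalized residuals

# Remaining autocorrelation in the residuals ...
print(acorr_ljungbox(resid, lags=[10]))
# ... and ARCH effects, detected via the squared residuals.
print(acorr_ljungbox(resid**2, lags=[10]))
```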
Abstract:
Performance based planning is a form of planning regulation that is not well understood, and the theoretical advantages of this type of planning are rarely achieved in practice. Normatively, this type of regulation relies on quantifiable, technically based performance standards designed to manage the effects of development, where the standards provide certainty about the level of performance while the means of achievement remains flexible. Few empirical studies have attempted to examine how performance based planning has been conceptualised and implemented in practice. Existing literature is predominantly anecdotal and consultant-based (Baker et al. 2006) and has not sought to quantitatively examine how land use has been managed or to determine how context influences implementation. The Integrated Planning Act 1997 (IPA) operated as Queensland’s principal planning legislation between March 1998 and December 2009. The IPA prevented Local Governments from prohibiting development or use, and the term 'zone' was absent from the legislation. While the IPA did not use the term performance based planning, the system is widely considered to be performance based in practice (e.g. Baker et al. 2006; Steele 2009a, 2009b). However, the degree to which the IPA and the planning system in Queensland are performance based is debated (e.g. Yearbury 1998; England 2004). Four research questions guided the research framework using Queensland as the case study. The questions sought to: determine if there is a common understanding of performance based planning; identify how performance based planning was expressed under the IPA; understand how performance based planning was implemented in plans; and explore the experiences of participants in the planning system. The research developed a performance adoption spectrum. The spectrum describes how performance based planning is implemented, ranging between pure and hybrid interpretations. An ex-post evaluation of seventeen IPA plans sought to determine plan performativity within the conceptual spectrum. Land use was examined from the procedural dimension of performance (Assessment Tables) and the substantive dimension of performance (Codes). A documentary analysis and forty-one interviews supplemented the research. The analytical framework considered how context influenced performance based planning, including whether: the location of the local government affected land use management techniques; temporal variation in implementation existed; plan-making guidelines affected implementation; different perceptions of the concept existed; and this type of planning applies to a range of spatial scales. Outcomes were viewed as the medium for determining the acceptability of development in Queensland, a significant departure from the pure approaches found in the United States. Interviews highlighted the absence of plan-making direction in the IPA, which contributed to confusion about the intended direction of the planning system and the myth that the IPA would guarantee a performance based system. A hybridised form of performance based planning evolved in Queensland, dependent on prescriptive land use zones and specification of land use type, with some local governments going to extreme lengths to discourage certain activities in a predetermined manner. Context had varying degrees of influence on plan-making methods.
Decision-making was found to be inconsistent, and the system created a range of unforeseen consequences, including difficulties associated with land valuation and increased development speculation, while the role of planners in court was found to be less critical than in the previous planning system.
Abstract:
This article explores an important temporal aspect of the design of strategic alliances by focusing on the issue of time bounds specification. Time bounds specification refers to a choice on behalf of prospective alliance partners at the time of alliance formation either to pre-specify the duration of an alliance to a specific time window, or to keep the alliance open-ended (Reuer & Ariño, 2007). For instance, Das (2006) mentions the example of the alliance between Telemundo Network and Mexican Argos Comunicacion (MAC). Announced in October 2000, this alliance entailed a joint production of 1200 hours of comedy, news, drama, reality and novella programs (Das, 2006). Conditioned on the projected date of completing the 1200 hours of programs, Telemundo Network and MAC pre-specified the time bounds of the alliance ex ante. Such time-bound alliances are said to be particularly prevalent in project-based industries, like movie production, construction, telecommunications and pharmaceuticals (Schwab & Miner, 2008). In many other instances, however, firms may choose to keep their alliances open-ended, without specifying a time bound at the time of alliance formation. The choice between designing open-ended alliances that are “built to last” and time-bound alliances that are “meant to end” is important. Seminal works like Axelrod (1984), Heide & Miner (1992), and Parkhe (1993) demonstrated that the choice to place temporal bounds on a collaborative venture has important implications. More specifically, collaborations that have explicit, short-term time bounds (i.e. what is termed a shorter “shadow of the future”) are more likely to experience opportunism (Axelrod, 1984), are more likely to focus on the immediate present (Bakker, Boros, Kenis & Oerlemans, 2012), and are less likely to develop trust (Parkhe, 1993) than alliances for which time bounds are kept indeterminate. These factors, in turn, have been shown to have important implications for the performance of alliances (e.g. Kale, Singh & Perlmutter, 2000). Thus, there seems to be a strong incentive for organizations to form open-ended strategic alliances. And yet, Reuer & Ariño (2007), one of the few empirical studies that details the prevalence of time-bound and open-ended strategic alliances, found that about half (47%) of the alliances in their sample were time bound and the other half were open-ended. What conditions, then, determine this choice?
Abstract:
Educators are faced with many challenging questions in designing an effective curriculum. What prerequisite knowledge do students have before commencing a new subject? At what level of mastery? What is the spread of capabilities between bare-passing students vs. the top performing group? How does the intended learning specification compare to student performance at the end of a subject? In this paper we present a conceptual model that helps in answering some of these questions. It has the following main capabilities: capturing the learning specification in terms of syllabus topics and outcomes; capturing mastery levels to model progression; capturing the minimal vs. aspirational learning design; capturing confidence and reliability metrics for each of these mappings; and finally, comparing and reflecting on the learning specification against actual student performance. We present a web-based implementation of the model, and validate it by mapping the final exams from four programming subjects against the ACM/IEEE CS2013 topics and outcomes, using Bloom's Taxonomy as the mastery scale. We then import the itemised exam grades from 632 students across the four subjects and compare the demonstrated student performance against the expected learning for each of these. Key contributions of this work are the validated conceptual model for capturing and comparing expected learning vs. demonstrated performance, and a web-based implementation of this model, which is made freely available online as a community resource.
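As a rough illustration of the mapping the paper describes, the toy sketch below (not the paper's web-based tool; all names, levels and scores are hypothetical) attaches a syllabus topic and an expected Bloom level to an exam item and derives a crude "demonstrated" level from average marks.

```python
from dataclasses import dataclass

BLOOM = ["remember", "understand", "apply", "analyse", "evaluate", "create"]

@dataclass
class ExamItem:
    topic: str           # e.g. an ACM/IEEE CS2013 topic (hypothetical here)
    expected_level: int  # index into BLOOM: the intended mastery level
    max_marks: float

def demonstrated_level(item: ExamItem, avg_marks: float) -> int:
    """Crude estimate: scale the expected level by the average score fraction."""
    fraction = avg_marks / item.max_marks
    return round(item.expected_level * fraction)

item = ExamItem(topic="Fundamental Programming Concepts",
                expected_level=2, max_marks=10)
# Compare expected vs. demonstrated mastery for this item.
print(BLOOM[item.expected_level], "->",
      BLOOM[demonstrated_level(item, avg_marks=6.5)])
```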
Abstract:
The introduction of safety technologies into complex socio-technical systems requires an integrated and holistic approach to HF and engineering, considering the effects of failures not only within system boundaries, but also at the interfaces with other systems and humans. Level crossing warning devices are examples of such systems where technically safe states within the system boundary can influence road user performance, giving rise to other hazards that degrade safety of the system. Chris will discuss the challenges that have been encountered to date in developing a safety argument in support of low-cost level crossing warning devices. The design and failure modes of level crossing warning devices are known to have a significant influence on road user performance; however, quantifying this effect is one of the ongoing challenges in determining appropriate reliability and availability targets for low-cost level crossing warning devices.
Abstract:
Engineering design processes are necessary to attain the requisite standards of integrity for high-assurance safety-related systems. Additionally, human factors design initiatives can provide critical insights that parameterise their development. Unfortunately, the popular perception of human factors as a “forced marriage” between engineering and psychology often provokes views where the ‘human factor’ is perceived as a threat to systems design. Some popular performance-based standards for developing safety-related systems advocate identifying and managing human factors throughout the system lifecycle. However, they also tend to fall short in their guidance on the application of human factors methods and tools, let alone how the outputs generated can be integrated into various stages of the design process. This case study describes a project that converged engineering with human factors to develop a safety argument for new low-cost railway level crossing technology for system-wide implementation in Australia. The paper brings together the perspectives of a software engineer and a cognitive psychologist and their involvement in the project over two years of collaborative work to develop a safety argument for low-cost level crossing technology. Safety and reliability requirements were informed by applying human factors analytical tools that supported the evaluation and quantification of human reliability where users interfaced with the technology. The project team was confronted with significant challenges in cross-disciplinary engagement, particularly the complexities of dealing with incongruences in disciplinary language. They were also encouraged to think ‘outside the box’ about how users of a system interpreted system states and behaviour. Importantly, some of these states, while considered safe within the boundary of the constituent systems that implemented safety-related functions, could actually lead users to engage in deviant behaviour. Psychology explained how user compliance could be eroded to levels that effectively undermined the risk reduction afforded by the systems. By linking the engineering and psychology disciplines, overall safety performance was improved through technical requirements and design decisions that minimized the system states and behaviours that led to user deviancy. As a commentary on the utility of transdisciplinary collaboration for technical specification, the processes used to bridge the two disciplines are conceptualised in a graphical model.
Abstract:
Traditionally, the fire resistance rating of light gauge steel frame (LSF) wall systems is based on approximate prescriptive methods developed using limited fire tests. These fire tests are conducted using the standard fire time-temperature curve given in ISO 834. However, in recent times fire has become a major hazard in buildings due to the increase in fire loads resulting from modern furniture and lightweight construction, which make use of thermoplastic materials, synthetic foams and fabrics. Therefore, a detailed research study into the performance of load-bearing LSF wall systems exposed to both standard and realistic design fires on one side was undertaken to develop improved fire design rules. This study included both full-scale fire tests and numerical studies of eight different LSF wall systems conducted for both the standard fire curve and the recently developed realistic design fire curves. The use of previous fire design rules developed for LSF walls subjected to non-uniform elevated temperature distributions, based on the AISI design manual and Eurocode 3 Parts 1.2 and 1.3, was investigated first. New simplified fire design rules based on AS/NZS 4600, the North American Specification and Eurocode 3 Part 1.3 were then proposed with suitable allowances for the interaction effects of compression and bending actions. The importance of considering thermal bowing, magnified thermal bowing and neutral axis shift in the fire design was also investigated and their effects were included. A spreadsheet-based design tool was developed based on the new design rules to predict the failure load ratio versus time and temperature curves for varying LSF wall configurations. The accuracy of the proposed design rules was verified using the fire test and finite element analysis results for various wall configurations, steel grades, thicknesses and load ratios under both standard and realistic design fire conditions. A simplified method was also proposed to predict the fire resistance rating of LSF walls based on two sets of equations developed for the load ratio-hot flange temperature and the time-temperature relationships. This paper presents the details of this study on LSF wall systems under fire conditions and the results.
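The simplified fire resistance rating (FRR) method is only described at a high level in the abstract; the sketch below shows the structure of such a two-step calculation with purely hypothetical placeholder equations and coefficients, not the paper's actual design rules.

```python
def critical_hot_flange_temp(load_ratio: float) -> float:
    """Placeholder for the load ratio - hot flange temperature equations (deg C)."""
    return 700.0 - 500.0 * load_ratio          # hypothetical linear form

def time_to_reach_temp(temp_c: float) -> float:
    """Placeholder for the time-temperature equations (minutes)."""
    return temp_c / 8.0                        # hypothetical heating rate

def fire_resistance_rating(load_ratio: float) -> float:
    """FRR = time for the hot flange to reach the critical temperature."""
    return time_to_reach_temp(critical_hot_flange_temp(load_ratio))

print(fire_resistance_rating(0.4))  # e.g. ~62.5 min at a 0.4 load ratio
```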
Abstract:
The method of generalised estimating equations for regression modelling of clustered outcomes allows for specification of a working correlation matrix that is intended to approximate the true correlation matrix of the observations. We investigate the asymptotic relative efficiency of the generalised estimating equation for the mean parameters when the correlation parameters are estimated by various methods. The asymptotic relative efficiency depends on three features of the analysis, namely (i) the discrepancy between the working correlation structure and the unobservable true correlation structure, (ii) the method by which the correlation parameters are estimated and (iii) the 'design', by which we refer to both the structures of the predictor matrices within clusters and the distribution of cluster sizes. Analytical and numerical studies of realistic data-analysis scenarios show that the choice of working covariance model has a substantial impact on regression estimator efficiency. Protection against avoidable loss of efficiency associated with covariance misspecification is obtained when a 'Gaussian estimation' pseudolikelihood procedure is used with an AR(1) structure.
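To show where the working correlation choice enters a GEE fit in practice, the following sketch (simulated data, not the paper's analysis, and using statsmodels' default moment-based correlation estimation rather than the Gaussian pseudolikelihood procedure studied above) fits a GEE with an AR(1) working structure.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n_clusters, cluster_size = 100, 5
df = pd.DataFrame({
    "cluster": np.repeat(np.arange(n_clusters), cluster_size),
    "t": np.tile(np.arange(cluster_size), n_clusters),
    "x": rng.standard_normal(n_clusters * cluster_size),
})
df["y"] = 1.0 + 0.5 * df["x"] + rng.standard_normal(len(df))

# AR(1) working correlation; swapping in sm.cov_struct.Independence() or
# sm.cov_struct.Exchangeable() shows how the working-structure choice
# enters the analysis.
model = sm.GEE.from_formula(
    "y ~ x", groups="cluster", data=df, time="t",
    cov_struct=sm.cov_struct.Autoregressive(),
)
print(model.fit().summary())
```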
Abstract:
The current state of the practice in Blackspot Identification (BSI) utilizes safety performance functions based on total crash counts to identify transport system sites with potentially high crash risk. This paper postulates that total crash count variation over a transport network is the result of multiple distinct crash generating processes, including geometric characteristics of the road, spatial features of the surrounding environment, and driver behaviour factors. However, these multiple sources are ignored in current modelling methodologies when trying either to explain or to predict crash frequencies across sites. Instead, current practice employs models that imply that a single underlying crash generating process exists. This model mis-specification may lead to correlating crashes with the incorrect sources of contributing factors (e.g. concluding a crash is predominantly caused by a geometric feature when it is a behavioural issue), which may ultimately lead to inefficient use of public funds and misidentification of true blackspots. This study aims to propose a latent class model consistent with a multiple crash process theory, and to investigate the influence this model has on correctly identifying crash blackspots. We first present the theoretical and corresponding methodological approach, in which a Bayesian Latent Class (BLC) model is estimated assuming that crashes arise from two distinct risk generating processes, including engineering and unobserved spatial factors. The Bayesian model is used to incorporate prior information about the contribution of each underlying process to the total crash count. The methodology is applied to the state-controlled roads in Queensland, Australia, and the results are compared to an Empirical Bayesian Negative Binomial (EB-NB) model. A comparison of goodness-of-fit measures illustrates significantly improved performance of the proposed model compared to the EB-NB model. The detection of blackspots was also improved when compared to the EB-NB model. In addition, modelling crashes as the result of two fundamentally separate underlying processes reveals more detailed information about unobserved crash causes.
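As a much-simplified analogue of the latent class idea (not the authors' Bayesian specification, which incorporates engineering and spatial covariates and informative priors), the sketch below fits a two-component Negative Binomial mixture to placeholder crash counts with PyMC.

```python
import numpy as np
import arviz as az
import pymc as pm

rng = np.random.default_rng(2)
# Fake counts from two regimes, standing in for site-level crash counts.
crashes = rng.poisson(lam=np.where(rng.random(200) < 0.3, 8.0, 2.0))

with pm.Model() as latent_class_model:
    w = pm.Dirichlet("w", a=np.ones(2))                       # class weights
    mu = pm.Gamma("mu", alpha=2.0, beta=0.5, shape=2)         # class means
    alpha = pm.Gamma("alpha", alpha=2.0, beta=1.0, shape=2)   # overdispersion
    components = pm.NegativeBinomial.dist(mu=mu, alpha=alpha, shape=2)
    y = pm.Mixture("y", w=w, comp_dists=components, observed=crashes)
    idata = pm.sample(1000, tune=1000, chains=2, random_seed=2)

# Posterior summaries for the class weights and class-specific means.
print(az.summary(idata, var_names=["w", "mu"]))
```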