22 results for Maintainability.

in Queensland University of Technology - ePrints Archive


Relevance:

20.00%

Publisher:

Abstract:

The concept of constructability integrates individual construction functions and experiences, through suitable and timely inputs, into the early stages of project planning and design. It aims to ease construction processes for a more effective and efficient achievement of overall project objectives. Similarly, the concepts of operability and maintainability integrate the functions and experiences of Operation and Maintenance (O&M) into project planning and design. Various studies have suggested that these concepts are implemented in isolation from each other, thus preventing optimum results in delivering infrastructure projects. This paper explores the integration of these three concepts in order to maximize the benefits of their implementation. It reviews the literature to identify the main O&M concerns and assesses their association with constructability principles. This provides a structure for developing an extended constructability model that includes O&M concerns. It is anticipated that an extended constructability model that includes O&M considerations can lead to a more efficient and effective delivery of infrastructure projects.

Relevance:

10.00%

Publisher:

Abstract:

The design of a building is a complicated process: diverse components must be formulated through unique tasks, involving different personalities and organisations, in order to satisfy multi-faceted client requirements. To do this successfully, the project team must encapsulate an integrated design that accommodates various social, economic and legislative factors. Therefore, in this era of increasing global competition, integrated design has been increasingly recognised as a solution for delivering value to clients.

The ‘From 3D to nD modelling’ project at the University of Salford aims to support integrated design: to enable and equip the design and construction industry with a tool that allows users to create, share, contemplate and apply knowledge from multiple perspectives of user requirements (accessibility, maintainability, sustainability, acoustics, crime, energy simulation, scheduling, costing, etc.). The project thus takes the concept of 3-dimensional computer modelling of the built environment to an almost infinite number of dimensions, coping with whole-life construction and asset management issues in the design of modern buildings. The project is funded by a four-year platform grant from the Engineering and Physical Sciences Research Council (EPSRC) in the UK, awarded to a multi-disciplinary research team to enable flexibility in the research strategy and to produce leading innovation. This paper reports on the development of a business process and IT vision for how integrated environments will allow nD-enabled construction and asset management to be undertaken. It further develops many of the key issues of a future vision arising from previous CIB W78 conferences.

Relevance:

10.00%

Publisher:

Abstract:

Vibration-based damage identification methods examine changes in primary modal parameters or in quantities derived from modal parameters. As one method may have advantages over another under some circumstances, a multi-criteria approach is proposed. Case studies are conducted separately on beam, plate and plate-on-beam structures. Using numerically simulated modal data obtained through finite element analysis software, algorithms based on flexibility and strain energy changes before and after damage are developed and used as indices for assessing the state of structural health. Results show that the proposed multi-criteria method is effective in identifying damage in these structures.
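The flexibility-change index mentioned above can be illustrated with a small numerical sketch. This uses the generic textbook formulation (F ≈ Σᵢ φᵢφᵢᵀ/ωᵢ²), not the authors' code; the mode shapes, frequencies and damage scenario below are invented for illustration.

```python
# A minimal sketch (not the authors' implementation) of a flexibility-change
# damage index, assuming mass-normalised mode shapes `phi` (n_dof x n_modes)
# and natural frequencies `omega` (rad/s) from a finite element model.
import numpy as np

def modal_flexibility(phi, omega):
    """F = sum_i (1/omega_i^2) * phi_i * phi_i^T."""
    return (phi / omega**2) @ phi.T

def flexibility_damage_index(phi_healthy, omega_healthy, phi_damaged, omega_damaged):
    """Absolute change in the flexibility matrix diagonal at each DOF;
    the DOF with the largest change is the likely damage location."""
    dF = (modal_flexibility(phi_damaged, omega_damaged)
          - modal_flexibility(phi_healthy, omega_healthy))
    return np.abs(np.diag(dF))

# Synthetic 3-DOF example: damage lowers the first natural frequency,
# mimicking a stiffness loss that mostly affects DOF 0.
phi_h = np.eye(3)
omega_h = np.array([10.0, 25.0, 40.0])
phi_d = np.eye(3)
omega_d = np.array([8.0, 25.0, 40.0])
idx = flexibility_damage_index(phi_h, omega_h, phi_d, omega_d)
print(int(np.argmax(idx)))  # 0 — DOF with the largest flexibility change
```

In practice the modal data would come from finite element analysis or measured vibration tests, and a strain-energy index would be computed alongside this one for the multi-criteria assessment.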

Relevance:

10.00%

Publisher:

Abstract:

Executive Summary

The objective of this report was to use the Sydney Opera House as a case study of the application of Building Information Modelling (BIM). The Sydney Opera House is a large, complex building with a very irregular configuration, which makes it a challenging test. A number of key concerns are evident at SOH:
• the building structure is complex, and building service systems - already the major cost of ongoing maintenance - are undergoing technology change, with new computer-based services becoming increasingly important;
• the current “documentation” of the facility comprises several independent, partly overlapping systems and is inadequate to support current and future service requirements;
• the building has reached a milestone age in terms of the condition and maintainability of key public areas and service systems, the functionality of spaces, and longer-term strategic management;
• many business functions, such as space or event management, require up-to-date information about the facility that is currently inadequately delivered, and expensive and time-consuming to update and deliver to customers;
• major building upgrades are being planned that will put considerable strain on existing Facilities Portfolio services and their capacity to manage them effectively.
While some of these concerns are unique to the House, many will be common to larger commercial and institutional portfolios. The work described here supported a complementary task which sought to identify whether a building information model - an integrated building database - could be created that would support asset and facility management functions (see Sydney Opera House – FM Exemplar Project, Report Number: 2005-001-C-4, Building Information Modelling for FM at Sydney Opera House), a business strategy that has been well demonstrated. The development of the BIMSS - Open Specification for BIM - has been surprisingly straightforward.
The lack of technical difficulties in converting the House’s existing conventions and standards to the new model-based environment can be related to three key factors:
• SOH Facilities Portfolio - the internal group responsible for asset and facility management - already has well-established building and documentation policies in place. The setting of, and adherence to, well-thought-out operational standards has been based on the need to create an environment that is understood by all users and that addresses the major business needs of the House.
• The second factor is the nature of the IFC Model Specification used to define the BIM protocol. The IFC standard is based on building practice and nomenclature, widely used in the construction industries across the globe. For example, the nomenclature of building parts (e.g. IfcWall) corresponds to normal terminology but extends the traditional drawing environment currently used for design and documentation. This demonstrates that the international IFC model accurately represents local practice for building data representation and management.
• A BIM environment sets up opportunities for innovative processes that can exploit the rich data in the model and improve services and functions for the House. For example, several high-level processes have been identified that could benefit from standardised Building Information Models, such as maintenance processes using engineering data, business processes using scheduling, venue access and security data, and benchmarking processes using building performance data. The new technology matches business needs for current and new services. The adoption of IFC-compliant applications opens the way for shared building-model collaboration and new processes, a significant new focus of the BIM standards.
In summary, SOH’s current building standards have been successfully drafted for a BIM environment and are confidently expected to be fully developed when BIM is adopted operationally by SOH. These BIM standards and their application to the Opera House are intended as a template for other organisations to adopt for their own procurement and facility management activities. Appendices provide an overview of the IFC Integrated Object Model and an explanation of IFC Model Data.
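The correspondence between conventional building-part names and IFC nomenclature noted in the report can be pictured with a minimal sketch; the mapping table below is an assumed illustration, not part of the BIMSS specification.

```python
# Illustrative mapping (assumed, not from BIMSS) between conventional
# building-part names and IFC entity names such as IfcWall.
CONVENTIONAL_TO_IFC = {
    "wall": "IfcWall",
    "door": "IfcDoor",
    "window": "IfcWindow",
    "slab": "IfcSlab",
    "beam": "IfcBeam",
}

def to_ifc_entity(part_name: str) -> str:
    """Look up the IFC entity name for a conventional building-part name."""
    try:
        return CONVENTIONAL_TO_IFC[part_name.lower()]
    except KeyError:
        raise ValueError(f"No IFC mapping defined for '{part_name}'")

print(to_ifc_entity("Wall"))  # IfcWall
```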

Relevance:

10.00%

Publisher:

Abstract:

Measuring quality attributes of object-oriented designs (e.g. maintainability and performance) has been covered by a number of studies. However, these studies have not considered security as much as other quality attributes. Moreover, most security studies focus at the level of individual program statements, an approach that makes it hard and expensive to discover and fix vulnerabilities caused by design errors. In this work, we focus on the security design of an object-oriented application and define a number of security metrics. These metrics allow designers to discover and fix security vulnerabilities at an early stage, and help compare the security of alternative designs. In particular, we propose seven security metrics to measure the Data Encapsulation (accessibility) and Cohesion (interactions) of a given object-oriented class from the point of view of potential information flow.
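As a rough illustration of the accessibility side of such metrics, the sketch below computes the fraction of non-private attributes in a class; the metric name and definition here are illustrative stand-ins, not the paper's seven metrics.

```python
# A hedged sketch of one accessibility-style security metric: the fraction
# of a class's attributes that are non-private and therefore directly
# exposed to potential information flow. Illustrative only.
from dataclasses import dataclass

@dataclass
class Attribute:
    name: str
    visibility: str  # "private", "protected", or "public"

def attribute_accessibility(attributes):
    """Ratio of non-private attributes; lower values suggest better
    data encapsulation from a security standpoint."""
    if not attributes:
        return 0.0
    exposed = sum(1 for a in attributes if a.visibility != "private")
    return exposed / len(attributes)

# Hypothetical bank-account class design: two of four attributes exposed.
account = [
    Attribute("balance", "private"),
    Attribute("owner", "public"),
    Attribute("pin", "private"),
    Attribute("branch", "protected"),
]
print(attribute_accessibility(account))  # 0.5
```

Comparing this value across alternative designs of the same class is the kind of early-stage assessment the abstract describes.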

Relevance:

10.00%

Publisher:

Abstract:

Historically, asset management focused primarily on the reliability and maintainability of assets; organisations have since accepted the notion that a much larger array of processes governs the life and use of an asset. Accordingly, asset management’s new paradigm seeks a holistic, multi-disciplinary approach to the management of physical assets. A growing number of organisations now seek to develop integrated asset management frameworks and bodies of knowledge. This research seeks to complement the existing outputs of these organisations through the development of an asset management ontology. Ontologies define a common vocabulary for both researchers and practitioners who need to share information in a chosen domain. A by-product of ontology development is the realisation of a process architecture, of which there is also no evidence in the published literature. To develop the ontology and the subsequent asset management process architecture, a standard knowledge-engineering methodology is followed. This involves text analysis, the definition and classification of terms, and visualisation through an appropriate tool (in this case, the Protégé application). The result of this research is a first attempt at developing an asset management ontology and process architecture.
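The classification step of the methodology might be pictured as follows; the term hierarchy below is an invented miniature, not the published ontology, and a plain dict stands in for a Protégé model.

```python
# A minimal sketch of term classification in a knowledge-engineering
# methodology: terms extracted from text are arranged into a subclass
# hierarchy. The terms shown are illustrative, not the actual ontology.
TAXONOMY = {
    "AssetManagement": ["AssetLifecycle", "AssetKnowledge"],
    "AssetLifecycle": ["Acquisition", "Operation", "Maintenance", "Disposal"],
    "Maintenance": ["PreventiveMaintenance", "CorrectiveMaintenance"],
}

def subclasses(term, taxonomy):
    """Return all direct and indirect subclasses of a term (depth-first)."""
    result = []
    for child in taxonomy.get(term, []):
        result.append(child)
        result.extend(subclasses(child, taxonomy))
    return result

print(subclasses("AssetLifecycle", TAXONOMY))
```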

Relevance:

10.00%

Publisher:

Abstract:

Bridges are an important part of society’s infrastructure, and reliable methods are necessary to monitor them and ensure their safety and efficiency. Bridges deteriorate with age, and early detection of damage helps prolong their lives and prevent catastrophic failures. Most bridges still in use today were built decades ago and are now subjected to changes in load patterns, which can cause localized distress and, if not corrected, can result in bridge failure. In the past, monitoring of structures was usually done by means of visual inspection and tapping of the structure with a small hammer. Recent advances in sensor and information technologies have resulted in new ways of monitoring the performance of structures. This paper briefly describes the current technologies used in monitoring the condition of bridge structures, with its prime focus on the application of acoustic emission (AE) technology to the monitoring of bridge structures and its challenges.

Relevance:

10.00%

Publisher:

Abstract:

With the advances in computer hardware and software development techniques over the past 25 years, digital computer simulation of train movement and traction systems has been widely adopted as a standard computer-aided engineering tool [1] during the design and development stages of existing and new railway systems. Simulators of different approaches and scales are used extensively to investigate various kinds of system studies. Simulation is now proven to be the cheapest means of carrying out performance prediction and system behaviour characterisation. When computers were first used to study railway systems, they were mainly employed to perform repetitive but time-consuming computational tasks, such as matrix manipulations for power network solution and exhaustive searches for optimal braking trajectories. With only simple high-level programming languages available at the time, full advantage of the computing hardware could not be taken. Hence, structured simulations of the whole railway system were not very common, and most applications focused on isolated parts of the railway system. It is more appropriate to regard those applications as primarily mechanised calculations rather than simulations. However, a railway system consists of a number of subsystems, such as train movement, power supply and traction drives, which inevitably contain many complexities and diversities. These subsystems interact frequently with each other while trains are moving, and they have their own special features in different railway systems. To further complicate the simulation requirements, constraints such as track geometry, speed restrictions and friction have to be considered, not to mention possible non-linearities and uncertainties in the system.
In order to provide a comprehensive and accurate account of system behaviour through simulation, a large amount of data has to be organised systematically to ensure easy access and efficient representation, and the interactions and relationships among the subsystems should be defined explicitly. These requirements call for sophisticated and effective simulation models for each component of the system. The software development techniques available nowadays allow the evolution of such simulation models. Not only can the applicability of the simulators be greatly enhanced by advanced software design; maintainability and modularity for easy understanding and further development, and portability across hardware platforms, are also encouraged. The objective of this paper is to review the development of a number of approaches to simulation models. Attention is given, in particular, to models for train movement, power supply systems and traction drives. These models have been successfully used to resolve various ‘what-if’ issues effectively in a wide range of applications, such as speed profiles, energy consumption and run times.
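A toy single-train movement simulation gives a flavour of the ‘what-if’ studies mentioned; the point-mass model, Davis-type resistance coefficients and train parameters below are all assumed values, far simpler than the simulators the paper reviews.

```python
# A toy sketch (illustrative only) of train movement simulation: a
# point-mass train under constant tractive effort and Davis-type
# resistance, integrated with a fixed-step Euler scheme.
def simulate_run(distance_m, mass_kg=400e3, force_n=300e3, v_max=25.0, dt=0.5):
    """Accelerate to v_max, then cruise; returns (run_time_s, final_speed)."""
    # Davis resistance coefficients (assumed values): R = a + b*v + c*v^2 [N]
    a, b, c = 2000.0, 100.0, 6.0
    t, x, v = 0.0, 0.0, 0.0
    while x < distance_m:
        resistance = a + b * v + c * v * v
        traction = force_n if v < v_max else resistance  # hold speed at v_max
        accel = (traction - resistance) / mass_kg
        v = min(v + accel * dt, v_max)  # enforce the speed restriction
        x += v * dt
        t += dt
    return t, v

t, v = simulate_run(5000.0)
print(round(v, 1))  # 25.0 — cruising at the speed ceiling by the end of the run
```

Repeating such a run with different tractive effort, speed limits or gradients is the simplest form of the speed-profile and energy-consumption ‘what-if’ analysis described above.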

Relevance:

10.00%

Publisher:

Abstract:

Refactoring focuses on improving the reusability, maintainability and performance of programs. However, the impact of refactoring on the security of a given program has received little attention. In this work, we focus on the design of object-oriented applications and use metrics to assess the impact of a number of standard refactoring rules on their security by evaluating the metrics before and after refactoring. This assessment tells us which refactoring steps can increase the security level of a given program from the point of view of potential information flow, allowing application designers to improve their system’s security at an early stage.

Relevance:

10.00%

Publisher:

Abstract:

Business process model repositories capture precious knowledge about an organization or a business domain. In many cases, these repositories contain hundreds or even thousands of models and they represent several man-years of effort. Over time, process model repositories tend to accumulate duplicate fragments, as new process models are created by copying and merging fragments from other models. This calls for methods to detect duplicate fragments in process models that can be refactored as separate subprocesses in order to increase readability and maintainability. This paper presents an indexing structure to support the fast detection of clones in large process model repositories. Experiments show that the algorithm scales to repositories with hundreds of models. The experimental results also show that a significant number of non-trivial clones can be found in process model repositories taken from industrial practice.

Relevance:

10.00%

Publisher:

Abstract:

Delivering infrastructure projects involves many stakeholders. Their responsibilities and authorities vary over the course of the project lifecycle - from establishing the project parameters and performance requirements, to operating and maintaining the completed infrastructure. To ensure the successful delivery of infrastructure projects, it is important for the project management team to identify and manage the stakeholders and their requirements. This chapter discusses the management of stakeholders in delivering infrastructure projects, from their conception to completion. It includes managing the stakeholders for project selection and involving them to improve project constructability, operability and maintainability.

Relevance:

10.00%

Publisher:

Abstract:

The concept of constructability is to use construction knowledge and experience during all phases of a project, particularly in the earliest phases of planning and design. It facilitates the achievement of project objectives before the delivery stage and decreases unnecessary costs during the construction phase. Despite its extensive use, the constructability concept fails to address many issues related to the Operation and Maintenance (O&M) of construction projects. Extending the constructability concept to include O&M issues could lead to projects that are fit not only for construction but also for use. This study reviews the literature on constructability implementation, its benefits and shortcomings during the infrastructure life cycle, as well as the delivery success factors of infrastructure projects. This contributes to the proposal of a model to improve the effectiveness and efficiency of infrastructure projects by extending the concept of constructability to include O&M. Development of such a model can facilitate post-occupancy stakeholders’ participation in a constructability program. It will help infrastructure owners eliminate project rework and improve O&M effectiveness and efficiency.

Relevance:

10.00%

Publisher:

Abstract:

As organizations reach to higher levels of business process management maturity, they often find themselves maintaining repositories of hundreds or even thousands of process models, representing valuable knowledge about their operations. Over time, process model repositories tend to accumulate duplicate fragments (also called clones) as new process models are created or extended by copying and merging fragments from other models. This calls for methods to detect clones in process models, so that these clones can be refactored as separate subprocesses in order to improve maintainability. This paper presents an indexing structure to support the fast detection of clones in large process model repositories. The proposed index is based on a novel combination of a method for process model decomposition (specifically the Refined Process Structure Tree), with established graph canonization and string matching techniques. Experiments show that the algorithm scales to repositories with hundreds of models. The experimental results also show that a significant number of non-trivial clones can be found in process model repositories taken from industrial practice.

Relevance:

10.00%

Publisher:

Abstract:

As organizations reach higher levels of business process management maturity, they often find themselves maintaining very large process model repositories, representing valuable knowledge about their operations. A common practice within these repositories is to create new process models, or extend existing ones, by copying and merging fragments from other models. We contend that if these duplicate fragments, a.k.a. exact clones, can be identified and factored out as shared subprocesses, the repository’s maintainability can be greatly improved. With this purpose in mind, we propose an indexing structure to support fast detection of clones in process model repositories. Moreover, we show how this index can be used to efficiently query a process model repository for fragments. This index, called RPSDAG, is based on a novel combination of a method for process model decomposition (namely the Refined Process Structure Tree), with established graph canonization and string matching techniques. We evaluated the RPSDAG with large process model repositories from industrial practice. The experiments show that a significant number of non-trivial clones can be efficiently found in such repositories, and that fragment queries can be handled efficiently.
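The indexing idea can be caricatured in a few lines; the real RPSDAG combines Refined Process Structure Tree decomposition with graph canonization, whereas the sketch below canonises a fragment simply by sorting its edge labels, which suffices for exact clones in this toy setting.

```python
# A greatly simplified sketch of clone detection by canonical-code indexing.
# Each fragment is represented as a list of labelled edges; fragments with
# identical canonical codes are exact clones. Fragment ids are invented.
from collections import defaultdict

def canonical_code(fragment_edges):
    """Canonical string for a fragment: its sorted edge labels."""
    return "|".join(sorted(f"{src}->{dst}" for src, dst in fragment_edges))

def find_clones(fragments):
    """Group fragment ids whose canonical codes coincide (exact clones)."""
    index = defaultdict(list)
    for frag_id, edges in fragments.items():
        index[canonical_code(edges)].append(frag_id)
    return [sorted(ids) for ids in index.values() if len(ids) > 1]

repo = {
    "modelA.frag1": [("check order", "approve"), ("approve", "ship")],
    "modelB.frag7": [("approve", "ship"), ("check order", "approve")],  # a copy
    "modelC.frag2": [("check order", "reject")],
}
print(find_clones(repo))  # [['modelA.frag1', 'modelB.frag7']]
```

Each clone group found this way is a candidate for refactoring into a shared subprocess, which is the maintainability gain the abstract argues for.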

Relevance:

10.00%

Publisher:

Abstract:

Existing secure software development principles tend to focus on coding vulnerabilities, such as buffer or integer overflows, that apply to individual program statements, or issues associated with the run-time environment, such as component isolation. Here we instead consider software security from the perspective of potential information flow through a program’s object-oriented module structure. In particular, we define a set of quantifiable "security metrics" which allow programmers to quickly and easily assess the overall security of a given source code program or object-oriented design. Although measuring quality attributes of object-oriented programs for properties such as maintainability and performance has been well-covered in the literature, metrics which measure the quality of information security have received little attention. Moreover, existing security-relevant metrics assess a system either at a very high level, i.e., the whole system, or at a fine level of granularity, i.e., with respect to individual statements. These approaches make it hard and expensive to recognise a secure system from an early stage of development. Instead, our security metrics are based on well-established compositional properties of object-oriented programs (i.e., data encapsulation, cohesion, coupling, composition, extensibility, inheritance and design size), combined with data flow analysis principles that trace potential information flow between high- and low-security system variables. We first define a set of metrics to assess the security quality of a given object-oriented system based on its design artifacts, allowing defects to be detected at an early stage of development. We then extend these metrics to produce a second set applicable to object-oriented program source code.
The resulting metrics make it easy to compare the relative security of functionally equivalent system designs or source code programs so that, for instance, the security of two different revisions of the same system can be compared directly. This capability is further used to study the impact of specific refactoring rules on system security more generally, at both the design and code levels. By measuring the relative security of various programs refactored using different rules, we thus provide guidelines for the safe application of refactoring steps to security-critical programs. Finally, to make it easy and efficient to measure a system design or program’s security, we have also developed a stand-alone software tool which automatically analyses and measures the security of UML designs and Java program code. The tool’s capabilities are demonstrated by applying it to a number of security-critical system designs and Java programs. Notably, the validity of the metrics is demonstrated empirically through measurements that confirm our expectation that program security typically improves as bugs are fixed, but worsens as new functionality is added.