229 results for performance-based design
Abstract:
This paper presents a global-optimisation framework for the design of a manipulator for harvesting capsicum (peppers) in the field. The framework uses a simulated capsicum scenario with automatically generated robot models based on DH parameters. Each automatically generated robot model is placed in the simulated capsicum scenario, and its ability to reach several goals (capsicums with varying orientations and positions) is rated using two criteria: the length of a collision-free path and the dexterity of the end-effector. These criteria form the basis of the objective function used to perform a global optimisation. The paper shows a preliminary analysis and results that demonstrate the potential of this method to choose suitable robot models with varying degrees of freedom.
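To make the objective concrete, here is a minimal sketch (Python) of how path length and dexterity might be combined into a single score for the global optimiser. The weights, the unreachable-goal penalty and the Yoshikawa-style dexterity measure are illustrative assumptions, and the planner that supplies path lengths and Jacobians sits outside the sketch; none of these names come from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def manipulability(J):
    # Yoshikawa manipulability index: sqrt(det(J J^T)).
    return float(np.sqrt(max(np.linalg.det(J @ J.T), 0.0)))

def rate_robot(jacobians, path_lengths, w_path=0.5, w_dex=0.5):
    """Rate one candidate robot model over several capsicum goals:
    jacobians[i] is the end-effector Jacobian at goal i, path_lengths[i]
    the collision-free path length (np.inf if no path was found).
    Weights are illustrative assumptions, not the paper's values."""
    scores = []
    for J, L in zip(jacobians, path_lengths):
        if not np.isfinite(L):
            scores.append(1e6)   # heavy penalty for an unreachable goal
        else:
            scores.append(w_path * L - w_dex * manipulability(J))
    return float(np.mean(scores))   # lower is better; feed to optimiser

# Toy usage: two goals, random 6x7 Jacobians, one goal unreachable.
Js = [rng.standard_normal((6, 7)) for _ in range(2)]
print(rate_robot(Js, [1.2, np.inf]))
```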
Abstract:
Performance-based planning (PBP) is purported to be a viable alternative to traditional zoning. Implementations of PBP range from pure approaches, which rely on predetermined quantifiable performance standards to determine land use suitability, to hybrid approaches, which rely on a mix of activity-based zones in addition to prescriptive and subjective standards. Jurisdictions in the USA, Australia and New Zealand have attempted this type of land use regulation with varying degrees of success. Despite the adoption of PBP legislation in these jurisdictions, this paper argues that a lack of extensive evaluation means that PBP is not well understood and that the purported advantages of this type of planning are rarely achieved in practice. Few empirical studies have attempted to examine how PBP has been implemented in practice. In Queensland, Australia, the Integrated Planning Act 1997 (IPA) operated as Queensland's principal planning legislation between March 1998 and December 2009. While the IPA did not explicitly use the term performance-based planning, Queensland's planning system is widely considered to be performance based in practice. Significantly, the IPA prevented local government from prohibiting development or use, and the term zone was absent from the legislation. How plan-making would be advanced under the new planning regime was not clear, and as a consequence local governments produced a variety of different plan-making approaches to comply with the new legislative regime. In order to analyse this variation, the research developed a performance adoption spectrum to classify plans ranging between pure and hybrid perspectives of PBP. The spectrum compares how land use was regulated in seventeen IPA plans across Queensland. The research found that hybrid plans predominated, and that over time a greater reliance on risk-averse drafting approaches created quasi-prohibition plans, the exact opposite of what was intended by the IPA. This paper concludes that the drafting of the IPA and the absence of plan-making guidance contributed to a lack of shared understanding about the intended direction of the new planning system and resulted in many administrative interpretations of the legislation. It was a planning direction that tried too hard to be different, and as a result created a perception of land use risk and uncertainty that caused a return to more prescriptive and inflexible plan-making methods.
Abstract:
Post-earthquake fire (PEF) is considered one of the highest-risk and most complicated problems affecting buildings in urban areas and can cause even more damage than the earthquake itself. However, most standards and codes ignore the implications of PEF, and so buildings are not normally designed with PEF in mind. What is needed is for PEF factors to be routinely scrutinised and codified as part of the design process. A systematic approach is presented as a means of mitigating the risk of PEF in urban buildings. This covers both existing buildings, in terms of retrofit solutions, and those yet to be designed, for which a PEF factor is proposed. To ensure the mitigation strategy meets the defined criteria, a minimum time, the safety-guaranteed time target, is defined, within which the safety of the inhabitants of a building is guaranteed.
Abstract:
Efficient and accurate geometric and material nonlinear analysis of structures under ultimate loads is the backbone of successful integrated analysis and design, the performance-based design approach and progressive collapse analysis. This paper presents an advanced computational technique, a higher-order element formulation with a refined plastic hinge approach, which can evaluate concrete and steel-concrete structures prone to nonlinear material effects (i.e. gradual yielding, full plasticity, strain-hardening under the interaction between axial and bending actions, and load redistribution) as well as nonlinear geometric effects (i.e. second-order P-δ and P-Δ effects and the associated strength and stiffness degradation). Further, this paper presents the cross-section analysis used to formulate the refined plastic hinge approach.
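For orientation, the P-δ effect named above is often approximated in simplified design checks by the textbook moment amplification relation below (a standard first-order approximation, not the paper's higher-order element formulation), where M_0 is the first-order moment, P the axial compression, and P_cr the Euler buckling load:

\[
M_{\max} \approx \frac{M_0}{1 - P/P_{cr}}, \qquad P_{cr} = \frac{\pi^2 E I}{(k L)^2}
\]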
Abstract:
A computational technique that captures the full range of second-order inelastic behaviour of steel-concrete composite structures is not always available, and this hinders the development and application of the performance-based design approach for composite structures. To this end, this paper addresses an advanced computational technique, a higher-order element with refined plastic hinges, to capture the full-range behaviour of an entire steel-concrete composite structure. Moreover, this paper presents an efficient and economical cross-section analysis to evaluate the section capacity of non-uniform and arbitrary composite sections subjected to axial and bending interaction. Based on a single algorithm, it can accurately and efficiently evaluate a nearly continuous interaction capacity curve from decompression to pure bending, an important but highly nonlinear capacity range. Hence this cross-section analysis provides a simple but unified algorithm for the design approach. In summary, the present nonlinear computational technique can simulate both material and geometric nonlinearities of composite structures in an accurate, efficient and reliable fashion, including partial shear connection and gradual yielding at the pre-yield stage, plasticity and strain-hardening due to axial and bending interaction at the post-yield stage, load redistribution, second-order P-δ and P-Δ effects, and stiffness and strength deterioration. Because of its reliable and accurate behavioural evaluation, the present technique can be extended to the design of high-strength composite structures and potentially to fibre-reinforced concrete structures.
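To illustrate the general idea of tracing an axial-moment interaction curve from a discretised cross-section, here is a toy fibre-based sketch in Python for a homogeneous rectangular rigid-plastic section. The paper's method handles arbitrary non-uniform composite sections; the dimensions, yield stress and fibre count here are invented for the example.

```python
import numpy as np

def section_resultants(na_depth, b=0.1, d=0.2, fy=300e6, nf=400):
    """Rigid-plastic fibre analysis of a homogeneous rectangular section
    for a given plastic neutral-axis depth na_depth (from the bottom
    face). Fibres above the neutral axis yield in tension (+fy), fibres
    below in compression (-fy). A toy stand-in for the paper's analysis
    of arbitrary composite sections."""
    y = np.linspace(-d / 2, d / 2, nf)   # fibre centroid positions
    dA = b * d / nf                      # fibre area
    stress = np.where(y > (na_depth - d / 2), fy, -fy)
    N = stress.sum() * dA                # axial resultant (N)
    M = (stress * y).sum() * dA          # moment resultant (Nm)
    return N, M

# Sweep the neutral axis from bottom to top to trace the N-M
# interaction curve (decompression through pure bending).
for c in np.linspace(0.0, 0.2, 5):
    N, M = section_resultants(c)
    print(f"c = {c:.2f} m  N = {N/1e3:8.1f} kN  M = {M/1e3:6.1f} kNm")
```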
Abstract:
There are many applications in aeronautics where strong couplings exist between disciplines. One practical example is within the context of Unmanned Aerial Vehicle (UAV) automation, where strong coupling exists between operational constraints, aerodynamics, vehicle dynamics, mission and path planning. UAV path planning can be done either online or offline. Online path-planning optimisation with high-performance computation onboard UAVs is not at the same level as its ground-based offline counterpart, mainly due to the volume, power and weight limitations of the UAV; some small UAVs do not have the computational power needed for some optimisation and path planning tasks. In this paper, we describe an optimisation method which can be applied to Multi-disciplinary Design Optimisation problems and UAV path planning problems. Hardware-based design optimisation techniques are used. The power and physical limitations of UAVs, which may not be a problem in PC-based solutions, can be addressed by utilising a Field Programmable Gate Array (FPGA) as an algorithm accelerator. The inevitable latency produced by the iterative process of an Evolutionary Algorithm (EA) is concealed by exploiting the parallelism within the dataflow paradigm of the EA on an FPGA architecture. Results compare software PC-based solutions and hardware-based solutions for benchmark mathematical problems as well as a simple real-world engineering problem. Results also indicate the practicality of the method, which can be used for more complex single- and multi-objective coupled problems in aeronautical applications.
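For reference, the iterative structure that produces the latency mentioned above looks like the minimal generational EA below (Python, mutation-only, sphere benchmark; purely illustrative, not the paper's FPGA design). The per-individual fitness evaluations are independent of one another, which is exactly the stage a dataflow FPGA pipeline can run in parallel.

```python
import random

rng = random.Random(0)

def fitness(x):
    # Sphere benchmark; stands in for the coupled MDO / path-planning
    # objectives that the paper evaluates in hardware.
    return sum(v * v for v in x)

def evolve(dim=4, pop_size=20, gens=50, sigma=0.3):
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        # Fitness evaluation: independent per individual, hence the
        # stage an FPGA dataflow pipeline can parallelise.
        pop.sort(key=fitness)
        parents = pop[: pop_size // 2]
        # Mutation-only reproduction keeps the sketch short.
        pop = parents + [[v + rng.gauss(0, sigma) for v in rng.choice(parents)]
                         for _ in range(pop_size - len(parents))]
    return min(pop, key=fitness)

print(fitness(evolve()))  # approaches 0 as generations increase
```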
Abstract:
Uninhabited aerial vehicles (UAVs) are a cutting-edge technology at the forefront of aviation/aerospace research and development worldwide. Many consider their current military and defence applications to be just a token of their enormous potential. Unlocking and fully exploiting this potential will see UAVs in a multitude of civilian applications, routinely operating alongside piloted aircraft. The key to realising the full potential of UAVs lies in addressing a host of regulatory, public relations, and technological challenges never encountered before. Aircraft collision avoidance is considered one of the most important issues to be addressed, given its safety-critical nature. The collision avoidance problem can be roughly organised into three areas: 1) Sense; 2) Detect; and 3) Avoid. Sensing is concerned with obtaining accurate and reliable information about other aircraft in the air; detection involves identifying potential collision threats based on available information; avoidance deals with the formulation and execution of appropriate manoeuvres to maintain safe separation. This thesis tackles the detection aspect of collision avoidance, via the development of a target detection algorithm that is capable of real-time operation onboard a UAV platform. One of the key challenges of the detection problem is the need to provide early warning. This translates to detecting potential threats whilst they are still far away, when their presence is likely to be obscured and hidden by noise. Another important consideration is the choice of sensors to capture target information, which has implications for the design and practical implementation of the detection algorithm. The main contributions of the thesis are: 1) the proposal of a dim target detection algorithm combining image morphology and hidden Markov model (HMM) filtering approaches; 2) the novel use of relative entropy rate (RER) concepts for HMM filter design; 3) the characterisation of algorithm detection performance based on simulated data as well as real in-flight target image data; and 4) the demonstration of the proposed algorithm's capacity for real-time target detection. We also consider the extension of HMM filtering techniques and the application of RER concepts to target heading angle estimation. In this thesis we propose a computer-vision-based detection solution, due to the commercial-off-the-shelf (COTS) availability of camera hardware and its relatively low cost, power, and size requirements. The proposed target detection algorithm adopts a two-stage processing paradigm that begins with an image enhancement pre-processing stage, followed by a track-before-detect (TBD) temporal processing stage that has been shown to be effective in dim target detection. We compare the performance of two candidate morphological filters for the image pre-processing stage, and propose a multiple hidden Markov model (MHMM) filter for the TBD temporal processing stage. The role of the morphological pre-processing stage is to exploit the spatial features of potential collision threats, while the MHMM filter serves to exploit their temporal characteristics or dynamics. The problem of optimising our proposed MHMM filter has been examined in detail. Our investigation has produced a novel design process for the MHMM filter that exploits information theory and entropy-related concepts. The filter design process is posed as a minimax optimisation problem based on a joint RER cost criterion.
We prove that this joint RER cost criterion provides a bound on the conditional mean estimate (CME) performance of our MHMM filter, and this in turn establishes a strong theoretical basis connecting our filter design process to filter performance. Through this connection we can intelligently compare and optimise candidate filter models at the design stage, rather than having to resort to time-consuming Monte Carlo simulations to gauge the relative performance of candidate designs. Moreover, the underlying entropy concepts are not constrained to any particular model type. This suggests that the RER concepts established here may be generalised to provide a useful design criterion for multiple-model filtering approaches outside the class of HMM filters. In this thesis we also evaluate the performance of our proposed target detection algorithm under realistic operating conditions, and give consideration to the practical deployment of the detection algorithm onboard a UAV platform. Two fixed-wing UAVs were engaged to recreate various collision-course scenarios to capture highly realistic vision (from an onboard camera perspective) of the moments leading up to a collision. Based on this collected data, our proposed detection approach was able to detect targets out to distances ranging from about 400 m to 900 m. These distances (with some assumptions about closing speeds and aircraft trajectories) translate to an advanced warning ahead of impact that approaches the 12.5-second response time recommended for human pilots. Furthermore, readily available graphics processing unit (GPU) hardware is exploited for its parallel computing capabilities to demonstrate the practical feasibility of the proposed target detection algorithm. A prototype hardware-in-the-loop system has been found to be capable of achieving data processing rates sufficient for real-time operation. There is also scope for further improvement in performance through code optimisations. Overall, our proposed image-based target detection algorithm offers UAVs a cost-effective real-time target detection capability that is a step forward in addressing the collision avoidance issue, currently one of the most significant obstacles preventing widespread civilian applications of uninhabited aircraft. We also highlight that the algorithm development process has led to the discovery of a powerful multiple-HMM filtering approach and a novel RER-based multiple-filter design process. The utility of our multiple-HMM filtering approach and RER concepts, however, extends beyond the target detection problem. This is demonstrated by our application of HMM filters and RER concepts to a heading angle estimation problem.
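A heavily simplified sketch of the two-stage paradigm described above (Python with numpy/scipy; all parameters invented): a morphological top-hat filter enhances small bright targets, and a per-pixel exponential accumulator stands in for the temporal track-before-detect stage. The thesis's MHMM filter is far more sophisticated than this accumulator; the sketch only shows the spatial-then-temporal processing structure.

```python
import numpy as np
from scipy import ndimage

def tophat(frame, size=5):
    # Morphological top-hat: frame minus its grey opening, which
    # suppresses large-scale background and keeps small bright blobs.
    return frame - ndimage.grey_opening(frame, size=(size, size))

def track_before_detect(frames, decay=0.8, threshold=3.0):
    """Per-pixel temporal accumulator as a crude stand-in for the
    thesis's multiple-HMM track-before-detect stage."""
    acc = np.zeros_like(frames[0], dtype=float)
    for f in frames:
        acc = decay * acc + tophat(f.astype(float))
    return acc > threshold * acc.std()   # boolean detection map

# Toy usage: a dim point target drifting across noisy frames.
rng = np.random.default_rng(0)
frames = []
for t in range(20):
    f = rng.normal(0, 1, (64, 64))
    f[32, 10 + t] += 2.0                 # dim moving target
    frames.append(f)
print(track_before_detect(frames).sum(), "pixels flagged")
```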
Abstract:
Shrinking product lifecycles, tough international competition, swiftly changing technologies, ever-increasing customer quality expectations and demand for high-variety options are some of the forces that drive the next generation of development processes. To overcome these challenges, the design cost and development time of a product have to be reduced and its quality improved. Design reuse is considered one of the lean strategies for winning the race in this competitive environment. Design reuse can reduce product development time, product development cost and the number of defects, which ultimately influence product performance in cost, time and quality. However, little or no work has been carried out to quantify the effectiveness of design reuse in product development performance measures such as design cost, development time and quality. Therefore, in this study we propose a systematic design-reuse-based product design framework and develop a design leanness index (DLI) as a measure of the effectiveness of design reuse. The DLI is a representative measure of reuse effectiveness in cost, development time and quality. Through this index, a clear relationship between the reuse measure and product development performance metrics is established. Finally, a cost-based model is developed to maximise the design leanness index for a product within a given set of constraints, achieving leanness in the design process.
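The abstract does not give the DLI formula, so the sketch below (Python) is only a hypothetical composite index in its spirit: reuse effectiveness expressed as weighted relative improvements in cost, development time and defects against a no-reuse baseline. The structure and weights are illustrative assumptions, not the paper's model.

```python
def design_leanness_index(cost, time, defects,
                          cost_ref, time_ref, defects_ref,
                          weights=(1/3, 1/3, 1/3)):
    """Hypothetical composite leanness index: weighted relative gains
    in cost, development time and quality versus a no-reuse baseline
    (cost_ref, time_ref, defects_ref). Higher is leaner."""
    gains = (1 - cost / cost_ref,
             1 - time / time_ref,
             1 - defects / defects_ref)
    return sum(w * g for w, g in zip(weights, gains))

# A design reusing proven modules: 20% cheaper, 30% faster,
# 50% fewer defects than designing from scratch.
print(round(design_leanness_index(80, 70, 5, 100, 100, 10), 3))  # 0.333
```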
Abstract:
Effective strategies for the design of efficient and environmentally sensitive buildings require a close collaboration between architects and engineers in the design of the building shell and environmental control systems at the outset of projects. However, it is often not practical for engineers to be involved early in the design process. It is therefore essential that architects be able to perform preliminary energy analyses to evaluate their proposed designs before the major building characteristics become fixed. Consequently, a need exists for a simplified energy design tool for architects. This paper discusses the limitations of existing analysis software in supporting early design explorations and proposes a framework for the development of a tool that provides decision support by permitting architects to quickly assess the performance of design alternatives.
Abstract:
In 2008, a three-year pilot 'pay for performance' (P4P) program, known as the 'Clinical Practice Improvement Payment' (CPIP), was introduced into Queensland Health (QHealth). QHealth is a large public health sector provider of acute, community, and public health services in Queensland, Australia. The organisation has recently embarked on a significant reform agenda, including a review of existing funding arrangements (Duckett et al., 2008). Partly in response to this reform agenda, a casemix funding model has been implemented to reconnect health care funding with outcomes. CPIP was conceptualised as a performance-based scheme that rewarded quality with financial incentives. This is the first time such a scheme has been implemented in the public health sector in Australia with a focus on rewarding quality, and it is unique in its large state-wide focus, covering 15 Districts. CPIP initially targeted five acute and community clinical areas: Mental Health, Discharge Medication, Emergency Department, Chronic Obstructive Pulmonary Disease, and Stroke. The CPIP scheme was designed around key concepts, including the identification of clinical indicators that met the set criteria of high disease burden; a well-defined single diagnostic group or intervention; significant variation in clinical outcomes and/or practices; a good evidence base; and clinician control and support (Ward, Daniels, Walker & Duckett, 2007). This evaluative research targeted Phase One of the implementation of the CPIP scheme, from January 2008 to March 2009. A formative evaluation utilising a mixed methodology and complementarity analysis was undertaken. The research involved three research questions and aimed to determine the knowledge, understanding, and attitudes of clinicians; identify improvements to the design, administration, and monitoring of CPIP; and determine the financial and economic costs of the scheme. Three key studies were undertaken to address these research questions. Firstly, a survey of clinicians examined their levels of knowledge and understanding and their attitudes to the scheme. Secondly, the study sought to apply Statistical Process Control (SPC) to the process indicators to assess whether this enhanced the scheme, and thirdly, a simple economic cost analysis was undertaken. The CPIP survey of clinicians elicited 192 respondents. Over 70% of these respondents were supportive of the continuation of the CPIP scheme. This finding was also supported by the results of a quantitative attitude survey that identified positive attitudes in six of the seven domains, including impact, awareness and understanding, and clinical relevance, all scored positively across the combined respondent group. SPC as a trending tool may play an important role in the early identification of indicator weakness for the CPIP scheme. This evaluative research study supports a previously identified need in the literature for a phased introduction of pay for performance (P4P) type programs. It further highlights the value of undertaking a formal risk assessment of clinician, management, and systemic levels of literacy and competency with the measurement and monitoring of quality prior to a phased implementation. This phasing can then be guided by a P4P Design Variable Matrix, which provides a selection of program design options such as indicator targets and payment mechanisms.
It became evident that a clear process is required to standardise how clinical indicators evolve over time and to direct movement towards more rigorous 'pay for performance' targets and the development of an optimal funding model. Use of this matrix will enable the scheme to mature and build the literacy and competency of clinicians and the organisation as implementation progresses. Furthermore, the research identified that CPIP created a spotlight on clinical indicators, and incentive payments of over five million dollars, from a potential ten million, were secured across the five clinical areas in the first 15 months of the scheme. This indicates that quality was rewarded in the new QHealth funding model and, despite issues being identified with the payment mechanism, funding was distributed. The economic model used identified a relatively low cost of reporting (under $8,000) as opposed to funds secured of over $300,000 for mental health, as an example. Movement to a full cost-effectiveness study of CPIP is supported. Overall, the introduction of the CPIP scheme into QHealth has been a positive and effective strategy for engaging clinicians in quality and has been the catalyst for the identification and monitoring of valuable clinical process indicators. This research has highlighted that clinicians are supportive of the scheme in general; however, there are some significant risks, including the functioning of the CPIP payment mechanism. Given clinician support for the use of a pay-for-performance methodology in QHealth, the CPIP scheme has the potential to be a powerful addition to a multi-faceted suite of quality improvement initiatives within QHealth.
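As an illustration of the SPC trending mentioned above, a standard 3-sigma p-chart for a monthly clinical process indicator might look like the sketch below (Python). The indicator, the counts and the limit formula are generic textbook choices, not QHealth's method or data.

```python
import math

def p_chart_limits(counts, totals):
    """Standard 3-sigma p-chart for a clinical process indicator.
    counts[i] = cases meeting the indicator in month i,
    totals[i] = eligible cases in month i."""
    pbar = sum(counts) / sum(totals)          # centre line
    limits = []
    for n in totals:
        s = math.sqrt(pbar * (1 - pbar) / n)  # binomial std error
        limits.append((max(0.0, pbar - 3 * s), min(1.0, pbar + 3 * s)))
    return pbar, limits

# Toy usage: monthly discharge-medication compliance; the last
# month dips below the lower control limit and is flagged.
counts = [42, 45, 40, 44, 28]
totals = [50, 52, 48, 51, 50]
pbar, limits = p_chart_limits(counts, totals)
for c, n, (lo, hi) in zip(counts, totals, limits):
    p = c / n
    flag = "ok" if lo <= p <= hi else "out of control"
    print(f"p={p:.2f} limits=({lo:.2f},{hi:.2f}) {flag}")
```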
Abstract:
Global warming entails new climatic conditions for the built environment. A warming climate will affect both the performance of the existing building stock and the design of new buildings. In this article, current knowledge of global warming and climate change is first introduced. The cyclic interaction between global warming and buildings is then presented. The impact of global warming on building energy use and thermal performance is also assessed. Finally, potential mitigation and adaptation strategies for global warming are discussed.
Abstract:
In recent times, light gauge steel framed (LSF) structures, such as cold-formed steel wall systems, are increasingly used, but without a full understanding of their fire performance. Traditionally, the fire resistance rating of these load-bearing LSF wall systems is based on approximate prescriptive methods developed from limited fire tests, and very often these are limited to the standard wall configurations used by industry. Increased fire rating is provided simply by adding more plasterboard to these walls. This is not an acceptable situation, as it not only inhibits innovation and structural and cost efficiencies but also casts doubt over the fire safety of these wall systems. Hence a detailed fire research study into the performance of LSF wall systems was undertaken, using full-scale fire tests and extensive numerical studies. A new composite wall panel developed at QUT was also considered in this study, in which the insulation was used externally between the plasterboards on both sides of the steel wall frame instead of being located in the cavity. Three full-scale fire tests of LSF wall systems built using the new composite panel system were undertaken at a higher load ratio using a gas furnace designed to deliver heat in accordance with the standard time-temperature curve in AS 1530.4 (SA, 2005). The fire tests included measurements of the load-deformation characteristics of the LSF walls until failure, as well as the associated time-temperature measurements across the thickness and along the length of all specimens. Tests of LSF walls under axial compression load showed the improvement in their fire performance and fire resistance rating when the new composite panel was used. Hence this research recommends the use of the new composite panel system for cold-formed LSF walls. The numerical study was undertaken using the finite element program ABAQUS. The finite element analyses were conducted under both steady-state and transient-state conditions using the measured hot and cold flange temperature distributions from the fire tests. The elevated-temperature reduction factors for mechanical properties were based on the equations proposed by Dolamune Kankanamge and Mahendran (2011). These finite element models were first validated by comparing their results with experimental results from this study and from Kolarkar (2010). The developed finite element models were able to predict the failure times within 5 minutes. The validated model was then used in a detailed numerical study into the strength of cold-formed thin-walled steel channels used in both the conventional and the new composite panel systems, to increase the understanding of their behaviour under non-uniform elevated temperature conditions and to develop fire design rules. The measured time-temperature distributions obtained from the fire tests were used. Since the fire tests showed that the plasterboards provided sufficient lateral restraint until the failure of the LSF wall panels, this assumption was also used in the analyses and was further validated by comparison with experimental results. Hence in this study of LSF wall studs, only flexural buckling about the major axis and local buckling were considered. A new fire design method was proposed using AS/NZS 4600 (SA, 2005), NAS (AISI, 2007) and Eurocode 3 Part 1.3 (ECS, 2006). The importance of considering thermal bowing, magnified thermal bowing and neutral axis shift in fire design was also investigated.
A spreadsheet-based design tool was developed based on the above design codes to predict the failure load ratio versus time and temperature for varying LSF wall configurations, including insulation. Idealised time-temperature profiles were developed based on the measured temperature values of the studs. These were used in a detailed numerical study to fully understand the structural behaviour of LSF wall panels. Appropriate equations were proposed to find the critical temperatures of different composite panels, varying in steel thickness, steel grade and screw spacing, for any load ratio. Hence useful and simple design rules were proposed based on the current cold-formed steel structures and fire design standards, and their accuracy and advantages were discussed. The results were also used to validate the fire design rules developed based on AS/NZS 4600 (SA, 2005) and Eurocode 3 Part 1.3 (ECS, 2006). This demonstrated significant improvements in the design method when compared with the currently used prescriptive design methods for LSF wall systems under fire conditions. In summary, this research has developed comprehensive experimental and numerical thermal and structural performance data for both the conventional and the proposed new load-bearing LSF wall systems under standard fire conditions. Finite element models were developed to predict the failure times of LSF walls accurately. Idealised hot flange temperature profiles were developed for non-insulated, cavity-insulated and externally insulated load-bearing wall systems. Suitable fire design rules and spreadsheet-based design tools were developed based on the existing standards to predict the ultimate failure load, failure times and failure temperatures of LSF wall studs. Simplified equations were proposed to find the critical temperatures for varying wall panel configurations and load ratios. The results from this research are useful to both structural and fire engineers and researchers. Most importantly, this research has significantly improved the knowledge and understanding of cold-formed LSF load-bearing walls under standard fire conditions.
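A minimal sketch of the kind of calculation such a design tool performs (Python): interpolate a yield-strength reduction factor k_y(T) and find the temperature at which the simplified capacity ratio falls to the applied load ratio. The reduction-factor table below is invented placeholder data for the sketch; the thesis itself uses the equations of Dolamune Kankanamge and Mahendran (2011) and far more detailed member checks.

```python
import bisect

# Illustrative yield-strength reduction factors k_y(T) for cold-formed
# steel (placeholder values, not the thesis's equations).
TEMPS = [20, 100, 200, 300, 400, 500, 600, 700]
K_Y   = [1.00, 1.00, 0.90, 0.75, 0.55, 0.35, 0.18, 0.07]

def k_y(T):
    # Linear interpolation of the reduction-factor table.
    i = min(max(bisect.bisect_left(TEMPS, T), 1), len(TEMPS) - 1)
    t0, t1, k0, k1 = TEMPS[i-1], TEMPS[i], K_Y[i-1], K_Y[i]
    return k0 + (k1 - k0) * (T - t0) / (t1 - t0)

def critical_temperature(load_ratio, step=1.0):
    """Temperature at which the (simplified) capacity ratio k_y(T)
    drops to the applied load ratio, i.e. nominal failure."""
    T = 20.0
    while k_y(T) > load_ratio and T < 700:
        T += step
    return T

print(critical_temperature(0.4))  # temperature where k_y falls to 0.4
```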
Abstract:
This thesis develops a detailed conceptual design method and a system software architecture, defined with a parametric and generative evolutionary design system, to support an integrated interdisciplinary building design approach. The research recognises the need to shift design effort toward the earliest phases of the design process to support crucial design decisions that have a substantial cost implication for the overall project budget. The overall motivation of the research is to improve the quality of designs produced at the author's employer, the General Directorate of Major Works (GDMW) of the Saudi Arabian Armed Forces. GDMW produces many buildings that have standard requirements across a wide range of environmental and social circumstances. A rapid means of customising designs for local circumstances would have significant benefits. The research considers the use of evolutionary genetic algorithms in the design process and their ability to generate and assess a wider range of potential design solutions than a human could manage. This wider-ranging assessment, during the early stages of the design process, means that the generated solutions will be more appropriate for the defined design problem. The research proposes a design method and system that promote a collaborative relationship between human creativity and computer capability. The tectonic design approach is adopted as a process-oriented design that values the process of design as much as the product. The aim is to connect the evolutionary systems to performance assessment applications, which are used as prioritised fitness functions. This will produce design solutions that respond to their environmental and functional requirements. This integrated, interdisciplinary approach to design will produce solutions through a design process that considers and balances the requirements of all aspects of the design. Since this thesis covers a wide area of research material, a 'methodological pluralism' approach was used, incorporating both prescriptive and descriptive research methods. Multiple models of research were combined, and the overall research was undertaken in three main stages: conceptualisation, development and evaluation. The first two stages lay the foundations for the specification of the proposed system, where key aspects of the system that had not previously been proven in the literature were implemented to test the feasibility of the system. As a result of combining the existing knowledge in the area with the newly verified key aspects of the proposed system, this research can form the base for a future software development project. The evaluation stage, which includes building the prototype system to test and evaluate the system performance based on the criteria defined in the earlier stage, is not within the scope of this thesis. The research results in a conceptual design method and a proposed system software architecture. The proposed system is called the 'Hierarchical Evolutionary Algorithmic Design (HEAD) System'. The HEAD system has been shown to be feasible through an initial illustrative paper-based simulation. The HEAD system consists of two main components: the 'Design Schema' and the 'Synthesis Algorithms'. The HEAD system reflects the major research contribution in the way it is conceptualised, while secondary contributions are achieved within the system components.
The design schema provides constraints on the generation of designs, enabling the designer to create a wide range of potential designs that can then be analysed for desirable characteristics. The design schema supports the digital representation of designers' human creativity in a dynamic design framework that can be encoded and then executed through the use of evolutionary genetic algorithms. The design schema incorporates 2D and 3D geometry and graph theory for space layout planning and building formation, using the Lowest Common Design Denominator (LCDD) of a parameterised 2D module and a 3D structural module. This provides a bridge between the standard adjacency requirements and the evolutionary system. The use of graphs as an input to the evolutionary algorithm supports the introduction of constraints in a way that is not supported by standard evolutionary techniques. The process of design synthesis is guided by a higher-level description of the building that supports geometrical constraints. The Synthesis Algorithms component analyses designs at four levels: 'Room', 'Layout', 'Building' and 'Optimisation'. At each level, multiple fitness functions are embedded into the genetic algorithm to target the specific requirements of the relevant decomposed part of the design problem. Decomposing the design problem so that the design requirements of each level are dealt with separately, and then reassembling them in a bottom-up approach, reduces the generation of non-viable solutions by constraining the options available at the next higher level. The iterative approach, in which the range of design solutions is explored through modification of the design schema as the understanding of the design problem improves, assists in identifying conflicts in the design requirements. Additionally, the hierarchical set-up allows the embedding of multiple fitness functions into the genetic algorithm, each relevant to a specific level. This supports an integrated multi-level, multi-disciplinary approach. The HEAD system promotes a collaborative relationship between human creativity and computer capability. The design schema component, as the input to the procedural algorithms, enables the encoding of certain aspects of the designer's subjective creativity. By focusing on finding solutions to the relevant sub-problems at the appropriate levels of detail, the hierarchical nature of the system assists in the design decision-making process.
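The bottom-up decomposition can be illustrated with a toy two-level run (Python). The encoding, objectives and GA operators below are invented for the example, not the HEAD system's: a 'Room'-level GA fixes room geometry first, and a 'Layout'-level GA then optimises placement against its own fitness function with the room result held constant.

```python
import random

rng = random.Random(0)

def ga(init, fit, gens=40, pop=30, sigma=0.5):
    # Generic mutation-only GA minimising fit over real-valued genomes.
    P = [[g + rng.gauss(0, 1) for g in init] for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=fit)
        elite = P[: pop // 2]
        P = elite + [[g + rng.gauss(0, sigma) for g in rng.choice(elite)]
                     for _ in range(pop - len(elite))]
    return min(P, key=fit)

# 'Room' level: find a room aspect ratio near a target value.
room = ga([1.0], lambda x: (x[0] - 1.5) ** 2)

# 'Layout' level: place two rooms with the room geometry held fixed;
# this level-specific fitness wants them one room-width apart.
def layout_fit(pos):
    return (abs(pos[0] - pos[1]) - room[0]) ** 2

layout = ga([0.0, 1.0], layout_fit)
print(round(room[0], 2), [round(p, 2) for p in layout])
```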
Abstract:
The construction phase of building projects is often a crucial factor in the success or failure of projects. Project managers are believed to play a significant role in firms' success and competitiveness. Therefore, it is important for firms to better understand the demands of managing projects and the competencies that project managers require for more effective project delivery. In a survey of building project managers in the state of Queensland, Australia, it was found that management and information management systems are the top-ranking competencies required by effective project managers. Furthermore, a significant number of respondents identified the site manager, construction manager and client's representative as the three individuals whose close and regular contact with project managers has the greatest influence on the project managers' performance. Based on these findings, an intra-project workgroups model is proposed to help project managers facilitate more effective management of people and information on building projects.
Abstract:
Background
The onsite treatment of sewage and effluent disposal within the premises is widely prevalent in rural and urban fringe areas due to the general unavailability of reticulated wastewater collection systems. Despite the seemingly low technology of the systems, failure is common and in many cases leads to adverse public health and environmental consequences. Therefore it is important that careful consideration is given to the design and location of onsite sewage treatment systems. This requires an understanding of the factors that influence treatment performance. The use of subsurface effluent absorption systems is the most common form of effluent disposal for onsite sewage treatment, particularly for septic tanks. Additionally, in the case of septic tanks, a subsurface disposal system is generally an integral component of the sewage treatment process. Therefore location-specific factors will play a key role in this context.
The project
The primary aims of the research project are: • to relate the treatment performance of onsite sewage treatment systems to soil conditions at the site; • to identify important areas where there is currently a lack of relevant research knowledge and which are in need of further investigation. These tasks were undertaken with the objective of facilitating the development of performance-based planning and management strategies for onsite sewage treatment. The primary focus of the research project has been on septic tanks, and by implication the investigation has been confined to subsurface soil absorption systems. The design and treatment processes taking place within the septic tank chamber itself did not form a part of the investigation. In the evaluation undertaken, the treatment performance of soil absorption systems is related to the physico-chemical characteristics of the soil. Five broad categories of soil types have been considered for this purpose. The number of systems investigated was based on the proportionate area of urban development within the Brisbane region located on each soil type. In the initial phase of the investigation, though the majority of the systems evaluated were septic tanks, a small number of aerobic wastewater treatment systems (AWTS) were also included. This was primarily to compare the effluent quality of systems employing different generic treatment processes. It is important to note that the number of each type of system investigated was relatively small. As such, this does not permit a statistical analysis of the results obtained. This is an important issue considering the large number of parameters that can influence treatment performance and their wide variability.
The report
This report is the second in a series of three reports focussing on the performance evaluation of onsite treatment of sewage. The research project was initiated at the request of the Brisbane City Council. The work undertaken included site investigation and testing of sewage effluent and soil samples taken at distances of 1 and 3 m from the effluent disposal area. The project component discussed in the current report formed the basis for the more detailed investigation undertaken subsequently. The outcomes from the initial studies are discussed, which enabled the identification of factors to be investigated further. Primarily, this report contains the results of the field monitoring program, the initial analysis undertaken and preliminary conclusions.
Field study and outcomes
Initially commencing with a list of 252 locations in 17 different suburbs, a total of 22 sites in 21 different locations were monitored. These sites were selected based on predetermined criteria. Obtaining house owners' agreement to participate in the monitoring study was not an easy task, and six of the sites had to be abandoned subsequently for various reasons. The remaining sites included eight septic systems with subsurface effluent disposal treating blackwater or combined black and greywater, two sites treating greywater only, and six sites with AWTS. In addition to collecting effluent and soil samples from each site, a detailed field investigation, including a series of house owner interviews, was also undertaken. Significant observations were made during the field investigations. In addition to site-specific observations, the general observations include the following: • Most house owners are unaware of the need for regular maintenance. Sludge removal had not been undertaken in any of the septic tanks monitored. Even in the case of aerated wastewater treatment systems, the regular inspections by the supplier are confined to the treatment system and do not include the effluent disposal system. This is not a satisfactory situation, as the investigations revealed. • In the case of separate greywater systems, only one site had a suitably functioning disposal arrangement. The general practice is to employ a garden hose to siphon the greywater for use in surface irrigation of the garden. • At most sites, the soil profile showed significant lateral percolation of effluent. As such, the flow of effluent to surface water bodies is a distinct possibility. • The need to investigate subsurface conditions to a depth greater than that required for the standard percolation test was clearly evident. On occasion, seemingly permeable soil was found to have an underlying impermeable soil layer, or vice versa. The important outcomes from the testing program include the following: • Though effluent treatment is influenced by the physico-chemical characteristics of the soil, it was not possible to distinguish between the treatment performance of different soil types. This leads to the hypothesis that effluent renovation is significantly influenced by the combination of various physico-chemical parameters rather than by single parameters. This would make the processes involved strongly site-specific. • Generally, the improvement in effluent quality appears to take place only within the initial 1 m of travel, without any appreciable improvement thereafter. This relates only to the degree of improvement obtained and does not imply that this quality is satisfactory. It calls into question the value of adopting setback distances from sensitive water bodies. • Use of AWTS for sewage treatment may provide effluent of higher quality suitable for surface disposal. However, on the whole, after 1-3 m of travel through the subsurface, it was not possible to distinguish any significant differences in quality between effluent originating from septic tanks and from AWTS. • In comparison with effluent quality from a conventional wastewater treatment plant, most systems were found to perform satisfactorily with regard to Total Nitrogen. The success rate was much lower in the case of faecal coliforms. However, it is important to note that five of the systems exhibited problems with effluent disposal, resulting in surface flow.
This could lead to possible contamination of surface water courses. • The ratio of TDS to EC is about 0.42, whilst the optimum recommended value for the use of treated effluent for irrigation is about 0.64. This would mean a higher salt content in the effluent than is advisable for use in irrigation. A consequence of this would be the accumulation of salts to a concentration harmful to crops or the landscape unless adequate leaching is present. These relatively high EC values are present even in the case of AWTS, where surface irrigation of effluent is being undertaken. However, it is important to note that this is not an artefact of the treatment process but rather an indication of the quality of the wastewater generated in the household. This clearly indicates the need for further research to evaluate the suitability of various soil types for the surface irrigation of effluent where the TDS/EC ratio is less than 0.64. • Effluent percolating through the subsurface absorption field may travel in the form of dilute pulses. As such, the effluent will move through the soil profile forming fronts of elevated parameter levels. • The downward flow of effluent and leaching of the soil profile is evident in the case of podsolic, lithosol and krasnozem soils. Lateral flow of effluent is evident in the case of prairie soils. Gleyed podsolic soils indicate poor drainage and ponding of effluent. In the current phase of the research project, a number of chemical indicators such as EC, pH and chloride concentration were employed to investigate the extent of effluent flow and to understand how soil renovates effluent. The soil profile, especially its texture, structure and moisture regime, was examined in an engineering sense to determine the movement of water into and through the soil. However, it is not only the physical characteristics but also the chemical characteristics of the soil that play a key role in the effluent renovation process. Therefore, in order to understand the complex processes taking place in a subsurface effluent disposal area, it is important that the identified influential parameters are evaluated using soil chemistry concepts. Consequently, the primary focus of the next phase of the research project will be to identify linkages between the various important parameters. The research thus envisaged will help to develop robust criteria for evaluating the performance of subsurface disposal systems.
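To make the TDS/EC comparison concrete, the snippet below (Python) computes the ratio for a hypothetical sample, assuming the conventional units of mg/L for TDS and µS/cm for EC; the sample values are invented for illustration.

```python
def tds_ec_ratio(tds_mg_per_l, ec_us_per_cm):
    # TDS in mg/L divided by EC in uS/cm: the ratio quoted above.
    return tds_mg_per_l / ec_us_per_cm

RECOMMENDED = 0.64   # optimum ratio for irrigation reuse (per the report)

ratio = tds_ec_ratio(420.0, 1000.0)          # hypothetical effluent sample
print(f"TDS/EC = {ratio:.2f}:",
      "suitable for irrigation" if ratio >= RECOMMENDED
      else "salt-dominated effluent; adequate leaching needed")
```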