785 results for consumer decision processes


Relevance:

20.00%

Publisher:

Abstract:

Broad, early definitions of sustainable development have caused confusion and hesitation among local authorities and planning professionals. This confusion has arisen because loosely defined principles of sustainable development have been employed when setting policies and planning projects, and when gauging the efficiency of these policies against designated sustainability goals. How this theory-rhetoric-practice gap can be filled is the main focus of this chapter. It examines the triple bottom line approach, one of the sustainability accounting approaches widely employed by governmental organisations, and its applicability to sustainable urban development. The chapter introduces the ‘Integrated Land Use and Transportation Indexing Model’, which incorporates triple bottom line considerations with environmental impact assessment techniques via a geographic information systems-based decision support system. The model helps decision-makers select policy options according to their economic, environmental and social impacts. Its main purpose is to provide valuable knowledge about the spatial dimensions of sustainable development, and to provide fine-detail outputs on the possible impacts of urban development proposals on sustainability levels. In order to embrace sustainable urban development policy considerations, the model is sensitive to the relationship between urban form, travel patterns and socio-economic attributes. Finally, the model is useful in picturing the holistic state of urban settings in terms of their sustainability levels, and in assessing the degree of compatibility of selected scenarios with the desired sustainable urban future.
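The composite scoring at the heart of such an indexing model can be sketched as a weighted triple-bottom-line index. Everything below (option names, impact scores, weights) is an illustrative assumption, not data or logic taken from the model described above.

```python
# A minimal sketch of triple-bottom-line scoring for policy options.
# Option names, impact scores and weights are illustrative assumptions.

def tbl_score(impacts, weights):
    """Weighted sum of economic, environmental and social impact scores."""
    return sum(impacts[k] * weights[k] for k in weights)

def rank_options(options, weights):
    """Order option names from highest to lowest composite score."""
    return sorted(options, key=lambda name: tbl_score(options[name], weights),
                  reverse=True)

weights = {"economic": 0.3, "environmental": 0.4, "social": 0.3}
options = {
    "infill_development": {"economic": 0.7, "environmental": 0.8, "social": 0.6},
    "greenfield_sprawl":  {"economic": 0.8, "environmental": 0.2, "social": 0.4},
}
print(rank_options(options, weights))
```

A real model of this kind would attach such scores to spatial units in a GIS layer rather than to whole scenarios, so that sustainability levels can be mapped at fine detail.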


Background: The quality of stormwater runoff from ports is significant as it can be an important source of pollution to the marine environment. This is also a significant issue for the Port of Brisbane, as it is located in an area of high environmental values. Therefore, it is imperative to develop an in-depth understanding of stormwater runoff quality to ensure that appropriate strategies are in place for quality improvement.

The Port currently has a network of stormwater sample collection points where event-based samples, together with grab samples, are tested for a range of water quality parameters. Whilst this information provides a ‘snapshot’ of the pollutants being washed from the catchments, it does not allow for a quantifiable assessment of the total contaminant loads being discharged to the waters of Moreton Bay. Nor does it represent pollutant build-up and wash-off from the different land uses across the broader range of rainfall events which might be expected. As such, it is difficult to relate stormwater quality to different pollutant sources within the Port environment. Consequently, tracking pollutants from their sources to receiving waters is extremely difficult, as in turn is implementing appropriate mitigation measures. Also, without this detailed understanding, the efficacy of the various stormwater quality mitigation measures implemented cannot be determined with certainty.

Current knowledge on port stormwater runoff quality: Currently, little knowledge exists with regard to the pollutant generation capacity specific to port land uses, as these do not necessarily compare well with conventional urban industrial or commercial land uses, due to the specific nature of port activities such as inter-modal operations and cargo management. Furthermore, traffic characteristics in a port area are different to those of a conventional urban area. Consequently, the use of data inputs based on industrial and commercial land uses for modelling purposes is questionable. A comprehensive review of published research failed to locate any investigations undertaken with regard to pollutant build-up and wash-off for port-specific land uses. Furthermore, there is very limited information made available by ports worldwide about the pollution generation potential of their facilities. Published work in this area has essentially focussed on the water quality or environmental values in the receiving waters, such as the downstream bay or estuary.

The Project: The research project is an outcome of the collaborative Partnership between the Port of Brisbane Corporation (POBC) and Queensland University of Technology (QUT). A key feature of this Partnership is the undertaking of ‘cutting edge’ research to strengthen the environmental custodianship of the Port area. This project aims to develop a port-specific stormwater quality model to allow informed decision making in relation to stormwater quality improvement in the context of the Port's continued growth. Stage 1 of the research project focussed on the assessment of pollutant build-up and wash-off using rainfall simulation at the current Port of Brisbane facilities, with the longer-term objective of contributing to the development of ecological risk mitigation strategies for future expansion scenarios. Investigating complex processes such as pollutant wash-off using naturally occurring rainfall events has inherent difficulties; these can be overcome by using simulated rainfall.

The deliverables for Stage 1 included the following:
* Pollutant build-up and wash-off profiles for six primary land uses within the Port of Brisbane, to be used for water quality model development.
* Recommendations with regard to future stormwater quality monitoring and pollution mitigation measures.

The outcomes are expected to deliver the following benefits to the Port of Brisbane:
* The availability of Port-specific pollutant build-up and wash-off data will enable the implementation of customised stormwater pollution mitigation strategies.
* The water quality data collected will form the baseline data for a Port-specific water quality model for mitigation and predictive purposes.
* The Port will be at the cutting edge of water quality management and environmental best practice in the context of port infrastructure.

Conclusions: The important conclusions from the study are:
* It confirmed that the Port environment is unique in terms of pollutant characteristics and is not comparable to typical urban land uses.
* For most pollutant types, the Port land uses exhibited lower pollutant concentrations than typical urban land uses.
* The pollutant characteristics varied across the different land uses and were not consistent in terms of land use. Hence, the implementation of stereotypical structural water quality improvement devices could be of limited value.
* The <150 µm particle size range was predominant in suspended solids for pollutant build-up as well as wash-off. Therefore, if suspended solids are targeted as the surrogate parameter for water quality improvement, this specific particle size range needs to be removed.

Recommendations: Based on the study results, the following preliminary recommendations are made:
* Due to the appreciable variation in pollutant characteristics across the port land uses, water quality monitoring stations should preferably be located such that source areas can be easily identified.
* The study results, having identified the significant pollutants for the different land uses, should enable the development of a more customised water quality monitoring and testing regime targeting the critical pollutants.
* A ‘one size fits all’ approach may not be appropriate for the different port land uses due to their varying pollutant characteristics. As such, pollution mitigation will need to be tailored to suit the specific land use.
* To be effective, any structural measures implemented for pollution mitigation should have the capability to remove suspended solids smaller than 150 µm.
* Given the results presented, and particularly the fact that the Port land uses cannot be compared to conventional urban land uses in relation to pollutant generation, consideration should be given to the development of a port-specific water quality model.
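The build-up and wash-off processes investigated above are commonly represented with exponential formulations in stormwater quality modelling (the same general form used by tools such as SWMM). The sketch below uses that generic form with illustrative coefficients; it contains no Port of Brisbane data.

```python
import math

# A hedged sketch of exponential pollutant build-up and wash-off.
# All coefficient values are illustrative assumptions, not study results.

def buildup(days, b_max, k_b):
    """Pollutant load (kg/ha) accumulated on a surface after `days` dry days,
    approaching the asymptotic maximum b_max."""
    return b_max * (1.0 - math.exp(-k_b * days))

def washoff_fraction(intensity_mm_h, duration_h, k_w):
    """Fraction of the built-up load removed by a storm of the given
    intensity (mm/h) and duration (h)."""
    return 1.0 - math.exp(-k_w * intensity_mm_h * duration_h)

initial = buildup(days=7, b_max=12.0, k_b=0.4)          # load after a dry week
removed = initial * washoff_fraction(20.0, 0.5, 0.18)   # load in this event
print(round(initial, 2), round(removed, 2))
```

Fitting land-use-specific values of these coefficients is exactly what build-up and wash-off profiles from rainfall simulation make possible.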


Introduction – The planning of healthy cities faces significant challenges due to a lack of effective information and systems, and of a framework to organise that information. Such a framework is critical in order to make accessible and informed decisions when planning healthy cities. These challenges have been magnified by the rise of the healthy cities movement, as a result of which there have been more frequent calls for localised, collaborative and knowledge-based decisions. Some studies have suggested that the use of a ‘knowledge-based’ approach to planning will enhance the accuracy and quality of decision-making by improving the availability of data and information for health service planners, and may also lead to increased collaboration between stakeholders and the community. A knowledge-based or evidence-based approach to decision-making can provide ‘out-of-the-box’ thinking through the use of technology during decision-making processes. Minimal research has been conducted in this area to date, especially in terms of evaluating the impact of adopting a knowledge-based approach on stakeholders, policy-makers and decision-makers within health planning initiatives.

Purpose – The purpose of the paper is to present an integrated method that has been developed to facilitate a knowledge-based decision-making process to assist health planning.

Methodology – Specifically, the paper describes the participatory process that has been adopted to develop an online Geographic Information System (GIS)-based Decision Support System (DSS) for health planners.

Value – Conceptually, it is an application of the Healthy Cities and Knowledge Cities approaches, linked together. Specifically, it is a unique settings-based initiative designed to plan for and improve the health capacity of the Logan-Beaudesert area, Australia. This settings-based initiative is named the Logan-Beaudesert Health Coalition (LBHC).

Practical implications – The paper outlines the application of a knowledge-based approach to the development of a healthy city. It also focuses on the need for widespread use of this approach as a tool for enhancing community-based health coalition decision-making processes.


An Asset Management (AM) life-cycle constitutes a set of processes that align with the development, operation and maintenance of assets, in order to meet the desired requirements and objectives of the stakeholders of the business. The scope of AM is often broad within an organisation due to the interactions between its internal elements, such as human resources, finance, technology, engineering operation, information technology and management, as well as external elements such as governance and environment. Due to the complexity of AM processes, it has been proposed that, in order to optimise asset management activities, process modelling initiatives should be adopted. Although organisations adopt AM principles and carry out AM initiatives, most do not document or model their AM processes, let alone enact their processes (semi-)automatically using a computer-supported system. There is currently a lack of knowledge describing how to model AM processes in a methodical and suitable manner, so that the processes are streamlined and optimised and are ready for deployment in a computerised way. This research aims to overcome this deficiency by developing an approach that will aid organisations in constructing AM process models quickly and systematically whilst using the most appropriate techniques, such as workflow technology. Currently, there is a wealth of information within the individual domains of AM and workflow. Both fields are gaining significant popularity in many industries, thus fuelling the need for research exploring the possible benefits of their cross-disciplinary application. This research is thus inspired to investigate these two domains to exploit the application of workflow to the modelling and execution of AM processes. Specifically, it will investigate appropriate methodologies for applying workflow techniques to AM frameworks.

One of the benefits of applying workflow models to AM processes is to adapt to and enable both ad-hoc and evolutionary changes over time. In addition, this can automate an AM process as well as support the coordination and collaboration of the people involved in carrying out the process. A workflow management system (WFMS) can be used to support the design and enactment (i.e. execution) of processes and to cope with changes that occur to a process during enactment. So far, little literature can be found documenting a systematic approach to modelling the characteristics of AM processes. In order to obtain a workflow model for AM processes, commonalities and differences between different AM processes need to be identified. This is the fundamental step in developing a sound workflow model for AM processes. Therefore, the first stage of this research focuses on identifying the characteristics of AM processes, especially AM decision-making processes. The second stage is to review a number of contemporary workflow techniques and choose a suitable technique for application to AM decision-making processes. The third stage is to develop an intermediate, ameliorated AM decision process definition that improves the current process description and is ready for modelling using the workflow language selected in the previous stage. All these lead to the fourth stage, where a workflow model for an AM decision-making process is developed. The process model is then deployed (semi-)automatically in a state-of-the-art WFMS, demonstrating the benefits of applying workflow technology to the domain of AM. Given that the information in the AM decision-making process is captured at an abstract level within the scope of this work, the deployed process model can be used as an executable guideline for carrying out an AM decision process in practice. Moreover, it can be used as a vanilla system that, once incorporated with rich information from a specific AM decision-making process (e.g. in the case of building construction or power plant maintenance), is able to support the automation of such a process in a more elaborate way.
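As a rough illustration of what enacting a decision process as a workflow involves, the sketch below runs two tasks in sequence and records an audit trail. The task names and decision logic are illustrative assumptions; a real WFMS would add roles, data flow, parallel routing and exception handling.

```python
# A minimal sketch of workflow enactment for an AM decision process.
# Task names and the condition-score logic are illustrative assumptions.

class Workflow:
    def __init__(self):
        self.tasks = []                     # ordered (name, handler) pairs

    def task(self, name):
        """Decorator registering a handler as the next task in the process."""
        def register(handler):
            self.tasks.append((name, handler))
            return handler
        return register

    def enact(self, case):
        """Execute each task in order, threading the case data through,
        and return the final case together with an audit trail."""
        log = []
        for name, handler in self.tasks:
            case = handler(case)
            log.append(name)
        return case, log

am_decision = Workflow()

@am_decision.task("identify_need")
def identify_need(case):
    case["need"] = case["condition_score"] < 0.5   # poor condition => act
    return case

@am_decision.task("evaluate_options")
def evaluate_options(case):
    case["option"] = "renew" if case["need"] else "monitor"
    return case

result, audit = am_decision.enact({"condition_score": 0.3})
print(result["option"], audit)
```

The audit trail is the kind of by-product that makes an enacted process usable as an executable guideline rather than a paper description.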


The performance of an adaptive filter may be studied through the behaviour of the optimal and adaptive coefficients in a given environment. This thesis investigates the performance of finite impulse response adaptive lattice filters for two classes of input signals: (a) frequency modulated signals with polynomial phases of order p in complex Gaussian white noise (as nonstationary signals), and (b) impulsive autoregressive processes with alpha-stable distributions (as non-Gaussian signals).

Initially, an overview is given of linear prediction and adaptive filtering. The convergence and tracking properties of the stochastic gradient algorithms are discussed for stationary and nonstationary input signals. It is explained that the stochastic gradient lattice algorithm has many advantages over the least-mean square algorithm, among them a modular structure, easily guaranteed stability, less sensitivity to the eigenvalue spread of the input autocorrelation matrix, and easy quantization of the filter coefficients (normally called reflection coefficients).

We then characterize the performance of the stochastic gradient lattice algorithm for frequency modulated signals through the optimal and adaptive lattice reflection coefficients. This is a difficult task due to the nonlinear dependence of the adaptive reflection coefficients on the preceding stages and the input signal. To ease the derivations, we assume that the reflection coefficients of each stage are independent of the inputs to that stage. The optimal lattice filter is then derived for frequency modulated signals by computing the optimal values of the residual errors, reflection coefficients, and recovery errors. Next, we show the tracking behaviour of the adaptive reflection coefficients for frequency modulated signals by computing the tracking model of these coefficients for the stochastic gradient lattice algorithm on average. The second-order convergence of the adaptive coefficients is investigated by modelling the theoretical asymptotic variance of the gradient noise at each stage. The accuracy of the analytical results is verified by computer simulations.

Using these analytical results, we show a new property, the polynomial order reducing property of adaptive lattice filters, which may be used to reduce the order of the polynomial phase of input frequency modulated signals. Two examples show how this property may be used in processing frequency modulated signals. In the first example, a detection procedure is carried out on a frequency modulated signal with a second-order polynomial phase in complex Gaussian white noise. We show that using this technique a better probability of detection is obtained for the reduced-order phase signals than with the traditional energy detector. It is also empirically shown that the distribution of the gradient noise in the first adaptive reflection coefficient approximates the Gaussian law. In the second example, the instantaneous frequency of the same observed signal is estimated. We show that this technique achieves a lower mean square error for the estimated frequencies at high signal-to-noise ratios than the adaptive line enhancer.

The performance of adaptive lattice filters is then investigated for the second type of input signals, i.e., impulsive autoregressive processes with alpha-stable distributions. The concept of alpha-stable distributions is first introduced. We discuss how the stochastic gradient algorithm, which produces desirable results for finite-variance input signals (like frequency modulated signals in noise), does not converge quickly for infinite-variance stable processes, because it uses the minimum mean-square error criterion.

To deal with such problems, the minimum dispersion criterion, fractional lower order moments, and recently developed algorithms for stable processes are introduced. We then study the possibility of using the lattice structure for impulsive stable processes. Accordingly, two new algorithms, the least-mean P-norm lattice algorithm and its normalized version, are proposed for lattice filters based on fractional lower order moments. Simulation results show that the proposed algorithms achieve faster convergence for parameter estimation of autoregressive stable processes with low to moderate degrees of impulsiveness than many other algorithms. We also discuss the effect of the impulsiveness of stable processes in generating misalignment between the estimated parameters and the true values. Due to the infinite variance of stable processes, the performance of the proposed algorithms is investigated only through extensive computer simulations.
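The reflection-coefficient recursion at the core of the stochastic gradient (adaptive) lattice algorithm can be illustrated on a simple real-valued signal. This sketch deliberately omits what the thesis actually analyses, namely complex FM signals, multi-stage filters and the p-norm variants for stable processes; it only shows a single coefficient adapting towards its optimal value.

```python
import numpy as np

# A hedged, real-valued sketch of a one-stage stochastic gradient lattice
# update; step size and test signal are illustrative assumptions.

def gal_stage(x, mu=0.01):
    """Adapt one reflection coefficient k on signal x; return its trajectory."""
    k = 0.0
    history = []
    for n in range(1, len(x)):
        f_in, b_in = x[n], x[n - 1]       # stage-0 errors are the input itself
        f = f_in - k * b_in               # forward prediction error
        b = b_in - k * f_in               # backward prediction error
        k += mu * (f * b_in + b * f_in)   # gradient step on E[f^2 + b^2]
        history.append(k)
    return np.array(history)

# AR(1) test signal x[n] = 0.8 x[n-1] + w[n]; the optimal first-stage
# reflection coefficient equals the lag-1 correlation, i.e. 0.8.
rng = np.random.default_rng(0)
w = rng.standard_normal(5000)
x = np.zeros(5000)
for n in range(1, 5000):
    x[n] = 0.8 * x[n - 1] + w[n]

ks = gal_stage(x)
print(round(float(ks[-1000:].mean()), 2))   # should settle near 0.8
```

The residual gradient noise visible in the trajectory is exactly the quantity whose asymptotic variance the second-order convergence analysis models.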


Physical infrastructure assets are important components of our society and our economy. They are usually designed to last for many years, are expected to be heavily used during their lifetime, carry considerable load, and are exposed to the natural environment. They are also normally major structures, and therefore represent a heavy investment, requiring constant management over their life cycle to ensure that they perform as required by their owners and users. Given a complex and varied infrastructure life cycle, constraints on available resources, and continuing requirements for effectiveness and efficiency, good management of infrastructure is important. While there is often no one best management approach, the choice of options is improved by better identification and analysis of the issues, by the ability to prioritise objectives, and by a scientific approach to the analysis process. The abilities to better understand the effect of inputs in the infrastructure life cycle on results, to minimise uncertainty, and to better evaluate the effect of decisions in a complex environment are important in allocating scarce resources and making sound decisions.

Through the development of an infrastructure management modelling and analysis methodology, this thesis provides a process that assists the infrastructure manager in the analysis, prioritisation and decision-making process. This is achieved through the use of practical, relatively simple tools, integrated in a modular, flexible framework that aims to provide an understanding of the interactions and issues in the infrastructure management process. The methodology uses a combination of flowcharting and analysis techniques. It first charts the infrastructure management process and its underlying infrastructure life cycle through the time interaction diagram, a graphical flowcharting methodology that is an extension of methodologies for modelling data flows in information systems. This process divides the infrastructure management process over time into self-contained modules, each based on a particular set of activities, with the information flows between them defined by their interfaces and relationships. The modular approach also permits more detailed analysis, or aggregation, as the case may be. It also forms the basis of extending the infrastructure modelling and analysis process to infrastructure networks, using individual infrastructure assets and their related projects as the basis of the network analysis process.

It is recognised that the infrastructure manager is required to meet, and balance, a number of different objectives, and therefore a number of high-level outcome goals for the infrastructure management process have been developed, based on common purpose or measurement scales. These goals form the basis of classifying the larger set of multiple objectives for analysis purposes. A two-stage approach that rationalises and then weights objectives, using a paired comparison process, ensures that the objectives to be met are both kept to the minimum number required and fairly weighted. Qualitative variables are incorporated into the weighting and scoring process, with utility functions proposed where there is risk, or where a trade-off situation applies. Variability is considered important in the infrastructure life cycle; the approach used is based on analytical principles but incorporates randomness in variables where required. The modular design of the process permits alternative processes to be used within particular modules, if this is considered a more appropriate way of analysis, provided boundary conditions and requirements for linkages to other modules are met.

Development and use of the methodology has highlighted a number of infrastructure life cycle issues, including data and information aspects, consequences of change over the life cycle, and variability and the other matters discussed above. It has also highlighted the requirement to use judgement where required, and for organisations that own and manage infrastructure to retain intellectual knowledge of that infrastructure. It is considered that the methodology discussed in this thesis, which to the author's knowledge has not been developed elsewhere, may be used for the analysis of alternatives, planning, prioritisation of a number of projects, and identification of the principal issues in the infrastructure life cycle.
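The paired comparison weighting of objectives mentioned above can be sketched as follows. The objectives and judgment values are illustrative assumptions, and the row geometric mean used here is one common way of deriving normalised weights from pairwise judgments, not necessarily the thesis's exact method.

```python
import math

# A sketch of deriving objective weights from a paired comparison matrix.
# Objectives and judgment values below are illustrative assumptions.

def weights_from_pairwise(matrix):
    """matrix[i][j] = how many times more important objective i is than j.
    Returns normalised weights via the row geometric mean."""
    n = len(matrix)
    gm = [math.prod(row) ** (1.0 / n) for row in matrix]
    total = sum(gm)
    return [g / total for g in gm]

# e.g. safety vs cost vs service level for an infrastructure programme
pairwise = [
    [1.0, 3.0, 2.0],   # safety   vs (safety, cost, service)
    [1/3, 1.0, 0.5],   # cost
    [0.5, 2.0, 1.0],   # service level
]
w = weights_from_pairwise(pairwise)
print([round(x, 2) for x in w])
```

Keeping the matrix small is what the rationalisation stage achieves: fewer objectives means fewer pairwise judgments and a fairer, more tractable weighting.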