926 results for Environmental Decision Support System


Relevance: 100.00%

Publisher:

Abstract:

Conventional project management techniques are not always sufficient for ensuring time, cost and quality achievement of large-scale construction projects, owing to the complexity of planning and implementation processes. The main reasons for project non-achievement are changes in scope and design, changes in government policies and regulations, unforeseen inflation, under-estimation and improper estimation. Projects exposed to such an uncertain environment can be managed effectively by applying risk management throughout the project life cycle. However, the effectiveness of risk management depends on the technique with which the effects of risk factors are analysed and/or quantified. This study proposes the Analytic Hierarchy Process (AHP), a multiple-attribute decision-making technique, as a tool for risk analysis because it can handle subjective as well as objective factors in a decision model that are conflicting in nature. This provides a decision support system (DSS) to project management for making the right decision at the right time, ensuring project success in line with organisation policy, project objectives and the competitive business environment. The whole methodology is explained through a case study of a cross-country petroleum pipeline project in India, and its effectiveness in project management is demonstrated.
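
A minimal sketch of the AHP priority-derivation step that such a risk analysis relies on: the risk factors, judgement values and matrix below are hypothetical, not the case-study data, and the priorities are taken from the principal eigenvector of a pairwise comparison matrix with a standard consistency check.

```python
import numpy as np

# Hypothetical pairwise comparisons (Saaty 1-9 scale) for three risk factors:
# scope change, regulatory change, inflation.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                   # principal eigenvalue
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                      # normalised priority weights

lambda_max = eigvals.real[k]
n = A.shape[0]
ci = (lambda_max - n) / (n - 1)               # consistency index
cr = ci / 0.58                                # random index for n = 3 is 0.58

print("priorities:", weights.round(3), "consistency ratio:", round(cr, 3))
```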

Relevance: 100.00%

Publisher:

Abstract:

Very large spatially-referenced datasets, for example those derived from satellite-based sensors that sample across the globe or from large monitoring networks of individual sensors, are becoming increasingly common and more widely available for use in environmental decision making. In large or dense sensor networks, huge quantities of data can be collected over small time periods. In many applications the generation of maps, or predictions at specific locations, from the data in (near) real-time is crucial. Geostatistical operations such as interpolation are vital in this map-generation process, and in emergency situations the resulting predictions need to be available almost instantly, so that decision makers can make informed decisions and define risk and evacuation zones. It is also helpful in less time-critical applications, for example when interacting directly with the data for exploratory analysis, if the algorithms respond within a reasonable time frame. Performing geostatistical analysis on such large spatial datasets can present a number of problems, particularly where maximum likelihood methods are used. Although the storage requirements scale only linearly with the number of observations in the dataset, the computational complexity in terms of memory and speed scales quadratically and cubically respectively. Most modern commodity hardware has at least two processor cores, if not more, and other mechanisms for parallel computation, such as Grid-based systems, are also becoming increasingly available. However, there currently seems to be little interest in exploiting this extra processing power within the context of geostatistics. In this paper we review the existing parallel approaches for geostatistics. By recognising that different natural parallelisms exist and can be exploited depending on whether the dataset is sparsely or densely sampled with respect to the range of variation, we introduce two contrasting novel implementations of parallel algorithms based on approximating the data likelihood, extending the methods of Vecchia [1988] and Tresp [2000]. Using parallel maximum likelihood variogram estimation and parallel prediction algorithms, we show that computational time can be significantly reduced. We demonstrate this with both sparsely and densely sampled data on a variety of architectures, ranging from the common dual-core processor found in many modern desktop computers to large multi-node supercomputers. To highlight the strengths and weaknesses of the different methods we employ synthetic data sets, and go on to show how the methods allow maximum likelihood based inference on the exhaustive Walker Lake data set.
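
A minimal sketch of the kind of block-independent likelihood approximation that lends itself to parallel evaluation: the exponential covariance, block scheme and synthetic data below are illustrative only, not the authors' implementation (which extends Vecchia [1988] and Tresp [2000]); the point is that each small dense solve is independent, so the blocks can be mapped across processor cores.

```python
import numpy as np
from multiprocessing import Pool

def exp_cov(coords, sill=1.0, rng=10.0, nugget=1e-6):
    """Exponential covariance matrix for a set of 2-D coordinates."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    return sill * np.exp(-d / rng) + nugget * np.eye(len(coords))

def block_loglik(block):
    """Gaussian log-likelihood of one spatial block, treated as independent."""
    coords, z = block
    C = exp_cov(coords)
    _, logdet = np.linalg.slogdet(C)
    alpha = np.linalg.solve(C, z)
    return -0.5 * (z @ alpha + logdet + len(z) * np.log(2 * np.pi))

if __name__ == "__main__":
    gen = np.random.default_rng(0)
    coords = gen.uniform(0, 100, size=(4000, 2))   # synthetic sample locations
    z = gen.standard_normal(4000)                  # synthetic observations
    # Split into blocks so each O(m^3) solve stays small; blocks run in parallel.
    blocks = [(coords[i:i + 500], z[i:i + 500]) for i in range(0, 4000, 500)]
    with Pool() as pool:
        loglik = sum(pool.map(block_loglik, blocks))
    print("approximate log-likelihood:", round(loglik, 2))
```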

Relevance: 100.00%

Publisher:

Abstract:

This paper presents a Decision Support System framework based on Constraint Logic Programming and offers suggestions for using RFID technology to improve several of the critical procedures involved. The paper suggests that a widely distributed and semi-structured network of waste-producing and waste-collecting/processing enterprises can improve its planning both through the proposed Decision Support System and by implementing RFID technology to update and validate information in a continuous manner. © 2010 IEEE.
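
A minimal constraint-satisfaction sketch of the planning problem such a DSS addresses: assigning waste producers to collection/processing facilities subject to capacity, with daily quantities of the sort that RFID reads could keep up to date. All names and numbers are hypothetical, and the exhaustive search stands in for a real constraint logic programming solver.

```python
from itertools import product

# Hypothetical daily waste quantities (tonnes), e.g. refreshed from RFID reads.
producers = {"P1": 8, "P2": 5, "P3": 7, "P4": 4}
capacity = {"FacilityA": 15, "FacilityB": 12}      # processing capacity (tonnes)

def feasible(assignment):
    """Capacity constraint: the load routed to each facility must fit."""
    load = {f: 0 for f in capacity}
    for producer, facility in assignment.items():
        load[facility] += producers[producer]
    return all(load[f] <= capacity[f] for f in capacity)

# Exhaustive constraint search over all assignments (fine at this toy scale).
names = list(producers)
solutions = [
    dict(zip(names, choice))
    for choice in product(capacity, repeat=len(names))
    if feasible(dict(zip(names, choice)))
]
print(len(solutions), "feasible plans, e.g.", solutions[0])
```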

Relevance: 100.00%

Publisher:

Abstract:

Many local authorities (LAs) are currently working to reduce both greenhouse gas emissions and the amount of municipal solid waste (MSW) sent to landfill. The recovery of energy from waste (EfW) can assist in meeting both of these objectives. The choice of an EfW policy combines spatial and non-spatial decisions, which may be handled using Multi-Criteria Analysis (MCA) and Geographic Information Systems (GIS). This paper addresses the impact of transporting MSW to EfW facilities, analysed as part of a larger decision support system designed to make an overall policy assessment of centralised (large-scale) and distributed (local-scale) approaches. Custom-written ArcMap extensions are used to compare centralised and distributed approaches, using shortest-path routing based on expected road speed. The results are intersected with 1-kilometre grids and census geographies to produce meaningful maps of cumulative impact. Case studies are described for two counties in the United Kingdom (UK): Cornwall and Warwickshire. For both case study areas, the centralised scenarios generate more traffic, higher fuel costs and more emitted carbon per tonne of MSW processed.
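
A minimal sketch of the transport-impact comparison, using networkx shortest paths weighted by expected travel time rather than the paper's ArcMap extensions; the road graph, tonnages and emission factor are invented for illustration, not the Cornwall or Warwickshire data.

```python
import networkx as nx

# Toy road network: edges carry length (km) and expected speed (km/h);
# the routing weight is expected travel time in hours.
G = nx.Graph()
edges = [("A", "B", 12, 50), ("B", "C", 20, 80), ("A", "C", 35, 60),
         ("C", "EfW_central", 15, 70), ("A", "EfW_local", 5, 40)]
for u, v, km, kmh in edges:
    G.add_edge(u, v, km=km, hours=km / kmh)

sources = {"A": 2000, "B": 1500, "C": 3000}        # tonnes of MSW per year
EMISSION = 0.9                                     # kg CO2 per tonne-km (assumed)

def scenario_impact(facilities):
    """Tonne-km and CO2 if each source sends its waste to the nearest facility."""
    tkm = 0.0
    for s, tonnes in sources.items():
        best = min(facilities,
                   key=lambda f: nx.shortest_path_length(G, s, f, weight="hours"))
        path = nx.shortest_path(G, s, best, weight="hours")
        km = sum(G[a][b]["km"] for a, b in zip(path, path[1:]))
        tkm += tonnes * km
    return tkm, tkm * EMISSION / 1000              # tonne-km, tonnes CO2

print("centralised:", scenario_impact(["EfW_central"]))
print("distributed:", scenario_impact(["EfW_central", "EfW_local"]))
```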

Relevance: 100.00%

Publisher:

Abstract:

We propose a novel technique for optical liquid-level sensing. The technique takes advantage of optical spectrum spreading and measures liquid level directly in a digital format. The performance of the sensor does not suffer from changes in environmental or system variables. Owing to its distinct measurement principle, both high resolution and a large measurement range can be achieved simultaneously.

Relevance: 100.00%

Publisher:

Abstract:

This thesis presents an investigation into the application of methods of uncertain reasoning to the biological classification of river water quality. Existing biological methods for reporting river water quality are critically evaluated, and the adoption of a discrete biological classification scheme advocated. Reasoning methods for managing uncertainty are explained, in which the Bayesian and Dempster-Shafer calculi are cited as primary numerical schemes. Elicitation of qualitative knowledge on benthic invertebrates is described. The specificity of benthic response to changes in water quality leads to the adoption of a sensor model of data interpretation, in which a reference set of taxa provide probabilistic support for the biological classes. The significance of sensor states, including that of absence, is shown. Novel techniques of directly eliciting the required uncertainty measures are presented. Bayesian and Dempster-Shafer calculi were used to combine the evidence provided by the sensors. The performance of these automatic classifiers was compared with the expert's own discrete classification of sampled sites. Variations of sensor data weighting, combination order and belief representation were examined for their effect on classification performance. The behaviour of the calculi under evidential conflict and alternative combination rules was investigated. Small variations in evidential weight and the inclusion of evidence from sensors absent from a sample improved classification performance of Bayesian belief and support for singleton hypotheses. For simple support, inclusion of absent evidence decreased classification rate. The performance of Dempster-Shafer classification using consonant belief functions was comparable to Bayesian and singleton belief. Recommendations are made for further work in biological classification using uncertain reasoning methods, including the combination of multiple-expert opinion, the use of Bayesian networks, and the integration of classification software within a decision support system for water quality assessment.
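
A minimal sketch of Dempster's rule of combination for two taxon "sensors" providing support for discrete water-quality classes, as in the thesis's evidence-combination step; the class labels and mass values are invented for illustration.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts keyed by frozenset focal elements)."""
    combined, conflict = {}, 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + x * y
        else:
            conflict += x * y                     # mass falling on the empty set
    return {s: v / (1 - conflict) for s, v in combined.items()}  # normalise

CLASSES = frozenset({"good", "moderate", "poor"})
# Hypothetical evidence from two benthic taxa: one supports "good", the other
# supports {"moderate", "poor"}; the remainder stays on the frame (ignorance).
taxon1 = {frozenset({"good"}): 0.7, CLASSES: 0.3}
taxon2 = {frozenset({"moderate", "poor"}): 0.6, CLASSES: 0.4}

for focal, mass in dempster_combine(taxon1, taxon2).items():
    print(set(focal), round(mass, 3))
```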

Relevance: 100.00%

Publisher:

Abstract:

Hierarchical knowledge structures are frequently used within clinical decision support systems as part of the model for generating intelligent advice. The nodes in the hierarchy inevitably have varying influence on the decision-making processes, which needs to be reflected by parameters. If the model has been elicited from human experts, it is not feasible to ask them to estimate the parameters, because there will be so many even in moderately sized structures. This paper describes how the parameters can be obtained from data instead, using only a small number of cases. The original method [1] is applied to a particular web-based clinical decision support system called GRiST, which uses its hierarchical knowledge to quantify the risks associated with mental-health problems. The knowledge was elicited from multidisciplinary mental-health practitioners, but the tree has several thousand nodes, all requiring an estimation of their relative influence on the assessment process. The method described in the paper shows how they can be obtained from about 200 cases instead. It greatly reduces the experts' elicitation tasks and has the potential to be generalised to similar knowledge-engineering domains where relative weightings of node siblings are part of the parameter space.
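
A minimal sketch of the kind of estimation involved, assuming a parent node's value is a weighted average of its children and fitting the sibling weights from cases by non-negative least squares; this illustrates the idea of recovering relative influence from around 200 cases but is not the method of [1], and the cue names and numbers are hypothetical.

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical: 200 cases scoring three sibling cues (children of one branch
# node) in [0, 1], plus the expert's overall branch-level judgement.
gen = np.random.default_rng(1)
children = gen.uniform(0, 1, size=(200, 3))
true_w = np.array([0.6, 0.3, 0.1])                 # weights we hope to recover
parent = children @ true_w + gen.normal(0, 0.02, 200)

# Fit non-negative weights, then normalise so siblings sum to one,
# matching a weighted-average aggregation up the tree.
w, _ = nnls(children, parent)
w /= w.sum()
print("estimated sibling weights:", w.round(3))
```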

Relevance: 100.00%

Publisher:

Abstract:

Purpose – This paper aims to develop an integrated analytical approach, combining quality function deployment (QFD) and the analytic hierarchy process (AHP), to enhance the effectiveness of sourcing decisions.

Design/methodology/approach – In the approach, QFD is used to translate the company stakeholder requirements into multiple evaluating factors for supplier selection, which are used to benchmark the suppliers. AHP is used to determine the importance of the evaluating factors and the preference of each supplier with respect to each selection criterion.

Findings – The effectiveness of the proposed approach is demonstrated by applying it to a UK-based automobile manufacturing company. With QFD, the evaluating factors are related to the strategic intent of the company through the involvement of the concerned stakeholders. This ensures successful strategic sourcing. The application of AHP ensures consistent supplier performance measurement using a benchmarking approach.

Research limitations/implications – The proposed integrated approach can in principle be adopted in other decision-making scenarios for effective management of the supply chain.

Practical implications – The proposed integrated approach can be used as a group-based decision support system for supplier selection, in which all relevant stakeholders are involved in identifying the various quantitative and qualitative evaluating criteria and their importance.

Originality/value – Various approaches that can deal with multiple and conflicting criteria have been adopted for supplier selection. However, they fail to consider the impact of business objectives and the requirements of company stakeholders in identifying the evaluating criteria for strategic supplier selection. The proposed integrated approach outranks conventional approaches to supplier selection and supplier performance measurement because the sourcing strategy and supplier selection are derived from the corporate/business strategy.
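
A minimal sketch of the final synthesis step such an approach leads to: criterion weights (of the kind derived from QFD-translated stakeholder requirements and AHP) are combined with each supplier's local AHP preference on each criterion into an overall ranking. All criteria, supplier names and numbers below are illustrative.

```python
import numpy as np

# Criterion weights (illustrative; assumed to come from the QFD/AHP stage).
criteria = ["quality", "cost", "delivery", "flexibility"]
weights = np.array([0.40, 0.25, 0.20, 0.15])       # sum to 1

# Local AHP preference of each supplier under each criterion (columns sum to 1).
suppliers = ["S1", "S2", "S3"]
pref = np.array([
    [0.50, 0.20, 0.40, 0.30],
    [0.30, 0.50, 0.35, 0.30],
    [0.20, 0.30, 0.25, 0.40],
])

scores = pref @ weights                            # weighted-sum synthesis
for s, v in sorted(zip(suppliers, scores), key=lambda t: -t[1]):
    print(f"{s}: {v:.3f}")
```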

Relevance: 100.00%

Publisher:

Abstract:

The Semantic Web relies on carefully structured, well-defined data to allow machines to communicate with and understand one another. In many domains (e.g. geospatial) the data being described contain some uncertainty, often due to incomplete knowledge; meaningful processing of these data requires the uncertainties to be carefully analysed and integrated into the process chain. Currently, within the Semantic Web there is no standard mechanism for the interoperable description and exchange of uncertain information, which renders the automated processing of such information implausible, particularly where error must be considered and captured as it propagates through a processing sequence. In particular, we adopt a Bayesian perspective and focus on the case where the inputs/outputs are naturally treated as random variables. This paper discusses a solution to the problem in the form of the Uncertainty Markup Language (UncertML). UncertML is a conceptual model, realised as an XML schema, that allows uncertainty to be quantified in a variety of ways, i.e. realisations, statistics and probability distributions. UncertML is based upon a soft-typed XML schema design that provides a generic framework from which any statistic or distribution may be created. Making extensive use of Geography Markup Language (GML) dictionaries, UncertML provides a collection of definitions for common uncertainty types. Containing both written descriptions and mathematical functions, encoded as MathML, the definitions within these dictionaries provide a robust mechanism for defining any statistic or distribution and can easily be extended. Uniform Resource Identifiers (URIs) are used to introduce semantics to the soft-typed elements by linking to these dictionary definitions. The INTAMAP (INTeroperability and Automated MAPping) project provides a use case for UncertML. This paper demonstrates how observation errors can be quantified using UncertML and wrapped within an Observations & Measurements (O&M) Observation. The interpolation service uses the information within these observations to influence the prediction outcome. The output uncertainties may be encoded in a variety of UncertML types, e.g. a series of marginal Gaussian distributions, a set of statistics such as the first three marginal moments, or a set of realisations from a Monte Carlo treatment. Quantifying and propagating uncertainty in this way allows such interpolation results to be consumed by other services. This could form part of a risk management chain or a decision support system, and ultimately paves the way for complex data processing chains in the Semantic Web.
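
A minimal sketch of the uncertainty-propagation idea: per-observation Gaussian errors (the kind of information UncertML would attach to an O&M Observation) are sampled, pushed through a simple interpolator, and summarised as marginal moments. The inverse-distance interpolator and all numbers are illustrative, not the INTAMAP service.

```python
import numpy as np

gen = np.random.default_rng(2)

# Observations with per-observation Gaussian error (mean, variance).
obs_xy = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
obs_mean = np.array([5.0, 7.0, 3.0])
obs_var = np.array([0.5, 1.0, 0.25])

target = np.array([4.0, 4.0])                      # prediction location

def idw(values, xy, p, power=2.0):
    """Inverse-distance-weighted prediction at point p."""
    d = np.linalg.norm(xy - p, axis=1)
    w = 1.0 / d ** power
    return np.sum(w * values) / np.sum(w)

# Monte Carlo treatment: sample realisations of the noisy observations and
# propagate each one through the interpolator.
samples = gen.normal(obs_mean, np.sqrt(obs_var), size=(5000, 3))
preds = np.array([idw(s, obs_xy, target) for s in samples])

print("predictive mean:", preds.mean().round(3),
      "predictive variance:", preds.var().round(3))
```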

Relevance: 100.00%

Publisher:

Abstract:

Within the context of sustainability in operations management, the aim of this paper is to investigate the environmental initiatives and decisions of a British manufacturer of luxury cars. Through case study research, our investigation sheds light on why and how the company takes environmental decisions in manufacturing, where its ideas for environmental improvement originate, and how it measures environmental performance. The knowledge contributions are in the field of sustainability in operations management, mostly related to environmental decision making.

Relevance: 100.00%

Publisher:

Abstract:

Environmental law increasingly provides for participatory rights, including appeal rights, to ensure informed, legitimate decision-making. Despite consensus around the general need for participatory rights, including strong ones such as a right to appeal, public participation in environmental decision-making is often criticised. The critics' main argument is that the negative side effects resulting particularly from the use of strong participatory rights outweigh their benefits. Recent regulatory trends arising from better-regulation policy to make environmental decision-making more cost-efficient tend to pay special attention to such arguments despite limited empirical evidence. This article provides evidence using material concerning appeals against pollution permits in Finland and suggests that judicial review is a necessary and effective process both for protecting citizens' rights and for improving the quality of environmental protection. © The Author [2008]. Published by Oxford University Press. All rights reserved.

Relevance: 100.00%

Publisher:

Abstract:

Failure to detect patients at risk of attempting suicide can result in tragic consequences. Identifying risks earlier and more accurately helps prevent serious incidents occurring and is the objective of the GRiST clinical decision support system (CDSS). One of the problems it faces is high variability in the type and quantity of data submitted for patients, who are assessed in multiple contexts along the care pathway. Although GRiST identifies up to 138 patient cues to collect, only about half of them are relevant for any one patient and their roles may not be for risk evaluation but more for risk management. This paper explores the data collection behaviour of clinicians using GRiST to see whether it can elucidate which variables are important for risk evaluations and when. The GRiST CDSS is based on a cognitive model of human expertise manifested by a sophisticated hierarchical knowledge structure or tree. This structure is used by the GRiST interface to provide top-down controlled access to the patient data. Our research explores relationships between the answers given to these higher-level 'branch' questions to see whether they can help direct assessors to the most important data, depending on the patient profile and assessment context. The outcome is a model for dynamic data collection driven by the knowledge hierarchy. It has potential for improving other clinical decision support systems operating in domains with high dimensional data that are only partially collected and in a variety of combinations.
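
A minimal sketch of hierarchy-driven dynamic data collection of the kind described: a branch-level screening answer gates whether the cues beneath it are requested. The tree fragment, question wording and gating rule are invented for illustration and are not the GRiST knowledge base.

```python
# Hypothetical fragment of a hierarchical knowledge tree: each branch carries a
# screening question; its child cues are only collected when the screen is
# answered positively (top-down controlled access to patient data).
TREE = {
    "suicide risk": {
        "screen": "Any current suicidal ideation?",
        "children": {
            "ideation": ["frequency of thoughts", "intent", "plan"],
            "history": ["previous attempts", "time since last attempt"],
        },
    },
}

def collect(tree, answers):
    """Return the cues an assessor should be asked for, given branch answers."""
    requested = []
    for branch, spec in tree.items():
        if answers.get(spec["screen"], False):     # branch screen is positive
            for sub, cues in spec["children"].items():
                requested.extend(f"{branch}/{sub}/{c}" for c in cues)
    return requested

answers = {"Any current suicidal ideation?": True}
for cue in collect(TREE, answers):
    print(cue)
```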

Relevance: 100.00%

Publisher:

Abstract:

The paper describes a learning-oriented interactive method for solving linear mixed-integer problems of multicriteria optimization. The method extends the decision maker's (DM's) ability to describe his/her local preferences and at the same time overcomes some computational difficulties, especially in problems of large dimension. The method is realized in an experimental decision support system for solving linear mixed-integer multicriteria optimization problems.
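
A minimal sketch of one interaction step in this spirit: the DM's local preferences enter as aspiration levels in a weighted Chebyshev scalarisation of a toy bicriteria mixed-integer problem, solved here by simple enumeration rather than the paper's algorithm. The model, aspiration levels and weights are all illustrative.

```python
import numpy as np

# Toy bicriteria problem: maximise f1 = 3x + 2y and f2 = x + 4y
# subject to 2x + 3y <= 12, x integer in 0..6, y continuous in [0, 4].
def feasible_points():
    for x in range(7):                             # integer variable
        for y in np.linspace(0, 4, 81):            # discretised continuous var
            if 2 * x + 3 * y <= 12:
                yield x, y, 3 * x + 2 * y, x + 4 * y

# DM's local preferences: aspiration (reference) levels for each criterion.
aspiration = np.array([18.0, 14.0])
weights = np.array([0.5, 0.5])

def chebyshev_gap(f):
    """Weighted Chebyshev shortfall below the aspiration point (to minimise)."""
    return np.max(weights * np.maximum(aspiration - f, 0.0))

best = min(feasible_points(), key=lambda p: chebyshev_gap(np.array(p[2:])))
x, y, f1, f2 = best
print(f"x={x}, y={y:.2f}, f1={f1:.2f}, f2={f2:.2f}")
```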

Relevance: 100.00%

Publisher:

Abstract:

Dimensionality reduction is a very important step in the data mining process. In this paper, we consider feature extraction for classification tasks as a technique for overcoming problems caused by “the curse of dimensionality”. Three different eigenvector-based feature extraction approaches are discussed, and three different kinds of applications with respect to classification tasks are considered. A summary of the results obtained concerning the accuracy of the classification schemes is presented, with a conclusion about the search for the most appropriate feature extraction method. The problem of how to discover the knowledge needed to integrate the feature extraction and classification processes is stated. A decision support system to aid in the integration of the feature extraction and classification processes is proposed. The goals and requirements set for the decision support system and its basic structure are defined. The means of knowledge acquisition needed to build up the proposed system are considered.
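
A minimal sketch of eigenvector-based feature extraction feeding a classifier, here PCA followed by nearest-neighbour classification on a standard dataset; this illustrates the integration the paper discusses but does not reproduce the particular extraction approaches it compares.

```python
from sklearn.datasets import load_wine
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_wine(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Eigenvector-based feature extraction (PCA) before classification,
# compared with classifying in the original feature space.
with_pca = make_pipeline(StandardScaler(), PCA(n_components=5),
                         KNeighborsClassifier())
without = make_pipeline(StandardScaler(), KNeighborsClassifier())

print("accuracy with PCA:   ", with_pca.fit(X_tr, y_tr).score(X_te, y_te))
print("accuracy without PCA:", without.fit(X_tr, y_tr).score(X_te, y_te))
```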

Relevance: 100.00%

Publisher:

Abstract:

The traditional approach to crisis management suggests autocratic leadership, which carries risks of its own (the leader becomes the bottleneck of problem solving, learning remains single-loop, and crisis management is treated as a matter of efficiency). Managing today's crises, however, is rather a matter of effectiveness, and requires double-loop learning (second-order change) and a leadership role in the sense of Kotter's theory. The paper discusses top management's leadership responsibilities and their special tasks in the problem-solving process of change. An inappropriate perception of leadership responsibilities, and insistence upon a first-order change strategy, result in becoming part of the problem rather than part of the solution.