882 results for Project 2002-005-C: Decision Support Tools for Concrete Infrastructure Rehabilitation
Abstract:
Artifact selection decisions typically involve the selection of one from a number of possible/candidate options (decision alternatives). In order to support such decisions, it is important to identify and recognize relevant key issues of problem solving and decision making (Albers, 1996; Harris, 1998a, 1998b; Jacobs & Holten, 1995; Loch & Conger, 1996; Rumble, 1991; Sauter, 1999; Simon, 1986). Sauter classifies four problem solving/decision making styles: (1) left-brain style, (2) right-brain style, (3) accommodating, and (4) integrated (Sauter, 1999). The left-brain style employs analytical and quantitative techniques and relies on rational and logical reasoning. In an effort to achieve predictability and minimize uncertainty, problems are explicitly defined, solution methods are determined, orderly information searches are conducted, and analysis is increasingly refined. Left-brain style decision making works best when it is possible to predict/control, measure, and quantify all relevant variables, and when information is complete. In direct contrast, right-brain style decision making is based on intuitive techniques—it places more emphasis on feelings than facts. Accommodating decision makers use their non-dominant style when they realize that it will work best in a given situation. Lastly, integrated style decision makers are able to combine the left- and right-brain styles—they use analytical processes to filter information and intuition to contend with uncertainty and complexity.
Abstract:
Although the importance of dataset fitness-for-use evaluation and intercomparison is widely recognised within the GIS community, no practical tools have yet been developed to support such interrogation. GeoViQua aims to develop a GEO label which will visually summarise and allow interrogation of key informational aspects of geospatial datasets upon which users rely when selecting datasets for use. The proposed GEO label will be integrated in the Global Earth Observation System of Systems (GEOSS) and will be used as a value and trust indicator for datasets accessible through the GEO Portal. As envisioned, the GEO label will act as a decision support mechanism for dataset selection and thereby hopefully improve user recognition of the quality of datasets. To date we have conducted three user studies to (1) identify the informational aspects of geospatial datasets upon which users rely when assessing dataset quality and trustworthiness, (2) elicit initial user views on a GEO label and its potential role, and (3) evaluate prototype label visualisations. Our first study revealed that, when evaluating the quality of data, users consider eight facets: dataset producer information; producer comments on dataset quality; dataset compliance with international standards; community advice; dataset ratings; links to dataset citations; expert value judgements; and quantitative quality information. Our second study confirmed the relevance of these facets in terms of the community-perceived function that a GEO label should fulfil: users and producers of geospatial data supported the concept of a GEO label that provides a drill-down interrogation facility covering all eight informational aspects. Consequently, we developed three prototype label visualisations and evaluated their comparative effectiveness and user preference via a third user study to arrive at a final graphical GEO label representation.
When integrated in the GEOSS, an individual GEO label will be provided for each dataset in the GEOSS clearinghouse (or other data portals and clearinghouses) based on its available quality information. Producer and feedback metadata documents are being used to dynamically assess information availability and generate the GEO labels. The producer metadata document can either be a standard ISO compliant metadata record supplied with the dataset, or an extended version of a GeoViQua-derived metadata record, and is used to assess the availability of a producer profile, producer comments, compliance with standards, citations and quantitative quality information. GeoViQua is also currently developing a feedback server to collect and encode (as metadata records) user and producer feedback on datasets; these metadata records will be used to assess the availability of user comments, ratings, expert reviews and user-supplied citations for a dataset. The GEO label will provide drill-down functionality which will allow a user to navigate to a GEO label page offering detailed quality information for its associated dataset. At this stage, we are developing the GEO label service that will be used to provide GEO labels on demand based on supplied metadata records. In this presentation, we will provide a comprehensive overview of the GEO label development process, with specific emphasis on the GEO label implementation and integration into the GEOSS.
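As a rough illustration of the availability-driven label generation described above, a label service might map each facet to a check against the supplied metadata record. The field names below are invented for illustration; they are not the actual ISO or GeoViQua metadata element names.

```python
# Facet checks keyed by the GEO label facet they feed. The metadata field
# names used here are illustrative assumptions, not real ISO/GeoViQua elements.
FACET_CHECKS = {
    "producer_profile": lambda md: "producer" in md,
    "producer_comments": lambda md: "producer_comments" in md,
    "standards_compliance": lambda md: bool(md.get("conformance_reports")),
    "citations": lambda md: bool(md.get("citations")),
    "quantitative_quality": lambda md: bool(md.get("quality_measures")),
}

def assess_availability(metadata):
    """Map each label facet to whether the record can support it."""
    return {facet: check(metadata) for facet, check in FACET_CHECKS.items()}

record = {
    "producer": "Example EO Agency",
    "citations": ["doi:10.1000/example"],
}
print(assess_availability(record))
```

A real service would run such checks over both the producer metadata record and the feedback-server records, then render the resulting availability map as the drill-down label graphic.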
Abstract:
One of the aims of the Science and Technology Committee (STC) of the Group on Earth Observations (GEO) was to establish a GEO Label: a label to certify geospatial datasets and their quality. As proposed, the GEO Label will be used as a value indicator for geospatial data and datasets accessible through the Global Earth Observation System of Systems (GEOSS). It is suggested that the development of such a label will significantly improve user recognition of the quality of geospatial datasets and that its use will help promote trust in datasets that carry the established GEO Label. Furthermore, the GEO Label is seen as an incentive to data providers. At the moment GEOSS contains a large amount of data and is constantly growing. Taking this into account, a GEO Label could assist in searching by providing users with visual cues of dataset quality and possibly relevance; a GEO Label could effectively stand as a decision support mechanism for dataset selection. Currently our project, GeoViQua, together with EGIDA and ID-03, is undertaking research to define and evaluate the concept of a GEO Label. The development and evaluation process will be carried out in three phases. In Phase I we have conducted an online survey (GEO Label Questionnaire) to identify the initial user and producer views on a GEO Label and its potential role. In Phase II we will conduct a further study presenting GEO Label examples based on the Phase I results, eliciting feedback on these examples under controlled conditions. In Phase III we will create physical prototypes to be used in a human subject study. The most successful prototypes will then be put forward as potential GEO Label options. At the moment we are in Phase I, where we have developed an online questionnaire to collect the initial GEO Label requirements and to identify the role that a GEO Label should serve from the user and producer standpoint.
The GEO Label Questionnaire consists of generic questions to identify whether users and producers believe a GEO Label is relevant to geospatial data; whether they want a single "one-for-all" label or separate labels that each serve a particular role; the function that would be most relevant for a GEO Label to carry; and the functionality that users and producers would like to see from the common rating and review systems they use. To distribute the questionnaire, relevant user and expert groups were contacted at meetings or by email. At this stage we have successfully collected over 80 valid responses from geospatial data users and producers. This communication will provide a comprehensive analysis of the survey results, indicating to what extent the users surveyed in Phase I value a GEO Label, and suggesting in what directions a GEO Label may develop. Potential GEO Label examples based on the results of the survey will be presented for use in Phase II.
Abstract:
In India, more than one third of the population does not currently have access to modern energy services. Biomass to energy, known as bioenergy, has immense potential for addressing India’s energy poverty. Small-scale decentralised bioenergy systems require low investment compared to other renewable technologies and have environmental and social benefits over fossil fuels. Though they have historically been promoted in India through favourable policies, many studies argue that the sector’s potential is underutilised due to sustainable supply chain barriers, and a significant research gap remains. This research addresses the gap by analysing the potential sustainable supply chain risks of decentralised small-scale bioenergy projects. This was achieved through four research objectives, using various research methods along with multiple data collection techniques. Firstly, a conceptual framework was developed to identify and analyse these risks. The framework is founded on existing literature and gathered inputs from practitioners and experts. Following this, sustainability and supply chain issues within the sector were explored. Sustainability issues were collated into 27 objectives, and supply chain issues were categorised according to related processes. Finally, the framework was validated against an actual bioenergy development in Jodhpur, India. Applying the framework to the action research project had some significant impacts upon the project’s design. These include the development of water conservation arrangements, the insertion of auxiliary arrangements, measures to increase upstream supply chain resilience, and the development of a first aid action plan. More widely, the developed framework and identified issues will help practitioners to take necessary precautionary measures and to address risks quickly and cost-effectively.
The framework contributes to the bioenergy decision support system literature and the sustainable supply chain management field by incorporating risk analysis and introducing the concept of global and organisational sustainability in supply chains. The sustainability issues identified contribute to existing knowledge through the exploration of a small scale and developing country context. The analysis gives new insights into potential risks affecting the whole bioenergy supply chain.
Abstract:
OpenMI is a widely used standard allowing exchange of data between integrated models, which has mostly been applied to dynamic, deterministic models. Within the FP7 UncertWeb project we are developing mechanisms and tools to support the management of uncertainty in environmental models. In this paper we explore the integration of the UncertWeb framework with OpenMI, to assess the issues that arise when propagating uncertainty in OpenMI model compositions, and the degree of integration possible with UncertWeb tools. In particular we develop an uncertainty-enabled model for a simple Lotka-Volterra system with an interface conforming to the OpenMI standard, exploring uncertainty in the initial predator and prey levels, and the parameters of the model equations. We use the Elicitator tool developed within UncertWeb to identify the initial condition uncertainties, and show how these can be integrated, using UncertML, with simple Monte Carlo propagation mechanisms. The mediators we develop for OpenMI models are generic and produce standard Web services that expose the OpenMI models to a Web based framework. We discuss what further work is needed to allow a more complete system to be developed and show how this might be used practically.
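Stripped of the OpenMI/UncertML service machinery the abstract describes, simple Monte Carlo propagation of initial-condition uncertainty through a Lotka-Volterra model can be sketched as follows. All parameter values and the Gaussian uncertainties are illustrative assumptions, not those elicited in the project.

```python
import random
import statistics

def lotka_volterra(prey0, pred0, alpha, beta, delta, gamma, dt=0.01, steps=1000):
    """Integrate the Lotka-Volterra equations with a simple Euler scheme."""
    prey, pred = prey0, pred0
    for _ in range(steps):
        d_prey = alpha * prey - beta * prey * pred
        d_pred = delta * prey * pred - gamma * pred
        prey += d_prey * dt
        pred += d_pred * dt
    return prey, pred

def monte_carlo(n=500, seed=42):
    """Propagate Gaussian uncertainty in the initial predator/prey levels."""
    rng = random.Random(seed)
    finals = []
    for _ in range(n):
        prey0 = rng.gauss(10.0, 1.0)   # uncertain initial prey level (assumed)
        pred0 = rng.gauss(5.0, 0.5)    # uncertain initial predator level (assumed)
        finals.append(lotka_volterra(prey0, pred0, 1.1, 0.4, 0.1, 0.4))
    prey_final = [f[0] for f in finals]
    return statistics.mean(prey_final), statistics.stdev(prey_final)

mean, sd = monte_carlo()
print(f"prey at t=10: mean={mean:.2f}, sd={sd:.2f}")
```

In the OpenMI setting, each sampled run would instead be a call through the model's standard interface, with the input and output distributions encoded in UncertML rather than drawn and summarised in-process.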
Abstract:
One of the main challenges of classifying clinical data is determining how to handle missing features. Most research favours imputing missing values or discarding records that include missing data, both of which can degrade accuracy when missing values exceed a certain level. In this research we propose a methodology to handle data sets with a large percentage of missing values and with high variability in which particular data are missing. Feature selection is performed by picking variables sequentially in order of maximum correlation with the dependent variable and minimum correlation with variables already selected. Classification models are generated individually for each test case based on its particular feature set and the matching data values available in the training population. The method was applied to real patients' anonymous mental-health data where the task was to predict the suicide risk judgement clinicians would give for each patient's data, with eleven possible outcome classes: zero to ten, representing no risk to maximum risk. The results compare favourably with alternative methods and have the advantage of ensuring explanations of risk are based only on the data given, not imputed data. This is important for clinical decision support systems using human expertise for modelling and explaining predictions.
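The sequential selection rule described (maximise correlation with the outcome, minimise correlation with already-chosen variables) can be sketched as a greedy loop. The relevance-minus-redundancy score and the tie-breaking below are our own simplifications, not necessarily the paper's exact criterion.

```python
import math

def pearson(x, y):
    """Plain Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy) if sxx and syy else 0.0

def select_features(columns, target, k=2):
    """Greedy pick: high |corr| with target, low |corr| with already-chosen."""
    chosen = []
    remaining = dict(columns)
    while remaining and len(chosen) < k:
        def score(name):
            relevance = abs(pearson(remaining[name], target))
            redundancy = max((abs(pearson(remaining[name], columns[c]))
                              for c in chosen), default=0.0)
            return relevance - redundancy
        best = max(remaining, key=score)
        chosen.append(best)
        del remaining[best]
    return chosen

target = [1, 2, 3, 4, 5]
cols = {
    "x1": [1, 2, 3, 4, 6],
    "x2": [2, 4, 6, 8, 12],  # a rescaled copy of x1: relevant but redundant
    "x3": [5, 1, 4, 2, 3],   # weakly relevant but nearly independent of x1
}
print(select_features(cols, target))  # → ['x1', 'x3']
```

The redundant duplicate x2 is skipped in favour of the weaker but complementary x3, which is the behaviour the sequential rule is designed to produce.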
Abstract:
Failure to detect patients at risk of attempting suicide can result in tragic consequences. Identifying risks earlier and more accurately helps prevent serious incidents occurring and is the objective of the GRiST clinical decision support system (CDSS). One of the problems it faces is high variability in the type and quantity of data submitted for patients, who are assessed in multiple contexts along the care pathway. Although GRiST identifies up to 138 patient cues to collect, only about half of them are relevant for any one patient and their roles may not be for risk evaluation but more for risk management. This paper explores the data collection behaviour of clinicians using GRiST to see whether it can elucidate which variables are important for risk evaluations and when. The GRiST CDSS is based on a cognitive model of human expertise manifested by a sophisticated hierarchical knowledge structure or tree. This structure is used by the GRiST interface to provide top-down controlled access to the patient data. Our research explores relationships between the answers given to these higher-level 'branch' questions to see whether they can help direct assessors to the most important data, depending on the patient profile and assessment context. The outcome is a model for dynamic data collection driven by the knowledge hierarchy. It has potential for improving other clinical decision support systems operating in domains with high dimensional data that are only partially collected and in a variety of combinations.
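A toy sketch of hierarchy-driven data collection of the kind described: descend into a branch's children only when the answer to the branch-level question crosses a threshold. The questions, the 0-10 answer scale cut-off, and the two-level tree are invented for illustration and are not the GRiST knowledge structure itself.

```python
# A toy knowledge hierarchy: each node holds a screening question and children.
tree = {
    "question": "Overall concern about suicide risk?",
    "children": [
        {"question": "Current suicidal ideation?", "children": []},
        {"question": "Past attempts?", "children": []},
    ],
}

def collect(node, answer_fn, threshold=3):
    """Depth-first data collection gated by branch-level answers (0-10 scale)."""
    answer = answer_fn(node["question"])
    asked = [(node["question"], answer)]
    if answer >= threshold:  # only drill down when the branch answer warrants it
        for child in node["children"]:
            asked.extend(collect(child, answer_fn, threshold))
    return asked

answers = {"Overall concern about suicide risk?": 5,
           "Current suicidal ideation?": 2,
           "Past attempts?": 7}
print(collect(tree, answers.get))
```

With a low branch-level answer the child questions are never asked, which is the dynamic, profile-dependent data collection the abstract argues for.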
Abstract:
Crowdsourcing platforms that attract a large pool of potential workforce allow organizations to reduce permanent staff levels. However, managing this "human cloud" requires new management models and skills. Therefore, Information Technology (IT) service providers engaging in crowdsourcing need to develop new capabilities to successfully utilize crowdsourcing in delivering services to their clients. To explore these capabilities we collected qualitative data from focus groups with crowdsourcing leaders at a large multinational technology organization. The new capabilities we identified stem from the need of the traditional service provider to assume a "client" role in the crowdsourcing context, while still acting as a "vendor" in providing services to the end-client. This paper expands the research on vendor capabilities and IT outsourcing as well as offers important insights to organizations that are experimenting with, or considering, crowdsourcing. © 2014 Elsevier B.V. All rights reserved.
Abstract:
An approach to building distributed decision support systems is proposed. A framework for a distributed DSS is defined, and questions of problem formulation and problem solving using artificial intelligent agents in the system core are examined.
Abstract:
The paper presents a study that focuses on the issue of supporting educational experts in choosing the right combination of educational methodology and technology tools when designing training and learning programs. It is based on research in the field of adaptive intelligent e-learning systems. The object of study is the professional growth of teachers in technology, and in particular that part of their qualification which is achieved by organizing targeted training of teachers. The article presents the process of creating and testing a system to support decisions on the design of training for teachers, leading to more effective implementation of technology in education and integration in diverse educational contexts. ACM Computing Classification System (1998): H.4.2, I.2.1, I.2, I.2.4, F.4.1.
Abstract:
This paper presents a survey of the existing services provided by the digital libraries and repositories on mathematics of the content-provider partners in the EuDML project. The purpose is to support the development of the concepts, criteria and methods for the continuous evaluation of these and other relevant services. The work concentrated on classifying the relevant services in order to specify a common evaluation structure.
Abstract:
Most authors assume that the decision-maker's natural behaviour is inconsistent. This paper investigates the main sources of inconsistency and analyses methods for reducing or eliminating it. Decision support systems can contain interactive modules for that purpose. In a system with consistency control there are three stages. First, consistency should be checked: a consistency measure is needed. Secondly, approval or rejection has to be decided: a threshold value of the inconsistency measure is needed. Finally, if inconsistency is ‘high’, corrections have to be made: an inconsistency-reducing method is needed. This paper reviews the difficulties at all stages. An entirely different approach is to design the decision support system to force the decision-maker to give consistent values at each step of answering pairwise comparison questions. An interactive questioning procedure resulting in consistent (sub)matrices has been demonstrated.
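For the "consistency measure" stage, the standard instrument in the pairwise-comparison (AHP) literature is Saaty's consistency ratio CR = CI / RI, with CI = (λmax − n)/(n − 1). A minimal sketch, using plain power iteration to estimate the principal eigenvalue:

```python
def principal_eigenvalue(A, iters=200):
    """Estimate the largest eigenvalue of a positive matrix by power iteration."""
    n = len(A)
    v = [1.0] * n
    lam = 1.0
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(abs(x) for x in w)
        v = [x / lam for x in w]
    return lam

RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}  # Saaty's random indices

def consistency_ratio(A):
    n = len(A)
    ci = (principal_eigenvalue(A) - n) / (n - 1)  # consistency index
    return ci / RI[n] if RI[n] else 0.0

# A perfectly consistent 3x3 matrix (every a_ij = w_i / w_j)
consistent = [[1, 2, 4], [0.5, 1, 2], [0.25, 0.5, 1]]
print(consistency_ratio(consistent))  # → 0.0

# A cyclic, highly inconsistent matrix: A beats B, B beats C, C beats A
inconsistent = [[1, 3, 1/5], [1/3, 1, 3], [5, 1/3, 1]]
print(consistency_ratio(inconsistent))  # well above the usual 0.1 threshold
```

A consistency-control module of the kind the paper discusses would compute this measure after each judgement, compare it against the acceptance threshold, and trigger the correction stage when it is exceeded.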
Abstract:
This study examines the performance measurement and performance management practice of Hungarian companies, based on data from the 2009 survey of the "Competing with the World" (Versenyben a világgal) research programme. Our goal is to examine the background of decision support: to characterise companies' performance measurement practice and evaluate its consistency, also tracking the further development of tendencies observed in our earlier, similar surveys of 1996, 1999 and 2004. We evaluated corporate performance measurement practice (the information sources, performance indicators and analysis tools that managers consider important or useful, or use regularly) with the analytical framework developed in our earlier research, covering orientation, balance, consistency and supporting role. In assessing the information system's role in supporting different activities, we also compared the opinions of the managers responsible for the different areas, and examined the effects of various company characteristics (company size, type of ownership, main activity, etc.).
Abstract:
This thesis develops and validates the framework of a specialized maintenance decision support system for a discrete part manufacturing facility. Its construction utilizes a modular approach based on the fundamental philosophy of Reliability Centered Maintenance (RCM). The proposed architecture uniquely integrates System Decomposition, System Evaluation, Failure Analysis, Logic Tree Analysis, and Maintenance Planning modules. It presents an ideal solution to the unique maintenance inadequacies of modern discrete part manufacturing systems. Well-established techniques are incorporated as building blocks of the system's modules. These include Failure Mode Effect and Criticality Analysis (FMECA), Logic Tree Analysis (LTA), Theory of Constraints (TOC), and an Expert System (ES). A Maintenance Information System (MIS) performs the system's support functions. Validation was performed by field testing the system at a Miami-based manufacturing facility. Such a maintenance support system potentially reduces downtime losses and contributes to higher product quality output; ultimately, improved profitability is the final outcome.
Abstract:
The redevelopment of Brownfields took off in the 1990s, supported by federal and state incentives and largely accomplished by local initiatives. Brownfields redevelopment has several associated benefits, including the revitalization of inner-city neighborhoods, creation of jobs, stimulation of tax revenues, greater protection of public health and natural resources, the renewal and reuse of existing civil infrastructure, and Greenfields protection. While these benefits are numerous, the obstacles to Brownfields redevelopment are also very much alive. Redevelopment issues typically embrace a host of financial and legal liability concerns, technical and economic constraints, competing objectives, and uncertainties arising from inadequate site information. Because the resources for Brownfields redevelopment are usually limited, local programs will require creativity in addressing these obstacles in a manner that stretches their limited resources for returning Brownfields to productive uses. Such programs may benefit from a structured and defensible decision framework to prioritize sites for redevelopment: one that incorporates the desired objectives, corresponding variables and uncertainties associated with Brownfields redevelopment. This thesis demonstrates the use of a decision analytic tool, Bayesian Influence Diagrams, and related decision analytic tools in developing quantitative decision models to evaluate and rank Brownfields sites on the basis of their redevelopment potential.
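As a toy illustration of how a decision model over uncertain site conditions collapses to an expected-utility ranking (the probabilities and payoffs below are entirely invented, and a real influence diagram would carry more chance and decision nodes):

```python
# Two hypothetical Brownfields sites: probability the site needs no remediation,
# and the net payoff under each outcome. All numbers are made up for the demo.
sites = {
    "Site A": {"p_clean": 0.75, "value_clean": 100, "value_contaminated": -40},
    "Site B": {"p_clean": 0.50, "value_clean": 60, "value_contaminated": -10},
}

def expected_utility(site):
    """Collapse the chance node (clean vs. contaminated) to an expected payoff."""
    return (site["p_clean"] * site["value_clean"]
            + (1 - site["p_clean"]) * site["value_contaminated"])

ranking = sorted(sites, key=lambda name: expected_utility(sites[name]), reverse=True)
print(ranking)  # → ['Site A', 'Site B']
```

Ranking by expected utility in this way is the one-node special case of evaluating a Bayesian Influence Diagram; the thesis's models additionally encode the dependencies among objectives, variables and uncertainties.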