76 results for Decision-support tools


Relevance: 80.00%

Abstract:

There has been considerable recent research into the connection between Parkinson's disease (PD) and speech impairment. Recently, a wide range of speech signal processing algorithms (dysphonia measures) aiming to predict PD symptom severity using speech signals have been introduced. In this paper, we test how accurately these novel algorithms can be used to discriminate PD subjects from healthy controls. In total, we compute 132 dysphonia measures from sustained vowels. Then, we select four parsimonious subsets of these dysphonia measures using four feature selection algorithms, and map these feature subsets to a binary classification response using two statistical classifiers: random forests and support vector machines. We use an existing database consisting of 263 samples from 43 subjects, and demonstrate that these new dysphonia measures can outperform state-of-the-art results, reaching almost 99% overall classification accuracy using only ten dysphonia features. We find that some of the recently proposed dysphonia measures complement existing algorithms in maximizing the ability of the classifiers to discriminate healthy controls from PD subjects. We see these results as an important step toward noninvasive diagnostic decision support in PD.
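As a hedged illustration of the pipeline described above, the sketch below uses synthetic data in place of the 263 voice samples and 132 dysphonia measures, and a generic mutual-information filter in place of the paper's four feature selection algorithms; it is not the authors' actual code.

```python
# Illustrative sketch only: random data stands in for the dysphonia measures,
# and SelectKBest stands in for the paper's feature selection algorithms.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(263, 132))    # stand-in for 132 dysphonia measures per sample
y = rng.integers(0, 2, size=263)   # stand-in for PD vs. healthy-control labels

classifiers = [
    ("random forest", RandomForestClassifier(n_estimators=500, random_state=0)),
    ("SVM (RBF kernel)", SVC(kernel="rbf", C=1.0, gamma="scale")),
]
for name, clf in classifiers:
    pipeline = make_pipeline(
        StandardScaler(),
        SelectKBest(mutual_info_classif, k=10),  # keep a parsimonious 10-feature subset
        clf,
    )
    accuracy = cross_val_score(pipeline, X, y, cv=10).mean()
    print(f"{name}: mean 10-fold accuracy = {accuracy:.3f}")
```

Because the 263 samples come from only 43 subjects, a faithful evaluation would also group the cross-validation folds by subject so that recordings from the same person never appear in both training and test sets.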

Relevance: 80.00%

Abstract:

Purpose – This paper aims to develop an integrated analytical approach, combining quality function deployment (QFD) and the analytic hierarchy process (AHP), to enhance the effectiveness of sourcing decisions. Design/methodology/approach – In the approach, QFD is used to translate the company stakeholder requirements into multiple evaluating factors for supplier selection, which are used to benchmark the suppliers. AHP is used to determine the importance of the evaluating factors and the preference of each supplier with respect to each selection criterion. Findings – The effectiveness of the proposed approach is demonstrated by applying it to a UK-based automobile manufacturing company. With QFD, the evaluating factors are related to the strategic intent of the company through the involvement of concerned stakeholders. This ensures successful strategic sourcing. The application of AHP ensures consistent supplier performance measurement using a benchmarking approach. Research limitations/implications – The proposed integrated approach can in principle be adopted in other decision-making scenarios for effective management of the supply chain. Practical implications – The proposed integrated approach can be used as a group-based decision support system for supplier selection, in which all relevant stakeholders are involved to identify various quantitative and qualitative evaluating criteria and their importance. Originality/value – Various approaches that can deal with multiple and conflicting criteria have been adopted for supplier selection. However, they fail to consider the impact of business objectives and the requirements of company stakeholders in the identification of evaluating criteria for strategic supplier selection. The proposed integrated approach outranks conventional approaches to supplier selection and supplier performance measurement because the sourcing strategy and supplier selection are derived from the corporate/business strategy.
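For readers unfamiliar with AHP, the following minimal sketch shows how priority weights and a consistency check are derived from a pairwise comparison matrix; the three factors and the judgement values are invented for illustration and are not taken from the paper's case study.

```python
# Minimal AHP sketch: priority weights from the principal eigenvector of a
# pairwise comparison matrix, plus Saaty's consistency ratio.
import numpy as np

# Hypothetical comparisons of three evaluating factors (e.g. cost, quality,
# delivery) on the 1-9 scale; reciprocal entries mirror each judgement.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                    # priority weights of the factors

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)        # consistency index
cr = ci / 0.58                              # random index for a 3x3 matrix is 0.58
print("weights:", np.round(weights, 3), "consistency ratio:", round(cr, 3))
```

The same calculation is repeated for the supplier-versus-supplier comparisons under each criterion, and the criterion weights and supplier scores are then aggregated into an overall ranking.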

Relevance: 80.00%

Abstract:

While conventional Data Envelopment Analysis (DEA) models set targets for each operational unit, this paper considers the problem of input/output reduction in a centralized decision-making environment. The purpose of this paper is to develop an approach to the input/output reduction problem that typically occurs in organizations with a centralized decision-making environment. This paper shows that DEA can make an important contribution to this problem and discusses how a DEA-based model can be used to determine an optimal input/output reduction plan. An application in the banking sector with limitations on IT investment shows the usefulness of the proposed method.
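As background, the sketch below solves the standard input-oriented CCR efficiency model as a small linear program; the four hypothetical units are invented, and the paper's centralized input/output reduction model extends, rather than equals, this basic building block.

```python
# Standard input-oriented CCR DEA model, solved per DMU as a linear program.
import numpy as np
from scipy.optimize import linprog

# Hypothetical data: 4 decision-making units (e.g. bank branches),
# 2 inputs each and a single output.
X = np.array([[4.0, 3.0], [7.0, 3.0], [8.0, 1.0], [4.0, 2.0]])  # inputs
Y = np.array([[1.0], [1.0], [1.0], [1.0]])                      # outputs

def ccr_efficiency(o, X, Y):
    n = X.shape[0]                                   # number of DMUs
    c = np.r_[1.0, np.zeros(n)]                      # minimise theta
    A_ub = np.vstack([
        np.c_[-X[o], X.T],                           # composite inputs <= theta * own inputs
        np.c_[np.zeros(Y.shape[1]), -Y.T],           # composite outputs >= own outputs
    ])
    b_ub = np.r_[np.zeros(X.shape[1]), -Y[o]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
    return res.x[0]                                  # efficiency score theta*

for o in range(len(X)):
    print(f"DMU {o}: efficiency = {ccr_efficiency(o, X, Y):.3f}")
```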

Relevance: 80.00%

Abstract:

The Semantic Web relies on carefully structured, well-defined data to allow machines to communicate and understand one another. In many domains (e.g. geospatial) the data being described contains some uncertainty, often due to incomplete knowledge; meaningful processing of this data requires these uncertainties to be carefully analysed and integrated into the process chain. Currently, within the Semantic Web there is no standard mechanism for interoperable description and exchange of uncertain information, which renders the automated processing of such information implausible, particularly where error must be considered and captured as it propagates through a processing sequence. In particular, we adopt a Bayesian perspective and focus on the case where the inputs/outputs are naturally treated as random variables. This paper discusses a solution to the problem in the form of the Uncertainty Markup Language (UncertML). UncertML is a conceptual model, realised as an XML schema, that allows uncertainty to be quantified in a variety of ways, i.e. realisations, statistics and probability distributions. UncertML is based upon a soft-typed XML schema design that provides a generic framework from which any statistic or distribution may be created. Making extensive use of Geography Markup Language (GML) dictionaries, UncertML provides a collection of definitions for common uncertainty types. Containing both written descriptions and mathematical functions, encoded as MathML, the definitions within these dictionaries provide a robust mechanism for defining any statistic or distribution and can be easily extended. Uniform Resource Identifiers (URIs) are used to introduce semantics to the soft-typed elements by linking to these dictionary definitions. The INTAMAP (INTeroperability and Automated MAPping) project provides a use case for UncertML. This paper demonstrates how observation errors can be quantified using UncertML and wrapped within an Observations & Measurements (O&M) Observation. The interpolation service uses the information within these observations to influence the prediction outcome. The output uncertainties may be encoded in a variety of UncertML types, e.g. a series of marginal Gaussian distributions, a set of statistics, such as the first three marginal moments, or a set of realisations from a Monte Carlo treatment. Quantifying and propagating uncertainty in this way allows such interpolation results to be consumed by other services. This could form part of a risk management chain or a decision support system, and ultimately paves the way for complex data processing chains in the Semantic Web.
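As a rough illustration of the kind of encoding described, the snippet below assembles a small Gaussian-distribution fragment in the UncertML style using Python's standard XML library; the namespace URI and element names are assumptions for illustration and should be checked against the actual UncertML schema and its GML dictionaries.

```python
# Illustrative only: element and namespace names approximate the UncertML style
# described above and are not guaranteed to match the official schema.
import xml.etree.ElementTree as ET

UN = "http://www.uncertml.org/2.0"   # placeholder namespace URI
ET.register_namespace("un", UN)

# A marginal Gaussian describing the uncertainty of one interpolated value.
dist = ET.Element(f"{{{UN}}}GaussianDistribution")
ET.SubElement(dist, f"{{{UN}}}mean").text = "12.4"
ET.SubElement(dist, f"{{{UN}}}variance").text = "0.36"

print(ET.tostring(dist, encoding="unicode"))
```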

Relevance: 80.00%

Abstract:

One of the aims of the Science and Technology Committee (STC) of the Group on Earth Observations (GEO) was to establish a GEO Label: a label to certify geospatial datasets and their quality. As proposed, the GEO Label will be used as a value indicator for geospatial data and datasets accessible through the Global Earth Observation System of Systems (GEOSS). It is suggested that the development of such a label will significantly improve user recognition of the quality of geospatial datasets and that its use will help promote trust in datasets that carry the established GEO Label. Furthermore, the GEO Label is seen as an incentive to data providers. At the moment GEOSS contains a large amount of data and is constantly growing. Taking this into account, a GEO Label could assist in searching by providing users with visual cues of dataset quality and possibly relevance; a GEO Label could effectively stand as a decision support mechanism for dataset selection. Currently our project, GeoViQua, together with EGIDA and ID-03, is undertaking research to define and evaluate the concept of a GEO Label. The development and evaluation process will be carried out in three phases. In Phase I we have conducted an online survey (GEO Label Questionnaire) to identify the initial user and producer views on a GEO Label and its potential role. In Phase II we will conduct a further study presenting some GEO Label examples based on Phase I, and will elicit feedback on these examples under controlled conditions. In Phase III we will create physical prototypes which will be used in a human subject study. The most successful prototypes will then be put forward as potential GEO Label options. At the moment we are in Phase I, where we developed an online questionnaire to collect the initial GEO Label requirements and to identify the role that a GEO Label should serve from the user and producer standpoint. The GEO Label Questionnaire consists of generic questions to identify whether users and producers believe a GEO Label is relevant to geospatial data; whether they want a single "one-for-all" label or separate labels that each serve a particular role; the function that would be most relevant for a GEO Label to carry; and the functionality that users and producers would like to see from the common rating and review systems they use. To distribute the questionnaire, relevant user and expert groups were contacted at meetings or by email. At this stage we have successfully collected over 80 valid responses from geospatial data users and producers. This communication will provide a comprehensive analysis of the survey results, indicating to what extent the users surveyed in Phase I value a GEO Label, and suggesting in what directions a GEO Label may develop. Potential GEO Label examples based on the results of the survey will be presented for use in Phase II.

Relevance: 80.00%

Abstract:

In India, more than one third of the population do not currently have access to modern energy services. Biomass to energy, known as bioenergy, has immense potential for addressing India’s energy poverty. Small scale decentralised bioenergy systems require low investment compared to other renewable technologies and have environmental and social benefits over fossil fuels. Though they have historically been promoted in India through favourable policies, many studies argue that the sector’s potential is underutilised due to sustainable supply chain barriers. Moreover, a significant research gap exists. This research addresses the gap by analysing the potential sustainable supply chain risks of decentralised small scale bioenergy projects. This was achieved through four research objectives, using various research methods along with multiple data collection techniques. Firstly, a conceptual framework was developed to identify and analyse these risks. The framework is founded on existing literature and gathered inputs from practitioners and experts. Following this, sustainability and supply chain issues within the sector were explored. Sustainability issues were collated into 27 objectives, and supply chain issues were categorised according to related processes. Finally, the framework was validated against an actual bioenergy development in Jodhpur, India. Applying the framework to the action research project had some significant impacts upon the project’s design. These include the development of water conservation arrangements, the insertion of auxiliary arrangements, measures to increase upstream supply chain resilience, and the development of a first aid action plan. More widely, the developed framework and identified issues will help practitioners to take necessary precautionary measures and address them quickly and cost effectively. The framework contributes to the bioenergy decision support system literature and the sustainable supply chain management field by incorporating risk analysis and introducing the concept of global and organisational sustainability in supply chains. The sustainability issues identified contribute to existing knowledge through the exploration of a small scale and developing country context. The analysis gives new insights into potential risks affecting the whole bioenergy supply chain.

Relevance: 80.00%

Abstract:

The use of digital games and gamification has demonstrable potential to improve many aspects of how businesses provide training to staff, operate, and communicate with consumers. However, a need still exists for the benefits and potential of adopting games and gamification to be effectively communicated to decision-makers across sectors. This article provides a structured review of existing literature on the use of games in the business sector, seeking to consolidate findings to address research questions regarding their perception and proven efficacy, and to identify key areas for future work. The findings consolidate evidence showing serious games can have a positive and valuable impact in multiple areas of a business, including training, decision-support, and consumer outreach. They also highlight the challenges and pitfalls of applying serious games and gamification principles within a business context, and discuss the implications of development and evaluation methodologies on the success of a game-based solution.

Relevance: 80.00%

Abstract:

One of the main challenges of classifying clinical data is determining how to handle missing features. Most research favours imputing missing values or discarding records that include missing data, both of which can degrade accuracy when missing values exceed a certain level. In this research we propose a methodology to handle data sets with a large percentage of missing values and with high variability in which particular data are missing. Feature selection is performed by picking variables sequentially in order of maximum correlation with the dependent variable and minimum correlation with the variables already selected. Classification models are generated individually for each test case based on its particular feature set and the matching data values available in the training population. The method was applied to real patients' anonymised mental-health data, where the task was to predict the suicide risk judgement clinicians would give for each patient's data, with eleven possible outcome classes: zero to ten, representing no risk to maximum risk. The results compare favourably with alternative methods and have the advantage of ensuring explanations of risk are based only on the data given, not imputed data. This is important for clinical decision support systems using human expertise for modelling and explaining predictions.
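A minimal sketch of the sequential, correlation-driven selection heuristic described above is given below; the relevance-minus-redundancy scoring rule and the toy data frame are illustrative assumptions rather than the paper's exact formulation.

```python
# Sketch of sequential selection: maximise correlation with the target while
# minimising correlation with the variables already chosen.
import numpy as np
import pandas as pd

def select_features(df, target, k=3):
    candidates = [c for c in df.columns if c != target]
    selected = []
    for _ in range(k):
        best, best_score = None, -np.inf
        for c in candidates:
            # pandas' corr() ignores missing pairs, so partially missing columns still score
            relevance = abs(df[c].corr(df[target]))
            redundancy = max((abs(df[c].corr(df[s])) for s in selected), default=0.0)
            score = relevance - redundancy       # simple trade-off; the paper's rule may differ
            if score > best_score:
                best, best_score = c, score
        selected.append(best)
        candidates.remove(best)
    return selected

# Toy usage: random columns stand in for the anonymised clinical variables,
# "risk" for the 0-10 clinician risk judgement.
rng = np.random.default_rng(1)
df = pd.DataFrame(rng.normal(size=(100, 6)), columns=[f"x{i}" for i in range(5)] + ["risk"])
print(select_features(df, "risk", k=3))
```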

Relevance: 80.00%

Abstract:

Failure to detect patients at risk of attempting suicide can result in tragic consequences. Identifying risks earlier and more accurately helps prevent serious incidents occurring and is the objective of the GRiST clinical decision support system (CDSS). One of the problems it faces is high variability in the type and quantity of data submitted for patients, who are assessed in multiple contexts along the care pathway. Although GRiST identifies up to 138 patient cues to collect, only about half of them are relevant for any one patient and their roles may not be for risk evaluation but more for risk management. This paper explores the data collection behaviour of clinicians using GRiST to see whether it can elucidate which variables are important for risk evaluations and when. The GRiST CDSS is based on a cognitive model of human expertise manifested by a sophisticated hierarchical knowledge structure or tree. This structure is used by the GRiST interface to provide top-down controlled access to the patient data. Our research explores relationships between the answers given to these higher-level 'branch' questions to see whether they can help direct assessors to the most important data, depending on the patient profile and assessment context. The outcome is a model for dynamic data collection driven by the knowledge hierarchy. It has potential for improving other clinical decision support systems operating in domains with high dimensional data that are only partially collected and in a variety of combinations.
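The idea of letting answers to higher-level branch questions decide which lower-level cues are requested can be sketched as follows; the branch names, cue names and the simple "only descend into flagged branches" rule are invented for illustration and are not GRiST's actual knowledge tree or logic.

```python
# Toy hierarchy: each top-level branch question guards a set of leaf cues.
# Names and rule are hypothetical, not GRiST's real knowledge structure.
tree = {
    "current intention": ["plans made", "means available"],
    "past behaviour": ["previous attempts", "self-harm history"],
    "social context": ["living alone", "recent loss"],
}

def cues_to_collect(branch_answers, knowledge_tree):
    """Return leaf cues whose parent branch the assessor has flagged as relevant."""
    cues = []
    for branch, leaves in knowledge_tree.items():
        if branch_answers.get(branch) == "relevant":
            cues.extend(leaves)
    return cues

answers = {"current intention": "relevant", "social context": "not relevant"}
print(cues_to_collect(answers, tree))   # -> ['plans made', 'means available']
```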

Relevance: 80.00%

Abstract:

Crowdsourcing platforms that attract a large pool of potential workforce allow organizations to reduce permanent staff levels. However, managing this "human cloud" requires new management models and skills. Therefore, Information Technology (IT) service providers engaging in crowdsourcing need to develop new capabilities to successfully utilize crowdsourcing in delivering services to their clients. To explore these capabilities we collected qualitative data from focus groups with crowdsourcing leaders at a large multinational technology organization. The new capabilities we identified stem from the need of the traditional service provider to assume a "client" role in the crowdsourcing context, while still acting as a "vendor" in providing services to the end client. This paper expands the research on vendor capabilities and IT outsourcing and offers important insights to organizations that are experimenting with, or considering, crowdsourcing.

Relevance: 80.00%

Abstract:

This paper considers the problem of low-dimensional visualisation of very high-dimensional information sources for the purpose of situation awareness in the maritime environment. In response to the requirement for human decision-support aids that reduce information overload in the below-water maritime domain (specifically, for data amenable to inter-point relative similarity measures), we are investigating a preliminary prototype topographic visualisation model. The focus of the current paper is on the mathematical problem of exploiting a relative dissimilarity representation of signals in a visual informatics mapping model, driven by real-world sonar systems. A realistic noise model is explored and incorporated into non-linear and topographic visualisation algorithms building on the approach of [9]. Concepts are illustrated using a real-world dataset of 32 hydrophones monitoring a shallow-water environment in which targets are present and dynamic.
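A minimal sketch of projecting a pairwise-dissimilarity matrix onto a 2-D display is shown below; random signals stand in for the 32 hydrophone records, and plain metric multidimensional scaling stands in for the noise-aware topographic model investigated in the paper.

```python
# Project a relative-dissimilarity representation to 2-D for visual inspection.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
signals = rng.normal(size=(32, 1024))                  # stand-in for 32 hydrophone records
D = squareform(pdist(signals, metric="correlation"))   # pairwise relative dissimilarities

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(D)                          # one 2-D point per hydrophone
print(coords.shape)                                    # (32, 2)
```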

Relevance: 80.00%

Abstract:

This paper presents a new, dynamic feature representation method for high-value parts consisting of complex and intersecting features. The method first extracts features from the CAD model of a complex part. Then the dynamic status of each feature is established between the various operations to be carried out during the whole manufacturing process. Each manufacturing and verification operation can be planned and optimized using the real conditions of a feature, thus enhancing accuracy, traceability and process control. The dynamic feature representation is complementary to the design models used as the underlying basis in current CAD/CAM and decision support systems.
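The notion of a feature whose status evolves across manufacturing and verification operations can be sketched with a simple data structure; the fields and status values below are illustrative assumptions, not the paper's representation.

```python
# Hypothetical sketch: a CAD-derived feature carrying a per-operation status.
from dataclasses import dataclass, field

@dataclass
class DynamicFeature:
    name: str                                    # e.g. a hole extracted from the CAD model
    nominal: dict                                # nominal dimensions from the design model
    status: dict = field(default_factory=dict)   # operation name -> current state of the feature

hole = DynamicFeature("hole_7", {"diameter_mm": 12.0, "depth_mm": 30.0})
hole.status["roughing"] = "partially machined"
hole.status["finishing"] = "machined to nominal size"
hole.status["inspection"] = "verified"
print(hole.status)
```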

Relevance: 80.00%

Abstract:

Objectives: To develop a decision support system (DSS), myGRaCE, that integrates service user (SU) and practitioner expertise about mental health and associated risks of suicide, self-harm, harm to others, self-neglect, and vulnerability. The intention is to help SUs assess and manage their own mental health collaboratively with practitioners. Methods: An iterative process involving interviews, focus groups, and agile software development with 115 SUs, to elicit and implement myGRaCE requirements. Results: Findings highlight shared understanding of mental health risk between SUs and practitioners that can be integrated within a single model. However, important differences were revealed in SUs' preferred process of assessing risks and safety, which are reflected in the distinctive interface, navigation, tool functionality and language developed for myGRaCE. A challenge was how to provide flexible access without overwhelming and confusing users. Conclusion: The methods show that practitioner expertise can be reformulated in a format that simultaneously captures SU expertise, to provide a tool highly valued by SUs. A stepped process adds necessary structure to the assessment, each step with its own feedback and guidance. Practice Implications: The GRiST web-based DSS (www.egrist.org) links and integrates myGRaCE self-assessments with GRiST practitioner assessments for supporting collaborative and self-managed healthcare.

Relevance: 80.00%

Abstract:

The use of digital games and gamification has demonstrated potential to improve many aspects of how businesses provide training to staff and communicate with consumers. However, there is still a need for a better understanding of how the adoption of games and gamification would influence the process of decision-making in organisations across different industries. This article provides a structured review of existing literature on the use of games in the business environment, and seeks to consolidate findings to address research questions regarding their perception and proven efficacy, and to identify key areas for future work. The findings highlight that serious games can have positive and effective impacts in multiple areas of a business, including training, decision-support, and consumer outreach. They also emphasise the challenges and pitfalls of applying serious games and gamification principles within a business context, and discuss the implications of development and evaluation methodologies on the success of a game-based solution.

Relevance: 80.00%

Abstract:

Introduction: There is increasing evidence that electronic prescribing (ePrescribing) or computerised provider/physician order entry (CPOE) systems can improve the quality and safety of healthcare services. However, it has also become clear that their implementation is not straightforward and may create unintended or undesired consequences once in use. In this context, qualitative approaches have been particularly useful and their interpretative synthesis could make an important and timely contribution to the field. This review will aim to identify, appraise and synthesise qualitative studies on ePrescribing/CPOE in hospital settings, with or without clinical decision support. Methods and analysis: Data sources will include the following bibliographic databases: MEDLINE, MEDLINE In Process, EMBASE, PsycINFO, Social Policy and Practice via Ovid, CINAHL via EBSCO, The Cochrane Library (CDSR, DARE and CENTRAL databases), Nursing and Allied Health Sources, Applied Social Sciences Index and Abstracts via ProQuest and SCOPUS. In addition, other sources will be searched for ongoing studies (ClinicalTrials.gov) and grey literature: Healthcare Management Information Consortium, Conference Proceedings Citation Index (Web of Science) and Sociological abstracts. Studies will be independently screened for eligibility by 2 reviewers. Qualitative studies, either standalone or in the context of mixed-methods designs, reporting the perspectives of any actors involved in the implementation, management and use of ePrescribing/CPOE systems in hospital-based care settings will be included. Data extraction will be conducted by 2 reviewers using a piloted form. Quality appraisal will be based on criteria from the Critical Appraisal Skills Programme checklist and Standards for Reporting Qualitative Research. Studies will not be excluded based on quality assessment. A postsynthesis sensitivity analysis will be undertaken. Data analysis will follow the thematic synthesis method. Ethics and dissemination: The study does not require ethical approval as primary data will not be collected. The results of the study will be published in a peer-reviewed journal and presented at relevant conferences.