825 results for decision support tool
Abstract:
This thesis presents a sequential pattern-based model (PMM) to detect news topics on Twitter, a popular microblogging platform. PMM captures key topics and measures their importance using pattern properties and Twitter-specific characteristics. The study shows that PMM outperforms traditional term-based models and could potentially be implemented as a decision support system. The research contributes to news topic detection and addresses the challenging problem of extracting information from short, noisy text.
Abstract:
A smart residential load transfer system is discussed for dynamically reducing voltage unbalance along a low voltage distribution feeder. In this scheme, residential loads can be transferred from one phase to another to minimize the voltage unbalance along the feeder. Each house is supplied through a static transfer switch and a controller. A master controller, installed at the transformer, observes the power consumption of each house and determines which house(s) should be transferred from the initially connected phase to another in order to keep the voltage unbalance to a minimum. The performance of the smart load transfer scheme is demonstrated by simulations.
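For reference (the abstract does not state which unbalance metric the master controller minimises), a commonly used measure is the voltage unbalance factor defined from the symmetrical (Fortescue) components of the three phase voltages; a minimal sketch of that definition is given below.

```latex
% Voltage unbalance factor (VUF) -- the standard sequence-component definition,
% shown here for illustration; the scheme in the abstract may use another metric.
% V_a, V_b, V_c are the phase voltage phasors and a = e^{j 2\pi/3}.
\[
V_{+} = \tfrac{1}{3}\big(V_a + a V_b + a^{2} V_c\big), \qquad
V_{-} = \tfrac{1}{3}\big(V_a + a^{2} V_b + a V_c\big),
\]
\[
\mathrm{VUF}\,(\%) = \frac{\lvert V_{-}\rvert}{\lvert V_{+}\rvert} \times 100 .
\]
```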
Abstract:
A novel intelligent online demand management system for peak load management in low voltage residential distribution networks, based on the smart grid concept, is discussed in this chapter. The system also regulates the network voltage, balances the power across the three phases and coordinates the energy storage within the network. The method uses low-cost controllers with two-way communication interfaces, installed in customers' premises and at distribution transformers, to manage the peak load while maximizing customer satisfaction. A multi-objective decision-making process is proposed to select the load(s) to be delayed or controlled. The efficacy of the proposed control system is verified by a MATLAB-based simulation that includes detailed modeling of residential loads and the network.
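The abstract does not detail how the multi-objective selection is carried out; the sketch below illustrates one simple possibility, a greedy weighted-sum score over hypothetical objectives (peak reduction, customer inconvenience, voltage benefit). All field names, weights and thresholds are assumptions for illustration, not the chapter's actual formulation.

```python
from dataclasses import dataclass

@dataclass
class ControllableLoad:
    # All fields are illustrative; the chapter's actual objectives may differ.
    house_id: str
    peak_reduction_kw: float   # expected demand reduction if the load is delayed
    inconvenience: float       # 0..1, estimated customer dissatisfaction
    voltage_benefit: float     # 0..1, expected improvement in the voltage profile

def select_loads_to_delay(loads, kw_target, w_peak=0.5, w_volt=0.3, w_incv=0.2):
    """Greedy weighted-sum ranking: pick loads with the best trade-off between
    peak reduction, voltage benefit and customer inconvenience, until the
    requested demand reduction target is met."""
    scored = sorted(
        loads,
        key=lambda l: w_peak * l.peak_reduction_kw
                      + w_volt * l.voltage_benefit
                      - w_incv * l.inconvenience,
        reverse=True,
    )
    selected, total = [], 0.0
    for load in scored:
        if total >= kw_target:
            break
        selected.append(load)
        total += load.peak_reduction_kw
    return selected
```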
Abstract:
An important aspect of decision support systems is applying sophisticated and flexible statistical models to real datasets and communicating the results to decision makers in interpretable ways. An important class of problem is the modelling of incidence, such as fire or disease. Models of incidence known as point processes or Cox processes are particularly challenging because they are 'doubly stochastic', i.e. obtaining the probability mass function of incidents requires two integrals to be evaluated. Existing approaches either use simple models that obtain predictions from plug-in point estimates and do not distinguish between Cox processes and density estimation, but do use sophisticated 3D visualization for interpretation; or they employ sophisticated non-parametric Bayesian Cox process models but do not use visualization to render complex spatio-temporal forecasts interpretable. The contribution here is to fill this gap by inferring predictive distributions of log-Gaussian Cox processes and rendering them using state-of-the-art 3D visualization techniques. This requires performing inference on an approximation of the model over a large discretized grid and adapting an existing spatial-diurnal kernel to the log-Gaussian Cox process context.
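In outline, the standard log-Gaussian Cox process and its grid approximation look as follows; this is a generic sketch of the model class named in the abstract, and the thesis-specific spatial-diurnal kernel is not reproduced here.

```latex
% Log-Gaussian Cox process: the intensity is driven by a latent Gaussian process.
\[
f(s,t) \sim \mathcal{GP}\big(\mu(s,t),\, k\big((s,t),(s',t')\big)\big), \qquad
\lambda(s,t) = \exp\!\big(f(s,t)\big).
\]
% On a discretized grid with cells \Delta_j of volume |\Delta_j|, the count in
% each cell is approximately Poisson with rate \lambda_j |\Delta_j|; the model is
% "doubly stochastic" because inference must integrate over both the latent f
% and the Poisson counts.
\[
y_j \mid f \;\sim\; \mathrm{Poisson}\!\big(\lambda_j\,\lvert\Delta_j\rvert\big),
\qquad \lambda_j = \exp(f_j).
\]
```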
Abstract:
The enabling role of information technology (IT) makes it a critical resource to invest in to achieve higher economic growth. Consequently, the pervasive use of IT among organizations in developing countries is gaining rapid momentum. Today, IT is no longer a support tool; it is a strategic asset that fosters sustainable competitive advantage and a driver of improved business performance. At the national level, the effective use of IT drives economic performance and social transformation. This makes IT resources a revolutionizing mechanism capable of bringing efficiency to all levels of the economy. But evolution in IT is occurring at a very rapid pace. Despite the many opportunities that arise from these new developments, there is a growing concern that such rapid innovation can be detrimental to the environment. This situation puts a critical question on the table: Is Your IT Green?
Abstract:
Aims: Pathology notification to a Cancer Registry is regarded as the most valid information for confirming a diagnosis of cancer. In view of the importance of pathology data, an automatic medical text analysis system (Medtex) is being developed to perform electronic Cancer Registry data extraction and coding of important clinical information embedded within pathology reports. Methods: The system automatically scans HL7 messages received from a Queensland pathology information system and analyses the reports for terms and concepts relevant to a cancer notification. A multitude of data items for cancer notification, such as primary site, histological type, stage, and other synoptic data, are classified by the system. The underlying extraction and classification technology is based on SNOMED CT, and the Queensland Cancer Registry business rules and the International Classification of Diseases for Oncology (Version 3) have been incorporated. Results: The cancer notification services show that the classification of notifiable reports can be achieved with sensitivities of 98% and specificities of 96%, while cancer notification items such as basis of diagnosis, histological type and grade, primary site and laterality can be extracted with an overall accuracy of 80%. In the case of lung cancer staging, the automated stages produced were accurate enough for population level research and for indicative staging prior to multidisciplinary team meetings. Medtex also allows for detailed tumour stream synoptic reporting. Conclusions: Medtex demonstrates how medical free-text processing could enable the automation of some Cancer Registry processes. Over 70% of Cancer Registry coding resources are devoted to information acquisition. The development of a clinical decision support system to unlock information from medical free-text could significantly reduce costs arising from duplicated processes and enable improved decision support, enhancing the efficiency and timeliness of cancer information for Cancer Registries.
Abstract:
This Perspective reflects on the withdrawal of the Liverpool Care Pathway in the UK and its implications for Australia. Integrated care pathways are documents that outline the essential steps of multidisciplinary care in addressing a specific clinical problem. They can be used to introduce best clinical practice and to ensure that the most appropriate management occurs at the most appropriate time and is provided by the most appropriate health professional. By providing clear instructions, decision support and a framework for clinician-patient interactions, care pathways guide the systematic provision of best evidence-based care. The Liverpool Care Pathway (LCP) is an example of an integrated care pathway, designed in the 1990s to guide care for people with cancer who are in their last days of life and are expected to die in hospital. The pathway evolved out of a recognised local need to better support non-specialist palliative care providers caring for patients dying of cancer within their inpatient units. Historically, despite the large number of people in acute care settings whose treatment intent is palliative, dying patients receiving general hospital acute care tended to receive insufficient attention from senior medical and nursing staff. The quality of end-of-life care was considered inadequate, and it was felt that much could be learned from the way patients were cared for by palliative care services. The LCP was a strategy developed to improve end-of-life care for cancer patients, based on the care received by those dying in the palliative care setting.
Abstract:
This project provides a costed and appraised set of management strategies for mitigating threats to species of conservation significance in the Pilbara IBRA bioregion of Western Australia (hereafter 'the Pilbara'). Conservation-significant species are those listed under federal or state legislation or international agreements, or considered likely to become threatened within the next 20 years. Here we report on 17 technically and socially feasible management strategies, drawn from the collective experience and knowledge of 49 experts and stakeholders in the ecology and management of the Pilbara region. We determine the relative ecological cost-effectiveness of each strategy, calculated as the expected benefit of management to the persistence of 53 key threatened native fauna and flora species divided by the expected cost of management. Finally, we provide decision support to assist prioritisation of the strategies on the basis of ecological cost-effectiveness.
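The cost-effectiveness calculation described above can be written compactly as follows; the notation, and the use of a persistence-probability gain as the benefit measure, are illustrative assumptions rather than the report's exact formulation.

```latex
% Cost-effectiveness of management strategy i: expected benefit summed over the
% 53 species, divided by the expected cost of implementing the strategy.
\[
CE_i = \frac{B_i}{C_i}, \qquad
B_i = \sum_{s=1}^{53} \big(P_s^{\text{with } i} - P_s^{\text{without}}\big),
\]
% where P_s denotes the estimated probability of persistence of species s
% (e.g. over a 20-year horizon) with and without strategy i, and C_i is the
% expected cost of strategy i.
```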
Abstract:
Objective: To describe women's reports of the model of care options General Practitioners (GPs) discussed with them at the first pregnancy consultation, and women's self-reported role in decision-making about model of care. Methods: Women who had recently given birth responded to survey items about the models of care GPs discussed, their role in final decision-making, and socio-demographic, obstetric history and early pregnancy characteristics. Results: The proportion of women with whom each model of care was discussed varied between 8.2% (private midwifery care with home birth) and 64.4% (GP shared care). Only 7.7% of women reported that all seven models were discussed. Exclusive discussion of private obstetric care, or of all public models, was common, and women's health insurance status was the strongest predictor of whether each model was discussed. Most women (82.6%) reported active involvement in final decision-making about model of care. Conclusion: Although most women report involvement in maternity model of care decisions, they remain largely uninformed about the breadth of available model of care options. Practical implications: Strategies that facilitate women's access to information on the differentiating features and outcomes of all models of care should be prioritized to better ensure equitable and quality decisions.
Abstract:
Effective response by government and individuals to the risk of land degradation requires an understanding of regional climate variations and of the impacts of climate and management on the condition and productivity of land and vegetation resources. Analysis of past land degradation and climate variability provides some understanding of vulnerability to current and future climate changes and of the information needed for more sustainable management. We describe experience in providing climate risk assessment information for managing the risk of land degradation in north-eastern Australian arid and semi-arid regions used for extensive grazing. We note, however, that information based on historical climate variability, which has been relied on in the past, will now also have to factor in the influence of human-induced climate change. Examples illustrate trends in climate for Australia over the past decade and their impacts on indicators of resource condition. The analysis highlights the benefits of insights into past trends and variability in rainfall and other climate variables based on extended historical databases. This understanding in turn supports more reliable regional climate projections and decision support information for governments and land managers to better manage the risk of land degradation now and in the future.
Abstract:
This paper details the implementation and trialling of a prototype in-bucket bulk density monitor on a production dragline. Bulk density information can provide feedback to mine planning and scheduling to improve blasting and, consequently, facilitate optimal bucket sizing. The bulk density measurement builds upon outcomes presented in the AMTC2009 paper titled 'Automatic In-Bucket Volume Estimation for Dragline Operations' and utilises payload information from a commercial dragline monitor. Whereas the previous paper explained the algorithms and theoretical basis for the system design and the scaled model testing, this paper focuses on the full-scale implementation and the challenges involved.
Abstract:
Rating systems are used by many websites to allow customers to rate available items according to their own experience. Reputation models are then used to aggregate the available ratings into reputation scores for items. A problem with current reputation models is that they focus on improving accuracy for sparse datasets without considering their performance on dense datasets. In this paper, we propose a novel reputation model that generates more accurate reputation scores for items on any dataset, whether dense or sparse. The proposed model is a weighted average method in which the weights are generated using the normal distribution. Experiments show promising results for the proposed model compared with state-of-the-art models on both sparse and dense datasets.
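The abstract does not spell out the exact weighting scheme; the sketch below shows one plausible reading, in which a Gaussian curve centred on the mean rating assigns each rating its weight before averaging. The function name, the choice of centring and all parameters are assumptions for illustration, not the paper's formulation.

```python
import math

def reputation_score(ratings):
    """Weighted average of ratings where each rating's weight comes from a
    normal (Gaussian) density centred on the mean rating -- an illustrative
    reading of 'weights generated using the normal distribution'."""
    n = len(ratings)
    mean = sum(ratings) / n
    std = math.sqrt(sum((r - mean) ** 2 for r in ratings) / n) or 1.0
    weights = [math.exp(-((r - mean) ** 2) / (2 * std ** 2)) for r in ratings]
    return sum(w * r for w, r in zip(weights, ratings)) / sum(weights)

# Example on a 1-5 scale: a dense rating set and a sparse one.
print(reputation_score([5, 4, 5, 4, 4, 5, 1]))   # the outlier 1 is down-weighted
print(reputation_score([3, 5]))                  # sparse case is still defined
```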
Abstract:
Many websites allow customers to rate items and then use those ratings to generate item reputations, which other users can later use for decision-making purposes. The aggregated value of the ratings for an item represents that item's reputation, and the accuracy of reputation scores matters because they are used to rank items. Most aggregation methods do not consider the frequency of distinct ratings, nor do they test how accurate their reputation scores are on datasets of differing sparsity. In this work we propose a new aggregation method, which can be described as a weighted average where the weights are generated using the normal distribution. The evaluation results show that the proposed method outperforms state-of-the-art methods on datasets of varying sparsity.
Abstract:
Twitter is a very popular social network website that allows users to publish short posts called tweets. Users on Twitter can follow other users, called followees, and see their followees' posts on their Twitter home page. As the number of followees grows, so does the number of tweets on the user's page, creating an information overload problem. Twitter, like other social network websites, attempts to elevate the tweets a user is expected to be interested in so as to increase overall engagement; however, it still ranks tweets in chronological order. The tweet ranking problem has been addressed in much recent research; a sub-problem is ranking the tweets of a single followee. In this paper we represent tweets using several features and propose a weighted version of the well-known Borda Count (BC) voting system to combine several ranked lists into one. Gradient descent and collaborative filtering methods are employed to learn the optimal weights. We also employ the Baldwin voting system for blending features (or predictors). Finally, we use a greedy feature selection algorithm to select the combination of features that yields the best results.
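A minimal sketch of a weighted Borda Count aggregation over per-feature rankings is shown below. The feature names and weights are placeholders, and the paper's weight learning via gradient descent and collaborative filtering is not reproduced here.

```python
def weighted_borda(rankings, weights):
    """Combine several ranked lists of tweet ids into one using a weighted
    Borda Count: in each list the top item gets n-1 points, the next n-2,
    and so on; points are scaled by that list's weight and summed per tweet."""
    scores = {}
    for ranking, weight in zip(rankings, weights):
        n = len(ranking)
        for position, tweet_id in enumerate(ranking):
            scores[tweet_id] = scores.get(tweet_id, 0.0) + weight * (n - 1 - position)
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical per-feature rankings of four tweets (e.g. by recency, by
# retweet count, by textual similarity to the user's interests).
by_recency    = ["t3", "t1", "t4", "t2"]
by_retweets   = ["t1", "t3", "t2", "t4"]
by_similarity = ["t1", "t4", "t3", "t2"]
print(weighted_borda([by_recency, by_retweets, by_similarity], [0.2, 0.5, 0.3]))
```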
Abstract:
Reputation systems are employed to provide users with advice on the quality of items on the Web, based on the aggregated value of user ratings. Recommender systems are used online to suggest items to users according to the users' expressed preferences. Yet a recommender system will endorse an item regardless of its reputation value. In this paper, we report on the incorporation of reputation models into recommender systems to enhance the accuracy of recommendations. The proposed method keeps the implementations of the recommender and reputation systems separate, for generality. Our experiments show that the proposed method can enhance the accuracy of existing recommender systems.
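A minimal sketch of one way such a combination could work: a recommender produces a personalised predicted rating, a separate reputation model produces a global score on the same scale, and the two are linearly blended before ranking. The blending weight and function names are assumptions for illustration, not the paper's method.

```python
def blended_score(predicted_rating, reputation_score, alpha=0.7):
    """Linear blend of a recommender's predicted rating for a (user, item) pair
    with the item's global reputation score, both on the same rating scale.
    alpha controls how much the personalised prediction dominates."""
    return alpha * predicted_rating + (1 - alpha) * reputation_score

def rerank(candidates, alpha=0.7):
    """candidates: list of (item_id, predicted_rating, reputation_score).
    Returns item ids ordered by the blended score."""
    return [item for item, _, _ in sorted(
        candidates,
        key=lambda c: blended_score(c[1], c[2], alpha),
        reverse=True,
    )]

# Example: an item with a high personalised prediction but a poor reputation
# can be out-ranked by a slightly lower prediction backed by a strong reputation.
print(rerank([("movie_a", 4.6, 2.1), ("movie_b", 4.3, 4.8)], alpha=0.6))
```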