874 results for decision support systems, GIS, interpolation, multiple regression
Abstract:
This thesis presents a project for the integrated management of hydraulic sanitation infrastructures in the Besòs river basin. Two sanitation systems (La Garriga and Granollers) were considered, together with their respective sewer networks and Wastewater Treatment Plants (EDAR, from the Catalan acronym), and a stretch of the Congost river, a tributary of the Besòs, as the receiving water body for their effluents. To this end, an Environmental Decision Support System (SSDA, from the Catalan acronym) is built and used. This tool incorporates water quality simulation models for the sewer systems, the treatment plants and the river as a way of extracting knowledge about the integrated management of these elements. This knowledge is subsequently conceptualised in the form of decision trees, which provide the user with the actions to take in the different real situations of day-to-day management.
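As a rough illustration of the decision-tree step described above, the sketch below trains a small tree on synthetic management scenarios; the feature names, thresholds and action labels are hypothetical placeholders, not the thesis's actual models or variables.

```python
# Minimal sketch (not the thesis's actual models): deriving an operational
# decision tree from simulated management scenarios. All feature names and
# action labels below are hypothetical illustrations.
from sklearn.tree import DecisionTreeClassifier, export_text
import numpy as np

rng = np.random.default_rng(0)

# Each row is one simulated scenario: [rainfall_mm, sewer_flow_m3s, wwtp_load_pct]
X = rng.uniform([0, 0.1, 40], [80, 2.0, 110], size=(200, 3))

# Hypothetical expert labelling of the recommended action per scenario.
def label(row):
    rain, flow, load = row
    if load > 100 or (rain > 50 and flow > 1.5):
        return "divert_to_storm_tank"
    if load > 85:
        return "reduce_inflow"
    return "normal_operation"

y = np.array([label(r) for r in X])

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=["rainfall_mm", "sewer_flow_m3s", "wwtp_load_pct"]))
```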
Abstract:
The effect of multiple sclerosis (MS) on the ability to identify emotional expressions in faces was investigated, and possible associations with patients' characteristics were explored. 56 non-demented MS patients and 56 healthy subjects (HS) with similar demographic characteristics performed an emotion recognition task (ERT) and the Benton Facial Recognition Test (BFRT), and answered the Hospital Anxiety and Depression Scale (HADS). Additionally, MS patients underwent a neurological examination and a comprehensive neuropsychological evaluation. The ERT consisted of 42 pictures of faces (depicting anger, disgust, fear, happiness, sadness, surprise and neutral expressions) from the NimStim set. An iViewX high-speed eye tracker was used to record eye movements during the ERT. Fixation times were calculated for two regions of interest (i.e., eyes and rest of the face). No significant differences were found between MS patients and HS on the ERT's behavioral and oculomotor measures. Bivariate and multiple regression analyses revealed significant associations between ERT behavioral performance and demographic, clinical, psychopathological, and cognitive measures.
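As a hedged illustration of the multiple regression analyses mentioned above, the sketch below fits an OLS regression of ERT accuracy on a few demographic and clinical predictors; all variable names and data are synthetic assumptions, not the study's dataset.

```python
# Illustrative sketch only: regressing emotion-recognition accuracy on
# demographic and clinical measures, in the spirit of the analyses above.
# Variable names and data are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 56
df = pd.DataFrame({
    "age": rng.uniform(20, 60, n),
    "education_years": rng.integers(8, 20, n),
    "edss": rng.uniform(0, 6, n),             # hypothetical disability score
    "hads_depression": rng.integers(0, 15, n),
})
# Simulated outcome: ERT accuracy (proportion correct).
df["ert_accuracy"] = (0.9 - 0.01 * df["edss"] - 0.005 * df["hads_depression"]
                      + rng.normal(0, 0.05, n)).clip(0, 1)

X = sm.add_constant(df[["age", "education_years", "edss", "hads_depression"]])
model = sm.OLS(df["ert_accuracy"], X).fit()
print(model.summary())
```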
Abstract:
Customer Relationship Management (CRM) software solutions and information support systems, known as Business Intelligence (BI), allow data to be collected and transformed into information and knowledge, which is vital for differentiating organisations in a globalised and constantly changing world. Building a corporate Data Warehouse is fundamental for organisations that use several operational systems, so that information can be aggregated. Fundação INATEL, a private foundation of public interest that is 100% state-owned, is an example of this type of organisation: it has a customer database of more than 250,000, operates in areas as diverse as Tourism, Culture and Sport, and relies on more than 25 autonomous IT systems. This work seeks to identify the benefits of implementing an Analytical CRM at Fundação INATEL. It presents a methodology for the implementation and proposes a data model for obtaining a single view of the customer, accessible to the whole organisation, so as to ensure full satisfaction and consequent loyalty to the INATEL brand. Making this information available will give Fundação INATEL a privileged position and will play a fundamental role in its economic sustainability.
Abstract:
A multi-scale framework for decision support is presented that uses a combination of experiments, models, communication, education and decision support tools to arrive at a realistic strategy to minimise diffuse pollution. Effective partnerships between researchers and stakeholders play a key part in successful implementation of this strategy. The Decision Support Matrix (DSM) is introduced as a set of visualisations that can be used at all scales, both to inform decision making and as a communication tool in stakeholder workshops. A demonstration farm is presented and one of its fields is taken as a case study. Hydrological and nutrient flow path models are used for event-based simulation (TOPCAT), catchment-scale modelling (INCA) and field-scale flow visualisation (TopManage). One of the DSMs, the Phosphorus Export Risk Matrix (PERM), is discussed in detail. The PERM was developed iteratively as a point of discussion in stakeholder workshops, as a decision support and education tool. The resulting interactive PERM contains a set of questions and proposed remediation measures that reflect both expert and local knowledge. Education and visualisation tools such as GIS, risk indicators, TopManage and the PERM are found to be invaluable in communicating improved farming practice to stakeholders. (C) 2008 Elsevier Ltd. All rights reserved.
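A minimal sketch of how a question-driven risk-matrix lookup of this kind might be structured is shown below; the questions, point scores and remediation advice are hypothetical placeholders, not the published PERM.

```python
# Minimal sketch of a risk-matrix lookup in the spirit of the PERM described
# above; the questions, scores and remediation advice are hypothetical,
# not the published matrix.
RISK_LEVELS = ["low", "moderate", "high", "very high"]

QUESTIONS = {
    "soil_p_index_high": 2,                 # points added if answer is "yes"
    "field_connected_to_watercourse": 3,
    "manure_spread_in_winter": 2,
    "steep_slope": 1,
}

REMEDIATION = {
    "high": "Introduce buffer strips and review fertiliser timing.",
    "very high": "Avoid winter spreading; consider cover crops and buffer strips.",
}

def phosphorus_export_risk(answers):
    """Score yes/no answers and map the total to a risk class."""
    score = sum(pts for q, pts in QUESTIONS.items() if answers.get(q))
    level = RISK_LEVELS[min(score // 2, len(RISK_LEVELS) - 1)]
    return level, REMEDIATION.get(level)

level, advice = phosphorus_export_risk({
    "soil_p_index_high": True,
    "field_connected_to_watercourse": True,
    "manure_spread_in_winter": False,
    "steep_slope": True,
})
print(level, "-", advice)
```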
Abstract:
Previous work has established the value of goal-oriented approaches to requirements engineering. Achieving clarity and agreement about stakeholders’ goals and assumptions is critical for building successful software systems and managing their subsequent evolution. In general, this decision-making process requires stakeholders to understand the implications of decisions outside the domains of their own expertise. Hence it is important to support goal negotiation and decision making with description languages that are both precise and expressive, yet easy to grasp. This paper presents work in progress to develop a pattern language for describing goal refinement graphs. The language has a simple graphical notation, which is supported by a prototype editor tool, and a symbolic notation based on modal logic.
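As an illustration of the kind of structure a goal refinement graph describes, here is a minimal sketch of an AND-refined goal tree in Python; the node fields and example goals are assumptions, not the paper's pattern language or notation.

```python
# Illustrative sketch of a goal-refinement graph as a simple data structure;
# the node types and example goals are hypothetical, not the paper's
# pattern language or graphical notation.
from dataclasses import dataclass, field

@dataclass
class Goal:
    name: str
    # AND-refinement: all subgoals must hold; an OR node would model alternatives.
    refinement: str = "AND"
    subgoals: list = field(default_factory=list)

    def refine(self, *children):
        self.subgoals.extend(children)
        return self

    def leaves(self):
        """Yield the unrefined goals, i.e. candidate requirements."""
        if not self.subgoals:
            yield self
        for g in self.subgoals:
            yield from g.leaves()

root = Goal("Maintain patient safety").refine(
    Goal("Alarm raised on anomaly").refine(
        Goal("Sensor readings monitored"),
        Goal("Threshold breach detected"),
    ),
    Goal("Operator notified within 5s"),
)
print([g.name for g in root.leaves()])
```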
Abstract:
A full assessment of para-virtualization is important, because without knowledge of the various overheads users cannot judge whether using virtualization is a good idea. In this paper we are interested in assessing the overheads of running various benchmarks on bare metal as well as under para-virtualization, and in the additional overheads of turning on monitoring and logging. The knowledge gained from assessing these benchmarks on the different systems will help a range of users understand the use of virtualization systems. We assess the overheads of using Xen, VMware, KVM and Citrix (see Table 1); these virtualization systems are used extensively by cloud users. We use various Netlib benchmarks, developed by the University of Tennessee at Knoxville (UTK) and Oak Ridge National Laboratory (ORNL). To assess these virtualization systems, we run the benchmarks on bare metal, then under para-virtualization, and finally with monitoring and logging turned on. The latter is important because users are interested in the Service Level Agreements (SLAs) offered by cloud providers, and logging is a means of assessing the services bought and used from commercial providers. We evaluate the virtualization systems on three different platforms: the Thamesblue supercomputer, the Hactar cluster and an IBM JS20 blade server (see Table 2), all of which are servers available at the University of Reading.
A functional virtualization system is multi-layered and is driven by its privileged components. A virtualization system can host multiple guest operating systems, each running in its own domain, and it schedules virtual CPUs and memory within each Virtual Machine (VM) to make the best use of the available resources; the guest operating system schedules each application accordingly. Virtualization can be deployed as full virtualization or para-virtualization. Full virtualization provides a total abstraction of the underlying physical system and creates a new virtual system in which guest operating systems can run; no modifications are needed in the guest OS or applications, i.e. they are unaware of the virtualized environment and run normally. Para-virtualization is OS-assisted virtualization: the guest operating systems running on the virtual machines are modified so that they are aware of running on virtualized hardware rather than bare hardware, which enables near-native performance. In para-virtualization, the device drivers in the guest operating system coordinate with the device drivers of the host operating system, reducing performance overheads. The use of para-virtualization [0] is intended to avoid the bottleneck associated with slow hardware interrupts that exists when full virtualization is employed.
It has been shown [0] that para-virtualization does not impose significant performance overhead in high-performance computing, and this in turn has implications for the use of cloud computing for hosting HPC applications. The "apparent" improvement in virtualization has led us to formulate the hypothesis that certain classes of HPC application should be able to execute in a cloud environment with minimal performance degradation. To support this hypothesis, it is first necessary to define exactly what is meant by a "class" of application, and secondly to observe application performance both within a virtual machine and when executing on bare hardware. A further potential complication is the need for cloud service providers to support Service Level Agreements (SLAs), so that system utilisation can be audited.
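A minimal sketch of the kind of timing harness such a bare-metal versus VM comparison needs is given below; the benchmark binary name is a placeholder, and this is not the paper's actual measurement setup.

```python
# Minimal sketch of a timing harness for comparing a benchmark run on bare
# metal versus inside a VM; the benchmark binary name ("./linpack") is a
# hypothetical placeholder, not the paper's methodology.
import statistics
import subprocess
import time

def time_benchmark(cmd, repeats=5):
    """Run `cmd` several times and return mean and stdev of wall-clock time."""
    timings = []
    for _ in range(repeats):
        start = time.perf_counter()
        subprocess.run(cmd, check=True, capture_output=True)
        timings.append(time.perf_counter() - start)
    return statistics.mean(timings), statistics.stdev(timings)

if __name__ == "__main__":
    mean_s, stdev_s = time_benchmark(["./linpack"])
    print(f"mean {mean_s:.3f}s  stdev {stdev_s:.3f}s")
    # Running the same harness on bare metal and inside the VM, the
    # para-virtualization overhead is roughly (vm_mean / bare_mean) - 1.
```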
Abstract:
Recent developments in the fields of veterinary epidemiology and economics are critically reviewed and assessed. The impacts of recent technological developments in diagnosis, genetic characterisation, data processing and statistical analysis are evaluated. It is concluded that the acquisition and availability of data remains the principal constraint on the application of available techniques in veterinary epidemiology and economics, especially at population level. As more commercial producers use computerised management systems, the availability of data for analysis within herds is improving. However, consistency of recording and diagnosis remains problematic. The recent trend towards national livestock databases, intended to reassure consumers of the safety and traceability of livestock products, offers potentially valuable sources of data that could lead to much more effective application of veterinary epidemiology and economics. These opportunities will be greatly enhanced if data from different sources, such as movement recording, official animal health programmes, quality assurance schemes, production recording and breed societies, can be integrated. However, in order to realise such integrated databases, it will be necessary to provide absolute control of user access to guarantee data security and confidentiality. The potential applications of integrated livestock databases in analysis, modelling, decision support, and providing management information for veterinary services and livestock producers are discussed. (c) 2004 Elsevier B.V. All rights reserved.
Abstract:
The influence matrix is used in ordinary least-squares applications for monitoring statistical multiple-regression analyses. Concepts related to the influence matrix provide diagnostics on the influence of individual data on the analysis - the analysis change that would occur by leaving one observation out, and the effective information content (degrees of freedom for signal) in any sub-set of the analysed data. In this paper, the corresponding concepts have been derived in the context of linear statistical data assimilation in numerical weather prediction. An approximate method to compute the diagonal elements of the influence matrix (the self-sensitivities) has been developed for a large-dimension variational data assimilation system (the four-dimensional variational system of the European Centre for Medium-Range Weather Forecasts). Results show that, in the boreal spring 2003 operational system, 15% of the global influence is due to the assimilated observations in any one analysis, and the complementary 85% is the influence of the prior (background) information, a short-range forecast containing information from earlier assimilated observations. About 25% of the observational information is currently provided by surface-based observing systems, and 75% by satellite systems. Low-influence data points usually occur in data-rich areas, while high-influence data points are in data-sparse areas or in dynamically active regions. Background-error correlations also play an important role: high correlation diminishes the observation influence and amplifies the importance of the surrounding real and pseudo observations (prior information in observation space). Incorrect specifications of background and observation-error covariance matrices can be identified, interpreted and better understood by the use of influence-matrix diagnostics for the variety of observation types and observed variables used in the data assimilation system. Copyright © 2004 Royal Meteorological Society
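In the ordinary least-squares setting referred to above, the influence matrix is the hat matrix H = X(X'X)^{-1}X', and the self-sensitivities are its diagonal elements, whose sum gives the degrees of freedom for signal. The sketch below computes them on synthetic data; it illustrates the OLS case only, not the four-dimensional variational assimilation system.

```python
# Minimal sketch of the OLS influence (hat) matrix H = X (X'X)^{-1} X',
# whose diagonal elements are the self-sensitivities discussed above.
# The data here are synthetic illustrations.
import numpy as np

rng = np.random.default_rng(2)
n, p = 50, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])

H = X @ np.linalg.solve(X.T @ X, X.T)   # hat matrix
self_sensitivities = np.diag(H)

# Their sum equals the degrees of freedom for signal (= number of parameters).
print(self_sensitivities.sum(), "==", p)
print("most influential observation:", self_sensitivities.argmax())
```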
Abstract:
Aim: To describe the geographical pattern of mean body size of the non-volant mammals of the Nearctic and Neotropics and evaluate the influence of five environmental variables that are likely to affect body size gradients. Location: The Western Hemisphere. Methods: We calculated mean body size (average log mass) values in 110 × 110 km cells covering the continental Nearctic and Neotropics. We also generated cell averages for mean annual temperature, range in elevation, their interaction, actual evapotranspiration, and the global vegetation index and its coefficient of variation. Associations between mean body size and environmental variables were tested with simple correlations and ordinary least squares multiple regression, complemented with spatial autocorrelation analyses and split-line regression. We evaluated the relative support for each multiple-regression model using AIC. Results: Mean body size increases to the north in the Nearctic and is negatively correlated with temperature. In contrast, across the Neotropics mammals are largest in the tropical and subtropical lowlands and smaller in the Andes, generating a positive correlation with temperature. Finally, body size and temperature are nonlinearly related in both regions, and split-line linear regression found temperature thresholds marking clear shifts in these relationships (Nearctic 10.9 °C; Neotropics 12.6 °C). The increase in body sizes with decreasing temperature is strongest in the northern Nearctic, whereas a decrease in body size in mountains dominates the body size gradients in the warmer parts of both regions. Main conclusions: We confirm previous work finding strong broad-scale Bergmann trends in cold macroclimates but not in warmer areas. For the latter regions (i.e. the southern Nearctic and the Neotropics), our analyses also suggest that both local and broad-scale patterns of mammal body size variation are influenced in part by the strong mesoscale climatic gradients existing in mountainous areas. A likely explanation is that reduced habitat sizes in mountains limit the presence of larger-sized mammals.
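As a hedged illustration of the split-line regression used above, the sketch below fits two linear segments on either side of a candidate threshold and selects the breakpoint with the lowest residual sum of squares; the data are synthetic, not the mammal dataset.

```python
# Illustrative sketch of split-line (piecewise) regression: fit two linear
# segments joined at a breakpoint and pick the breakpoint minimising the
# residual sum of squares. Synthetic data, not the study's dataset.
import numpy as np

rng = np.random.default_rng(3)
temperature = rng.uniform(-15, 30, 300)
# Synthetic "mean body size": declines with temperature below ~12 degC, flat above.
body_size = np.where(temperature < 12, 3.0 - 0.05 * temperature, 2.4)
body_size += rng.normal(0, 0.1, temperature.size)

def split_line_rss(x, y, threshold):
    rss = 0.0
    for mask in (x < threshold, x >= threshold):
        if mask.sum() < 2:
            return np.inf
        coeffs = np.polyfit(x[mask], y[mask], 1)
        rss += np.sum((y[mask] - np.polyval(coeffs, x[mask])) ** 2)
    return rss

candidates = np.linspace(temperature.min() + 1, temperature.max() - 1, 200)
best = min(candidates, key=lambda t: split_line_rss(temperature, body_size, t))
print(f"estimated temperature threshold: {best:.1f} degC")
```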
Abstract:
Clinical pathways have been adopted for various diseases in clinical departments for quality improvement, as a result of the standardization of medical activities in the treatment process. Using knowledge-based decision support on the basis of clinical pathways is a promising strategy for improving medical quality effectively. However, clinical pathway knowledge has not been fully integrated into the treatment process and thus cannot provide comprehensive support to actual work practice. This paper therefore proposes a knowledge-based clinical pathway management method that helps make use of clinical knowledge to support and optimize medical practice. We have developed a knowledge-based clinical pathway management system to demonstrate how clinical pathway knowledge can comprehensively support the treatment process. Experience from the use of this system shows that treatment quality can be effectively improved by the extracted and classified clinical pathway knowledge, the seamless integration of patient-specific clinical pathway recommendations with medical tasks, and the evaluation of pathway deviations for optimization.
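A minimal sketch of how patient-specific pathway recommendations might be matched against a patient record is given below; the conditions and recommended tasks are hypothetical placeholders, not the system's actual knowledge base.

```python
# Minimal sketch of matching a patient state against clinical-pathway rules
# to produce recommendations; the conditions and tasks are hypothetical,
# not the system described above.
PATHWAY_RULES = [
    # (condition on the patient record, recommended task)
    (lambda p: p["day"] == 1, "Baseline labs and imaging"),
    (lambda p: p["temperature_c"] >= 38.5, "Blood cultures; review antibiotics"),
    (lambda p: p["day"] >= 3 and p["mobile"], "Begin discharge planning"),
]

def recommend(patient):
    """Return every pathway task whose condition the patient satisfies."""
    return [task for cond, task in PATHWAY_RULES if cond(patient)]

print(recommend({"day": 3, "temperature_c": 38.7, "mobile": True}))
```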
Abstract:
Traditional resource management has had as its main objective the optimisation of throughput, based on parameters such as CPU, memory, and network bandwidth. With the appearance of Grid Markets, new variables that determine economic expenditure, benefit and opportunity must be taken into account. The SORMA project aims to allow resource owners and consumers to exploit market mechanisms to sell and buy resources across the Grid. SORMA’s motivation is to achieve efficient resource utilisation by maximising revenue for resource providers, and minimising the cost of resource consumption within a market environment. An overriding factor in Grid markets is the need to ensure that desired Quality of Service levels meet the expectations of market participants. This paper explains the proposed use of an Economically Enhanced Resource Manager (EERM) for resource provisioning based on economic models. In particular, this paper describes techniques used by the EERM to support revenue maximisation across multiple Service Level Agreements.
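As a rough illustration of revenue maximisation across multiple SLAs, the sketch below selects which hypothetical SLA offers to accept under a resource-capacity constraint by exhaustive search; it is a toy model, not the EERM's actual technique.

```python
# Illustrative sketch of revenue maximisation across multiple SLAs: choose
# which SLA offers to accept under a capacity constraint. The SLA figures
# are hypothetical, and this is not the EERM's actual algorithm.
from itertools import combinations

# (name, revenue, CPU-hours required)
offers = [("gold", 120, 8), ("silver", 70, 5), ("bronze", 40, 3), ("batch", 55, 4)]
capacity = 12

best_revenue, best_set = 0, ()
for r in range(1, len(offers) + 1):
    for subset in combinations(offers, r):
        cost = sum(o[2] for o in subset)
        revenue = sum(o[1] for o in subset)
        if cost <= capacity and revenue > best_revenue:
            best_revenue, best_set = revenue, subset

print(best_revenue, [o[0] for o in best_set])
```

In practice an exact or heuristic knapsack solver would replace the exhaustive search, but the objective, maximising total revenue subject to the provider's resource capacity, is the same.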
Abstract:
Automatic generation of classification rules has become an increasingly popular technique in commercial applications such as Big Data analytics, rule-based expert systems and decision-making systems. However, a principal problem that arises with most methods for the generation of classification rules is the overfitting of training data. When Big Data is dealt with, this may result in the generation of a large number of complex rules, which may not only increase computational cost but also lower the accuracy in predicting further unseen instances. This has led to the necessity of developing pruning methods for the simplification of rules. In addition, classification rules are used to make predictions after their generation is complete. Where efficiency is concerned, it is desirable to find the first rule that fires as quickly as possible by searching through a rule set; thus a suitable structure is required to represent the rule set effectively. In this chapter, the authors introduce a unified framework for the construction of rule-based classification systems consisting of three operations on Big Data: rule generation, rule simplification and rule representation. The authors also review some existing methods and techniques used for each of the three operations and highlight their limitations. They introduce some novel methods and techniques they have developed recently. These methods and techniques are also discussed in comparison to existing ones with respect to efficient processing of Big Data.
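As a hedged illustration of rule representation for efficient firing, the sketch below indexes a rule set by the attribute each rule tests, so matching rules are found without scanning the whole flat list; the rules and data are hypothetical, not the authors' methods.

```python
# Minimal sketch of representing a rule set so the first firing rule is found
# quickly: index rules by the attribute tested instead of scanning a flat
# list. Rules and data are hypothetical illustrations.
from collections import defaultdict

# Each rule: (attribute, predicate, class label), in priority order.
rules = [
    ("income", lambda v: v > 80_000, "approve"),
    ("debt",   lambda v: v > 50_000, "reject"),
    ("income", lambda v: v > 40_000, "review"),
]

# Indexed representation: attribute -> list of (priority, predicate, label).
index = defaultdict(list)
for priority, (attr, pred, label) in enumerate(rules):
    index[attr].append((priority, pred, label))

def classify(instance, default="reject"):
    """Fire the highest-priority rule whose predicate matches."""
    matches = [(prio, label)
               for attr, value in instance.items()
               for prio, pred, label in index.get(attr, [])
               if pred(value)]
    return min(matches)[1] if matches else default

print(classify({"income": 45_000, "debt": 10_000}))  # -> "review"
```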
Abstract:
The idea of Sustainable Intensification comes as a response to the challenge of avoiding the overexploitation of resources such as land, water and energy while increasing food production to meet the demand of a growing global population. Sustainable Intensification means that farmers need to simultaneously increase yields and sustainably use limited natural resources, such as water. Within the agricultural sector, water has a number of uses, including irrigation, spraying, drinking for livestock and washing (vegetables, livestock buildings). In order to achieve Sustainable Intensification, measures are needed that inform policy makers and managers about the relative performance of farms, as well as about possible ways to improve that performance. We provide a benchmarking tool to assess relative water-use efficiency at farm level and suggest pathways to improve farm-level productivity by identifying best practices for reducing excessive use of water for irrigation. Data Envelopment Analysis techniques, including analysis of returns to scale, were used to evaluate any excess in agricultural water use across 66 horticulture farms in different river basin catchments across England. We found that farms in the sample can on average reduce water requirements by 35% and still achieve the same output (gross margin) when compared to their peers on the frontier. In addition, 47% of the farms operate under increasing returns to scale, indicating that these farms will need to develop economies of scale to achieve input cost savings. Regarding the adoption of specific water use efficiency management practices, we found that the use of a decision support tool, the recycling of water and the installation of trickle/drip/spray-line irrigation systems have a positive impact on water-use efficiency at farm level, whereas other irrigation systems, such as overhead irrigation, were found to have a negative effect.
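As a rough sketch of the DEA calculation described above, the following solves the input-oriented, constant-returns-to-scale efficiency score for each farm with a linear program; the farm data are invented, and the variable-returns model the study also uses would add the constraint that the lambda weights sum to one.

```python
# Illustrative sketch of an input-oriented DEA (CCR, constant returns to
# scale) efficiency score via linear programming; the farm data below are
# hypothetical, and the study's model details may differ.
import numpy as np
from scipy.optimize import linprog

# One input (water use, m^3) and one output (gross margin, GBP) per farm.
water = np.array([100.0, 80.0, 120.0, 60.0])
margin = np.array([50.0, 45.0, 55.0, 40.0])
n = len(water)

def efficiency(o):
    """Min theta s.t. sum(lam*x) <= theta*x_o, sum(lam*y) >= y_o, lam >= 0."""
    c = np.concatenate([[1.0], np.zeros(n)])           # minimise theta
    A_ub = np.vstack([
        np.concatenate([[-water[o]], water]),           # input constraint
        np.concatenate([[0.0], -margin]),               # output constraint
    ])
    b_ub = np.array([0.0, -margin[o]])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * (n + 1), method="highs")
    return res.fun

for o in range(n):
    print(f"farm {o}: efficiency {efficiency(o):.2f}")
```

An efficiency score below 1 indicates the proportion to which a farm could shrink its water input while its efficient peers still produce the same gross margin, mirroring the 35% average reduction reported above.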